WO2024075510A1 - Image processing method and image processing device - Google Patents

Image processing method and image processing device

Info

Publication number
WO2024075510A1
Authority
WO
WIPO (PCT)
Prior art keywords
analysis
area
image processing
image
detection
Prior art date
Application number
PCT/JP2023/033903
Other languages
French (fr)
Japanese (ja)
Inventor
千奈 澤田
Original Assignee
SCREEN Holdings Co., Ltd.
Priority date
Filing date
Publication date
Application filed by SCREEN Holdings Co., Ltd.
Publication of WO2024075510A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image

Definitions

  • The present invention relates to technology for analyzing captured images that are periodically input from an external source.
  • In various industrial equipment, including printing devices and laser drawing devices, the position of a specific object, such as an alignment mark, may be detected from captured image data, and the position of the device may be controlled based on the detection results.
  • In Patent Document 1, the image is converted to a low resolution, and the low-resolution data is analyzed to determine whether an object is present. Only when an object is determined to be present is the original high-resolution image used to perform the calculation again, thereby reducing the amount of computation.
  • The method of Patent Document 1 is effective when there are times at which the object to be detected is absent from the captured image. However, when detecting the position of an alignment mark, the object to be detected is always within the imaging range, and its positional changes are tracked. For such purposes, the method of Patent Document 1 cannot reduce the amount of computation.
  • The present invention has been developed in view of these circumstances, and aims to provide a technique that reduces the amount of computation required in image processing for detecting a specific object from time-series image data.
  • The first invention of the present application is an image processing method for detecting the position of a detection target from a periodically acquired captured image, the method including the steps of: a) analyzing an analysis target area of the captured image to obtain the position of the detection target and a detection error; and b) changing the analysis target area based on the magnitude of the detection error, with steps a) and b) repeated multiple times.
  • The second invention of the present application is the image processing method of the first invention, in which, in step b), the analysis target area is reduced if the detection error is smaller than a predetermined first threshold.
  • The third invention of the present application is the image processing method of the second invention, in which, in step b), when the analysis target area is reduced, its size is kept at or above a predetermined minimum range.
  • The fourth invention of the present application is the image processing method of the second or third invention, in which, in step b), the analysis target area is enlarged if the detection error is greater than a predetermined second threshold.
  • The fifth invention of the present application is the image processing method of any one of the first to fourth inventions, further comprising a step c), performed after steps a) and b), of enlarging the analysis target area if the displacement of the detection target is greater than a reference value.
  • The sixth invention of the present application is an image processing device that detects the position of a detection target from a periodically acquired captured image, and includes an analysis area extraction unit that extracts the analysis target area of the captured image input from outside and generates an image for analysis, a mark position analysis unit that analyzes the image for analysis and obtains the position of the detection target and a detection error, and an analysis area determination unit that changes the analysis target area based on the detection error.
  • When the detection error increases, the area to be analyzed is expanded to reduce the detection error, thereby making it possible to keep the detection error within a predetermined range.
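  • The aspects above amount to a small feedback loop around a detector. The following Python sketch is a minimal, hypothetical illustration of that loop: the `analyze` callable (returning a position and an error estimate), the threshold values T1 and T2, the step sizes, and the `crop_around` helper are all assumptions for illustration, not taken from the disclosure.

```python
import numpy as np

# Assumed, illustrative constants.
T1, T2 = 0.10, 0.30      # first and second error thresholds
MIN_SIZE, STEP = 60, 20  # minimum range and resize step, in pixels

def crop_around(image, center, size):
    """Cut a size x size window, centered as close to `center` as the image allows."""
    h, w = image.shape[:2]
    cy, cx = (h // 2, w // 2) if center is None else center
    y = min(max(cy - size // 2, 0), h - size)
    x = min(max(cx - size // 2, 0), w - size)
    return image[y:y + size, x:x + size], (y, x)

def track(get_image, analyze, size, ref_disp=5.0):
    """Steps a) and b), plus optional step c), repeated for each new image."""
    prev = None
    while True:
        image = get_image()                        # periodically acquired image
        size = min(size, *image.shape[:2])         # never larger than the image
        window, origin = crop_around(image, prev, size)
        (py, px), err = analyze(window)            # step a): position and error
        pos = (origin[0] + py, origin[1] + px)     # window -> image coordinates
        if err < T1:                               # step b): resize from error
            size = max(size - STEP, MIN_SIZE)
        elif err >= T2:
            size += STEP
        if prev is not None and np.hypot(pos[0] - prev[0],
                                         pos[1] - prev[1]) > ref_disp:
            size += STEP                           # step c): large displacement
        prev = pos
        yield pos, err, size
```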
  • FIG. 1 is a perspective view of a drawing device equipped with an image processing device.
  • FIG. 2 is a block diagram showing the electrical connections between a control unit and each unit in the drawing device.
  • FIG. 3 is a control block diagram of a position detection unit serving as an image processing device.
  • FIG. 4 is a flowchart showing the flow of image processing in the position detection unit.
  • FIGS. 5 to 8 are image diagrams showing the relationship between a captured image and the area to be analyzed.
  • FIG. 1 is a perspective view of the drawing device 1.
  • Figure 2 is a block diagram showing electrical connections between a control unit 10 including a position detection unit 90 and each unit in the drawing device.
  • This drawing device 1 is a device that irradiates the upper surface of a substrate W, such as a semiconductor substrate or a glass substrate coated with a photosensitive material, with spatially modulated light to draw an exposure pattern on the upper surface of the substrate W.
  • The drawing device 1 includes a transport mechanism 20, a frame 30, a drawing processing unit 40, a camera 50, and a control unit 10.
  • The transport mechanism 20 is a device that transports the flat stage 22 in a horizontal direction in a substantially constant posture on the upper surface of the base 21.
  • The transport mechanism 20 has a main scanning mechanism 23, a sub-scanning mechanism 24, and a rotation mechanism 25 (see FIG. 2).
  • The main scanning mechanism 23 is a mechanism for transporting the stage 22 in the main scanning direction, which is one horizontal direction.
  • The sub-scanning mechanism 24 is a mechanism for transporting the stage 22 in the sub-scanning direction, which is horizontal and perpendicular to the main scanning direction.
  • The substrate W is held in a horizontal posture on the upper surface of the stage 22, and moves together with the stage 22 in the main scanning direction and the sub-scanning direction.
  • The rotation mechanism 25 can adjust the angle of the stage 22 around the vertical axis relative to the base 21.
  • The frame 30 is a structure for holding the drawing processing unit 40 above the base 21.
  • The frame 30 has a pair of support pillars 31 and a bridge portion 32.
  • The pair of support pillars 31 are erected with a gap between them in the sub-scanning direction.
  • Each support pillar 31 extends upward from the upper surface of the base 21.
  • The bridge portion 32 extends in the sub-scanning direction between the upper ends of the two support pillars 31.
  • The stage 22 holding the substrate W passes between the pair of support pillars 31 and below the bridge portion 32.
  • The drawing processing unit 40 has two optical heads 41.
  • The two optical heads 41 are fixed to the bridge portion 32 with a gap between them in the sub-scanning direction.
  • The drawing processing unit 40 also has an illumination optical system and a laser oscillator (not shown), and a laser driver 42 (see FIG. 2).
  • The illumination optical system, the laser oscillator, and the laser driver 42 are housed, for example, in the internal space of the bridge portion 32.
  • The laser driver 42 is electrically connected to the laser oscillator. When the laser driver 42 is operated, pulsed light is emitted from the laser oscillator. The pulsed light emitted from the laser oscillator is then introduced into the optical head 41 via the illumination optical system.
  • An optical system including a spatial modulator is provided inside the optical head 41.
  • The pulsed light introduced into the optical head 41 is modulated by the spatial modulator into a predetermined pattern and irradiated onto the top surface of the substrate W. This exposes a photosensitive material, such as a resist, applied to the top surface of the substrate W.
  • Camera 50 (see FIG. 2) is not shown in FIG. 1.
  • Camera 50 is attached, for example, to the underside of bridge portion 32 of frame 30.
  • Camera 50 captures an image of the top surface of substrate W placed on stage 22, and inputs the resulting captured image Di to control unit 10.
  • Control unit 10 detects the position of alignment mark M provided on the top surface of substrate W from image Di captured by camera 50. This allows control unit 10 to detect the position and attitude of substrate W.
  • The control unit 10 is a means for controlling the operation of each part of the drawing device 1.
  • The control unit 10 is composed of a computer having a processor 101 such as a CPU, a memory 102 such as a RAM, and a storage unit 103 such as a hard disk drive.
  • A computer program P for controlling the operation of the drawing device 1 is stored in the storage unit 103.
  • The control unit 10 is electrically connected to the drawing processing unit 40 (including the optical head 41 and the laser driver 42), the main scanning mechanism 23, the sub-scanning mechanism 24, the rotation mechanism 25, and the camera 50.
  • The control unit 10 reads the computer program P and the data D stored in the storage unit 103 into the memory 102, and the processor 101 performs arithmetic processing based on the computer program P and the data D, thereby controlling the operation of each of the above-mentioned units in the drawing device 1. This allows the drawing process in the drawing device 1 to proceed.
  • The control unit 10 has a position detection unit 90, a transport control unit 91, and a drawing control unit 92.
  • The functions of the position detection unit 90, the transport control unit 91, and the drawing control unit 92 are realized by the processor 101 of the computer that constitutes the control unit 10 operating in accordance with the computer program P.
  • The position detection unit 90 periodically receives captured images Di from the camera 50.
  • The position detection unit 90 analyzes the captured images Di and detects the position, size, angle, etc. of the alignment mark M, thereby detecting the position and posture of the substrate W.
  • The position detection unit 90 passes the detection result Do, the information obtained by the analysis, to the transport control unit 91.
  • The transport control unit 91 controls each part of the transport mechanism 20. In order to transport the substrate W in a predetermined manner, the transport control unit 91 transports the substrate W while correcting its position based on the detection result Do input from the position detection unit 90.
  • The drawing control unit 92 controls each part of the drawing processing unit 40.
  • The operation of the transport mechanism 20 by the transport control unit 91 and the operation of the drawing processing unit 40 by the drawing control unit 92 are performed in conjunction with each other.
  • Exposure by the optical head 41 and transport of the substrate W by the transport mechanism 20 are repeatedly performed. More specifically, exposure is performed on a strip-shaped area (swath) extending in the sub-scanning direction by irradiating pulsed light from the optical head 41 while the sub-scanning mechanism 24 transports the stage 22 in the sub-scanning direction.
  • The main scanning mechanism 23 then transports the stage 22 by one swath in the main scanning direction.
  • The drawing device 1 draws a pattern on the entire top surface of the substrate W by repeating such exposure in the sub-scanning direction and transport of the stage 22 in the main scanning direction.
  • The position detection unit 90 analyzes the position, size, angle, etc. of the alignment mark M based on the captured image Di input from the camera 50, and inputs the detection result Do (see FIG. 3) to the transport control unit 91. This enables the transport control unit 91 to check whether the position of the substrate W has deviated from the desired position, and to correct the position of the substrate W if a deviation occurs. This allows a pattern to be drawn with high precision on the top surface of the substrate W.
  • A position detection unit 90 serving as an image processing device according to an embodiment of the present invention will now be described with reference to Fig. 3.
  • The position detection unit 90 is an image processing device realized by the control unit 10, and detects the position of the alignment mark M, the detection target, from the captured images Di periodically acquired by the camera 50.
  • The position detection unit 90 has an analysis area extraction unit 81, a mark position analysis unit 82, and an analysis area determination unit 83.
  • The analysis area extraction unit 81 extracts the area to be analyzed from the captured image Di input from outside the position detection unit 90, and generates an image for analysis D1. Specifically, the analysis area extraction unit 81 cuts out part of the captured image Di captured by the camera 50 based on the area information D3 input from the analysis area determination unit 83, and generates the image for analysis D1. It then passes the analysis image D1, together with the area information D3 that gives the position of the image for analysis D1, to the mark position analysis unit 82.
  • The mark position analysis unit 82 analyzes the analysis image D1 and obtains a detection result Do, including the position, size, and angle of the alignment mark M, and a detection error D2. The mark position analysis unit 82 then passes the detection result Do to the transport control unit 91, and passes the detection result Do and the detection error D2 to the analysis area determination unit 83.
  • The mark position analysis unit 82 performs inference using a convolution operation over the entire area of the analysis image D1 to detect the position of the alignment mark M. At the same time as the position detection, an estimated error indicating the likelihood of the result is obtained; the mark position analysis unit 82 uses this estimated error as the detection error D2. Since the amount of computation scales with the size of the analysis image D1, the smaller the analysis image D1, the less computation is required and the faster the analysis runs.
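  • The disclosure does not fix a particular network or error definition. One common way to obtain both a position and a likelihood-style error from a convolutional detector is to read the peak of an output heatmap and use the spread of probability mass around that peak as the error estimate; because the convolution is evaluated over every pixel of the window, the cost scales roughly with the window area, matching the statement above. The sketch below is such an assumed post-processing step, not the disclosed implementation.

```python
import numpy as np

def position_and_error(heatmap):
    """Hypothetical post-processing of a convolutional detector's output map:
    the argmax gives the mark position; the RMS distance of the probability
    mass from the peak serves as the estimated error D2 (small = confident)."""
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()                                   # softmax over the window
    iy, ix = np.unravel_index(np.argmax(p), p.shape)
    ys, xs = np.indices(p.shape)
    err = float(np.sqrt((p * ((ys - iy) ** 2 + (xs - ix) ** 2)).sum()))
    return (int(iy), int(ix)), err
```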
  • The analysis area determination unit 83 changes the area to be analyzed based on the detection result Do and the detection error D2, and passes area information D3, the coordinate information of the area to be analyzed, to the analysis area extraction unit 81. Specifically, the analysis area determination unit 83 determines the size and position of the area to be analyzed based on the detection result Do and the detection error D2, and then passes area information D3, which includes that size and position, to the analysis area extraction unit 81.
  • Specifically, the analysis area determination unit 83 of this embodiment reduces the analysis target area when the detection error D2 is smaller than a predetermined first threshold. Even when the detection error D2 is smaller than the first threshold, that is, when the analysis target area is reduced, the analysis area determination unit 83 keeps the analysis target area at or above a predetermined minimum range.
  • The analysis area determination unit 83 of this embodiment enlarges the analysis target area when the detection error D2 is equal to or greater than a predetermined second threshold, and leaves the size of the analysis target area unchanged when the detection error D2 is equal to or greater than the first threshold and less than the second threshold.
  • After changing the size of the analysis target area according to the detection error D2, the analysis area determination unit 83 may further enlarge the analysis target area based on the displacement of the alignment mark M. In this embodiment, the analysis area determination unit 83 enlarges the analysis target area when the displacement of the alignment mark M is greater than a predetermined reference value.
  • The analysis area determination unit 83 also changes the position of the analysis target area according to the displacement of the position of the alignment mark M in the detection result Do. For example, the analysis area determination unit 83 moves the analysis target area so that its center is as close as possible to the center of the most recently detected alignment mark M, as sketched below.
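  • Re-centering the window while keeping it inside the captured image amounts to clamping the window origin. A minimal sketch, with all names assumed:

```python
def recenter(mark_cy, mark_cx, size, img_h, img_w):
    """Top-left corner of a size x size window centered on the mark center,
    clamped so the window stays inside the img_h x img_w captured image."""
    y = min(max(mark_cy - size // 2, 0), img_h - size)
    x = min(max(mark_cx - size // 2, 0), img_w - size)
    return y, x
```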
  • Figure 4 is a flowchart showing the flow of image processing in the position detection unit 90.
  • Figures 5 to 8 are conceptual diagrams showing the relationship between the captured image Di and the area to be analyzed. Below, the flow of image processing will be described with reference to the examples of Figures 5 to 8.
  • In the examples of Figures 5 to 8, the captured image Di is square, with the same number of vertical and horizontal pixels, but the present invention is not limited to this.
  • The captured image Di may also be rectangular, with different numbers of vertical and horizontal pixels.
  • In these examples, the alignment mark M is cross-shaped, and its intersection point is referred to as its center position. The alignment mark M may, however, be any other shape.
  • The center position of the alignment mark M may be, for example, the center of the rectangular two-dimensional coordinate range in which the alignment mark M lies, or its center of gravity.
  • The image processing shown in FIG. 4 is started at the same time as the transport of the substrate W begins in the drawing device 1.
  • First, the position detection unit 90 acquires the first captured image Di. Specifically, periodic image capture by the camera 50 is started, and the first captured image Di is input to the position detection unit 90 (step S101).
  • The analysis area extraction unit 81 extracts the area to be analyzed from the input captured image Di and generates the image for analysis D1 (step S102). In the first pass through step S102, no analysis target area has been set yet, so the area information D3 at the start of image processing is set to the entire range of the captured image Di. The analysis area extraction unit 81 therefore uses the captured image Di as it is as the image for analysis D1.
  • Next, the mark position analysis unit 82 analyzes the analysis image D1 and detects the position, angle, size, etc. of the alignment mark M (step S103). At this time, the mark position analysis unit 82 also calculates the detection error D2, which is an estimated error. The mark position analysis unit 82 passes the detection result Do, which includes the position, angle, and size of the alignment mark M, together with the detection error D2, to the analysis area determination unit 83.
  • The analysis area determination unit 83 compares the value of the detection error D2 with the predetermined first and second thresholds (step S104), and changes the size of the analysis target area according to the magnitude of the detection error D2 (steps S105 to S111). The second threshold is larger than the first threshold.
  • If it is determined in step S104 that the detection error D2 is smaller than the first threshold (step S104: less than the first threshold), the analysis area determination unit 83 reduces the size of the analysis target area (step S105). Specifically, for example, the size of the analysis target area is reduced by p pixels in both the vertical and horizontal directions: if the size of the analysis target area at that time is m pixels x n pixels, it is reduced to (m-p) pixels x (n-p) pixels.
  • The analysis area determination unit 83 also moves the analysis target area so that its center coincides as closely as possible with the center of the alignment mark M.
  • In this way, the analysis area determination unit 83 reduces the area to be analyzed when the detection error D2 is relatively small. This makes it possible to reduce the amount of computation in the next image analysis (step S103) while suppressing a decrease in detection accuracy.
  • The analysis area determination unit 83 then determines whether the size after reduction is equal to or larger than a predetermined minimum area size (step S106).
  • The minimum area size may be a predetermined fixed value, or may vary based on the detection result Do.
  • For example, the minimum area size may vary based on the size of the alignment mark M in the detection result Do. In that case, if the alignment mark M measures approximately x pixels square, the minimum size of the area to be analyzed may be (x+q) pixels square, or a constant multiple of x pixels square, as sketched below.
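  • A sketch of that variable minimum; the margin q and the multiplier are illustrative assumptions:

```python
def min_area_size(x, q=40):
    # Variant 1: fixed margin around a mark measuring about x pixels square.
    return x + q

def min_area_size_scaled(x, factor=1.5):
    # Variant 2: a constant multiple of the mark size.
    return int(factor * x)
```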
  • If it is determined in step S104 that the detection error D2 is equal to or greater than the first threshold and less than the second threshold (step S104: at least the first threshold and less than the second threshold), the analysis area determination unit 83 does not change the size of the analysis target area (step S108). In this case, too, the analysis area determination unit 83 moves the analysis target area so that its center coincides as closely as possible with the center of the alignment mark M.
  • If it is determined in step S104 that the detection error D2 is equal to or greater than the second threshold (step S104: at least the second threshold), the analysis area determination unit 83 enlarges the area to be analyzed (step S109). Specifically, for example, the size of the area to be analyzed is increased by r pixels in both the vertical and horizontal directions: if its size at that time is m pixels x n pixels, it is enlarged to (m+r) pixels x (n+r) pixels.
  • In this way, the analysis area determination unit 83 enlarges the analysis target area when the detection error D2 is relatively large, that is, when the detection error D2 has grown. This reduces the detection error D2 in the next image analysis (step S103) and suppresses a decrease in detection accuracy.
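  • The three branches of step S104 can be summarized in a single function; the step sizes p and r and the minimum size are illustrative assumptions:

```python
def next_size(size, err, t1, t2, p=20, r=20, min_size=60):
    """S104's three ranges: shrink (S105), keep (S108), or grow (S109)."""
    if err < t1:                        # less than the first threshold
        return max(size - p, min_size)  # shrink, clamped per the S106 check
    if err < t2:                        # at least t1, less than t2
        return size                     # unchanged
    return size + r                     # at or above the second threshold
```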
  • Next, the analysis area determination unit 83 judges whether the displacement of the alignment mark M is greater than a reference value (step S110).
  • The displacement of the alignment mark M is, for example, calculated as the difference between the center position of the latest alignment mark M and the center position of the alignment mark M one observation before.
  • The displacement of the alignment mark M may instead be calculated as the difference between the center position of the latest alignment mark M and the center position of the alignment mark M a predetermined k observations before, or may take into account not only the difference in center positions but also the angle of the alignment mark M.
  • If the displacement of the alignment mark M is greater than the predetermined reference value (step S110: Yes), the size of the area to be analyzed is enlarged (step S111). Specifically, for example, the size of the area to be analyzed is increased by s pixels in both the vertical and horizontal directions: if its size at that time is m pixels x n pixels, it is set to (m+s) pixels x (n+s) pixels.
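  • A sketch of the displacement test in steps S110 and S111; the additive angle weighting and the parameter values are assumptions for illustration:

```python
import math

def displacement(centers, angles=None, k=1, angle_weight=0.0):
    """Distance between the latest mark center and the center k observations
    earlier, optionally combined with the change in mark angle."""
    (y1, x1), (y0, x0) = centers[-1], centers[-1 - k]
    d = math.hypot(y1 - y0, x1 - x0)
    if angles is not None:
        d += angle_weight * abs(angles[-1] - angles[-1 - k])
    return d

def maybe_enlarge(size, disp, ref=5.0, s=20):
    """Step S111: grow the area by s pixels when displacement exceeds ref."""
    return size + s if disp > ref else size
```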
  • The analysis area determination unit 83 then passes area information D3, including the size and position of the area to be analyzed, to the analysis area extraction unit 81, and the process returns to step S101.
  • If the displacement of the alignment mark M is equal to or less than the predetermined reference value (step S110: No), the size and position of the analysis target area at that time are passed to the analysis area extraction unit 81 as area information D3, and the process returns to step S101.
  • By repeating the above, the size of the area to be analyzed is adjusted according to the value of the detection error. This makes it possible to reduce the area to be analyzed without letting the detection error exceed the predetermined second threshold. In other words, the amount of computation required for analysis can be reduced while the detection accuracy for the alignment mark M is kept at or above a certain level.
  • Figs. 5 to 8 show examples of changes in the analysis target area in the image processing of this embodiment.
  • Figs. 5 to 8 show a case in which the position and angle of the cross-shaped alignment mark M do not move (there is no displacement), the detection error is less than the first threshold in the first through seventh passes through step S104, and the displacement of the alignment mark M is equal to or less than the reference value in the first through seventh passes through step S110.
  • The range of the captured image Di is indicated by a solid line.
  • The first analysis target area A1, the second analysis target area A2, the third analysis target area A3, the fourth analysis target area A4, the fifth analysis target area A5, the sixth analysis target area A6, the seventh analysis target area A7, and the eighth analysis target area A8 are indicated by dashed lines.
  • The range of the first analysis target area A1 is drawn slightly inside the solid line showing the captured image Di. In reality, A1 covers the same range as Di, but because the ranges would be indistinguishable if the solid and dashed lines overlapped, they are intentionally drawn slightly offset.
  • In this example, the captured image Di and the first analysis target area A1 are 200 pixels square.
  • In the first pass, the detection error is less than the first threshold, so in step S105 the size of the analysis target area is reduced by 20 pixels to 180 pixels square.
  • The analysis target area is set so that its center coincides with the center of the alignment mark M, so the area indicated by A2' in Figure 5 would become the analysis target area. However, because area A2' partly falls outside the captured image Di, in the examples of Figures 5 to 8 the position of the analysis target area is corrected: area A2 in the captured image Di, which is 180 pixels square and whose center is closest to the center of the alignment mark M, is set as the analysis target area.
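  • With the recenter sketch shown earlier, the FIG. 5 correction can be reproduced numerically; the mark-center coordinates here are assumed for illustration, since the figures are not reproduced in this text:

```python
# 200 x 200 image, window shrunk to 180 pixels, mark center assumed at (30, 30):
# an uncorrected window centered there would start at (-60, -60), i.e. A2';
# clamping moves its origin to (0, 0), i.e. the corrected area A2.
print(recenter(30, 30, 180, 200, 200))  # -> (0, 0)
```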
  • In the second pass, the detection error is again less than the first threshold, so in step S105 the size of the analysis target area is reduced by a further 20 pixels to 160 pixels square.
  • Because the analysis target area is set so that its center coincides with the center of the alignment mark M, the area indicated by A3' in FIG. 6 would become the analysis target area. Since area A3' partly falls outside the captured image Di, the position of the analysis target area is corrected, and area A3 in the captured image Di, which is 160 pixels square and whose center is closest to the center of the alignment mark M, is set as the analysis target area.
  • In the third pass through step S104, the size of the area to be analyzed is similarly reduced by 20 pixels, to 140 pixels square.
  • The area indicated by A4' in FIG. 7 would then become the area to be analyzed. Since area A4' partly falls outside the captured image Di, the position of the area to be analyzed is corrected, and area A4 in the captured image Di, which is 140 pixels square and whose center is closest to the center of the alignment mark M, is set as the area to be analyzed.
  • From the fourth through the seventh passes through step S104, the size of the analysis target area is similarly reduced by 20 pixels each time, to 120, 100, 80, and 60 pixels square, respectively.
  • Because the analysis target area is set so that its center coincides with the center of the alignment mark M, all of the analysis target areas A5 to A8 shown in FIG. 8 lie within the captured image Di, so there is no need to correct their positions.
  • In this way, the amount of computation required for image analysis can be appropriately reduced by shrinking the area to be analyzed to the portion around the alignment mark M so that the detection error stays within the specified range.
  • In actual transport of the substrate W, the position of the alignment mark M fluctuates.
  • In that case, the area to be analyzed follows the movement of the alignment mark M so as to include the alignment mark M and its surrounding area, and is at the same time appropriately reduced so that the detection error of the alignment mark M stays within the specified range.
  • In the above embodiment, the object controlled by the control unit 10 having the position detection unit 90, which is an image processing device, was the drawing device 1. This position detection unit 90 therefore appropriately adjusted the analysis target area in order to detect the position and angle of the alignment mark M provided on the substrate W.
  • However, the objects detected by the image processing device of the present invention are not limited to this. For example, the invention can be used for various purposes, such as detecting alignment marks provided on a surveying object in a surveying device, or detecting alignment marks provided on a substrate in a substrate inspection device.
  • In the above embodiment, the detection error value was divided into three ranges (less than the first threshold, at least the first threshold and less than the second threshold, and at least the second threshold), and one of three processes was selected accordingly: reducing the analysis target area, leaving it unchanged, or expanding it.
  • However, the present invention is not limited to this.
  • For example, the first threshold and the second threshold may be the same threshold. In that case, if the detection error is less than the threshold, the analysis target area may be reduced, and if the detection error is equal to or greater than the threshold, the analysis target area may be expanded.
  • Alternatively, four ranges of the detection error may be set by three thresholds (a first, a second, and a third threshold): when the detection error is less than the first threshold, the analysis target area is reduced by two steps; when it is at least the first threshold and less than the second threshold, it is reduced by one step; when it is at least the second threshold and less than the third threshold, it is expanded by one step; and when it is at least the third threshold, it is expanded by two steps. A sketch of this variant follows.
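  • A sketch of the four-range variant; the threshold and step values are again illustrative assumptions:

```python
def next_size_multilevel(size, err, t1, t2, t3, step=20, min_size=60):
    """Three thresholds, four ranges: shrink by two steps, shrink by one,
    grow by one, or grow by two, depending on where the error falls."""
    if err < t1:
        return max(size - 2 * step, min_size)
    if err < t2:
        return max(size - step, min_size)
    if err < t3:
        return size + step
    return size + 2 * step
```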

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

This image processing method is for detecting a position of a detection target from a captured image acquired periodically, wherein the following steps are repeated multiple times: a) a step (S103) for analyzing a region to be analyzed in the captured image, and deriving a detection error and the position of the detection target; and b) steps (S104 to S107) for changing the region to be analyzed on the basis of the magnitude of the detection error. Due to this configuration, the region to be analyzed can be shrunk while keeping the detection error within a prescribed range. Consequently, it is possible to reduce the amount of computation without increasing the detection error in image processing for detecting a specific object from time-series image data.

Description

Image processing method and image processing device
The present invention relates to technology for analyzing captured images that are periodically input from an external source.
In various industrial equipment, including printing devices and laser drawing devices, the position of a specific object, such as an alignment mark, may be detected from captured image data, and the position of the device may be controlled based on the detection results.
In such a device, if image processing is performed on the entire range of the captured image to detect the position every time a captured image is input, the amount of computation increases and the process is inefficient. A method for reducing the amount of computation to address this problem is described, for example, in Patent Document 1.
In Patent Document 1, the image is converted to a low resolution, and the low-resolution data is analyzed to determine whether an object is present. Only when an object is determined to be present is the original high-resolution image used to perform the calculation again, thereby reducing the amount of computation.
Patent Document 1: JP 2020-52647 A
The method of Patent Document 1 is effective when there are times at which the object to be detected is absent from the captured image. However, when detecting the position of an alignment mark, the object to be detected is always within the imaging range, and its positional changes are tracked. For such purposes, the method of Patent Document 1 cannot reduce the amount of computation.
The present invention has been developed in view of these circumstances, and aims to provide a technique that reduces the amount of computation required in image processing for detecting a specific object from time-series image data.
To solve the above problem, the first invention of the present application is an image processing method for detecting the position of a detection target from a periodically acquired captured image, the method including the steps of: a) analyzing an analysis target area of the captured image to obtain the position of the detection target and a detection error; and b) changing the analysis target area based on the magnitude of the detection error, with steps a) and b) repeated multiple times.
The second invention of the present application is the image processing method of the first invention, in which, in step b), the analysis target area is reduced if the detection error is smaller than a predetermined first threshold.
The third invention of the present application is the image processing method of the second invention, in which, in step b), when the analysis target area is reduced, its size is kept at or above a predetermined minimum range.
The fourth invention of the present application is the image processing method of the second or third invention, in which, in step b), the analysis target area is enlarged if the detection error is greater than a predetermined second threshold.
The fifth invention of the present application is the image processing method of any one of the first to fourth inventions, further comprising a step c), performed after steps a) and b), of enlarging the analysis target area if the displacement of the detection target is greater than a reference value.
The sixth invention of the present application is an image processing device that detects the position of a detection target from a periodically acquired captured image, and includes an analysis area extraction unit that extracts the analysis target area of the captured image input from outside and generates an image for analysis, a mark position analysis unit that analyzes the image for analysis and obtains the position of the detection target and a detection error, and an analysis area determination unit that changes the analysis target area based on the detection error.
According to the first to sixth inventions of the present application, the area to be analyzed can be reduced while the detection error is kept within a predetermined range. Therefore, the amount of computation can be reduced while a decrease in detection accuracy is suppressed.
In particular, according to the second invention of the present application, the amount of computation can be reduced while a decrease in detection accuracy is suppressed.
In particular, according to the fourth invention of the present application, when the detection error increases, the area to be analyzed is expanded to reduce the detection error, thereby making it possible to keep the detection error within a predetermined range.
FIG. 1 is a perspective view of a drawing device equipped with an image processing device. FIG. 2 is a block diagram showing the electrical connections between a control unit and each unit in the drawing device. FIG. 3 is a control block diagram of a position detection unit serving as an image processing device. FIG. 4 is a flowchart showing the flow of image processing in the position detection unit. FIGS. 5 to 8 are image diagrams showing the relationship between a captured image and the area to be analyzed.
Below, an embodiment of the present invention will be described with reference to the drawings.
1. Configuration of the drawing device

A drawing device 1 including a position detection unit 90, which is an image processing device according to an embodiment of the present invention, will be described below with reference to Figures 1 and 2. Figure 1 is a perspective view of the drawing device 1. Figure 2 is a block diagram showing the electrical connections between a control unit 10, which includes the position detection unit 90, and each unit in the drawing device.
This drawing device 1 is a device that irradiates the upper surface of a substrate W, such as a semiconductor substrate or a glass substrate coated with a photosensitive material, with spatially modulated light to draw an exposure pattern on the upper surface of the substrate W. As shown in Figures 1 and 2, the drawing device 1 includes a transport mechanism 20, a frame 30, a drawing processing unit 40, a camera 50, and a control unit 10.
The transport mechanism 20 is a device that transports the flat stage 22 in a horizontal direction in a substantially constant posture on the upper surface of the base 21. The transport mechanism 20 has a main scanning mechanism 23, a sub-scanning mechanism 24, and a rotation mechanism 25 (see FIG. 2). The main scanning mechanism 23 is a mechanism for transporting the stage 22 in the main scanning direction, which is one horizontal direction. The sub-scanning mechanism 24 is a mechanism for transporting the stage 22 in the sub-scanning direction, which is horizontal and perpendicular to the main scanning direction. The substrate W is held in a horizontal posture on the upper surface of the stage 22, and moves together with the stage 22 in the main scanning direction and the sub-scanning direction. The rotation mechanism 25 can adjust the angle of the stage 22 around the vertical axis relative to the base 21.
The frame 30 is a structure for holding the drawing processing unit 40 above the base 21. The frame 30 has a pair of support pillars 31 and a bridge portion 32. The pair of support pillars 31 are erected with a gap between them in the sub-scanning direction. Each support pillar 31 extends upward from the upper surface of the base 21. The bridge portion 32 extends in the sub-scanning direction between the upper ends of the two support pillars 31. The stage 22 holding the substrate W passes between the pair of support pillars 31 and below the bridge portion 32.
The drawing processing unit 40 has two optical heads 41. The two optical heads 41 are fixed to the bridge portion 32 with a gap between them in the sub-scanning direction. The drawing processing unit 40 also has an illumination optical system and a laser oscillator (not shown), and a laser driver 42 (see FIG. 2). The illumination optical system, the laser oscillator, and the laser driver 42 are housed, for example, in the internal space of the bridge portion 32. The laser driver 42 is electrically connected to the laser oscillator. When the laser driver 42 is operated, pulsed light is emitted from the laser oscillator. The pulsed light emitted from the laser oscillator is then introduced into the optical head 41 via the illumination optical system.
An optical system including a spatial modulator is provided inside the optical head 41. The pulsed light introduced into the optical head 41 is modulated by the spatial modulator into a predetermined pattern and irradiated onto the top surface of the substrate W. This exposes a photosensitive material, such as a resist, applied to the top surface of the substrate W.
Camera 50 (see FIG. 2) is not shown in FIG. 1. Camera 50 is attached, for example, to the underside of the bridge portion 32 of the frame 30. Camera 50 captures an image of the top surface of the substrate W placed on the stage 22, and inputs the resulting captured image Di to the control unit 10. The control unit 10 detects the position of the alignment mark M provided on the top surface of the substrate W from the image Di captured by camera 50. This allows the control unit 10 to detect the position and posture of the substrate W.
The control unit 10 is a means for controlling the operation of each part of the drawing device 1. As conceptually shown in FIG. 1, the control unit 10 is composed of a computer having a processor 101 such as a CPU, a memory 102 such as a RAM, and a storage unit 103 such as a hard disk drive. A computer program P for controlling the operation of the drawing device 1 is stored in the storage unit 103.
As shown in FIG. 2, the control unit 10 is electrically connected to the drawing processing unit 40 (including the optical head 41 and the laser driver 42), the main scanning mechanism 23, the sub-scanning mechanism 24, the rotation mechanism 25, and the camera 50. The control unit 10 reads the computer program P and the data D stored in the storage unit 103 into the memory 102, and the processor 101 performs arithmetic processing based on the computer program P and the data D, thereby controlling the operation of each of the above-mentioned units in the drawing device 1. This allows the drawing process in the drawing device 1 to proceed.
The control unit 10 has a position detection unit 90, a transport control unit 91, and a drawing control unit 92. The functions of the position detection unit 90, the transport control unit 91, and the drawing control unit 92 are realized by the processor 101 of the computer that constitutes the control unit 10 operating in accordance with the computer program P.
The position detection unit 90 periodically receives captured images Di from the camera 50. The position detection unit 90 analyzes the captured images Di and detects the position, size, angle, etc. of the alignment mark M, thereby detecting the position and posture of the substrate W. The position detection unit 90 passes the detection result Do, the information obtained by the analysis, to the transport control unit 91.
The transport control unit 91 controls each part of the transport mechanism 20. In order to transport the substrate W in a predetermined manner, the transport control unit 91 transports the substrate W while correcting its position based on the detection result Do input from the position detection unit 90.
The drawing control unit 92 controls each part of the drawing processing unit 40. When the drawing device 1 is in operation, the operation of the transport mechanism 20 by the transport control unit 91 and the operation of the drawing processing unit 40 by the drawing control unit 92 are performed in conjunction with each other. Specifically, exposure by the optical head 41 and transport of the substrate W by the transport mechanism 20 are repeatedly performed. More specifically, exposure is performed on a strip-shaped area (swath) extending in the sub-scanning direction by irradiating pulsed light from the optical head 41 while the sub-scanning mechanism 24 transports the stage 22 in the sub-scanning direction. Thereafter, the main scanning mechanism 23 transports the stage 22 by one swath in the main scanning direction. The drawing device 1 draws a pattern on the entire top surface of the substrate W by repeating such exposure in the sub-scanning direction and transport of the stage 22 in the main scanning direction.
While such a drawing process is being performed, the position detection unit 90 analyzes the position, size, angle, etc. of the alignment mark M based on the captured image Di input from the camera 50, and inputs the detection result Do (see FIG. 3) to the transport control unit 91. This enables the transport control unit 91 to check whether the position of the substrate W has deviated from the desired position, and to correct the position of the substrate W if a deviation occurs. This allows a pattern to be drawn with high precision on the top surface of the substrate W.
2. Configuration of the image processing device

Next, the position detection unit 90, an image processing device according to an embodiment of the present invention, will be described with reference to Fig. 3. As described above, the position detection unit 90 is an image processing device realized by the control unit 10, and detects the position of the alignment mark M, the detection target, from the captured images Di periodically acquired by the camera 50. As shown in Fig. 3, the position detection unit 90 has an analysis area extraction unit 81, a mark position analysis unit 82, and an analysis area determination unit 83.
 解析領域抽出部81は、位置検出部90の外部から入力された撮影画像Diの解析対象領域を抽出し、解析用画像D1を生成する。具体的には、解析領域抽出部81は、解析領域決定部83から入力される領域情報D3に基づいて、カメラ50の撮影した撮影画像Diから画像の一部を切り出し、解析用画像D1を生成する。そして、解析用画像D1と、解析用画像D1の位置情報である領域情報D3とを、マーク位置解析部82へと引き渡す。 The analysis area extraction unit 81 extracts an area to be analyzed from the captured image Di input from outside the position detection unit 90, and generates an image for analysis D1. Specifically, the analysis area extraction unit 81 cuts out a part of the image from the captured image Di captured by the camera 50 based on area information D3 input from the analysis area determination unit 83, and generates an image for analysis D1. Then, the analysis image D1 and area information D3, which is position information of the image for analysis D1, are passed to the mark position analysis unit 82.
 マーク位置解析部82は、解析用画像D1を解析し、アライメントマークMの位置、大きさおよび角度を含む検出結果Doと、検出誤差D2とを求める。そして、マーク位置解析部82は、検出結果Doを搬送制御部91へ引き渡すとともに、検出結果Doおよび検出誤差D2を解析領域決定部83へと引き渡す。 The mark position analysis unit 82 analyzes the analysis image D1 and obtains a detection result Do, including the position, size, and angle of the alignment mark M, and a detection error D2. The mark position analysis unit 82 then passes the detection result Do to the transport control unit 91, and passes the detection result Do and the detection error D2 to the analysis area determination unit 83.
 マーク位置解析部82は、解析用画像D1の全領域に対して畳み込み演算による推論を行って、アライメントマークMの位置の検出を行う。このとき、位置検出と同時に、結果の確からしさを示す推定誤差が求められる。マーク位置解析部82は、当該推定誤差を検出誤差D2とする。解析用画像D1の大きさに応じて、演算量が増減するため、解析用画像D1の大きさが小さいほど、演算量が減少し、演算の高速化を行うことができる。 The mark position analysis unit 82 performs inference using a convolution operation on the entire area of the analysis image D1 to detect the position of the alignment mark M. At this time, an estimated error indicating the likelihood of the result is obtained at the same time as the position detection. The mark position analysis unit 82 sets this estimated error as the detection error D2. Since the amount of calculation increases or decreases depending on the size of the analysis image D1, the smaller the size of the analysis image D1, the less the amount of calculation and the faster the calculation can be performed.
 解析領域決定部83は、検出結果Doおよび検出誤差D2に基づいて、解析対象領域を変更し、解析対象領域の座標情報である領域情報D3を解析領域抽出部81へと引き渡す。具体的には、解析領域決定部83は、検出結果Doおよび検出誤差D2に基づいて、解析対象領域の大きさと、位置とを決定する。そして、解析領域決定部83は、解析対象の大きさおよび位置を含む領域情報D3を解析領域抽出部81へと引き渡す。 The analysis area determination unit 83 changes the area to be analyzed based on the detection result Do and the detection error D2, and passes area information D3, which is coordinate information of the area to be analyzed, to the analysis area extraction unit 81. Specifically, the analysis area determination unit 83 determines the size and position of the area to be analyzed based on the detection result Do and the detection error D2. Then, the analysis area determination unit 83 passes area information D3, which includes the size and position of the area to be analyzed, to the analysis area extraction unit 81.
 本実施形態の解析領域決定部83は、具体的には、検出誤差D2が、所定の第1閾値よりも小さい場合に、解析対象領域を小さくする。なお、解析領域決定部83は、検出誤差D2が第1閾値よりも小さく、すなわち、解析対象領域を小さくする場合であっても、解析対象領域を所定の最小範囲以上の大きさとする。 Specifically, the analysis area determination unit 83 of this embodiment reduces the analysis target area when the detection error D2 is smaller than a predetermined first threshold. Note that even when the detection error D2 is smaller than the first threshold, i.e., when the analysis target area is reduced, the analysis area determination unit 83 sets the analysis target area to a size equal to or larger than a predetermined minimum range.
 また、本実施形態の解析領域決定部83は、検出誤差D2が、所定の第2閾値以上である場合に、解析対象領域を大きくする。また、解析領域決定部83は、検出誤差D2が、第1閾値以上第2閾値未満である場合には、解析対象領域の大きさを変更しない。 In addition, the analysis area determination unit 83 of this embodiment enlarges the analysis target area when the detection error D2 is equal to or greater than a predetermined second threshold. In addition, the analysis area determination unit 83 does not change the size of the analysis target area when the detection error D2 is equal to or greater than the first threshold and less than the second threshold.
 なお、解析領域決定部83は、検出誤差D2による解析対象領域の大きさの変更の後に、アライメントマークMの変位量に基づいて、解析対象領域を大きくしてもよい。本実施液体の解析領域決定部83は、アライメントマークMの変位量が、所定の基準値よりも多い場合に、解析対象領域を大きくする。 In addition, after changing the size of the analysis target area due to the detection error D2, the analysis target area determination unit 83 may enlarge the analysis target area based on the displacement amount of the alignment mark M. In this embodiment, the analysis target area determination unit 83 enlarges the analysis target area when the displacement amount of the alignment mark M is greater than a predetermined reference value.
 なお、解析領域決定部83は、検出結果DoにおけるアライメントマークMの位置の変位に従って、解析対象領域の位置を変更する。例えば、解析領域決定部83は、解析対象領域の中心位置が、直近のアライメントマークMの中心位置にできるだけ近くなるように、解析対象領域の位置を変更する。 The analysis area determination unit 83 also changes the position of the analysis target area according to the displacement of the position of the alignment mark M in the detection result Do. For example, the analysis area determination unit 83 changes the position of the analysis target area so that the center position of the analysis target area is as close as possible to the center position of the most recently detected alignment mark M.
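As a minimal sketch of this repositioning rule (the clamping behavior is described later with Figures 5 to 8; all names here are illustrative, and the region is assumed to be no larger than the image), the new top-left corner of a square analysis region can be computed like this:

    def recenter_region(mark_cx: float, mark_cy: float, size: int,
                        img_w: int, img_h: int) -> tuple[int, int]:
        """Place a size x size analysis region so that its center is as close
        as possible to the most recent mark center without leaving the image."""
        half = size // 2
        # Clamp the region center so the region stays inside the image.
        cx = min(max(int(mark_cx), half), img_w - half)
        cy = min(max(int(mark_cy), half), img_h - half)
        return cx - half, cy - half  # top-left corner of the region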
 <3.画像処理の流れ>
 続いて、位置検出部90における画像処理の流れについて、図4~図8を参照しつつ説明する。図4は、位置検出部90における画像処理の流れを示したフローチャートである。図5~図8は、撮影画像Diと解析対象領域との関係を示したイメージ図である。以下では、図5~図8の例を参照しつつ、画像処理の流れを説明する。
<3. Image processing flow>
Next, the flow of image processing in the position detection unit 90 will be described with reference to Figures 4 to 8. Figure 4 is a flowchart showing the flow of image processing in the position detection unit 90. Figures 5 to 8 are conceptual diagrams showing the relationship between the captured image Di and the area to be analyzed. Below, the flow of image processing will be described with reference to the examples of Figures 5 to 8.
 図5~図8の例では、撮影画像Diが正方形状であり、縦の画素数と、横の画素数とが同じであるが、本発明はこれに限られない。撮影画像Diは、縦の画素数と横の画素数が異なる長方形状であってもよい。また、図5~図8の例では、アライメントマークMは十字形状であり、その交点を中心位置と称する。なお、アライメントマークMは、その他の任意の形状であってよい。また、アライメントマークMの中心位置は、例えば、アライメントマークMが存在する長方形の2次元座標範囲の中心であってもよいし、重心位置であってもよい。 In the examples of Figures 5 to 8, the captured image Di is square, with the same number of pixels vertically and horizontally, but the present invention is not limited to this. The captured image Di may be a rectangle whose vertical and horizontal pixel counts differ. Also, in the examples of Figures 5 to 8, the alignment mark M is cross-shaped, and its intersection point is referred to as the center position. Note that the alignment mark M may have any other shape. The center position of the alignment mark M may be, for example, the center of the rectangular two-dimensional coordinate range in which the alignment mark M exists, or its centroid.
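For concreteness, both disclosed definitions of the mark center (the center of the bounding rectangle, or the centroid) can be computed from a binary mask of the mark. This sketch is illustrative and not part of the disclosure; it assumes the mask contains at least one mark pixel:

    import numpy as np

    def mark_center(mask: np.ndarray, use_centroid: bool = False) -> tuple[float, float]:
        """Center of a mark given a binary mask: bounding-box center by
        default, or the centroid when use_centroid is True."""
        ys, xs = np.nonzero(mask)
        if use_centroid:
            return float(xs.mean()), float(ys.mean())
        return (xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0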
 図4に示す画像処理は、描画装置1において、基板Wの搬送開始と同時に開始される。画像処理において、まず、位置検出部90が、1つめの撮影画像Diを取得する。具体的には、カメラ50における定期的な撮像が開始され、1つめの撮影画像Diが位置検出部90に入力される(ステップS101)。 The image processing shown in FIG. 4 is started at the same time as the transfer of the substrate W begins in the drawing device 1. In the image processing, first, the position detection unit 90 acquires the first captured image Di. Specifically, regular image capture by the camera 50 is started, and the first captured image Di is input to the position detection unit 90 (step S101).
 解析領域抽出部81は、入力された撮影画像Diの解析対象領域を抽出し、解析用画像D1を生成する(ステップS102)。なお、1回目のステップS102においては、解析対象領域が設定されていないため、画像処理開始時の領域情報D3は、撮影画像Diの全範囲に設定されている。したがって、解析領域抽出部81は、撮影画像Diをそのまま解析用画像D1とする。 The analysis area extraction unit 81 extracts the area to be analyzed from the input photographed image Di and generates an image for analysis D1 (step S102). Note that in the first step S102, the area to be analyzed is not set, so the area information D3 at the start of image processing is set to the entire range of the photographed image Di. Therefore, the analysis area extraction unit 81 uses the photographed image Di as it is as the image for analysis D1.
 次に、マーク位置解析部82が、解析用画像D1を解析し、アライメントマークMの位置、角度、大きさ等を検出する(ステップS103)。このとき、マーク位置解析部82は、同時に、推定誤差である検出誤差D2も算出する。マーク位置解析部82は、アライメントマークMの位置、角度および大きさを含む検出結果Doと、検出誤差D2を、解析領域決定部83へと引き渡す。 Next, the mark position analysis unit 82 analyzes the analysis image D1 and detects the position, angle, size, etc. of the alignment mark M (step S103). At the same time, the mark position analysis unit 82 also calculates a detection error D2, which is an estimated error. The mark position analysis unit 82 passes the detection result Do, which includes the position, angle, and size of the alignment mark M, and the detection error D2 to the analysis area determination unit 83.
 続いて、解析領域決定部83は、検出誤差D2の値を、所定の第1閾値および第2閾値と比較する(ステップS104)。そして、解析領域決定部83は、検出誤差D2の値の大きさに応じて、解析対象領域の大きさを変更する(ステップS105~S111)。なお、第2閾値は、第1閾値よりも大きい値を有する。 Then, the analysis area determination unit 83 compares the value of the detection error D2 with a predetermined first threshold and a second threshold (step S104). Then, the analysis area determination unit 83 changes the size of the analysis target area according to the magnitude of the value of the detection error D2 (steps S105 to S111). Note that the second threshold has a value larger than the first threshold.
 ステップS104において、検出誤差D2が第1閾値よりも小さいと判断すると(ステップS104:第1閾値未満)、解析領域決定部83は、解析対象領域の大きさを縮小する(ステップS105)。具体的には、例えば、解析対象領域の大きさを、縦横それぞれpピクセルずつ小さくする。すなわち、その時点の解析対象領域の大きさがmピクセル×nピクセルである場合に、(m-p)ピクセル×(n―p)ピクセルとする。 If it is determined in step S104 that the detection error D2 is smaller than the first threshold value (step S104: less than first threshold value), the analysis region determination unit 83 reduces the size of the analysis region (step S105). Specifically, for example, the size of the analysis region is reduced by p pixels in both the vertical and horizontal directions. In other words, if the size of the analysis region at that time is m pixels x n pixels, it is reduced to (m-p) pixels x (n-p) pixels.
 また、解析領域決定部83は、解析対象領域の中心位置が、できるだけアライメントマークMの中心位置と一致するように、解析領域の位置を変更する。 The analysis area determination unit 83 also changes the position of the analysis area so that the center position of the area to be analyzed coincides as closely as possible with the center position of the alignment mark M.
 このように、解析領域決定部83は、検出誤差D2が比較的小さい場合に解析対象領域を縮小する。これにより、検出精度低減を抑制しつつ、次の画像解析工程(ステップS103)における演算量を削減することができる。 In this way, the analysis area determination unit 83 reduces the area to be analyzed when the detection error D2 is relatively small. This makes it possible to reduce the amount of calculations in the next image analysis process (step S103) while suppressing a decrease in detection accuracy.
 ステップS105に続いて、解析領域決定部83は、縮小後の大きさが、所定の最小領域サイズ以上であるか否かを判断する(ステップS106)。縮小後の大きさが最小領域サイズを下回る場合には、解析領域決定部83は、解析対象領域の大きさを最小領域サイズとする(ステップS107)。最小領域サイズは、予め決められた固定値であってもよいし、検出結果Doに基づいて変動するものであってもよい。例えば、最小領域サイズは、検出結果DoにおけるアライメントマークMの大きさに基づいて変動してもよい。その場合、例えば、アライメントマークMの大きさがおおよそxピクセル四方である場合に、最小領域サイズを(x+q)ピクセル四方としてもよいし、xピクセルの定数倍四方としてもよい。 Following step S105, the analysis area determination unit 83 determines whether the reduced size is equal to or larger than a predetermined minimum area size (step S106). If the reduced size falls below the minimum area size, the analysis area determination unit 83 sets the size of the analysis target area to the minimum area size (step S107). The minimum area size may be a predetermined fixed value, or may vary based on the detection result Do. For example, the minimum area size may vary based on the size of the alignment mark M in the detection result Do. In that case, for example, if the alignment mark M is approximately x pixels square, the minimum area size may be set to (x+q) pixels square, or to a constant multiple of x pixels square.
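A small sketch of this minimum-size rule follows. The defaults (fixed_min, q, the scale factor) are placeholders, and combining the two disclosed options with max so the mark always fits is an editorial choice, not something the text specifies:

    def minimum_area_size(mark_size_x: int | None, fixed_min: int = 40,
                          q: int = 20, scale: float = 1.5) -> int:
        """Minimum side length of the analysis region: either a fixed value,
        or one derived from the detected mark size x (x + q, or scale * x)."""
        if mark_size_x is None:
            return fixed_min
        return max(mark_size_x + q, int(scale * mark_size_x))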
 ステップS104において、検出誤差D2が第1閾値以上第2閾値未満であると判断すると(ステップS104:第1閾値以上第2閾値未満)、解析領域決定部83は、解析対象領域の大きさを変更しない(ステップS108)。このとき、解析領域決定部83は、解析対象領域の中心位置が、できるだけアライメントマークMの中心位置と一致するように、解析対象領域の位置を変更する。 If it is determined in step S104 that the detection error D2 is equal to or greater than the first threshold and less than the second threshold (step S104: equal to or greater than the first threshold and less than the second threshold), the analysis region determination unit 83 does not change the size of the analysis target region (step S108). At this time, the analysis region determination unit 83 changes the position of the analysis target region so that the center position of the analysis target region coincides with the center position of the alignment mark M as closely as possible.
 ステップS104において、検出誤差D2が第2閾値以上であると判断すると(ステップS104:第2閾値以上)、解析領域決定部83は、解析対象領域の大きさを拡大する(ステップS109)。具体的には、例えば、解析対象領域の大きさを、縦横それぞれrピクセルずつ大きくする。すなわち、その時点の解析対象領域の大きさがmピクセル×nピクセルである場合に、(m+r)ピクセル×(n+r)ピクセルとする。 If it is determined in step S104 that the detection error D2 is equal to or greater than the second threshold (step S104: equal to or greater than the second threshold), the analysis area determination unit 83 expands the size of the area to be analyzed (step S109). Specifically, for example, the size of the area to be analyzed is increased by r pixels in both the vertical and horizontal directions. In other words, if the size of the area to be analyzed at that time is m pixels x n pixels, it is increased to (m + r) pixels x (n + r) pixels.
 このように、解析領域決定部83は、検出誤差D2が比較的大きい場合、すなわち、検出誤差D2が増大した場合に解析対象領域を拡大している。これにより、次の画像解析工程(ステップS103)における検出誤差D2を低減し、検出精度の低減を抑制できる。 In this way, the analysis region determination unit 83 expands the analysis target region when the detection error D2 is relatively large, i.e., when the detection error D2 increases. This reduces the detection error D2 in the next image analysis process (step S103), and suppresses a decrease in detection accuracy.
 ステップS104における判断後、ステップS105~S107、ステップS108、またはステップS109が終了した後に、解析領域決定部83は、アライメントマークMの変位が基準値よりも大きいか否かを判断する(ステップS110)。アライメントマークMの変位は、例えば、最新のアライメントマークMの中心位置と、1回前のアライメントマークMの中心位置との差分を算出したものである。なお、アライメントマークMの変位を、例えば、最新のアライメントマークMの中心位置と、所定のk回前のアライメントマークMの中心位置との差分を算出したものとしてもよいし、中心位置の差分だけでなく、アライメントマークMの角度を考慮したものであってもよい。 After the determination in step S104 and the completion of steps S105 to S107, step S108, or step S109, the analysis area determination unit 83 determines whether the displacement of the alignment mark M is greater than a reference value (step S110). The displacement of the alignment mark M is, for example, the difference between the center position of the most recent alignment mark M and the center position of the alignment mark M detected one iteration before. Note that the displacement of the alignment mark M may instead be calculated as the difference between the center position of the most recent alignment mark M and that detected a predetermined k iterations before, or may take into account not only the difference in center positions but also the angle of the alignment mark M.
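The displacement test of step S110 might be computed as below; the option of looking back k iterations and the additive angle term follow the text, while the weighting scheme itself is an assumption:

    import math

    def mark_displacement(history: list[tuple[float, float, float]],
                          k: int = 1, angle_weight: float = 0.0) -> float:
        """Displacement between the latest mark detection and the one k
        iterations earlier; history holds (cx, cy, angle), newest last."""
        if len(history) <= k:
            return 0.0  # not enough detections yet
        cx1, cy1, a1 = history[-1]
        cx0, cy0, a0 = history[-1 - k]
        return math.hypot(cx1 - cx0, cy1 - cy0) + angle_weight * abs(a1 - a0)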
 ステップS110において、アライメントマークMの変位が所定の基準値よりも大きい場合(ステップS110:Yes)、解析対象領域の大きさを拡大する(ステップS111)。具体的には、例えば、解析対象領域の大きさを、縦横それぞれsピクセルずつ大きくする。すなわち、その時点の解析対象領域の大きさがmピクセル×nピクセルである場合に、(m+s)ピクセル×(n+s)ピクセルとする。その後、解析領域決定部83は、解析対象領域の大きさおよび位置を含む領域情報D3を解析領域抽出部81へと引き渡し、ステップS101へと戻る。 In step S110, if the displacement of the alignment mark M is greater than a predetermined reference value (step S110: Yes), the size of the area to be analyzed is enlarged (step S111). Specifically, for example, the size of the area to be analyzed is enlarged by s pixels in both the vertical and horizontal directions. In other words, if the size of the area to be analyzed at that time is m pixels x n pixels, it is set to (m + s) pixels x (n + s) pixels. The analysis area determination unit 83 then passes area information D3 including the size and position of the area to be analyzed to the analysis area extraction unit 81, and the process returns to step S101.
 一方、ステップS110において、アライメントマークMの変位が所定の基準値以下である場合(ステップS110:No)、その時点の解析対象領域の大きさおよび位置を、領域情報D3として解析領域抽出部81へと引き渡し、ステップS101へと戻る。 On the other hand, in step S110, if the displacement of the alignment mark M is equal to or less than the predetermined reference value (step S110: No), the size and position of the analysis target area at that time are passed to the analysis area extraction unit 81 as area information D3, and the process returns to step S101.
 このように、撮影画像Diの取得および解析の度に、検出誤差の値に応じて、解析対象領域の大きさを調整する。これにより、検出誤差が所定の第2閾値よりも大きくならない範囲で、解析対象領域を縮小することができる。すなわち、アライメントマークMの検出精度を一定以上に保ちつつ、解析の演算量を削減できる。 In this way, each time a captured image Di is acquired and analyzed, the size of the area to be analyzed is adjusted according to the value of the detection error. This makes it possible to reduce the area to be analyzed without causing the detection error to exceed a predetermined second threshold value. In other words, the amount of calculation required for analysis can be reduced while maintaining the detection accuracy of the alignment mark M at a certain level or higher.
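Putting steps S104 to S111 together, one pass of the size update for a square analysis region might look like the following sketch. The function name, thresholds, step sizes, and bounds are illustrative values that the publication leaves open:

    def update_region_size(size: int, error: float, displacement: float,
                           t1: float = 0.1, t2: float = 0.3,
                           disp_ref: float = 5.0,
                           p: int = 20, r: int = 20, s: int = 20,
                           min_size: int = 40, max_size: int = 200) -> int:
        """One iteration of steps S104-S111 for a square analysis region."""
        if error < t1:                       # S104 -> S105: shrink
            size = max(size - p, min_size)   # S106/S107: never below minimum
        elif error < t2:                     # S108: keep the current size
            pass
        else:                                # S109: enlarge
            size = min(size + r, max_size)
        if displacement > disp_ref:          # S110 -> S111: enlarge on motion
            size = min(size + s, max_size)
        return size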
 ここで、図5~図8に、本実施形態の画像処理における、解析対象領域の変化の例を示している。図5~図8では、十字状のアライメントマークMの位置および角度が動かず(変位がない)、1回目~7回目のステップS104において、検出誤差が第1閾値未満であり、かつ、1回目~7回目のステップS110においてアライメントマークMの変位が基準値以下であった場合について示している。 Here, Figs. 5 to 8 show examples of changes in the analysis target area in the image processing of this embodiment. Figs. 5 to 8 show a case where the position and angle of the cross-shaped alignment mark M do not move (there is no displacement), the detection error is less than the first threshold value in the first to seventh steps S104, and the displacement of the alignment mark M is equal to or less than the reference value in the first to seventh steps S110.
 図5~図8において、撮影画像Diの範囲が、実線で示されている。また、1回目の解析対象領域A1、2回目の解析対象領域A2、3回目の解析対象領域A3、4回目の解析対象領域A4、5回目の解析対象領域A5、6回目の解析対象領域A6、7回目の解析対象領域A7、および8回目の解析対象領域A8が、破線で示されている。なお、1回目の解析対象領域A1の範囲は、撮影画像Diを示す実線のやや内側に示されている。実際のA1は、Diと同じ範囲であるが、実線と破線が重なると範囲が不明確であるため、あえてずらして表示している。 In Figures 5 to 8, the range of the captured image Di is indicated by a solid line. The first analysis target area A1, the second analysis target area A2, the third analysis target area A3, the fourth analysis target area A4, the fifth analysis target area A5, the sixth analysis target area A6, the seventh analysis target area A7, and the eighth analysis target area A8 are indicated by dashed lines. Note that the range of the first analysis target area A1 is shown slightly inside the solid line showing the captured image Di. In reality, A1 is the same range as Di, but because the range is unclear when the solid and dashed lines overlap, they are intentionally shifted when displayed.
 図5~図8の例では、撮影画像Diおよび1回目の解析対象領域A1の範囲が200ピクセル四方である。1回目のステップS104において、検出誤差が第1閾値未満であるため、ステップS105において、解析対象領域の大きさを20ピクセルずつ小さくし、180ピクセル四方とする。ここで、図5に示すように、解析対象領域の中心が、アライメントマークMの中心と一致するように解析対象領域を設定した場合、図5のA2'で示す領域が解析対象領域となる。領域A2'は、撮影画像Diと重ならない領域を有するため、図5~図8の例では、解析対象領域の位置を補正し、撮影画像Di内において、180ピクセル四方であって、その中心が最もアライメントマークMの中心に近くなる領域A2を、解析対象領域に設定する。 In the examples of Figures 5 to 8, the captured image Di and the first analysis target area A1 are 200 pixels square. In the first step S104, the detection error is less than the first threshold, so in step S105 the size of the analysis target area is reduced by 20 pixels per side, to 180 pixels square. Here, as shown in Figure 5, if the analysis target area is set so that its center coincides with the center of the alignment mark M, the area indicated by A2' in Figure 5 becomes the analysis target area. Since area A2' extends beyond the captured image Di, in the examples of Figures 5 to 8 the position of the analysis target area is corrected, and the area A2, which is 180 pixels square and whose center is closest to the center of the alignment mark M within the captured image Di, is set as the analysis target area.
 続いて、2回目のステップS104では、検出誤差が第1閾値未満であるため、ステップS105において、解析対象領域の大きさを20ピクセルずつ小さくし、160ピクセル四方とする。ここで、図6に示すように、解析対象領域の中心が、アライメントマークMの中心と一致するように解析対象領域を設定した場合、図6のA3'で示す領域が解析対象領域となる。領域A3'は、撮影画像Diと重ならない領域を有するため、解析対象領域の位置を補正し、撮影画像Di内において、160ピクセル四方であって、その中心が最もアライメントマークMの中心に近くなる領域A3を、解析対象領域に設定する。 Next, in the second step S104, the detection error is less than the first threshold, so in step S105 the size of the analysis target area is reduced by another 20 pixels per side, to 160 pixels square. Here, as shown in FIG. 6, if the analysis target area is set so that its center coincides with the center of the alignment mark M, the area indicated by A3' in FIG. 6 becomes the analysis target area. Since area A3' extends beyond the captured image Di, the position of the analysis target area is corrected, and the area A3, which is 160 pixels square and whose center is closest to the center of the alignment mark M within the captured image Di, is set as the analysis target area.
 3回目のステップS104においても、同様に、解析対象領域の大きさを20ピクセルずつ小さくし、140ピクセル四方とする。ここで、図7に示すように、解析対象領域の中心が、アライメントマークMの中心と一致するように解析対象領域を設定した場合、図7のA4'で示す領域が解析対象領域となる。領域A4'は、撮影画像Diと重ならない領域を有するため、解析対象領域の位置を補正し、撮影画像Di内において、140ピクセル四方であって、その中心が最もアライメントマークMの中心に近くなる領域A4を、解析対象領域に設定する。 Similarly, in the third step S104, the size of the analysis target area is reduced by another 20 pixels per side, to 140 pixels square. Here, as shown in FIG. 7, if the analysis target area is set so that its center coincides with the center of the alignment mark M, the area indicated by A4' in FIG. 7 becomes the analysis target area. Since area A4' extends beyond the captured image Di, the position of the analysis target area is corrected, and the area A4, which is 140 pixels square and whose center is closest to the center of the alignment mark M within the captured image Di, is set as the analysis target area.
 4回目から7回目のステップS104においても、同様に、解析対象領域の大きさを20ピクセルずつ小さくし、順に、120ピクセル四方、100ピクセル四方、80ピクセル四方、60ピクセル四方とする。図8に示すように、これらの場合には、解析対象領域の中心がアライメントマークMの中心と一致するように解析対象領域を設定した場合に、図8に示す解析対象領域A5~A8は、いずれも撮影画像Di内に入っているため、位置を補正する必要は無い。 In the fourth through seventh iterations of step S104, the size of the analysis target area is likewise reduced by 20 pixels per side, to 120 pixels square, 100 pixels square, 80 pixels square, and 60 pixels square in turn. As shown in FIG. 8, in these cases, when the analysis target area is set so that its center coincides with the center of the alignment mark M, the analysis target areas A5 to A8 shown in FIG. 8 all fall within the captured image Di, so there is no need to correct their positions.
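The shrink sequence of Figures 5 to 8 can be traced numerically with a short script. The mark center (140, 80) is an assumed position, chosen so that, as in the figures, A2 to A4 need a position correction while A5 to A8 do not:

    IMG = 200                      # captured image Di is 200 pixels square
    mark_cx, mark_cy = 140, 80     # assumed mark center (not in the text)

    size = 200                     # A1 covers the whole image
    for step in range(1, 8):       # seven shrinks: A2 ... A8
        size -= 20                 # detection error below the first threshold
        half = size // 2
        # Keep the region inside Di while centering it on the mark.
        cx = min(max(mark_cx, half), IMG - half)
        cy = min(max(mark_cy, half), IMG - half)
        corrected = (cx, cy) != (mark_cx, mark_cy)
        print(f"A{step + 1}: {size} px square, center ({cx}, {cy}),"
              f" corrected={corrected}")

Running this prints A2 (180 px) through A8 (60 px); the first three regions have clamped centers, matching the corrections from A2', A3', and A4' in the figures.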
 図5~図8の例に示すように、検出誤差が所定の範囲に収まる程度に、解析対象領域をアライメントマークMを含む一部の領域に縮小することにより、画像解析における演算量を適切に削減することができる。実際の画像処理においては、アライメントマークMの位置が変動する。そして、解析対象領域も、アライメントマークMの変動に応じて、アライメントマークMおよびその周辺領域を含むように、追従する。その際に、アライメントマークMの検出誤差が所定の範囲に収まる程度に、適切に解析対象領域を縮小する。 As shown in the examples of Figures 5 to 8, the amount of calculation in image analysis can be appropriately reduced by shrinking the analysis target area to a partial region containing the alignment mark M, to the extent that the detection error stays within a predetermined range. In actual image processing, the position of the alignment mark M fluctuates. The analysis target area then tracks the movement of the alignment mark M so as to include the alignment mark M and its surrounding region. In doing so, the analysis target area is shrunk appropriately, to the extent that the detection error of the alignment mark M stays within the predetermined range.
 <4.変形例>
 以上、本発明の実施形態について説明したが、本発明は、上記の実施形態に限定されるものではない。
4. Modifications
Although the embodiment of the present invention has been described above, the present invention is not limited to the above embodiment.
 上記の実施形態では、画像処理装置である位置検出部90を有する制御部10の制御対象が描画装置1であった。このため、この位置検出部90は、基板Wに設けられたアライメントマークMの位置や角度を検出するために、解析対象領域を適宜調整するものであった。しかしながら、本発明の画像処理装置によって検出する物体は、これに限られない。例えば、測量装置において測量対象に設けられたアライメントマークを検出するもの、基板検査装置の基板に設けられたアライメントマークを検出するもの等、種々の用途に適用可能である。 In the above embodiment, the control target of the control unit 10, which includes the position detection unit 90 serving as an image processing device, was the drawing device 1. Accordingly, this position detection unit 90 adjusted the analysis target area as appropriate in order to detect the position and angle of the alignment mark M provided on the substrate W. However, the object detected by the image processing device of the present invention is not limited to this. For example, the invention is applicable to various other uses, such as detecting an alignment mark provided on a survey target in a surveying device, or detecting an alignment mark provided on a substrate in a substrate inspection device.
 また、上記の実施形態では、検出誤差の値に応じて、第1閾値未満、第1閾値以上第2閾値未満、第2閾値以上の範囲に場合分けし、解析対象領域を縮小、変更なし、拡大の3通りの工程を選択した。しかしながら、本発明はこれに限られない。 In the above embodiment, the detection error was classified into three ranges (less than the first threshold; equal to or greater than the first threshold and less than the second threshold; and equal to or greater than the second threshold), and one of three corresponding processes was selected: reducing the analysis target area, leaving it unchanged, or enlarging it. However, the present invention is not limited to this.
 例えば、第1閾値と、第2閾値とが同じ1つの閾値であってもよい。そして、検出誤差が当該閾値未満の場合には解析対象領域を縮小し、検出誤差が当該閾値以上の場合には解析対象領域を拡大する、としてもよい。 For example, the first threshold and the second threshold may be the same threshold. Then, if the detection error is less than the threshold, the analysis target area may be reduced, and if the detection error is equal to or greater than the threshold, the analysis target area may be expanded.
 また、例えば、第1閾値、第2閾値、第3閾値の3つの閾値により、検出誤差に4つの範囲が設定されており、検出誤差が第1閾値未満の場合には解析対象領域を2段階縮小し、検出誤差が第1閾値以上第2閾値未満の場合には解析対象領域を1段階縮小し、検出誤差が第2閾値以上第3閾値未満の場合には解析対象領域を1段階拡大し、検出誤差が第3閾値以上の場合には解析対象領域を2段階拡大する、としてもよい。 Alternatively, for example, three thresholds (a first threshold, a second threshold, and a third threshold) may define four ranges for the detection error. In that case, the analysis target area may be reduced by two steps when the detection error is less than the first threshold, reduced by one step when the detection error is equal to or greater than the first threshold and less than the second threshold, enlarged by one step when the detection error is equal to or greater than the second threshold and less than the third threshold, and enlarged by two steps when the detection error is equal to or greater than the third threshold.
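These variants can be expressed as a single table lookup. The sketch below covers both the one-threshold case (thresholds=[t], steps=[-1, +1]) and the three-threshold case above, with each step standing for some fixed pixel increment; the function name and interface are illustrative:

    import bisect

    def resize_steps(error: float, thresholds: list[float],
                     steps: list[int]) -> int:
        """Map a detection error to a signed number of resize steps:
        negative means shrink, positive means enlarge. thresholds must be
        ascending, and steps must have len(thresholds) + 1 entries."""
        return steps[bisect.bisect_right(thresholds, error)]

    # Three-threshold variant:
    #   error < t1            -> -2 (shrink two steps)
    #   t1 <= error < t2      -> -1
    #   t2 <= error < t3      -> +1
    #   error >= t3           -> +2 (enlarge two steps)
    # resize_steps(error, [t1, t2, t3], [-2, -1, +1, +2])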
 また、上記の実施形態や変形例に登場した各要素を、矛盾が生じない範囲で、適宜に組み合わせてもよい。 Furthermore, the elements appearing in the above embodiments and variations may be combined as appropriate to the extent that no contradictions arise.
 1 描画装置
 10 制御部
 50 カメラ
 81 解析領域抽出部
 82 マーク位置解析部
 83 解析領域決定部
 90 位置検出部
 D1 解析用画像
 D2 検出誤差
 D3 領域情報
 Di 撮影画像
 Do 検出結果

 
REFERENCE SIGNS LIST 1 Drawing device 10 Control unit 50 Camera 81 Analysis area extraction unit 82 Mark position analysis unit 83 Analysis area determination unit 90 Position detection unit D1 Image for analysis D2 Detection error D3 Area information Di Photographed image Do Detection result

Claims (6)

  1.  定期的に取得される撮影画像から検出対象の位置を検出する画像処理方法であって、
     a)前記撮影画像の解析対象領域を解析し、前記検出対象の位置と、検出誤差とを求める工程と、
     b)前記検出誤差の大きさに基づいて、前記解析対象領域を変更する工程と、
    を含み、
     前記工程a)および前記工程b)を複数回繰り返す、画像処理方法。
    An image processing method for detecting a position of a detection target from a photographed image that is periodically acquired, comprising:
    a) analyzing an analysis target area of the captured image to obtain a position of the detection target and a detection error;
    b) changing the analysis target region based on the magnitude of the detection error;
    Including,
    The image processing method further comprises repeating the steps a) and b) multiple times.
  2.  請求項1に記載の画像処理方法であって、
     前記工程b)において、前記検出誤差が、所定の第1閾値よりも小さい場合に、前記解析対象領域を小さくする、画像処理方法。
    2. The image processing method according to claim 1,
    In the step b), the analysis target region is reduced if the detection error is smaller than a predetermined first threshold.
  3.  請求項2に記載の画像処理方法であって、
     前記工程b)において、前記解析対象領域を小さくする場合に、少なくとも所定の最小範囲以上の大きさとする、画像処理方法。
    3. The image processing method according to claim 2,
In the step b), when the analysis target region is reduced, it is kept at a size at least equal to or larger than a predetermined minimum range.
  4.  請求項2または請求項3に記載の画像処理方法であって、
     前記工程b)において、前記検出誤差が、所定の第2閾値よりも大きい場合に、前記解析対象領域を大きくする、画像処理方法。
4.  The image processing method according to claim 2 or claim 3,
    In the step b), the analysis target region is enlarged if the detection error is greater than a predetermined second threshold.
  5.  請求項1ないし請求項4のいずれか一項に記載の画像処理方法であって、
     c)前記工程a)および前記工程b)の後に、前記検出対象の変位が基準値よりも大きい場合に、前記解析対象領域を大きくする工程
    をさらに有する、画像処理方法。
5.  The image processing method according to any one of claims 1 to 4, further comprising:
    c) after steps a) and b), expanding the analysis target region when the displacement of the detection target is greater than a reference value.
  6.  定期的に取得される撮影画像から検出対象の位置を検出する画像処理装置であって、
     外部から入力された前記撮影画像の解析対象領域を抽出し、解析用画像を生成する解析領域抽出部と、
     前記解析用画像を解析し、前記検出対象の位置と、検出誤差とを求めるマーク位置解析部と、
     前記検出誤差に基づいて、前記解析対象領域を変更する解析領域決定部と、
    を有する、画像処理装置。

     
    An image processing device that detects a position of a detection target from a photographed image that is periodically acquired,
    an analysis region extraction unit that extracts an analysis target region of the photographed image input from outside and generates an analysis image;
    a mark position analysis unit that analyzes the analysis image and obtains a position of the detection target and a detection error;
    an analysis region determination unit that changes the analysis target region based on the detection error;
    The image processing device includes:

PCT/JP2023/033903 2022-10-03 2023-09-19 Image processing method and image processing device WO2024075510A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022159260A JP2024053178A (en) 2022-10-03 2022-10-03 Image processing method and image processing device
JP2022-159260 2022-10-03

Publications (1)

Publication Number Publication Date
WO2024075510A1 true WO2024075510A1 (en) 2024-04-11

Family

ID=90607986

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/033903 WO2024075510A1 (en) 2022-10-03 2023-09-19 Image processing method and image processing device

Country Status (2)

Country Link
JP (1) JP2024053178A (en)
WO (1) WO2024075510A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008251797A (en) * 2007-03-30 2008-10-16 Fujifilm Corp Reference position detection apparatus and method, and drawing apparatus
JP2010258099A (en) * 2009-04-22 2010-11-11 Canon Inc Mark position detecting device and mark position detection method, exposure apparatus using the same, and method of manufacturing device
JP2017009434A (en) * 2015-06-22 2017-01-12 アズビル株式会社 Image inspection device and image inspection method
JP2017032671A (en) * 2015-07-30 2017-02-09 株式会社Screenホールディングス Position measurement device, data correction device, position measurement method, and data correction method

Also Published As

Publication number Publication date
JP2024053178A (en) 2024-04-15

Similar Documents

Publication Publication Date Title
JP5441633B2 (en) Mark recognition device
US7194712B2 (en) Method and apparatus for identifying line-end features for lithography verification
WO2018019143A1 (en) Image photographing alignment method and system
US7248333B2 (en) Apparatus with light-modulating unit for forming pattern
JP2006252473A (en) Obstacle detector, calibration device, calibration method and calibration program
CN109213090B (en) Position control system, position detection device, and recording medium
CN102681355A (en) Scanning data correcting device and drawing device
JP2009223262A (en) Exposure system and exposure method
CN112764324B (en) Scanning method of photoetching system and photoetching system
WO2024075510A1 (en) Image processing method and image processing device
US7614031B2 (en) Drawing apparatus with drawing data correction function
JP2008203635A (en) Plotting method and plotting device
JP2006323378A (en) Method and device for acquiring drawing point data and method and device for drawing
CN110249266B (en) Drawing device and drawing method
JP2016206654A (en) Exposure apparatus and exposure method, and manufacturing method of article
JP6306377B2 (en) Drawing method and drawing apparatus
JP2011022329A (en) Drawing device, program, and drawing method
TWI728344B (en) Drawing apparatus and drawing method
JP2007102580A (en) Positioning method and positioning apparatus
TWI819658B (en) Drawing system, drawing method and program product containing program
TWI771080B (en) Substrate position detection method, drawing method, substrate position detection apparatus and drawing apparatus
TWI831264B (en) Drawing apparatus, drawing method and program product containing program
JP7461240B2 (en) Position detection device, drawing system, and position detection method
JP2024073963A (en) Alignment device and alignment method
KR20240041212A (en) Template generating apparatus, drawing system, template generating method and program recorded on recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23874639

Country of ref document: EP

Kind code of ref document: A1