WO2024101186A1 - Substrate inspection method, substrate inspection device, and substrate inspection program - Google Patents

Substrate inspection method, substrate inspection device, and substrate inspection program

Info

Publication number
WO2024101186A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
inspection
images
normal
difference
Prior art date
Application number
PCT/JP2023/038877
Other languages
French (fr)
Japanese (ja)
Inventor
Kentaro Honda
Shuji Iwanaga
Tatsuya Tokumaru
Original Assignee
Tokyo Electron Limited
Priority date
Filing date
Publication date
Application filed by Tokyo Electron Limited
Publication of WO2024101186A1


  • This disclosure relates to a substrate inspection method, a substrate inspection device, and a substrate inspection program.
  • Patent Document 1 discloses a device that classifies defects occurring on a substrate based on an image of the substrate that is the subject of inspection.
  • This disclosure provides technology that is useful for accurately detecting abnormalities on a substrate surface.
  • a substrate inspection method is a substrate inspection method for inspecting a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, the method comprising: generating a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images; generating a preprocessed image for each of the plurality of normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; generating an inspection difference image which is the difference between the inspection image and the average image; generating a plurality of component images in color space from the inspection image and the inspection difference image; generating a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and inspecting the substrate to be inspected for defects based on the preprocessed images for the plurality of normal images and the preprocessed image for the inspection image.
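The average-image and difference-image steps above can be sketched in a few lines. This is a minimal illustration with numpy; the number of normal images, the image size, and the random pixel values are hypothetical, not taken from the disclosure:

```python
import numpy as np

# Hypothetical example: five "normal" wafer images, 8x8 pixels, 3 color channels.
rng = np.random.default_rng(0)
normal_images = rng.random((5, 8, 8, 3))

# Average image generated from the plurality of normal images.
average_image = normal_images.mean(axis=0)

# One normal difference image per normal image: its difference from the average.
normal_difference_images = normal_images - average_image

# An inspection image is compared against the same average image.
inspection_image = rng.random((8, 8, 3))
inspection_difference_image = inspection_image - average_image
```

Each original image and its difference image then feed the color-space component-image generation described below.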
  • This disclosure provides technology that is useful for accurately detecting abnormalities on a substrate surface.
  • FIG. 1 is a perspective view illustrating a substrate processing system according to a first embodiment.
  • FIG. 2 is a side view illustrating an example of a coating and developing apparatus.
  • FIG. 3 is a schematic diagram showing an example of an inspection unit.
  • FIG. 4 is a block diagram illustrating an example of a functional configuration of the control device.
  • FIG. 5 is a block diagram illustrating an example of a hardware configuration of the control device.
  • FIG. 6 is a flowchart showing an example of a substrate inspection method.
  • FIG. 7 is a schematic diagram showing an example of a method for creating a model map.
  • FIG. 8 is a schematic diagram showing an example of an image processing method for a normal image.
  • FIGS. 9A, 9B, and 9C are schematic diagrams showing an example of an image processing method for a target image.
  • FIGS. 10A, 10B, 10C, and 10D are schematic diagrams showing an example of an image processing method for a target image.
  • FIG. 11 is a schematic diagram showing an example of a method for creating a preprocessed image from an inspection image.
  • FIG. 12 is a schematic diagram showing an example of a defect detection method.
  • a substrate inspection method is a substrate inspection method for inspecting a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, and includes: generating a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images; generating a preprocessed image for each of the plurality of normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; generating an inspection difference image which is the difference between the inspection image and the average image; generating a plurality of component images in color space from the inspection image and the inspection difference image; generating a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and inspecting the substrate to be inspected for defects based on the preprocessed images for the plurality of normal images and the preprocessed image for the inspection image.
  • a preprocessed image for each of the multiple normal images is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the multiple normal images. Also, a preprocessed image for the inspection image is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the inspection image of the substrate to be inspected. Then, based on these images, the substrate to be inspected is inspected for defects.
  • a preprocessed image is generated using variance information related to the variance of the component values of multiple components in the color space in the normal image and the inspection image, and the substrate is inspected for defects using this, so that the inspection can be performed using an image that takes into account the variance of the component values in the color space. Therefore, it is useful for accurately detecting abnormalities on the substrate surface.
  • a model map is generated based on variance information of component values of each pixel included in the component images for the normal images, and a preprocessed image for the normal image is created by calculating a difference between the model map and variance information of component values of each pixel included in the component images obtained from the normal image; and in generating a preprocessed image for the inspection image, a preprocessed image for the inspection image is created by calculating a difference between the model map and variance information of component values of each pixel included in the component images obtained from the inspection image.
  • a model map created based on the variance information of the component values of each pixel contained in multiple component images related to multiple normal images can be said to be a compilation of the variance of the component values of each pixel in multiple normal images.
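As a sketch of this idea, a model map can be compiled from the per-pixel variance information of the normal set, and each image's own variance information is then differenced against it. The choice of statistic (here a simple per-pixel mean of hypothetical variance scores) and all shapes are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel "variance information" for 5 normal images (8x8 each),
# e.g. a per-pixel score derived from the color-space component values.
normal_variance_info = rng.random((5, 8, 8))

# Model map: a compilation of the per-pixel variance information over all
# normal images (here, simply the per-pixel mean).
model_map = normal_variance_info.mean(axis=0)

# Preprocessed image for each normal image: difference from the model map.
preprocessed_normals = normal_variance_info - model_map

# Preprocessed image for an inspection image uses the same model map.
inspection_variance_info = rng.random((8, 8))
preprocessed_inspection = inspection_variance_info - model_map
```

Because the model map aggregates only normal substrates, a pixel whose variance information departs strongly from the map stands out in the preprocessed image.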
  • the variance information may be obtained by calculating the Mahalanobis distance for the component values of each pixel included in the multiple component images.
  • variance information can be calculated more accurately by calculating the Mahalanobis distance, making it possible to create a preprocessed image that is more useful when detecting abnormalities on the substrate surface.
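A per-pixel Mahalanobis distance of this kind can be computed as follows. This is a minimal numpy sketch; the number of normal images, the image size, the number of color components, and the covariance regularization term are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, C = 8, 8, 3                    # image size and number of color components
normal = rng.random((20, H, W, C))   # 20 hypothetical normal images

# Per-pixel statistics over the normal set: mean vector and covariance matrix
# of the C component values at each pixel location.
mean = normal.mean(axis=0)                                   # (H, W, C)
centered = normal - mean                                     # (20, H, W, C)
cov = np.einsum('nhwi,nhwj->hwij', centered, centered) / (normal.shape[0] - 1)
cov += 1e-6 * np.eye(C)                                      # regularize
cov_inv = np.linalg.inv(cov)                                 # (H, W, C, C)

def mahalanobis_map(image):
    """Per-pixel Mahalanobis distance of `image` from the normal statistics."""
    d = image - mean                                         # (H, W, C)
    m2 = np.einsum('hwi,hwij,hwj->hw', d, cov_inv, d)
    return np.sqrt(m2)

dist = mahalanobis_map(rng.random((H, W, C)))
```

Unlike a plain Euclidean distance, the Mahalanobis distance weights each component by how much it naturally varies across normal substrates, which is why it yields more accurate variance information.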
  • the color space may be at least one of the RGB color space, the HSV color space, the XYZ color space, and the Lab color space.
  • a pre-processed image can be created that is more useful in detecting anomalies on the substrate surface.
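For example, RGB and HSV component images can both be produced from one captured image. The sketch below uses the Python standard-library colorsys conversion on a hypothetical image; a real implementation would likely use a vectorized conversion:

```python
import colorsys
import numpy as np

rng = np.random.default_rng(3)
rgb = rng.random((4, 4, 3))  # small hypothetical RGB image, values in [0, 1]

# RGB component images: one single-channel image per color component.
r_img, g_img, b_img = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# HSV component images via the standard-library per-pixel conversion.
hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb])
h_img, s_img, v_img = hsv[..., 0], hsv[..., 1], hsv[..., 2]
```

Each component image contributes one element of the per-pixel component-value vector used in the variance calculation.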
  • the normal difference image and the test difference image may include images that have been subjected to enhancement processing to enhance the difference from the average image.
  • the normal difference image and the inspection difference image may include images after unevenness removal processing has been performed to remove unevenness in the image.
  • the normal difference image and the test difference image may include an image obtained after averaging the component values of pixels in a specified area and then performing a process to determine the difference from the averaged value.
  • the normal difference image and the inspection difference image may include images after noise removal processing has been performed to remove noise from the images.
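The local-averaging variant above (averaging the component values over a specified area and then taking the difference from that average) can be sketched as follows. The kernel size, padding mode, and enhancement factor are hypothetical choices, not values from the disclosure:

```python
import numpy as np

def box_average(img, k=3):
    """Average the pixel values over a k x k area around each pixel."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(4)
diff = rng.random((8, 8))   # a hypothetical single-channel difference image

# Average over a specified area, then take each pixel's difference from that
# local average; slowly varying unevenness cancels out.
unevenness_removed = diff - box_average(diff, k=3)

# A simple enhancement step: scale up the remaining differences.
enhanced = 4.0 * unevenness_removed
```

Noise removal could similarly be a local filter (e.g. a median filter) applied before the difference is taken.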
  • Inspecting the substrate to be inspected for defects may include creating a normal image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the normal image, and an inspection image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the inspection image, and identifying an area in which a defect is estimated to exist from the difference between the normal image abnormality degree map and the inspection image abnormality degree map.
  • the area where a defect is estimated to exist is identified from the difference between the normal image abnormality map created from the preprocessed image related to the normal image and the inspection image abnormality map created from the preprocessed image related to the inspection image.
  • the abnormality map is a map that indicates the degree of abnormality for each pixel, and the area where a defect exists is estimated from the difference in the degree of abnormality for each pixel between the normal image and the inspection image, so that a more accurate estimation is performed based on the information of each pixel.
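The comparison of the two abnormality degree maps can be sketched as follows; the map values, the simulated defect, and the threshold are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical per-pixel abnormality degree maps derived from the
# preprocessed images of a normal substrate and of the inspected substrate.
normal_abnormality_map = rng.random((8, 8)) * 0.1
inspection_abnormality_map = normal_abnormality_map.copy()
inspection_abnormality_map[2:4, 5:7] += 1.0   # simulate a defective region

# Difference of the abnormality degree maps; pixels whose abnormality degree
# rose beyond a threshold are estimated to belong to a defect.
difference_map = inspection_abnormality_map - normal_abnormality_map
defect_mask = difference_map > 0.5
defect_pixels = np.argwhere(defect_mask)
```

Working on the difference of the two maps, rather than on the inspection map alone, suppresses abnormality-degree patterns that normal substrates also exhibit.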
  • an algorithm may be used that takes a preprocessed image as input and generates an abnormality map, and the algorithm may be created by machine learning using, as training data, at least a portion of the normal images, the normal difference images, and the component images obtained from these.
  • an algorithm for generating an anomaly map is created by machine learning using multiple normal images, multiple normal difference images, and at least some of the multiple component images obtained from these as training data.
  • the number of normal images prepared for creating the algorithm can be reduced, making it easier to create a highly accurate algorithm.
  • a substrate inspection device is a substrate inspection device that inspects a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, and includes: a normal preprocessing image creation unit that generates a plurality of normal difference images that are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images, generates a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images, and generates a preprocessing image for the normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; an inspection preprocessing image creation unit that generates an inspection difference image that is the difference between the inspection image and the average image, generates a plurality of component images in color space from the inspection image and the inspection difference image, and generates a preprocessing image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and a defect detection unit that inspects the substrate to be inspected for defects based on the preprocessing image for the normal images and the preprocessing image for the inspection image.
  • a preprocessed image for each of the multiple normal images is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the multiple normal images. Also, a preprocessed image for the inspection image is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the inspection image of the substrate to be inspected. Then, based on these images, the substrate to be inspected is inspected for defects.
  • a preprocessed image is generated using variance information related to the variance of the component values of multiple components in the color space in the normal image and the inspection image, and the substrate is inspected for defects using this, so that the inspection can be performed using an image that takes into account the variance of the component values in the color space. Therefore, it is useful for accurately detecting abnormalities on the substrate surface.
  • the normal preprocessed image creation unit may generate a model map created based on variance information of component values of each pixel included in the component images related to the normal images, and create a preprocessed image related to the normal image by determining the difference between the model map and the variance information of component values of each pixel included in the component images obtained from the normal image, and the inspection preprocessed image creation unit may create a preprocessed image related to the inspection image by determining the difference between the model map and the variance information of component values of each pixel included in the component images obtained from the inspection image.
  • the variance information may be obtained by calculating the Mahalanobis distance for the component values of each pixel included in the multiple component images.
  • the defect detection unit may be configured to create a normal image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the normal image, and an inspection image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the inspection image, and identify an area where a defect is estimated to exist from the difference between the normal image abnormality degree map and the inspection image abnormality degree map.
  • the defect detection unit may use an algorithm that generates an anomaly map by inputting a preprocessed image, and the algorithm may be created by machine learning using as training data at least a portion of the normal images, the normal difference images, and the component images obtained from these.
  • a substrate inspection program is a substrate inspection program that causes a computer to perform substrate inspection using an inspection image obtained by imaging a substrate to be inspected, and causes the computer to perform the following: generate a normal difference image that is the difference between an average image generated from multiple normal images obtained by imaging multiple normal substrates and each of the multiple normal images; generate multiple component images in color space from the multiple normal images and the multiple normal difference images; generate a preprocessed image for each of the multiple normal images based on variance information related to the variance of the component values of each pixel included in the multiple component images; generate an inspection difference image that is the difference between the inspection image and the average image; generate multiple component images in color space from the inspection image and the inspection difference image; generate a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the multiple component images; and inspect the substrate to be inspected for defects based on the preprocessed image for the normal image and the preprocessed image for the inspection image.
  • the above-mentioned substrate inspection program achieves the same effect as the above-mentioned substrate inspection method.
  • the substrate processing system 1 shown in FIG. 1 is a system that forms a photosensitive film on a workpiece W, exposes the photosensitive film, and develops the photosensitive film.
  • the workpiece W to be processed is, for example, a substrate, or a substrate on which a film or circuit or the like has been formed by performing a predetermined process.
  • the substrate included in the workpiece W is, for example, a wafer containing silicon.
  • the workpiece W (substrate) may be formed in a circular shape.
  • the workpiece W to be processed may be a glass substrate, a mask substrate, or an FPD (Flat Panel Display), or may be an intermediate body obtained by performing a predetermined process on such a substrate.
  • the photosensitive film is, for example, a resist film.
  • the substrate processing system 1 comprises a coating and developing apparatus 2 and an exposure apparatus 3.
  • the exposure apparatus 3 performs an exposure process on a resist film (photosensitive coating) formed on a workpiece W (substrate). Specifically, the exposure apparatus 3 irradiates an energy beam onto an exposure target portion of the resist film using a method such as immersion exposure.
  • the coating and developing apparatus 2 performs a process of forming a resist film on the surface of the workpiece W before the exposure process by the exposure apparatus 3, and performs a development process of the resist film after the exposure process.
  • the coating and developing apparatus 2 includes a carrier block 4, a processing block 5, an interface block 6, and a control device 100.
  • the carrier block 4 introduces the workpiece W into the coating and developing apparatus 2 and removes the workpiece W from the coating and developing apparatus 2.
  • the carrier block 4 can support multiple carriers C (containers) for the workpieces W, and has a built-in transport device A1 including a transfer arm.
  • the carrier C stores, for example, multiple circular workpieces W.
  • the transport device A1 removes the workpiece W from the carrier C and passes it to the processing block 5, and receives the workpiece W from the processing block 5 and returns it to the carrier C.
  • the processing block 5 has multiple processing modules 11, 12, 13, and 14.
  • the processing module 11 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units.
  • the processing module 11 forms an underlayer film on the surface of the workpiece W using the liquid processing unit U1 and the heat processing unit U2.
  • the liquid processing unit U1 of the processing module 11 applies a processing liquid for forming the underlayer film onto the workpiece W.
  • the heat processing unit U2 of the processing module 11 performs various heat treatments associated with the formation of the underlayer film.
  • the inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the formation of the underlayer film, after the formation of the underlayer film, or before the processing liquid for forming the underlayer film is applied and heat treatment is performed.
  • the processing module 12 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units.
  • the processing module 12 forms a resist film on the underlayer film using the liquid processing unit U1 and the heat processing unit U2.
  • the liquid processing unit U1 of the processing module 12 applies a processing liquid (resist) for forming a resist film onto the underlayer film.
  • the heat processing unit U2 of the processing module 12 performs various heat treatments associated with the formation of the resist film.
  • the inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the resist film is formed, after the resist film is formed, or before the resist is applied and heat treatment is performed.
  • the processing module 13 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units.
  • the processing module 13 forms an upper layer film on the resist film using the liquid processing unit U1 and the heat processing unit U2.
  • the liquid processing unit U1 of the processing module 13 applies a processing liquid for forming the upper layer film onto the resist film.
  • the heat processing unit U2 of the processing module 13 performs various heat treatments associated with the formation of the upper layer film.
  • the inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the upper layer film is formed, after the upper layer film is formed, or before the processing liquid for forming the upper layer film is applied and heat treatment is performed.
  • the processing module 14 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units.
  • the processing module 14 uses the liquid processing unit U1 and the heat processing unit U2 to perform a development process on the resist film after exposure.
  • the liquid processing unit U1 of the processing module 14 performs a development process on the resist film, for example, by supplying a developer onto the surface of the exposed workpiece W and then rinsing it off with a rinse liquid.
  • the heat processing unit U2 of the processing module 14 performs various heat treatments associated with the development process. Specific examples of such heat treatments include heating before the development process (PEB: Post Exposure Bake) and heating after the development process (PB: Post Bake).
  • the inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the development process and PEB are performed, after the development process and PB are performed, or before a developer is supplied and PB is performed.
  • a shelf unit U10 is provided on the carrier block 4 side within the processing block 5.
  • the shelf unit U10 is divided into multiple cells arranged in the vertical direction.
  • a transport device A7 including a lifting arm is provided near the shelf unit U10. The transport device A7 raises and lowers the workpiece W between the cells of the shelf unit U10.
  • a shelf unit U11 is provided on the interface block 6 side of the processing block 5.
  • the shelf unit U11 is divided into multiple cells arranged vertically.
  • the interface block 6 transfers the workpiece W to and from the exposure device 3.
  • the interface block 6 has a built-in transport device A8 that includes a transfer arm, and is connected to the exposure device 3.
  • the transport device A8 transfers the workpiece W placed on the shelf unit U11 to the exposure device 3, receives the workpiece W from the exposure device 3, and returns it to the shelf unit U11.
  • the control device 100 controls each device included in the coating and developing apparatus 2 to perform a coating and developing process (substrate processing), for example, in the following procedure. First, the control device 100 controls the transport device A1 to transport the workpiece W in the carrier C to the shelf unit U10, and then controls the transport device A7 to place the workpiece W in the cell for the processing module 11.
  • control device 100 controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 11.
  • the control device 100 controls the liquid processing unit U1 to form a film of the processing liquid for forming the underlayer film on the surface of the workpiece W.
  • the control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming the underlayer film formed thereon to form the underlayer film.
  • the control device 100 then controls the transport device A3 to return the workpiece W with the underlayer film formed thereon to the shelf unit U10, and controls the transport device A7 to place the workpiece W in a cell for the processing module 12.
  • the control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 11.
  • control device 100 controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 12.
  • the control device 100 controls the liquid processing unit U1 to form a film of the processing liquid for forming a resist film on the surface of the workpiece W.
  • the control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming a resist film formed thereon to form a resist film.
  • the control device 100 then controls the transport device A3 to return the workpiece W to the shelf unit U10, and controls the transport device A7 to place the workpiece W in a cell for the processing module 13.
  • the control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 12.
  • the control device 100 then controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 13.
  • the control device 100 also controls the liquid processing unit U1 to form a film of the processing liquid for forming an upper layer film on the resist film of the workpiece W.
  • the control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming an upper layer film formed thereon to form an upper layer film.
  • the control device 100 then controls the transport device A3 to transport the workpiece W to the shelf unit U11.
  • the control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 13.
  • the control device 100 then controls the transport device A8 to send the workpiece W from the shelf unit U11 to the exposure device 3.
  • the control device 100 then controls the transport device A8 to receive the workpiece W that has been subjected to the exposure process from the exposure device 3 and place it in a cell for the processing module 14 in the shelf unit U11.
  • the control device 100 then controls the transport device A3 to transport the workpiece W from the shelf unit U11 to each unit in the processing module 14, and controls the liquid processing unit U1 and the heat processing unit U2 to perform a developing process on the resist film of the workpiece W.
  • the control device 100 then controls the transport device A3 to return the workpiece W to the shelf unit U10, and controls the transport device A7 and the transport device A1 to return the workpiece W into the carrier C.
  • the control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during the processing in the processing module 14. This completes the coating and developing process for one workpiece W.
  • the control device 100 controls each device of the coating and developing apparatus 2 to perform the coating and developing process for each of the subsequent multiple workpieces W in the same manner as described above.
  • the specific configuration of the substrate processing apparatus is not limited to the configuration of the coating and developing apparatus 2 exemplified above.
  • the substrate processing apparatus may be any type that includes a unit that inspects the surface of the workpiece W that is to be subjected to a specified process, and a control device that controls this unit.
  • the inspection unit U3 has a function of capturing an image of the surface of the workpiece W (hereinafter referred to as "surface Wa") to acquire image data.
  • the inspection unit U3 may capture an image of the entire surface Wa of the workpiece W to acquire image data of the entire surface Wa.
  • the inspection unit U3 includes, for example, a housing 30, a holding unit 31, a linear driving unit 32, an imaging unit 33, and a light projecting/reflecting unit 34.
  • the holding unit 31 holds the workpiece W horizontally with the surface Wa facing upward.
  • the linear drive unit 32 includes a power source such as an electric motor, and moves the holding unit 31 along a horizontal linear path.
  • the imaging unit 33 has a camera 35 such as a CCD camera.
  • the camera 35 is provided near one end of the inspection unit U3 in the moving direction of the holding unit 31, and is directed toward the other end in the moving direction.
  • the light projecting/reflecting unit 34 projects light into the imaging range and guides the reflected light from the imaging range to the camera 35.
  • the light projecting/reflecting unit 34 has a half mirror 36 and a light source 37.
  • the half mirror 36 is provided in the middle of the moving range of the linear drive unit 32 at a position higher than the holding unit 31, and reflects light from below to the camera 35.
  • the light source 37 is provided above the half mirror 36, and irradiates illumination light downward through the half mirror 36.
  • the inspection unit U3 operates as follows to acquire image data of the surface Wa of the workpiece W.
  • the linear drive unit 32 moves the holding unit 31. This causes the workpiece W to pass under the half mirror 36.
  • reflected light from each part of the surface Wa of the workpiece W is sent sequentially to the camera 35.
  • the camera 35 forms an image of the reflected light from each part of the surface Wa of the workpiece W, and acquires image data of the surface Wa of the workpiece W (the entire surface Wa).
  • the captured image obtained by capturing the surface Wa of the workpiece W changes depending on the condition of the surface Wa of the workpiece W.
  • an image (captured image data) of the surface Wa of the workpiece W is acquired as information indicating the condition of the surface Wa of the workpiece W, and this is used to evaluate the condition of the surface Wa, in particular the presence or absence of defects.
  • the captured image data acquired by the camera 35 is sent to the control device 100.
  • the control device 100 can inspect the condition of the surface Wa of the workpiece W based on the captured image data of the surface Wa. For example, the presence or absence of defects on the surface Wa of the workpiece W can be inspected.
  • image data in which a pixel value is defined for each pixel may be simply referred to as an "image."
  • control device 100 has a process control unit 102 and an inspection control unit 110 as functional components (hereinafter referred to as "functional modules").
  • the processes executed by the process control unit 102 and the inspection control unit 110 correspond to the processes executed by the control device 100.
  • the process control unit 102 controls the liquid processing unit U1 and the heat processing unit U2 so as to perform the liquid processing and heat treatment in the above-mentioned coating and developing process on the workpiece W.
  • the inspection control unit 110 inspects the workpiece W based on image data obtained from the inspection unit U3 at any stage when performing the coating and developing process.
  • the inspection of the workpiece W includes determining whether there are any abnormalities (defects) on the surface Wa of the workpiece W.
  • Defects on the surface Wa include, for example, scratches, adhesion of foreign matter, uneven application of the processing liquid, and non-application of the processing liquid.
  • Before performing the inspection, the inspection control unit 110 prepares normal substrate data to be used in the inspection, using a normal workpiece W (normal substrate) with no defects found on the surface Wa.
  • the inspection control unit 110 performs an inspection of the workpiece W (substrate to be inspected) to be inspected based on the normal substrate data.
  • the normal workpiece W and the workpiece W to be inspected are the same type of workpiece (substrate).
  • the normal workpiece W and the workpiece W to be inspected are subjected to a coating and development process under the same processing conditions, and the preparation of normal substrate data and the inspection of the workpiece W are performed at the same timing in the coating and development process (for example, after the application of resist and before heat treatment).
  • the inspection control unit 110 has, as its functional modules, a model map creation unit 120, a preprocessing unit 130, a defect detection unit 112, and a result output unit 114. Furthermore, the model map creation unit 120 has an image acquisition unit 122, a component image generation unit 124, and a model map generation unit 126. And the preprocessing unit 130 has an image acquisition unit 132, a component image generation unit 134, and a preprocessing image generation unit 136. The model map creation unit 120 and the preprocessing unit 130 of the inspection control unit 110 function as a normal preprocessing image creation unit that creates a preprocessing image related to a normal image.
  • model map creation unit 120 and the preprocessing unit 130 of the inspection control unit 110 also function as an inspection preprocessing image creation unit that creates a preprocessing image related to an inspection image.
  • the processing performed by each functional module of the inspection control unit 110 corresponds to the processing performed by the inspection control unit 110 (control device 100).
  • the model map creation unit 120 has a function of creating a model map to be used for inspection from an image of a normal workpiece W. Note that as image data relating to the normal workpiece W, multiple pieces of image data (e.g., about 3 to 10 pieces) of image data of the surface Wa of the normal workpiece W are prepared.
  • the image acquisition unit 122 has the function of acquiring image data of the surface Wa of a normal workpiece W from the inspection unit U3.
  • the component image generating unit 124 has a function of generating a component image used to create a model map from the image acquired by the image acquiring unit 122. The procedure for generating a component image will be described later.
  • the model map generating unit 126 has a function of creating a model map using the component images generated by the component image generating unit 124.
  • a model map is a map of a shape corresponding to the workpiece W, in which the degree of color variation on the surface Wa in image data relating to a normal workpiece W is quantified for each pixel contained in the image data.
  • One model map is created from multiple image data relating to normal workpieces W. The detailed procedure will be described later, but the color variation for each pixel is quantified using covariance and Mahalanobis distance.
  • the model map created by the model map creating unit 120 is used for pre-processing of image data used in defect detection.
  • the pre-processing unit 130 has the function of creating pre-processed images to be used for inspection from images of a normal workpiece W and images of the workpiece W to be inspected.
  • the image acquisition unit 132 has a function of acquiring image data of the surface Wa of the workpiece W to be inspected from the inspection unit U3. Note that image data of the surface Wa of a normal workpiece W is acquired by the image acquisition unit 122, so this image data is used.
  • the component image generating unit 134 has a function of generating a component image to be used for creating a preprocessed image from the image acquired by the image acquiring unit 132. The procedure for generating the component image will be described later.
  • the preprocessed image generating unit 136 has a function of generating a preprocessed image using the component image generated by the component image generating unit 134.
  • a preprocessed image is an image in which the degree of color variation on the surface Wa in the image data of a normal workpiece W and the image data of the workpiece W to be inspected is quantified for each pixel contained in the image data. The detailed procedure will be described later, but in generating a preprocessed image, the color variation for each pixel is quantified using the covariance used to create the model map and the Mahalanobis distance.
  • One preprocessed image is created for each image data to be processed.
  • a preprocessed image is also created for the image data of a normal workpiece W.
  • the preprocessed image created by the preprocessing unit 130 is used in defect detection.
  • the defect detection unit 112 uses a preprocessed image created from image data of a normal workpiece W and image data of the workpiece W to be inspected to check for the presence or absence of defects on the surface Wa of the workpiece W to be inspected. An example of the processing procedure in the defect detection unit 112 will be described later.
  • the result output unit 114 has a function of outputting the detection result in the defect detection unit 112.
  • when the defect detection unit 112 determines that there is an abnormality on the surface Wa of the workpiece W, the result output unit 114 may output an abnormality signal indicating that the workpiece W being inspected is abnormal.
  • the result output unit 114 may output the abnormality signal to the process control unit 102, may output it to a higher-level controller, or may output it to an output device such as a monitor for notifying an operator, etc. of information.
  • the control device 100 is composed of one or more computers.
  • the control device 100 has, for example, a circuit 150 shown in FIG. 5.
  • the circuit 150 has one or more processors 152, a memory 154, a storage 156, and an input/output port 158.
  • the storage 156 has a computer-readable storage medium, such as a hard disk.
  • the storage medium stores a program (substrate inspection program) for causing the control device 100 to execute the substrate inspection method described below.
  • the storage medium may be a removable medium, such as a non-volatile semiconductor memory, a magnetic disk, or an optical disk.
  • Memory 154 temporarily stores the programs loaded from the storage medium of storage 156 and the results of calculations performed by processor 152.
  • Processor 152 configures each of the functional modules described above by executing the above programs in cooperation with memory 154.
  • Input/output port 158 inputs and outputs electrical signals between liquid processing unit U1, heat processing unit U2, inspection unit U3, etc., according to instructions from processor 152.
  • the hardware configuration of the control device 100 is not necessarily limited to configuring each functional module by a program.
  • each functional module of the control device 100 may be configured by a dedicated logic circuit or an ASIC (Application Specific Integrated Circuit) that integrates these.
  • when the control device 100 is configured by multiple computers (multiple circuits), some of the above functional modules may be realized by one computer (circuit), and the remaining functional modules may be realized by another computer (circuit).
  • As an example of a substrate inspection method, a series of processes executed by the control device 100 (inspection control unit 110) will be described.
  • Fig. 6 is a flow diagram illustrating the series of processes executed by the control device 100.
  • Figs. 7 to 12 are diagrams illustrating the details of the processes performed in each step and an example of image data generated in each step.
  • the control device 100 executes steps S01 to S03, for example, as shown in FIG. 6.
  • In step S01, the image acquisition unit 122 of the model map creation unit 120 acquires image data (normal image) relating to a normal workpiece W from the inspection unit U3.
  • In step S02, the component image generation unit 124 of the model map creation unit 120 creates a component image from the normal image acquired by the image acquisition unit 122.
  • In step S03, the model map generation unit 126 of the model map creation unit 120 generates a model map from the component image generated by the component image generation unit 124.
  • FIG. 7 is a diagram explaining the series of steps S01 to S03 described above.
  • In step S01, it is assumed that there are three normal images G1 acquired by the image acquisition unit 122 and used in the procedure described below.
  • image data of 2048 x 2048 pixels is used as the normal images G1.
  • all three normal images G1 are color images.
  • the pixel value for each pixel includes, for example, information on the pixel values related to each of the RGB colors.
  • one normal image G1 includes information on the pixel values of 2048 x 2048 pixels.
  • FIG. 7 shows an example in which image generation process P1 is executed on three normal images G1.
  • Image generation process P1 shown in FIG. 7 corresponds to step S02 in FIG. 6, i.e., the process of creating component images.
  • the image generation process P1 includes enhancement processing, difference processing, component decomposition processing, and filter processing.
  • the procedure for creating these component images will be described with reference to FIG. 8 to FIG. 10.
  • an average image G2 is created by averaging the pixel values of each pixel in multiple normal images G1.
  • an enhancement process is performed on the average image G2 to create an average enhancement image G3.
  • an enhancement process using a LUT (lookup table) based on a tone conversion function is performed on a color image.
  • a tone conversion function is a function that associates pixel values in an input image and an output image, and a function is set for each RGB according to the characteristic part to be emphasized.
  • a LUT is a table that shows the correspondence of pixel values before and after conversion.
  • a tone conversion function that focuses on the part to be emphasized is set in advance, and a LUT corresponding to this tone conversion function is prepared, and a process is performed to convert the RGB values of each pixel of the color image based on the LUT.
  • an average enhancement image G3 is obtained in which a specific pixel value area is more emphasized.
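The LUT-based enhancement described above can be sketched as follows. This is a minimal illustration under assumed details: the patent does not disclose its tone conversion function, so a simple gamma curve stands in for it, and the names `build_lut` and `enhance_channel` are hypothetical. The same LUT application would be performed per RGB channel.

```python
# A tone conversion function maps input pixel values to output values; the LUT
# precomputes that mapping for all 256 levels of one 8-bit color channel.

def build_lut(tone_curve):
    """Precompute the tone conversion for every possible 8-bit pixel value."""
    return [min(255, max(0, round(tone_curve(v)))) for v in range(256)]

# Hypothetical tone curve that stretches darker values (the region to emphasize).
gamma = 0.5
lut = build_lut(lambda v: 255 * (v / 255) ** gamma)

def enhance_channel(channel, lut):
    """Apply the LUT to one color channel (a 2-D list of pixel values)."""
    return [[lut[v] for v in row] for row in channel]

channel = [[0, 64], [128, 255]]
print(enhance_channel(channel, lut))  # → [[0, 128], [181, 255]]
```

Because the conversion is a pure table lookup, the same LUT can be reused for every pixel of every image processed with that tone curve.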
  • the unevenness removal process refers to a process for removing concentric unevenness, for example.
  • concentric circles are set with the pixel that captures the center of the workpiece W as the center.
  • a filter image can be prepared in which the pixel values of each of the RGB pixels on the concentric circles are the same value (average value).
  • an average unevenness-removed image G4 is obtained by taking the difference between the pixel values of each pixel included in the filter image from the pixel values of each pixel in the average emphasized image G3.
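The concentric unevenness removal above can be sketched as follows for a single channel. The assignment of pixels to concentric circles by a rounded radius is an assumption (the patent does not specify the binning), and the function names are hypothetical.

```python
import math

def ring_filter_image(img):
    """Filter image: every pixel on the same concentric circle around the
    image center gets that ring's average value."""
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    rings = {}
    for y in range(h):
        for x in range(w):
            r = round(math.hypot(y - cy, x - cx))  # assumed integer ring index
            rings.setdefault(r, []).append(img[y][x])
    ring_mean = {r: sum(v) / len(v) for r, v in rings.items()}
    return [[ring_mean[round(math.hypot(y - cy, x - cx))] for x in range(w)]
            for y in range(h)]

def remove_unevenness(img):
    """Subtract the filter image, removing concentric (radial) unevenness."""
    filt = ring_filter_image(img)
    return [[img[y][x] - filt[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

A perfectly radially symmetric input is cancelled entirely by this subtraction, which is exactly the behavior wanted: only deviations from the concentric pattern survive.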
  • the image from which the component images are to be generated is set as the target image I1, and the target image is subjected to the same processing as in generating the above-mentioned average emphasized image G3 to generate the target emphasized image I2. Furthermore, the target emphasized image I2 is subjected to the same processing as in generating the above-mentioned average unevenness removed image G4 to generate the target unevenness removed image I3. As a result, as shown in FIG. 9(a), the target emphasized image I2 and the target unevenness removed image I3 are generated. Note that the target image I1 corresponds to one of the normal images G1 used to generate the above-mentioned average image G2.
  • a difference emphasized image I4 is obtained by taking the difference between the target emphasized image I2 and the average emphasized image G3.
  • a difference unevenness removed image I5 is obtained by taking the difference between the target unevenness removed image I3 and the average unevenness removed image G4.
  • the difference emphasized image I4 and the difference unevenness removed image I5 are images from which the components of the average emphasized image G3 and the average unevenness removed image G4 have been removed, respectively.
  • the process of obtaining the difference emphasized image I4 and the difference unevenness removed image I5 corresponds to the process of removing the fluctuation components contained in the average emphasized image G3 and the average unevenness removed image G4 generated from the image obtained by averaging three normal images G1.
  • when the normal image G1 is the target image I1, both the difference emphasized image I4 and the difference unevenness removed image I5 correspond to normal difference images.
  • averaging, difference, and noise removal processes are performed as shown in Figs. 10(a) to 10(d).
  • the averaging process is a process in which the pixel value of one pixel is replaced by the average of the pixel values of the pixels surrounding that pixel; for example, using a mask of 101 x 101 pixels, the pixel value of one pixel is set to the average of the pixel values of the surrounding 101 x 101 pixels.
  • the difference process is a process in which a large area of color change that is not due to defects is removed by taking the difference between the image data after the averaging process and the image before the process.
  • the noise removal process is a process in which noise that is thought to be unrelated to defects is removed, for example, a process in which a mask of 15 x 15 pixels is used to remove components that are due to noise.
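The averaging and difference processes above can be sketched as follows for one channel, using a small mask instead of the 101 x 101 mask in the text. The edge handling (shrinking the mask at image borders) and the function names are assumptions for illustration.

```python
def box_average(img, k):
    """Mean filter with a k x k mask (edge pixels use the available neighbors)."""
    h, w = len(img), len(img[0])
    half = k // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - half), min(h, y + half + 1))
                    for i in range(max(0, x - half), min(w, x + half + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def remove_large_scale(img, k):
    """Difference process: subtract the local average so that broad color
    changes not due to defects are removed."""
    avg = box_average(img, k)
    return [[img[y][x] - avg[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
```

A uniform image comes out as all zeros, while a small, sharp anomaly survives the subtraction because the local average barely reflects it.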
  • the target emphasized image I2, the target unevenness removed image I3, and the four processed images I11, I12, I13, and I14 each contain information on the pixel values of the R, G, and B components. By extracting each component from these to generate image data, for example, an R-component image, a G-component image, and a B-component image can be obtained from the target emphasized image I2, which is an RGB color image. Similarly, images relating to the R, G, and B components can be generated for the other images I3 and I11 to I14.
  • in addition to defining a color image using the R, G, and B color components, it is also possible to define it in another color space. Examples include the HSV color space, the XYZ color space, and the Lab color space.
  • by expressing the color of a pixel using components of different color spaces, it is possible to capture the color characteristics of that pixel from a different perspective.
  • a color defined in a specific color space can be expressed using components of another color space using a predefined conversion formula.
  • one color image is expressed in the three components of the HSV color space (H component, S component, V component), the three components of the XYZ color space (X component, Y component, Z component), and the three components of the Lab color space (L component, a component, b component), and images related to each of these components are then created for the target emphasized image I2, the target unevenness removed image I3, and the four processed images I11, I12, I13, and I14.
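The per-pixel decomposition can be sketched as follows. HSV comes from the standard-library `colorsys` module; the XYZ conversion uses the well-known linear sRGB matrix (gamma handling is omitted for brevity); Lab, which would follow from XYZ, is left out. With all four spaces, each pixel yields 12 component values, so the six images I2, I3, and I11 to I14 give 6 × 12 = 72 component images.

```python
import colorsys

def pixel_components(r, g, b):
    """Return the RGB, HSV, and (linear) XYZ components of one pixel.
    Inputs are assumed normalized to the range 0..1."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # Standard linear sRGB -> XYZ matrix (D65 white point).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return {"R": r, "G": g, "B": b,
            "H": h, "S": s, "V": v,
            "X": x, "Y": y, "Z": z}

comps = pixel_components(1.0, 0.0, 0.0)  # pure red
```

Applying `pixel_components` to every pixel of one image and collecting each key separately produces one single-component image per color component, which is how the 72-image stack per input image would be assembled.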
  • FIG. 7 shows that a component image G10 is obtained as a result of executing image generation process P1 on three normal images G1. Furthermore, FIG. 7 shows that a model map G20 is created from the component image G10. It also shows that the Mahalanobis distance is calculated to create the model map G20.
  • the procedure for creating the model map G20, including the calculation of the Mahalanobis distance, corresponds to step S03 in FIG. 6, i.e., the process for creating the model map.
  • the 72 component images G10 relating to one normal image G1 are considered to contain different features.
  • the 72 component images G10 can be considered to contain 72 types of features for each pixel.
  • for the mean and the covariance matrix, values calculated from the three normal images G1 are used.
  • the Mahalanobis distance D at pixel M can be calculated, for example, using the following formula (1):

      D = √( (x − μ)ᵀ Σ⁻¹ (x − μ) ) … (1)

    where x is the feature vector of pixel M (the 72 component values), μ is the average, and Σ is the covariance matrix.
  • using the Mahalanobis distance, it is possible to calculate how far the data for a pixel is from the data group, i.e., the degree of abnormality of the data. Since the 72 feature values obtained from one normal image G1 differ for each pixel, the Mahalanobis distance can be used to calculate the degree to which the data for a given pixel is abnormal within the 2048 x 2048 pixel group.
  • the 72 component images G10 obtained from one normal image G1 can be used to calculate the Mahalanobis distance for each pixel in the normal image G1.
  • three Mahalanobis distances are calculated from the three normal images G1 for pixels in the same position.
  • the maximum value of these three Mahalanobis distances is adopted and arranged to correspond to the 2048 x 2048 pixel arrangement.
  • the model map G20 can be created.
  • the model map G20 can be said to be a mapping of the degree of abnormality of the data for each pixel contained in the three normal images G1.
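Formula (1) and the model-map construction can be sketched as follows. A 2-dimensional feature vector stands in for the 72-dimensional one, the statistics are toy values rather than ones computed from real normal images, and the function name is hypothetical; only the max-over-normal-images step mirrors the text directly.

```python
import math

def mahalanobis_2d(x, mean, cov):
    """D = sqrt((x - mu)^T S^-1 (x - mu)) for a 2-D feature vector,
    with the 2x2 covariance matrix inverted explicitly."""
    d0, d1 = x[0] - mean[0], x[1] - mean[1]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    q = (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
         + d1 * (inv[1][0] * d0 + inv[1][1] * d1))
    return math.sqrt(q)

# Toy statistics standing in for the mean/covariance from the normal images.
mean, cov = [0.0, 0.0], [[1.0, 0.0], [0.0, 4.0]]

# Distances for the same pixel position across three normal images;
# the model map keeps the maximum per pixel.
distances = [mahalanobis_2d(v, mean, cov) for v in ([1, 0], [0, 2], [0.5, 0.5])]
model_value = max(distances)
```

Repeating this for all 2048 x 2048 pixel positions and arranging the maxima accordingly yields the model map G20.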
  • In step S04, the model map G20 is used to prepare a preprocessed image related to the normal image G1.
  • In steps S05 to S07, the model map G20 is used to prepare a preprocessed image related to the image to be inspected.
  • In step S05, the image acquisition unit 132 of the pre-processing unit 130 acquires the inspection target image.
  • In step S06, the component image generation unit 134 of the pre-processing unit 130 generates a component image of the inspection target image.
  • In step S07, the pre-processed image generation unit 136 of the pre-processing unit 130 creates a pre-processed image for the inspection image using the component image created in step S06 and the model map G20.
  • In step S04, the pre-processed image generation unit 136 of the pre-processing unit 130 creates a pre-processed image for the normal image G1 using the component image G10 created in step S02 and the model map G20.
  • FIG. 11 shows the procedure for creating a preprocessed image T30 from an inspection image T1, which is an image of the inspection target.
  • an image generation process P1 is executed on the inspection image T1.
  • the image generation process P1 shown in FIG. 11 is the same as the image generation process P1 shown in FIG. 7.
  • the image generation process P1 corresponds to step S06 in FIG. 6, i.e., the process of creating a component image.
  • the image generation process P1 includes an emphasis process, a difference process, a component decomposition process, and a filter process.
  • the procedure for creating these component images is the same as that described in the above embodiment.
  • the average emphasized image G3 and the average unevenness removed image G4 are created from the normal image G1.
  • the target emphasized image I2 and the target unevenness removed image I3 are generated by the procedure shown in FIG. 9(a) using the inspection image T1 as the target image I1.
  • the difference emphasized image I4 is generated by taking the difference between the target emphasized image I2 and the average emphasized image G3. Furthermore, as shown in FIG. 9, the difference unevenness removed image I5 is generated by taking the difference between the target unevenness removed image I3 and the average unevenness removed image G4. In this way, the difference emphasized image I4 and the difference unevenness removed image I5 created when the inspection image T1 is used as the target image I1 both correspond to the inspection difference image.
  • as shown in FIGs. 10(a) to 10(d), four processed images I11, I12, I13, and I14 are created by performing averaging, difference, and noise removal processes on the target emphasized image I2, target unevenness removed image I3, difference emphasized image I4, and difference unevenness removed image I5 obtained in the process so far.
  • a map image T20 is created from the component image T10.
  • the Mahalanobis distance is calculated.
  • the above-mentioned formula (1) is used to calculate the Mahalanobis distance when creating the map image T20.
  • the mean and covariance matrix of each pixel in the normal image G1 are used.
  • the procedure differs from that for creating the model map G20 in that the mean and covariance matrix derived from the normal images G1 are used, rather than a mean and covariance matrix derived from the inspection image T1.
  • using the Mahalanobis distance, it is possible to calculate how far the data for a pixel is from the data group, i.e., the degree of abnormality of the data. Since the 72 feature values obtained from one inspection image T1 differ for each pixel, the Mahalanobis distance can be used to calculate the degree to which the data for a given pixel is abnormal within the 2048 x 2048 pixel group.
  • the 72 component images T10 obtained from one test image T1 can be used to calculate the Mahalanobis distance of each pixel in the test image T1.
  • the Mahalanobis distance of each pixel is arranged so that it corresponds to the 2048 x 2048 pixel arrangement.
  • the map image T20 can be created.
  • the map image T20 is a mapping of the degree of abnormality of the data of each pixel contained in the test image T1.
  • a preprocessed image T30 of the inspection image T1 is created by calculating the difference between the map image T20 and the model map G20.
  • by performing difference processing using the model map G20, information relating to the degree of abnormality that appears in the model map G20 created from the normal images G1 is removed.
  • the image obtained by taking the difference between the map image T20 and the model map G20 becomes the preprocessed image T30. Note that the creation of the map image T20 and the preprocessed image T30 after preparing the component image T10 corresponds to the generation of the preprocessed image in step S07.
  • a pre-processed image T30 is created from a single inspection image T1.
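The difference processing that turns the map image into the preprocessed image can be sketched as a per-pixel subtraction. The toy values and the function name are hypothetical; the point is that only abnormality exceeding what the normal images already exhibit remains.

```python
def subtract_model_map(map_image, model_map):
    """Per-pixel difference: map image (per-pixel abnormality of the inspection
    image) minus model map (abnormality already seen in normal images)."""
    return [[m - g for m, g in zip(m_row, g_row)]
            for m_row, g_row in zip(map_image, model_map)]

map_image = [[1.2, 0.4], [0.5, 3.0]]   # hypothetical Mahalanobis distances (T20)
model_map = [[1.0, 0.5], [0.5, 1.0]]   # hypothetical model map values (G20)
preprocessed = subtract_model_map(map_image, model_map)
```

In this toy example the bottom-right pixel keeps a large residual value, corresponding to a location far more abnormal in the inspection image than in any normal image.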
  • In step S04, in which a preprocessed image corresponding to the normal image G1 is created, the preprocessed image generating unit 136 of the preprocessing unit 130 creates a preprocessed image for each of the three normal images G1 using the component image G10 created in step S02 and the model map G20.
  • the component image G10 for the three normal images G1 is prepared. Therefore, in the procedure shown in FIG. 11, the steps up to the preparation of the component image G10 can be omitted. Therefore, for the normal image G1, the creation of the map image (corresponding to the map image T20 in FIG. 11) and the creation of the preprocessed image (corresponding to the preprocessed image T30 in FIG. 11) after the component image G10 is prepared corresponds to the generation of the preprocessed image in step S07.
  • a preprocessed image G30 can be created for each of the three normal images G1.
  • the control device 100 executes steps S11 to S14.
  • In steps S11 and S12, the control device 100 extracts features from the preprocessed image G30 of the normal image G1 and calculates the degree of abnormality (step S11), thereby creating an abnormality map to be used for defect detection (step S12).
  • In steps S13 and S14, the control device 100 extracts features from the preprocessed image T30 of the inspection image T1 and calculates the degree of abnormality (step S13), thereby creating an abnormality map to be used for defect detection (step S14). Steps S11 to S14 are performed by the defect detection unit 112 of the control device 100.
  • the extraction of features performed in steps S11 and S13 is a process of extracting features related to defects from the preprocessed images G30 and T30, and can be performed, for example, using Vision Transformer (ViT), which is one of the image classification algorithms.
  • the calculation of anomaly level performed in steps S12 and S14 is a process of calculating the degree of anomaly related to defects for each pixel from the preprocessed images G30 and T30 based on the features, and can be performed, for example, using Graph Mixture Density Networks (GMDN), which is one of the methods for obtaining a probability distribution using a neural network.
  • an algorithm generated by machine learning using training data can be used to extract features and calculate the degree of abnormality.
  • in the machine learning, the parameters are adjusted using preprocessed images created from multiple normal images prepared in advance, so that when features are extracted and the degree of abnormality is calculated, the resulting abnormality values are low.
  • by applying the algorithm (model) whose parameters have been adjusted through such learning to the preprocessed image G30 created from the normal image G1 and the preprocessed image T30 created from the inspection image T1, the degree of abnormality for each pixel can be calculated.
  • the defect detection unit 112 of the control device 100 maps the information on the degree of abnormality for each pixel calculated by the above procedure in correspondence with the arrangement of the pixels.
  • As a result, one abnormality map G40 (normal image abnormality map) is obtained for the normal images G1, and one abnormality map T40 (inspection image abnormality map) is obtained for the inspection image T1.
  • In step S15, the defect detection unit 112 of the control device 100 uses the above-mentioned abnormality maps G40 and T40 to determine whether or not there is a defect in the inspection image T1.
  • the result output unit 114 of the control device 100 outputs the determination result of whether or not there is a defect.
  • FIG. 12 shows an example of a procedure for determining the presence or absence of a defect.
  • a difference image T41 is obtained by calculating the difference between the abnormality map G40 created from three normal images G1 as described above and the abnormality map T40 created from one inspection image T1. Furthermore, a binarized image T42 in which the value of each pixel is binarized using a predetermined threshold value for this difference image T41, and a binarized image T43 in which the value of each pixel is binarized using a predetermined threshold value for the pre-processed image T30 of the inspection image T1 created by the above procedure are prepared.
  • the two binarized images T42 and T43 are superimposed to create a defect detection image T45 in which pixels converted to white in both images are identified.
  • in the defect detection image T45, a portion displayed in white is determined to have a defect. Note that the above procedure is merely an example, and the defect detection method is not particularly limited.
  • the result output unit 114 of the control device 100 may output the defect detection image T45 as the judgment result, or may output only the judgment result of the presence or absence of a defect based on whether or not the defect detection image T45 contains an area that is judged to have a defect.
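The decision procedure of FIG. 12 can be sketched as two binarizations followed by a pixel-wise AND. The threshold values and function names here are hypothetical illustration choices, not values from the patent.

```python
def binarize(img, threshold):
    """Binarized image: 1 (white) where the value exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row] for row in img]

def defect_detection_image(diff_map, preproc, t_diff, t_pre):
    """Pixels flagged as defects are those white in BOTH binarized images:
    the abnormality-map difference (T41 -> T42) and the preprocessed image
    of the inspection image (T30 -> T43)."""
    b1 = binarize(diff_map, t_diff)
    b2 = binarize(preproc, t_pre)
    return [[a & b for a, b in zip(r1, r2)] for r1, r2 in zip(b1, b2)]

diff_map = [[0.1, 0.9], [0.8, 0.2]]   # hypothetical difference image T41
preproc = [[0.2, 0.95], [0.1, 0.3]]   # hypothetical preprocessed image T30
print(defect_detection_image(diff_map, preproc, 0.5, 0.5))  # → [[0, 1], [0, 0]]
```

Requiring both binarized images to agree suppresses pixels that exceed only one of the two thresholds, which reduces false detections from either source alone.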
  • the learning method can be changed as appropriate.
  • in the above embodiment, the preprocessed image created from the normal image is used for learning; however, instead of using only the preprocessed image, the component images used to create the preprocessed image may also be used as training data for machine learning.
  • the preprocessed image generated from the normal image is created using 72 component images created from one normal image.
  • component images are images that contain information related to the characteristic parts of the normal image, so they can be used as training data for the algorithm for extracting the feature amount and calculating the degree of abnormality. Therefore, the algorithm for extracting the feature amount and calculating the degree of abnormality may be performed with at least a part of the component images used to generate the preprocessed image included in the training data.
  • since not only the one preprocessed image obtained from each normal image but also the component images are used as training data, the number of training data items can be increased while reducing the number of normal images prepared in advance. This makes it easier to prepare the training data necessary for sufficient training.
  • a preprocessed image G30 for each of the plurality of normal images G1 is prepared based on variance information related to the variance of component values of each pixel included in a plurality of component images in a color space generated from a plurality of normal images.
  • a preprocessed image T30 related to the inspection image is prepared based on variance information related to the variance of component values of each pixel included in a plurality of component images in a color space generated from an inspection image T1 obtained by capturing an inspection target substrate. Then, based on these images, an inspection of defects of the substrate to be inspected is performed.
  • preprocessed images G30 and T30 using variance information related to the variance of component values of a plurality of components in a color space in the normal image G1 and the inspection image T1 are generated, and the substrate is inspected for defects using these.
  • preprocessed images G30, T30 used for inspection are created using information related to the variance of component values obtained from the component images.
  • preprocessed images G30, T30 are obtained that reflect information related to the variance of component values contained in the component images.
  • a model map G20 is created based on the variance information of the component values of each pixel included in the multiple component images.
  • a preprocessed image G30 for one normal image is created by calculating the difference between the variance information of the component values of each pixel included in the multiple component images obtained from one normal image and the model map G20.
  • a preprocessed image T30 for the inspection image is created by calculating the difference between the variance information of the component values of each pixel included in the multiple component images obtained from the inspection image and the model map G20.
  • a model map can be said to be a compilation of the variances of the component values of each pixel in multiple normal images G1.
  • the variance information may also be obtained by calculating the Mahalanobis distance related to the component values of each pixel contained in the multiple component images. With this configuration, the variance information can be obtained more accurately by calculating the Mahalanobis distance, making it possible to create a preprocessed image that is more useful when detecting abnormalities on the substrate surface.
  • At least one of the RGB color space, HSV color space, XYZ color space, and Lab color space may be used as the color space.
  • A preprocessed image that is more useful for detecting abnormalities on the substrate surface can be created.
  • Component images in multiple color spaces among the RGB color space, HSV color space, XYZ color space, and Lab color space may be used. In this case, component images are obtained by defining one image in more than one color space.
  • By creating a preprocessed image from such component images, the information of one image is decomposed into multiple different components, so the features contained in the image can be captured from multiple perspectives. A preprocessed image useful for improving the accuracy of defect detection can therefore be created.
  • The normal difference image and the inspection difference image may include images that have been subjected to enhancement processing to enhance the difference from the average image. With this configuration, it is possible to obtain images that further enhance the characteristic components of each image, and these images can be used to create preprocessed images.
  • The normal difference image and the inspection difference image may include an image after unevenness removal processing to remove unevenness in the image.
  • An image can be obtained in which unevenness occurring in the image has been appropriately removed, and this image can be used to create a preprocessed image.
  • The normal difference image and the inspection difference image may include an image obtained by averaging the component values of pixels in a specified area and then determining the difference from the averaged value.
  • The normal difference image and the inspection difference image may include an image after noise removal processing to remove noise from the image.
  • An image can be obtained from which noise that may be contained in the image has been removed, and this image can be used to create a preprocessed image.
  • Inspecting the substrate for defects may include creating a normal image abnormality map G40 showing the degree of abnormality for each pixel in the preprocessed image G30 related to the normal image G1, and an inspection image abnormality map T40 showing the degree of abnormality for each pixel in the preprocessed image T30 related to the inspection image T1, and identifying an area where a defect is estimated to exist from the difference between the normal image abnormality map G40 and the inspection image abnormality map T40.
  • An area where a defect is estimated to exist is identified from the difference between the normal image abnormality map G40 created from the preprocessed image G30 related to the normal image G1 and the inspection image abnormality map T40 created from the preprocessed image T30 related to the inspection image T1.
  • The abnormality maps G40 and T40 are maps showing the degree of abnormality for each pixel, and the area where a defect exists is estimated from the difference in the degree of abnormality of each pixel between the normal image G1 and the inspection image T1, so a more accurate estimation is performed based on the information of each pixel.
  • An algorithm may be used that generates an abnormality map from an input preprocessed image.
  • The algorithm may be created by machine learning using the multiple normal images G1, the multiple normal difference images, and at least a portion of the multiple component images obtained therefrom as training data.
  • The algorithm that generates the abnormality map is created by machine learning using the multiple normal images, the multiple normal difference images, and at least a portion of the multiple component images obtained therefrom as training data.
  • Processed images (a target emphasis image I2, a target unevenness removal image I3, and four processed images I11, I12, I13, and I14) are created from one color image by preprocessing each of the 12 color components (R, G, B; H, S, V; X, Y, Z; and L, a, b) of the RGB color space, HSV color space, XYZ color space, and Lab color space.
  • This configuration can be modified as appropriate; for example, only a portion of the above-mentioned color spaces may be used.
  • Component images may be created using a color space different from the above-mentioned color spaces.
  • The procedure for creating the abnormality map and detecting defects can be changed as appropriate.
  • The procedure described in the above embodiment is one example and can be changed to various methods usable in defect detection using images.
  • Further image processing can be performed on the preprocessed image depending on the defect detection method used.
  • The control device 100 may execute one step and the next step in parallel, or may execute the steps in an order different from the example described above.
  • The control device 100 may omit any step, or may execute a process in any step that differs from the example described above.
  • The computer constituting the inspection control unit 110 may be provided outside the coating and developing apparatus 2.
  • The control device 100 and the inspection control unit 110 may be communicatively connected by wired or wireless communication.
  • The control device 100 may acquire an image from the inspection unit U3 and then transmit the image to the inspection control unit 110.
  • The inspection control unit 110 may transmit information indicating the determination result of whether or not there is an abnormality to the control device 100.
  • 1...substrate processing system, 2...coating and developing apparatus, 100...control device, 102...processing control unit, 110...inspection control unit, 112...defect detection unit, 114...result output unit, 120...model map creation unit, 122...image acquisition unit, 124...component image generation unit, 126...model map generation unit, 130...preprocessing unit, 132...image acquisition unit, 134...component image generation unit, 136...preprocessed image generation unit.

Abstract

A substrate inspection method according to the present invention inspects a substrate using an inspection image obtained by capturing an image of the substrate to be inspected. The method involves: generating a plurality of normal difference images that are the differences between an average image generated from a plurality of normal images and each of the plurality of normal images; generating a plurality of component images in a color space from the plurality of normal images and the plurality of normal difference images, and generating respective preprocessed images for the plurality of normal images on the basis of variance information about the variance of the respective component values of the pixels in the plurality of component images; generating an inspection difference image that is the difference between the inspection image and the average image, generating a plurality of component images in a color space from the inspection image and the inspection difference image, and generating a preprocessed image for the inspection image on the basis of variance information about the variance of the respective component values of the pixels in the plurality of component images; and inspecting the substrate for defects on the basis of the preprocessed images for the normal images and the preprocessed image for the inspection image.

Description

Substrate inspection method, substrate inspection device, and substrate inspection program

This disclosure relates to a substrate inspection method, a substrate inspection device, and a substrate inspection program.

Patent Document 1 discloses a device that classifies defects occurring on a substrate based on an image of the substrate that is the subject of inspection.

JP 2019-124591 A

This disclosure provides technology that is useful for accurately detecting abnormalities on a substrate surface.

A substrate inspection method according to one aspect of the present disclosure is a substrate inspection method for inspecting a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, the method comprising: generating a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images; generating a preprocessed image for each of the plurality of normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; generating an inspection difference image which is the difference between the inspection image and the average image; generating a plurality of component images in color space from the inspection image and the inspection difference image; generating a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and inspecting the substrate to be inspected for defects based on the preprocessed image for the normal image and the preprocessed image for the inspection image.

According to this disclosure, technology useful for accurately detecting abnormalities on a substrate surface is provided.
FIG. 1 is a perspective view illustrating a substrate processing system according to a first embodiment.
FIG. 2 is a side view illustrating an example of a coating and developing apparatus.
FIG. 3 is a schematic diagram showing an example of an inspection unit.
FIG. 4 is a block diagram illustrating an example of a functional configuration of the control device.
FIG. 5 is a block diagram illustrating an example of a hardware configuration of the control device.
FIG. 6 is a flowchart showing an example of a substrate inspection method.
FIG. 7 is a schematic diagram showing an example of a method for creating a model map.
FIG. 8 is a schematic diagram showing an example of an image processing method for a normal image.
FIGS. 9(a), 9(b), and 9(c) are schematic diagrams showing an example of an image processing method for a target image.
FIGS. 10(a), 10(b), 10(c), and 10(d) are schematic diagrams showing an example of an image processing method for a target image.
FIG. 11 is a schematic diagram showing an example of a method for creating a preprocessed image from an inspection image.
FIG. 12 is a schematic diagram showing an example of a defect detection method.
Various exemplary forms are described below.

A substrate inspection method according to an exemplary embodiment of the present disclosure is a substrate inspection method for inspecting a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, and includes: generating a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images; generating a preprocessed image for each of the plurality of normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; generating an inspection difference image which is the difference between the inspection image and the average image; generating a plurality of component images in color space from the inspection image and the inspection difference image; generating a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and inspecting the substrate to be inspected for defects based on the preprocessed image for the normal image and the preprocessed image for the inspection image.

According to the above-mentioned substrate inspection method, a preprocessed image for each of the multiple normal images is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the multiple normal images. Also, a preprocessed image for the inspection image is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the inspection image of the substrate to be inspected. Then, based on these images, the substrate to be inspected is inspected for defects. In this way, preprocessed images are generated using variance information related to the variance of the component values of multiple color-space components in the normal images and the inspection image, and the substrate is inspected for defects using these, so the inspection can be performed using images that take into account the variance of the component values in the color space. This is therefore useful for accurately detecting abnormalities on the substrate surface.
Here, in generating the preprocessed image for the normal image, one model map may be generated based on the variance information of the component values of each pixel included in the component images for the plurality of normal images, and the preprocessed image for one normal image may be created by calculating the difference between the variance information of the component values of each pixel included in the component images obtained from that normal image and the model map; and in generating the preprocessed image for the inspection image, the preprocessed image for the inspection image may be created by calculating the difference between the variance information of the component values of each pixel included in the component images obtained from the inspection image and the model map.

In the above configuration, the single model map created based on the variance information of the component values of each pixel contained in the component images related to the plurality of normal images can be said to be a compilation of the variance of the component values of each pixel across the normal images. By creating the preprocessed images for the normal images and the inspection image using such a model map, preprocessed images can be created that take into account the variance of component values that may occur in normal images, so abnormalities on the substrate surface can be detected more accurately.

The variance information may be obtained by calculating the Mahalanobis distance for the component values of each pixel included in the multiple component images.

With this configuration, the variance information can be obtained more accurately by calculating the Mahalanobis distance, making it possible to create a preprocessed image that is more useful when detecting abnormalities on the substrate surface.
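As a rough sketch of how such a per-pixel model map and Mahalanobis distance might be computed (an illustration with NumPy, not the patent's actual implementation; the function names and the regularization constant are assumptions):

```python
import numpy as np

def build_model_map(normal_stacks):
    """Build a per-pixel model from N normal images.

    normal_stacks: array of shape (N, H, W, C) holding the C color-space
    component values of each pixel over N normal images.
    Returns the per-pixel mean (H, W, C) and inverse covariance (H, W, C, C).
    """
    mean = normal_stacks.mean(axis=0)                       # (H, W, C)
    centered = normal_stacks - mean                         # (N, H, W, C)
    # Per-pixel covariance of the component values over the N normal images.
    cov = np.einsum('nhwi,nhwj->hwij', centered, centered) / len(normal_stacks)
    # Regularize so the covariance stays invertible even for constant pixels.
    cov += 1e-6 * np.eye(cov.shape[-1])
    return mean, np.linalg.inv(cov)

def mahalanobis_map(image_stack, mean, inv_cov):
    """Per-pixel Mahalanobis distance of one image from the model map."""
    d = image_stack - mean                                  # (H, W, C)
    return np.sqrt(np.einsum('hwi,hwij,hwj->hw', d, inv_cov, d))
```

A preprocessed image could then be derived from the difference between such a distance map for one image and the model map, per the text above.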
The color space may be at least one of the RGB color space, the HSV color space, the XYZ color space, and the Lab color space.

As described above, by creating component images using at least one of the RGB color space, the HSV color space, the XYZ color space, and the Lab color space, a preprocessed image can be created that is more useful in detecting abnormalities on the substrate surface.
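As a minimal illustration of generating component images (assuming float RGB input in [0, 1]; only the RGB and HSV color spaces of those named above are shown, and the XYZ and Lab conversions would be added in the same manner):

```python
import colorsys
import numpy as np

def component_images(rgb):
    """Decompose an (H, W, 3) float RGB image with values in [0, 1] into
    six single-component images: R, G, B plus H, S, V.

    Uses the standard-library colorsys conversion per pixel; H is returned
    in colorsys's [0, 1) range.
    """
    h, w, _ = rgb.shape
    hsv = np.empty_like(rgb)
    for i in range(h):
        for j in range(w):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    return {
        'R': rgb[..., 0], 'G': rgb[..., 1], 'B': rgb[..., 2],
        'H': hsv[..., 0], 'S': hsv[..., 1], 'V': hsv[..., 2],
    }
```

Each returned array is one component image; stacking them per pixel gives the multi-component vectors on which the variance information is computed.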
The normal difference image and the inspection difference image may include images that have been subjected to enhancement processing to enhance the difference from the average image.

The normal difference image and the inspection difference image may include images after unevenness removal processing has been performed to remove unevenness in the image.

The normal difference image and the inspection difference image may include an image obtained by averaging the component values of pixels in a specified area and then determining the difference from the averaged value.

The normal difference image and the inspection difference image may include images after noise removal processing has been performed to remove noise from the images.
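The optional processing steps above (enhancing the difference from the average image, averaging over a region and differencing, and noise removal) could each be sketched as follows. This is an illustrative NumPy version; the gain, kernel sizes, and filter choices are assumptions rather than values from the disclosure:

```python
import numpy as np

def difference_image(image, average, gain=2.0):
    """Difference from the average image, with a simple gain as the
    enhancement step (the exact enhancement is not specified here)."""
    return gain * (image - average)

def local_mean_difference(image, k=3):
    """Average the component values over a k x k neighborhood, then take
    the difference from that local mean (one unevenness-removal option)."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    local = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            local[i, j] = padded[i:i + k, j:j + k].mean()
    return image - local

def median_denoise(image, k=3):
    """k x k median filter as the noise-removal step."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

The explicit loops keep the sketch dependency-free; a production version would typically use vectorized or library filters instead.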
Inspecting the substrate to be inspected for defects may include creating a normal image abnormality degree map showing the degree of abnormality for each pixel in the preprocessed image related to the normal image, and an inspection image abnormality degree map showing the degree of abnormality for each pixel in the preprocessed image related to the inspection image, and identifying an area in which a defect is estimated to exist from the difference between the normal image abnormality degree map and the inspection image abnormality degree map.

With the above configuration, the area where a defect is estimated to exist is identified from the difference between the normal image abnormality map created from the preprocessed image related to the normal image and the inspection image abnormality map created from the preprocessed image related to the inspection image. The abnormality map is a map that indicates the degree of abnormality for each pixel, and the area where a defect exists is estimated from the difference in the degree of abnormality for each pixel between the normal image and the inspection image, so a more accurate estimation is performed based on the information of each pixel.
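A minimal sketch of identifying the estimated defect area from the difference between the two abnormality maps (the threshold value and function names are illustrative assumptions, not from the disclosure):

```python
import numpy as np

def defect_mask(normal_map, inspection_map, threshold=3.0):
    """Flag pixels whose abnormality degree rose by more than `threshold`
    relative to the normal-image abnormality map."""
    return (inspection_map - normal_map) > threshold

def defect_bounding_box(mask):
    """Bounding box (top, bottom, left, right) of the flagged region,
    or None when no pixel is flagged."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return None
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(bottom), int(left), int(right)
```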
In creating the normal image abnormality map and the inspection image abnormality map, an algorithm is used that generates an abnormality map from an input preprocessed image, and the algorithm may be created by machine learning using as training data at least a portion of the normal images, the normal difference images, and the component images obtained from these.

With the above configuration, the algorithm for generating an abnormality map is created by machine learning using the multiple normal images, the multiple normal difference images, and at least a portion of the multiple component images obtained from these as training data. With this configuration, the number of normal images prepared for creating the algorithm can be reduced, making it easier to create a highly accurate algorithm.
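The disclosure does not fix a particular learning method. As one hedged stand-in for an algorithm trained on normal images only, a linear subspace (PCA) can be fitted to flattened preprocessed images and the per-pixel reconstruction error used as the abnormality-degree map; all names here are illustrative assumptions:

```python
import numpy as np

def fit_pca(train_images, n_components=4):
    """Learn a linear subspace from flattened normal preprocessed images.
    A deliberately simple stand-in for the learned algorithm."""
    X = np.stack([im.ravel() for im in train_images])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_map(image, mean, components):
    """Per-pixel reconstruction error of `image` against the learned
    subspace, reshaped back to the image grid as an abnormality-degree map."""
    x = image.ravel() - mean
    recon = components.T @ (components @ x)
    return np.abs(x - recon).reshape(image.shape)
```

Because the model is fitted only to normal data, pixels that deviate from patterns seen in normal images reconstruct poorly and receive high abnormality degrees, matching the role described for the learned algorithm.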
A substrate inspection device according to an exemplary embodiment of the present disclosure is a substrate inspection device that inspects a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected, and includes: a normal preprocessed image creation unit that generates a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images, generates a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images, and generates preprocessed images for the normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; an inspection preprocessed image creation unit that generates an inspection difference image which is the difference between the inspection image and the average image, generates a plurality of component images in color space from the inspection image and the inspection difference image, and generates a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and a defect detection unit that inspects the substrate to be inspected for defects based on the preprocessed image for the normal image and the preprocessed image for the inspection image.
According to the above-mentioned substrate inspection device, a preprocessed image for each of the multiple normal images is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the multiple normal images. Also, a preprocessed image for the inspection image is prepared based on variance information related to the variance of the component values of each pixel contained in the multiple component images in the color space generated from the inspection image of the substrate to be inspected. Then, based on these images, the substrate to be inspected is inspected for defects. In this way, preprocessed images are generated using variance information related to the variance of the component values of multiple color-space components in the normal images and the inspection image, and the substrate is inspected for defects using these, so the inspection can be performed using images that take into account the variance of the component values in the color space. This is therefore useful for accurately detecting abnormalities on the substrate surface.

The normal preprocessed image creation unit may generate one model map created based on the variance information of the component values of each pixel included in the component images related to the normal images, and create the preprocessed image for one normal image by determining the difference between the model map and the variance information of the component values of each pixel included in the component images obtained from that normal image; and the inspection preprocessed image creation unit may create the preprocessed image for the inspection image by determining the difference between the model map and the variance information of the component values of each pixel included in the component images obtained from the inspection image.

The variance information may be obtained by calculating the Mahalanobis distance for the component values of each pixel included in the multiple component images.

The defect detection unit may be configured to create a normal image abnormality degree map showing the degree of abnormality for each pixel in the preprocessed image related to the normal image, and an inspection image abnormality degree map showing the degree of abnormality for each pixel in the preprocessed image related to the inspection image, and to identify an area where a defect is estimated to exist from the difference between the normal image abnormality degree map and the inspection image abnormality degree map.

When creating the normal image abnormality map and the inspection image abnormality map, the defect detection unit may use an algorithm that generates an abnormality map from an input preprocessed image, and the algorithm may be created by machine learning using as training data at least a portion of the normal images, the normal difference images, and the component images obtained from these.

A substrate inspection program according to an exemplary embodiment of the present disclosure causes a computer to perform a substrate inspection that inspects a substrate to be inspected using an inspection image obtained by imaging the substrate to be inspected. The program causes the computer to perform: generating a plurality of normal difference images which are the differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in color space from the plurality of normal images and the plurality of normal difference images; generating a preprocessed image for each of the plurality of normal images based on variance information related to the variance of the component values of each pixel included in the plurality of component images; generating an inspection difference image which is the difference between the inspection image and the average image; generating a plurality of component images in color space from the inspection image and the inspection difference image; generating a preprocessed image for the inspection image based on variance information related to the variance of the component values of each pixel included in the plurality of component images; and inspecting the substrate to be inspected for defects based on the preprocessed image for the normal image and the preprocessed image for the inspection image.

The above-mentioned substrate inspection program achieves the same effects as the above-mentioned substrate inspection method.
[Exemplary Embodiments]
Various exemplary embodiments will be described in detail below with reference to the drawings. Note that the same or corresponding parts in each drawing are denoted by the same reference numerals. Some drawings show an orthogonal coordinate system defined by an X-axis, a Y-axis, and a Z-axis. In the following description, the Z-axis corresponds to the up-down direction, and the X-axis and the Y-axis correspond to the horizontal direction.
[Substrate Processing System]
The substrate processing system 1 shown in FIG. 1 is a system that forms a photosensitive film on a workpiece W, exposes the photosensitive film, and develops the photosensitive film. The workpiece W to be processed is, for example, a substrate, or a substrate on which a film or circuit or the like has been formed by performing a predetermined process. The substrate included in the workpiece W is, for example, a wafer containing silicon. The workpiece W (substrate) may be formed in a circular shape. The workpiece W to be processed may be a glass substrate, a mask substrate, or an FPD (Flat Panel Display) substrate, or may be an intermediate obtained by performing predetermined processing on such a substrate. The photosensitive film is, for example, a resist film.
The substrate processing system 1 comprises a coating and developing apparatus 2 and an exposure apparatus 3. The exposure apparatus 3 performs an exposure process on a resist film (photosensitive coating) formed on a workpiece W (substrate). Specifically, the exposure apparatus 3 irradiates an energy beam onto an exposure target portion of the resist film using a method such as immersion exposure. The coating and developing apparatus 2 performs a process of forming a resist film on the surface of the workpiece W before the exposure process by the exposure apparatus 3, and performs a development process of the resist film after the exposure process.
(Substrate Processing Apparatus)
As shown in FIGS. 1 and 2, the coating and developing apparatus 2 includes a carrier block 4, a processing block 5, an interface block 6, and a control device 100.
The carrier block 4 introduces the workpiece W into the coating and developing apparatus 2 and removes the workpiece W from the coating and developing apparatus 2. For example, the carrier block 4 can support multiple carriers C (containers) for the workpieces W, and has a built-in transport device A1 including a transfer arm. The carrier C stores, for example, multiple circular workpieces W. The transport device A1 removes the workpiece W from the carrier C and passes it to the processing block 5, and receives the workpiece W from the processing block 5 and returns it to the carrier C. The processing block 5 has multiple processing modules 11, 12, 13, and 14.
The processing module 11 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units. The processing module 11 forms an underlayer film on the surface of the workpiece W using the liquid processing unit U1 and the heat processing unit U2. The liquid processing unit U1 of the processing module 11 applies a processing liquid for forming the underlayer film onto the workpiece W. The heat processing unit U2 of the processing module 11 performs various heat treatments associated with the formation of the underlayer film. The inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the underlayer film is formed, after the underlayer film is formed, or after the processing liquid for forming the underlayer film has been applied but before the heat treatment is performed.
The processing module 12 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units. The processing module 12 forms a resist film on the underlayer film using the liquid processing unit U1 and the heat processing unit U2. The liquid processing unit U1 of the processing module 12 applies a processing liquid (resist) for forming a resist film onto the underlayer film. The heat processing unit U2 of the processing module 12 performs various heat treatments associated with the formation of the resist film. The inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the resist film is formed, after the resist film is formed, or after the resist has been applied but before the heat treatment is performed.
The processing module 13 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units. The processing module 13 forms an upper layer film on the resist film using the liquid processing unit U1 and the heat processing unit U2. The liquid processing unit U1 of the processing module 13 applies a processing liquid for forming the upper layer film onto the resist film. The heat processing unit U2 of the processing module 13 performs various heat treatments associated with the formation of the upper layer film. The inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the upper layer film is formed, after the upper layer film is formed, or after the processing liquid for forming the upper layer film has been applied but before the heat treatment is performed.
The processing module 14 incorporates a liquid processing unit U1, a heat processing unit U2, an inspection unit U3, and a transport device A3 that transports the workpiece W to these units. The processing module 14 uses the liquid processing unit U1 and the heat processing unit U2 to perform a development process on the resist film after exposure. The liquid processing unit U1 of the processing module 14 performs a development process on the resist film, for example, by supplying a developer onto the surface of the exposed workpiece W and then rinsing it off with a rinse liquid.
The heat processing unit U2 of the processing module 14 performs various heat treatments associated with the development process. Specific examples of heat treatments include heating before the development process (PEB: Post Exposure Bake) and heating after the development process (PB: Post Bake). The inspection unit U3 performs processing to inspect the condition of the surface of the workpiece W before the development process and PEB are performed, after the development process and PB are performed, or after the developer has been supplied but before PB is performed.
A shelf unit U10 is provided on the carrier block 4 side within the processing block 5. The shelf unit U10 is divided into multiple cells arranged in the vertical direction. A transport device A7 including a lifting arm is provided near the shelf unit U10. The transport device A7 raises and lowers the workpiece W between the cells of the shelf unit U10.
A shelf unit U11 is provided on the interface block 6 side of the processing block 5. The shelf unit U11 is divided into multiple cells arranged vertically.
The interface block 6 transfers the workpiece W to and from the exposure apparatus 3. For example, the interface block 6 has a built-in transport device A8 that includes a transfer arm, and is connected to the exposure apparatus 3. The transport device A8 transfers the workpiece W placed on the shelf unit U11 to the exposure apparatus 3, receives the workpiece W from the exposure apparatus 3, and returns it to the shelf unit U11.
The control device 100 controls each device included in the coating and developing apparatus 2 to perform a coating and developing process (substrate processing), for example, in the following procedure. First, the control device 100 controls the transport device A1 to transport the workpiece W in the carrier C to the shelf unit U10, and then controls the transport device A7 to place the workpiece W in the cell for the processing module 11.
Then, the control device 100 controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 11. The control device 100 controls the liquid processing unit U1 to form a film of the processing liquid for forming the underlayer film on the surface of the workpiece W. The control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming the underlayer film formed thereon to form the underlayer film. The control device 100 then controls the transport device A3 to return the workpiece W with the underlayer film formed thereon to the shelf unit U10, and controls the transport device A7 to place the workpiece W in a cell for the processing module 12. The control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 11.
Then, the control device 100 controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 12. The control device 100 controls the liquid processing unit U1 to form a film of the processing liquid for forming a resist film on the surface of the workpiece W. The control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming a resist film formed thereon to form a resist film. The control device 100 then controls the transport device A3 to return the workpiece W to the shelf unit U10, and controls the transport device A7 to place the workpiece W in a cell for the processing module 13. The control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 12.
The control device 100 then controls the transport device A3 to transport the workpiece W from the shelf unit U10 to the liquid processing unit U1 in the processing module 13. The control device 100 also controls the liquid processing unit U1 to form a film of the processing liquid for forming an upper layer film on the resist film of the workpiece W. The control device 100 controls the heat processing unit U2 to heat the workpiece W with the film of the processing liquid for forming an upper layer film formed thereon to form an upper layer film. The control device 100 then controls the transport device A3 to transport the workpiece W to the shelf unit U11. The control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during processing in the processing module 13.
The control device 100 then controls the transport device A8 to send the workpiece W from the shelf unit U11 to the exposure apparatus 3. The control device 100 then controls the transport device A8 to receive the workpiece W that has been subjected to the exposure process from the exposure apparatus 3 and place it in a cell for the processing module 14 in the shelf unit U11.
The control device 100 then controls the transport device A3 to transport the workpiece W from the shelf unit U11 to each unit in the processing module 14, and controls the liquid processing unit U1 and the heat processing unit U2 to perform a developing process on the resist film of the workpiece W. The control device 100 then controls the transport device A3 to return the workpiece W to the shelf unit U10, and controls the transport device A7 and the transport device A1 to return the workpiece W into the carrier C. The control device 100 may also control the inspection unit U3 to inspect the surface of the workpiece W at any timing during the processing in the processing module 14. This completes the coating and developing process for one workpiece W. The control device 100 controls each device of the coating and developing apparatus 2 to perform the coating and developing process for each of the subsequent multiple workpieces W in the same manner as described above.
The specific configuration of the substrate processing apparatus is not limited to the configuration of the coating and developing apparatus 2 exemplified above. The substrate processing apparatus may be any type that includes a unit that inspects the surface of the workpiece W that is to be subjected to a specified process, and a control device that controls this unit.
(Inspection Unit)
Next, the inspection unit U3 included in the processing modules 11 to 14 will be described. The inspection unit U3 has a function of capturing an image of the surface of the workpiece W (hereinafter referred to as "surface Wa") to acquire image data. The inspection unit U3 may capture an image of the entire surface Wa of the workpiece W to acquire image data of the entire surface Wa. As shown in FIG. 3, the inspection unit U3 includes, for example, a housing 30, a holding unit 31, a linear driving unit 32, an imaging unit 33, and a light projecting/reflecting unit 34.
The holding unit 31 holds the workpiece W horizontally with the surface Wa facing upward. The linear drive unit 32 includes a power source such as an electric motor, and moves the holding unit 31 along a horizontal linear path. The imaging unit 33 has a camera 35 such as a CCD camera. The camera 35 is provided near one end of the inspection unit U3 in the moving direction of the holding unit 31, and is directed toward the other end in the moving direction. The light projecting/reflecting unit 34 projects light into the imaging range and guides the reflected light from the imaging range to the camera 35. For example, the light projecting/reflecting unit 34 has a half mirror 36 and a light source 37. The half mirror 36 is provided in the middle of the moving range of the linear drive unit 32 at a position higher than the holding unit 31, and reflects light from below to the camera 35. The light source 37 is provided above the half mirror 36, and irradiates illumination light downward through the half mirror 36.
The inspection unit U3 operates as follows to acquire image data of the surface Wa of the workpiece W. First, the linear drive unit 32 moves the holding unit 31. This causes the workpiece W to pass under the half mirror 36. During this passage, reflected light from each part of the surface Wa of the workpiece W is sent sequentially to the camera 35. The camera 35 forms an image of the reflected light from each part of the surface Wa of the workpiece W, and acquires image data of the surface Wa of the workpiece W (the entire surface Wa). The captured image obtained by capturing the surface Wa of the workpiece W changes depending on the condition of the surface Wa of the workpiece W. Therefore, in the inspection unit U3, an image (captured image data) of the surface Wa of the workpiece W is acquired as information indicating the condition of the surface Wa of the workpiece W, and this is used to evaluate the condition of the surface Wa, in particular the presence or absence of defects.
The captured image data acquired by the camera 35 is sent to the control device 100. The control device 100 can inspect the condition of the surface Wa of the workpiece W based on the captured image data of the surface Wa. For example, the presence or absence of defects on the surface Wa of the workpiece W can be inspected. In this disclosure, image data in which pixel values for each pixel are defined may be simply referred to as an "image."
[Control Device (Substrate Inspection Device)]
As shown in FIG. 4, the control device 100 has a process control unit 102 and an inspection control unit 110 as functional components (hereinafter referred to as "functional modules"). The processes executed by the process control unit 102 and the inspection control unit 110 correspond to the processes executed by the control device 100. The process control unit 102 controls the liquid processing unit U1 and the heat processing unit U2 so as to perform the liquid treatment and heat treatment in the above-mentioned coating and developing process on the workpiece W.
The inspection control unit 110 (substrate inspection device) inspects the workpiece W based on image data obtained from the inspection unit U3 at any stage when performing the coating and developing process. The inspection of the workpiece W includes determining whether there are any abnormalities (defects) on the surface Wa of the workpiece W. Defects on the surface Wa include, for example, scratches, adhesion of foreign matter, uneven application of the processing liquid, and non-application of the processing liquid.
Before performing an inspection, the inspection control unit 110 prepares normal substrate data to be used in the inspection, using normal workpieces W (normal substrates) on whose surfaces Wa no defects were found. The inspection control unit 110 inspects the workpiece W to be inspected (the substrate to be inspected) based on the normal substrate data. The normal workpieces W and the workpiece W to be inspected are the same type of workpiece (substrate). The normal workpieces W and the workpiece W to be inspected are subjected to the coating and developing process under the same processing conditions, and the preparation of the normal substrate data and the inspection of the workpiece W are performed at the same timing in the coating and developing process (for example, after the application of the resist and before the heat treatment).
The inspection control unit 110 has, as its functional modules, a model map creation unit 120, a preprocessing unit 130, a defect detection unit 112, and a result output unit 114. The model map creation unit 120 has an image acquisition unit 122, a component image generation unit 124, and a model map generation unit 126. The preprocessing unit 130 has an image acquisition unit 132, a component image generation unit 134, and a preprocessed image generation unit 136. The model map creation unit 120 and the preprocessing unit 130 of the inspection control unit 110 function as a normal preprocessed image creation unit that creates preprocessed images for the normal images. The model map creation unit 120 and the preprocessing unit 130 of the inspection control unit 110 also function as an inspection preprocessed image creation unit that creates a preprocessed image for the inspection image. The processing performed by each functional module of the inspection control unit 110 corresponds to processing performed by the inspection control unit 110 (control device 100).
The model map creation unit 120 has a function of creating a model map to be used for inspection from images of normal workpieces W. As the image data relating to normal workpieces W, multiple images (e.g., about 3 to 10) of the surfaces Wa of normal workpieces W are prepared.
The image acquisition unit 122 has the function of acquiring image data of the surface Wa of a normal workpiece W from the inspection unit U3.
The component image generation unit 124 has a function of generating, from the images acquired by the image acquisition unit 122, component images used to create the model map. The procedure for generating the component images will be described later.
The model map generation unit 126 has a function of creating a model map using the component images generated by the component image generation unit 124. A model map is a map of a shape corresponding to the workpiece W, in which the degree of color variation on the surface Wa in the image data relating to normal workpieces W is quantified for each pixel contained in the image data. One model map is created from the multiple image data relating to normal workpieces W. The detailed procedure will be described later, but the color variation for each pixel is quantified using covariance and the Mahalanobis distance. The model map created by the model map creation unit 120 is used for preprocessing of the image data used in defect detection.
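The disclosure describes this quantification only at a high level. The following is a minimal, hypothetical sketch of one way such a per-pixel variation map could be computed with NumPy; the stacking of component values, the regularization term, and the use of the maximum distance over the normal images are illustrative assumptions, not details taken from the disclosure:

```python
import numpy as np

def model_map(component_stack):
    """Quantify per-pixel color variation across normal images.

    component_stack: N x H x W x C array of per-pixel component values
    from N normal images. For each pixel, the covariance of its C
    component values over the N images is estimated, and the map value
    is the largest Mahalanobis distance among the N observations.
    """
    n, h, w, c = component_stack.shape
    mean = component_stack.mean(axis=0)                  # H x W x C
    dev = component_stack - mean                         # N x H x W x C
    # Per-pixel covariance matrices: H x W x C x C
    cov = np.einsum('nhwi,nhwj->hwij', dev, dev) / n
    cov += 1e-6 * np.eye(c)  # regularize so each matrix is invertible
    inv = np.linalg.inv(cov)                             # H x W x C x C
    # Squared Mahalanobis distance of each observation at each pixel.
    d2 = np.einsum('nhwi,hwij,nhwj->nhw', dev, inv, dev)
    return np.sqrt(np.maximum(d2, 0)).max(axis=0)        # H x W

# Demo: four tiny "normal images" with 3 component values per pixel.
rng = np.random.default_rng(0)
stack = rng.random((4, 2, 2, 3))
variation = model_map(stack)
```

Because the covariance is estimated independently per pixel, pixels whose color naturally fluctuates between normal substrates are down-weighted when later images are scored against the model.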
The preprocessing unit 130 has a function of creating preprocessed images to be used for inspection from the images of normal workpieces W and the image of the workpiece W to be inspected.
The image acquisition unit 132 has a function of acquiring image data of the surface Wa of the workpiece W to be inspected from the inspection unit U3. Note that image data of the surfaces Wa of normal workpieces W has already been acquired by the image acquisition unit 122, so that image data is used.
The component image generation unit 134 has a function of generating, from the images acquired by the image acquisition unit 132, component images used to create the preprocessed images. The procedure for generating the component images will be described later.
The preprocessed image generation unit 136 has a function of generating a preprocessed image using the component images generated by the component image generation unit 134. A preprocessed image is an image in which the degree of color variation on the surface Wa in the image data of a normal workpiece W or the image data of the workpiece W to be inspected is quantified for each pixel contained in the image data. The detailed procedure will be described later, but in generating a preprocessed image, the color variation for each pixel is quantified using the covariance used to create the model map and the Mahalanobis distance. One preprocessed image is created for each piece of image data to be processed. Preprocessed images are also created for the image data of the normal workpieces W. The preprocessed images created by the preprocessing unit 130 are used in defect detection.
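Assuming the model-map step has already produced per-pixel means and inverse covariance matrices, scoring one image against the normal model could be sketched as follows (names and array shapes are hypothetical illustrations, not taken from the disclosure):

```python
import numpy as np

def preprocessed_image(components, mean, inv_cov):
    """Per-pixel Mahalanobis distance of one image from the normal model.

    components: H x W x C component values of the image being processed;
    mean:       H x W x C per-pixel means from the normal images;
    inv_cov:    H x W x C x C per-pixel inverse covariance matrices.
    Returns an H x W map of distances (larger = further from normal).
    """
    dev = components - mean
    d2 = np.einsum('hwi,hwij,hwj->hw', dev, inv_cov, dev)
    return np.sqrt(np.maximum(d2, 0))

# With identity covariance the distance reduces to the Euclidean norm
# of the per-pixel deviation from the mean: here sqrt(1+1+1) everywhere.
mean = np.zeros((2, 2, 3))
inv_cov = np.broadcast_to(np.eye(3), (2, 2, 3, 3))
img = np.ones((2, 2, 3))
out = preprocessed_image(img, mean, inv_cov)
```

The same function would be applied both to the normal images (to obtain their preprocessed images) and to the inspection image, as the passage above describes.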
The defect detection unit 112 uses the preprocessed images created from the image data of the normal workpieces W and the image data of the workpiece W to be inspected to check for the presence or absence of defects on the surface Wa of the workpiece W to be inspected. An example of the processing procedure in the defect detection unit 112 will be described later.
The result output unit 114 has a function of outputting the detection result of the defect detection unit 112. As an example, when the defect detection unit 112 determines that there is an abnormality on the surface Wa of the workpiece W, the result output unit 114 may output an abnormality signal indicating that the workpiece W being inspected is abnormal. In that case, the result output unit 114 may output the abnormality signal to the process control unit 102, to a higher-level controller, or to an output device such as a monitor for presenting information to an operator.
The control device 100 is composed of one or more computers. The control device 100 has, for example, a circuit 150 shown in FIG. 5. The circuit 150 has one or more processors 152, a memory 154, a storage 156, and an input/output port 158. The storage 156 has a computer-readable storage medium, such as a hard disk. The storage medium stores a program (substrate inspection program) for causing the control device 100 to execute the substrate inspection method described below. The storage medium may be a removable medium, such as a non-volatile semiconductor memory, a magnetic disk, or an optical disk.
The memory 154 temporarily stores the program loaded from the storage medium of the storage 156 and the results of computations performed by the processor 152. The processor 152 constitutes each of the functional modules described above by executing the program in cooperation with the memory 154. The input/output port 158 inputs and outputs electrical signals to and from the liquid processing unit U1, the heat processing unit U2, the inspection unit U3, and the like, according to instructions from the processor 152.
The hardware configuration of the control device 100 is not necessarily limited to configuring each functional module by a program. For example, each functional module of the control device 100 may be configured by a dedicated logic circuit or an ASIC (Application Specific Integrated Circuit) that integrates these. When the control device 100 is configured by multiple computers (multiple circuits), some of the above functional modules may be realized by one computer (circuit), and the remaining parts of the above functional modules may be realized by another computer (circuit).
[Substrate Inspection Method]
As an example of a substrate inspection method, a series of processes executed by the control device 100 (inspection control unit 110) will be described. FIG. 6 is a flow diagram illustrating the series of processes executed by the control device 100. FIGS. 7 to 12 are diagrams illustrating the details of the processes performed in each step and examples of the image data generated in each step.
The control device 100 executes steps S01 to S03, for example, as shown in FIG. 6. In step S01, the image acquisition unit 122 of the model map creation unit 120 acquires image data (normal images) relating to normal workpieces W from the inspection unit U3. Next, in step S02, the component image generation unit 124 of the model map creation unit 120 creates component images from the normal images acquired by the image acquisition unit 122. Furthermore, in step S03, the model map generation unit 126 of the model map creation unit 120 generates a model map from the component images generated by the component image generation unit 124.
 図7は、上記のステップS01~S03の一連の手順を説明する図である。ステップS01において、画像取得部122において取得され、後述の手順で使用される正常画像G1は、ここでは3枚であるとする。例えば、正常画像G1として、2048×2048画素の画像データが用いられる。また、正常画像G1(及び検査画像)は、いずれもカラー画像である。RGBカラー画像の場合、1画素毎の画素値として、例えば、RGBそれぞれに係る画素値の情報が含まれる。このように、1枚の正常画像G1には2048×2048画素の画素値に係る情報が含まれる。 FIG. 7 is a diagram explaining the series of steps S01 to S03 described above. In step S01, it is assumed that there are three normal images G1 acquired by the image acquisition unit 122 and used in the procedure described below. For example, image data of 2048 x 2048 pixels is used as the normal images G1. In addition, the normal images G1 (and the inspection image) are all color images. In the case of an RGB color image, the pixel value for each pixel includes, for example, information on the pixel values related to each of the RGB colors. In this way, one normal image G1 includes information on the pixel values of 2048 x 2048 pixels.
 図7では、3枚の正常画像G1に対して、画像生成処理P1が実行される例が示されている。図7に示す画像生成処理P1とは、図6におけるステップS02、すなわち、成分画像の作成処理に相当する。 FIG. 7 shows an example in which image generation process P1 is executed on three normal images G1. Image generation process P1 shown in FIG. 7 corresponds to step S02 in FIG. 6, i.e., the process of creating component images.
 図7では画像生成処理P1として、強調処理、差分処理、成分分解処理、及びフィルター処理が挙げられている。これらの成分画像の作成手順について、図8~図10を参照しながら説明する。最初に、複数枚の正常画像G1における画素毎の画素値を平均化した平均画像G2が作成される。次に、平均画像G2に対して、強調処理が行われ、平均強調画像G3が作成される。一例として、カラー画像に対して、諧調変換関数に基づいたLUT(ルックアップテーブル)を使用した強調処理が実行される。諧調変換関数とは、入力画像と出力画像における画素値の対応付けを行う関数であり、強調したい特徴部分に応じて、RGB毎に関数が設定される。また、LUTとは変換前後の画素値の対応付けを示す表である。すなわち、強調したい部分にフォーカスした諧調変換関数を予め設定し、この諧調変換関数に対応するLUTを準備しておくことで、LUTに基づいてカラー画像の各画素におけるRGB値を変換していく処理が行われる。この結果、例えば、特定の画素値の領域がより強調された平均強調画像G3が得られる。 In FIG. 7, the image generation process P1 includes enhancement processing, difference processing, component decomposition processing, and filter processing. The procedure for creating these component images will be described with reference to FIG. 8 to FIG. 10. First, an average image G2 is created by averaging the pixel values of each pixel in multiple normal images G1. Next, an enhancement process is performed on the average image G2 to create an average enhancement image G3. As an example, an enhancement process using a LUT (lookup table) based on a tone conversion function is performed on a color image. A tone conversion function is a function that associates pixel values in an input image and an output image, and a function is set for each RGB according to the characteristic part to be emphasized. Also, a LUT is a table that shows the correspondence of pixel values before and after conversion. In other words, a tone conversion function that focuses on the part to be emphasized is set in advance, and a LUT corresponding to this tone conversion function is prepared, and a process is performed to convert the RGB values of each pixel of the color image based on the LUT. As a result, for example, an average enhancement image G3 is obtained in which a specific pixel value area is more emphasized.
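The LUT-based enhancement step can be sketched in Python/NumPy as follows. This is an illustrative sketch only, not the patented implementation: the patent does not disclose the actual tone-conversion functions, so a simple per-channel gamma curve stands in for them, and all image sizes and parameter values are made-up examples.

```python
import numpy as np

def make_lut(gamma: float) -> np.ndarray:
    """Build a 256-entry lookup table from a tone-conversion function.
    Here a simple gamma curve stands in for the (unspecified) function."""
    x = np.arange(256, dtype=np.float64) / 255.0
    return np.clip(np.round((x ** gamma) * 255.0), 0, 255).astype(np.uint8)

def enhance(image: np.ndarray, luts: dict) -> np.ndarray:
    """Apply a per-channel LUT to an RGB image of shape (H, W, 3)."""
    out = np.empty_like(image)
    for c, key in enumerate(("R", "G", "B")):
        out[..., c] = luts[key][image[..., c]]
    return out

# Average several "normal" images pixel-wise, then enhance the average.
normals = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (100, 110, 120)]
average = np.mean(np.stack(normals), axis=0).astype(np.uint8)  # average image G2
luts = {"R": make_lut(0.8), "G": make_lut(1.0), "B": make_lut(1.2)}
enhanced = enhance(average, luts)  # average enhanced image G3
```

With a gamma of 1.0 the LUT is the identity mapping, so in this example the G channel passes through unchanged while the R and B channels are stretched.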
 さらに、この平均強調画像G3に対してムラ除去処理を行う。ムラ除去処理とは、例えば、同心円状のムラを除去する処理をいう。具体的には、平均強調画像G3において、ワークWの中心を撮像した画素を中心とした同心円を設定する。そして、同心円上の画素におけるRGBそれぞれの画素値を平均化することで、同心円におけるRGBそれぞれの画素値が同じ値(平均値)となった、フィルタ画像を準備することができる。そして、平均強調画像G3における各画素の画素値から、フィルタ画像に含まれる各画素の画素値の差分を取ることで、平均ムラ除去画像G4が得られる。このような処理を行うことで、中心からの距離が異なることによって生じる同心円状のムラを除去することができる。上記の手順によって、平均強調画像G3及び平均ムラ除去画像G4が得られる。 Furthermore, an unevenness removal process is performed on this average emphasized image G3. The unevenness removal process refers to a process for removing concentric unevenness, for example. Specifically, in the average emphasized image G3, concentric circles are set with the pixel that captures the center of the workpiece W as the center. Then, by averaging the pixel values of each of the RGB pixels on the concentric circles, a filter image can be prepared in which the pixel values of each of the RGB pixels on the concentric circles are the same value (average value). Then, an average unevenness-removed image G4 is obtained by subtracting the pixel value of each pixel in the filter image from the pixel value of the corresponding pixel in the average emphasized image G3. By performing such a process, it is possible to remove concentric unevenness caused by different distances from the center. The average emphasized image G3 and the average unevenness-removed image G4 are obtained by the above procedure.
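The concentric-unevenness removal described above amounts to subtracting, per channel, the mean value of each ring of pixels around the wafer center. A minimal NumPy sketch follows; the function and parameter names are our own, and since the patent does not specify how the rings are discretized, integer radii are assumed.

```python
import numpy as np

def remove_radial_unevenness(img: np.ndarray, center=None) -> np.ndarray:
    """Subtract the mean value of each concentric ring around the wafer
    center, channel by channel, to cancel concentric unevenness."""
    h, w = img.shape[:2]
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2).astype(int)  # ring index per pixel
    out = np.empty_like(img, dtype=np.float64)
    for c in range(img.shape[2]):
        ring_sum = np.bincount(r.ravel(), weights=img[..., c].ravel())
        ring_cnt = np.bincount(r.ravel())
        ring_mean = ring_sum / np.maximum(ring_cnt, 1)  # filter image: per-ring mean
        out[..., c] = img[..., c] - ring_mean[r]        # difference from filter image
    return out

flat = np.full((8, 8, 3), 50.0)
flattened = remove_radial_unevenness(flat)  # nothing but ring means, so all zeros
```

For an image whose values depend only on the distance from the center, the result is zero everywhere, which is exactly the unevenness this filter is meant to cancel.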
 次に、成分画像を生成する対象となる画像を対象画像I1とし、対象画像について上記の平均強調画像G3の生成と同様の処理を行うことによって、対象強調画像I2が生成される。さらに、対象強調画像I2に対して、上記の平均ムラ除去画像G4の生成と同様の処理を行うことで、対象ムラ除去画像I3が生成される。この結果、図9(a)に示すように、対象強調画像I2と、対象ムラ除去画像I3とが生成される。なお、対象画像I1とは、上記の平均画像G2の生成に使用した正常画像G1の1枚に対応する。 Next, the image from which the component images are to be generated is set as the target image I1, and the target image is subjected to the same processing as in generating the above-mentioned average emphasized image G3 to generate the target emphasized image I2. Furthermore, the target emphasized image I2 is subjected to the same processing as in generating the above-mentioned average unevenness removed image G4 to generate the target unevenness removed image I3. As a result, as shown in FIG. 9(a), the target emphasized image I2 and the target unevenness removed image I3 are generated. Note that the target image I1 corresponds to one of the normal images G1 used to generate the above-mentioned average image G2.
 さらに、図9(b)に示すように、対象強調画像I2と平均強調画像G3との差分を取ることで、差分強調画像I4が得られる。そして、図9(c)に示すように、対象ムラ除去画像I3と平均ムラ除去画像G4との差分を取ることで、差分ムラ除去画像I5が得られる。差分強調画像I4及び差分ムラ除去画像I5は、それぞれ平均強調画像G3及び平均ムラ除去画像G4の成分が除去されている画像である。そのため、差分強調画像I4及び差分ムラ除去画像I5を得る処理は、3枚の正常画像G1を平均した画像から生成された平均強調画像G3及び平均ムラ除去画像G4に含まれている変動成分を除去する処理に相当する。つまり、正常画像G1を対象画像I1とした場合に、差分強調画像I4及び差分ムラ除去画像I5は、いずれも、正常差分画像に相当する。 Furthermore, as shown in FIG. 9(b), a difference emphasized image I4 is obtained by taking the difference between the target emphasized image I2 and the average emphasized image G3. Then, as shown in FIG. 9(c), a difference unevenness removed image I5 is obtained by taking the difference between the target unevenness removed image I3 and the average unevenness removed image G4. The difference emphasized image I4 and the difference unevenness removed image I5 are images from which the components of the average emphasized image G3 and the average unevenness removed image G4 have been removed, respectively. Therefore, the process of obtaining the difference emphasized image I4 and the difference unevenness removed image I5 corresponds to the process of removing the fluctuation components contained in the average emphasized image G3 and the average unevenness removed image G4 generated from the image obtained by averaging three normal images G1. In other words, when the normal image G1 is the target image I1, both the difference emphasized image I4 and the difference unevenness removed image I5 correspond to normal difference images.
 さらに、これまでのプロセスで得られた対象強調画像I2、対象ムラ除去画像I3、差分強調画像I4及び差分ムラ除去画像I5のそれぞれについて、図10(a)~図10(d)に示されるように、平均化処理、差分処理、及びノイズ除去処理を行う。平均化処理とは、例えば、1画素の画素値を、その画素の周辺の画素群の画素値の平均値とする処理であり、例えば、101×101画素のマスクを用いて1画素の画素値を周辺の101×101画素の画素値の平均値とする処理を行うことである。次に、差分処理とは、平均処理を行った後の画像データと、処理前の画像との差分を取ることで欠陥に由来しない大きな領域の色変化を除去する処理を行うことである。さらに、ノイズ除去処理とは、欠陥とは関係ないと思われるノイズを除去する処理であり、例えば、15×15画素のマスクを用いてノイズに由来する成分を除去する処理を行うことである。これらの処理を施すことで、図10(a)~図10(d)に示すように、対象強調画像I2、対象ムラ除去画像I3、差分強調画像I4及び差分ムラ除去画像I5から、4つの処理後画像I11、I12、I13、I14が作成される。 Furthermore, for each of the object emphasis image I2, object unevenness removed image I3, difference emphasis image I4, and difference unevenness removed image I5 obtained by the above processes, averaging, difference, and noise removal processes are performed as shown in Figs. 10(a) to 10(d). The averaging process is, for example, a process in which the pixel value of one pixel is set to the average value of the pixel values of the pixels surrounding that pixel, for example, a process in which the pixel value of one pixel is set to the average value of the pixel values of the surrounding 101 x 101 pixels using a mask of 101 x 101 pixels. Next, the difference process is a process in which a large area of color change that is not due to defects is removed by taking the difference between the image data after the averaging process and the image before the process. Furthermore, the noise removal process is a process in which noise that is thought to be unrelated to defects is removed, for example, a process in which a mask of 15 x 15 pixels is used to remove components that are due to noise. By carrying out these processes, four processed images I11, I12, I13, and I14 are created from the object-emphasized image I2, the object-unevenness-removed image I3, the difference-emphasized image I4, and the difference-unevenness-removed image I5, as shown in Figures 10(a) to 10(d).
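The averaging / difference / noise-removal chain applied to images I2 to I5 can be sketched as follows. This is a hedged sketch: small 5 x 5 and 3 x 3 masks stand in for the patent's 101 x 101 and 15 x 15 masks, and a median filter is used for the noise-removal step, which is an assumption since the patent does not name the exact operation.

```python
import numpy as np

def box_mean(img: np.ndarray, k: int) -> np.ndarray:
    """Averaging step: each pixel becomes the mean of its k x k
    neighbourhood (edge-padded); the patent uses k = 101."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    s = p.cumsum(0).cumsum(1)             # integral image for O(1) box sums
    s = np.pad(s, ((1, 0), (1, 0)))
    h, w = img.shape
    total = s[k:k + h, k:k + w] - s[:h, k:k + w] - s[k:k + h, :w] + s[:h, :w]
    return total / (k * k)

def preprocess(channel: np.ndarray, k_avg: int = 5, k_noise: int = 3) -> np.ndarray:
    """Difference step: subtract the large-scale mean, removing broad
    colour changes not caused by defects; then a small median filter
    stands in for the noise-removal mask."""
    diff = channel - box_mean(channel, k_avg)
    pad = k_noise // 2
    p = np.pad(diff, pad, mode="edge")
    out = np.empty_like(diff)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            out[i, j] = np.median(p[i:i + k_noise, j:j + k_noise])
    return out

ch = np.full((6, 6), 10.0)
residual = preprocess(ch)  # a flat channel has nothing left after the chain
```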
 上記の手順は、RGBカラー画像を対象に行われる。したがって、対象強調画像I2、対象ムラ除去画像I3、及び、4つの処理後画像I11、I12、I13、I14には、それぞれ、R,G,B成分の画素値の情報が含まれている。これらを各成分のみ取り出して画像データを生成すると、例えば、RGBカラー画像に係る対象強調画像I2から、R成分の対象強調画像I2、G成分の対象強調画像I2、及び、B成分の対象強調画像I2を得ることができる。他の画像I3,I11~I14についても同様に、R成分、G成分、及びB成分に係る画像を生成することができる。 The above procedure is performed on an RGB color image. Therefore, object-enhanced image I2, object unevenness-removed image I3, and the four processed images I11, I12, I13, and I14 each contain information on pixel values of the R, G, and B components. If only each component is extracted from these to generate image data, for example, an object-enhanced image I2 for the R component, an object-enhanced image I2 for the G component, and an object-enhanced image I2 for the B component can be obtained from object-enhanced image I2 for the RGB color image. Similarly, images relating to the R, G, and B components can be generated for the other images I3, I11 to I14.
 また、カラー画像は、R,G,Bの色成分で表現することに代えて、別の色空間で定義することもできる。例えば、HSV色空間、XYZ色空間、又はLab色空間等が挙げられる。ある1画素の色を互いに異なる色空間の成分で表現することで、当該画素の色の特徴を別の切り口から捉えることができる。また、既定の変換式を用いて特定の色空間で定義された色を別の色空間の成分で表現することができる。 In addition, instead of expressing a color image using the R, G, and B color components, it is also possible to define it in another color space. Examples include the HSV color space, the XYZ color space, and the Lab color space. By expressing the color of a pixel using components of different color spaces, it is possible to capture the color characteristics of that pixel from a different perspective. Also, a color defined in a specific color space can be expressed using components of another color space using a predefined conversion formula.
 そこで、1枚のカラー画像を上記のHSV色空間における3成分(H成分,S成分,V成分)、XYZ色空間における3成分(X成分,Y成分,Z成分)、及び、Lab色空間における3成分(aa成分,ba成分,La成分)で表記をした上で、これらの各成分についても、対象強調画像I2、対象ムラ除去画像I3、及び、4つの処理後画像I11、I12、I13、I14に係る画像を作成する。この結果、1枚のカラー画像から、色に関する12成分(R,G,B,H,S,V,X,Y,Z,aa,ba,La)について、それぞれ前処理を施して得られる6種類の画像(対象強調画像I2、対象ムラ除去画像I3、及び、4つの処理後画像I11、I12、I13、I14)が得られる。つまり、1枚のカラー画像に関して12成分×6種類の処理後画像に基づいて、72枚の成分画像が得られることになる。図7では、3枚の正常画像G1に対する成分画像を成分画像G10として示している。 Therefore, one color image is expressed in the above three components in the HSV color space (H component, S component, V component), three components in the XYZ color space (X component, Y component, Z component), and three components in the Lab color space (aa component, ba component, La component), and then images related to each of these components are created: an object-emphasized image I2, an object-unevenness-removed image I3, and four processed images I11, I12, I13, and I14. As a result, six types of images (object-emphasized image I2, object-unevenness-removed image I3, and four processed images I11, I12, I13, and I14) are obtained by preprocessing the 12 color components (R, G, B, H, S, V, X, Y, Z, aa, ba, La) from one color image. In other words, 72 component images are obtained based on 12 components x 6 types of processed images for one color image. In Figure 7, the component image for the three normal images G1 is shown as component image G10.
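A sketch of the 12-component decomposition (R, G, B, H, S, V, X, Y, Z and the Lab components the patent labels La, aa, ba). Standard sRGB-to-XYZ and XYZ-to-Lab formulas with a D65 white point are assumed here, and the input is treated as linear RGB in [0, 1]; the patent does not specify which conversion formulas it uses.

```python
import colorsys
import numpy as np

# Linear RGB -> XYZ matrix (sRGB primaries, D65 white point)
M_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
WHITE = M_XYZ.sum(axis=1)  # XYZ of the reference white

def lab_f(t: np.ndarray) -> np.ndarray:
    """Nonlinearity of the standard XYZ -> Lab conversion."""
    d = 6 / 29
    return np.where(t > d ** 3, np.cbrt(t), t / (3 * d * d) + 4 / 29)

def decompose(rgb: np.ndarray) -> np.ndarray:
    """Express an RGB image (H, W, 3), values in [0, 1], as 12 component
    planes: R, G, B, H, S, V, X, Y, Z, L, a, b (stacked on the last axis)."""
    h, w, _ = rgb.shape
    hsv = np.empty_like(rgb)
    for i in range(h):
        for j in range(w):
            hsv[i, j] = colorsys.rgb_to_hsv(*rgb[i, j])
    xyz = rgb @ M_XYZ.T  # input assumed to be linear RGB
    fx, fy, fz = (lab_f(xyz[..., k] / WHITE[k]) for k in range(3))
    lab = np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)
    return np.concatenate([rgb, hsv, xyz, lab], axis=-1)  # shape (H, W, 12)

white = np.ones((2, 2, 3))
comps = decompose(white)
```

Each of the six processed images (I2, I3, I11 to I14) would then be produced per component plane, giving the 72 component images per input image.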
 図7では、3枚の正常画像G1に対して、画像生成処理P1を実行した結果、成分画像G10が得られることが示されている。さらに、図7では、成分画像G10からモデルマップG20を作成することが示されている。また、モデルマップG20の作成には、マハラノビス距離の算出が行われることが示されている。このマハラノビス距離の算出を含む、モデルマップG20の作成の手順は、図6におけるステップS03、すなわち、モデルマップの作成処理に相当する。 FIG. 7 shows that a component image G10 is obtained as a result of executing image generation process P1 on three normal images G1. Furthermore, FIG. 7 shows that a model map G20 is created from the component image G10. It also shows that the Mahalanobis distance is calculated to create the model map G20. The procedure for creating the model map G20, including the calculation of the Mahalanobis distance, corresponds to step S03 in FIG. 6, i.e., the process for creating the model map.
 1つの正常画像G1に係る72枚の成分画像G10は、それぞれ互いに異なる特徴量を含んでいると考えられる。つまり、72枚の成分画像G10には、画素毎の72種類の特徴量が含まれていると捉えることができる。このとき、2048×2048画素に含まれる1つずつの画素毎に、72の特徴量が得られていると考え、各画素の平均及び共分散行列を算出した上で、画素毎にマハラノビス距離が算出される。平均及び共分散行列は、3枚の画像の平均値を使用する。画素Mにおけるマハラノビス距離は、例えば以下の式(1)で算出することができる。なお、式(1)中、μは平均であり、Σは共分散行列である。 The 72 component images G10 relating to one normal image G1 are considered to contain different features. In other words, the 72 component images G10 can be considered to contain 72 types of features for each pixel. In this case, it is considered that 72 features are obtained for each pixel contained in the 2048 x 2048 pixels, and the average and covariance matrix of each pixel are calculated, and the Mahalanobis distance is calculated for each pixel. The average values of the three images are used for the average and covariance matrix. The Mahalanobis distance at pixel M can be calculated, for example, using the following formula (1). In formula (1), μ is the average, and Σ is the covariance matrix.
  D(x_M) = √((x_M − μ)^T Σ^(−1) (x_M − μ))   …(1)
 マハラノビス距離を算出することで、当該画素のデータがデータ集団からどれくらい離れているか、すなわち、データの異常度を算出することができる。1枚の正常画像G1から得られた72個の特徴量は画素毎に異なることから、2048×2048画素の画素群において、当該画素のデータがどの程度異常であるかを、マハラノビス距離を用いて算出することができる。 By calculating the Mahalanobis distance, it is possible to calculate how far the data for that pixel is from the data group, i.e., the degree of abnormality of the data. Since the 72 feature values obtained from one normal image G1 are different for each pixel, it is possible to use the Mahalanobis distance to calculate the degree to which the data for that pixel is abnormal in a pixel group of 2048 x 2048 pixels.
 上述のように1枚の正常画像G1から得られた72枚の成分画像G10を用いて、当該正常画像G1における各画素のマハラノビス距離を算出することができる。この処理を3枚の正常画像G1のそれぞれに対して実行することで、同じ位置の画素について、3つの正常画像G1から、3つのマハラノビス距離が算出される。各画素において、この3つのマハラノビス距離のうちの最大値を採用して、2048×2048画素の配置に対応させて配置する。このようにして作成できるのがモデルマップG20である。つまりモデルマップG20は、3枚の正常画像G1に含まれる各画素のデータの異常度をマッピングしたものであるといえる。 As described above, the 72 component images G10 obtained from one normal image G1 can be used to calculate the Mahalanobis distance for each pixel in the normal image G1. By performing this process on each of the three normal images G1, three Mahalanobis distances are calculated from the three normal images G1 for pixels in the same position. For each pixel, the maximum value of these three Mahalanobis distances is adopted and arranged to correspond to the 2048 x 2048 pixel arrangement. This is how the model map G20 can be created. In other words, the model map G20 can be said to be a mapping of the degree of abnormality of the data for each pixel contained in the three normal images G1.
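The per-pixel Mahalanobis computation and the max-over-images model map described above can be sketched as follows. One practical caveat: with only 3 sample images and 72 features the per-pixel covariance matrix Σ is singular, so this sketch uses a pseudo-inverse, which is an assumption since the patent does not say how Σ^(−1) in formula (1) is obtained. Array shapes are kept tiny for illustration.

```python
import numpy as np

def mahalanobis_maps(features: np.ndarray) -> np.ndarray:
    """features: (N, H, W, F) array, F component values per pixel for
    each of N images.  Returns (N, H, W) per-pixel Mahalanobis
    distances against the per-pixel mean/covariance over the N images."""
    n, h, w, f = features.shape
    dist = np.empty((n, h, w))
    for i in range(h):
        for j in range(w):
            x = features[:, i, j, :]                 # (N, F) samples at this pixel
            mu = x.mean(axis=0)
            cov = np.atleast_2d(np.cov(x, rowvar=False))
            inv = np.linalg.pinv(cov)                # pseudo-inverse (assumption)
            d = x - mu
            q = np.einsum("nf,fg,ng->n", d, inv, d)  # quadratic form per image
            dist[:, i, j] = np.sqrt(np.clip(q, 0.0, None))
    return dist

def model_map(features: np.ndarray) -> np.ndarray:
    """Model map: per-pixel maximum over the N distance maps."""
    return mahalanobis_maps(features).max(axis=0)

flat_feats = np.ones((3, 2, 2, 2))
flat_map = model_map(flat_feats)  # identical samples give zero distance everywhere
```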
 制御装置100では、上記の手順でモデルマップG20を作成した後、ステップS04~S07を実行する。ステップS04では、モデルマップG20を用いて、正常画像G1に係る前処理画像を準備する。また、ステップS05~S07では、モデルマップG20を用いて、検査対象の画像に係る前処理画像を準備する。 After creating the model map G20 using the above procedure, the control device 100 executes steps S04 to S07. In step S04, the model map G20 is used to prepare a preprocessed image related to the normal image G1. In addition, in steps S05 to S07, the model map G20 is used to prepare a preprocessed image related to the image to be inspected.
 より具体的には、ステップS05では前処理部130の画像取得部132が、検査対象画像を取得する。ステップS06では、前処理部130の成分画像生成部134が、検査対象画像の成分画像を生成する。そして、ステップS07では、前処理部130の前処理画像生成部136が、ステップS06で作成された成分画像及びモデルマップG20を用いて、検査対象画像に係る前処理画像を作成する。一方、ステップS04では、前処理部130の前処理画像生成部136が、ステップS02で作成された成分画像G10及びモデルマップG20を用いて、正常画像G1に係る前処理画像を作成する。 More specifically, in step S05, the image acquisition unit 132 of the pre-processing unit 130 acquires the inspection target image. In step S06, the component image generation unit 134 of the pre-processing unit 130 generates a component image of the inspection target image. Then, in step S07, the pre-processed image generation unit 136 of the pre-processing unit 130 creates a pre-processed image for the inspection target image using the component image and model map G20 created in step S06. Meanwhile, in step S04, the pre-processed image generation unit 136 of the pre-processing unit 130 creates a pre-processed image for the normal image G1 using the component image G10 and model map G20 created in step S02.
 上記の手順について、図11を参照しながら説明する。図11は、検査対象の画像である検査画像T1から前処理画像T30を作成する手順を示している。まず、検査画像T1に対して、画像生成処理P1が実行される。図11に示す画像生成処理P1は、図7に示す画像生成処理P1と同じである。また、画像生成処理P1は、図6におけるステップS06、すなわち、成分画像の作成処理に相当する。 The above procedure will be explained with reference to FIG. 11. FIG. 11 shows the procedure for creating a preprocessed image T30 from an inspection image T1, which is an image of the inspection target. First, an image generation process P1 is executed on the inspection image T1. The image generation process P1 shown in FIG. 11 is the same as the image generation process P1 shown in FIG. 7. Moreover, the image generation process P1 corresponds to step S06 in FIG. 6, i.e., the process of creating a component image.
 図11においても、図7と同様に、画像生成処理P1として、強調処理、差分処理、成分分解処理、及びフィルター処理が挙げられている。これらの成分画像の作成手順は、上記実施形態で説明した手順と同じである。ただし、平均強調画像G3及び平均ムラ除去画像G4は、正常画像G1から作成したものが使用される。これらの画像に基づいて、対象画像I1として検査画像T1を用いて、図9(a)に示す手順によって、対象強調画像I2と、対象ムラ除去画像I3とが生成される。次に、図9(b)に示すように、対象強調画像I2と平均強調画像G3との差分を取ることで、差分強調画像I4が生成される。さらに、図9(c)に示すように、対象ムラ除去画像I3と平均ムラ除去画像G4との差分を取ることで、差分ムラ除去画像I5が生成される。このように、検査画像T1を対象画像I1とした場合に作成される差分強調画像I4及び差分ムラ除去画像I5は、いずれも、検査差分画像に相当する。 In FIG. 11, as in FIG. 7, the image generation process P1 includes an emphasis process, a difference process, a component decomposition process, and a filter process. The procedure for creating these component images is the same as that described in the above embodiment. However, the average emphasized image G3 and the average unevenness removed image G4 are created from the normal image G1. Based on these images, the target emphasized image I2 and the target unevenness removed image I3 are generated by the procedure shown in FIG. 9(a) using the inspection image T1 as the target image I1. Next, as shown in FIG. 9(b), the difference emphasized image I4 is generated by taking the difference between the target emphasized image I2 and the average emphasized image G3. Furthermore, as shown in FIG. 9(c), the difference unevenness removed image I5 is generated by taking the difference between the target unevenness removed image I3 and the average unevenness removed image G4. In this way, the difference emphasized image I4 and the difference unevenness removed image I5 created when the inspection image T1 is used as the target image I1 both correspond to the inspection difference image.
 そして、これまでのプロセスで得られた対象強調画像I2、対象ムラ除去画像I3、差分強調画像I4及び差分ムラ除去画像I5のそれぞれについて、図10(a)~図10(d)に示されるように、平均化処理、差分処理、ノイズ除去処理を行うことで、4つの処理後画像I11、I12、I13、I14を作成する。この処理を、上述の12成分に対して行うことで、1枚のカラー画像から、色に関する12成分(R,G,B,H,S,V,X,Y,Z,aa,ba,La)について、それぞれ前処理を施して得られる6種類の画像(対象強調画像I2、対象ムラ除去画像I3、及び、4つの処理後画像I11、I12、I13、I14)が生成される。これによって、1枚の検査画像T1から、図11に示すように、12成分×6種類の処理後画像に基づいて、72枚の成分画像T10が得られる。 Then, as shown in Figs. 10(a) to 10(d), four processed images I11, I12, I13, and I14 are created by performing averaging, subtraction, and noise removal processes on the object-emphasized image I2, object-unevenness-removed image I3, difference-emphasized image I4, and difference-unevenness-removed image I5 obtained in the process so far. By performing this process on the 12 components described above, six types of images (object-emphasized image I2, object-unevenness-removed image I3, and four processed images I11, I12, I13, and I14) are generated from one color image by performing pre-processing on the 12 color components (R, G, B, H, S, V, X, Y, Z, aa, ba, and La). As a result, 72 component images T10 are obtained from one inspection image T1 based on the 12 components x 6 types of processed images, as shown in Fig. 11.
 その後、図11に示すように、成分画像T10からマップ画像T20が作成される。マップ画像T20の作成では、マハラノビス距離の算出が行われる。マップ画像T20の作成時のマハラノビス距離の算出には、上述の数式(1)が用いられる。ただし、正常画像G1における各画素の平均及び共分散行列(3枚の画像の平均値)が使用される。すなわち、検査画像T1由来の平均及び共分散行列ではなく、正常画像G1における平均及び共分散行列が使用される点が、モデルマップG20の作成時とは手順が異なる。 Then, as shown in FIG. 11, a map image T20 is created from the component image T10. When creating the map image T20, the Mahalanobis distance is calculated. The above-mentioned formula (1) is used to calculate the Mahalanobis distance when creating the map image T20. However, the mean and covariance matrix of each pixel in the normal image G1 (the average value of the three images) are used. In other words, the procedure differs from that when creating the model map G20 in that the mean and covariance matrix in the normal image G1 are used instead of the mean and covariance matrix derived from the test image T1.
 マハラノビス距離を算出することで、当該画素のデータがデータ集団からどれくらい離れているか、すなわち、データの異常度を算出することができる。1枚の検査画像T1から得られた72個の特徴量は画素毎に異なることから、2048×2048画素の画素群において、当該画素のデータがどの程度異常であるかを、マハラノビス距離を用いて算出することができる。 By calculating the Mahalanobis distance, it is possible to calculate how far the data for that pixel is from the data group, i.e., the degree of abnormality of the data. Since the 72 feature values obtained from one inspection image T1 are different for each pixel, it is possible to use the Mahalanobis distance to calculate the degree to which the data for that pixel is abnormal in a pixel group of 2048 x 2048 pixels.
 上述のように1枚の検査画像T1から得られた72枚の成分画像T10を用いて、当該検査画像T1における各画素のマハラノビス距離を算出することができる。各画素のマハラノビス距離を2048×2048画素の配置に対応させて配置する。このようにして作成できるのがマップ画像T20である。つまりマップ画像T20も、モデルマップG20と同様に、検査画像T1に含まれる各画素のデータの異常度をマッピングしたものである。 As described above, the 72 component images T10 obtained from one test image T1 can be used to calculate the Mahalanobis distance of each pixel in the test image T1. The Mahalanobis distance of each pixel is arranged so that it corresponds to the 2048 x 2048 pixel arrangement. In this way, the map image T20 can be created. In other words, like the model map G20, the map image T20 is a mapping of the degree of abnormality of the data of each pixel contained in the test image T1.
 そして、図11に示すように、マップ画像T20と、モデルマップG20との差分を求めることで、検査画像T1の前処理画像T30を作成する。モデルマップG20を用いた差分処理を行うことにより、正常画像G1から作成されたモデルマップG20において表れている異常度に係る情報が除去されることになる。マップ画像T20に対して、モデルマップG20との差分を求める処理を行った後の画像が、前処理画像T30となる。なお、成分画像T10を準備した後の、マップ画像T20の作成及び前処理画像T30の作成が、ステップS07の前処理画像の生成に対応する。 Then, as shown in FIG. 11, a preprocessed image T30 of the inspection image T1 is created by calculating the difference between the map image T20 and the model map G20. By performing difference processing using the model map G20, information relating to the degree of abnormality that appears in the model map G20 created from the normal image G1 is removed. The image obtained after processing is performed on the map image T20 to find the difference with the model map G20 becomes the preprocessed image T30. Note that the creation of the map image T20 and the preprocessed image T30 after preparing the component image T10 corresponds to the generation of the preprocessed image in step S07.
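The final subtraction that turns the map image T20 into the preprocessed image T30 is a simple per-pixel difference. In this sketch, negative values (pixels less anomalous than in the normal model) are clipped to zero, which is an assumption since the patent only says the difference is taken; all numbers are illustrative.

```python
import numpy as np

# Mahalanobis map of the inspection image (T20) and the model map built
# from the normal images (G20); the values here are made-up examples.
map_t20 = np.array([[1.0, 4.0], [2.5, 1.5]])
model_g20 = np.array([[1.2, 1.0], [2.0, 1.0]])

# Pre-processed image T30: difference from the model map, so anomaly
# already present in the normal images is removed.  Clipping negatives
# to zero is our assumption, not stated in the patent.
pre_t30 = np.clip(map_t20 - model_g20, 0.0, None)
```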
 上記の一連の処理を行うことによって、1枚の検査画像T1から前処理画像T30が作成される。 By carrying out the above series of processes, a pre-processed image T30 is created from a single inspection image T1.
 なお、正常画像G1に対応する前処理画像を作成するステップS04では、前処理部130の前処理画像生成部136が、ステップS02で作成された成分画像G10及びモデルマップG20を用いて、3枚の正常画像G1それぞれについての前処理画像を作成する。このとき、モデルマップG20の作成手順において、3枚の正常画像G1に係る成分画像G10は準備されている。したがって、図11に示す手順のうち、成分画像G10の準備までのステップは省略することができる。したがって、正常画像G1については、成分画像G10を準備した後の、マップ画像(図11におけるマップ画像T20に相当)の作成及び前処理画像(図11における前処理画像T30に相当)の作成を行うことが、ステップS07の前処理画像の生成に対応する。また、ステップS07を実行することで、3枚の正常画像G1それぞれについての前処理画像G30(図12参照)を作成することができる。 In step S04, in which a preprocessed image corresponding to the normal image G1 is created, the preprocessed image generating unit 136 of the preprocessing unit 130 creates a preprocessed image for each of the three normal images G1 using the component image G10 and model map G20 created in step S02. At this time, in the procedure for creating the model map G20, the component image G10 for the three normal images G1 is prepared. Therefore, in the procedure shown in FIG. 11, the steps up to the preparation of the component image G10 can be omitted. Therefore, for the normal image G1, the creation of the map image (corresponding to the map image T20 in FIG. 11) and the creation of the preprocessed image (corresponding to the preprocessed image T30 in FIG. 11) after the component image G10 is prepared corresponds to the generation of the preprocessed image in step S07. In addition, by executing step S07, a preprocessed image G30 (see FIG. 12) can be created for each of the three normal images G1.
 制御装置100は、上記の手順で正常画像G1に対応する前処理画像G30と、検査画像T1に対応する前処理画像T30を作成した後、ステップS11~S14を実行する。ステップS11,S12では、制御装置100が、正常画像G1の前処理画像G30から、特徴量を抽出し、異常度を計算することによって(ステップS11)、欠陥検出に使用する異常度マップを作成する(ステップS12)。また、ステップS13,S14では、制御装置100が、検査画像T1の前処理画像T30から特徴量を抽出し、異常度を計算することによって(ステップS13)、欠陥検出に使用する異常度マップを作成する(ステップS14)。ステップS11~S14は、制御装置100の欠陥検出部112によって行われる。 After creating a preprocessed image G30 corresponding to normal image G1 and a preprocessed image T30 corresponding to inspection image T1 in the above procedure, the control device 100 executes steps S11 to S14. In steps S11 and S12, the control device 100 extracts features from the preprocessed image G30 of normal image G1 and calculates the degree of abnormality (step S11), thereby creating an abnormality map to be used for defect detection (step S12). In steps S13 and S14, the control device 100 extracts features from the preprocessed image T30 of inspection image T1 and calculates the degree of abnormality (step S13), thereby creating an abnormality map to be used for defect detection (step S14). Steps S11 to S14 are performed by the defect detection unit 112 of the control device 100.
 ステップS11,S13で行われる特徴量の抽出とは、前処理画像G30,T30から、欠陥に関係する特徴量を抽出する処理であり、例えば、画像分類のアルゴリズムの1つであるVision Transformer(ViT)等を用いて実行することができる。また、ステップS12,S14で行われる異常度計算とは、前処理画像G30,T30から、特徴量に基づいて欠陥に関係する異常度を画素毎に算出する処理であり、例えば、ニューラルネットワークを用いて確率分布を求める手法の1つであるGraph Mixture Density Networks(GMDN:Graph混合密度ネットワーク)等を用いて実行することができる。ステップS11,S13、及び、ステップS12,S14のいずれにおいても同一のアルゴリズムが用いられる。 The extraction of features performed in steps S11 and S13 is a process of extracting features related to defects from the preprocessed images G30 and T30, and can be performed, for example, using Vision Transformer (ViT), which is one of the image classification algorithms. The calculation of anomaly level performed in steps S12 and S14 is a process of calculating the degree of anomaly related to defects for each pixel from the preprocessed images G30 and T30 based on the features, and can be performed, for example, using Graph Mixture Density Networks (GMDN), which is one of the methods for obtaining a probability distribution using a neural network. The same algorithm is used in steps S11 and S13 and steps S12 and S14.
 このように、特徴量の抽出及び異常度計算には、教師データを用いた機械学習によって生成されるアルゴリズムが用いられ得る。特徴量の抽出及び異常度計算において使用するアルゴリズムの学習段階では、事前に準備された複数の正常画像から作成された前処理画像について、特徴量を抽出して異常度計算を行った際に、算出される異常度の数値が低くなるように調整される。このような事前の処理を経てパラメータが調整されたアルゴリズム(モデル)を、正常画像G1から作成された前処理画像G30及び検査画像T1から作成された前処理画像T30に適用することで、画素毎の異常度を算出することができる。 In this way, an algorithm generated by machine learning using training data can be used to extract features and calculate the degree of abnormality. In the learning stage of the algorithm used to extract features and calculate the degree of abnormality, the preprocessed image created from multiple normal images prepared in advance is adjusted so that when features are extracted and the degree of abnormality is calculated, the calculated numerical value of the degree of abnormality is lowered. By applying the algorithm (model) whose parameters have been adjusted through such preprocessing to the preprocessed image G30 created from the normal image G1 and the preprocessed image T30 created from the inspection image T1, the degree of abnormality for each pixel can be calculated.
 制御装置100の欠陥検出部112は、上記の手順で算出された画素毎の異常度の情報を画素の配置に対応させてマッピングする。この結果、3枚の正常画像G1から1枚の異常度マップG40(正常画像異常度マップ)が作成され、1枚の検査画像T1から1枚の異常度マップT40(検査画像異常度マップ)が作成される。 The defect detection unit 112 of the control device 100 maps the information on the degree of abnormality for each pixel calculated by the above procedure in correspondence with the arrangement of the pixels. As a result, one abnormality map G40 (normal image abnormality map) is created from the three normal images G1, and one abnormality map T40 (inspection image abnormality map) is created from the one inspection image T1.
 次に、制御装置100は、ステップS15を実行する。ステップS15では、制御装置100の欠陥検出部112が、上記の異常度マップG40,T40を用いて、検査画像T1における欠陥の有無を判定する。また、制御装置100の結果出力部114が、欠陥の有無の判定結果を出力する。 Next, the control device 100 executes step S15. In step S15, the defect detection unit 112 of the control device 100 uses the above-mentioned anomaly maps G40 and T40 to determine whether or not there is a defect in the inspection image T1. In addition, the result output unit 114 of the control device 100 outputs the determination result of whether or not there is a defect.
 図12では欠陥の有無を判定する手順の一例を示している。まず、上述したように3枚の正常画像G1から作成された異常度マップG40と、1枚の検査画像T1から作成された異常度マップT40との差分を求めることで、差分画像T41が得られる。さらに、この差分画像T41について、各画素の値を所定の閾値を用いて二値化した二値化画像T42と、上述の手順で作成された検査画像T1の前処理画像T30について、各画素の値を所定の閾値を用いて二値化した二値化画像T43とを準備する。そして、2つの二値化画像T42,T43を重ね合わせて、両方の画像において白色に変換された画素を特定した欠陥検出画像T45が作成される。欠陥検出画像T45において、白色で表示される部分は、欠陥があると判定する。なお、上記の手順は一例であって、欠陥検出の手法は特に限定されない。 FIG. 12 shows an example of a procedure for determining the presence or absence of a defect. First, a difference image T41 is obtained by calculating the difference between the abnormality map G40 created from three normal images G1 as described above and the abnormality map T40 created from one inspection image T1. Furthermore, a binarized image T42 in which the value of each pixel is binarized using a predetermined threshold value for this difference image T41, and a binarized image T43 in which the value of each pixel is binarized using a predetermined threshold value for the pre-processed image T30 of the inspection image T1 created by the above procedure are prepared. Then, the two binarized images T42 and T43 are superimposed to create a defect detection image T45 in which pixels converted to white in both images are identified. In the defect detection image T45, a portion displayed in white is determined to have a defect. Note that the above procedure is merely an example, and the defect detection method is not particularly limited.
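The threshold-and-intersect step above can be sketched as follows; the threshold values and variable names are illustrative assumptions, since the patent only speaks of "a predetermined threshold".

```python
import numpy as np

def detect_defects(diff_map: np.ndarray, preprocessed: np.ndarray,
                   thr_diff: float, thr_pre: float) -> np.ndarray:
    """Binarise the anomaly-map difference image and the preprocessed
    inspection image with their own thresholds, then keep only pixels
    flagged in BOTH, as in the defect detection image T45."""
    b1 = diff_map > thr_diff      # binarised image T42
    b2 = preprocessed > thr_pre   # binarised image T43
    return b1 & b2                # white in both images -> defect candidate

diff = np.array([[0.1, 0.9], [0.8, 0.2]])   # difference image T41 (illustrative)
pre = np.array([[0.7, 0.8], [0.1, 0.9]])    # preprocessed image T30 (illustrative)
defects = detect_defects(diff, pre, 0.5, 0.5)
```

Only the top-right pixel exceeds both thresholds here, so it is the only pixel marked as a defect candidate.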
 上記の処理を行うことで、検査画像T1に基づいて欠陥があると推定される領域が特定される。制御装置100の結果出力部114は、欠陥検出画像T45を判定結果として出力してもよいし、欠陥検出画像T45に欠陥があると判定された領域が含まれているか否かに基づいて欠陥の有無の判定結果のみを出力してもよい。 By carrying out the above processing, areas that are estimated to have defects are identified based on the inspection image T1. The result output unit 114 of the control device 100 may output the defect detection image T45 as the judgment result, or may output only the judgment result of the presence or absence of a defect based on whether or not the defect detection image T45 contains an area that is judged to have a defect.
 なお、上記の手順では機械学習を行うアルゴリズムが用いられるが、学習方法は適宜変更することができる。例えば、上述したように、特徴量の抽出及び異常度計算に使用されるアルゴリズムの学習段階では、事前に準備された複数の正常画像から作成された前処理画像について特徴量を抽出して異常度計算を行った際に、算出される異常度の数値が低くなるように調整される。上記の例では、正常画像から作成された前処理画像を、学習に利用しているが、前処理画像のみを学習に使用するのではなく、前処理画像の作成に使用した成分画像が、機械学習の教師データとして使用されてもよい。上述したように、正常画像から生成される前処理画像は、1枚の正常画像から作成された72枚の成分画像を用いて作成される。これらの成分画像は、正常画像における特徴部分に係る情報を含んでいる画像であるので、特徴量の抽出及び異常度計算のアルゴリズムの教師データとして使用することができる。したがって、前処理画像の生成に使用した成分画像の少なくとも一部も教師データに含めた状態で特徴量の抽出及び異常度計算のアルゴリズムが行われてもよい。このような構成とした場合、1枚の正常画像から得られる1枚の前処理画像だけではなく、さらに成分画像を教師データとして使用するため、事前に準備する正常画像の枚数を減らしながらも教師データの数を増やすことができる。したがって、十分な学習に必要となる教師データをより簡単に準備することができる。 Note that, in the above procedure, an algorithm for machine learning is used, but the learning method can be changed as appropriate. For example, as described above, in the learning stage of the algorithm used for extracting the feature amount and calculating the degree of abnormality, when the feature amount is extracted from the preprocessed image created from a plurality of normal images prepared in advance and the degree of abnormality is calculated, the calculated value of the degree of abnormality is adjusted to be low. In the above example, the preprocessed image created from the normal image is used for learning, but instead of using only the preprocessed image for learning, the component images used to create the preprocessed image may be used as training data for machine learning. As described above, the preprocessed image generated from the normal image is created using 72 component images created from one normal image. These component images are images that contain information related to the characteristic parts of the normal image, so they can be used as training data for the algorithm for extracting the feature amount and calculating the degree of abnormality.
Therefore, the algorithm for extracting the feature amount and calculating the degree of abnormality may be performed with at least a part of the component images used to generate the preprocessed image included in the training data. In this configuration, not only one preprocessed image obtained from one normal image but also component images are used as training data, so that the number of training data can be increased while reducing the number of normal images prepared in advance. Therefore, it is easier to prepare training data that would be necessary for sufficient training.
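The training-data augmentation described above can be sketched as follows. This is a minimal illustration under assumed data: random arrays stand in for the real preprocessed and component images, the subset selection (every 8th component image) is arbitrary, and only the figure of 72 component images per normal image is taken from the embodiment's example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 4 normal images, each yielding 1 preprocessed image
# and 72 component images (the embodiment's example count).
preprocessed = [rng.random((8, 8)) for _ in range(4)]
components = [[rng.random((8, 8)) for _ in range(72)] for _ in range(4)]

# Baseline training set: preprocessed images only.
train_base = list(preprocessed)

# Augmented training set: also include a subset of the component images
# (here every 8th one, i.e. 9 per normal image) as extra samples.
train_aug = list(preprocessed)
for comps in components:
    train_aug.extend(comps[::8])

# The augmented set is larger without requiring more normal images.
print(len(train_base), len(train_aug))  # 4 vs 4 + 4*9 = 40
```

The point of the sketch is only the bookkeeping: the same captured normal images yield an order of magnitude more training samples once their component images are admitted as training data.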
[効果]
 上記の基板検査装置(検査制御部110)及び基板検査方法によれば、複数の正常画像から生成される色空間における複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、複数の正常画像G1のそれぞれに係る前処理画像G30が準備される。また、検査対象となる基板を撮像した検査画像T1から生成される色空間における複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、検査画像に係る前処理画像T30が準備される。そして、これらの画像に基づいて、検査対象となる基板の欠陥の検査が行われる。このように、正常画像G1及び検査画像T1における色空間の複数の成分の成分値の分散に係る分散情報を利用した前処理画像G30,T30が生成され、これを用いて基板の欠陥の検査が行われる。これにより、色空間における成分値の分散を考慮した画像を用いて検査を行うことができる。したがって、基板表面における異常を精度良く検出するのに有用となる。
[Effect]
According to the above-mentioned substrate inspection device (inspection control unit 110) and substrate inspection method, a preprocessed image G30 for each of the plurality of normal images G1 is prepared based on variance information related to the variance of component values of each pixel included in a plurality of component images in a color space generated from a plurality of normal images. Also, a preprocessed image T30 related to the inspection image is prepared based on variance information related to the variance of component values of each pixel included in a plurality of component images in a color space generated from an inspection image T1 obtained by capturing an inspection target substrate. Then, based on these images, an inspection of defects of the substrate to be inspected is performed. In this way, preprocessed images G30 and T30 using variance information related to the variance of component values of a plurality of components in a color space in the normal image G1 and the inspection image T1 are generated, and the substrate is inspected for defects using these. This allows the inspection to be performed using an image that takes into account the variance of component values in a color space. Therefore, it is useful for detecting abnormalities on the substrate surface with high accuracy.
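The difference-image step underlying these preprocessed images can be sketched as follows. This is a NumPy illustration with randomly generated arrays standing in for the captured images; the array shapes and single-channel simplification are assumptions, not the embodiment's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of normal images (N images, H x W, single channel)
# and one inspection image of the same size.
normals = rng.random((5, 16, 16))
inspection = rng.random((16, 16))

# Average image generated from the normal images.
average = normals.mean(axis=0)

# Normal difference images: each normal image minus the average image.
normal_diffs = normals - average[None, :, :]

# Inspection difference image: inspection image minus the same average.
inspection_diff = inspection - average
```

By construction the normal difference images sum to zero at every pixel, so each one isolates how that particular normal image deviates from the shared average.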
 従来から、画像を用いて基板の欠陥検査を行う手法として、色空間の成分画像を用いることは知られている。しかしながら、従来の手法では、画像毎に含まれ得る色味の違いなどを含む情報が欠陥検査に使用されるため、検査の精度という点で改良の余地があった。これに対して、上記実施形態で説明した手法によれば、成分画像から得られる成分値の分散に係る情報を利用して検査に使用される前処理画像G30,T30が作成される。このような構成とすることで、成分画像に含まれる成分値の分散に関する情報を反映した前処理画像G30,T30が得られる。このような前処理画像を用いて欠陥検査を行う構成とすることで、欠陥検出の精度が高められ得る。 The use of component images in a color space has long been known as a method for inspecting substrates for defects using images. However, in conventional methods, information including differences in color that may be contained in each image is used for defect inspection, leaving room for improvement in terms of inspection accuracy. In contrast, according to the method described in the above embodiment, preprocessed images G30, T30 used for inspection are created using information related to the variance of component values obtained from the component images. With this configuration, preprocessed images G30, T30 are obtained that reflect information related to the variance of component values contained in the component images. With a configuration in which such preprocessed images are used for defect inspection, the accuracy of defect detection can be improved.
 また、上記実施形態では、複数の成分画像に含まれる各画素の成分値の分散情報に基づいて作成された、1つのモデルマップG20が作成される。そして、1つの正常画像から得られる複数の成分画像に含まれる各画素の成分値の分散情報と、モデルマップG20との差分を求めることで、1つの正常画像に係る前処理画像G30が作成される。そして、検査画像から得られる複数の成分画像に含まれる各画素の成分値の分散情報と、モデルマップG20との差分を求めることで、検査画像に係る前処理画像T30が作成される。1つのモデルマップとは、複数の正常画像G1における各画素の成分値の分散を取りまとめたものといえる。このようなモデルマップを用いて正常画像G1及び検査画像T1に係る前処理画像を作成する構成とすることで、正常画像において生じ得る成分値の分散を考慮した前処理画像を作成することができる。したがって、基板表面における異常をより精度良く検出することができる。 In the above embodiment, a model map G20 is created based on the variance information of the component values of each pixel included in the multiple component images. A preprocessed image G30 for one normal image is created by calculating the difference between the variance information of the component values of each pixel included in the multiple component images obtained from one normal image and the model map G20. A preprocessed image T30 for the inspection image is created by calculating the difference between the variance information of the component values of each pixel included in the multiple component images obtained from the inspection image and the model map G20. A model map can be said to be a compilation of the variances of the component values of each pixel in multiple normal images G1. By using such a model map to create preprocessed images for the normal image G1 and the inspection image T1, it is possible to create preprocessed images that take into account the variance of the component values that may occur in the normal image. Therefore, abnormalities on the substrate surface can be detected more accurately.
 また、分散情報は、複数の成分画像に含まれる各画素の成分値に係るマハラノビス距離を求めることにより得られることとしてもよい。このような構成とすることで、マハラノビス距離を求めることによって分散情報をより精度良く求めることができるため、基板表面における異常を検出する際により有用な前処理画像を作成することができる。 The variance information may also be obtained by calculating the Mahalanobis distance related to the component values of each pixel contained in the multiple component images. With this configuration, the variance information can be obtained more accurately by calculating the Mahalanobis distance, making it possible to create a preprocessed image that is more useful when detecting abnormalities on the substrate surface.
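A minimal sketch of this variance computation follows, assuming per-pixel component stacks with a shared mean and covariance fitted from the normal images. Pooling the per-image Mahalanobis maps into a model map by simple averaging is an assumption made for illustration; the patent does not prescribe this particular pooling.

```python
import numpy as np

rng = np.random.default_rng(2)

def mahalanobis_map(stack, mean, cov_inv):
    """Per-pixel Mahalanobis distance for an (H, W, C) stack of
    component values against a shared mean and inverse covariance."""
    d = stack - mean                                   # (H, W, C)
    return np.sqrt(np.einsum("hwc,cd,hwd->hw", d, cov_inv, d))

# Hypothetical component stacks: C component values per pixel for
# several normal images and one inspection image.
H, W, C = 8, 8, 6
normal_stacks = rng.normal(size=(4, H, W, C))
inspect_stack = rng.normal(size=(H, W, C))

# Fit the mean and covariance over all pixels of all normal images.
flat = normal_stacks.reshape(-1, C)
mean = flat.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(flat, rowvar=False))

# Model map: per-pixel variance information pooled over the normal images.
normal_maps = np.stack([mahalanobis_map(s, mean, cov_inv)
                        for s in normal_stacks])
model_map = normal_maps.mean(axis=0)

# Preprocessed images: difference between each image's own variance
# information and the model map.
preprocessed_normals = normal_maps - model_map[None]
preprocessed_inspect = mahalanobis_map(inspect_stack, mean, cov_inv) - model_map
```

The Mahalanobis distance weights each component deviation by the fitted covariance, so correlated color components do not get double-counted the way they would with a plain Euclidean distance.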
 色空間として、RGB色空間、HSV色空間、XYZ色空間、及びLab色空間の少なくとも1つが用いられてもよい。上記のように、RGB色空間、HSV色空間、XYZ色空間、及びLab色空間の少なくとも1つを用いて成分画像を作成することで、基板表面における異常を検出する際により有用な前処理画像を作成することができる。なお、上記実施形態で説明したように、RGB色空間、HSV色空間、XYZ色空間、及びLab色空間のうち複数の色空間における成分画像を用いてもよい。この場合、1枚の画像を別の色空間で定義した際の成分画像が得られる。このような成分画像を用いて前処理画像を作ることで、1つの画像の情報を、互いに異なる複数の成分に分解して得られる成分画像を用いることになり、画像に含まれる特徴をより多面的に捉えることができる。したがって、欠陥検出の精度向上に有用な前処理画像を作成することができる。 At least one of the RGB color space, HSV color space, XYZ color space, and Lab color space may be used as the color space. As described above, by creating component images using at least one of the RGB, HSV, XYZ, and Lab color spaces, a preprocessed image that is more useful for detecting abnormalities on the substrate surface can be created. As described in the above embodiment, component images in multiple color spaces among the RGB, HSV, XYZ, and Lab color spaces may also be used. In this case, component images are obtained by defining the same image in different color spaces. By creating a preprocessed image from such component images, the information of one image is decomposed into multiple mutually different components, so the features contained in the image can be captured from multiple perspectives. Therefore, a preprocessed image useful for improving the accuracy of defect detection can be created.
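The component-image decomposition can be illustrated as follows. This NumPy sketch makes several simplifying assumptions: the sRGB-to-XYZ conversion uses the standard linear-RGB matrix with gamma handling omitted, and the hue component is skipped for brevity; the embodiment's exact conversions are not specified here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical RGB image with channel values in [0, 1].
rgb = rng.random((4, 4, 3))

# RGB component images: one single-channel image per channel.
r_img, g_img, b_img = (rgb[:, :, i] for i in range(3))

# XYZ component images via the standard (linear) sRGB -> XYZ matrix.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
xyz = rgb @ M.T
x_img, y_img, z_img = (xyz[:, :, i] for i in range(3))

# HSV components (value V and saturation S; hue omitted for brevity).
v_img = rgb.max(axis=2)
s_img = np.where(v_img > 0,
                 (v_img - rgb.min(axis=2)) / np.maximum(v_img, 1e-12),
                 0.0)

component_images = [r_img, g_img, b_img, x_img, y_img, z_img, v_img, s_img]
```

Each entry of `component_images` is a single-channel image, so downstream variance statistics can treat every color-space component uniformly.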
 正常差分画像及び検査差分画像は、平均画像との差分を強調するための強調処理を行った後の画像を含む態様としてもよい。このような構成とすることで、画像毎の特徴的な成分をより強調した画像を得ることができ、前処理画像の作成に使用することができる。 The normal difference image and the test difference image may include images that have been subjected to enhancement processing to enhance the difference from the average image. With this configuration, it is possible to obtain images that further enhance the characteristic components of each image, and these images can be used to create pre-processed images.
 正常差分画像及び検査差分画像は、画像中のムラを除去するためのムラ除去処理を行った後の画像を含む態様としてもよい。このような構成とすることで、画像中に発生するムラを適切に除去した画像を得ることができ、前処理画像の作成に使用することができる。 The normal difference image and the inspection difference image may include an image after unevenness removal processing to remove unevenness in the image. With this configuration, an image can be obtained in which unevenness that occurs in the image has been appropriately removed, and can be used to create a preprocessed image.
 正常差分画像及び検査差分画像は、所定領域の画素の成分値を平均化した後に、平均化した値との差分を求める処理を行った後の画像を含む態様としてもよい。このような構成とすることで、単画素等で偶発的に発生する異常値を除去した画像を得ることができ、前処理画像の作成に使用することができる。 The normal difference image and the test difference image may include an image obtained after averaging the component values of pixels in a specified area and then performing a process to determine the difference from the averaged value. By configuring in this way, an image can be obtained in which abnormal values that occur accidentally in single pixels, etc. have been removed, and this can be used to create a preprocessed image.
 正常差分画像及び検査差分画像は、画像中のノイズを除去するためのノイズ除去処理を行った後の画像を含む態様としてもよい。このような構成とすることで、画像に含まれ得るノイズを除去した画像を得ることができ、前処理画像の作成に使用することができる。 The normal difference image and the test difference image may include an image after noise removal processing to remove noise from the image. By configuring in this way, an image can be obtained from which noise that may be contained in the image has been removed, and the image can be used to create a preprocessed image.
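The optional preprocessing steps described in the preceding paragraphs (difference enhancement, local-average differencing to suppress single-pixel outliers, and noise removal) might be sketched as follows. The concrete operators chosen here (a fixed gain, a 3x3 box mean, and magnitude clipping) are illustrative assumptions, not the embodiment's actual processing.

```python
import numpy as np

def local_mean(img, k=3):
    """Mean over a k x k neighborhood (edges use reflected padding)."""
    p = k // 2
    padded = np.pad(img, p, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(4)
diff = rng.normal(size=(16, 16))   # a hypothetical difference image

# Enhancement: amplify the differences from the average image.
enhanced = 2.0 * diff

# Local-average difference: compare each pixel with the mean of its
# neighborhood to suppress isolated single-pixel outliers.
avg_diff = diff - local_mean(diff)

# Simple noise removal: clip small-magnitude values to zero.
denoised = np.where(np.abs(diff) < 0.5, 0.0, diff)
```

Each operator maps a difference image to another image of the same size, so any subset of these steps can be chained before the component-image generation.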
 基板の欠陥の検査を行うことは、正常画像G1に係る前処理画像G30における画素毎の異常度を示す正常画像異常度マップG40と、検査画像T1に係る前処理画像T30における画素毎の異常度を示す検査画像異常度マップT40と、を作成することと、正常画像異常度マップG40と、検査画像異常度マップT40との差分から、欠陥が存在すると推定される領域を特定することと、を含んでもよい。上記の構成とすることで、正常画像G1に係る前処理画像G30から作成された正常画像異常度マップG40と、検査画像T1に係る前処理画像T30から作成された検査画像異常度マップT40との差分から欠陥が存在すると推定される領域が特定される。異常度マップG40,T40は、画素毎の異常度を示すマップであり、正常画像G1と検査画像T1との画素毎の異常度の差から欠陥が存在する領域が推定されるため、各画素の情報に基づいてより精度良く推定が行われる。 Inspecting the substrate for defects may include creating a normal image abnormality map G40 showing the degree of abnormality for each pixel in the preprocessed image G30 related to the normal image G1, and an inspection image abnormality map T40 showing the degree of abnormality for each pixel in the preprocessed image T30 related to the inspection image T1, and identifying an area where a defect is estimated to exist from the difference between the normal image abnormality map G40 and the inspection image abnormality map T40. With the above configuration, an area where a defect is estimated to exist is identified from the difference between the normal image abnormality map G40 created from the preprocessed image G30 related to the normal image G1 and the inspection image abnormality map T40 created from the preprocessed image T30 related to the inspection image T1. The abnormality maps G40 and T40 show the degree of abnormality for each pixel, and the defective area is estimated from the per-pixel difference in abnormality between the normal image G1 and the inspection image T1, so the estimation is performed more accurately based on the information of each pixel.
 正常画像異常度マップG40と、検査画像異常度マップT40と、を作成することにおいて、前処理画像を入力することで異常度マップを生成するアルゴリズムが用いられてもよい。このとき、アルゴリズムは、複数の正常画像G1、複数の正常差分画像、及び、これらから得られる複数の成分画像の少なくとも一部を教師データとした機械学習によって作成されてもよい。上記の構成によれば、異常度マップを生成するアルゴリズムが、複数の正常画像、複数の正常差分画像、及び、これらから得られる複数の成分画像の少なくとも一部を教師データとした機械学習によって作成される。このような構成とすることで、アルゴリズムの作成のために準備される正常画像の枚数を減らすことができるため、より簡便に精度の高いアルゴリズムを作成することができる。 In creating the normal image abnormality map G40 and the test image abnormality map T40, an algorithm may be used that generates an abnormality map by inputting a preprocessed image. In this case, the algorithm may be created by machine learning using the multiple normal images G1, the multiple normal difference images, and at least a portion of the multiple component images obtained therefrom as training data. According to the above configuration, the algorithm that generates the abnormality map is created by machine learning using the multiple normal images, the multiple normal difference images, and at least a portion of the multiple component images obtained therefrom as training data. With this configuration, the number of normal images prepared for creating the algorithm can be reduced, making it easier to create a highly accurate algorithm.
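The final comparison of anomaly maps can be sketched as follows. The threshold value, map sizes, and the simulated defect region are assumptions for illustration; in the embodiment the maps would come from the learned algorithm rather than random data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical anomaly maps (per-pixel anomaly scores).
normal_map = rng.random((16, 16)) * 0.1
inspect_map = normal_map.copy()
inspect_map[4:7, 4:7] += 1.0    # simulate a 3x3 defective region

# Difference of the anomaly maps, thresholded to flag likely defects.
delta = inspect_map - normal_map
threshold = 0.5
defect_mask = delta > threshold

# Any flagged pixel means the substrate is judged to contain a defect.
has_defect = bool(defect_mask.any())
print(has_defect)  # True for this simulated input
```

Depending on the desired output, either the full `defect_mask` (analogous to the defect detection image T45) or only the boolean `has_defect` judgment can be reported.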
[変形例]
 以上、種々の例示的実施形態について説明してきたが、上述した例示的実施形態に限定されることなく、様々な省略、置換、及び変更がなされてもよい。また、異なる実施形態における要素を組み合わせて他の実施形態を形成することが可能である。
[Modification]
Although various exemplary embodiments have been described above, the present invention is not limited to the exemplary embodiments described above, and various omissions, substitutions, and modifications may be made. In addition, elements in different embodiments can be combined to form other embodiments.
 例えば、上記実施形態では、1枚のカラー画像から、RGB色空間、HSV色空間、XYZ色空間、及びLab色空間を用いて、色に関する12成分(R,G,B,H,S,V,X,Y,Z,aa,ba,La)について、それぞれ前処理を施して得られる6種類の画像(対象強調画像I2、対象ムラ除去画像I3、及び、4つの処理後画像I11、I12、I13、I14)を作成する場合について説明した。しかしながら、この構成は適宜変更することができ、例えば、上述の色空間の一部のみを使用してもよい。また、上述の色空間とは異なる色空間を用いて成分画像を作成することとしてもよい。 For example, in the above embodiment, a case has been described in which six types of images (target emphasis image I2, target unevenness removal image I3, and four processed images I11, I12, I13, I14) are created from one color image by preprocessing each of the 12 color components (R, G, B, H, S, V, X, Y, Z, aa, ba, La) using the RGB color space, HSV color space, XYZ color space, and Lab color space. However, this configuration can be modified as appropriate, and for example, only a portion of the above-mentioned color spaces may be used. Also, component images may be created using a color space different from the above-mentioned color spaces.
 また、前処理画像を作成した後に、異常度マップを作成し、欠陥を検出する際の手順は適宜変更することができる。上記実施形態で説明した手順は一例であって、画像を用いた欠陥検出において使用され得る種々の手法に変更することができる。また、欠陥検出において使用され得る手法に応じて、前処理画像についてさらなる画像処理等を施してもよい。 In addition, after creating the preprocessed image, the procedure for creating the anomaly map and detecting defects can be changed as appropriate. The procedure described in the above embodiment is one example, and can be changed to various methods that can be used in defect detection using images. In addition, further image processing, etc. can be performed on the preprocessed image depending on the method that can be used in defect detection.
 また、上述した一連の処理は、適宜変更可能である。上記一連の処理において、制御装置100(検査制御部110)は、一のステップと次のステップとを並列に実行してもよく、上述した例とは異なる順序で各ステップを実行してもよい。制御装置100(検査制御部110)は、いずれかのステップを省略してもよく、いずれかのステップにおいて上述の例とは異なる処理を実行してもよい。 The above-described series of processes can be modified as appropriate. In the above-described series of processes, the control device 100 (inspection control unit 110) may execute one step and the next step in parallel, or may execute each step in an order different from the example described above. The control device 100 (inspection control unit 110) may omit any step, or may execute a process in any step that is different from the example described above.
 検査制御部110を構成するコンピュータが、塗布現像装置2以外に設けられてもよい。この場合、制御装置100と検査制御部110とが、有線通信、または無線通信によって、通信可能に接続されてもよい。制御装置100が、検査ユニットU3から撮像画像を取得したうえで、その撮像画像を検査制御部110に送信してもよい。検査制御部110は、異常の有無の判定結果を示す情報を制御装置100に送信してもよい。 The computer constituting the inspection control unit 110 may be provided outside the coating and developing apparatus 2. In this case, the control device 100 and the inspection control unit 110 may be communicatively connected by wired or wireless communication. The control device 100 may acquire an image from the inspection unit U3 and then transmit the image to the inspection control unit 110. The inspection control unit 110 may transmit information indicating the determination result of whether or not there is an abnormality to the control device 100.
 以上の説明から、本開示の種々の実施形態は、説明の目的で本明細書において説明されており、本開示の範囲及び主旨から逸脱することなく種々の変更をなし得ることが、理解されるであろう。したがって、本明細書に開示した種々の実施形態は限定することを意図しておらず、真の範囲と主旨は、添付の特許請求の範囲によって示される。 From the foregoing, it will be understood that the various embodiments of the present disclosure have been described herein for purposes of illustration, and that various modifications may be made without departing from the scope and spirit of the present disclosure. Accordingly, the various embodiments disclosed herein are not intended to be limiting, with the true scope and spirit being indicated by the appended claims.
 1…基板処理システム、2…塗布現像装置、100…制御装置、102…処理制御部、110…検査制御部、112…欠陥検出部、114…結果出力部、120…モデルマップ作成部、122…画像取得部、124…成分画像生成部、126…モデルマップ生成部、130…前処理部、132…画像取得部、134…成分画像生成部、136…前処理画像生成部。

 
1...substrate processing system, 2...coating and developing apparatus, 100...control device, 102...processing control unit, 110...inspection control unit, 112...defect detection unit, 114...result output unit, 120...model map creation unit, 122...image acquisition unit, 124...component image generation unit, 126...model map generation unit, 130...pre-processing unit, 132...image acquisition unit, 134...component image generation unit, 136...pre-processing image generation unit.

Claims (16)

  1.  検査対象となる基板を撮像して得られた検査画像を用いて、前記検査対象となる基板の検査を行う基板検査方法であって、
     複数の正常基板を撮像して得られた複数の正常画像から生成した平均画像と、前記複数の正常画像それぞれとの差分である複数の正常差分画像を生成し、前記複数の正常画像及び前記複数の正常差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記複数の正常画像のそれぞれに係る前処理画像を生成することと、
     前記検査画像と、前記平均画像との差分である検査差分画像を生成し、前記検査画像及び前記検査差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記検査画像に係る前処理画像を生成することと、
     前記正常画像に係る前処理画像と、前記検査画像に係る前処理画像とに基づき、前記検査対象となる基板の欠陥の検査を行うことと、
     を含む、基板検査方法。
    A substrate inspection method for inspecting a substrate to be inspected by using an inspection image obtained by imaging the substrate, comprising:
    generating a plurality of normal difference images which are differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in a color space from the plurality of normal images and the plurality of normal difference images; and generating a preprocessed image for each of the plurality of normal images based on variance information relating to the variance of component values of each pixel included in the plurality of component images;
    generating an inspection difference image which is a difference between the inspection image and the average image, generating a plurality of component images in a color space from the inspection image and the inspection difference image, and generating a preprocessed image for the inspection image based on variance information relating to the variance of component values of each pixel included in the plurality of component images;
    inspecting the substrate to be inspected for defects based on a preprocessed image related to the normal image and a preprocessed image related to the inspection image;
    A substrate inspection method comprising:
  2.  前記正常画像に係る前処理画像を生成することにおいて、前記複数の正常画像に係る前記複数の成分画像に含まれる各画素の成分値の分散情報に基づいて作成された、1つのモデルマップを生成し、1つの前記正常画像から得られる前記複数の成分画像に含まれる各画素の成分値の分散情報と、前記モデルマップとの差分を求めることで、1つの前記正常画像に係る前処理画像を作成し、
     前記検査画像に係る前処理画像を生成することにおいて、前記検査画像から得られる前記複数の成分画像に含まれる各画素の成分値の分散情報と、前記モデルマップとの差分を求めることで、前記検査画像に係る前処理画像を作成する、請求項1に記載の基板検査方法。
    In generating a preprocessed image for the normal image, a model map is generated based on variance information of component values of each pixel included in the component images for the normal images, and a difference between the model map and variance information of component values of each pixel included in the component images obtained from one of the normal images is calculated to generate a preprocessed image for one of the normal images;
    The substrate inspection method of claim 1, wherein in generating a preprocessed image for the inspection image, the preprocessed image for the inspection image is created by calculating the difference between variance information of component values of each pixel contained in the multiple component images obtained from the inspection image and the model map.
  3.  前記分散情報は、前記複数の成分画像に含まれる各画素の成分値に係るマハラノビス距離を求めることにより得られる、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein the variance information is obtained by calculating the Mahalanobis distance related to the component values of each pixel included in the plurality of component images.
  4.  前記色空間として、RGB色空間、HSV色空間、XYZ色空間、及びLab色空間の少なくとも1つが用いられる、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein at least one of the RGB color space, the HSV color space, the XYZ color space, and the Lab color space is used as the color space.
  5.  前記正常差分画像及び前記検査差分画像は、前記平均画像との差分を強調するための強調処理を行った後の画像を含む、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein the normal difference image and the inspection difference image include images after enhancement processing has been performed to enhance the difference from the average image.
  6.  前記正常差分画像及び前記検査差分画像は、画像中のムラを除去するためのムラ除去処理を行った後の画像を含む、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein the normal difference image and the inspection difference image include images after irregularity removal processing for removing irregularities in the images.
  7.  前記正常差分画像及び前記検査差分画像は、所定領域の画素の成分値を平均化した後に、平均化した値との差分を求める処理を行った後の画像を含む、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein the normal difference image and the inspection difference image include an image obtained after averaging component values of pixels in a specified area and then performing a process to determine the difference from the averaged value.
  8.  前記正常差分画像及び前記検査差分画像は、画像中のノイズを除去するためのノイズ除去処理を行った後の画像を含む、請求項1に記載の基板検査方法。 The substrate inspection method according to claim 1, wherein the normal difference image and the inspection difference image include images after noise removal processing for removing noise from the images.
  9.  前記検査対象となる基板の欠陥の検査を行うことは、
      前記正常画像に係る前処理画像における画素毎の異常度を示す正常画像異常度マップと、前記検査画像に係る前処理画像における画素毎の異常度を示す検査画像異常度マップと、を作成することと、
      前記正常画像異常度マップと、前記検査画像異常度マップとの差分から、欠陥が存在すると推定される領域を特定することと、
     を含む、請求項1に記載の基板検査方法。
    Inspecting the substrate to be inspected for defects includes:
    creating a normal image abnormality degree map showing an abnormality degree for each pixel in a preprocessed image related to the normal image, and an inspection image abnormality degree map showing an abnormality degree for each pixel in a preprocessed image related to the inspection image;
    Identifying an area in which a defect is estimated to exist based on a difference between the normal image anomaly degree map and the inspection image anomaly degree map;
    The substrate inspection method according to claim 1.
  10.  前記正常画像異常度マップと、前記検査画像異常度マップと、を作成することにおいて、前処理画像を入力することで異常度マップを生成するアルゴリズムが用いられ、
     前記アルゴリズムは、前記複数の正常画像、前記複数の正常差分画像、及び、これらから得られる前記複数の成分画像の少なくとも一部を教師データとした機械学習によって作成される、請求項9に記載の基板検査方法。
    In creating the normal image abnormality degree map and the inspection image abnormality degree map, an algorithm is used that generates an abnormality degree map by inputting a preprocessed image;
    The substrate inspection method according to claim 9 , wherein the algorithm is created by machine learning using the plurality of normal images, the plurality of normal differential images, and at least a portion of the plurality of component images obtained therefrom as training data.
  11.  検査対象となる基板を撮像して得られた検査画像を用いて、前記検査対象となる基板の検査を行う基板検査装置であって、
     複数の正常基板を撮像して得られた複数の正常画像から生成した平均画像と、前記複数の正常画像それぞれとの差分である複数の正常差分画像を生成し、前記複数の正常画像及び前記複数の正常差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記正常画像に係る前処理画像を生成する正常前処理画像作成部と、
     前記検査画像と、前記平均画像との差分である検査差分画像を生成し、前記検査画像及び前記検査差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記検査画像に係る前処理画像を生成する検査前処理画像作成部と、
     前記正常画像に係る前処理画像と、前記検査画像に係る前処理画像とに基づき、前記検査対象となる基板の欠陥の検査を行う欠陥検出部と、
     を含む、基板検査装置。
    A substrate inspection apparatus for inspecting a substrate to be inspected by using an inspection image obtained by imaging the substrate, comprising:
    a normal preprocessed image creation unit that generates a plurality of normal difference images which are differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images, generates a plurality of component images in a color space from the plurality of normal images and the plurality of normal difference images, and generates a preprocessed image for the normal images based on variance information related to the variance of component values of each pixel included in the plurality of component images;
    a pre-inspection image creation unit that generates an inspection difference image which is the difference between the inspection image and the average image, generates a plurality of component images in a color space from the inspection image and the inspection difference image, and generates a pre-processed image for the inspection image based on variance information relating to the variance of component values of each pixel included in the plurality of component images;
    a defect detection unit that inspects the substrate to be inspected for defects based on a preprocessed image related to the normal image and a preprocessed image related to the inspection image;
    A substrate inspection apparatus comprising:
  12.  前記正常前処理画像作成部は、前記複数の正常画像に係る前記複数の成分画像に含まれる各画素の成分値の分散情報に基づいて作成された、1つのモデルマップを生成し、1つの前記正常画像から得られる前記複数の成分画像に含まれる各画素の成分値の分散情報と、前記モデルマップとの差分を求めることで、1つの前記正常画像に係る前処理画像を作成し、
     前記検査前処理画像作成部は、前記検査画像から得られる前記複数の成分画像に含まれる各画素の成分値の分散情報と、前記モデルマップとの差分を求めることで、前記検査画像に係る前処理画像を作成する、請求項11に記載の基板検査装置。
    the normal preprocessed image creation unit generates one model map created based on variance information of component values of each pixel included in the plurality of component images related to the plurality of normal images, and creates a preprocessed image related to one of the normal images by calculating a difference between the model map and variance information of component values of each pixel included in the plurality of component images obtained from one of the normal images;
    The substrate inspection apparatus of claim 11, wherein the pre-inspection image creation unit creates a pre-processed image related to the inspection image by calculating the difference between variance information of component values of each pixel contained in the multiple component images obtained from the inspection image and the model map.
  13.  前記分散情報は、前記複数の成分画像に含まれる各画素の成分値に係るマハラノビス距離を求めることにより得られる、請求項11に記載の基板検査装置。 The substrate inspection device according to claim 11, wherein the variance information is obtained by calculating the Mahalanobis distance related to the component values of each pixel included in the plurality of component images.
  14.  前記欠陥検出部は、
      前記正常画像に係る前処理画像における画素毎の異常度を示す正常画像異常度マップと、前記検査画像に係る前処理画像における画素毎の異常度を示す検査画像異常度マップと、を作成し、
      前記正常画像異常度マップと、前記検査画像異常度マップとの差分から、欠陥が存在すると推定される領域を特定する、請求項11に記載の基板検査装置。
    The defect detection unit includes:
    creating a normal image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the normal image, and an inspection image abnormality degree map showing the degree of abnormality for each pixel in a preprocessed image related to the inspection image;
    The substrate inspection device according to claim 11 , wherein an area in which a defect is estimated to exist is identified from a difference between the normal image anomaly degree map and the inspection image anomaly degree map.
  15.  前記欠陥検出部は、前記正常画像異常度マップと、前記検査画像異常度マップと、を作成する際に、前処理画像を入力することで異常度マップを生成するアルゴリズムを使用し、
     前記アルゴリズムは、前記複数の正常画像、前記複数の正常差分画像、及び、これらから得られる前記複数の成分画像の少なくとも一部を教師データとした機械学習によって作成される、請求項14に記載の基板検査装置。
    the defect detection unit uses an algorithm for generating an anomaly degree map by inputting a preprocessed image when creating the normal image anomaly degree map and the inspection image anomaly degree map;
    The substrate inspection device according to claim 14 , wherein the algorithm is created by machine learning using the plurality of normal images, the plurality of normal differential images, and at least a portion of the plurality of component images obtained therefrom as training data.
  16.  検査対象となる基板を撮像して得られた検査画像を用いて、前記検査対象となる基板の検査を行う基板検査をコンピュータに実行させる基板検査プログラムであって、
     複数の正常基板を撮像して得られた複数の正常画像から生成した平均画像と、前記複数の正常画像それぞれとの差分である複数の正常差分画像を生成し、前記複数の正常画像及び前記複数の正常差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記複数の正常画像のそれぞれに係る前処理画像を生成することと、
     前記検査画像と、前記平均画像との差分である検査差分画像を生成し、前記検査画像及び前記検査差分画像から、色空間における複数の成分画像を生成し、前記複数の成分画像に含まれる各画素の成分値の分散に係る分散情報に基づいて、前記検査画像に係る前処理画像を生成することと、
     前記正常画像に係る前処理画像と、前記検査画像に係る前処理画像とに基づき、前記検査対象となる基板の欠陥の検査を行うことと、
     をコンピュータに実行させる、基板検査プログラム。

     
    A substrate inspection program for causing a computer to execute a substrate inspection for inspecting a substrate to be inspected by using an inspection image obtained by imaging the substrate, the substrate inspection program comprising:
    generating a plurality of normal difference images which are differences between an average image generated from a plurality of normal images obtained by imaging a plurality of normal substrates and each of the plurality of normal images; generating a plurality of component images in a color space from the plurality of normal images and the plurality of normal difference images; and generating a preprocessed image for each of the plurality of normal images based on variance information relating to the variance of component values of each pixel included in the plurality of component images;
    generating an inspection difference image which is a difference between the inspection image and the average image, generating a plurality of component images in a color space from the inspection image and the inspection difference image, and generating a preprocessed image for the inspection image based on variance information relating to the variance of component values of each pixel included in the plurality of component images;
    inspecting the substrate to be inspected for defects based on a preprocessed image related to the normal image and a preprocessed image related to the inspection image;
    A substrate inspection program that causes a computer to execute the above.

PCT/JP2023/038877 2022-11-10 2023-10-27 Substrate inspection method, substrate inspection device, and substrate inspection program WO2024101186A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022180489 2022-11-10
JP2022-180489 2022-11-10

Publications (1)

Publication Number Publication Date
WO2024101186A1 true WO2024101186A1 (en) 2024-05-16

Family

ID=91032861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/038877 WO2024101186A1 (en) 2022-11-10 2023-10-27 Substrate inspection method, substrate inspection device, and substrate inspection program

Country Status (1)

Country Link
WO (1) WO2024101186A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000046749A (en) * 1998-07-31 2000-02-18 Yamatake Corp Method for inspecting connection quality of part
JP2000346627A (en) * 1999-06-07 2000-12-15 Toray Eng Co Ltd Inspection system
US20150051860A1 (en) * 2013-08-19 2015-02-19 Taiwan Semiconductor Manufacturing Co., Ltd. Automatic optical appearance inspection by line scan apparatus
US20190384165A1 (en) * 2018-06-15 2019-12-19 Samsung Electronics Co., Ltd. Method for manufacturing a semiconductor device


JP4889018B2 (en) Appearance inspection method