WO2023021697A1 - Optical circuit measurement system and measurement method - Google Patents

Optical circuit measurement system and measurement method

Info

Publication number
WO2023021697A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
training
super
optical waveguide
measurement
Prior art date
Application number
PCT/JP2021/030623
Other languages
English (en)
Japanese (ja)
Inventor
雅 太田
慶太 山口
賢哉 鈴木
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to JP2023542161A priority Critical patent/JPWO2023021697A1/ja
Priority to PCT/JP2021/030623 priority patent/WO2023021697A1/fr
Publication of WO2023021697A1 publication Critical patent/WO2023021697A1/fr

Links

Images

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01B - MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 - Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness

Definitions

  • The present disclosure relates to a measurement system and measurement method for optical waveguide devices. More particularly, it relates to a measurement system and measurement method for measuring at least part of an optical circuit formed on a semiconductor or insulator wafer with high accuracy and high throughput.
  • Examples of optical waveguide devices include optical wavelength multiplexing/demultiplexing circuits and optical switch circuits. The components of these optical circuits include multiple optical signal paths with different optical path lengths and multiplexing/demultiplexing elements, which realize wavelength multiplexing/demultiplexing and switching functions using light-wave interference.
  • The interference characteristics of light waves depend on the difference in optical path length between the optical signal paths, which is determined primarily by the dimensions of the optical waveguides. Therefore, the optical characteristics of an optical circuit fabricated on a wafer fluctuate according to the in-plane distribution of the optical constants of the materials forming the optical waveguide and the dimensional variations in the structure of the optical waveguide. Among these factors, the width of the core of the optical waveguide contributes strongly to the optical characteristics of the optical waveguide device, and the variation in the optical characteristics of the manufactured device depends largely on the accuracy of the process of patterning the optical waveguide.
  • FIG. 1 is a diagram showing a conventional method for manufacturing an optical waveguide.
  • A typical manufacturing process will be described using a planar lightwave circuit made of quartz-based glass as an example.
  • In step 1, the lower clad deposition step, a glass film that will become the lower clad 12 is deposited on a silicon substrate (wafer) 11.
  • The lower clad 12 is made of SiO2 doped with P2O5 or B2O3, deposited by the flame hydrolysis deposition (FHD) method.
  • In step 2, the core deposition step, the FHD method is also used to deposit a thin glass film that will become the core 13, which has a higher refractive index than the lower clad 12.
  • The desired refractive index is obtained by adding GeO2 to SiO2.
  • A transparent core 13 is formed by heating at a high temperature of 1000 °C or higher, as in the lower clad deposition step.
  • In step 3, the photoresist film-forming step, a photoresist film 14 is formed on the substrate by spin coating.
  • In step 4, the circuit pattern exposure step, the photoresist film is irradiated with UV light 16 through a photomask 15 to expose a circuit pattern corresponding to the mask pattern.
  • In step 5, the photoresist development step, the circuit pattern of the photoresist film is developed to obtain a photoresist pattern 17.
  • In step 6, the etching step, the photoresist pattern 17 is transferred to the core by reactive ion etching (RIE) to obtain a core pattern 18.
  • In step 7, the resist removal step, the photoresist remaining on the core is removed by ashing.
  • In step 8, the upper clad deposition step, an upper clad 19 is deposited by the same method as the lower clad deposition in step 1.
  • Various inspections, such as of the pattern dimensions and the optical characteristics of the optical circuit, are performed on the optical waveguide obtained by the above manufacturing process.
  • To inspect the optical characteristics, a method of evaluating parameters correlated with the amplitude and phase of light in the paths with different optical path lengths involved in the interference is used.
  • For example, light from a wavelength-tunable light source is split into two: one part is input to the optical circuit and the other is input to an optical signal path with a known optical path length. The two are made to interfere on the output side, and the interference pattern obtained at a reference port is evaluated (see, for example, Patent Document 1 and Non-Patent Document 3).
  • An object of the present disclosure is to provide a technique that enables the dimensional accuracy of an optical waveguide to be evaluated during the manufacture of an optical waveguide device and, from that evaluation, the optical characteristics of the device to be measured.
  • As noted above, the width of the core of the optical waveguide in particular contributes strongly to the optical characteristics of the optical waveguide device, and variations in the optical characteristics of the manufactured device depend on the accuracy of the patterning process of the optical waveguide. Therefore, by measuring the dimensions of the core pattern with high accuracy, it is possible to estimate the optical characteristics of the optical waveguide device and to predict variations in the optical characteristics during the manufacturing process.
  • A representative approach is to measure the dimensions of the optical waveguide or the like using a microscope during the manufacture of the optical waveguide device.
  • The width of the core of an optical waveguide that constitutes an optical circuit usually takes a value in the range of 0.1 μm to 100.0 μm.
  • To measure the core width with high precision, a microscope images the optical waveguide at a high magnification (for example, 500 times or more).
  • The present disclosure has been made in view of the above problems.
  • An object of the present disclosure is to provide an optical waveguide measurement system and measurement method that, by applying processing trained by machine learning to low-magnification images with a wide imaging field of view, suppress displacement of the captured images and increases in data volume and achieve high throughput.
  • The present disclosure provides a measurement system for an optical waveguide device, comprising: an imaging unit that captures an image of the appearance of an object to be measured and generates image data of the image; a dataset generation unit that implements pattern matching processing and trimming processing on the image data of multiple images with different magnifications and generates a training image dataset based on that image data; a machine learning unit that comprises a neural network, trains the neural network to implement super-resolution processing based on the training image dataset, generates a trained network configuration describing the conditions of the trained neural network, and generates a super-resolution processed image based on the trained network configuration and an image; and a measurement unit that measures at least part of the object to be measured based on the super-resolution processed image.
  • The present disclosure also provides a method for measuring an optical waveguide device comprising a training step and an execution step. The training step includes: capturing a first training image showing the appearance of a training object at arbitrary coordinates and a second training image having a lower magnification than the first training image, and generating image data for each; implementing pattern matching and cropping on the first training image and the second training image and generating a training image dataset; and training a neural network using the training image dataset to generate a trained network configuration that describes the conditions of the trained neural network. The execution step includes: acquiring a measurement image showing the appearance at arbitrary coordinates of the object to be measured and having a lower magnification than the first training image; generating a super-resolution processed image based on the measurement image using a neural network in which the trained network configuration is reflected; measuring the length or area of at least a part of the super-resolution processed image; and outputting the measurement result.
  • FIG. 2 is a block diagram illustrating an optical waveguide device measurement system according to the present disclosure.
  • FIG. 3 is a flowchart illustrating a method for measuring an optical waveguide device according to the present disclosure.
  • FIG. 4 is a diagram exemplifying a verification result of measurement throughput when super-resolution processing according to the present disclosure is used.
  • FIG. 5 is a diagram illustrating verification results of measurement accuracy when super-resolution processing according to the present disclosure is used.
  • FIG. 6(a) is a top view and FIG. 6(b) is a cross-sectional view taken along the VIb-VIb line of an object to be measured in one embodiment of the present disclosure.
  • FIG. 8 is a flowchart illustrating a procedure for training a super-resolution processing network, according to an embodiment of the present disclosure.
  • FIG. 9 is a flowchart illustrating a procedure for super-resolution processing using a trained super-resolution processing network and core width measurement in an embodiment of the present disclosure.
  • FIG. 10 is a flowchart showing a procedure for super-resolution processing using a trained super-resolution processing network and measuring a foreign object in an embodiment of the present disclosure.
  • FIG. 11 conceptually illustrates measuring the diameter and area of a foreign object, according to an embodiment of the present disclosure.
  • FIG. 2 is a block diagram showing an optical waveguide device measurement system 20 according to the present disclosure.
  • The optical waveguide device measurement system 20 includes: a database 21 that transmits and receives data to and from each component of the system (for example, the imaging unit 22 and the machine learning unit 24 described later) and temporarily stores the received data; an imaging unit 22 that captures an external image of the object to be measured; a dataset generation unit 23 that generates a training image dataset based on the images captured by the imaging unit 22; a machine learning unit 24 that generates a trained network configuration by machine learning based on the training image dataset and generates a super-resolution processed image based on the trained network configuration and the image data of the object to be measured (the image data of the "measurement image" described later); and a measurement unit 25 that measures at least part of the object to be measured based on the super-resolution processed image.
  • The database 21 is communicably connected to each of the imaging unit 22, the dataset generation unit 23, the machine learning unit 24, and the measurement unit 25, enabling data transmission and reception among them.
  • The database 21 includes a plurality of ports 211 for inputting and outputting the data to be transmitted and received, a memory 212 for temporarily storing data, and a processing unit 213 that implements the transmission, reception, and storage of data.
  • The imaging unit 22 includes a microscope 221 that acquires an external image of the object to be measured, a camera 222 that captures the external image acquired by the microscope 221 as image data, a memory 223 that temporarily stores the image data, and a port 224 that outputs the image data to the database 21.
  • Various types of microscopes such as an optical microscope, a laser microscope, an electron microscope, an X-ray microscope, and an ultrasonic microscope can be applied to the microscope 221 .
  • The dataset generation unit 23 includes a plurality of ports 231 for transmitting and receiving data (image data, training image datasets, etc.) to and from the database 21, a memory 232 for temporarily storing the received image data, and a processing unit 233 that generates a training image dataset based on the image data.
  • The processing unit 233 includes in advance an algorithm for implementing pattern matching processing and trimming processing on two images captured at the same coordinates at different magnifications.
  • The processing unit 233 of the dataset generation unit 23 may also detect the angle of the pattern in the first training image and the second training image and correct the angle of the pattern with reference to a preset axis.
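  • The disclosure does not specify how the pattern matching and trimming are implemented. As a minimal sketch only, assuming OpenCV template matching, the processing of the dataset generation unit could look like the following; the function name, the magnification-ratio argument, and the choice of cv2.matchTemplate are assumptions, not part of the patent.

```python
# Hypothetical sketch of the dataset generation step (pattern matching + trimming).
# Assumes OpenCV and NumPy; names and parameters are illustrative only.
import cv2
import numpy as np

def make_training_pair(high_mag_img: np.ndarray, low_mag_img: np.ndarray, ratio: int):
    """Locate the high-magnification field of view inside the low-magnification image,
    crop that region, and return a (low-magnification, high-magnification) training pair."""
    # Downscale the high-magnification image so its pixel scale matches the low-mag image.
    h, w = high_mag_img.shape[:2]
    template = cv2.resize(high_mag_img, (w // ratio, h // ratio), interpolation=cv2.INTER_AREA)

    # Pattern matching: find where the template best matches inside the low-mag image.
    result = cv2.matchTemplate(low_mag_img, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(result)
    x, y = top_left

    # Trimming: crop the matched region so both images represent the same coordinates.
    cropped_low = low_mag_img[y:y + h // ratio, x:x + w // ratio]
    return cropped_low, high_mag_img
```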
  • The machine learning unit 24 includes a plurality of ports 241 for transmitting and receiving data (training image datasets and super-resolution processed image data) to and from the database 21, a memory that temporarily stores the received training image datasets, and a super-resolution processing network 243 that is trained based on the received training image datasets.
  • The super-resolution processing network 243 is a neural network whose input is a training image captured at a low magnification and whose output is a super-resolution processed image used for measurement.
  • The super-resolution processing network 243 can be a generative adversarial network or a convolutional neural network.
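  • As a minimal sketch, assuming PyTorch and an SRCNN-style architecture (the disclosure only states that the network may be a generative adversarial network or a convolutional neural network; the layer sizes and the use of PixelShuffle are illustrative assumptions), the generator side of such a network could be structured as follows:

```python
# Minimal sketch of a super-resolution generator, assuming PyTorch.
# This SRCNN-style design is an illustrative assumption, not the patented network 243.
import torch
import torch.nn as nn

class SuperResolutionGenerator(nn.Module):
    def __init__(self, upscale: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),  # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, upscale * upscale, kernel_size=5, padding=2),
            nn.PixelShuffle(upscale),                     # rearrange channels into a larger image
        )

    def forward(self, low_mag: torch.Tensor) -> torch.Tensor:
        # Input: low-magnification image (N, 1, H, W); output: (N, 1, H*upscale, W*upscale).
        return self.body(low_mag)
```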
  • The measurement unit 25 includes a plurality of ports 251 for transmitting and receiving data (super-resolution processed image data and measurement result data) to and from the database 21, a memory 252 that temporarily stores the super-resolution processed image data, and a processing unit 253 that implements measurement of the object to be measured based on the super-resolution processed image data.
  • The optical waveguide device measurement system 20 according to the present disclosure, configured as described above, differs from the conventional technology in that it includes a machine learning unit 24 that trains the super-resolution processing network 243. By processing a low-magnification image with the trained super-resolution processing network 243 (hereinafter referred to as super-resolution processing), measurement accuracy equal to or higher than that obtained by measuring a high-magnification image can be realized.
  • A measurement method using the measurement system 20 according to the present disclosure will be described in detail below.
  • FIG. 3 is a flowchart illustrating a method 30 for measuring optical waveguide devices according to the present disclosure.
  • The method for measuring an optical waveguide device according to the present disclosure includes a training step 31 for training the super-resolution processing network 243 in the machine learning unit 24, and an execution step 32 for implementing super-resolution processing using the trained super-resolution processing network 243 and performing measurement of the object to be measured based on the super-resolution processed image.
  • The training step 31 includes: a step in which the imaging unit 22 captures a first training image and a second training image with different magnifications at arbitrary coordinates of the training object to be measured; a step 312 in which the dataset generation unit 23 generates a training image dataset and stores it in a directory for training image datasets in the database 21; and a step 313 in which the database 21 transmits the training image dataset to the machine learning unit 24, the machine learning unit 24 trains the super-resolution processing network 243 using the training image dataset to generate trained network configuration data, and the trained network configuration data is stored in a directory for trained network configuration data in the memory 212 of the database 21.
  • The magnification of the first training image is comparable to that used in the prior art, that is, any magnification suitable for the dimensional measurement used to evaluate the optical properties of an optical waveguide device. For example, if the standard deviation σ of the measurement is to be within 1 μm, a magnification of about 1000 times is assumed.
  • The magnification of the second training image is any magnification lower than that of the first training image. For example, if the magnification of the first training image is M and the magnification of the second training image is N, the ratio M/N is assumed to be 2 or more and 1000 or less. Also, the imaging range (field of view) of the first training image must be contained within that of the second training image.
  • The microscope 221 of the imaging unit 22 is preferably of the same type for the first training image and the second training image (for example, when the first training image is captured by an optical microscope, the second training image is preferably also captured with an optical microscope), but is not so limited.
  • The generated training image dataset is a set of image data in which the captured first and second training images are associated with each other.
  • To generate the dataset, pattern matching and cropping are applied to the second training image so that it is transformed into an image representing the same coordinates as the first training image, and the result is then associated with the first training image.
  • In addition, the processing unit 233 of the dataset generation unit 23 may detect the angle of the pattern in the first training image and the second training image and correct the angle of the pattern with reference to a preset axis.
  • The trained network configuration data describes the conditions of the super-resolution processing network 243, which may be trained in the form of any neural network. It should be noted that the training in step 313 may be implemented using a portion of the training image dataset as test data in order to evaluate the training accuracy of the super-resolution processing network 243.
  • The execution step 32 includes: a step in which the imaging unit 22 captures a measurement image at arbitrary coordinates of the object to be measured at a single magnification and the image data of the measurement image is stored in a measurement image directory in the memory 212 of the database 21; a step in which the database 21 transmits the measurement image and the previously generated trained network configuration to the machine learning unit 24, the machine learning unit 24 performs super-resolution processing based on the trained network configuration, and the image data of the generated super-resolution processed image is stored in the super-resolution processed image directory in the memory 212 of the database 21; and a step 323 in which the database 21 transmits the image data of the super-resolution processed image to the measurement unit 25, the measurement unit 25 measures at least a portion of the super-resolution processed image, and the measurement result data is stored in the measurement result data directory in the memory 212 of the database 21.
  • The measurement results can be output as lengths (e.g., the width of a rectangular pattern or the diameter of a circular pattern) and areas in the super-resolution processed image.
  • A length measurement can be determined by measuring the distance between different contrasts in the image.
  • Area measurements can be obtained by measuring multiple lengths over a specified range and integrating the length measurements.
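  • As a minimal sketch of these two measurements, assuming a grayscale super-resolution processed image stored as a NumPy array, and with the contrast threshold and pixel pitch as illustrative parameters not taken from the patent:

```python
# Illustrative sketch of the length and area measurements described above.
import numpy as np

def width_from_profile(profile: np.ndarray, threshold: float, pixel_um: float) -> float:
    """Length measurement: distance between the two outermost contrast transitions
    in a single row (intensity profile), in micrometres."""
    above = np.where(profile > threshold)[0]
    if above.size < 2:
        return 0.0
    return (above[-1] - above[0]) * pixel_um

def area_over_range(image: np.ndarray, rows: range, threshold: float, pixel_um: float) -> float:
    """Area measurement: integrate the widths measured on each row of the specified range."""
    widths = [width_from_profile(image[r, :], threshold, pixel_um) for r in rows]
    return sum(widths) * pixel_um  # each row contributes width x row pitch
```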
  • FIG. 4 is a diagram exemplifying the verification result of the measurement throughput when the super-resolution processing according to the present disclosure is used. Specifically, for measurement at arbitrary coordinates of the object to be measured, the relationship between the number of images captured and the number of objects to be measured is shown for two conditions: when a large number of high-magnification images are captured and measured (conventional technology), and when measurement is performed by applying the super-resolution processing according to the present disclosure to low-magnification images.
  • The object to be measured is an arrayed waveguide grating (AWG: Arrayed Waveguide Grating) used for an optical wavelength filter or the like.
  • In this verification, the high-magnification image used in the prior-art measurement has four times the magnification of the low-magnification image to which the super-resolution processing according to the present disclosure is applied.
  • As shown in FIG. 4, the number of images captured is reduced in the measurement according to the present disclosure.
  • Specifically, the number of images captured in the measurement according to the present disclosure was 1/16 of that in the measurement according to the prior art. That is, the measurement according to the present disclosure has a higher throughput.
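  • A brief consistency check, under the assumption (not stated explicitly above) that the imaging field of view is square and scales inversely with magnification: the number of images needed to cover a fixed area then scales with the square of the magnification, which matches the reported factor of 1/16 for a magnification ratio of 4.

```latex
% Assumed square field of view whose side scales as 1/M with magnification M.
\frac{N_{\mathrm{high}}}{N_{\mathrm{low}}}
  = \left(\frac{M_{\mathrm{high}}}{M_{\mathrm{low}}}\right)^{2}
  = 4^{2} = 16
\qquad\Longrightarrow\qquad
N_{\mathrm{low}} = \frac{1}{16}\,N_{\mathrm{high}}
```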
  • FIG. 5 is a diagram exemplifying the verification result of measurement accuracy when super-resolution processing according to the present disclosure is used. Specifically, the figure plots the measurement error for the same image under two conditions: when the super-resolution processing according to the present disclosure is not applied and when it is applied. The measurement was, as described above, the measurement of the distance (length) between a plurality of different contrasts in the image. The figure also shows the standard deviation σ of the measurement error for both conditions. As shown in FIG. 5, the variation in the measurement error is smaller when super-resolution processing is applied than when it is not. This can also be seen from the value of the standard deviation σ, which is 0.06 when the super-resolution processing according to the present disclosure is not applied and is as small as 0.03 when it is applied.
  • The measurement method 30 according to the present disclosure does not need to capture a large number of high-magnification images, so misalignment of the captured images and the accompanying increase in data volume are suppressed. The degradation of measurement accuracy caused by positional deviation when imaging the object to be measured is therefore also suppressed, and the measurement can be performed with higher accuracy than in the conventional technique.
  • FIG. 6 is a diagram schematically showing the configuration of the AWG 60, which is the object to be measured, in one embodiment of the present disclosure.
  • FIG. 6(a) shows a top view, and FIG. 6(b) shows a cross-sectional view taken along the line VIb-VIb.
  • The AWG 60 includes a substrate 61, one or more input waveguides 62, one or more output waveguides 63, an input-side slab waveguide 64, an output-side slab waveguide 65, and a plurality of arrayed waveguides between the input-side slab waveguide 64 and the output-side slab waveguide 65; each arrayed waveguide includes a core 66 serving as an optical path for the signal light and a clad 67 covering the periphery of the core.
  • In the AWG 60 configured in this manner, wavelength-multiplexed light having a plurality of different wavelengths enters from the input waveguide 62, the signal light is demultiplexed between the input-side slab waveguide 64 and the output-side slab waveguide 65, and the demultiplexed light is emitted from the output waveguides 63. Owing to this function, the AWG 60 is widely used in practice as an optical wavelength filter or the like.
  • The center wavelength λ0 of the AWG 60 is generally obtained by (Equation 1).
  • Here, n_array is the equivalent refractive index of the arrayed waveguide and is obtained by (Equation 2), where n_core is the refractive index of the core, n_clad is the refractive index of the clad, T is the film thickness of the core, and w is the width of the core of the optical waveguide.
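  • Equation 1 and Equation 2 appear only as figures in the original publication and are not reproduced here. For orientation only, the widely used AWG grating relation has the form below, where ΔL is the path-length difference between adjacent arrayed waveguides and m is the diffraction order; these symbols and the exact form are assumptions and may differ from the patent's Equation 1. Equation 2, which gives n_array from n_core, n_clad, T, and w, is the waveguide's effective-index relation and generally has no simple closed form.

```latex
% Assumed standard form of the AWG center-wavelength condition (not necessarily Equation 1):
n_{\mathrm{array}}\,\Delta L = m\,\lambda_{0}
\quad\Longrightarrow\quad
\lambda_{0} = \frac{n_{\mathrm{array}}\,\Delta L}{m},
\qquad
n_{\mathrm{clad}} < n_{\mathrm{array}}(w, T;\ n_{\mathrm{core}}, n_{\mathrm{clad}}) < n_{\mathrm{core}}
```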
  • If the width w of the core of each optical waveguide is measured with high accuracy, the optical characteristics of the AWG 60 can be evaluated with high accuracy.
  • In this embodiment, the width of each core of the AWG 60 is measured using the optical waveguide device measurement system 20 and the measurement method 30.
  • FIG. 7 is a diagram conceptually illustrating training of the super-resolution processing network 243, according to one embodiment of the present disclosure.
  • The super-resolution processing network 243 in this embodiment includes an image generator 71 and a discriminator 72. These are network layers that constitute the super-resolution processing network 243, as shown in FIG. 7. In this embodiment, in step 313 shown in FIG. 3, the image generator 71 and the discriminator 72 are trained in an adversarial manner.
  • FIG. 8 is a flow chart showing a procedure for training the super-resolution processing network 243, according to one embodiment of the present disclosure.
  • Training of the super-resolution processing network 243 includes: a step 81 of inputting a second training image of the training image dataset previously generated in the dataset generation unit 23 into the image generator 71; a step 82 of inputting a first training image of the training image dataset previously generated in the dataset generation unit 23 into the discriminator 72; a step 83 in which the image generator 71 is trained to generate, from the second training image, a super-resolution processed image resembling the first training image; and a step 84 in which the discriminator 72 discriminates between the super-resolution processed image generated by the image generator 71 and the first training image.
  • In step 84 shown in FIG. 8, the discriminator 72 is trained to distinguish the super-resolution processed image generated by the image generator 71 from the first training image.
  • The discrimination is implemented based on a difference analysis of the super-resolution processed image and the first training image using a function such as the PSNR (peak signal-to-noise ratio).
  • This determination includes calculating the pixel-wise error between the super-resolution processed image and the first training image using a function such as PSNR, and, referring to the calculation result, using error backpropagation, gradient descent, or the like to calculate the gradient of the error with respect to the modifiable parameters (weights, biases, etc.) of the super-resolution processing network and changing the parameters so that the error is minimized.
  • A network whose parameters are optimized by such a process generates super-resolution processed images that approximate the first training image.
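  • A minimal sketch of one such adversarial update, assuming PyTorch, with a pixel-wise MSE term standing in for the difference analysis and PSNR computed only for monitoring; the loss weights and optimizer choices are assumptions, not the patented procedure:

```python
# Illustrative adversarial training step with a PSNR monitor, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

bce = nn.BCEWithLogitsLoss()

def psnr(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio between the super-resolved and reference images."""
    mse = F.mse_loss(sr, hr)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def train_step(generator, discriminator, g_opt, d_opt, low_img, high_img):
    # Discriminator: learn to tell the first training image from the generated image.
    sr = generator(low_img)
    d_real = discriminator(high_img)
    d_fake = discriminator(sr.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: minimise the pixel-wise error and fool the discriminator.
    g_fake = discriminator(sr)
    g_loss = F.mse_loss(sr, high_img) + 1e-3 * bce(g_fake, torch.ones_like(g_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    return d_loss.item(), g_loss.item(), psnr(sr.detach(), high_img).item()
```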
  • The super-resolution processing network 243 trained in this way then generates a trained network configuration and stores it in a directory of trained network configurations in the memory 212 of the database 21.
  • FIG. 9 is a flowchart showing a procedure for super-resolution processing using the trained super-resolution processing network 243 and measuring the width of the core in one embodiment of the present disclosure.
  • The procedure for super-resolution processing and core width measurement in the present embodiment includes: a step 91 of transmitting the trained network configuration from the database 21 to the machine learning unit 24 and reflecting it in the super-resolution processing network 243; a step 92 of transmitting, from the database 21 to the image generator 71 of the super-resolution processing network 243, the image data of a measurement image that at least partially includes the core 66 of the AWG 60; a step 93 of generating a super-resolution processed image and storing it in the super-resolution processed image data directory in the memory 212 of the database 21; and a step 94 of transmitting the super-resolution processed image data to the measurement unit 25 and measuring the width of the core 66.
  • The width of the core 66 is measured as the distance between different contrasts, as described above.
  • In an optical waveguide device, when foreign matter is scattered on the semiconductor wafer and the distance between the foreign matter and the optical waveguide formed on the wafer is short, the foreign matter interferes with the signal light propagating in the optical waveguide and causes a loss of signal strength. That is, in the manufacturing process of an optical waveguide device, measuring the size of foreign particles on the semiconductor wafer in advance with high accuracy makes it possible to estimate the influence of the foreign particles on the device characteristics, which leads to an improved manufacturing yield.
  • The optical waveguide device measurement system 20 and measurement method 30 according to the present disclosure can also measure the size of such foreign matter on a semiconductor wafer with high accuracy and high throughput.
  • In this embodiment, a convolutional neural network is used as the super-resolution processing network 243 in the measurement system 20 to measure the diameter and area of a foreign object.
  • In the training step, a second training image containing a foreign object and a first training image having the same coordinates are captured in advance by the imaging unit 22, and the dataset generation unit 23 generates a training image dataset based on both sets of image data. The second training image is then used as the input layer and the first training image is used as a filter read into the super-resolution processing network 243; by comparing the two, the features in the image are reflected in the convolution layers, pooling is performed, and a super-resolution processed image is generated. Through such training, the super-resolution processing network 243 is optimized by the correlation between the second training image and the first training image. The trained super-resolution processing network 243 then generates a trained network configuration and stores it in the trained network configuration directory of the database 21, as in the first embodiment.
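  • A minimal sketch of this convolutional variant, assuming PyTorch; the layer counts, pooling placement, and pixel-wise loss are illustrative assumptions rather than the patented configuration:

```python
# Illustrative CNN-based super-resolution model and training loop, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForeignMatterSRCNN(nn.Module):
    """Small CNN mapping a low-magnification patch to a super-resolved patch."""
    def __init__(self, upscale: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # pooling over extracted features
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(64, (2 * upscale) ** 2, 3, padding=1),
            nn.PixelShuffle(2 * upscale),                 # recover the pooled resolution, then upscale
        )

    def forward(self, x):
        return self.upsample(self.features(x))

def fit(model, pairs, epochs: int = 10, lr: float = 1e-3):
    """Train on (second training image, first training image) pairs with a pixel-wise loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for low_img, high_img in pairs:   # tensors of shape (N, 1, H, W) and (N, 1, 4H, 4W)
            opt.zero_grad()
            loss = F.mse_loss(model(low_img), high_img)
            loss.backward()
            opt.step()
```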
  • FIG. 10 is a flow chart showing a procedure for super-resolution processing using the trained super-resolution processing network 243 and measuring a foreign object in one embodiment of the present disclosure.
  • The procedure for super-resolution processing and measurement of a foreign object in this embodiment includes: a step 101 of transmitting the trained network configuration from the database 21 to the machine learning unit 24 and causing the machine learning unit 24 to reflect it in the super-resolution processing network 243; a step 102 of transmitting the image data of an image including the foreign object, captured in advance by the imaging unit 22, from the database 21 to the super-resolution processing network 243; a step 103 in which the super-resolution processing network 243 generates a super-resolution processed image and the super-resolution processed image data is stored in the corresponding directory of the memory 212 of the database 21; and a step 104 of transmitting the super-resolution processed image data from the database 21 to the measurement unit 25 and measuring the width and area of the foreign matter.
  • The items reflected in the super-resolution processing network 243 by the machine learning unit 24 are information related to the configuration of the network (for example, the number of layers, the configuration of each layer, the type of input/output images, etc.) and its parameters (for example, weights, biases, etc.).
  • FIG. 11 conceptually illustrates measuring the diameter and area of a foreign object, according to one embodiment of the present disclosure.
  • The width of the foreign matter in the X direction can be measured by measuring the distance between different contrasts in the portion of the super-resolution processed image showing the foreign matter. This measurement is then scanned in the Y direction perpendicular to the X direction, and a plurality of widths (L1, L2, ..., Ln) of the foreign matter in the X direction are measured.
  • The integrated value of the plurality of measured widths (L1, L2, ..., Ln) of the foreign matter in the X direction is defined as the area S of the foreign matter. From the values of the diameter D and the area S of the foreign matter thus obtained, the size of the foreign matter can be estimated quantitatively.
  • In the above description, the lengths of the foreign matter are measured in the X direction, but the measurement is not limited to this direction; the length can be measured in any direction in the XY plane.
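  • A hedged sketch of this scan, assuming a contrast threshold and a known pixel pitch; because this excerpt does not define the diameter D, the sketch takes the maximum scanned width as D purely as an assumption:

```python
# Hedged sketch of the foreign-matter measurement: scan X-direction widths along Y,
# sum them into the area S, and (as an assumption, since the excerpt does not define it)
# take the maximum width as the diameter D. Threshold and pixel pitch are illustrative.
import numpy as np

def measure_foreign_matter(sr_image: np.ndarray, threshold: float, pixel_um: float):
    widths_um = []
    for row in sr_image:                         # scan in the Y direction
        cols = np.where(row > threshold)[0]      # pixels whose contrast indicates foreign matter
        if cols.size >= 2:
            widths_um.append((cols[-1] - cols[0]) * pixel_um)   # X-direction width L_i
    if not widths_um:
        return 0.0, 0.0
    area_s = sum(widths_um) * pixel_um           # integrate the widths over the scanned rows
    diameter_d = max(widths_um)                  # assumed convention for D (not from the excerpt)
    return diameter_d, area_s
```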

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to an optical waveguide measurement system and measurement method that achieve high throughput and suppress positional displacement of a captured image and increases in data volume by applying processing trained by machine learning to a low-magnification image with a wide imaging field of view. A measurement system according to the present invention comprises: an imaging unit that images an object to be imaged; a dataset generation unit that implements pattern matching processing and trimming processing on a plurality of images having different magnifications and generates a training image dataset; a machine learning unit that includes a neural network, is trained, based on the training image dataset, to implement super-resolution processing in the neural network, and generates a super-resolution processed image based on the image and a trained network configuration; and a measurement unit that measures, based on the super-resolution processed image, at least part of the object to be measured. The present invention further relates to an optical waveguide measurement method consisting of a training process and an execution process.
PCT/JP2021/030623 2021-08-20 2021-08-20 Système de mesure de circuits optiques et procédé de mesure WO2023021697A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2023542161A JPWO2023021697A1 (fr) 2021-08-20 2021-08-20
PCT/JP2021/030623 WO2023021697A1 (fr) 2021-08-20 2021-08-20 Système de mesure de circuits optiques et procédé de mesure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/030623 WO2023021697A1 (fr) 2021-08-20 2021-08-20 Système de mesure de circuits optiques et procédé de mesure

Publications (1)

Publication Number Publication Date
WO2023021697A1 true WO2023021697A1 (fr) 2023-02-23

Family

ID=85240276

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/030623 WO2023021697A1 (fr) 2021-08-20 2021-08-20 Système de mesure de circuits optiques et procédé de mesure

Country Status (2)

Country Link
JP (1) JPWO2023021697A1 (fr)
WO (1) WO2023021697A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003232948A (ja) * 2001-12-03 2003-08-22 Furukawa Electric Co Ltd:The 光導波路の製造方法およびその製造方法を用いた光導波路デバイスならびに導波路型光合分波器
WO2009044785A1 (fr) * 2007-10-03 2009-04-09 Kabushiki Kaisha Toshiba Dispositif d'examen visuel et procédé d'examen visuel
JP2015025758A (ja) * 2013-07-26 2015-02-05 Hoya株式会社 基板検査方法、基板製造方法および基板検査装置
JP2020163100A (ja) * 2019-03-11 2020-10-08 キヤノン株式会社 画像処理装置および画像処理方法

Also Published As

Publication number Publication date
JPWO2023021697A1 (fr) 2023-02-23

Similar Documents

Publication Publication Date Title
Chiles et al. Design, fabrication, and metrology of 10× 100 multi-planar integrated photonic routing manifolds for neural networks
TWI704410B (zh) 用於預測當微影製程進行時使用光罩而獲得的成像結果的方法與設備
US5631731A (en) Method and apparatus for aerial image analyzer
CN101382737B (zh) 检验方法和设备、光刻设备、光刻单元和器件制造方法
CN103843123B (zh) 利用光瞳相位信息来测量覆盖的方法及系统
US7085676B2 (en) Feed forward critical dimension control
CN101261452B (zh) 检验方法和设备、光刻处理单元和器件制造方法
KR102203005B1 (ko) 위치 센서, 리소그래피 장치 및 디바이스 제조 방법
JP2015534056A (ja) 単一検出器アレイを有するオンチップ複数機能分光計
TW202129430A (zh) 暗場數位全像顯微鏡及相關度量衡方法
CN111473953B (zh) 一种基于相位恢复的光纤激光模式分解方法及其实现装置
CN108369381A (zh) 由量测数据的统计分层重建
Menchtchikov et al. Reduction in overlay error from mark asymmetry using simulation, ORION, and alignment models
US7443493B2 (en) Transfer characteristic calculation apparatus, transfer characteristic calculation method, and exposure apparatus
Gao et al. Directional coupler based on single-crystal diamond waveguides
WO2023021697A1 (fr) Système de mesure de circuits optiques et procédé de mesure
CN102265220A (zh) 确定特性的方法
WO2022180840A1 (fr) Système de fabrication de circuit intégré optique et procédé de fabrication
WO2022180832A1 (fr) Système de fabrication de circuit intégré optique et procédé de fabrication
WO2022180829A1 (fr) Procédé de mesure sans contact
WO2022180827A1 (fr) Système de prédiction d'ia pour caractéristiques optiques
WO2024009457A1 (fr) Dispositif de guide d'ondes optique et son procédé de fabrication
WO2023105681A1 (fr) Procédé de mesure sans contact et dispositif d'estimation
WO2022180830A1 (fr) Procédé de mesure sans contact d'un film multicouche
WO2022180838A1 (fr) Procédé de production de dispositif de guide d'ondes optique et système de production

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21954264

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023542161

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE