WO2024106682A1 - Device and method for analyzing average surface roughness by extracting a feature from a membrane image
Device and method for analyzing average surface roughness by extracting a feature from a membrane image
- Publication number
- WO2024106682A1 WO2024106682A1 PCT/KR2023/010717 KR2023010717W WO2024106682A1 WO 2024106682 A1 WO2024106682 A1 WO 2024106682A1 KR 2023010717 W KR2023010717 W KR 2023010717W WO 2024106682 A1 WO2024106682 A1 WO 2024106682A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- input vector
- learning
- membrane
- processed images
- Prior art date
Links
- 239000012528 membrane Substances 0.000 title claims abstract description 84
- 238000000034 method Methods 0.000 title claims abstract description 42
- 230000003746 surface roughness Effects 0.000 title claims abstract description 41
- 239000013598 vector Substances 0.000 claims abstract description 91
- 238000004458 analytical method Methods 0.000 claims abstract description 81
- 239000002121 nanofiber Substances 0.000 claims abstract description 21
- 238000011176 pooling Methods 0.000 claims description 21
- 230000008569 process Effects 0.000 claims description 15
- 238000005520 cutting process Methods 0.000 claims description 12
- 238000013527 convolutional neural network Methods 0.000 claims description 11
- 238000001228 spectrum Methods 0.000 claims description 10
- 230000003595 spectral effect Effects 0.000 claims description 8
- 230000003044 adaptive effect Effects 0.000 claims description 3
- 238000007781 pre-processing Methods 0.000 claims description 3
- 238000012549 training Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 abstract description 5
- 238000010586 diagram Methods 0.000 description 20
- 238000004891 communication Methods 0.000 description 15
- 238000003860 storage Methods 0.000 description 12
- 238000009826 distribution Methods 0.000 description 8
- 230000006870 function Effects 0.000 description 6
- 238000011156 evaluation Methods 0.000 description 5
- 239000000835 fiber Substances 0.000 description 5
- 238000013528 artificial neural network Methods 0.000 description 3
- 238000001878 scanning electron micrograph Methods 0.000 description 3
- 230000004913 activation Effects 0.000 description 2
- 238000003491 array Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 230000014509 gene expression Effects 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 239000004973 liquid crystal related substance Substances 0.000 description 2
- 238000004626 scanning electron microscopy Methods 0.000 description 2
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000011148 porous material Substances 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000003786 synthesis reaction Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 239000010409 thin film Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/002—Image coding using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Definitions
- one detection algorithm implemented in scanning electron microscopy (SEM) tools exclusively uses deep learning to detect defects-of-interest (DOI).
- SEM: scanning electron microscopy
- The problem to be solved by the present invention is to provide a device and method for analyzing the average surface roughness of a nanofiber-based membrane image by extracting features from the image and applying a prediction model that includes a convolutional neural network.
- When a membrane image related to nanofibers is input, the analysis device includes an input vector generator that processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units, and calculates a statistical average surface roughness value of the membrane image using the plurality of output roughness values.
- It further includes a learning unit that trains a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer. When three-channel data merging a training CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a training binarized image, and training spectrum data is input as the training input vector, the prediction model outputs the predicted surface average roughness value in log units corresponding to that input vector.
- The prediction model includes a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a training CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a training binarized image, and training spectrum data is input as the training input vector, it outputs the predicted surface average roughness value in log units corresponding to that input vector.
- The learning unit trains the prediction model so that, through its convolution and pooling operations, the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the training input vector, and repeats this training a preset number of times.
- The input vector generator generates a plurality of processed images by cutting the membrane image into pieces of a preset size, randomly selects a preset number of the generated processed images, and groups the selected processed images into one image group.
- the input vector generator is characterized by cutting the membrane image to a preset size using a sliding-window method.
- the input vector generator is characterized in that the membrane image is resized to a preset resolution before cutting the membrane image.
- The input vector generator generates a CLAHE image by normalizing the pixel values of each processed image using the CLAHE technique, generates a binarized image by binarizing the CLAHE image using a global thresholding method, generates spectral data by transforming the CLAHE image with a two-dimensional discrete Fourier transform, and generates the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels.
- In the analysis method, the membrane image is processed to generate a plurality of processed images, the generated processed images are grouped into an image group, and each processed image included in the grouped image group is preprocessed to generate an input vector; the analysis device then applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units, and calculates a statistical average surface roughness value of the membrane image using the plurality of output roughness values.
- The method trains a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a training CLAHE image, a training binarized image, and training spectrum data is input as the training input vector, the prediction model outputs the predicted surface average roughness value in log units corresponding to that input vector.
- The learning step includes training the prediction model so that, through its convolution and pooling operations, the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the training input vector, and repeating this training process a preset number of times.
- The step of generating the input vector includes cutting the membrane image to a preset size to generate a plurality of processed images, randomly selecting a preset number of the generated processed images, and grouping the selected processed images into one image group.
- the step of generating the input vector is characterized by cutting the membrane image to a preset size using a sliding window method.
- the step of generating the input vector is characterized by resizing the membrane image to a preset resolution before cutting the membrane image.
- The step of generating the input vector includes generating a CLAHE image by normalizing the pixel values of each processed image using the CLAHE technique, generating a binarized image by binarizing the CLAHE image using a global thresholding method, generating spectral data by transforming the CLAHE image with a two-dimensional discrete Fourier transform, and generating the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels.
- The analysis system includes an analysis device that analyzes the average surface roughness of a membrane image related to nanofibers, and a user terminal that receives information related to the analyzed surface average roughness from the analysis device and outputs it. When a membrane image related to nanofibers is input, the analysis device includes an input vector generator that processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units, and calculates a statistical average surface roughness value of the membrane image using the plurality of output roughness values.
- the surface average roughness of a nanofiber-based membrane image can be predicted using a prediction model including a convolutional neural network, and a statistical surface average roughness value for the corresponding membrane image can be calculated.
- FIG. 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
- Figure 2 is a block diagram for explaining an analysis device according to an embodiment of the present invention.
- Figure 3 is a block diagram for explaining a control unit according to an embodiment of the present invention.
- Figure 4 is a diagram for explaining the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention.
- Figure 5 is a diagram for explaining a process of processing an original image according to an embodiment of the present invention.
- Figure 6 is a diagram for explaining the process of generating an input vector according to an embodiment of the present invention.
- Figure 7 is a diagram for explaining a global binarization image according to an embodiment of the present invention.
- Figure 8 is a flowchart for explaining an analysis method for predicting average surface roughness according to an embodiment of the present invention.
- Figure 9 is a diagram for explaining performance evaluation of an analysis device through continuous probability distribution according to an embodiment of the present invention.
- Figure 10 is a diagram for explaining the performance evaluation of an analysis device for prediction accuracy according to an embodiment of the present invention.
- Figure 11 is a block diagram for explaining a computing device according to an embodiment of the present invention.
- When a component is described as being 'connected' or 'coupled' to another component, it may be directly connected or coupled to that component, or intervening components may exist between them. In contrast, when a component is described as being 'directly connected' or 'directly coupled' to another component, it should be understood that no intervening components exist.
- 'and/or' includes a combination of a plurality of listed items or any of the plurality of listed items.
- 'A or B' may include 'A', 'B', or 'both A and B'.
- FIG. 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
- the analysis system 300 predicts the average surface roughness of a nanofiber-based membrane image using a prediction model including a convolution neural network (CNN).
- the analysis system 300 includes an analysis device 100 and a user terminal 200.
- the analysis device 100 analyzes the surface average roughness of the membrane image related to the nanofiber.
- the membrane image may be a SEM (Scanning Electron Microscope) image.
- the analysis device 100 processes the membrane image to generate a plurality of processed images.
- the processed image refers to an image obtained by cutting one membrane image to a preset size.
- the analysis device 100 selects some of the plurality of generated processed images and groups them into one image group.
- the analysis device 100 generates an input vector by preprocessing each processed image included in the grouped image group.
- the analysis device 100 applies each input vector included in the image group to a previously learned prediction model and outputs each surface average roughness value in log units.
- the prediction model may be a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer, but is not limited to this.
- the analysis device 100 calculates a statistical average surface roughness value of the membrane image using a plurality of output average surface roughness values.
- The statistical surface average roughness value represents the surface average roughness of the entire image, not of a part of the membrane image, and reflects statistical measures such as the mean and the mode.
- the statistical average surface roughness value may be a meaningful value that statistically represents the average surface roughness of the entire membrane image.
- the user terminal 200 is a terminal used by the user and communicates with the analysis device 100.
- the user terminal 200 receives information related to the statistical average surface roughness calculated from the analysis device 100.
- the information related to the statistical average surface roughness received may be an analysis result of the membrane image transmitted from the user terminal 200, but is not limited thereto.
- the user terminal 200 outputs information related to the received statistical surface average roughness to help the user intuitively recognize the surface average roughness of the membrane image remotely.
- the user terminal 200 may be a computer system such as a desktop, laptop, smartphone, handheld PC, etc.
- the user terminal 200 is shown as a separate configuration from the analysis device 100, but it is not limited to this and may be implemented as a single configuration depending on the situation.
- the analysis system 300 may establish a communication network 350 between the analysis device 100 and the user terminal 200 to support communication between them.
- the communication network 350 may be composed of a backbone network and a subscriber network.
- the backbone network may be composed of one or more integrated networks among the X.25 network, Frame Relay network, ATM network, MPLS (Multi-Protocol Label Switching) network, and GMPLS (Generalized Multi-Protocol Label Switching) network.
- Subscriber networks include FTTH (Fiber To The Home), ADSL (Asymmetric Digital Subscriber Line), cable network, zigbee, Bluetooth, and Wireless LAN (IEEE 802.11b, IEEE 802.11a, IEEE 802.11g, IEEE 802.11n).
- the communication network 350 may be an Internet network or a mobile communication network. Additionally, the communication network 350 may include any wireless or wired communication method that is widely known or will be developed in the future.
- Figure 2 is a block diagram for explaining an analysis device according to an embodiment of the present invention.
- Figure 3 is a block diagram for explaining a control unit according to an embodiment of the present invention.
- Figure 4 is a diagram for explaining the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention.
- Figure 5 is a diagram for explaining the process of processing an original image according to an embodiment of the present invention.
- Figure 6 is a diagram for explaining the process of generating an input vector according to an embodiment of the present invention.
- Figure 7 is a diagram for explaining a global binarization image according to an embodiment of the present invention.
- the analysis device 100 includes a communication unit 10, an input unit 30, a control unit 50, an output unit 70, and a storage unit 90.
- the communication unit 10 performs communication with the user terminal 200.
- the communication unit 10 may receive a membrane image related to nanofibers from the user terminal 200.
- the membrane image may be an SEM image. Additionally, the communication unit 10 may transmit information about the statistical average surface roughness value of the membrane image to the user terminal 200.
- the input unit 30 receives a learning input vector for learning a prediction model.
- the input vector for learning may be three-channel data that merges a CLAHE (Contrast-Limited Adaptive Histogram Equalization) image for learning, a binarized image for learning, and spectrum data for learning. Additionally, the input unit 30 can input membrane images related to nanofibers.
- CLAHE: Contrast-Limited Adaptive Histogram Equalization
- the control unit 50 performs overall control of the analysis device 100.
- the control unit 50 may include an input vector generation unit 52 and an analysis unit 53, and may further include a learning unit 51.
- the learning unit 51 trains a prediction model that predicts the average surface roughness of the membrane image.
- The prediction model derives the surface average roughness value in log units (log Ra) from the nanofiber membrane image through a series of convolution and pooling operations that extract spatial patterns.
- the prediction model includes a convolutional neural network consisting of a convolutional layer, a pooling layer, and a fully connected layer.
- The convolutional neural network may include four convolutional layers, four pooling layers, and two fully connected layers, connected in the following order: first convolutional layer, first pooling layer, second convolutional layer, second pooling layer, third convolutional layer, third pooling layer, fourth convolutional layer, fourth pooling layer, first fully connected layer, and second fully connected layer.
- the convolution layer performs convolution operations using convolution filters and operations using activation functions.
- the activation function may be a Rectified Linear Unit (ReLU) that can describe non-linear relationships.
- the pooling layer performs pooling (or sub-sampling) operations using a pooling filter.
- The feature map of each convolution layer is subsampled with a 2x2 max pooling operation to reduce the number of feature-map coefficients, and the number of hidden neurons can be determined by K-fold cross-validation, which helps avoid overfitting.
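- The patent specifies the layer order but not filter counts, kernel sizes, or input resolution; the following PyTorch sketch shows one plausible instantiation of the described topology (four convolution layers with ReLU activation, each followed by 2x2 max pooling, then two fully connected layers ending in a single log-roughness output). All channel widths and the 160-pixel input size are illustrative assumptions, not values taken from the patent.

```python
import torch
import torch.nn as nn

class RoughnessCNN(nn.Module):
    """Sketch of the described CNN: four conv + pooling stages and two fully connected layers.
    Filter counts, kernel sizes, and input resolution are illustrative assumptions."""
    def __init__(self, in_channels: int = 3, input_size: int = 160):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 1st convolution layer + 2x2 max pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 2nd convolution layer + pooling
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 3rd convolution layer + pooling
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 4th convolution layer + pooling
        )
        feat = input_size // 16   # four 2x2 poolings halve each spatial dimension
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * feat * feat, 256), nn.ReLU(),  # 1st fully connected layer
            nn.Linear(256, 1),                             # 2nd fully connected layer -> log(Ra)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))

if __name__ == "__main__":
    model = RoughnessCNN()
    group = torch.randn(16, 3, 160, 160)   # one image group of 16 three-channel input vectors
    print(model(group).shape)              # torch.Size([16, 1])
```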
- When the prediction model receives, as the training input vector, three-channel data that merges the training CLAHE image, the training binarized image, and the training spectrum data, it outputs the predicted surface average roughness value in log units corresponding to that input vector.
- Based on this prediction model structure, the learning unit 51 trains the model so that the predicted surface average roughness value output through the convolution and pooling operations approaches the known correct value corresponding to the training input vector. To this end, the learning unit 51 can train the model with the Adam optimizer so that the difference between the predicted and correct values is minimized, and can use mean squared error as the loss function, checking convergence through the change in the loss value (the gap between the correct answer and the prediction).
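- A minimal training-loop sketch consistent with this description (Adam optimizer, mean squared error between predicted and known log-roughness values, repeated a preset number of times), reusing the RoughnessCNN sketch above. The placeholder dataset, learning rate, batch size, and epoch count are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder training data: input vectors (N, 3, 160, 160) and known log(Ra) answers (N, 1).
inputs = torch.randn(64, 3, 160, 160)
targets = torch.randn(64, 1)
loader = DataLoader(TensorDataset(inputs, targets), batch_size=16, shuffle=True)

model = RoughnessCNN()                                     # architecture sketch defined above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam minimizes prediction error
loss_fn = nn.MSELoss()                                     # mean squared error loss

for epoch in range(100):                                   # assumed preset number of repetitions
    epoch_loss = 0.0
    for x, y in loader:
        optimizer.zero_grad()
        pred = model(x)                                    # predicted log-scale surface roughness
        loss = loss_fn(pred, y)                            # gap between prediction and answer
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    # Convergence can be checked by watching the epoch loss shrink over time.
    print(f"epoch {epoch}: loss {epoch_loss / len(loader):.4f}")
```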
- When a membrane image related to nanofibers is input through the communication unit 10 or the input unit 30, the input vector generator 52 generates an input vector used to predict a value close to the average surface roughness. That is, the input vector generator 52 processes the membrane image to generate a plurality of processed images, groups the generated processed images into image groups, and preprocesses each processed image included in a grouped image group to generate an input vector.
- the input vector generator 52 cuts the input membrane image to a preset size to generate a plurality of processed images (63).
- the input vector generator 52 may cut the membrane image to a preset size using a sliding-window method. That is, the input vector generator 52 can move the window from the upper left to the lower right of the image and cut it to fit the window size (62).
- the input vector generator 52 resizes the membrane image to a preset resolution before cutting the membrane image.
- The input vector generator 52 resizes the image to a resolution of 51.2 pixels/µm and then cuts it into tiles 10 µm wide.
- the input vector generator 52 increases the number of processed images by flipping or rotating the generated plurality of processed images 63 (64).
- The input vector generator 52 randomly selects a preset number of processed images from among the augmented plurality of processed images (64) and groups the selected processed images into one image group (65). For example, the input vector generator 52 may randomly select 16 images from among the images cut to a certain size from an SEM image having one average surface roughness, and group the selected images into one image group.
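- A sketch of this patch-generation step, assuming OpenCV and NumPy: the SEM image is rescaled to 51.2 pixels/µm, cut into 10 µm (512-pixel) tiles with a window sliding from the upper left to the lower right, augmented by flips and rotations, and a fixed number of tiles is sampled at random into one image group. The non-overlapping stride and the particular augmentations are assumptions.

```python
import random
import cv2
import numpy as np

def make_image_group(sem_image: np.ndarray, px_per_um: float, group_size: int = 16) -> list:
    """Resize to 51.2 pixel/um, cut 10 um (512 px) tiles with a sliding window,
    augment by flips/rotations, and randomly group `group_size` tiles."""
    target_scale = 51.2                              # pixel/um, as described in the text
    tile = int(round(target_scale * 10))             # 10 um wide -> 512-pixel tiles
    scale = target_scale / px_per_um
    resized = cv2.resize(sem_image, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    patches = []
    h, w = resized.shape[:2]
    for y in range(0, h - tile + 1, tile):           # window moves from upper left to lower right
        for x in range(0, w - tile + 1, tile):
            patches.append(resized[y:y + tile, x:x + tile])

    augmented = []
    for p in patches:                                # increase the number of processed images
        augmented += [p, cv2.flip(p, 0), cv2.flip(p, 1), cv2.rotate(p, cv2.ROTATE_90_CLOCKWISE)]

    return random.sample(augmented, k=min(group_size, len(augmented)))
```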
- the input vector generator 52 may preprocess each of the plurality of processed images to generate a plurality of input vectors.
- the input vector generator 52 separates each processed image into RGB channels and converts the processed image into a grayscale image based on the separated RGB channels (66).
- The input vector generator 52 generates a CLAHE image by normalizing the processed image that has been converted to grayscale. That is, the input vector generator 52 normalizes the pixel values of each processed image based on the CLAHE technique in order to minimize resolution-dependent noise between images and emphasize the pixel-value characteristics of the fiber boundaries, which are an important factor in the average surface roughness.
- the input vector generator 52 generates a binarized image by binarizing the CLAHE image based on a global thresholding method.
- the input vector generator 52 spectralizes the CLAHE image based on two-dimensional discrete Fourier transform (2d-DFT) to generate spectral data.
- 2d-DFT: two-dimensional discrete Fourier transform
- The frequency domain obtained through the two-dimensional discrete Fourier transform represents the magnitude spectrum according to the thickness, distribution, and direction of the fibers; the frequency band differs by position in the image and appears as a brightness difference at each position, so the roughness characteristics can be expressed as a fiber distribution.
- the difference in frequency can be expressed in pixels.
- the input vector generator 52 sequentially stacks the arrays of the CLAHE image, binarized image, and spectrum data (67), and merges the stacked arrays into three channels to generate an input vector (68).
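- A preprocessing sketch for one tile, assuming OpenCV and NumPy: grayscale conversion, CLAHE normalization, a global threshold for the binarized channel (Otsu's method is shown as one common global method; the text does not name a specific one), a two-dimensional DFT magnitude spectrum for the spectral channel, and stacking of the three arrays into one three-channel input vector. The CLAHE clip limit and the log-magnitude scaling of the spectrum are assumptions.

```python
import cv2
import numpy as np

def build_input_vector(patch_bgr: np.ndarray) -> np.ndarray:
    """Build the three-channel input vector (CLAHE image, binarized image, spectrum) for one tile."""
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)        # separate RGB channels -> grayscale

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe_img = clahe.apply(gray)                             # contrast-limited adaptive equalization

    # Global thresholding (Otsu shown here as one common global method).
    _, binary = cv2.threshold(clahe_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Two-dimensional discrete Fourier transform -> centered log-magnitude spectrum.
    dft = np.fft.fftshift(np.fft.fft2(clahe_img.astype(np.float32)))
    spectrum = np.log1p(np.abs(dft))
    spectrum = cv2.normalize(spectrum, None, 0, 255, cv2.NORM_MINMAX)

    # Stack the three arrays and merge them into a single three-channel tensor scaled to [0, 1].
    stacked = np.stack([clahe_img, binary, spectrum], axis=0).astype(np.float32) / 255.0
    return stacked                                            # shape: (3, H, W)
```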
- the analysis unit 53 applies each input vector included in the image group to the prediction model learned in the learning unit 51 and outputs each surface average roughness value in log units.
- the analysis unit 53 calculates a statistical average surface roughness value of the membrane image using the plurality of output average surface roughness values. Through this, users can obtain meaningful statistical average surface roughness values.
- the analysis unit 53 can express the surface average roughness value in log units output from the prediction model on a continuous probability distribution and output statistical values through the mode and average values appearing on the continuous probability distribution.
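- A sketch of this aggregation step: the per-tile log(Ra) predictions from one image group are summarized by their mean and by the mode of an estimated continuous distribution. A Gaussian kernel density estimate (SciPy's gaussian_kde) is used here as one way to realize the continuous probability distribution mentioned above, and the base-10 conversion back to linear units is an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def summarize_roughness(log_ra_predictions: np.ndarray) -> dict:
    """Aggregate per-tile log(Ra) predictions into statistical values for the whole membrane image."""
    mean_log_ra = float(np.mean(log_ra_predictions))

    # Estimate a continuous probability distribution and take its peak as the mode.
    kde = gaussian_kde(log_ra_predictions)
    grid = np.linspace(log_ra_predictions.min(), log_ra_predictions.max(), 512)
    mode_log_ra = float(grid[np.argmax(kde(grid))])

    return {
        "mean_log_Ra": mean_log_ra,
        "mode_log_Ra": mode_log_ra,
        "mean_Ra": float(10 ** mean_log_ra),   # back to linear units, assuming a base-10 log
    }

# Example: 16 predictions from one image group.
predictions = np.random.normal(loc=0.3, scale=0.05, size=16)
print(summarize_roughness(predictions))
```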
- the output unit 70 outputs the membrane image transmitted through the communication unit 10 or the input unit 30.
- the output unit 70 outputs the input vector generated by the control unit 50 and outputs the statistical average surface roughness value calculated by the control unit 50.
- the output unit 70 includes a liquid crystal display (LCD), a thin film transistor-liquid crystal display (TFT LCD), an organic light-emitting diode (OLED), and a flexible display (flexible display). It may include at least one of a display and a 3D display.
- The storage unit 90 stores programs or algorithms for operating the analysis device 100.
- the storage unit 90 stores the membrane image transmitted through the communication unit 10 or the input unit 30.
- the storage unit 90 stores the input vector generated by the control unit 50 and the statistical average surface roughness value calculated by the control unit 50.
- The storage unit 90 may include at least one storage medium among flash memory, hard disk, multimedia card micro, card-type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, magnetic disk, and optical disk.
- Figure 8 is a flowchart for explaining an analysis method for predicting average surface roughness according to an embodiment of the present invention.
- The analysis method predicts the surface average roughness of a nanofiber-based membrane image using a prediction model including a convolutional neural network and calculates a statistical average surface roughness value for the membrane image. Through this, the analysis method allows users to quickly and accurately recognize meaningful information about the average surface roughness of the membrane image.
- a membrane image related to the nanofiber is input to the analysis device 100.
- the analysis device 100 may receive and input a membrane image from the user terminal 200, or the membrane image may be directly input by the user.
- the analysis device 100 generates an input vector for the membrane image.
- the analysis device 100 processes the membrane image to generate a plurality of processed images.
- the analysis device 100 groups the generated plurality of processed images into image groups and preprocesses each processed image included in the grouped image group to generate an input vector.
- the input vector may be data obtained by merging the CLAHE image, binarized image, and spectrum data into three channels.
- In step S130, the analysis device 100 calculates a statistical average surface roughness value.
- the analysis device 100 applies each input vector included in the image group to a previously learned prediction model and outputs a surface average roughness value in log units.
- the analysis device 100 can calculate statistical average surface roughness values, such as the mode and average value of the membrane image, using the plurality of output average surface roughness values.
- Figure 9 is a diagram for explaining the performance evaluation of an analysis device through a continuous probability distribution according to an embodiment of the present invention.
- Figure 10 is a diagram for explaining the performance evaluation of an analysis device in terms of prediction accuracy according to an embodiment of the present invention.
- the analysis device 100 can perform performance evaluation through various methods.
- The analysis device 100 may express the training predicted values and the test predicted values, which are log-scale average surface roughness values output through the prediction model, on a continuous probability distribution. Through this, the user can confirm that the mode and mean of the predicted values agree with the experimentally measured values (Figure 9). In addition, the improved prediction accuracy of the analysis device 100 can be confirmed through an increase in the coefficient of determination and a decrease in the mean absolute ratio error according to the input vector (Figure 10).
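- A small sketch of the two figures of merit mentioned here, the coefficient of determination and the mean absolute ratio error, using NumPy; the text does not give an exact formula for the ratio error, so the percentage form below is an assumption.

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination (R^2) between measured and predicted roughness."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mean_absolute_ratio_error(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute ratio error in percent (one plausible reading of the metric)."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

measured = np.array([0.31, 0.28, 0.35, 0.30])
predicted = np.array([0.30, 0.29, 0.33, 0.31])
print(r_squared(measured, predicted), mean_absolute_ratio_error(measured, predicted))
```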
- Figure 11 is a block diagram for explaining a computing device according to an embodiment of the present invention.
- The computing device TN100 may be a device described herein (e.g., an analysis device, a user terminal, etc.).
- the computing device TN100 may include at least one processor TN110, a transceiver device TN120, and a memory TN130. Additionally, the computing device TN100 may further include a storage device TN140, an input interface device TN150, an output interface device TN160, etc. Components included in the computing device TN100 may be connected by a bus TN170 and communicate with each other.
- the processor TN110 may execute a program command stored in at least one of the memory TN130 and the storage device TN140.
- the processor TN110 may refer to a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to embodiments of the present invention are performed.
- Processor TN110 may be configured to implement procedures, functions, and methods described in connection with embodiments of the present invention.
- the processor TN110 may control each component of the computing device TN100.
- Each of the memory TN130 and the storage device TN140 can store various information related to the operation of the processor TN110.
- Each of the memory TN130 and the storage device TN140 may be comprised of at least one of a volatile storage medium and a non-volatile storage medium.
- the memory TN130 may be comprised of at least one of read only memory (ROM) and random access memory (RAM).
- the transceiving device TN120 can transmit or receive wired signals or wireless signals.
- the transmitting and receiving device (TN120) can be connected to a network and perform communication.
- the embodiments of the present invention are not only implemented through the apparatus and/or method described so far, but may also be implemented through a program that realizes the function corresponding to the configuration of the embodiment of the present invention or a recording medium on which the program is recorded.
- Such an implementation can be readily realized by anyone skilled in the art from the description of the embodiments above.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Software Systems (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The present invention discloses a device and method for analyzing average surface roughness by extracting a feature from a membrane image. The analysis device comprises: an input vector generation unit that, when a membrane image related to a nanofiber is input, generates multiple processed images by processing the membrane image, groups the generated processed images into an image group, and preprocesses the processed images included in the grouped image group to generate input vectors; and an analysis unit that applies the input vectors included in the image group to a pre-trained prediction model to output average surface roughness values in log units, and calculates statistical average surface roughness values of the membrane image using the multiple output average surface roughness values.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020220152050A KR20240070310A (ko) | 2022-11-14 | 2022-11-14 | 멤브레인 이미지의 특성 추출을 이용한 표면평균 거칠기 분석 장치 및 방법 |
KR10-2022-0152050 | 2022-11-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024106682A1 true WO2024106682A1 (fr) | 2024-05-23 |
Family
ID=91085062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2023/010717 WO2024106682A1 (fr) | 2022-11-14 | 2023-07-25 | Dispositif et procédé d'analyse de rugosité de surface moyenne par extraction d'une caractéristique d'une image de membrane |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20240070310A (fr) |
WO (1) | WO2024106682A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118522655A (zh) * | 2024-07-17 | 2024-08-20 | 西安奕斯伟材料科技股份有限公司 | 晶圆及其表面纳米形貌的预测方法、装置、设备及介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110758A (zh) * | 2019-04-15 | 2019-08-09 | 南京航空航天大学 | 一种基于卷积神经网络的表面粗糙度分类方法 |
CN113850339A (zh) * | 2021-09-30 | 2021-12-28 | 北京科技大学 | 一种基于多光源表面图像的粗糙度等级预测方法及装置 |
KR20220094791A (ko) * | 2020-12-29 | 2022-07-06 | 연세대학교 산학협력단 | 인공지능 학습 및 빅데이터 구축을 위한 피부질환 데이터 증강 방법 및 시스템 |
JP2022113177A (ja) * | 2021-01-24 | 2022-08-04 | 株式会社ジェイテクト | 表面状態推定方法及び表面状態推定システム |
US11468552B1 (en) * | 2021-07-12 | 2022-10-11 | The Florida International University Board Of Trustees | Systems and methods for quantifying concrete surface roughness |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11450012B2 (en) | 2019-10-31 | 2022-09-20 | Kla Corporation | BBP assisted defect detection flow for SEM images |
- 2022-11-14: KR KR1020220152050A patent/KR20240070310A/ko unknown
- 2023-07-25: WO PCT/KR2023/010717 patent/WO2024106682A1/fr unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110758A (zh) * | 2019-04-15 | 2019-08-09 | 南京航空航天大学 | 一种基于卷积神经网络的表面粗糙度分类方法 |
KR20220094791A (ko) * | 2020-12-29 | 2022-07-06 | 연세대학교 산학협력단 | 인공지능 학습 및 빅데이터 구축을 위한 피부질환 데이터 증강 방법 및 시스템 |
JP2022113177A (ja) * | 2021-01-24 | 2022-08-04 | 株式会社ジェイテクト | 表面状態推定方法及び表面状態推定システム |
US11468552B1 (en) * | 2021-07-12 | 2022-10-11 | The Florida International University Board Of Trustees | Systems and methods for quantifying concrete surface roughness |
CN113850339A (zh) * | 2021-09-30 | 2021-12-28 | 北京科技大学 | 一种基于多光源表面图像的粗糙度等级预测方法及装置 |
Non-Patent Citations (1)
Title |
---|
KANG, DONG HEE; KANG, HYUN WOOK: "Prediction of Surface Roughness for Modeling Pressure Drop in Porous Nanofiber Membrane using the Developed CNN Algorithm", PROCEEDINGS OF THE KSME FLUID ENGINEERING DIVISION 2023 SPRING CONFERENCE, 1 May 2023 (2023-05-01), pages 253 - 254, XP009557103 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118522655A (zh) * | 2024-07-17 | 2024-08-20 | 西安奕斯伟材料科技股份有限公司 | 晶圆及其表面纳米形貌的预测方法、装置、设备及介质 |
Also Published As
Publication number | Publication date |
---|---|
KR20240070310A (ko) | 2024-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2024106682A1 (fr) | Dispositif et procédé d'analyse de rugosité de surface moyenne par extraction d'une caractéristique d'une image de membrane | |
US10535141B2 (en) | Differentiable jaccard loss approximation for training an artificial neural network | |
CN111431986B (zh) | 基于5g和ai云边协同的工业智能质检系统 | |
WO2022005091A1 (fr) | Procédé et appareil de lecture de l'âge d'un os | |
CN111447190A (zh) | 一种加密恶意流量的识别方法、设备及装置 | |
US20140286527A1 (en) | Systems and methods for accelerated face detection | |
WO2019050108A1 (fr) | Technologie pour analyser un comportement anormal dans un système basé sur un apprentissage profond en utilisant une imagerie de données | |
CN113160200B (zh) | 一种基于多任务孪生网络的工业图像缺陷检测方法及系统 | |
WO2020222391A1 (fr) | Système et procédé pour couche d'ondelettes réversible pour réseaux neuronaux | |
KR101963404B1 (ko) | 2-단계 최적화 딥 러닝 방법, 이를 실행시키기 위한 프로그램을 기록한 컴퓨터 판독 가능한 기록매체 및 딥 러닝 시스템 | |
CN113591674B (zh) | 一种面向实时视频流的边缘环境行为识别系统 | |
CN116342894B (zh) | 基于改进YOLOv5的GIS红外特征识别系统及方法 | |
US20120002938A1 (en) | Learned cognitive system | |
CN113469997A (zh) | 平面玻璃的检测方法、装置、设备和介质 | |
KR101822963B1 (ko) | 이진 영상을 이용한 결함 탐지장치 및 방법 | |
Ghayal et al. | Efficient eye diagram analyzer for optical modulation format recognition using deep learning technique | |
CN116721091A (zh) | 一种布匹瑕疵检测方法、装置及可读介质 | |
CN110490852A (zh) | 目标对象的检索方法、装置、计算机可读介质及电子设备 | |
CN117853573A (zh) | 一种视频处理方法、装置、电子设备及计算机可读介质 | |
KR20200135044A (ko) | 영상 변환을 이용한 머신러닝 기반 결함 분류 장치 및 방법 | |
KR102456189B1 (ko) | 클라우드 엣지 기반의 영상 분석 시스템 | |
CN114445875A (zh) | 基于深度学习的身份识别与人脸比对系统及训练方法 | |
CN111817902B (zh) | 一种控制带宽的方法和系统 | |
Babatunde et al. | Machine Learning Model for Classifying Free Space Optics Channel Impairments | |
CN110956366A (zh) | 一种装维质检中分光器施工一致性检验方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23891756 Country of ref document: EP Kind code of ref document: A1 |