WO2024106682A1 - Device and method for analyzing average surface roughness by extracting feature from membrane image - Google Patents

Device and method for analyzing average surface roughness by extracting feature from membrane image

Info

Publication number
WO2024106682A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
input vector
learning
membrane
processed images
Prior art date
Application number
PCT/KR2023/010717
Other languages
French (fr)
Korean (ko)
Inventor
강현욱
강동희
김나경
Original Assignee
전남대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 전남대학교산학협력단
Publication of WO2024106682A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Definitions

  • one detection algorithm implemented in scanning electron microscopy (SEM) tools exclusively uses deep learning to detect defects-of-interest (DOI).
  • SEM: scanning electron microscopy
  • the problem to be solved by the present invention is to provide a surface average roughness analysis device and method using feature extraction of a membrane image to predict the surface average roughness of a nanofiber-based membrane image using a prediction model including a convolutional neural network.
  • when a membrane image related to nanofibers is input, the analysis device processes the membrane image to generate a plurality of processed images and groups the generated processed images into an image group
  • the analysis device includes an input vector generator that preprocesses each processed image included in the grouped image group to generate an input vector, and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values
  • the analysis device further includes a learning unit that trains a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a learning CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a learning binarized image, and learning spectrum data is input as the learning input vector, the prediction model outputs a predicted surface average roughness value in log units corresponding to the learning input vector
  • the learning unit trains the prediction model so that the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the learning input vector through the convolution and pooling operations of the prediction model, and repeats the training a preset number of times
  • the input vector generator generates a plurality of processed images by cutting the membrane image into a preset size, randomly selects a preset number of processed images from among the generated processed images, and groups the selected processed images into one image group
  • the input vector generator is characterized by cutting the membrane image to a preset size using a sliding-window method.
  • the input vector generator is characterized in that the membrane image is resized to a preset resolution before cutting the membrane image.
  • the input vector generator generates a CLAHE image by normalizing the pixel values of each processed image based on the CLAHE technique, generates a binarized image by binarizing the CLAHE image based on a global thresholding method, generates spectral data by transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform, and generates the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels
  • in the analysis method, when a membrane image related to nanofibers is input, the analysis device processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; the analysis device then applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values
  • the method further includes training a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a learning CLAHE image, a learning binarized image, and learning spectrum data is input as the learning input vector, the prediction model outputs a predicted surface average roughness value in log units corresponding to the learning input vector
  • the training step includes training the prediction model so that the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the learning input vector through the convolution and pooling operations of the prediction model, and repeating the training a preset number of times
  • the step of generating the input vector includes cutting the membrane image to a preset size to generate a plurality of processed images, randomly selecting a preset number of processed images from among the generated processed images, and grouping the selected processed images into one image group
  • the step of generating the input vector is characterized by cutting the membrane image to a preset size using a sliding window method.
  • the step of generating the input vector is characterized by resizing the membrane image to a preset resolution before cutting the membrane image.
  • the step of generating the input vector includes generating a CLAHE image by normalizing the pixel values of each processed image based on the CLAHE technique, generating a binarized image by binarizing the CLAHE image based on a global thresholding method, generating spectral data by transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform, and generating the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels
  • the analysis system includes an analysis device that analyzes the surface average roughness of a membrane image related to nanofibers, and a user terminal that receives information related to the analyzed surface average roughness from the analysis device and outputs the received information; when a membrane image related to nanofibers is input, the analysis device processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and includes an input vector generator that preprocesses each processed image included in the grouped image group to generate an input vector, and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values
  • the surface average roughness of a nanofiber-based membrane image can be predicted using a prediction model including a convolutional neural network, and a statistical surface average roughness value for the corresponding membrane image can be calculated.
  • FIG. 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
  • Figure 2 is a block diagram for explaining an analysis device according to an embodiment of the present invention.
  • Figure 3 is a block diagram for explaining a control unit according to an embodiment of the present invention.
  • Figure 4 is a diagram for explaining the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention.
  • Figure 5 is a diagram for explaining a process of processing an original image according to an embodiment of the present invention.
  • Figure 6 is a diagram for explaining the process of generating an input vector according to an embodiment of the present invention.
  • Figure 7 is a diagram for explaining a global binarization image according to an embodiment of the present invention.
  • Figure 8 is a flowchart for explaining an analysis method for predicting average surface roughness according to an embodiment of the present invention.
  • Figure 9 is a diagram for explaining performance evaluation of an analysis device through continuous probability distribution according to an embodiment of the present invention.
  • Figure 10 is a diagram for explaining the performance evaluation of an analysis device for prediction accuracy according to an embodiment of the present invention.
  • Figure 11 is a block diagram for explaining a computing device according to an embodiment of the present invention.
  • when a component is mentioned as being 'connected' or 'coupled' to another component, it may be directly connected or coupled to that other component, but it should be understood that other components may exist in between. On the other hand, when a component is mentioned as being 'directly connected' or 'directly coupled' to another component, it should be understood that no other components exist in between.
  • 'and/or' includes a combination of a plurality of listed items or any of the plurality of listed items.
  • 'A or B' may include 'A', 'B', or 'both A and B'.
  • FIG. 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
  • the analysis system 300 predicts the average surface roughness of a nanofiber-based membrane image using a prediction model including a convolution neural network (CNN).
  • the analysis system 300 includes an analysis device 100 and a user terminal 200.
  • the analysis device 100 analyzes the surface average roughness of the membrane image related to the nanofiber.
  • the membrane image may be a SEM (Scanning Electron Microscope) image.
  • the analysis device 100 processes the membrane image to generate a plurality of processed images.
  • the processed image refers to an image obtained by cutting one membrane image to a preset size.
  • the analysis device 100 selects some of the plurality of generated processed images and groups them into one image group.
  • the analysis device 100 generates an input vector by preprocessing each processed image included in the grouped image group.
  • the analysis device 100 applies each input vector included in the image group to a previously learned prediction model and outputs each surface average roughness value in log units.
  • the prediction model may be a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer, but is not limited to this.
  • the analysis device 100 calculates a statistical average surface roughness value of the membrane image using a plurality of output average surface roughness values.
  • the statistical surface average roughness value represents the surface average roughness for the entire image, not the surface average roughness for a part of the membrane image, and means a value that reflects statistical concepts such as average value and mode.
  • the statistical average surface roughness value may be a meaningful value that statistically represents the average surface roughness of the entire membrane image.
  • the user terminal 200 is a terminal used by the user and communicates with the analysis device 100.
  • the user terminal 200 receives information related to the statistical average surface roughness calculated from the analysis device 100.
  • the information related to the statistical average surface roughness received may be an analysis result of the membrane image transmitted from the user terminal 200, but is not limited thereto.
  • the user terminal 200 outputs information related to the received statistical surface average roughness to help the user intuitively recognize the surface average roughness of the membrane image remotely.
  • the user terminal 200 may be a computer system such as a desktop, laptop, smartphone, handheld PC, etc.
  • the user terminal 200 is shown as a separate configuration from the analysis device 100, but it is not limited to this and may be implemented as a single configuration depending on the situation.
  • the analysis system 300 may establish a communication network 350 between the analysis device 100 and the user terminal 200 to support communication between them.
  • the communication network 350 may be composed of a backbone network and a subscriber network.
  • the backbone network may be composed of one or more integrated networks among the X.25 network, Frame Relay network, ATM network, MPLS (Multi-Protocol Label Switching) network, and GMPLS (Generalized Multi-Protocol Label Switching) network.
  • Subscriber networks include FTTH (Fiber To The Home), ADSL (Asymmetric Digital Subscriber Line), cable networks, ZigBee, Bluetooth, wireless LAN (IEEE 802.11b, IEEE 802.11a, IEEE 802.11g, IEEE 802.11n), WirelessHART (ISO/IEC 62591-1), ISA100.11a (ISO/IEC 62734), CoAP (Constrained Application Protocol), MQTT (Message Queuing Telemetry Transport), WiBro (Wireless Broadband), WiMAX, 3G, HSDPA (High Speed Downlink Packet Access), 4G, 5G, and 6G.
  • the communication network 350 may be an Internet network or a mobile communication network. Additionally, the communication network 350 may include any wireless or wired communication method that is widely known or will be developed in the future.
  • Figure 2 is a block diagram for explaining an analysis device according to an embodiment of the present invention.
  • Figure 3 is a block diagram for explaining a control unit according to an embodiment of the present invention.
  • Figure 4 is a diagram for explaining the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention.
  • Figure 5 is a diagram for explaining the process of processing an original image according to an embodiment of the present invention.
  • Figure 6 is a diagram for explaining the process of generating an input vector according to an embodiment of the present invention.
  • Figure 7 is a diagram for explaining a global binarization image according to an embodiment of the present invention.
  • the analysis device 100 includes a communication unit 10, an input unit 30, a control unit 50, an output unit 70, and a storage unit 90.
  • the communication unit 10 performs communication with the user terminal 200.
  • the communication unit 10 may receive a membrane image related to nanofibers from the user terminal 200.
  • the membrane image may be an SEM image. Additionally, the communication unit 10 may transmit information about the statistical average surface roughness value of the membrane image to the user terminal 200.
  • the input unit 30 receives a learning input vector for learning a prediction model.
  • the input vector for learning may be three-channel data that merges a CLAHE (Contrast-Limited Adaptive Histogram Equalization) image for learning, a binarized image for learning, and spectrum data for learning. Additionally, the input unit 30 can input membrane images related to nanofibers.
  • CLAHE: Contrast-Limited Adaptive Histogram Equalization
  • the control unit 50 performs overall control of the analysis device 100.
  • the control unit 50 may include an input vector generation unit 52 and an analysis unit 53, and may further include a learning unit 51.
  • the learning unit 51 trains a prediction model that predicts the average surface roughness of the membrane image.
  • the prediction model derives the surface average roughness value in log units (log Ra) from the nanofiber membrane image through a series of convolution and pooling operations that extract spatial patterns.
  • the prediction model includes a convolutional neural network consisting of a convolutional layer, a pooling layer, and a fully connected layer.
  • the convolutional neural network may include four convolutional layers, four pooling layers, and two fully connected layers, and may be a neural network connected in the following order: a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a third pooling layer, a fourth convolutional layer, a fourth pooling layer, a first fully connected layer, and a second fully connected layer.
  • the convolution layer performs convolution operations using convolution filters and operations using activation functions.
  • the activation function may be a Rectified Linear Unit (ReLU) that can describe non-linear relationships.
  • the pooling layer performs pooling (or sub-sampling) operations using a pooling filter.
  • the feature map of each convolution layer is subsampled with a 2x2 max pooling operation to reduce the number of feature map coefficients, and the number of hidden neurons can be determined by K-fold cross validation, which helps avoid overfitting; a minimal architecture sketch follows.
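The patent does not name a deep-learning framework or exact filter counts, so the following is only a minimal PyTorch sketch of the architecture described above: four convolution stages with ReLU activations, each followed by 2x2 max pooling, and two fully connected layers regressing a single log-scale roughness value. The channel widths, kernel sizes, and the 512x512 input size (inferred from the 10 µm / 51.2 pixels-per-µm crop described later, but still an assumption) are illustrative.

```python
import torch
import torch.nn as nn

class RoughnessCNN(nn.Module):
    """Sketch of the described prediction model: 4 convolution layers, 4 pooling layers,
    and 2 fully connected layers mapping a 3-channel input vector to log(Ra)."""

    def __init__(self, in_size: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # 1st convolution layer
            nn.MaxPool2d(2),                                         # 1st pooling layer (2x2 max pooling)
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # 2nd convolution layer
            nn.MaxPool2d(2),                                         # 2nd pooling layer
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # 3rd convolution layer
            nn.MaxPool2d(2),                                         # 3rd pooling layer
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), # 4th convolution layer
            nn.MaxPool2d(2),                                         # 4th pooling layer
        )
        feat = in_size // 16  # spatial size after four 2x2 poolings
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * feat * feat, 256), nn.ReLU(),  # 1st fully connected layer
            nn.Linear(256, 1),                             # 2nd fully connected layer -> log(Ra)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x)).squeeze(-1)
```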
  • when the prediction model receives, as the learning input vector, three-channel data merging the learning CLAHE image, the learning binarized image, and the learning spectrum data, it outputs the predicted surface average roughness value in log units corresponding to the learning input vector.
  • based on this prediction model structure, the learning unit 51 trains the model so that the predicted surface average roughness value output through the convolution and pooling operations approaches the correct surface average roughness value (known in advance) corresponding to the learning input vector. To this end, the learning unit 51 can train the model with the Adam optimizer so that the difference between the predicted value and the correct value is minimized. Additionally, the learning unit 51 can use the mean squared error as the loss function and check convergence through the change in the loss value (the difference between the correct answer and the predicted value), as in the training sketch below.
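Assuming the RoughnessCNN sketch above and tensors of training input vectors with their measured log(Ra) answer values, a minimal sketch of the training procedure described here (Adam optimizer, mean squared error loss, a preset number of repetitions) might look like the following; the epoch count, learning rate, and batch size are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train_roughness_model(model, inputs, targets, epochs=100, lr=1e-3, batch_size=16):
    """Minimize the mean squared error between predicted and correct log(Ra) values with Adam,
    repeating the training loop a preset number of times (epochs)."""
    loader = DataLoader(TensorDataset(inputs, targets), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):
        total = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)  # difference between predicted value and answer value
            loss.backward()
            optimizer.step()
            total += loss.item() * x.size(0)
        print(f"epoch {epoch + 1}: mse = {total / len(loader.dataset):.4f}")  # monitor convergence
    return model
```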
  • when a membrane image related to nanofibers is input through the communication unit 10 or the input unit 30, the input vector generator 52 generates an input vector so that a value close to the true surface average roughness can be predicted. That is, the input vector generator 52 processes the membrane image to generate a plurality of processed images, groups the generated processed images into image groups, and preprocesses each processed image included in the grouped image groups to generate an input vector.
  • the input vector generator 52 cuts the input membrane image to a preset size to generate a plurality of processed images (63).
  • the input vector generator 52 may cut the membrane image to a preset size using a sliding-window method. That is, the input vector generator 52 can move the window from the upper left to the lower right of the image and cut it to fit the window size (62).
  • the input vector generator 52 resizes the membrane image to a preset resolution before cutting the membrane image.
  • the input vector generator 52 resizes the image to a resolution of 51.2 pixels/µm and then cuts it into patches 10 µm wide.
  • the input vector generator 52 increases the number of processed images by flipping or rotating the generated plurality of processed images 63 (64).
  • the input vector generator 52 randomly selects a preset number of processed images from among the augmented processed images (64) and groups the selected processed images into one image group (65). For example, the input vector generator 52 may randomly select 16 images from among the images cut to a fixed size from an SEM image having one surface average roughness value, and group the selected images into one image group, as in the sketch below.
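As an illustration of the resizing, sliding-window cropping, augmentation, and random grouping steps described above, the following OpenCV/NumPy sketch assumes the physical width of the SEM image is known to the caller; the 51.2 pixels-per-µm density, the 10 µm window, and the group size of 16 follow the description, while the non-overlapping stride and the specific flip/rotation choices are assumptions.

```python
import random
import cv2
import numpy as np

def crop_patches(image, image_width_um, pixels_per_um=51.2, window_um=10.0):
    """Resize the SEM image to a preset pixel density, then slide a window from the
    upper-left to the lower-right and cut out square patches of a preset size."""
    h, w = image.shape[:2]
    new_w = int(round(image_width_um * pixels_per_um))
    new_h = int(round(h * new_w / w))
    resized = cv2.resize(image, (new_w, new_h))
    win = int(round(window_um * pixels_per_um))  # 10 um * 51.2 px/um = 512 px
    return [resized[t:t + win, l:l + win]
            for t in range(0, new_h - win + 1, win)
            for l in range(0, new_w - win + 1, win)]

def augment(patches):
    """Increase the number of processed images by flipping and rotating them."""
    out = []
    for p in patches:
        out.extend([p, cv2.flip(p, 1), cv2.rotate(p, cv2.ROTATE_90_CLOCKWISE)])
    return out

def make_group(patches, group_size=16, seed=None):
    """Randomly select a preset number of processed images and group them into one image group."""
    return random.Random(seed).sample(patches, group_size)
```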
  • the input vector generator 52 may preprocess each of the plurality of processed images to generate a plurality of input vectors.
  • the input vector generator 52 separates each processed image into RGB channels and converts the processed image into a grayscale image based on the separated RGB channels (66).
  • the input vector generator 52 generates a CLAHE image by normalizing the processed image converted to a grayscale image. That is, the input vector generator 52 can normalize the pixel values of each processed image based on the CLAHE technique in order to minimize noise caused by resolution differences between images and to emphasize the pixel-value characteristics of the fiber boundaries, which are an important factor in the surface average roughness.
  • the input vector generator 52 generates a binarized image by binarizing the CLAHE image based on a global thresholding method.
  • the input vector generator 52 spectralizes the CLAHE image based on two-dimensional discrete Fourier transform (2d-DFT) to generate spectral data.
  • 2d-DFT: two-dimensional discrete Fourier transform
  • the frequency domain obtained through the two-dimensional discrete Fourier transform represents a magnitude spectrum that depends on the thickness, distribution, and direction of the fibers; the frequency band differs depending on the position in the image and appears as brightness differences at each position, so the roughness characteristics can be expressed as the fiber distribution.
  • the difference in frequency can be expressed in pixels.
  • the input vector generator 52 sequentially stacks the arrays of the CLAHE image, binarized image, and spectrum data (67), and merges the stacked arrays into three channels to generate an input vector (68).
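The preprocessing chain described above (grayscale conversion, CLAHE normalization, global thresholding, 2D-DFT magnitude spectrum, and merging into a 3-channel input vector) could be sketched with OpenCV and NumPy roughly as follows; the CLAHE clip limit and tile size, the use of Otsu's method as the global threshold, and the log-magnitude scaling of the spectrum are assumptions rather than values taken from the patent.

```python
import cv2
import numpy as np

def build_input_vector(patch):
    """Turn one processed image into a 3-channel input vector:
    CLAHE image, binarized image, and 2D-DFT magnitude spectrum."""
    # separate the color channels and convert the patch to a grayscale image
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY) if patch.ndim == 3 else patch

    # normalize pixel values with CLAHE (clip limit / tile grid are illustrative)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

    # global thresholding of the CLAHE image (Otsu's method chosen here as one global threshold)
    _, binary = cv2.threshold(clahe, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # two-dimensional discrete Fourier transform -> centered log-magnitude spectrum
    fft = np.fft.fftshift(np.fft.fft2(clahe.astype(np.float32)))
    spectrum = cv2.normalize(np.log1p(np.abs(fft)), None, 0, 255, cv2.NORM_MINMAX)

    # stack the three arrays and merge them into one 3-channel input vector scaled to [0, 1]
    return np.stack([clahe, binary, spectrum], axis=-1).astype(np.float32) / 255.0
```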
  • the analysis unit 53 applies each input vector included in the image group to the prediction model learned in the learning unit 51 and outputs each surface average roughness value in log units.
  • the analysis unit 53 calculates a statistical average surface roughness value of the membrane image using the plurality of output average surface roughness values. Through this, users can obtain meaningful statistical average surface roughness values.
  • the analysis unit 53 can express the surface average roughness value in log units output from the prediction model on a continuous probability distribution and output statistical values through the mode and average values appearing on the continuous probability distribution.
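One way to realize this aggregation step is sketched below: the per-patch log(Ra) predictions of one image group are expressed as a continuous probability distribution (a Gaussian kernel density estimate is used here, which is an assumption) and the mean and mode are reported. Whether the log unit is natural or base-10 is not stated in the source, so the back-conversion with np.exp is also an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

def aggregate_roughness(log_ra_predictions):
    """Express the predicted log(Ra) values on a continuous probability distribution
    and return statistical values (mean and mode) for the whole membrane image."""
    preds = np.asarray(log_ra_predictions, dtype=float)
    mean_log_ra = preds.mean()
    kde = gaussian_kde(preds)                         # continuous probability distribution
    grid = np.linspace(preds.min(), preds.max(), 512)
    mode_log_ra = grid[np.argmax(kde(grid))]          # mode of the estimated distribution
    # back-convert from log units (natural log assumed here)
    return {"mean_Ra": float(np.exp(mean_log_ra)), "mode_Ra": float(np.exp(mode_log_ra))}
```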
  • the output unit 70 outputs the membrane image transmitted through the communication unit 10 or the input unit 30.
  • the output unit 70 outputs the input vector generated by the control unit 50 and outputs the statistical average surface roughness value calculated by the control unit 50.
  • the output unit 70 may include at least one of a liquid crystal display (LCD), a thin-film-transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, and a 3D display.
  • the storage unit 90 stores a program or algorithm for driving the analysis device 100.
  • the storage unit 90 stores the membrane image transmitted through the communication unit 10 or the input unit 30.
  • the storage unit 90 stores the input vector generated by the control unit 50 and the statistical average surface roughness value calculated by the control unit 50.
  • the storage unit 90 may include at least one storage medium among a flash memory type, hard disk type, multimedia card micro type, card type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, magnetic disk, and optical disk.
  • Figure 8 is a flowchart for explaining an analysis method for predicting average surface roughness according to an embodiment of the present invention.
  • the analysis method predicts the surface average roughness of a nanofiber-based membrane image using a prediction model including a convolutional neural network to calculate a statistical surface average roughness value for the membrane image. Through this, the analysis method allows users to quickly and accurately recognize meaningful information related to the surface average roughness of the membrane image.
  • a membrane image related to the nanofiber is input to the analysis device 100.
  • the analysis device 100 may receive and input a membrane image from the user terminal 200, or the membrane image may be directly input by the user.
  • the analysis device 100 generates an input vector for the membrane image.
  • the analysis device 100 processes the membrane image to generate a plurality of processed images.
  • the analysis device 100 groups the generated plurality of processed images into image groups and preprocesses each processed image included in the grouped image group to generate an input vector.
  • the input vector may be data obtained by merging the CLAHE image, binarized image, and spectrum data into three channels.
  • in step S130, the analysis device 100 calculates a statistical average surface roughness value.
  • the analysis device 100 applies each input vector included in the image group to a previously learned prediction model and outputs a surface average roughness value in log units.
  • the analysis device 100 can calculate statistical average surface roughness values, such as the mode and average value of the membrane image, using the plurality of output average surface roughness values.
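Tying the earlier sketches together, a hedged end-to-end version of the method of Figure 8 (membrane image input, input vector generation, statistical roughness calculation) could look like the following; it reuses the hypothetical crop_patches, augment, make_group, build_input_vector, and aggregate_roughness helpers defined above together with a trained RoughnessCNN.

```python
import numpy as np
import torch

def analyze_membrane_image(model, image, image_width_um):
    """Generate an image group of input vectors from a membrane image and calculate
    the statistical surface average roughness value from the per-patch predictions."""
    patches = make_group(augment(crop_patches(image, image_width_um)), group_size=16)
    vectors = np.stack([build_input_vector(p) for p in patches])   # (16, H, W, 3)
    batch = torch.from_numpy(vectors).permute(0, 3, 1, 2).float()  # NHWC -> NCHW
    with torch.no_grad():
        log_ra = model(batch).cpu().numpy()                        # one log(Ra) per patch
    return aggregate_roughness(log_ra)
```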
  • Figure 9 is a diagram for explaining the performance evaluation of an analysis device through a continuous probability distribution according to an embodiment of the present invention, and Figure 10 is a diagram for explaining the performance evaluation of an analysis device for prediction accuracy according to an embodiment of the present invention.
  • the analysis device 100 can perform performance evaluation through various methods.
  • the analysis device 100 may express the training predicted values and the test predicted values, which are the log-unit surface average roughness values output through the prediction model, on a continuous probability distribution. Through this, the user can confirm that the mode and mean of the predicted values appear close to the experimentally measured values (Figure 9). In addition, it can be confirmed that the prediction accuracy of the analysis device 100 improves through an increase in the coefficient of determination and a decrease in the average absolute ratio error depending on the input vector (Figure 10); one way to compute these indicators is sketched below.
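The performance indicators mentioned above can be computed along the following lines; interpreting the "average absolute ratio error" as the mean of |measured - predicted| / |measured| is an assumption, since the patent does not give the exact formula.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Coefficient of determination (R^2) and a mean absolute ratio error between
    measured and predicted surface average roughness values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    ratio_error = float(np.mean(np.abs(y_true - y_pred) / np.abs(y_true)))
    return r2, ratio_error
```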
  • Figure 11 is a block diagram for explaining a computing device according to an embodiment of the present invention.
  • the computing device TN100 may be a device described herein (e.g., an analysis device, a user terminal, etc.).
  • the computing device TN100 may include at least one processor TN110, a transceiver device TN120, and a memory TN130. Additionally, the computing device TN100 may further include a storage device TN140, an input interface device TN150, an output interface device TN160, etc. Components included in the computing device TN100 may be connected by a bus TN170 and communicate with each other.
  • the processor TN110 may execute a program command stored in at least one of the memory TN130 and the storage device TN140.
  • the processor TN110 may refer to a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which methods according to embodiments of the present invention are performed.
  • Processor TN110 may be configured to implement procedures, functions, and methods described in connection with embodiments of the present invention.
  • the processor TN110 may control each component of the computing device TN100.
  • Each of the memory TN130 and the storage device TN140 can store various information related to the operation of the processor TN110.
  • Each of the memory TN130 and the storage device TN140 may be comprised of at least one of a volatile storage medium and a non-volatile storage medium.
  • the memory TN130 may be comprised of at least one of read only memory (ROM) and random access memory (RAM).
  • the transceiving device TN120 can transmit or receive wired signals or wireless signals.
  • the transmitting and receiving device (TN120) can be connected to a network and perform communication.
  • the embodiments of the present invention may be implemented not only through the apparatus and/or method described above, but also through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which such a program is recorded.
  • such an implementation can easily be realized by those skilled in the art from the description of the embodiments above.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a device and a method for analyzing average surface roughness by extracting features from a membrane image. The analysis device comprises: an input vector generation unit which, when a membrane image related to a nanofiber is input, generates multiple processed images by processing the membrane image, groups the generated processed images into an image group, and preprocesses the processed images included in the grouped image group to generate input vectors; and an analysis unit which applies the input vectors included in the image group to a pre-trained prediction model to output logarithmic average surface roughness values, and calculates a statistical average surface roughness value of the membrane image by using the multiple output average surface roughness values.

Description

Surface average roughness analysis device and method using feature extraction of membrane images
The present invention relates to a surface average roughness analysis device, and more specifically, to a surface average roughness analysis device and method that predict the surface average roughness from a nanofiber-based membrane image using feature extraction of the nanofiber membrane image.
With the development of the semiconductor manufacturing industry, demands for yield management, and in particular for metrology and inspection systems, are increasing.
To meet this need, one detection algorithm implemented in scanning electron microscopy (SEM) tools exclusively uses deep learning to detect defects-of-interest (DOI). However, these conventional methods are disadvantageous for various reasons or do not provide optimal performance.
Therefore, research on technologies with improved defect detection performance is needed.
The problem to be solved by the present invention is to provide a surface average roughness analysis device and method that use feature extraction of a membrane image to predict the surface average roughness of a nanofiber-based membrane image with a prediction model including a convolutional neural network.
To solve the above problem, the analysis device according to the present invention includes: an input vector generator that, when a membrane image related to nanofibers is input, processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
The device further includes a learning unit that trains a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a learning CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a learning binarized image, and learning spectrum data is input as the learning input vector, the prediction model outputs a predicted surface average roughness value in log units corresponding to the learning input vector.
The learning unit trains the prediction model so that the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the learning input vector through the convolution and pooling operations of the prediction model, and repeats the training a preset number of times.
The input vector generator generates a plurality of processed images by cutting the membrane image into a preset size, randomly selects a preset number of processed images from among the generated processed images, and groups the selected processed images into one image group.
The input vector generator cuts the membrane image to a preset size using a sliding-window method.
The input vector generator resizes the membrane image to a preset resolution before cutting the membrane image.
The input vector generator generates a CLAHE image by normalizing the pixel values of each processed image based on the CLAHE technique, generates a binarized image by binarizing the CLAHE image based on a global thresholding method, generates spectral data by transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform, and generates the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels.
In the analysis method according to the present invention, when a membrane image related to nanofibers is input, the analysis device processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; the analysis device then applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
The method further includes training a prediction model including a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer; when three-channel data merging a learning CLAHE image, a learning binarized image, and learning spectrum data is input as the learning input vector, the prediction model outputs a predicted surface average roughness value in log units corresponding to the learning input vector.
The training step includes training the prediction model so that the predicted surface average roughness value approaches the correct surface average roughness value corresponding to the learning input vector through the convolution and pooling operations of the prediction model, and repeating the training a preset number of times.
The step of generating the input vector includes cutting the membrane image to a preset size to generate a plurality of processed images, randomly selecting a preset number of processed images from among the generated processed images, and grouping the selected processed images into one image group.
The step of generating the input vector cuts the membrane image to a preset size using a sliding-window method.
The step of generating the input vector resizes the membrane image to a preset resolution before cutting the membrane image.
The step of generating the input vector includes generating a CLAHE image by normalizing the pixel values of each processed image based on the CLAHE technique, generating a binarized image by binarizing the CLAHE image based on a global thresholding method, generating spectral data by transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform, and generating the input vector by merging the CLAHE image, the binarized image, and the spectral data into three channels.
The analysis system according to the present invention includes an analysis device that analyzes the surface average roughness of a membrane image related to nanofibers, and a user terminal that receives information related to the analyzed surface average roughness from the analysis device and outputs the received information. When a membrane image related to nanofibers is input, the analysis device processes the membrane image to generate a plurality of processed images, groups the generated processed images into an image group, and includes an input vector generator that preprocesses each processed image included in the grouped image group to generate an input vector, and an analysis unit that applies each input vector included in the image group to a previously trained prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
According to an embodiment of the present invention, the surface average roughness of a nanofiber-based membrane image can be predicted using a prediction model including a convolutional neural network, and a statistical surface average roughness value for the membrane image can be calculated.
This allows users to quickly and accurately recognize meaningful information related to the surface average roughness of the membrane image.
Figure 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
Figure 2 is a block diagram for explaining an analysis device according to an embodiment of the present invention.
Figure 3 is a block diagram for explaining a control unit according to an embodiment of the present invention.
Figure 4 is a diagram for explaining the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention.
Figure 5 is a diagram for explaining the process of processing an original image according to an embodiment of the present invention.
Figure 6 is a diagram for explaining the process of generating an input vector according to an embodiment of the present invention.
Figure 7 is a diagram for explaining a global binarization image according to an embodiment of the present invention.
Figure 8 is a flowchart for explaining an analysis method for predicting surface average roughness according to an embodiment of the present invention.
Figure 9 is a diagram for explaining the performance evaluation of an analysis device through a continuous probability distribution according to an embodiment of the present invention.
Figure 10 is a diagram for explaining the performance evaluation of an analysis device for prediction accuracy according to an embodiment of the present invention.
Figure 11 is a block diagram for explaining a computing device according to an embodiment of the present invention.
Below, with reference to the attached drawings, embodiments of the present invention are described in detail so that those with ordinary skill in the art to which the present invention belongs can easily implement them. However, the present invention may be implemented in many different forms and is not limited to the embodiments described herein. In the drawings, parts unrelated to the description are omitted in order to clearly explain the present invention, and similar parts are given similar reference numerals throughout the specification.
In this specification and drawings (hereinafter 'this specification'), duplicate descriptions of the same components are omitted.
In this specification, when a component is mentioned as being 'connected' or 'coupled' to another component, it may be directly connected or coupled to that other component, but it should be understood that other components may exist in between. On the other hand, when a component is mentioned as being 'directly connected' or 'directly coupled' to another component, it should be understood that no other components exist in between.
The terms used in this specification are merely used to describe specific embodiments and are not intended to limit the present invention.
In this specification, singular expressions may include plural expressions unless the context clearly indicates otherwise.
In this specification, terms such as 'include' or 'have' are only intended to indicate the presence of the features, numbers, steps, operations, components, parts, or combinations thereof described in the specification, and should be understood not to exclude in advance the presence or possible addition of one or more other features, numbers, steps, operations, components, parts, or combinations thereof.
In this specification, the term 'and/or' includes a combination of a plurality of listed items or any one of the plurality of listed items. In this specification, 'A or B' may include 'A', 'B', or 'both A and B'.
In this specification, detailed descriptions of well-known functions and configurations that could obscure the gist of the present invention are omitted.
Figure 1 is a configuration diagram for explaining an analysis system according to an embodiment of the present invention.
Referring to Figure 1, the analysis system 300 predicts the surface average roughness of a nanofiber-based membrane image using a prediction model including a convolutional neural network (CNN). The analysis system 300 includes an analysis device 100 and a user terminal 200.
The analysis device 100 analyzes the surface average roughness of a membrane image related to nanofibers. Here, the membrane image may be a scanning electron microscope (SEM) image. When a membrane image is input, the analysis device 100 processes the membrane image to generate a plurality of processed images. Here, a processed image refers to an image obtained by cutting one membrane image to a preset size. The analysis device 100 selects some of the generated processed images and groups them into one image group. The analysis device 100 preprocesses each processed image included in the grouped image group to generate an input vector. The analysis device 100 applies each input vector included in the image group to a previously trained prediction model and outputs a surface average roughness value in log units for each input vector. Here, the prediction model may be a convolutional neural network consisting of a convolution layer, a pooling layer, and a fully connected layer, but is not limited thereto. The analysis device 100 calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values. Here, the statistical surface average roughness value represents the surface average roughness of the entire image rather than of a part of the membrane image, and means a value reflecting statistical concepts such as the mean and the mode. That is, the statistical surface average roughness value may be a meaningful value that statistically represents the surface average roughness of the entire membrane image.
The user terminal 200 is a terminal used by a user and communicates with the analysis device 100. The user terminal 200 receives information related to the statistical surface average roughness calculated by the analysis device 100. The received information related to the statistical surface average roughness may be the analysis result for a membrane image transmitted from the user terminal 200, but is not limited thereto. The user terminal 200 outputs the received information related to the statistical surface average roughness, helping the user intuitively recognize the surface average roughness of the membrane image from a remote location. The user terminal 200 may be a computer system such as a desktop, laptop, smartphone, or handheld PC.
In the drawing, the user terminal 200 is shown as a configuration separate from the analysis device 100, but it is not limited to this and may be implemented as a single configuration depending on the situation.
Meanwhile, the analysis system 300 may establish a communication network 350 between the analysis device 100 and the user terminal 200 to support communication between them. The communication network 350 may be composed of a backbone network and a subscriber network. The backbone network may be composed of one or more integrated networks among an X.25 network, a Frame Relay network, an ATM network, an MPLS (Multi-Protocol Label Switching) network, and a GMPLS (Generalized Multi-Protocol Label Switching) network. The subscriber network may be FTTH (Fiber To The Home), ADSL (Asymmetric Digital Subscriber Line), a cable network, ZigBee, Bluetooth, wireless LAN (IEEE 802.11b, IEEE 802.11a, IEEE 802.11g, IEEE 802.11n), WirelessHART (ISO/IEC 62591-1), ISA100.11a (ISO/IEC 62734), CoAP (Constrained Application Protocol), MQTT (Message Queuing Telemetry Transport), WiBro (Wireless Broadband), WiMAX, 3G, HSDPA (High Speed Downlink Packet Access), 4G, 5G, or 6G. In some embodiments, the communication network 350 may be an Internet network or a mobile communication network. The communication network 350 may also include any other widely known or future wireless or wired communication method.
Figure 2 is a block diagram illustrating an analysis device according to an embodiment of the present invention, Figure 3 is a block diagram illustrating a control unit according to an embodiment of the present invention, Figure 4 is a diagram illustrating the structure of a prediction model including a convolutional neural network according to an embodiment of the present invention, Figure 5 is a diagram illustrating a process of processing an original image according to an embodiment of the present invention, Figure 6 is a diagram illustrating a process of generating an input vector according to an embodiment of the present invention, and Figure 7 is a diagram illustrating a globally binarized image according to an embodiment of the present invention.
Referring to Figures 1 to 7, the analysis device 100 includes a communication unit 10, an input unit 30, a control unit 50, an output unit 70, and a storage unit 90.
The communication unit 10 communicates with the user terminal 200. The communication unit 10 may receive a membrane image related to nanofibers from the user terminal 200. Here, the membrane image may be an SEM image. The communication unit 10 may also transmit information about the statistical surface average roughness value of the membrane image to the user terminal 200.
The input unit 30 receives a training input vector for training the prediction model. Here, the training input vector may be three-channel data in which a training CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a training binarized image, and training spectrum data are merged. A membrane image related to nanofibers may also be input through the input unit 30.
The control unit 50 performs overall control of the analysis device 100. The control unit 50 includes an input vector generator 52 and an analysis unit 53, and may further include a learning unit 51.
The learning unit 51 trains a prediction model that predicts the surface average roughness of a membrane image. Here, the prediction model derives a surface average roughness value in log units (log Ra) from a nanofiber membrane image through a series of convolution and pooling operations that extract spatial patterns. The prediction model includes a convolutional neural network composed of convolution layers, pooling layers, and fully connected layers. The convolutional neural network may include four convolution layers, four pooling layers, and two fully connected layers, connected in the order of a first convolution layer, a first pooling layer, a second convolution layer, a second pooling layer, a third convolution layer, a third pooling layer, a fourth convolution layer, a fourth pooling layer, a first fully connected layer, and a second fully connected layer.
Each convolution layer performs a convolution operation using a convolution filter followed by an activation function. Here, the activation function may be a ReLU (Rectified Linear Unit), which can model non-linear relationships. Each pooling layer performs a pooling (sub-sampling) operation using a pooling filter. Here, the feature map of a convolution layer is subsampled with a 2x2 max pooling operation to reduce the number of feature map coefficients, and the number of hidden neurons can be determined by K-fold cross validation, which helps avoid overfitting.
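For illustration only, the following is a minimal PyTorch-style sketch of the layer arrangement described above (four convolution layers with ReLU, four 2x2 max-pooling layers, and two fully connected layers regressing the log-unit surface average roughness). The channel counts, kernel sizes, and the adaptive pooling added before the fully connected layers are assumptions made here, not values disclosed in the patent.

import torch
import torch.nn as nn

class RoughnessCNN(nn.Module):
    # Sketch: 4 convolution + 4 pooling + 2 fully connected layers regressing log Ra.
    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 1st conv + pool
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),           # 2nd conv + pool
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),           # 3rd conv + pool
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),          # 4th conv + pool
            nn.AdaptiveAvgPool2d((4, 4)),  # assumption: makes the sketch independent of patch size
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 64), nn.ReLU(),  # 1st fully connected layer
            nn.Linear(64, 1),                       # 2nd fully connected layer -> predicted log Ra
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))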
When three-channel data in which a training CLAHE image, a training binarized image, and training spectrum data are merged is input as the training input vector, the prediction model outputs a predicted surface average roughness value in log units corresponding to that input vector. Based on this model structure, the learning unit 51 trains the model so that the predicted surface average roughness value produced by the convolution and pooling operations approaches the ground-truth surface average roughness value (known in advance) corresponding to the training input vector. To this end, the learning unit 51 may train the model with the Adam optimizer so that the difference between the predicted value and the ground-truth value is minimized. The learning unit 51 may also use the mean squared error as the loss function and check convergence through the change in magnitude of the loss value (the difference between the ground-truth value and the predicted value).
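As a hedged illustration of the training procedure just described, the sketch below trains the RoughnessCNN sketch above with the Adam optimizer and a mean-squared-error loss between predicted and ground-truth log-unit roughness values; the learning rate and the assumption that train_loader yields (input vector, log Ra) batches are illustrative, not values from the patent.

import torch

model = RoughnessCNN(in_channels=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
criterion = torch.nn.MSELoss()                             # loss = mean squared error

def train_one_epoch(model, train_loader):
    model.train()
    running_loss = 0.0
    for x, log_ra in train_loader:         # x: 3-channel input vectors, log_ra: ground truth
        optimizer.zero_grad()
        pred = model(x).squeeze(1)         # predicted log-unit surface average roughness
        loss = criterion(pred, log_ra)     # difference between prediction and ground truth
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    return running_loss / max(len(train_loader), 1)  # a decreasing value indicates convergence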
When a membrane image related to nanofibers is input through the communication unit 10 or the input unit 30, the input vector generator 52 generates input vectors for predicting a value close to the surface average roughness value. That is, the input vector generator 52 processes the membrane image to generate a plurality of processed images. The input vector generator 52 groups the generated processed images into an image group and preprocesses each processed image included in the grouped image group to generate an input vector.
In detail, when a membrane image that is an SEM image is input (61), the input vector generator 52 cuts the input membrane image to a preset size to generate a plurality of processed images (63). Here, the input vector generator 52 may cut the membrane image to the preset size using a sliding-window method. That is, the input vector generator 52 can move a window from the upper left to the lower right of the image and cut out patches that fit the window size (62). Before cutting the membrane image, the input vector generator 52 resizes it to a preset resolution. Preferably, the input vector generator 52 resizes the image to a resolution of 51.2 pixels/um and then cuts it into patches whose width and height are each 10 um. The input vector generator 52 increases the number of processed images by flipping or rotating the generated processed images 63 (64). The input vector generator 52 randomly selects a preset number of processed images from the augmented set (64) and groups the selected processed images into one image group (65). For example, the input vector generator 52 may randomly select 16 images from the images cut to a uniform size from one SEM image having a single surface average roughness, and group the selected images into one image group.
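A minimal sketch of the processed-image generation described above is given below, assuming OpenCV and NumPy; the physical image width argument, the non-overlapping window stride, and the augmentation set (horizontal flip, vertical flip, 90-degree rotation) are assumptions used only for illustration.

import cv2
import numpy as np

def make_image_group(sem_image, image_width_um, px_per_um=51.2, patch_um=10, group_size=16):
    # Resize so that one micrometre corresponds to px_per_um pixels.
    scale = (px_per_um * image_width_um) / sem_image.shape[1]
    resized = cv2.resize(sem_image, None, fx=scale, fy=scale)
    patch = int(px_per_um * patch_um)  # patch edge length in pixels (10 um x 10 um)

    # Sliding window from the upper left to the lower right of the image.
    patches = []
    for y in range(0, resized.shape[0] - patch + 1, patch):
        for x in range(0, resized.shape[1] - patch + 1, patch):
            patches.append(resized[y:y + patch, x:x + patch])

    # Increase the number of processed images by flipping and rotating.
    augmented = []
    for p in patches:
        augmented += [p, np.fliplr(p), np.flipud(p), np.rot90(p)]

    # Randomly select a preset number of processed images as one image group.
    idx = np.random.choice(len(augmented), size=min(group_size, len(augmented)), replace=False)
    return [augmented[i] for i in idx]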
Meanwhile, the input vector generator 52 may preprocess each of the processed images to generate a plurality of input vectors. The input vector generator 52 separates each processed image into RGB channels and converts the processed image into a grayscale image based on the separated channels (66). The input vector generator 52 normalizes the grayscale processed image to generate a CLAHE image. That is, the input vector generator 52 may normalize the pixel values of each processed image based on the CLAHE technique in order to minimize noise caused by differences in resolution between images and to emphasize the pixel-value characteristics of the fiber boundaries, which are an important factor in surface average roughness. The input vector generator 52 binarizes the CLAHE image based on a global thresholding method to generate a binarized image. The global thresholding leaves only the shape information of the fiber and pore regions, reveals the distribution characteristics of the boundaries, which are a major factor affecting roughness (Figure 7), and enhances the pixel values so that these characteristics survive even after passing through the kernel layers of the prediction model. The input vector generator 52 generates spectrum data by transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform (2D DFT). Here, the frequency domain obtained through the two-dimensional discrete Fourier transform represents the magnitude spectrum according to fiber thickness, distribution, and orientation; the frequency band differs by position in the image and brightness differences appear by position, so the roughness characteristics, that is, the differences due to the frequency of the fiber distribution, can be expressed in pixels. The input vector generator 52 stacks the arrays of the generated CLAHE image, binarized image, and spectrum data in order (67) and merges the stacked arrays into three channels to generate the input vector (68).
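The following sketch shows one way the three-channel input vector described above could be assembled with OpenCV and NumPy; the CLAHE clip limit and tile size, the fixed threshold value, and the log scaling of the DFT magnitude spectrum are assumptions, not parameters disclosed in the patent.

import cv2
import numpy as np

def build_input_vector(patch_bgr):
    # Grayscale conversion based on the separated colour channels.
    gray = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2GRAY)

    # Channel 1: contrast-limited adaptive histogram equalisation (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

    # Channel 2: global thresholding keeps only fibre/pore shape information.
    _, binary = cv2.threshold(clahe, 127, 255, cv2.THRESH_BINARY)

    # Channel 3: centred 2-D discrete Fourier transform magnitude spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(clahe.astype(np.float32)))
    spectrum = np.log1p(np.abs(spectrum))
    spectrum = cv2.normalize(spectrum, None, 0, 255, cv2.NORM_MINMAX)

    # Stack the three arrays in order and merge them into one 3-channel input vector.
    return np.stack([clahe, binary, spectrum.astype(np.uint8)], axis=0)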
The analysis unit 53 applies each input vector included in the image group to the prediction model trained by the learning unit 51 and outputs a surface average roughness value in log units for each. The analysis unit 53 calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values. Through this, the user can obtain a meaningful statistical surface average roughness value. In detail, the analysis unit 53 can express the log-unit surface average roughness values output by the prediction model on a continuous probability distribution and output statistical figures such as the mode and the mean of that distribution.
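As one possible illustration of this aggregation step, the sketch below fits a continuous probability density to the per-patch log-unit predictions and reports its mode and mean; representing the continuous distribution with a Gaussian kernel density estimate is an assumption made here.

import numpy as np
from scipy.stats import gaussian_kde

def summarize_roughness(log_ra_predictions):
    preds = np.asarray(log_ra_predictions, dtype=float)
    kde = gaussian_kde(preds)                         # continuous probability distribution
    grid = np.linspace(preds.min(), preds.max(), 512)
    mode_log_ra = grid[np.argmax(kde(grid))]          # mode of the fitted density
    mean_log_ra = float(preds.mean())                 # mean of the predictions
    return {"mode_log_Ra": float(mode_log_ra), "mean_log_Ra": mean_log_ra}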
The output unit 70 outputs the membrane image received through the communication unit 10 or the input unit 30. The output unit 70 outputs the input vectors generated by the control unit 50 and outputs the statistical surface average roughness value calculated by the control unit 50. The output unit 70 may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, and a 3D display.
The storage unit 90 stores programs or algorithms for operating the analysis device 100. The storage unit 90 stores the membrane image received through the communication unit 10 or the input unit 30. The storage unit 90 stores the input vectors generated by the control unit 50 and the statistical surface average roughness value calculated by the control unit 50. The storage unit 90 may include at least one storage medium among a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), RAM (Random Access Memory), SRAM (Static Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), PROM (Programmable Read-Only Memory), magnetic memory, a magnetic disk, and an optical disk.
Figure 8 is a flowchart illustrating an analysis method for predicting surface average roughness according to an embodiment of the present invention.
Referring to Figures 1 and 8, the analysis method predicts the surface average roughness of a nanofiber-based membrane image using a prediction model including a convolutional neural network and calculates a statistical surface average roughness value for that membrane image. Through this, the analysis method allows the user to quickly and accurately recognize meaningful information related to the surface average roughness of the membrane image.
In step S110, a membrane image related to nanofibers is input to the analysis device 100. The membrane image may be received from the user terminal 200, or may be input directly by the user.
In step S120, the analysis device 100 generates input vectors for the membrane image. The analysis device 100 processes the membrane image to generate a plurality of processed images. The analysis device 100 groups the generated processed images into an image group and preprocesses each processed image included in the grouped image group to generate an input vector. Here, the input vector may be data in which a CLAHE image, a binarized image, and spectrum data are merged into three channels.
In step S130, the analysis device 100 calculates a statistical surface average roughness value. The analysis device 100 applies each input vector included in the image group to a previously learned prediction model and outputs a surface average roughness value in log units for each. The analysis device 100 can calculate statistical surface average roughness values of the membrane image, such as the mode and the mean, using the plurality of output surface average roughness values.
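Purely as a usage illustration of steps S110 to S130, the lines below chain the hypothetical helpers sketched earlier (make_image_group, build_input_vector, summarize_roughness) with a trained RoughnessCNN; the file name, the 100 um image width, and the helper names are assumptions, not part of the patent.

import cv2
import numpy as np
import torch

sem = cv2.imread("membrane_sem.png")                    # S110: membrane image input
group = make_image_group(sem, image_width_um=100.0)     # S120: processed images grouped
vectors = [build_input_vector(p) for p in group]        # S120: 3-channel input vectors

model.eval()                                            # previously learned prediction model
with torch.no_grad():
    batch = torch.tensor(np.stack(vectors), dtype=torch.float32)
    log_ra = model(batch).squeeze(1).tolist()           # per-patch log-unit roughness values

print(summarize_roughness(log_ra))                      # S130: statistical value (mode, mean)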
Figure 9 is a diagram illustrating a performance evaluation of the analysis device using a continuous probability distribution according to an embodiment of the present invention, and Figure 10 is a diagram illustrating a performance evaluation of the analysis device with respect to prediction accuracy according to an embodiment of the present invention.
Referring to Figures 1, 9, and 10, the performance of the analysis device 100 can be evaluated in various ways.
For example, the analysis device 100 can express the training predictions and test predictions, that is, the log-unit surface average roughness values output by the prediction model, on a continuous probability distribution. Through this, the user can confirm that the mode and the mean of these predicted values agree with the experimental values obtained by measurement (Figure 9). In addition, the improvement in prediction accuracy of the analysis device 100 can be confirmed through the increase in the coefficient of determination and the decrease in the mean absolute percentage error according to the input vector (Figure 10).
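For reference, a minimal sketch of the two accuracy measures mentioned here is shown below, assuming scikit-learn and interpreting the error measure as the mean absolute percentage error between measured and predicted log-unit roughness values; this interpretation is an assumption.

from sklearn.metrics import r2_score, mean_absolute_percentage_error

def evaluate(measured_log_ra, predicted_log_ra):
    return {
        "R2": r2_score(measured_log_ra, predicted_log_ra),                          # coefficient of determination
        "MAPE": mean_absolute_percentage_error(measured_log_ra, predicted_log_ra),  # mean absolute percentage error
    }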
Figure 11 is a block diagram illustrating a computing device according to an embodiment of the present invention.
Referring to Figure 11, the computing device TN100 may be any of the devices described in this specification (for example, the analysis device or the user terminal).
The computing device TN100 may include at least one processor TN110, a transceiver TN120, and a memory TN130. The computing device TN100 may further include a storage device TN140, an input interface device TN150, an output interface device TN160, and the like. The components included in the computing device TN100 may be connected by a bus TN170 and communicate with one another.
The processor TN110 may execute program commands stored in at least one of the memory TN130 and the storage device TN140. The processor TN110 may be a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor on which the methods according to embodiments of the present invention are performed. The processor TN110 may be configured to implement the procedures, functions, and methods described in connection with the embodiments of the present invention. The processor TN110 may control each component of the computing device TN100.
The memory TN130 and the storage device TN140 may each store various information related to the operation of the processor TN110. The memory TN130 and the storage device TN140 may each be composed of at least one of a volatile storage medium and a non-volatile storage medium. For example, the memory TN130 may be composed of at least one of read-only memory (ROM) and random access memory (RAM).
The transceiver TN120 may transmit or receive a wired signal or a wireless signal. The transceiver TN120 may be connected to a network and perform communication.
Meanwhile, the embodiments of the present invention are not implemented only through the apparatus and/or method described above; they may also be implemented through a program that realizes the functions corresponding to the configurations of the embodiments, or through a recording medium on which such a program is recorded, and such an implementation can easily be made by a person skilled in the art to which the present invention belongs from the description of the embodiments above.
Although the embodiments of the present invention have been described in detail above, the scope of the present invention is not limited thereto, and various modifications and improvements made by those skilled in the art using the basic concept of the present invention defined in the following claims also fall within the scope of the present invention.

Claims (15)

  1. An analysis device comprising:
    an input vector generator that, when a membrane image related to nanofibers is input, processes the membrane image to generate a plurality of processed images, groups the generated plurality of processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; and
    an analysis unit that applies each input vector included in the image group to a previously learned prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
  2. The analysis device of claim 1, further comprising:
    a learning unit that trains the prediction model, the prediction model including a convolutional neural network composed of convolution layers, pooling layers, and fully connected layers,
    wherein, when three-channel data in which a training CLAHE (Contrast-Limited Adaptive Histogram Equalization) image, a training binarized image, and training spectrum data are merged is input as a training input vector for the training, the prediction model outputs a predicted surface average roughness value in log units corresponding to the training input vector.
  3. The analysis device of claim 2, wherein the learning unit trains the prediction model through its convolution and pooling operations so that the predicted surface average roughness value approaches a ground-truth surface average roughness value corresponding to the training input vector, and repeats the training a preset number of times.
  4. The analysis device of claim 1, wherein the input vector generator cuts the membrane image to a preset size to generate the plurality of processed images, randomly selects a preset number of processed images from the generated plurality of processed images, and groups the selected processed images into one image group.
  5. The analysis device of claim 4, wherein the input vector generator cuts the membrane image to the preset size using a sliding-window method.
  6. The analysis device of claim 4, wherein the input vector generator resizes the membrane image to a preset resolution before cutting the membrane image.
  7. The analysis device of claim 1, wherein the input vector generator normalizes the pixel values of each processed image based on a CLAHE technique to generate a CLAHE image, binarizes the CLAHE image based on a global thresholding method to generate a binarized image, transforms the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform to generate spectrum data, and generates the input vector by merging the CLAHE image, the binarized image, and the spectrum data into three channels.
  8. An analysis method comprising:
    generating, by an analysis device, when a membrane image related to nanofibers is input, a plurality of processed images by processing the membrane image, grouping the generated plurality of processed images into an image group, and preprocessing each processed image included in the grouped image group to generate an input vector; and
    applying, by the analysis device, each input vector included in the image group to a previously learned prediction model to output a surface average roughness value in log units for each input vector, and calculating a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
  9. The analysis method of claim 8, further comprising:
    training a prediction model including a convolutional neural network composed of convolution layers, pooling layers, and fully connected layers,
    wherein, when three-channel data in which a training CLAHE image, a training binarized image, and training spectrum data are merged is input as a training input vector for the training, the prediction model outputs a predicted surface average roughness value in log units corresponding to the training input vector.
  10. The analysis method of claim 9, wherein the training comprises:
    training the prediction model through its convolution and pooling operations so that the predicted surface average roughness value approaches a ground-truth surface average roughness value corresponding to the training input vector; and
    repeating the training a preset number of times.
  11. The analysis method of claim 8, wherein the generating of the input vector comprises:
    cutting the membrane image to a preset size to generate the plurality of processed images;
    randomly selecting a preset number of processed images from the generated plurality of processed images; and
    grouping the selected processed images into one image group.
  12. The analysis method of claim 11, wherein the generating of the input vector cuts the membrane image to the preset size using a sliding-window method.
  13. The analysis method of claim 11, wherein the generating of the input vector resizes the membrane image to a preset resolution before cutting the membrane image.
  14. The analysis method of claim 8, wherein the generating of the input vector comprises:
    normalizing the pixel values of each processed image based on a CLAHE technique to generate a CLAHE image;
    binarizing the CLAHE image based on a global thresholding method to generate a binarized image;
    transforming the CLAHE image into a spectrum based on a two-dimensional discrete Fourier transform to generate spectrum data; and
    generating the input vector by merging the CLAHE image, the binarized image, and the spectrum data into three channels.
  15. An analysis system comprising:
    an analysis device that analyzes the surface average roughness of a membrane image related to nanofibers; and
    a user terminal that receives information related to the analyzed surface average roughness from the analysis device and outputs the received information related to the surface average roughness,
    wherein the analysis device includes:
    an input vector generator that, when a membrane image related to nanofibers is input, processes the membrane image to generate a plurality of processed images, groups the generated plurality of processed images into an image group, and preprocesses each processed image included in the grouped image group to generate an input vector; and
    an analysis unit that applies each input vector included in the image group to a previously learned prediction model to output a surface average roughness value in log units for each input vector, and calculates a statistical surface average roughness value of the membrane image using the plurality of output surface average roughness values.
PCT/KR2023/010717 2022-11-14 2023-07-25 Device and method for analyzing average surface roughness by extracting feature from membrane image WO2024106682A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020220152050A KR20240070310A (en) 2022-11-14 2022-11-14 Apparatus and method for surface average roughness analysis using characteristic extraction of membrane image
KR10-2022-0152050 2022-11-14

Publications (1)

Publication Number Publication Date
WO2024106682A1 true WO2024106682A1 (en) 2024-05-23

Family

ID=91085062

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2023/010717 WO2024106682A1 (en) 2022-11-14 2023-07-25 Device and method for analyzing average surface roughness by extracting feature from membrane image

Country Status (2)

Country Link
KR (1) KR20240070310A (en)
WO (1) WO2024106682A1 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11450012B2 (en) 2019-10-31 2022-09-20 Kla Corporation BBP assisted defect detection flow for SEM images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110758A (en) * 2019-04-15 2019-08-09 南京航空航天大学 A kind of surface roughness classification method based on convolutional neural networks
KR20220094791A (en) * 2020-12-29 2022-07-06 연세대학교 산학협력단 Method and system for data augmentation
JP2022113177A (en) * 2021-01-24 2022-08-04 株式会社ジェイテクト Surface state estimation method and surface state estimation system
US11468552B1 (en) * 2021-07-12 2022-10-11 The Florida International University Board Of Trustees Systems and methods for quantifying concrete surface roughness
CN113850339A (en) * 2021-09-30 2021-12-28 北京科技大学 Roughness grade prediction method and device based on multi-light-source surface image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KANG, DONG HEE ET AL.: "Prediction of Surface Roughness for Modeling Pressure Drop in Porous Nanofiber Membrane using the Developed CNN Algorithm", PROCEEDINGS OF THE KSME FLUID ENGINEERING DIVISION 2023 SPRING CONFERENCE, May 2023 (2023-05-01), pages 253 - 254 *

Also Published As

Publication number Publication date
KR20240070310A (en) 2024-05-21
