WO2023101375A1 - Method, device, and system for optional artificial intelligence engine-based nondestructive inspection of object - Google Patents

Method, device, and system for optional artificial intelligence engine-based nondestructive inspection of object

Info

Publication number
WO2023101375A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
electronic device
learning model
defect inspection
artificial intelligence
Prior art date
Application number
PCT/KR2022/019128
Other languages
French (fr)
Korean (ko)
Inventor
임태규
설재민
김승환
노은식
민병석
김형철
Original Assignee
(주)자비스
Priority date
Filing date
Publication date
Application filed by (주)자비스 filed Critical (주)자비스
Publication of WO2023101375A1 publication Critical patent/WO2023101375A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/04Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/06Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
    • G01N23/083Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption the radiation being X-rays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/06Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
    • G01N23/18Investigating the presence of flaws defects or foreign matter
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/10Different kinds of radiation or particles
    • G01N2223/101Different kinds of radiation or particles electromagnetic radiation
    • G01N2223/1016X-ray
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/40Imaging
    • G01N2223/401Imaging image processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/60Specific applications or type of materials
    • G01N2223/646Specific applications or type of materials flaws, defects

Definitions

  • the present invention relates to non-destructive inspection of an object, and more particularly, to a method, apparatus, and system for non-destructive defect inspection of an object based on a selective artificial intelligence engine.
  • Defective products can lead to deterioration of supply chain services and loss of automation facilities. Therefore, it is very important to properly inspect the product for defects.
  • Non-destructive inspection, which examines an object without destroying it by using radiation, in particular X-rays, is used for quality inspection.
  • Conventional radiographic non-destructive inspection applies a single technique to determine whether an object shown in an X-ray image is defective.
  • However, this conventional inspection method has limited defect detection performance, so not all defects can be detected from the X-ray image; that is, some defects go undetected.
  • In addition, because the characteristics of an object differ depending on its composition, inspection results, that is, accuracy, may vary even between objects of the same kind, which reduces the reliability of the inspection.
  • An object of the present invention is to provide a method, apparatus, and system that select an optimal artificial intelligence learning model for an inspection target, thereby improving non-destructive inspection accuracy, increasing inspection reliability, and at the same time increasing the efficiency of the inspection system.
  • An electronic device for performing optional artificial intelligence engine-based non-destructive inspection of an object, provided to solve the above problems, includes a memory that stores a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; and a processor that inspects the object for defects, wherein the processor includes a processing unit configured to perform defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
  • A non-destructive inspection method for an object based on an optional artificial intelligence engine in an electronic device includes storing a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; receiving image data of the object; determining a category of the input object through the stored learning models and selectively selecting at least one learning model from among the plurality of stored learning models based on the determined category; performing a defect inspection of the object based on the selected at least one learning model; and providing the defect inspection result.
  • An optional artificial intelligence engine-based non-destructive inspection system for an object includes an image acquisition device that acquires image data of the object by irradiating it with radiation; and an electronic device, wherein the electronic device includes a memory configured to store a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables, and a processor configured to perform a defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
  • According to the present invention, it is possible to increase inspection reliability by improving the accuracy of the non-destructive inspection of the inspection target while at the same time increasing the efficiency of the inspection system.
  • FIG. 1 is a block diagram illustrating an artificial intelligence based non-destructive testing system for an object according to an embodiment of the present invention.
  • FIG. 2 is a flowchart illustrating a method for non-destructive testing of an object according to an embodiment of the present invention.
  • FIG. 3 is a configuration block diagram of an electronic device according to an embodiment of the present invention.
  • FIG. 4 is a configuration block diagram of a learning unit according to an embodiment of the present invention.
  • FIG. 5 is a configuration block diagram of a processing unit according to an embodiment of the present invention.
  • FIG. 6 is a configuration block diagram of an electronic device according to another embodiment of the present invention.
  • FIG. 7 is a configuration block diagram of a processing unit of FIG. 6 .
  • FIGS. 8 and 9 are diagrams for explaining results of non-destructive inspection of an object according to the present invention.
  • FIG. 10 is a flowchart illustrating a non-destructive inspection method of an object according to another embodiment of the present invention.
  • Spatially relative terms such as "below", "beneath", "lower", "above", and "upper" may be used to easily describe the relationship of one component to other components. Spatially relative terms should be understood to encompass different orientations of the components in use or operation in addition to the orientation shown in the drawings. For example, if a component shown in a drawing is turned over, a component described as "below" or "beneath" another component may be placed "above" the other component. Thus, the exemplary term "below" may encompass both downward and upward directions. Components may also be oriented in other directions, and the spatially relative terms may be interpreted accordingly.
  • 'image or image data' refers to still image or video data obtained through a tube or detector using radiation.
  • In an embodiment, the image may be an X-ray image of an object obtained through an X-ray tube or an X-ray detector.
  • The X-ray image may include, for example, a 2D (two-dimensional) image, a CT (Computed Tomography) image reconstructed from an aggregation of consecutive 2D images, and a slice image of reconstructed CT volume data.
  • 'Defect' indicates, during artificial intelligence-based non-destructive inspection of an object subject to defect inspection according to the present invention, a part of the object that is not a part defined, or definable, as normal; it may also be expressed by various other names such as flaw or error. Depending on the embodiment, the present invention is not limited to such expressions and may include the same or similar meaning as a defect in the conventional sense.
  • FIG. 1 is a block diagram illustrating an artificial intelligence based non-destructive testing system for an object according to an embodiment of the present invention.
  • Referring to FIG. 1, a system for performing artificial intelligence-based non-destructive inspection of an object may include an electronic device 100 and an image acquisition device 150.
  • The configuration of the electronic device 100 and the image acquisition device 150 shown in FIG. 1 is only an embodiment and is not limited thereto; one or more components may be added, or conversely removed, in relation to the operations according to the present invention.
  • The electronic device 100 may include a memory and a processor; the memory may correspond to, or include, the database 120 shown in FIG. 1, and the processor may include at least one of the controller 110 and the AI engine 130.
  • the AI engine 130 includes a deep learning network, but is not limited thereto.
  • the electronic device 100 may be connected to the image capture device 150 through a network to receive image data of an object.
  • The image acquisition device 150 may include a detector 160, an X-ray tube 170, and a lighting source (not shown), and the detector 160 may be at least one of a 2D detector and a 3D detector.
  • The detector 160 and the X-ray tube 170 are each components for acquiring an X-ray image of the object and may have a conventionally known configuration.
  • The image acquisition device 150 may further include a device capable of capturing the motion of a moving object and a CT detector (not shown).
  • The light source includes, but is not limited to, a terahertz source, which is a transmissive light source.
  • As components of the electronic device 100, the controller 110 controls operations performed by the electronic device 100, and the database 120 stores data received and processed by the electronic device 100, such as the image of the object received from the image acquisition device 150, the learning dataset used for defect inspection of the object, and the learning model corresponding to the learning dataset.
  • The controller 110 may include a hardware unit with the computing capability to run the algorithms of various machine-learning models and related applications that take the image data of the object as input, determine (classify) the category of the object through a learning model stored in the database 120, specify a defect inspection operation mode for the image data of the object subject to defect inspection, selectively select at least one learning model corresponding to the specified defect inspection operation mode, and perform defect inspection from the image data of the object based on the selected learning model.
  • the control unit 110 may include at least one of a central processing unit, a microprocessor, and a graphic processing unit.
  • the controller 110 may further include a separate memory (not shown) for storing machine learning model algorithms or applications.
  • For defect inspection, the electronic device 100 may obtain (higher-quality or improved) object image data by learning from the image data of the object and feeding it to the AI engine 130.
  • Here, the image acquired through the AI engine 130 may be not only image data that is, overall, of higher quality than, or otherwise improved relative to, the object image data input from the image acquisition device 150, but also image data that is wholly or partly improved or newly generated from the viewpoint of defect inspection in artificial intelligence-based non-destructive testing of the object.
  • According to an embodiment, the electronic device 100 may further define a new learning model by comparing the inspection result for the image data of the object before learning with the inspection result for the image data of the object acquired through the AI engine 130.
  • Based on the learning model added in this way, the electronic device 100 may determine, for the input image of the object, whether to use the AI engine 130, that is, whether to use for defect inspection the image data obtained through the AI engine 130 instead of the input image data of the object, and may operate accordingly.
  • Accordingly, an X-ray image of an object input to the electronic device 100 can be used by inspection equipment in various fields, for example, semiconductor defect detection, PCB board defect detection, and foreign-matter detection in the food and pharmaceutical fields.
  • FIG. 2 is a flowchart illustrating a method for non-destructive testing of an object according to an embodiment of the present invention.
  • FIG. 3 is a configuration block diagram of an electronic device 100 according to an embodiment of the present invention.
  • FIG. 4 is a configuration block diagram of a learning unit 220 according to an embodiment of the present invention.
  • FIG. 5 is a configuration block diagram of the processing unit 230 according to an embodiment of the present invention.
  • FIG. 6 is a configuration block diagram of the electronic device 100 according to another embodiment of the present invention.
  • FIG. 7 is a configuration block diagram of the processing unit 610 of FIG. 6.
  • FIGS. 8 and 9 are diagrams for explaining results of non-destructive inspection of an object according to the present invention.
  • FIG. 10 is a flowchart illustrating a non-destructive inspection method of an object according to another embodiment of the present invention.
  • The operations of FIGS. 2 and 10 may be performed through the electronic device 100 of FIG. 1.
  • Hereinafter, the operations of FIGS. 2 and 10 are described with reference to the configuration of the electronic device 100 shown in FIGS. 3 to 7.
  • First, referring to FIG. 2, in operation 11, the electronic device 100 may store a plurality of learning variables and corresponding learning models.
  • Here, 'learning variables' denote the training datasets (first and/or second training datasets) described later, and 'corresponding learning models' denote the learning models generated using those training datasets.
  • In operation 12, the input unit 310 receives, from the image acquisition device 150, image data of the object acquired through irradiation with radiation (e.g., X-rays).
  • Depending on the embodiment, the order of operations 11 and 12 may be defined differently from that shown in FIG. 2. This applies not only to operations 11 and 12 but also to the order of the other operations shown in FIG. 2 (and in FIG. 10 described later).
  • The image data of the object input to the input unit 310 may be raw data received from the image acquisition device 150, or data at least partially processed through the AI engine 130 described above so as to be suitable for defect inspection.
  • In an embodiment, whether to perform the defect inspection may be determined in advance based on the received image data of the object, that is, the raw data; if the raw data contains errors or the reliability of the inspection result is judged to be at or below a predetermined reference value, improved image data obtained through the AI engine 130 may be used as the basic data for defect inspection.
  • The learning unit 320 takes a training dataset as input, generates a learning model corresponding to the input training dataset through the preprocessing module 410, the feature extraction module 420, and the inspection processing module 430, and may temporarily store it in the memory.
  • In operations 13 and 14, the processing unit 330 determines the category of the object through a learning model based on the input image data of the object, specifies a defect inspection operation mode for the object based on the determined category, and selects a learning model corresponding to the specified operation mode.
  • The processing unit 330 inspects the image data of the object for defects based on the selected learning model and generates defect inspection result data.
  • The output unit 340 provides the defect inspection result data generated by the processing unit 330 for the object.
  • Here, 'providing' may be defined broadly as any operation related to outputting the result for the object, such as direct or indirect output through a display, transmission to a target terminal, and/or output control.
  • an artificial intelligence-based non-destructive examination method for an object according to the present invention will be described in more detail as follows.
  • The electronic device 100 learns a model for defect inspection based on image data of the inspection target and receives image data of the object; these correspond to operations 11 and 12 of FIG. 2, respectively.
  • the electronic device 100 determines a defect inspection operation mode for the target object.
  • When the determined operation mode is the first mode, the electronic device 100 selects a learning model corresponding to the first mode through an operation mode selection module and processes the image data of the object, that is, performs defect inspection.
  • Likewise, when the determined operation mode is the second mode, the electronic device 100 selects a learning model corresponding to the second mode through the operation mode selection module and processes the image data of the object, that is, performs defect inspection.
  • Details of determining the defect inspection operation mode (first mode or second mode) and of processing the image data of the object according to the determined mode are described below with reference to FIGS. 4 and 5. To aid understanding of the technical idea of the present invention, only the first and second operation modes are defined and described as defect inspection operation modes in this specification; however, the number of operation modes and their definitions may be implemented differently depending on settings, requests, and the like.
  • the electronic device 100 may include an input unit 310, a defect inspection unit, and an output unit 340.
  • the defect inspection unit may include a learning part 320 and a processing part 330 .
  • The learning unit 320 constituting the defect inspection unit may include a preprocessing module 410 that receives, from a memory, and preprocesses training data (or datasets) for learning the defect inspection, that is, the non-destructive inspection of an object; a feature extraction module 420 that extracts features from the preprocessed data; and an inspection processing module 430 that generates a learning model to be used for defect inspection based on the extracted features.
  • The learning unit 320 may use a single AI engine for learning the defect inspection, but is not limited thereto.
  • The learning unit 320 may instead be implemented as a plurality of learning modules, and each learning module may use a different AI engine for learning the defect inspection.
  • Each individual learning module may include at least one of the preprocessing module, feature extraction module, and inspection processing module of the learning unit 320 described above.
  • The training datasets, that is, Prod #1, Prod #2, ..., Prod #N (where N is a positive integer), may be defined as first training datasets for predefined individual inspection targets.
  • The learning models Model #1, Model #2, ..., Model #N (where N is a positive integer) may be defined as learning models generated to correspond to the first training datasets, that is, first learning models.
  • The learning unit 320 may generate a plurality of individual first learning models, each corresponding to one of a plurality of learning variables, that is, to an individual first training dataset.
  • A first learning model generated in this way may be regarded as a learning model specialized for an individual inspection target. That is, the learning unit 320 may generate N first learning models corresponding to N first training datasets (where N is a positive integer).
  • a second training dataset including all first training datasets or a combination of at least two individual first training datasets may be defined.
  • This second training dataset may be named a full training dataset, a combined training dataset, or a united training dataset.
  • the learning unit 320 may generate a second learning model using the second training dataset.
  • the second learning model is a learning model different from the aforementioned first learning model, and the number of the second learning models may be determined by the number of the defined second training datasets.
  • the individual training dataset used for the second training dataset may be the same as or different from the first training dataset described above.
  • the second training dataset may include one or more training datasets not classified as the first training dataset. From this point of view, the first training dataset may be viewed as a classified training dataset that is the basis for generating the first learning model.
  • the second training dataset may be defined as a plurality of second training datasets according to a configuration method, and a plurality of corresponding learning models may also be provided.
  • the generated first and/or second learning models may have different values of parameters, weights, and the like according to objects to be inspected.
  • Here, whether the object is specified may refer, for example, to whether a category or classification has been determined for the object.
  • Whether the object is specified may also be determined not by the electronic device 100 but by an external input or request, for example, from a defect inspection requester or a terminal. Accordingly, whether the object is specified may be determined based on whether a learning model usable for defect inspection can be specified for the object to be inspected.
  • In this case, the electronic device 100 may specify and selectively select a learning model based on the received external input.
  • When the object is specified, the electronic device 100 selects and uses a specific first learning model corresponding to the specified object; otherwise, the electronic device 100 may inspect whether the object is defective using a second learning model. When there are a plurality of second learning models, the electronic device 100 may determine which second learning model(s) to use, or whether to use all of them, and perform the defect inspection accordingly.
  • In other words, the electronic device 100 may control the selection so that, from among the plurality of stored learning models, a second learning model is chosen rather than another model belonging to the first learning models.
  • Alternatively, the electronic device 100 may perform the defect inspection on the object based on at least one of a first learning model and a second learning model set as a default.
  • The electronic device 100 may determine whether to define an additional learning model or to re-run the inspection based on the result of the defect inspection performed with a specific learning model set as the default, and may operate accordingly. For example, the electronic device 100 may perform a defect inspection on the object based on a specific first learning model set as the default and, if the result is at or below a predefined reference value, re-run the defect inspection on the object based on a second learning model (a code sketch of this default-and-fallback behavior is given at the end of this section). According to an embodiment, the electronic device 100 may select at least two specific first learning models from among the first learning models as defaults to perform the defect inspection on the object and, based on the results, determine whether to perform the defect inspection based on a second learning model.
  • The selected specific first learning model may be determined based on information about the object subject to defect inspection, the history of previous defect inspections, settings of the electronic device 100, information input by the person or terminal requesting the inspection, and the like. The same or similar information may also be used for specifying the target object.
  • The operation of the learning unit 320 shown in FIG. 4 may be performed i) when image data of an object to be inspected is input through the input unit 310 shown in FIG. 3, ii) regularly or irregularly before i), or iii) regularly or irregularly regardless of i).
  • the processing unit 330 may include a model selection module 510, a pre-processing module 520, a feature extraction module 530, and an inspection processing module 540.
  • The preprocessing module 520, the feature extraction module 530, and the inspection processing module 540 may be components separate from the corresponding components shown in FIG. 4; in that case, the preprocessing module, feature extraction module, and inspection processing module can be regarded as implemented individually in the learning unit 320 of FIG. 4 and the processing unit 330 of FIG. 5.
  • Alternatively, as shown in FIG. 7, the preprocessing module may be implemented in a form shared by the learning unit 320 of FIG. 4 and the processing unit 330 of FIG. 5.
  • The model selection module 510 determines a defect inspection operation mode for the object and selectively selects, corresponding to the determined mode, at least one of the learning models generated based on the training datasets in FIG. 4. According to an embodiment, the model selection module 510 may simply select at least one of the learning models generated based on the training datasets in FIG. 4 according to a defect inspection operation mode already determined by components in the controller 110 of FIG. 1 or the learning unit 320 of FIG. 4.
  • The processing unit 330 preprocesses the image data of the object input through the input unit 310 in the preprocessing module 520, inspects the preprocessed image data for defects through the feature extraction module 530 and the inspection processing module 540 based on the learning model selected by the model selection module 510 as the model corresponding to the defect inspection operation mode, generates defect inspection result data based on the inspection results, and transmits it to the output unit 340. Thereafter, the output unit 340 provides the defect inspection result data for the object.
  • A first training dataset may be generated based on previously classified data for inspection of an object, and a second training dataset may be generated based on, or so as to include, unclassified unknown data.
  • Referring to FIGS. 6 and 7, the defect inspection unit 610 of the electronic device 100 may have a configuration different from that of the defect inspection unit shown in FIG. 3.
  • That is, in FIG. 3 the defect inspection unit has the learning unit 320 and the processing unit 330 as separate components, each individually including a preprocessing module, a feature extraction module, and an inspection processing module, whereas in FIGS. 6 and 7 the preprocessing module 720, the feature extraction module 740, and the inspection processing module 750 are shared within the defect inspection unit 610 to perform the functions of both the learning unit and the processing unit described above.
  • The defect inspection unit 610 of FIGS. 6 and 7 also includes a model selection module 710, corresponding to the model selection module 510 shown in FIG. 5.
  • The defect inspection result data may be provided, for example, in a manner that displays the detected defective part on the image data of the object, as shown in FIG. 8 or in FIGS. 9(c) and 9(d).
  • However, the present invention is not limited to this provision method; for example, text, audio, images (graphs), and the like relating to the type, degree, number, and ratio of the defects detected in the above manner may additionally be provided together or individually.
  • FIGS. 8(a) to 8(b) show defect inspection results from image data of an object using the second defect inspection operation mode described above, that is, the second learning model described in FIGS. 4 to 5 .
  • FIGS. 8(c) to 8(d) show defect inspection results from the image data of the object using the first defect inspection operation mode described above, that is, the first learning model specialized for the object described with reference to FIGS. 4 and 5.
  • FIGS. 9(a) to 9(b) show results of inspecting defects in image data of an object using only one pre-generated and pre-trained model.
  • FIGS. 9(c) to 9(d) show results of inspecting defects in image data of an object by selectively choosing a learning model from among a plurality of learning models according to the present invention described above.
  • The rectangular parts 811 to 844 in the image data shown in FIGS. 8(a) to 8(d), and 911 to 941 in the image data shown in FIGS. 9(a) to 9(d), denote detected defect portions.
  • the processor may represent all of the input unit, learning unit, processing unit, and output unit shown in FIGS. 3 to 7, or at least one of them, depending on the context.
  • Steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, implemented in a software module executed by hardware, or implemented by a combination thereof.
  • a software module may include random access memory (RAM), read only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, hard disk, removable disk, CD-ROM, or It may reside in any form of computer readable recording medium well known in the art to which the present invention pertains.
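To make the operation-mode behavior described above more concrete, the sketch below illustrates one way the default-model-with-fallback logic could look in code. It is only an illustrative sketch under assumptions: the `confidence` field, the 0.8 reference value, and all function names are placeholders chosen here, not details taken from the patent.

```python
# Illustrative sketch of the default-then-fallback inspection described above.
# The result structure, names, and the 0.8 reference value are assumptions.
from typing import Callable, Dict, List, Optional

Image = List[List[float]]
InspectionResult = Dict[str, float]          # e.g. {"confidence": 0.93, "defects": 2.0}
Model = Callable[[Image], InspectionResult]


def inspect_with_fallback(
    image: Image,
    first_models: Dict[str, Model],           # product-specific (first) learning models
    second_model: Model,                       # combined (second) learning model
    specified_product: Optional[str] = None,   # set when the object is specified
    reference_value: float = 0.8,              # predefined reliability threshold (assumed)
) -> InspectionResult:
    # If the object is specified, use the specialized first learning model first.
    if specified_product is not None and specified_product in first_models:
        result = first_models[specified_product](image)
        # Re-run with the second (combined) model when the result is at or
        # below the reference value, as described for the default-mode fallback.
        if result.get("confidence", 0.0) <= reference_value:
            result = second_model(image)
        return result
    # Otherwise fall back directly to the second learning model.
    return second_model(image)
```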

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Toxicology (AREA)
  • Image Analysis (AREA)

Abstract

Provided is a method, device, and system for optional artificial intelligence engine-based nondestructive inspection of an object. An electronic device for performing the optional artificial intelligence engine-based nondestructive inspection of an object comprises: a memory for storing a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the respective learning variables; and a processor for inspecting a defect of the object, wherein the processor comprises a processing unit for performing defect inspection of the object on the basis of at least one learning model selected from among the stored plurality of learning models.

Description

Method, device, and system for optional artificial intelligence engine-based non-destructive inspection of an object
The present invention relates to non-destructive inspection of an object and, more particularly, to a method, apparatus, and system for non-destructive defect inspection of an object based on a selective artificial intelligence engine.
Defective products can lead to deterioration of supply-chain services and losses in automation facilities. Therefore, it is very important to properly inspect products for defects.
Non-destructive inspection, which examines an object without destroying it by using radiation, in particular X-rays, is used for quality inspection. Conventional radiographic non-destructive inspection applies a single technique to determine whether an object shown in an X-ray image is defective. However, this conventional inspection method has limited defect detection performance, so not all defects can be detected from the X-ray image; that is, some defects go undetected. In addition, because the characteristics of an object differ depending on its composition, inspection results, that is, accuracy, may vary even between objects of the same kind, which reduces the reliability of the inspection.
An object of the present invention is to provide a method, apparatus, and system that select an optimal artificial intelligence learning model for an inspection target, thereby improving non-destructive inspection accuracy, increasing inspection reliability, and at the same time increasing the efficiency of the inspection system.
The problems to be solved by the present invention are not limited to those mentioned above, and other problems not mentioned will be clearly understood by those skilled in the art from the description below.
To solve the above problems, an electronic device for performing optional artificial intelligence engine-based non-destructive inspection of an object according to an aspect of the present invention includes a memory that stores a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; and a processor that inspects the object for defects, wherein the processor includes a processing unit configured to perform defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
According to an aspect of the present invention, a method for optional artificial intelligence engine-based non-destructive inspection of an object in an electronic device includes storing a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; receiving image data of the object; determining a category of the input object through the stored learning models and selectively selecting at least one learning model from among the plurality of stored learning models based on the determined category; performing a defect inspection of the object based on the selected at least one learning model; and providing the defect inspection result.
An optional artificial intelligence engine-based non-destructive inspection system for an object according to an aspect of the present invention includes an image acquisition device that acquires image data of an object by irradiating it with radiation; and an electronic device, wherein the electronic device includes a memory configured to store a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables, and a processor configured to perform a defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
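As a rough illustration of the claimed device and method, the sketch below shows how an electronic device might store several learning models, determine the category of an incoming X-ray image, select a corresponding model, and return the inspection result. All names (`InspectionDevice`, `categorizer`, the "combined" key) are hypothetical; the patent does not prescribe any particular implementation or framework.

```python
# A minimal sketch of the claimed device: a memory holding several learning
# models and a processor that picks one per object category. Names are assumed.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

Image = List[List[float]]                  # stand-in for X-ray image data
Model = Callable[[Image], dict]            # a learning model maps an image to a result


@dataclass
class InspectionDevice:
    models: Dict[str, Model] = field(default_factory=dict)      # stored learning models
    categorizer: Optional[Callable[[Image], str]] = None        # category-determining model

    def store_model(self, key: str, model: Model) -> None:
        """Store a learning model for a given learning variable / category."""
        self.models[key] = model

    def inspect(self, image: Image) -> dict:
        """Receive image data, determine the category, select a model, inspect, report."""
        category = self.categorizer(image) if self.categorizer else "combined"
        model = self.models.get(category) or self.models["combined"]
        result = model(image)
        return {"category": category, "result": result}
```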
Other specific details of the invention are included in the detailed description and drawings.
According to the present invention, the following effects can be obtained.
According to the present invention, it is possible to increase inspection reliability by improving the accuracy of the non-destructive inspection of the inspection target while at the same time increasing the efficiency of the inspection system.
The effects of the present invention are not limited to those mentioned above, and other effects not mentioned will be clearly understood by those skilled in the art from the description below.
FIG. 1 is a block diagram illustrating an artificial intelligence-based non-destructive inspection system for an object according to an embodiment of the present invention.
FIG. 2 is a flowchart illustrating a method for non-destructive inspection of an object according to an embodiment of the present invention.
FIG. 3 is a configuration block diagram of an electronic device according to an embodiment of the present invention.
FIG. 4 is a configuration block diagram of a learning unit according to an embodiment of the present invention.
FIG. 5 is a configuration block diagram of a processing unit according to an embodiment of the present invention.
FIG. 6 is a configuration block diagram of an electronic device according to another embodiment of the present invention.
FIG. 7 is a configuration block diagram of the processing unit of FIG. 6.
FIGS. 8 and 9 are diagrams for explaining results of non-destructive inspection of an object according to the present invention.
FIG. 10 is a flowchart illustrating a non-destructive inspection method of an object according to another embodiment of the present invention.
Advantages and features of the present invention, and methods of achieving them, will become clear with reference to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided only so that the disclosure of the present invention is complete and so that the scope of the invention is fully conveyed to those of ordinary skill in the art to which the present invention belongs, and the present invention is defined only by the scope of the claims.
The terminology used herein is for describing the embodiments and is not intended to limit the present invention. In this specification, singular forms also include plural forms unless the context clearly indicates otherwise. As used herein, "comprises" and/or "comprising" do not exclude the presence or addition of one or more elements other than those recited. Like reference numerals refer to like elements throughout the specification, and "and/or" includes each of the recited elements and every combination of one or more of them. Although the terms "first", "second", and the like are used to describe various elements, these elements are of course not limited by these terms; the terms are used only to distinguish one element from another. Accordingly, a first element mentioned below may also be a second element within the technical spirit of the present invention.
Unless otherwise defined, all terms (including technical and scientific terms) used in this specification have the meanings commonly understood by those of ordinary skill in the art to which the present invention belongs. In addition, terms defined in commonly used dictionaries are not to be interpreted ideally or excessively unless explicitly and specifically defined.
Spatially relative terms such as "below", "beneath", "lower", "above", and "upper" may be used to easily describe the relationship of one component to other components. Spatially relative terms should be understood to encompass different orientations of the components in use or operation in addition to the orientation shown in the drawings. For example, if a component shown in a drawing is turned over, a component described as "below" or "beneath" another component may be placed "above" the other component. Thus, the exemplary term "below" may encompass both downward and upward directions. Components may also be oriented in other directions, and the spatially relative terms may be interpreted accordingly.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In this specification, 'image or image data' refers to still image or video data obtained through a tube, a detector, or the like using radiation. In an embodiment, the image may be an X-ray image of an object obtained through an X-ray tube or an X-ray detector. The X-ray image may include, for example, a 2D (two-dimensional) image, a CT (Computed Tomography) image reconstructed from an aggregation of consecutive 2D images, and a slice image of reconstructed CT volume data.
In this specification, 'defect' indicates, during artificial intelligence-based non-destructive inspection of an object subject to defect inspection according to the present invention, a part of the object that is not a part defined, or definable, as normal; it may also be expressed by various other names such as flaw or error. Depending on the embodiment, the present invention is not limited to such expressions and may include the same or similar meaning as a defect in the conventional sense.
FIG. 1 is a block diagram illustrating an artificial intelligence-based non-destructive inspection system for an object according to an embodiment of the present invention.
Referring to FIG. 1, a system for performing artificial intelligence-based non-destructive inspection of an object according to an embodiment of the present invention may include an electronic device 100 and an image acquisition device 150. The configuration of the electronic device 100 and the image acquisition device 150 shown in FIG. 1 is only an embodiment and is not limited thereto; one or more components may be added, or conversely removed, in relation to the operations according to the present invention.
The electronic device 100 may include a memory and a processor; the memory may correspond to, or include, the database 120 shown in FIG. 1, and the processor may include at least one of the controller 110 and the AI engine 130. The AI engine 130 includes, but is not limited to, a deep learning network.
The electronic device 100 may be connected to the image acquisition device 150 through a network to receive image data of an object.
The image acquisition device 150 may include a detector 160, an X-ray tube 170, and a lighting source (not shown), and the detector 160 may be at least one of a 2D detector and a 3D detector. The detector 160 and the X-ray tube 170 are each components for acquiring an X-ray image of the object and may have a conventionally known configuration. In addition, the image acquisition device 150 may further include a device capable of capturing the motion of a moving object and a CT detector (not shown). The light source includes, but is not limited to, a terahertz source, which is a transmissive light source.
As components of the electronic device 100, the controller 110 controls operations performed by the electronic device 100, and the database 120 stores data received and processed by the electronic device 100, such as the image of the object received from the image acquisition device 150, the learning dataset used for defect inspection of the object, and the learning model corresponding to the learning dataset.
The controller 110 may include a hardware unit with the computing capability to run the algorithms of various machine-learning models and related applications that take the image data of the object as input, determine (classify) the category of the object through a learning model stored in the database 120, specify a defect inspection operation mode for the image data of the object subject to defect inspection, selectively select at least one learning model corresponding to the specified mode, and perform defect inspection from the image data of the object based on the selected learning model. For example, the controller 110 may include at least one of a central processing unit, a microprocessor, and a graphics processing unit. The controller 110 may further include a separate memory (not shown) for storing machine-learning model algorithms or applications.
For defect inspection, the electronic device 100 may obtain (higher-quality or improved) object image data by learning from the image data of the object and feeding it to the AI engine 130. Here, the image acquired through the AI engine 130 may be not only image data that is, overall, of higher quality than, or otherwise improved relative to, the object image data input from the image acquisition device 150, but also image data that is wholly or partly improved or newly generated from the viewpoint of defect inspection in artificial intelligence-based non-destructive testing of the object.
According to an embodiment, the electronic device 100 may further define a new learning model by comparing the inspection result for the image data of the object before learning with the inspection result for the image data of the object acquired through the AI engine 130. Based on the learning model added in this way, the electronic device 100 may determine, for the input image of the object, whether to use the AI engine 130, that is, whether to use for defect inspection the image data obtained through the AI engine 130 instead of the input image data of the object, and may operate accordingly.
Accordingly, in connection with the present invention, an X-ray image of an object input to the electronic device 100 can be used by inspection equipment in various fields, for example, semiconductor defect detection, PCB board defect detection, and foreign-matter detection in the food and pharmaceutical fields.
도 2는 본 발명의 일 실시 예에 따른 대상체 비파괴 검사 방법을 설명하기 위해 도시한 흐름도이다. 도 3은 본 발명의 일 실시 예에 따른 전자 장치(100)의 구성 블록도이다. 도 4는 본 발명의 일 실시 예에 따른 학습부(220)의 구성 블록도이다. 도 5는 본 발명의 일 실시 예에 따른 처리부(230)의 구성 블록도이다. 도 6은 본 발명의 다른 일 실시 예에 따른 전자 장치(100)의 구성 블록도이고, 도 7은 도 6의 처리부(610)의 구성 블록도이다. 도 8과 도 9는 본 발명에 따른 대상체 비파괴 검사 결과를 설명하기 위해 도시한 도면이다. 도 10은 본 발명의 다른 일 실시 예에 따른 대상체 비파괴 방법을 설명하기 위해 도시한 흐름도이다.2 is a flowchart illustrating a method for non-destructive testing of an object according to an embodiment of the present invention. 3 is a block diagram of an electronic device 100 according to an embodiment of the present invention. 4 is a configuration block diagram of a learning unit 220 according to an embodiment of the present invention. 5 is a configuration block diagram of the processing unit 230 according to an embodiment of the present invention. 6 is a block diagram of the electronic device 100 according to another embodiment of the present invention, and FIG. 7 is a block diagram of the processing unit 610 of FIG. 6 . 8 and 9 are diagrams for explaining results of non-destructive testing of an object according to the present invention. 10 is a flowchart illustrating a non-destructive method of an object according to another embodiment of the present invention.
The operations of FIGS. 2 and 10 may be performed by the electronic device 100 of FIG. 1. Here, the operations of FIGS. 2 and 10 are described with reference to the configuration of the electronic device 100 shown in FIGS. 3 to 7.
First, referring to FIG. 2, in operation 11 the electronic device 100 may store a plurality of learning variables and the corresponding learning models. Here, a 'learning variable' denotes a training dataset (a first and/or second training dataset) described below, and a 'corresponding learning model' denotes a learning model generated using that training dataset.
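By way of illustration only, the following Python sketch shows one way the pairing of learning variables and corresponding learning models stored in operation 11 could be organized in memory; the class, dataset names and file paths are assumptions made for the example and are not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LearningModel:
    name: str
    weights_path: str  # parameters/weights may differ per inspection target

# Illustrative registry pairing each learning variable (training dataset)
# with the learning model generated from it, as stored in operation 11.
model_store = {
    "Prod#1": LearningModel("first learning model for Prod#1", "models/prod1.pt"),
    "Prod#2": LearningModel("first learning model for Prod#2", "models/prod2.pt"),
    "Combined": LearningModel("second learning model (united dataset)", "models/combined.pt"),
}
```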
In operation 12, the input unit 310 receives image data of the object acquired by the image acquisition device 150 through irradiation with radiation (for example, X-rays).
Depending on the embodiment, the order of operations 11 and 12 may be defined differently from that shown in FIG. 2. This applies not only to operations 11 and 12 but also to the order of the other operations shown in FIG. 2 (and in FIG. 10, described later).
The object image data input to the input unit 310 may be raw data received from the image acquisition device 150, or data at least partially processed through the AI engine 130 described above so as to be suitable for defect inspection.
In an embodiment, whether to perform the defect inspection may be determined in advance based on the received object image data, that is, the raw data. If, based on a separate operation or on the defect determination result of operation 16 described later, the raw data used is found to contain errors or the reliability of the inspection result is judged to be at or below a predetermined reference value, it is not only difficult to obtain a result corresponding to the ground truth, but even a similar result cannot be trusted. In this case, as described above, improved image data may be acquired through the AI engine 130 and used as the basic data for the defect inspection.
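As a non-authoritative sketch of the reliability check just described, the following Python function falls back to the AI-engine-improved image when the raw data cannot support a trustworthy inspection; the inspect callable, the confidence attribute and the enhance method are assumed interfaces for the example, not features fixed by the specification.

```python
def select_basic_data(raw_image, inspect, ai_engine, min_confidence=0.7):
    """Choose the image data used as the basis for the defect inspection.

    If the raw data is unusable or the inspection result falls at or below the
    predefined reference value, the AI-engine-improved image is used instead.
    """
    result = inspect(raw_image)              # preliminary inspection on the raw data
    if result is None or result.confidence <= min_confidence:
        return ai_engine.enhance(raw_image)  # improved image data from the AI engine
    return raw_image
```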
Taking a training dataset as input, the learning unit 320 may generate a learning model corresponding to that training dataset through the preprocessing module 410, the feature extraction module 420 and the inspection processing module 430, and temporarily store it in memory.
In operations 13 and 14, the processing unit 330 determines the category of the object through a learning model based on the input object image data, specifies a defect inspection operation mode for the object based on the determined category, and selects the learning model corresponding to the specified operation mode.
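The following minimal Python sketch illustrates operations 13 and 14 under the assumption of a category classifier exposing a predict method and the model_store mapping sketched earlier; the category labels and mode names are placeholders, not terms prescribed by the specification.

```python
def select_learning_model(object_image, category_model, model_store):
    """Determine the object's category, derive the defect inspection operation
    mode and select the corresponding learning model (operations 13 and 14)."""
    category = category_model.predict(object_image)  # e.g. "Prod#1", "Prod#2" or "Unknown"

    if category in ("Prod#1", "Prod#2"):
        # First operation mode: the target is specified, so use its dedicated first model.
        return "first", model_store[category]
    # Second operation mode: the target is not specified, so use the second model.
    return "second", model_store["Combined"]
```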
In operation 15, the processing unit 330 inspects the object image data for defects based on the selected learning model and generates defect inspection result data.
In operation 16, the output unit 340 provides the defect inspection result data generated by the processing unit 330 for the object. In this specification, 'providing' may be defined in various senses related to outputting the result for the object, such as direct or indirect output through a display, transmission to a target terminal, and/or output control.
The artificial intelligence-based non-destructive inspection scheme for an object according to the present invention is described in more detail below with reference to FIG. 10.
In operations 21 and 22, the electronic device 100 trains a model for defect inspection based on image data of the inspection target and receives image data of the object; these correspond to operations 11 and 12 of FIG. 2, respectively.
In operation 23, the electronic device 100 determines a defect inspection operation mode for the object.
In operation 24-1, if the defect inspection operation mode determined in operation 23 is the first operation mode, the electronic device 100 selects the learning model corresponding to that mode through the operation mode selection module and processes the object image data, that is, performs the defect inspection.
In operation 24-2, on the other hand, if the defect inspection operation mode determined in operation 23 is the second operation mode, the electronic device 100 selects the learning model corresponding to that mode through the operation mode selection module and processes the object image data, that is, performs the defect inspection.
With regard to operations 23 and 24, the determination of the defect inspection operation mode (first mode, second mode) and the processing of the object image data according to the determined mode are described in detail below with reference to FIGS. 4 and 5. To aid understanding of the technical idea of the present invention, only two defect inspection operation modes, the first and second operation modes, are defined and described in this specification; however, the number of operation modes and their definitions may be implemented differently depending on settings or requests.
Referring to FIG. 3, the electronic device 100 may include an input unit 310, a defect inspection unit and an output unit 340. The defect inspection unit may in turn include a learning part 320 and a processing part 330.
Referring to FIG. 4, the learning unit 320 constituting the defect inspection unit may include a preprocessing module 410 that receives from memory and preprocesses training data (or datasets) for learning the non-destructive inspection of the object, that is, the defect inspection, a feature extraction module 420 that extracts features from the preprocessed data, and an inspection processing module 430 that generates, based on the extracted features, a learning model to be used for the defect inspection. The learning unit 320 may use a single AI engine for learning the defect inspection, but is not limited thereto. Depending on the embodiment, the learning unit 320 may be implemented as a plurality of learning modules, each of which uses a different AI engine for learning the defect inspection. In that case, each individual learning module may include at least one of the preprocessing module, feature extraction module and inspection processing module of the learning unit 320 described above.
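Purely as an illustration of a learning unit composed of several learning modules, each wrapping its own AI engine, the following Python class groups the preprocessing, feature-extraction and fitting steps; every callable passed to the constructor is an assumption of the example rather than part of the disclosure.

```python
class LearningModule:
    """One learning module with its own AI engine, mirroring the preprocessing,
    feature extraction and inspection processing steps of Fig. 4."""

    def __init__(self, engine_name, preprocess, extract_features, fit):
        self.engine_name = engine_name
        self.preprocess = preprocess
        self.extract_features = extract_features
        self.fit = fit

    def learn(self, training_dataset):
        prepared = [self.preprocess(sample) for sample in training_dataset]
        features = [self.extract_features(sample) for sample in prepared]
        return self.fit(features)  # learning model generated for this dataset
```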
Referring to FIG. 4, the training datasets Prod #1, Prod #2, ..., Prod #N (where N is a positive integer) may be defined as first training datasets for predefined individual inspection targets. Likewise, the learning models Model #1, Model #2, ..., Model #N (where N is a positive integer) may be defined as the learning models generated from the first training datasets, that is, first learning models.
In other words, the learning unit 320 may use a plurality of learning variables, that is, the individual first training datasets, to generate a corresponding plurality of individual first learning models. A first learning model generated in this way may be regarded as a learning model specialized for its individual inspection target. That is, the learning unit 320 may generate N first learning models corresponding to the N first training datasets (where N is a positive integer).
Meanwhile, a second training dataset may be defined that combines at least two individual first training datasets or includes all of the first training datasets. Such a second training dataset may also be called a full training dataset, a combined training dataset or a united training dataset. The learning unit 320 may generate a second learning model using this second training dataset. The second learning model is therefore a learning model distinct from the first learning models described above, and the number of second learning models may be determined by the number of second training datasets so defined. The individual training datasets used in the second training dataset may or may not be identical to the first training datasets described above. For example, the second training dataset may include at least one training dataset not classified as a first training dataset. From this perspective, the first training datasets may be viewed as the classified training datasets on which generation of the first learning models is based.
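A minimal sketch of how the first and second learning models could be generated from the first training datasets and their union is given below; the train callable stands in for whatever fitting routine the learning unit actually uses and is an assumption of the example.

```python
def build_learning_models(first_datasets, train):
    """Generate one first learning model per individual first training dataset and
    one second learning model from the combined (united) training dataset."""
    first_models = {name: train(data) for name, data in first_datasets.items()}  # Model #1 ... Model #N

    combined_dataset = [sample for data in first_datasets.values() for sample in data]
    second_model = train(combined_dataset)  # second learning model
    return first_models, second_model
```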
Meanwhile, although only a single second training dataset containing all the first training datasets is defined in FIGS. 4, 5 and 7 for convenience of description, the invention is not limited thereto. In other words, depending on how they are composed, a plurality of second training datasets may be defined, and there may likewise be a plurality of corresponding learning models.
The generated first and/or second learning models may have different values of parameters, weights and the like depending on the inspection target.
As described later, the present invention may determine which learning model to use depending on whether the target of the object is specified, and may perform the defect inspection of the object according to the determined learning model. Here, whether the target is specified may mean, for example, the category or classification determined for the object. Depending on the embodiment, whether the target is specified may be decided not by the electronic device 100 but by an external input, for example an input or request from the party requesting the defect inspection of the object or from a terminal. Whether the target is specified may therefore be judged by whether a learning model usable for the defect inspection can be specified for the object to be inspected. That is, when the external input is received, the electronic device 100 may specify and selectively choose a learning model on that basis. When the target of the object is specified, the electronic device 100 selects and uses the specific first learning model corresponding to the specified target; otherwise it may inspect the object for defects using a second learning model. If there is a plurality of second learning models, the electronic device 100 may decide which second learning model(s) to use, or whether to use all of the plurality of second learning models, and perform the defect inspection of the object accordingly.
Meanwhile, according to an embodiment, if the result of a previously performed defect inspection of the object is at or below a predetermined reference value and the previously selected learning model among the plurality of stored learning models is one of the plurality of first learning models, the electronic device 100 may control selection of a second learning model from the stored learning models rather than another first learning model. Depending on the embodiment, irrespective of target specification of the object, or when target specification of the object is difficult, the electronic device 100 may perform the defect inspection of the object based on at least one of a first learning model or a second learning model set as default. In this case, the electronic device 100 may decide, based on the object defect inspection result obtained with the default learning model, whether to define an additional learning model, re-inspect, and so on, and perform the corresponding operation. For example, the electronic device 100 may perform a defect inspection of the object based on a specific first learning model set as default and, if the result is at or below a predefined reference value, re-perform the defect inspection of the object based on a second learning model. Depending on the embodiment, the electronic device 100 may by default select at least two specific first learning models from among the first learning models to perform the defect inspection of the object, and decide according to the result whether to re-perform the inspection based on a second learning model. Here, the selected specific first learning models may be determined based on information about the object subject to the defect inspection, the history of previous defect inspections, settings of the electronic device 100, information input by the inspection requester or a terminal, and the like. The same or a similar approach may also be used for specifying the target of the object.
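The re-inspection fallback described above can be pictured with the following hedged Python sketch; the choice of default model, the score attribute and the reference value are illustrative assumptions only.

```python
def inspect_with_fallback(object_image, model_store, inspect, reference_value=0.9):
    """Inspect with a default first learning model and re-inspect with the second
    learning model when the result stays at or below the reference value."""
    result = inspect(object_image, model_store["Prod#1"])   # default first learning model
    if result.score <= reference_value:
        # The previously selected model was a first learning model and the result is
        # not good enough, so a second learning model is selected instead.
        result = inspect(object_image, model_store["Combined"])
    return result
```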
The operation of the learning unit 320 shown in FIG. 4 may be performed i) when image data of an object to be inspected is input through the input unit 310 shown in FIG. 3, ii) regularly or irregularly prior to i), or iii) regularly or irregularly independently of i).
Referring to FIG. 5, the processing unit 330 may include a model selection module 510, a preprocessing module 520, a feature extraction module 530 and an inspection processing module 540.
Here, the preprocessing module 520, feature extraction module 530 and inspection processing module 540 may be components separate from their counterparts shown in FIG. 4. In that case, the preprocessing module, feature extraction module and inspection processing module can be regarded as implemented individually in the learning unit 320 of FIG. 4 and in the processing unit 330 of FIG. 5.
Meanwhile, depending on the embodiment, there may be a single processing unit as shown in FIG. 6; in that case, as shown in FIG. 7, the preprocessing module, feature extraction module and inspection processing module may be implemented in a form shared between the learning unit 320 of FIG. 4 and the processing unit 330 of FIG. 5.
The model selection module 510 determines a defect inspection operation mode for the object and selectively selects, from among the learning models generated from the training datasets in FIG. 4, at least one model corresponding to the determined mode. Depending on the embodiment, the model selection module 510 may simply select at least one of the learning models generated from the training datasets in FIG. 4 according to a defect inspection operation mode already determined by the control unit 110 of FIG. 1 or by a component in the learning unit 320 of FIG. 4.
Referring to FIG. 5, the processing unit 330 performs preprocessing in the preprocessing module 520 on the object image data received through the input unit 310; the preprocessed object image data is then inspected for defects through the feature extraction module 530 and the inspection processing module 540, based on the learning model selected by the model selection module 510 as corresponding to the defect inspection operation mode, and defect inspection result data is generated from the inspection result and passed to the output unit 340. The output unit 340 then provides the defect inspection result data of the object that it has received.
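As a sketch only, the processing-unit flow of FIG. 5 could be expressed as the following Python function; the preprocess, extract_features and classify callables and the fields of the returned dictionary are assumptions made for the example.

```python
def run_defect_inspection(object_image, model, preprocess, extract_features, classify):
    """Pre-process the object image, extract features with the selected learning
    model and generate defect inspection result data (FIG. 5 flow)."""
    prepared = preprocess(object_image)           # preprocessing module 520
    features = extract_features(prepared, model)  # feature extraction module 530
    verdict = classify(features, model)           # inspection processing module 540
    return {
        "defective": verdict.is_defective,        # overall defect decision
        "regions": verdict.regions,               # detected defect portions
    }
```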
Meanwhile, in this specification, the first training dataset may be generated based on data already classified for inspection of the object, and the second training dataset may be generated based on, or so as to include, unclassified unknown data.
Table 1 below shows defect inspection results according to the learning model in connection with the present invention.
Table 1
Learning model                        Prod#1    Prod#2    Unknown
Second AI learning model              80.9%     84.2%     79.3%
First AI learning model (Prod#1)      95.3%     61.9%     38.1%
First AI learning model (Prod#2)      52.1%     97.6%     50.7%
Accordingly, as described above, it is preferable to selectively choose a learning model according to the target specified for the object and to perform the defect inspection of the object with it. That is, performance is best, the process is most efficient and the desired defect inspection result can be obtained when the defect inspection is performed with the second learning model for an unknown object whose target is not specified, while for pre-classified data the first learning model corresponding to Prod#1 is selectively chosen for Prod#1 and the first learning model corresponding to Prod#2 is selectively chosen for Prod#2.
According to another embodiment, referring to FIGS. 6 and 7, the defect inspection unit 610 of the electronic device 100 may differ in configuration from the defect inspection unit shown in FIG. 3. That is, whereas in FIG. 3 the defect inspection unit is shown with the learning unit 320 and the processing unit 330 as separate components, each individually provided with a preprocessing module, a feature extraction module and an inspection processing module, in FIGS. 6 and 7 the implementation is modularized so that the defect inspection unit 610 shares the preprocessing module 720, feature extraction module 740 and inspection processing module 750 to perform the functions of the learning unit and processing unit of FIG. 3. Here, the defect inspection unit 610 of FIGS. 6 and 7 includes a model selection module 710 like that shown in FIG. 5.
Meanwhile, as in operation 16 of FIG. 2, the defect inspection result data may be provided, for example, by marking the detected defect portions on the object image data in the manner shown in FIG. 8 or FIGS. 9(c) to 9(d). The present invention is not, however, limited to this manner of provision. For example, text, audio or images (graphs) relating to the type of defects detected, the degree of the defects, the number of defects, the defect ratio and the like may additionally be provided together with the above, or provided separately.
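A hypothetical shape for the defect inspection result data delivered by the output unit is sketched below; every field name and value is illustrative and none of them is prescribed by the specification.

```python
# Illustrative defect inspection result data: detected regions to be drawn on the
# object image plus optional summary fields (type, count, ratio).
inspection_result = {
    "regions": [
        {"x": 120, "y": 64, "width": 32, "height": 18},  # one detected defect portion
    ],
    "defect_type": "void",
    "defect_count": 1,
    "defect_ratio": 0.013,
}
```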
FIGS. 8(a) and 8(b) show defect inspection results obtained from the object image data in the second defect inspection operation mode described above, that is, using the second learning model described with reference to FIGS. 4 and 5. In contrast, FIGS. 8(c) and 8(d) show defect inspection results obtained from the object image data in the first defect inspection operation mode described above, that is, using the first learning model specialized for the object described with reference to FIGS. 4 and 5.
FIGS. 9(a) and 9(b) show the results of inspecting the object image data for defects using only a single pre-generated trained model. In contrast, FIGS. 9(c) and 9(d) show the results of inspecting the object image data for defects by selectively choosing a learning model from among a plurality of learning models according to the present invention described above.
The rectangular portions 811 to 844 in the image data shown in FIGS. 8(a) to 8(d) and the rectangular portions 911 to 941 in the image data shown in FIGS. 9(a) to 9(d) indicate detected defect portions.
In relation to the scope of the present invention, 'processor' may, depending on the context, denote all of the input unit, learning unit, processing unit and output unit shown in FIGS. 3 to 7, or at least one of them.
The steps of a method or algorithm described in connection with an embodiment of the present invention may be implemented directly in hardware, in a software module executed by hardware, or by a combination of the two. A software module may reside in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable recording medium well known in the art to which the present invention pertains.
Although embodiments of the present invention have been described above with reference to the accompanying drawings, those of ordinary skill in the art to which the present invention pertains will understand that the invention may be practiced in other specific forms without changing its technical idea or essential features. The embodiments described above should therefore be understood as illustrative in all respects and not restrictive.

Claims (10)

  1. An electronic device for performing non-destructive inspection of an object based on a selective artificial intelligence engine, the electronic device comprising:
    a memory configured to store a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; and
    a processor configured to inspect the object for defects,
    wherein the processor includes a processing unit configured to perform the defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
  2. The electronic device of claim 1, wherein the processor determines a category of the input object through the stored learning models and selectively selects at least one learning model from among the plurality of stored learning models based on the determined category of the object.
  3. The electronic device of claim 1, wherein the training datasets include:
    a first training dataset containing individual training data for learning variables classified in advance among the plurality of learning variables; and
    a second training dataset containing learning variables not classified in advance among the plurality of learning variables, or a training dataset formed by combining at least two of the first training datasets.
  4. The electronic device of claim 3, wherein the processor generates and stores the plurality of learning models by distinguishing each first learning model corresponding to a first training dataset from a second learning model corresponding to the second training dataset.
  5. The electronic device of claim 4, wherein, if the result of the performed defect inspection of the object is at or below a predetermined reference value and the previously selected learning model among the plurality of stored learning models is one of the plurality of first learning models, the processor controls selection of a second learning model from among the plurality of stored learning models.
  6. The electronic device of claim 1, wherein the processor includes:
    a preprocessing module configured to preprocess training datasets corresponding to the plurality of learning variables;
    a feature extraction module configured to extract features from the preprocessed data; and
    a processing module configured to generate, based on the extracted features, an individual learning model corresponding to each training dataset.
  7. The electronic device of claim 1, wherein the processor further includes:
    an input unit configured to receive image data of the object; and
    an output unit configured to provide a defect inspection result of the processing unit.
  8. The electronic device of claim 1, wherein, when an external input relating to the object is received, the processor selects at least one learning model from among the plurality of stored learning models based on the external input.
  9. A method of non-destructive inspection of an object based on a selective artificial intelligence engine in an electronic device, the method comprising:
    storing a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables;
    receiving image data of the object;
    determining a category of the input object through the stored learning models, and selectively selecting at least one learning model from among the plurality of stored learning models based on the determined category of the object;
    performing a defect inspection of the object based on the selected at least one learning model; and
    providing a result of the defect inspection.
  10. A selective artificial intelligence engine-based object non-destructive inspection system comprising:
    an image acquisition device configured to obtain image data of an object by irradiating the object with radiation; and
    an electronic device, wherein the electronic device includes:
    a memory configured to store a plurality of learning variables for defect inspection and a plurality of learning models corresponding to the individual learning variables; and
    a processor configured to perform a defect inspection of the object based on at least one learning model selected from among the plurality of stored learning models.
PCT/KR2022/019128 2021-11-30 2022-11-30 Method, device, and system for optional artificial intelligence engine-based nondestructive inspection of object WO2023101375A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020210169106A KR102602559B1 (en) 2021-11-30 2021-11-30 Method, apparatus and system for non-constructive inspection of object based on selective artificial intelligence engine
KR10-2021-0169106 2021-11-30

Publications (1)

Publication Number Publication Date
WO2023101375A1 true WO2023101375A1 (en) 2023-06-08

Family

ID=86612612

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2022/019128 WO2023101375A1 (en) 2021-11-30 2022-11-30 Method, device, and system for optional artificial intelligence engine-based nondestructive inspection of object

Country Status (2)

Country Link
KR (2) KR102602559B1 (en)
WO (1) WO2023101375A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018147240A (en) * 2017-03-06 2018-09-20 パナソニックIpマネジメント株式会社 Image processing device, image processing method, and image processing program
KR101940029B1 (en) * 2018-07-11 2019-01-18 주식회사 마키나락스 Anomaly detection
JP2021039022A (en) * 2019-09-04 2021-03-11 信越化学工業株式会社 Defect classification method and defect classification system and screening method and manufacturing method for photo mask blank
KR20210042267A (en) * 2018-05-27 2021-04-19 엘루시드 바이오이미징 아이엔씨. Method and system for using quantitative imaging
KR102316286B1 (en) * 2021-02-18 2021-10-22 임계현 Method for analyzing hair condition using artificial intelligence and computing device for executing the method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102249836B1 (en) 2019-08-26 2021-05-10 레이디소프트 주식회사 Method for non-destructive inspection based on image and Computer-readable storage medium


Also Published As

Publication number Publication date
KR20230160754A (en) 2023-11-24
KR102602559B9 (en) 2024-03-13
KR20230081911A (en) 2023-06-08
KR102602559B1 (en) 2023-11-16

Similar Documents

Publication Publication Date Title
CN112534243B (en) Inspection apparatus and method, and computer-readable non-transitory recording medium
CN110390351A (en) A kind of Epileptic focus three-dimensional automatic station-keeping system based on deep learning
WO2016204402A1 (en) Component defect inspection method, and apparatus therefor
WO2021137454A1 (en) Artificial intelligence-based method and system for analyzing user medical information
CN114170478A (en) Defect detection and positioning method and system based on cross-image local feature alignment
WO2022197044A1 (en) Bladder lesion diagnosis method using neural network, and system thereof
CN112766251B (en) Infrared detection method and system for power transformation equipment, storage medium and computer equipment
WO2023101375A1 (en) Method, device, and system for optional artificial intelligence engine-based nondestructive inspection of object
CN115222649A (en) System, apparatus and method for detecting and classifying patterns of heatmaps
Ibarra et al. Determination of Leaf Degradation Percentage for Banana leaves with Panama Disease Using Image Segmentation of Color Spaces and OpenCV
JP2023145412A (en) Defect detection method and system
WO2023101374A1 (en) Artificial intelligence-based non-destructive ensemble testing method, device, and system for object
US20220172453A1 (en) Information processing system for determining inspection settings for object based on identification information thereof
JP2021042955A (en) Food inspection device, food inspection method and learning method of food reconstruction neural network for food inspection device
KR20220111214A (en) Method, apparatus and computer program for inspection of product based on artificial intelligence
WO2022108250A1 (en) Deep learning-based high image quality x-ray image generation method, apparatus, and program
WO2020158630A1 (en) Detecting device, learner, computer program, detecting method, and method for generating learner
KR102585028B1 (en) Method, apparatus and system for non-constructive inspection of object based on three-dimensional process
Acri et al. A novel phantom and a dedicated developed software for image quality controls in x-ray intraoral devices
KR20000050719A (en) Automatic inspection system for quality of connection cable
CN110288573A (en) A kind of mammalian livestock illness automatic testing method
JP3097153B2 (en) Grade testing equipment for fruits and vegetables
WO2022108249A1 (en) Method, apparatus, and program for generating training data, and method for detecting foreign substance using same
EP4372379A1 (en) Pathology image analysis method and system
WO2023282611A1 (en) Ai model learning device for reading diagnostic kit test result and method for operating same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22901733

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE