WO2023166773A1 - Image analysis system, image analysis method, and program - Google Patents

Image analysis system, image analysis method, and program

Info

Publication number
WO2023166773A1
WO2023166773A1 (PCT/JP2022/036388)
Authority
WO
WIPO (PCT)
Prior art keywords
learning
image
evaluation
image analysis
sub
Prior art date
Application number
PCT/JP2022/036388
Other languages
English (en)
Japanese (ja)
Inventor
敦 宮本
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所
Publication of WO2023166773A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to an image analysis system, an image analysis method, and a program.
  • the present invention claims priority of Japanese patent application number 2022-033635 filed on March 4, 2022, and for designated countries where incorporation by reference of documents is permitted, the content described in the application is incorporated into this application by reference.
  • In step S1, the appearance of the welded portion 201 of the workpiece 200 is inspected by the shape measurement unit 21 of the appearance inspection apparatus 20.
  • In step S5, the presence or absence of shape defects and the type of shape defects are specified in the acquired image data.
  • In step S6, the learning data set is reviewed and re-created or newly created, and in step S7 re-learning of the judgment model is executed using the learning data set created in step S6.
  • This improves the accuracy of the judgment model for judging the quality of the shape of the welded part, making it possible to improve the accuracy of determining the presence and type of important shape defects.
  • The technique disclosed in Patent Document 1 is inefficient because it requires advance preparation, such as classifying image data for learning in advance according to the material and shape of the workpiece and performing data expansion processing.
  • The present invention has been made in view of the above points, and aims to provide a technique for efficient learning in image analysis using machine learning.
  • the present application includes multiple means for solving at least part of the above problems, and the following are examples of such means.
  • An image analysis system of the present invention is an image analysis system comprising at least one processor and a memory resource, wherein the processor executes: a learning image acquisition step of capturing an evaluation object for learning to obtain a learning image group; a learning step of training an evaluation engine using the learning image group; an evaluation image acquisition step of capturing an evaluation object to obtain an evaluation image; and an evaluation step of inputting the evaluation image to the trained evaluation engine and outputting an estimated evaluation value.
  • FIG. 1 is a diagram showing an example of an overall processing sequence in an image analysis system based on machine learning.
  • FIG. 2 is a diagram showing an example of a learning method of an evaluation engine.
  • FIG. 3 is a diagram schematically showing an example of a learning state of an evaluation engine.
  • FIG. 4 is a diagram showing an example of a selection condition input screen.
  • FIG. 5 is a diagram showing an example of the hardware configuration of an image analysis system.
  • CNN: Convolutional Neural Network
  • a method of inspecting is disclosed.
  • machine learning-based image processing covers a wide range of fields, including semantic segmentation, recognition, image classification, image conversion, and image quality improvement.
  • In learning, an image of an object to be evaluated (learning image) is input, and the internal parameters of the evaluation engine, such as network weights and biases, are updated so that the difference between the estimated evaluation value output from the evaluation engine and the correct evaluation value taught in advance becomes small.
  • the evaluation value is the inspection result such as the presence or absence of defects and the degree of abnormality of the evaluation target.
  • In the case of semantic segmentation, the evaluation value is the label of the region; in the case of image conversion, it is the image after conversion.
  • As for the timing of updating the internal parameters, instead of learning from all the training images at once, it is common to divide the training images into sets called mini-batches and update the internal parameters for each mini-batch. This is called mini-batch learning; when all mini-batches have been learned, all training images have been used for learning. Learning all of the mini-batches once is called one epoch, and the internal parameters are optimized by repeating the epoch many times. The training images may also be reshuffled into mini-batches for each epoch.
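As a minimal sketch of the general practice described above (not taken from the patent itself; `update_params` is a hypothetical stand-in for one parameter-update step):

```python
import random

def train(images, labels, update_params, batch_size=32, epochs=3, seed=0):
    """Mini-batch learning: each epoch reshuffles the training set, splits it
    into mini-batches, and updates the internal parameters once per batch."""
    rng = random.Random(seed)
    order = list(range(len(images)))
    updates = 0
    for _ in range(epochs):
        rng.shuffle(order)  # reshuffle the training images every epoch
        for start in range(0, len(order), batch_size):
            batch = order[start:start + batch_size]
            update_params([images[i] for i in batch],
                          [labels[i] for i in batch])
            updates += 1
    return updates
```

With 100 images and a batch size of 32, each epoch performs 4 updates (batches of 32, 32, 32, and 4), so one pass over all mini-batches constitutes one epoch.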
  • data cleansing reduces the number of learning images by deleting or integrating unnecessary or redundant images included in the learning image group in advance.
  • the structures and appearances of evaluation objects are diverse. If the object to be evaluated has many pattern variations, a large number of training images are essentially required, and there is a limit to how much training images can be reduced by data cleansing. If images to be learned are excluded by sampling or the like, there is a risk of impairing evaluation performance. Therefore, there is a demand for a mechanism for quickly learning the internal parameters of the evaluation engine without degrading the evaluation performance.
  • FIG. 1 is a diagram showing an example of an overall processing sequence in an image analysis system 1 based on machine learning.
  • Image processing includes visual inspection, semantic segmentation and recognition, image classification, image transformation and image quality improvement.
  • a processing sequence executed by the image analysis system 1 is roughly divided into a learning phase 110 and an evaluation phase 120 .
  • the evaluation object P is imaged for learning to acquire a learning image (step S0).
  • An image is acquired by capturing the surface or inside of the evaluation object P as a digital image with an imaging device such as a CCD (Charge Coupled Device) camera, optical microscope, charged particle microscope, ultrasonic inspection device, or X-ray inspection device.
  • As an alternative method of acquisition, it is possible to simply receive an image captured by another system and store it in the storage resource of the image analysis system.
  • In step S1, data cleansing may be performed on the entire learning image group Q captured in step S0 to reduce the number of learning images by deleting or integrating unnecessary or redundant learning images in advance.
  • a correct evaluation value g_i is given to each learning image f_i.
  • In the case of inspection, the evaluation value is the inspection result, such as the presence or absence of defects and the degree of abnormality of the evaluation object P.
  • In the case of semantic segmentation, the evaluation value is the label of the region.
  • In the case of image conversion, the evaluation value is the image after conversion.
  • The correct evaluation value is assigned based on a user's visual judgment or on a numerical value analyzed by another processing device or means.
  • the evaluation engine 111 learns using the learning image ⁇ f_i ⁇ and the correct evaluation value ⁇ g_i ⁇ (step S2).
  • the evaluation engine 111 is an estimator that receives a learning image f_i (an evaluation image S in the evaluation phase 120) and outputs an estimated evaluation value g ⁇ _i.
  • a variety of machine learning engines can be used for the evaluation engine 111, and examples include deep neural networks represented by Convolutional Neural Network (CNN).
  • the internal parameters 113 of the evaluation engine 111 are optimized so that an estimated evaluation value g ⁇ _i close to the taught correct evaluation value g_i is output when the learning image f_i is input.
  • The internal parameters 113 include "hyperparameters," such as the network structure, activation function, learning rate, and learning termination conditions, and "model parameters," such as the weights (coupling coefficients) and biases between the nodes of the network.
  • Optimization of the internal parameters 113 is performed by iterative learning, and the sub-learning image group {f'_j(k)} (T) used in the k-th iterative learning is selected from the learning image group {f_i} (R) by the learning image selection engine 112.
  • the actual evaluation object P is imaged (step S0), and an evaluation image S is obtained.
  • the evaluation image S is input to the evaluation engine 111 using the internal parameters 113 learned in the learning phase 110, and automatic evaluation is performed (step S3).
  • the user confirms this evaluation result as necessary (step S4).
  • Data cleansing is known, in which unnecessary or redundant images included in a group of learning images are deleted or integrated in advance to reduce the number of learning images (step S1).
  • The structure and appearance of the evaluation target P are diverse. When many pattern variations exist in the evaluation object P, many learning images are essentially required, and there is a limit to how much the number of learning images can be reduced by data cleansing. If learning images are excluded by sampling or the like, there is a risk of impairing the evaluation performance. Therefore, this embodiment provides a mechanism for quickly learning the internal parameters 113 of the evaluation engine 111 without degrading the evaluation performance.
  • FIG. 2 is a diagram showing an example of the learning method of the evaluation engine 111. A method for quickly learning the internal parameters 113 of the evaluation engine 111 will be described with reference to FIG. 2.
  • Each learning image is not divided in advance into the two choices of use or exclusion; instead, each learning image is dynamically used or excluded (temporarily, during iterative learning) according to the learning state of the evaluation engine 111.
  • The entire learning step corresponds to step S2 in FIG. 1.
  • The selection probability P_k(f_i) (201) ("201" is a reference sign) is calculated for the learning image f_i in each iterative learning, and based on this selection probability P_k(f_i) (201), it is determined whether or not to use the learning image f_i in the k-th iterative learning, yielding the sub-learning image group {f'_j(k)} (T).
  • It is desirable to set the selection probability P_k(f_i) (201) high in the next, k-th, iterative learning.
  • The selection probability P_(k+1)(f_i) (201) can be set low in the next, (k+1)-th, iterative learning.
  • It is desirable to set the selection probability P_(k+2)(f_i) (201) high in the next, (k+2)-th, iterative learning.
  • the sub-learning image group ⁇ f'_j(k) ⁇ (T) is obtained in consideration of this priority. This makes it possible to reduce the number of learning images in each epoch.
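One plausible way to realize this per-iteration selection (a sketch under assumed names, not the patent's specified implementation) is to draw each learning image independently with its selection probability P_k(f_i), so that exclusion is temporary and re-decided every iteration:

```python
import random

def select_sub_learning_group(image_ids, selection_prob, rng=None):
    """Form the k-th sub-learning image group {f'_j(k)}: each learning
    image f_i is used in this iteration with probability P_k(f_i)."""
    rng = rng or random.Random(0)
    return [i for i in image_ids if rng.random() < selection_prob[i]]
```

Images with probability 1.0 are always kept and images with probability 0.0 are always skipped, while intermediate probabilities thin the set proportionally for that epoch only.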
  • A specific example of the evaluation result 202 is the correctness 203 of the estimated evaluation value.
  • ⁇ Method for calculating selection probability P_k(f_i)> Regarding the learning image selection engine 112, a method of calculating the selection probability P_k(f_i) (201) of the learning image f_i to the sub-learning image group ⁇ f'_j(k) ⁇ (T) in the k-th iterative learning will be described.
  • This embodiment is characterized in that the selection probability P_k(f_i) (201) is a function of the degree of margin M(f_i) (204) ("204" is a reference sign).
  • For a learning image with a high degree of margin, the selection probability P_k(f_i) (201) can be set low.
  • As the margin M(f_i) (204), for example, the difference between the estimated evaluation value ĝ_i and the correct evaluation value g_i of the learning image f_i can be used. If the difference is very small, the estimated evaluation value ĝ_i is correct with a margin. If the difference is large, the evaluation result is an incorrect answer, and the larger the difference, the lower the margin even among incorrect answers.
  • the margin M(f_i) (204) and the selection probability P_k(f_i) (201) can be calculated as continuous values.
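As a hedged illustration of such a continuous margin (the specific mapping and the `scale` parameter are assumptions, not taken from the patent), the difference |ĝ_i − g_i| can be turned into a score that is 1.0 when the estimate matches the correct value and decreases continuously as the difference grows:

```python
def margin(estimated, correct, scale=1.0):
    """Continuous margin M(f_i): 1.0 when the estimated evaluation value
    equals the correct one, decreasing toward 0.0 as |estimated - correct|
    grows. `scale` (assumed) sets how quickly the margin falls off."""
    diff = abs(estimated - correct)
    return 1.0 / (1.0 + diff / scale)
```

A difference of 0 gives margin 1.0; a difference equal to `scale` gives 0.5, matching the text's idea that larger differences mean lower margin even among incorrect answers.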
  • This embodiment also sets the selection probability P_k(f_i) (201) of the learning image f_i to the sub-learning image group {f'_j(k)} (T) in the k-th iterative learning as a function of the degree of similarity between the learning image f_i and the other learning images {f_a} (a ≠ i).
  • This embodiment is characterized in that the selection probability P_k(f_i) (201) of the learning image f_i is updated for each iterative learning.
  • As a criterion for the selection probability P_k(f_i) (201), the degree of similarity between the learning image f_i and the other learning images can be used. When very similar learning images exist, or when there are many similar images, the similarity is high and the selection probability P_k(f_i) (201) is set low.
  • The correctness 203 of the estimated evaluation value, based on the estimated evaluation value ĝ_i of the learning image f_i in the k-th iterative learning, and the margin M(f_i) (204) can also be used, but these values change depending on the learning state.
  • The similarity between learning images does not change during iterative learning, but it can be used together with the values that do change as a criterion for calculating the selection probability P_k(f_i) (201). That is, the selection probability P_k(f_i) (201) can be given as a function of these multiple criteria (step S21).
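Combining the three criteria into one function might be sketched as follows; the linear weighting is an assumption (the patent only states that P_k(f_i) is a function of these criteria), and the default weights mirror the 0.2/0.6/0.2 example given for boxes 415-417 on the selection condition input screen:

```python
def selection_probability(is_correct, margin, similarity, w=(0.2, 0.6, 0.2)):
    """Selection probability P_k(f_i) as an assumed weighted sum of three
    criteria: an incorrect answer, a low margin, and a low similarity to
    other learning images all raise the probability of being selected."""
    score = (w[0] * (0.0 if is_correct else 1.0)   # correctness 203
             + w[1] * (1.0 - margin)               # margin M(f_i) 204
             + w[2] * (1.0 - similarity))          # similarity to others
    return max(0.0, min(1.0, score))               # clamp to [0, 1]
```

An incorrectly evaluated, low-margin, dissimilar image gets probability 1.0, while a confidently correct image surrounded by near-duplicates gets probability 0.0.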
  • the image analysis system 1 optimizes the internal parameters 113 of the evaluation engine 111 by repeating the processing from step S21 to step S23 using the learning image group ⁇ f_i ⁇ (R).
  • In step S21, the image analysis system 1 determines the selection probabilities P_k(f_i) (201). Specifically, the image analysis system 1 uses the (k-1)-th evaluation results 202 to calculate the k-th selection probabilities P_k(f_i) (201). The image analysis system 1 then uses the learning image selection engine 112 to select the sub-learning image group {f'_j(k)} (T).
  • In step S22, the image analysis system 1 performs learning of the evaluation engine 111. Specifically, the image analysis system 1 updates the internal parameters 113 of the evaluation engine 111 so that an estimated evaluation value ĝ_i close to the correct evaluation value g_i assigned in advance is output.
  • In step S23, the image analysis system 1 uses the evaluation engine 111 to obtain evaluation results 202 for each learning image. Specifically, the image analysis system 1 calculates the estimated evaluation value ĝ_i for each learning image f_i included in the learning image group {f_i} (R), using the evaluation engine 111 to which the internal parameters 113 learned in step S22 are applied, and obtains an evaluation result 202 for each learning image f_i. As an example, the evaluation result 202 consists of the correctness 203 of the estimated evaluation value ĝ_i and the degree of margin M(f_i) (204).
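Steps S21 to S23 can be sketched as a single loop (the helper names are hypothetical stand-ins; the real engines 111 and 112 are learned models, not these stubs):

```python
def iterative_learning(images, correct_values, n_iterations,
                       select_fn, learn_fn, evaluate_fn):
    """Patent-style iterative loop: S21 selects the sub-learning group from
    the previous evaluation results, S22 updates the internal parameters on
    that subset, and S23 re-evaluates all learning images so the results can
    drive the next iteration's selection probabilities."""
    results = {i: None for i in images}              # no evaluation yet at k=1
    for k in range(1, n_iterations + 1):
        sub_group = select_fn(images, results, k)    # S21: choose {f'_j(k)}
        learn_fn(sub_group, correct_values)          # S22: update parameters
        results = {i: evaluate_fn(i) for i in images}  # S23: evaluate all f_i
    return results
```

Note that S23 evaluates the whole learning image group {f_i}, not just the subset trained on, since the next selection needs up-to-date results for every image.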
  • By repeating steps S21 to S23, the evaluation engine 111 can learn efficiently. In addition, since the evaluation engine 111 learns using the sub-learning image group {f'_j(k)} (T) selected using the margin M(f_i) (204) and the similarity, the internal parameters 113 can be optimized more efficiently.
  • FIG. 3 is a diagram schematically showing an example of the learning state of the evaluation engine 111.
  • the concept of calculating the selection probability P_k(f_i) (201) will be described with reference to FIG.
  • As an example of the evaluation engine 111, let us consider non-defective product determination, which inspects whether the evaluation target P is a non-defective product or a defective product.
  • Inside the evaluation engine 111, a plurality of feature values {C_a} (a = 1, ...) are calculated from the input image, and it is considered that an estimated evaluation value is output based on these feature values.
  • In FIG. 3, the circle and triangle plots show the distribution of the feature values of the learning images in the k-th iterative learning; the circles and triangles are the feature values of the non-defective and defective product learning images, respectively.
  • Plots existing inside the non-defective product cluster 300 in the k-th iterative learning are determined to be non-defective products, and plots existing outside are determined to be defective products.
  • The non-defective product cluster 300 separates non-defective products from defective products as much as possible.
  • Regarding the selection probability P_k(f_i) (201) of the learning image f_i to the sub-learning image group {f'_j(k)} (T) in the k-th iterative learning, the white, gray, and black plots represent 'high', 'medium', and 'low' selection probabilities, respectively.
  • the selection probabilities are displayed in three stages in FIG. 3, but the actual selection probabilities can take continuous values.
  • It is desirable to set the selection probability high for learning images near the boundary of the non-defective product cluster 300.
  • The five non-defective product learning images 302 present in the center of the non-defective product cluster 300 have correct estimated evaluation values, and because they are located in the center of the cluster, their estimated evaluation values are unlikely to turn into incorrect answers. That is, since these are learning images with a high degree of margin M(f_i) (204), it is desirable to set their selection probability low.
  • Since the margin M(f_i) (204) of the two non-defective product learning images 303 existing between the boundary and the center of the non-defective product cluster 300 is medium, it is desirable to set their selection probability to medium as well.
  • The same applies to the learning images of defective products. That is, the defective product learning images 304, 305, and 306 all have correct estimated evaluation values because they are determined to be defective products, and it is desirable to set their selection probabilities according to their margins.
  • It is desirable to set a very high selection probability for erroneously determined learning images (307, a non-defective product erroneously determined to be defective, and 308, a defective product erroneously determined to be non-defective) in order to improve their judgment results. It is desirable to set a low selection probability for image groups with a high degree of similarity (309 and 310), which are likely to be plotted close together in the feature space. However, it is desirable to set a high selection probability for a learning image 311 that is erroneously determined, even if it belongs to an image group with a high degree of similarity.
  • In this way, the selection probability needs to be determined by comprehensively considering the correctness 203 of the estimated evaluation value, the degree of margin M(f_i) (204), the degree of similarity between images, and the like. Determining the sub-learning image group {f'_j(k)} (T) in this way not only reduces the number of learning images in each iterative learning but also speeds up optimization by prioritizing the learning images to be learned, so good evaluation performance may be obtained with a small number of iterations.
  • This embodiment is characterized by having a GUI for accepting the designation of a reduction rate R_k from the number of images Nf of the learning image group {f_i} (R) to the number of images Nf'_k of the sub-learning image group {f'_j(k)} (T), and by the reduction rate R_k being a function of the iterative learning count k.
  • The reduction rate R_k can be specified by, for example, (Nf - Nf'_k)/Nf*100. In this case, the larger the value, the more the training images are reduced and the shorter the learning time. The reduction rate R_k can also be changed according to the iterative learning count k.
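As a quick numeric check of the formula above:

```python
def reduction_rate(nf, nf_k):
    """Reduction rate R_k = (Nf - Nf'_k) / Nf * 100, in percent."""
    return (nf - nf_k) / nf * 100

# Keeping 250 of 1000 learning images in an iteration gives a 75% reduction.
print(reduction_rate(1000, 250))  # → 75.0
```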
  • the internal parameters are not fixed and extensive parameter search is required, so it is desirable to set the reduction rate R_k small.
  • the reduction rate R_k can be set large.
  • the specified value may be the number of images Nf'_k of the sub-learning image group ⁇ f'_j(k) ⁇ (T), or the estimated learning time, instead of the reduction rate R_k.
  • In that case, the number of images Nf'_k of the sub-learning image group {f'_j(k)} (T) is determined so that learning is completed within the specified estimated learning time, based on the number of learning iterations.
  • FIG. 4 is a diagram showing an example of a selection condition input screen 400.
  • the selection condition input screen 400 is a GUI (Graphical User Interface) for the user to specify the learning method of the evaluation engine 111 .
  • the designation of the reduction rate R_k becomes effective.
  • the method of giving the reduction rate R_k can be selected by radio buttons 402-404.
  • If radio button 402 is selected, the reduction rate R_k is the constant value specified in box 406, regardless of the iteration count k, as shown by straight line 405.
  • If radio button 403 is selected, the reduction rate R_k is specified by polygonal line 407. In the illustrated example, the reduction rate R_k increases until the epoch (iterative learning count) specified in box 408, and after that it remains at the constant value specified in box 409. If radio button 404 is selected, the reduction rate R_k follows the curve 410 specified in box 411. These are examples of how to specify the reduction rate R_k, and any shape can be specified.
  • the image analysis system 1 accepts the specification of the reduction rate R_k setting method from the user, so that the evaluation engine 111 can be learned more efficiently according to needs.
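The three ways of giving R_k on this screen (constant line 405/box 406, polyline 407 with boxes 408-409, curve 410 with box 411) could be modeled as simple schedule functions of the iteration count k; the exact curve form below is an assumption, since the patent leaves the shape free:

```python
import math

def constant_rate(k, value):
    """Line 405: R_k is a constant value (box 406) regardless of k."""
    return value

def polyline_rate(k, break_epoch, final_value):
    """Polyline 407: R_k rises linearly until break_epoch (box 408),
    then stays at the constant final_value (box 409)."""
    if k >= break_epoch:
        return final_value
    return final_value * k / break_epoch

def curve_rate(k, max_value, tau=10.0):
    """Curve 410: R_k approaches max_value smoothly (assumed
    exponential saturation; `tau` controls how fast it saturates)."""
    return max_value * (1.0 - math.exp(-k / tau))
```

All three schedules reduce few images early (when the parameters are still unsettled) and more later, which matches the text's guidance on when a small or large R_k is desirable.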
  • a method for calculating the selection probability P_k(f_i) (201) of the learning image f_i can be specified using the GUI shown in FIG.
  • the selection probability P_k(f_i) (201) can be calculated using the correctness 203 of the estimated evaluation value, the degree of margin M(f_i) (204), the similarity between the learning images, and the like.
  • the selection probability P_k(f_i) (201) can be given by a function having these judgment materials as arguments.
  • Arguments to be considered in calculating the selection probability P_k(f_i) (201) can be designated by check boxes 412-414. Boxes 415-417 can be used to specify the ratio (weight) of each argument to be considered in the calculation of the selection probability P_k(f_i) (201).
  • For example, when the values of boxes 415 to 417 are specified as 0.2, 0.6, and 0.2, respectively, the selection probability P_k(f_i) (201) is determined with emphasis on the margin M(f_i) (204). A rule can also be set for the selection of the learning image f_i, and the sub-learning image group {f'_j(k)} (T) is determined together with the selection probability P_k(f_i) (201). For example, by checking check box 418, it is possible to enable the rule that learning images that were incorrect in the previous iterative learning are always selected in the next iterative learning.
  • A threshold can be specified in box 421. Also, by checking check box 420, it is possible to enable the rule that learning images with a degree of similarity equal to or higher than a threshold are always thinned out; that threshold can be specified in box 422.
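The check-box rules (418: always select previously incorrect images; 420: always thin out images whose similarity meets a threshold) can override the probabilistic choice. A sketch under those assumptions, with the exception for erroneously determined images (311) taking priority over the similarity rule:

```python
import random

def apply_selection_rules(image_ids, selection_prob, was_incorrect,
                          similarity, sim_threshold, rng=None):
    """Combine P_k(f_i) with hard rules: images incorrect in the previous
    iteration are always selected (rule 418); otherwise, images whose
    similarity is at or above the threshold are always thinned out
    (rule 420); the rest are drawn with probability P_k(f_i)."""
    rng = rng or random.Random(0)
    selected = []
    for i in image_ids:
        if was_incorrect[i]:
            selected.append(i)               # rule 418: must select
        elif similarity[i] >= sim_threshold:
            continue                         # rule 420: always thin out
        elif rng.random() < selection_prob[i]:
            selected.append(i)               # probabilistic selection
    return selected
```

Exclusion here is temporary: an image thinned out in this iteration is reconsidered from scratch in the next one.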
  • the arguments and rules described here are examples, and other arguments and rules can be set.
  • the user can use various means to specify what kind of learning image should be preferentially learned in each iterative learning.
  • By incorporating the user's domain knowledge of the evaluation object P, more appropriate learning becomes possible.
  • FIG. 5 is a diagram showing an example of the hardware configuration of the image analysis system 1.
  • the image analysis system 1 has the aforementioned imaging device 106 and computer 100 .
  • the imaging device 106 is as described above.
  • the computer 100 is a component for processing the image evaluation method in this embodiment, and includes a processor 101 , a memory resource 102 , a GUI device 103 , an input device 104 and a communication interface 105 .
  • The processor 101 is a processing device such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit), but is not limited thereto, and may be any device capable of executing the above-described image analysis method.
  • The processor 101 may be single-core or multi-core, or may be a circuit such as an FPGA (Field-Programmable Gate Array), CPLD (Complex Programmable Logic Device), or ASIC (Application-Specific Integrated Circuit).
  • The storage resource 102 is a storage device such as RAM (Random Access Memory), ROM (Read Only Memory), HDD (Hard Disk Drive), or non-volatile memory (flash memory, etc.), and acts as an area into which programs and data are temporarily read.
  • the storage resource 102 may store a program (referred to as an image analysis program) that causes the processor 101 to execute the image analysis method described in the above embodiments.
  • the GUI device 103 is a device that displays a GUI, such as a display such as an OLCD (Organic Liquid Crystal Display) or a projector, but is not limited to this example as long as it can display a GUI.
  • the input device 104 is a device that receives an input operation from a user, and is, for example, a keyboard, mouse, touch panel, or the like.
  • the input device 104 is not particularly limited as long as it is a component capable of receiving operations from the user, and the input device 104 and the GUI device 103 may be integrated.
  • The communication interface 105 is an interface, such as USB, Ethernet, or Wi-Fi, that mediates the input and output of information. Note that the communication interface 105 is not limited to the examples shown here, as long as it is an interface that can directly receive an image from the imaging device 106 or that allows the user to transmit the image to the computer 100.
  • A portable non-volatile storage medium (for example, flash memory, DVD, CD-ROM, or Blu-ray disc) storing the image can also be connected to the communication interface 105, and the image can be stored in the computer 100.
  • the image analysis program described above can be distributed to the computer 100 by connecting a portable nonvolatile storage medium storing the image analysis program to the communication interface 105 .
  • the image analysis program can be distributed to computer 100 by a program distribution server.
  • The program distribution server has a storage resource storing the image analysis program, a processor that performs distribution processing for distributing the image analysis program, and a communication interface device capable of communicating with the communication interface 105 of the computer 100.
  • The various functions of the image analysis program delivered or distributed to the computer 100 are realized by the processor 101.
  • the image analysis system 1 has a learning phase 110 in which the evaluation engine 111 learns, and an evaluation phase 120 in which the evaluation image S is evaluated using the evaluation engine 111 learned in the learning phase 110.
  • The processor 101 executing the learning phase 110 and the processor 101 executing the evaluation phase 120 may be the same or different. If they are different, the processor 101 executing the learning phase 110 can hand over the internal parameters 113 of the evaluation engine 111 to the processor 101 executing the evaluation phase 120.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of the present invention is to provide a technology for efficiently performing learning for image analysis that uses machine learning. The image analysis system according to the invention comprises at least one processor and a memory resource, and is characterized in that the processor executes: a learning image acquisition step in which a learning image group {f_i} (i=1, ..., Nf, Nf: number of learning images) is acquired by imaging an evaluation target for learning; a learning step in which an evaluation engine is trained using the learning image group {f_i}; an evaluation image acquisition step in which an evaluation image is acquired by imaging an evaluation target; and an evaluation step in which the evaluation image is input to the trained evaluation engine and an estimated evaluation value is output. In the learning step, a sub-learning image group {f'_j(k)} (j=1, ..., Nf'_k, Nf'_k: number of sub-learning images, {f'_j(k)}⊂{f_i}, k: iterative learning count), which is a partial collection of the learning image group {f_i} for each iterative learning instance, is determined by an image selection engine, and the sub-learning image group {f'_j(k)} is used to perform the k-th iterative learning of the evaluation engine.
PCT/JP2022/036388 2022-03-04 2022-09-29 Image analysis system, image analysis method, and program WO2023166773A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-033635 2022-03-04
JP2022033635A JP2023128941A (ja) 2022-03-04 2022-03-04 画像分析システム、画像分析方法、及びプログラム

Publications (1)

Publication Number Publication Date
WO2023166773A1 true WO2023166773A1 (fr) 2023-09-07

Family

ID=87883538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/036388 WO2023166773A1 (fr) 2022-03-04 2022-09-29 Image analysis system, image analysis method, and program

Country Status (2)

Country Link
JP (1) JP2023128941A (fr)
WO (1) WO2023166773A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016103094A (ja) * 2014-11-27 2016-06-02 株式会社豊田自動織機 画像処理方法、画像処理装置、および画像処理プログラム
JP2021131835A (ja) * 2020-02-18 2021-09-09 東洋製罐グループホールディングス株式会社 画像処理システム、及び画像処理プログラム


Also Published As

Publication number Publication date
JP2023128941A (ja) 2023-09-14

Similar Documents

Publication Publication Date Title
CN107408209B (zh) 用于在半导体工艺中进行缺陷分类的系统和方法
US10818000B2 (en) Iterative defect filtering process
JP7074460B2 (ja) 画像検査装置および方法
TWI748122B (zh) 用於對多個項進行分類的系統、方法和電腦程式產品
KR102618355B1 (ko) 딥 러닝을 기반으로 웨이퍼 결함 이미지를 이용하여 웨이퍼의 결함을 분류하는 방법 및 시스템
JP2015087903A (ja) 情報処理装置及び情報処理方法
JP6943291B2 (ja) 学習装置、学習方法、及び、プログラム
US11694327B2 (en) Cross layer common-unique analysis for nuisance filtering
US20210012211A1 (en) Techniques for visualizing the operation of neural networks
Janik et al. Interpretability of a deep learning model in the application of cardiac MRI segmentation with an ACDC challenge dataset
US11580425B2 (en) Managing defects in a model training pipeline using synthetic data sets associated with defect types
KR20220047228A (ko) 이미지 분류 모델 생성 방법 및 장치, 전자 기기, 저장 매체, 컴퓨터 프로그램, 노변 장치 및 클라우드 제어 플랫폼
Choi et al. Deep learning based defect inspection using the intersection over minimum between search and abnormal regions
JP2021143884A (ja) 検査装置、検査方法、プログラム、学習装置、学習方法、および学習済みデータセット
WO2022121544A1 (fr) Normalisation de données d'image oct
de la Rosa et al. Defect detection and classification on semiconductor wafers using two-stage geometric transformation-based data augmentation and SqueezeNet lightweight convolutional neural network
CN114528913A (zh) 基于信任和一致性的模型迁移方法、装置、设备及介质
Du Nguyen et al. Crack segmentation of imbalanced data: The role of loss functions
US20210343000A1 (en) Automatic selection of algorithmic modules for examination of a specimen
WO2023166773A1 (fr) Système d'analyse d'image, procédé d'analyse d'image et programme
CN115018884A (zh) 基于多策略融合树的可见光红外视觉跟踪方法
KR102178238B1 (ko) 회전 커널을 이용한 머신러닝 기반 결함 분류 장치 및 방법
Vilalta et al. Studying the impact of the full-network embedding on multimodal pipelines
Kavitha et al. Explainable AI for Detecting Fissures on Concrete Surfaces Using Transfer Learning
WO2023166776A1 (fr) Système d'analyse d'apparence, procédé d'analyse d'apparence et programme

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22929917

Country of ref document: EP

Kind code of ref document: A1