WO2024044915A1 - Image comparison method and apparatus for error detection, and computer device - Google Patents


Publication number
WO2024044915A1
Authority
WO
WIPO (PCT)
Prior art keywords
illumination, invariant, features, image, invariant features
Prior art date
Application number
PCT/CN2022/115623
Other languages
French (fr)
Chinese (zh)
Inventor
刘宁
李岩
Original Assignee
西门子股份公司
西门子(中国)有限公司
Application filed by 西门子股份公司, 西门子(中国)有限公司 filed Critical 西门子股份公司
Priority to PCT/CN2022/115623 priority Critical patent/WO2024044915A1/en
Publication of WO2024044915A1 publication Critical patent/WO2024044915A1/en

Definitions

  • The present application relates to the field of image processing technology and, specifically, to a method, apparatus, computer device, storage medium, and computer program product for image comparison for error detection.
  • Printed circuit board (PCB) assembly defect detection is a key technology in the field of quality control. Because components on circuit boards have changeable appearances and dense interfaces, various assembly defects are prone to occur during manual assembly, so circuit board assembly defect detection has become particularly important.
  • This application proposes an image comparison method, including: using the recognition network of a variational autoencoder to decouple illumination-invariant features and illumination features from a target image and a standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified; using the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features; comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and determining, according to whether the similarity is less than a threshold, whether the component to be verified associated with the target image has an error.
  • The recognition network is obtained through deep learning and is an approximation of the true posterior distribution.
  • Using the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features includes:
  • selecting the illumination-invariant features from the illumination-invariant features and the illumination features by using a joint distribution to estimate the mutual information gap between latent variables and ground-truth factors.
  • comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image includes:
  • the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image is compared by comparing cosine distances between the illumination-invariant features.
  • This application also proposes an image comparison apparatus, including: a decoupling module, configured to use the recognition network of a variational autoencoder to decouple illumination-invariant features and illumination features from the target image and the standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified; a selection module, configured to use the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features; a comparison module, configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and a determining module, configured to determine, in response to the similarity being less than a threshold, that the position of the component to be verified associated with the target image is incorrect.
  • This application also provides a computer device, including a memory and a processor.
  • The memory stores a computer program.
  • When the processor executes the computer program, the above method is implemented.
  • This application also provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the above method is implemented.
  • This application also provides a computer program product, which includes computer instructions, and the computer instructions instruct a computing device to perform the above method.
  • FIG. 1 is a flow chart of an image comparison method for error detection according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an image comparison device for error detection according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a computer device for image comparison of error detection according to an embodiment of the present application.
  • the circuit board assembly process includes the circuit design stage, the circuit board assembly stage, the assembly defect detection stage, and the assembly defect correction stage.
  • In the circuit design stage, electronic design automation (EDA) software can be used to obtain a component layout instance.
  • preprocessing of the printed circuit board and assembly of components on the circuit board are performed based on the component layout instance.
  • In the preprocessing of the printed circuit board, operations such as the drilling process and the etching process are performed based on the component layout instance.
  • Operations performed during the assembly process include soldering, placement of components, and so on.
  • Assembly defect detection can be performed on the assembled circuit board, covering component angle defects, position defects, missing-component defects, flip defects, and so on.
  • the components to be corrected can be marked, and then returned to the circuit board assembly stage for reassembly.
  • the cycle from the circuit board assembly stage to the assembly defect correction stage can be one or more times.
  • When assembly defect detection is performed multiple times, the assembly accuracy of the printed circuit board can be improved. More specifically, in the circuit board assembly stage, component features can be compared between a pre-made circuit board template and the on-site circuit board to identify defective components.
  • the assembly defect detection stage includes the image preprocessing stage and the on-site inspection stage. Part of the data generated in the image preprocessing stage is needed in the on-site detection stage.
  • target detection is performed on the circuit board image (an example of the circuit board image to be detected) to obtain the component area.
  • the circuit board image can be input into the component detection module and the feature detection module, and the detected component area can be output.
  • the component area is input into the affine transformation module for affine transformation to obtain the registered component area, that is, the component area after affine transformation.
  • Target detection is performed before the affine transformation to preserve detection accuracy. In other words, the component area is deformed after the affine transformation, while the samples used to train the target detection model are usually based on real (undeformed) data; detecting first therefore ensures the global detection accuracy of the component area.
  • The affine-transformed component area is more conducive to identifying local defects. That is, when the consistency detection module performs consistency detection, the affine-transformed component area is compared with the component areas of the template image to determine whether the components corresponding to the affine-transformed component areas have assembly defects.
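  • As a sketch of the registration step above: the component area detected in the field image can be mapped into the template's coordinate frame with a 2x3 affine matrix before comparison. The matrix parameters and box corners below are hypothetical values chosen only for illustration; in practice the transform would be estimated from matched feature points (e.g. with an image library's affine estimation routine).

```python
import numpy as np

def make_affine(angle_deg, scale, tx, ty):
    """Build a 2x3 affine matrix: rotation and scale followed by translation."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a) * scale, np.sin(a) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]

# Corners of a detected component bounding box (hypothetical values).
corners = np.array([[10, 10], [60, 10], [60, 30], [10, 30]])

# Registration transform between the field image and the template
# (here: a 5-degree rotation and a small shift, chosen for illustration).
M = make_affine(5.0, 1.0, 2.0, -1.0)
registered = apply_affine(M, corners)
```

The registered corners, rather than the raw detections, are what the consistency detection module would compare against the component areas of the template image.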
  • the component area of the template image can be generated from the image preprocessing stage.
  • the component area can be characterized by structural description and image features.
  • The structural description can be obtained by inputting the template image into the component detection module and the feature detection module; further processing based on the structural description then yields the image features of the component area.
  • The consistency detection module can detect at least one of angle defects, missing defects, position defects, and flip defects of components.
  • This application improves the consistency detection method.
  • FIG. 1 is a flowchart of an image comparison method for error detection according to an embodiment of the present application.
  • Method 100 begins with step 101 .
  • In step 101, the recognition network of the variational autoencoder is used to decouple the illumination-invariant features and the illumination features from the target image and the standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified.
  • Illumination-invariant features refer to features that do not change under different lighting conditions, such as geometric features.
  • This method uses a probabilistic model, where the probabilistic model is used for latent representation learning.
  • The model used in this method includes an encoder and a decoder, where the encoder can be called a recognition network and the decoder can be called a generation network.
  • the role of the encoder is to map the image into a low-dimensional latent space.
  • The recognition network q_φ(z|x) is an approximation of the true posterior distribution p_θ(z|x), where x is the observed image.
  • z is the latent variable we wish to learn.
  • Latent variables are opposite to observed variables and refer to unobservable random variables. Latent variables can be inferred from observed data using mathematical models.
  • Formula 3 is the loss function (the variational bound): L(θ, φ; x) = −D_KL(q_φ(z|x) ‖ p_θ(z)) + E_{q_φ(z|x)}[log p_θ(x|z)].
  • The loss function has two terms, of which the first is the KL divergence. To optimize the second (reconstruction) term, a fully differentiable estimator is used, so that the parameters θ and φ can be optimized jointly. Here, a loss function refers to a function that maps an event to a real number representing the cost associated with that event.
  • A coefficient β is added to the standard variational bound to simulate the redundancy reduction that allows a decoupled latent space to be learned. Although variational autoencoders can achieve competitive decoupling, more complex data sets require stronger constraints to achieve interpretable feature separation.
  • A simple penalty term in the loss function can achieve highly decoupled features.
  • β adjusts the learning constraint imposed on the model, where the constraint places a limit on the capacity of the latent information channel.
  • β > 1, and this parameter should be tuned during training.
  • β can be estimated using a decoupling (disentanglement) metric.
  • the architecture of the encoder and decoder can change.
  • a convolutional neural network and a fully connected network can be used as the encoder, and a convolutional neural network can be used as the decoder.
  • the latent variable z is sampled from a unit Gaussian prior and then fed back through the decoder.
  • the generation network will generate the image.
  • a representation can be learned in which illumination latent features are sensitive to illumination changes while being insensitive to geometric features. Images can be collected under various lighting conditions.
  • a synthetic lighting mixer can be used to synthesize real industrial images.
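  • A minimal sketch of the loss described above, assuming the standard β-weighted variational bound with a Gaussian approximate posterior and a squared-error reconstruction term. Plain NumPy is used for readability; a real implementation would live in an autodiff framework so that the reparameterized sample keeps gradients flowing to both the encoder parameters φ and the decoder parameters θ. All shapes and numbers are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I); this mirrors the
    reparameterization trick that keeps the sample differentiable in an
    autodiff framework (here it is just a NumPy draw)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """beta-weighted variational bound: squared reconstruction error plus
    beta * KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    recon = np.sum((x - x_recon) ** 2)                           # reconstruction term
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))  # KL term
    return recon + beta * kl

# Toy numbers: a 4-dimensional "image" and a 2-dimensional latent code.
x = np.array([0.2, 0.8, 0.5, 0.1])
mu, log_var = np.array([0.1, -0.3]), np.array([-1.0, -2.0])
z = reparameterize(mu, log_var)
x_recon = np.clip(x + 0.05, 0.0, 1.0)   # stand-in for a decoder output
loss = beta_vae_loss(x, x_recon, mu, log_var, beta=4.0)
```

Setting beta = 1 recovers the standard variational autoencoder bound; beta > 1, as stated above, tightens the constraint on the latent information channel.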
  • The method then proceeds to step 102.
  • In step 102, it is determined which features in z are illumination-invariant features.
  • The mutual information gap (MIG) can be used to select features. The joint distribution defined by the model can be used to estimate the empirical mutual information gap between a latent variable z_j and a ground-truth factor v_k.
  • axis alignment can be achieved by measuring the difference between the two latent variables with the highest mutual information.
  • The value of the mutual information gap ranges between 0 and 1.
  • MIG is estimated by traversing the data set.
  • z0, z1, and z2 have the highest scores and are therefore illumination-invariant features.
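  • A sketch of how such scores could be computed for one ground-truth factor, assuming the latents and factors have been discretized. The full metric averages this entropy-normalized gap over all ground-truth factors, which is what bounds it between 0 and 1. The data below is a made-up toy example in which latent z0 copies the illumination label exactly and z1 is unrelated, so the gap is high.

```python
import numpy as np

def mutual_info(a, b):
    """Empirical mutual information (in nats) between two discrete sequences."""
    a, b = np.asarray(a), np.asarray(b)
    mi = 0.0
    for va in np.unique(a):
        for vb in np.unique(b):
            p_ab = np.mean((a == va) & (b == vb))
            if p_ab > 0:
                p_a, p_b = np.mean(a == va), np.mean(b == vb)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def entropy(v):
    """Empirical entropy (in nats) of a discrete sequence."""
    v = np.asarray(v)
    ps = np.array([np.mean(v == u) for u in np.unique(v)])
    return -np.sum(ps * np.log(ps))

def mig(latents, factor):
    """Mutual information gap for one ground-truth factor: the normalized
    difference between the two latent dimensions with the highest MI."""
    scores = sorted((mutual_info(z, factor) for z in latents), reverse=True)
    return (scores[0] - scores[1]) / entropy(factor)

# Hypothetical discretized data traversing a small data set.
v  = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # ground-truth illumination factor
z0 = v.copy()                            # latent that captures the factor
z1 = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # latent unrelated to the factor
gap = mig([z0, z1], v)
```

The normalization by the factor's entropy and the difference between the top two scores implement the axis alignment described above: a high gap means a single latent dimension, not several, carries the factor.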
  • In step 103, the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image is compared.
  • the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image is compared by comparing cosine distances between the illumination-invariant features.
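  • Steps 103 and 104 can be sketched as follows. The feature vectors and the threshold value 0.9 are illustrative assumptions, not values taken from this application.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def has_assembly_error(target_feat, standard_feat, threshold=0.9):
    """Flag the component as erroneous when the similarity between its
    illumination-invariant features and the template's falls below threshold."""
    return cosine_similarity(target_feat, standard_feat) < threshold

# Illustrative illumination-invariant feature vectors (values are made up).
standard = np.array([0.9, 0.1, 0.4])
good     = np.array([0.88, 0.12, 0.41])  # same component, slight variation
rotated  = np.array([0.1, 0.9, -0.4])    # geometrically different component
```

Because the compared features are illumination-invariant, a correctly placed component should score close to 1 even when the target image was captured under different lighting than the standard image.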
  • The method then proceeds to step 104.
  • In step 104, in response to the similarity being less than the threshold, it is determined that the component to be verified associated with the target image has an error.
  • Although the steps in the flowchart of FIG. 1 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated. Unless otherwise specified herein, there is no strict order restriction on the execution of these steps, and they can be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages. These sub-steps or stages are not necessarily executed at the same time, but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
  • Figure 2 provides an apparatus 200 for image comparison for error detection.
  • The apparatus 200 includes: a decoupling module 201, configured to use the recognition network of the variational autoencoder to decouple illumination-invariant features and illumination features from the target image and the standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified; a selection module 202, configured to use the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features; a comparison module 203, configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and a determining module 204, configured to determine, in response to the similarity being less than a threshold, that the position of the component to be verified associated with the target image is incorrect.
  • The recognition network is an approximation of the true posterior distribution.
  • The decoupling module of the apparatus is further configured to obtain the recognition network through deep learning, in the manner described for the method above.
  • The selection module of the apparatus is further configured to use a joint distribution to estimate the mutual information gap between the latent variables and the ground-truth factors, and to select the illumination-invariant features from the illumination-invariant features and the illumination features.
  • The comparison module of the apparatus is further configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image by comparing cosine distances between the illumination-invariant features.
  • the device may contain more or fewer modules to implement the described functionality.
  • at least one module in Figure 2 may be further divided into a plurality of different sub-modules, each sub-module being configured to perform at least a portion of the operations described herein in connection with the corresponding module.
  • the apparatus 200 may also include additional modules for performing other operations that have been described in the specification.
  • the exemplary apparatus 200 may be implemented in software, hardware, firmware, or any combination thereof.
  • Figure 3 provides a computer device for image comparison for error detection.
  • The computer device 300 may include a processor 302 that executes a computer program stored in a memory 304. The computer program, when executed by the processor, implements the image comparison method for error detection.
  • FIG. 3 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the computer devices to which the present application is applied.
  • A specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
  • Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, etc.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
  • The present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above steps are implemented.
  • Some implementations of the disclosure may include articles of manufacture.
  • The article of manufacture may include a storage medium for storing logic. Examples of storage media may include one or more types of computer-readable storage media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writable or rewritable memory, and so on.
  • Examples of logic may include various software units, such as software components, programs, applications, computer programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application programming interfaces (APIs), instruction sets, computational code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof.
  • an article of manufacture may store executable computer program instructions that, when executed by a processor, cause the processor to perform the methods and/or operations described herein.
  • Executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, etc.
  • Executable computer program instructions may be implemented in accordance with a predefined computer language, manner, or syntax for commanding a computer to perform specific functions.
  • the instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
  • Embodiments of the present application also provide a computer program product, including computer instructions, which instruct the computing device to perform any corresponding operation in the multiple method embodiments mentioned above.
  • The above methods according to the embodiments of the present application can be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, a floppy disk, a hard disk, or a magneto-optical disk), or as computer code that is downloaded over a network, originally stored in a remote recording medium or a non-transitory machine-readable medium, and then stored in a local recording medium, so that the methods described here can be implemented using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware.
  • A computer, processor, microprocessor controller, or programmable hardware includes storage components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the methods described herein are implemented. Furthermore, when a general-purpose computer accesses code for implementing the methods illustrated herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing the methods illustrated herein.

Landscapes

  • Image Analysis (AREA)

Abstract

Disclosed in the present application are an image comparison method and apparatus for error detection, and a computer device, a storage medium and a computer program product. The image comparison method disclosed in the present application comprises: by using a recognition network based on a variational autoencoder, performing decoupling on a target image and a standard image, so as to obtain illumination-invariant features and illumination features, wherein the target image and the standard image are associated with relative positions of elements to be checked, and the illumination-invariant features are associated with geometric features of said elements; by using a mutual information gap, selecting the illumination-invariant features from the illumination-invariant features and the illumination features; comparing the illumination-invariant features of the target image and the illumination-invariant features of the standard image, so as to determine the similarity therebetween; and determining, according to whether the similarity is less than a threshold value, whether there is an error occurring in said elements, which are associated with the target image.

Description

Method, apparatus, and computer device for image comparison for error detection

Technical Field

The present application relates to the field of image processing technology and, specifically, to a method, apparatus, computer device, storage medium, and computer program product for image comparison for error detection.

Background Art

Printed circuit board assembly defect detection is a key technology in the field of quality control. Because components on circuit boards have changeable appearances and dense interfaces, various assembly defects are prone to occur during manual assembly, so circuit board assembly defect detection has become particularly important.

Currently, components assembled on PCBs need to be inspected. Typically, the detection environment changes with lighting conditions, and existing detection methods cannot cope well with this problem. For example, under certain lighting conditions, components that were installed correctly can be misidentified.

Summary of the Invention

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify any key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Based on this, this application proposes an image comparison method, including: using the recognition network of a variational autoencoder to decouple illumination-invariant features and illumination features from a target image and a standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified; using the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features; comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and determining, according to whether the similarity is less than a threshold, whether the component to be verified associated with the target image has an error.

In this way, images can be compared accurately under different lighting conditions, and errors in the images can thereby be identified.

Optionally, in an example of the above aspect, the recognition network is obtained through deep learning and is an approximation of the true posterior distribution. During the deep learning of the recognition network:

the probability of producing a real image is maximized, while the distance between the true posterior distribution and the estimated posterior distribution is kept smaller than a threshold.

Optionally, in an example of the above aspect, during the deep learning of the recognition network:

the loss function is optimized jointly using a fully differentiable estimator.

Optionally, in an example of the above aspect, using the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features includes:

selecting the illumination-invariant features from the illumination-invariant features and the illumination features by using a joint distribution to estimate the mutual information gap between latent variables and ground-truth factors.

Optionally, in an example of the above aspect, comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image includes:

comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image by comparing cosine distances between the illumination-invariant features.

This application also proposes an image comparison apparatus, including: a decoupling module, configured to use the recognition network of a variational autoencoder to decouple illumination-invariant features and illumination features from the target image and the standard image, where the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with the geometric features of the component to be verified; a selection module, configured to use the mutual information gap to select the illumination-invariant features from the illumination-invariant features and the illumination features; a comparison module, configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and a determining module, configured to determine, in response to the similarity being less than a threshold, that the position of the component to be verified associated with the target image is incorrect.

This application also provides a computer device, including a memory and a processor, where the memory stores a computer program, and when the processor executes the computer program, the above method is implemented.

This application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the above method is implemented.

This application also provides a computer program product, including computer instructions, where the computer instructions instruct a computing device to perform the above method.
Brief Description of the Drawings

Implementations of the present disclosure are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which like reference numbers refer to the same or similar parts.

FIG. 1 is a flowchart of an image comparison method for error detection according to an embodiment of the present application.

FIG. 2 is a schematic diagram of an image comparison apparatus for error detection according to an embodiment of the present application.

FIG. 3 is a schematic diagram of a computer device for image comparison for error detection according to an embodiment of the present application.

The reference signs are as follows:

S101-S104: steps
200: image comparison apparatus
201: decoupling module
202: selection module
203: comparison module
204: determining module
300: computer device
302: processor
304: memory
具体实施方式Detailed ways
In the following description, numerous specific details are set forth for purposes of explanation. It will be understood, however, that implementations of the invention may be practiced without these specific details. In other instances, well-known circuits, structures, and techniques are not shown in detail so as not to obscure the description.
References throughout this specification to "one implementation", "an implementation", "an example implementation", "some implementations", "various implementations", and the like mean that the described implementation of the invention may include particular features, structures, or characteristics; this does not mean, however, that every implementation must include those particular features, structures, or characteristics. Furthermore, some implementations may have some, all, or none of the features described for other implementations.
In the following description, the terms "coupled" and "connected" and their derivatives may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular implementations, "connected" is used to indicate that two or more components are in direct physical or electrical contact with each other, while "coupled" is used to indicate that two or more components cooperate or interact with each other but may or may not be in direct physical or electrical contact.
The circuit board assembly process includes a circuit design stage, a circuit board assembly stage, an assembly defect detection stage, and an assembly defect correction stage.
Specifically, in the circuit design stage, software such as electronic design automation (EDA) tools can be used to perform logical and physical simulation, yielding a component layout instance. Then, in the circuit board assembly stage, preprocessing of the printed circuit board and assembly of the components onto the board are performed based on the component layout instance; the preprocessing of the printed circuit board includes processes such as drilling and etching carried out according to the layout instance. Operations performed during assembly include soldering, component placement, and the like. After the assembly process, that is, in the assembly defect detection stage, assembly defect detection can be performed on the assembled circuit board, covering angle defects, position defects, missing-component defects, flip defects, and so on. Further, in the assembly defect correction stage, the components to be corrected can be marked, after which the process returns to the circuit board assembly stage for reassembly. It should be understood that the cycle from the circuit board assembly stage to the assembly defect correction stage may run one or more times; performing assembly defect detection multiple times can improve the assembly accuracy of the printed circuit board. More specifically, during the circuit board assembly process, component features of a pre-made circuit board template can be compared with those of the on-site circuit board to identify defective components.
The assembly defect detection stage includes an image preprocessing stage and an on-site detection stage. Part of the data generated in the image preprocessing stage is used in the on-site detection stage.
In this example, target detection is performed on a circuit board image (an example of a circuit board image to be inspected) to obtain component regions. Specifically, the circuit board image can be input into a component detection module and a feature detection module, which output the detected component regions.
The component regions are then input into an affine transformation module for affine transformation, yielding registered component regions, that is, component regions after affine transformation. It should be understood that, on the one hand, performing the affine transformation after target detection preserves detection accuracy: the component regions are deformed by the affine transformation, whereas the samples used to train the target detection model are usually based on real data, so detecting first ensures the accuracy of target detection, that is, the global detection accuracy of the component regions. On the other hand, the affine-transformed component regions are better suited to identifying local defects: when consistency detection is performed in the consistency detection module, an affine-transformed component region is compared with the corresponding component region of the template image to determine whether the component corresponding to that region has an assembly defect.
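As an illustration of the registration step described above, the sketch below applies a 2x3 affine matrix to the corner points of a detected component region using NumPy. The matrix values and box coordinates are hypothetical; a real system would estimate the transform from matched features between the template image and the on-site image.

```python
import numpy as np

def warp_points(points, affine):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coordinates
    return pts @ affine.T

# Hypothetical transform: translate a component's bounding box by (5, -3)
affine = np.array([[1.0, 0.0, 5.0],
                   [0.0, 1.0, -3.0]])
box = np.array([[10.0, 10.0], [30.0, 10.0], [30.0, 20.0], [10.0, 20.0]])
registered = warp_points(box, affine)
```

The same matrix multiplication extends to rotation and scaling by changing the 2x2 upper-left block of the affine matrix.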
Further, the component regions of the template image can be generated in the image preprocessing stage. A component region can be characterized by a structural description and image features; the structural description can be obtained by inputting the template image into the component detection module and the feature detection module, and the image features of the component region are then obtained by processing based on the structural description.
Further, the consistency detection module can detect at least one of an angle defect, a missing-component defect, a position defect, and a flip defect of a component.
The present application improves on this consistency detection method.
FIG. 1 is a flowchart of an image comparison method for error detection according to an embodiment of the present application. The method 100 begins at step 101. In step 101, a recognition network of a variational autoencoder is used to decouple illumination-invariant features and illumination features from a target image and a standard image, wherein the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with geometric features of the component to be verified.
The present application proposes an unsupervised deep learning method that extracts only illumination-invariant features. Here, illumination-invariant features are features that are observed to remain unchanged under different lighting conditions, such as geometric features.
The method uses a probabilistic model for latent representation learning. The model comprises an encoder and a decoder, where the encoder may be called the recognition network and the decoder may be called the generation network. The role of the encoder is to map an image into a low-dimensional latent space.
The recognition network is an approximation of the true posterior distribution p_θ(z|x):

    q_φ(z|x) ≈ p_θ(z|x)    (1)

where z is a set of latent variables and x is a set of input-image features. Here, z is the set of variables to be learned.
Latent variables, as opposed to observed variables, are random variables that cannot be observed directly. Latent variables can be inferred from observed data using a mathematical model.
During training, the goal is to maximize the probability of generating the real image while keeping the distance between the true posterior distribution and the estimated posterior distribution smaller than a threshold ε:

    max_{φ,θ} E_{q_φ(z|x)}[log p_θ(x|z)]    (2)

    subject to D_KL(q_φ(z|x) ‖ p(z)) < ε
In an embodiment, the above formulation can be rewritten as follows, with a parameter β added:

    L(θ, φ; x, β) = β · D_KL(q_φ(z|x) ‖ p(z)) − E_{q_φ(z|x)}[log p_θ(x|z)]    (3)

where the expectation is approximated by Monte Carlo sampling,

    E_{q_φ(z|x)}[log p_θ(x|z)] ≈ (1/L) Σ_{i=1}^{L} log p_θ(x|z^(i))

with z^(i) = m^(i) + (σ^(i) ⊙ e^(i)) and e^(i) ~ N(0, I).
Formula (3) is the loss function, which has two terms; the first term is the KL divergence. To optimize the second term, a fully differentiable estimator (the reparameterization trick) is used, so that the parameters φ and θ can be jointly optimized. Here, a loss function is a function that maps an event onto a real number representing the cost associated with that event.
In one implementation, β is added to the standard variational bound to model the redundancy reduction that allows a decoupled latent space to be learned. Although variational autoencoders can achieve competitive decoupling on their own, more complex data sets require stronger constraints to achieve interpretable feature separation.
In one implementation, a simple penalty term in the loss function yields highly decoupled features. β adjusts the learning constraint imposed on the model, which limits the capacity of the latent information channel. In one implementation, β > 1, and this parameter should be tuned during training. In another implementation, β can be estimated using a decoupling metric.
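The β-weighted loss of formula (3) and the reparameterization z = m + σ · e can be sketched in a few lines of NumPy. This is a minimal illustration with a toy linear decoder standing in for the generation network and a Gaussian likelihood assumed for the reconstruction term; it is not the application's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_diag_gaussian(m, log_var):
    """KL( N(m, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + m ** 2 - 1.0 - log_var)

def beta_vae_loss(x, decode, m, log_var, beta=4.0, n_samples=1):
    """Loss of formula (3): beta times the KL term minus the reconstruction term.

    The expectation is approximated with the reparameterization
    z = m + sigma * e, e ~ N(0, I), which keeps the estimator fully
    differentiable with respect to m and log_var."""
    recon = 0.0
    for _ in range(n_samples):
        e = rng.standard_normal(m.shape)
        z = m + np.exp(0.5 * log_var) * e          # reparameterization trick
        x_hat = decode(z)
        recon += -0.5 * np.sum((x - x_hat) ** 2)   # Gaussian log-likelihood, up to a constant
    recon /= n_samples
    return beta * kl_diag_gaussian(m, log_var) - recon

# Toy linear "decoder" standing in for the generation network
W = rng.standard_normal((8, 3))
x = rng.standard_normal(8)
loss = beta_vae_loss(x, lambda z: W @ z, m=np.zeros(3), log_var=np.zeros(3))
```

With β = 1 this reduces to the standard variational bound; larger β tightens the constraint on the latent information channel.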
In one implementation, the architectures of the encoder and decoder can vary. For example, a convolutional neural network followed by a fully connected network can be used as the encoder, and a convolutional neural network as the decoder.
In one implementation, the latent variable z is sampled from the unit Gaussian prior and then fed through the decoder; the generation network then produces an image.
In one implementation, a representation can be learned in which the illumination latent features are sensitive to illumination changes while remaining insensitive to geometric features. Images can be collected under various lighting conditions; in addition, a synthetic lighting mixer can be used to augment real industrial images.
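A minimal sketch of what such a synthetic lighting mixer could look like, assuming images with intensities in [0, 1]; the gain/gamma model below is an illustrative assumption, not the mixer described in the application.

```python
import numpy as np

def mix_illumination(image, gain=1.2, gamma=0.8):
    """Hypothetical synthetic lighting mixer: gain and gamma change.

    Only pixel intensities change; the geometry of the scene is untouched,
    so illumination-invariant (geometric) features should be preserved."""
    return np.clip(image * gain, 0.0, 1.0) ** gamma

img = np.full((4, 4), 0.25)                     # flat gray patch, intensities in [0, 1]
relit = mix_illumination(img, gain=2.0, gamma=1.0)
```

Applying several random gain/gamma pairs to each real image yields training pairs that differ only in illumination, which is what the decoupling objective needs.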
The method proceeds to step 102. In step 102, it is determined which features in z are illumination-invariant. In one implementation, the mutual information gap can be used to select the features. The empirical mutual information between a latent variable z_j and a gold-standard factor v_k can be estimated using the joint distribution

    q(z_j, v_k) = Σ_{n=1}^{N} p(v_k) p(n|v_k) q(z_j|n)    (4)

where n indexes the samples in the data set.
Higher mutual information means that z_j contains much information about v_k; the mutual information is maximal when there is a deterministic, invertible relationship between z_j and v_k. A single factor can have high mutual information with several latent variables.
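To make this concrete, the mutual information between a latent variable and a factor can be estimated from samples, for example with a simple 2-D histogram. This is a generic sketch of an estimator, not the specific procedure of the application.

```python
import numpy as np

def empirical_mi(a, b, bins=8):
    """Histogram-based estimate of the mutual information I(a; b) in nats."""
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab /= p_ab.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal over b
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal over a
    nz = p_ab > 0                             # skip empty bins to avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(0)
v = rng.standard_normal(10_000)        # a "gold-standard" factor
z_good = v                             # latent carrying all information about v
z_bad = rng.standard_normal(10_000)    # latent independent of v
```

A latent that copies the factor scores high, while an independent latent scores near zero, which is exactly the contrast the selection step exploits.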
In one implementation, axis alignment can be measured through the difference between the two latent variables with the highest mutual information.
The full metric, called the mutual information gap (MIG), is as follows:

    MIG = (1/K) Σ_{k=1}^{K} (1/H(v_k)) ( I_n(z_{j^(k)}; v_k) − max_{j ≠ j^(k)} I_n(z_j; v_k) )    (5)

where I_n(z_j; v_k) is the empirical mutual information estimated from the joint distribution above, H(v_k) is the entropy of the factor v_k, and

    j^(k) = argmax_j I_n(z_j; v_k).
The mutual information gap takes values between 0 and 1. In one implementation, the MIG is estimated by iterating over the data set.
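Given a matrix of estimated mutual information values, the MIG metric defined above reduces to a normalized gap between the top two latents per factor. The numbers below are hypothetical.

```python
import numpy as np

def mutual_information_gap(mi, entropy):
    """MIG sketch: mi is a (num_latents, num_factors) matrix of I_n(z_j; v_k),
    entropy holds H(v_k) for each factor."""
    top2 = np.sort(mi, axis=0)[-2:, :]        # two highest MI values per factor
    gaps = (top2[1] - top2[0]) / entropy      # normalized gap per factor
    return float(gaps.mean())

# Hypothetical MI matrix: latent 0 dominates factor 0, latent 2 dominates factor 1
mi = np.array([[0.9, 0.1],
               [0.2, 0.1],
               [0.1, 0.8]])
entropy = np.array([1.0, 1.0])
score = mutual_information_gap(mi, entropy)
```

A score near 1 means each factor is captured by a single, well-separated latent variable, the axis-aligned case.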
In one implementation, z_0, z_1, and z_2 have the highest scores and are therefore the illumination-invariant features.
The method proceeds to step 103. In step 103, the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image is compared. In one implementation, this is done by computing the cosine distance between the illumination-invariant feature vectors.
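Steps 103 and 104 can be sketched as a cosine-similarity check. The feature vectors and the threshold value 0.9 below are hypothetical.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two illumination-invariant feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def has_error(target_feat, standard_feat, threshold=0.9):
    """Flag the component when the features are not similar enough."""
    return cosine_similarity(target_feat, standard_feat) < threshold

target = np.array([1.0, 0.0, 1.0])      # hypothetical z_0, z_1, z_2 of the target image
standard = np.array([1.0, 0.1, 1.0])    # hypothetical features of the standard image
```

Because the comparison runs only on the selected illumination-invariant coordinates, lighting differences between the two images do not, by design, push the similarity below the threshold.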
The method proceeds to step 104. In step 104, it is determined, according to whether the similarity is less than a threshold, whether the component to be verified associated with the target image has an error.
It should be understood that, although the steps in the flowchart of FIG. 1 are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may comprise multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; nor must they be executed sequentially, as they may be executed in turn or in alternation with other steps or with at least part of the sub-steps or stages of other steps.
FIG. 2 shows an apparatus 200 for image comparison for error detection. The apparatus 200 comprises: a decoupling module 201 configured to use the recognition network of a variational autoencoder to decouple illumination-invariant features and illumination features from a target image and a standard image, wherein the target image and the standard image are associated with the relative position of the component to be verified, and the illumination-invariant features are associated with geometric features of the component to be verified; a selection module 202 configured to select the illumination-invariant features from the illumination-invariant features and the illumination features using a mutual information gap; a comparison module 203 configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image; and a determining module 204 configured to determine, in response to the similarity being less than a threshold, that the position of the component to be verified associated with the target image is incorrect.
In one implementation, the recognition network is an approximation of the true posterior distribution, and the decoupling module of the apparatus is further configured to obtain the recognition network by:
maximizing the probability of generating a real image while keeping the distance between the true posterior distribution and the estimated posterior distribution less than the threshold.
In one implementation, the decoupling module of the apparatus is further configured to obtain the recognition network by:
jointly optimizing the loss function using a fully differentiable estimator.
In one implementation, the selection module of the apparatus is further configured to estimate the mutual information gap between the latent variables and the gold-standard factors using a joint distribution, and to select the illumination-invariant features from the illumination-invariant features and the illumination features accordingly.
In one implementation, the comparison module of the apparatus is further configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image by comparing the cosine distance between the illumination-invariant features.
It should be noted that the apparatus may contain more or fewer modules to implement the described functionality. For example, at least one module in FIG. 2 may be further divided into a plurality of sub-modules, each configured to perform at least part of the operations described herein in connection with the corresponding module. Moreover, in some examples, the apparatus 200 may also include additional modules for performing other operations already described in the specification. Those skilled in the art will understand that the exemplary apparatus 200 may be implemented in software, hardware, firmware, or any combination thereof.
FIG. 3 shows a computer device for image comparison for error detection. According to an embodiment, the computer device 300 may include a processor 302 that executes a computer program stored in a memory 304. The computer program, when executed by the processor, implements an image comparison method for error detection.
Those skilled in the art will understand that the structure shown in FIG. 3 is merely a block diagram of a partial structure related to the solution of the present application and does not limit the computer device to which the present application is applied; a specific computer device may include more or fewer components than shown in the figure, may combine certain components, or may have a different arrangement of components.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments can be accomplished by instructing the relevant hardware through a computer program, which can be stored in a non-volatile computer-readable storage medium; when executed, the computer program may include the processes of the above method embodiments. Any reference to memory, storage, database, or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, and the like. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The present application also provides a computer-readable storage medium on which a computer program is stored, wherein the above steps are implemented when the computer program is executed by a processor. Some implementations of the disclosure may include an article of manufacture. The article of manufacture may include a storage medium for storing logic. Examples of storage media may include one or more types of computer-readable storage media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writable or rewritable memory, and so on. Examples of logic may include various software units, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application programming interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In some implementations, for example, the article of manufacture may store executable computer program instructions that, when executed by a processor, cause the processor to perform the methods and/or operations described herein. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and so on. The executable computer program instructions may be implemented according to a predefined computer language, manner, or syntax for instructing a computer to perform a particular function. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled, and/or interpreted programming language.
Embodiments of the present application also provide a computer program product comprising computer instructions that instruct a computing device to perform the operations corresponding to any of the above method embodiments. The methods according to the embodiments of the present application may be implemented in hardware or firmware, or implemented as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium, downloaded over a network, and stored in a local recording medium, so that the methods described herein can be processed by such software stored on a recording medium using a general-purpose computer, a special-purpose processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It will be understood that a computer, processor, microprocessor controller, or programmable hardware includes storage components (e.g., RAM, ROM, flash memory, etc.) that can store or receive software or computer code; when the software or computer code is accessed and executed by the computer, processor, or hardware, the methods described herein are implemented. Furthermore, when a general-purpose computer accesses code for implementing the methods shown herein, execution of the code converts the general-purpose computer into a special-purpose computer for performing those methods.
What has been described above includes examples of the disclosed architecture. It is, of course, not possible to describe every conceivable combination of components and/or methods, but those skilled in the art will appreciate that many other combinations and permutations are possible. Accordingly, the novel architecture is intended to embrace all such alternatives, modifications, and variations that fall within the spirit and scope of the appended claims.

Claims (13)

  1. 一种用于图像比较的方法,包括:A method for image comparison including:
    使用变分自编码器的识别网络,从目标图像和标准图像中解耦出光照不变特征和光照特征,其中,所述目标图像和所述标准图像与待验证的元件之间的相对位置相关联,所述光照不变特征与待验证的元件的几何特征相关联;Using a recognition network of variational autoencoders, illumination-invariant features and illumination features are decoupled from the target image and the standard image, where the target image and the standard image are related to the relative positions of the components to be verified. The illumination invariant feature is associated with the geometric feature of the component to be verified;
    使用互信息间隔,从所述光照不变特征和所述光照特征中选择所述光照不变特征;Selecting the illumination-invariant feature from the illumination-invariant feature and the illumination-invariant feature using a mutual information interval;
    比较所述目标图像的所述光照不变特征与所述标准图像的所述光照不变特征之间的相似度;Comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image;
    根据所述相似度是否小于阈值,确定与所述目标图像相关联的待验证的元件是否有错误。Depending on whether the similarity is less than a threshold, it is determined whether the element to be verified associated with the target image has an error.
  2. 根据权利要求1所述的方法,其中,所述识别网络通过深度学习来获得,是真实后验分布的近似,在所述识别网络的深度学习过程中:The method according to claim 1, wherein the recognition network is obtained through deep learning and is an approximation of the true posterior distribution. During the deep learning process of the recognition network:
    最大化产生真实图像的概率,同时保持所述真实后验分布和估计后验分布之间的距离小于所述阈值。Maximize the probability of producing a true image while keeping the distance between the true posterior distribution and the estimated posterior distribution less than the threshold.
  3. 根据权利要求2所述的方法,其中,在所述识别网络的深度学习过程中:The method of claim 2, wherein during the deep learning process of the recognition network:
    使用全可微分估计器,联合优化损失函数。Jointly optimize the loss function using fully differentiable estimators.
  4. 根据权利要求1-3中任意一项所述的方法,其中,所述使用互信息间隔,从所述光照不变特征和所述光照特征中选择所述光照不变特征包括:The method according to any one of claims 1 to 3, wherein said using a mutual information interval to select the illumination-invariant feature from the illumination-invariant feature and the illumination-invariant feature includes:
    使用联合分布估计隐变量和金标准因子之间的互信息间隔,从所述光照不变特征和所述光照特征中选择所述光照不变特征。The illumination-invariant features are selected from the illumination-invariant features and the illumination-invariant features using a joint distribution to estimate mutual information intervals between latent variables and gold standard factors.
  5. 根据权利要求1-3中任意一项所述的方法,其中,所述比较所述目标图像的所述光照不变特征与所述标准图像的所述光照不变特征之间的相似度包括:The method according to any one of claims 1-3, wherein comparing the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image includes:
    通过比较所述光照不变特征之间的余弦距离,比较所述目标图像的所述光照不变特征与所述标准图像的所述光照不变特征之间的相似度。The similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image is compared by comparing cosine distances between the illumination-invariant features.
  6. 一种用于图像比较的装置(200),包括:An apparatus (200) for image comparison, comprising:
    解耦模块(201),被配置为使用变分自编码器的识别网络,从目标图像和标准图像中 解耦出光照不变特征和光照特征,其中,所述目标图像和所述标准图像与待验证的元件之间的相对位置相关联,所述光照不变特征与待验证的元件的几何特征相关联;The decoupling module (201) is configured to use the recognition network of the variational autoencoder to decouple the illumination invariant features and illumination features from the target image and the standard image, wherein the target image and the standard image are The relative positions between the elements to be verified are associated, and the illumination invariant features are associated with the geometric features of the elements to be verified;
    选择模块(202),被配置为使用互信息间隔,从所述光照不变特征和所述光照特征中选择所述光照不变特征;a selection module (202) configured to select the illumination-invariant feature from the illumination-invariant feature and the illumination-invariant feature using a mutual information interval;
    比较模块(203),被配置为比较所述目标图像的所述光照不变特征与所述标准图像的所述光照不变特征之间的相似度;A comparison module (203) configured to compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image;
    确定模块(204),被配置为根据所述相似度是否小于阈值,确定与所述目标图像相关联的待验证的元件是否有错误。The determining module (204) is configured to determine whether the element to be verified associated with the target image has an error according to whether the similarity is less than a threshold.
  7. 根据权利要求6所述的装置,还包括识别网络训练模块,被配置为通过深度学习来获得所述识别网络,其中,在深度学习的过程中,最大化产生真实图像的概率,同时保持所述真实后验分布和估计后验分布之间的距离小于所述阈值,其中,所述识别网络是真实后验分布的近似。The device according to claim 6, further comprising a recognition network training module configured to obtain the recognition network through deep learning, wherein in the process of deep learning, the probability of generating a real image is maximized while maintaining the The distance between the true posterior distribution and the estimated posterior distribution is less than the threshold, wherein the recognition network is an approximation of the true posterior distribution.
  8. 根据权利要求7所述的装置,其中,所述识别网络训练模块进一步被配置为:The device according to claim 7, wherein the recognition network training module is further configured to:
    使用全可微分估计器,联合优化损失函数。Jointly optimize the loss function using fully differentiable estimators.
  9. 根据权利要求6-8中任意一项所述的装置,其中,所述选择模块进一步被配置为:The device according to any one of claims 6-8, wherein the selection module is further configured to:
    使用联合分布估计隐变量和金标准因子之间的互信息间隔,从所述光照不变特征和所述光照特征中选择所述光照不变特征。The illumination-invariant features are selected from the illumination-invariant features and the illumination-invariant features using a joint distribution to estimate mutual information intervals between latent variables and gold standard factors.
  10. The apparatus according to any one of claims 6-8, wherein the comparison module is further configured to:
    compare the similarity between the illumination-invariant features of the target image and the illumination-invariant features of the standard image by comparing the cosine distances between the illumination-invariant features.
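The comparison in claim 10 reduces to a cosine similarity between two feature vectors followed by the threshold test of claim 6. A minimal sketch (the 0.9 threshold is a hypothetical value; the claims leave the threshold unspecified):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity of two illumination-invariant feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def has_error(target_feat, standard_feat, threshold=0.9):
    # The element is flagged as erroneous when the similarity between the
    # target image's feature and the standard image's feature falls below
    # the threshold.
    return cosine_similarity(target_feat, standard_feat) < threshold
```

Because cosine similarity ignores vector magnitude, two features that differ only by a global scale (e.g. overall brightness gain) still compare as identical, which fits the illumination-invariant comparison the claims describe.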
  11. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
  12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
  13. A computer program product, comprising computer instructions that instruct a computing device to perform the steps of the method according to any one of claims 1 to 5.
PCT/CN2022/115623 2022-08-29 2022-08-29 Image comparison method and apparatus for error detection, and computer device WO2024044915A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/115623 WO2024044915A1 (en) 2022-08-29 2022-08-29 Image comparison method and apparatus for error detection, and computer device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/115623 WO2024044915A1 (en) 2022-08-29 2022-08-29 Image comparison method and apparatus for error detection, and computer device

Publications (1)

Publication Number Publication Date
WO2024044915A1 true WO2024044915A1 (en) 2024-03-07

Family

ID=90100115

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/115623 WO2024044915A1 (en) 2022-08-29 2022-08-29 Image comparison method and apparatus for error detection, and computer device

Country Status (1)

Country Link
WO (1) WO2024044915A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105184778A (en) * 2015-08-25 2015-12-23 广州视源电子科技股份有限公司 Detection method and device
CN111161243A (en) * 2019-12-30 2020-05-15 华南理工大学 Industrial product surface defect detection method based on sample enhancement
CN112508826A (en) * 2020-11-16 2021-03-16 哈尔滨工业大学(深圳) Printed matter defect detection method based on feature registration and gradient shape matching fusion
CN113538432A (en) * 2021-09-17 2021-10-22 南通蓝城机械科技有限公司 Part defect detection method and system based on image processing
CN114140385A (en) * 2021-10-26 2022-03-04 杭州涿溪脑与智能研究所 Printed circuit board defect detection method and system based on deep learning
CN114332993A (en) * 2021-12-17 2022-04-12 深圳集智数字科技有限公司 Face recognition method and device, electronic equipment and computer readable storage medium
US20220207686A1 (en) * 2020-12-30 2022-06-30 Vitrox Technologies Sdn. Bhd. System and method for inspecting an object for defects


Similar Documents

Publication Publication Date Title
CN107228860B (en) Gear defect detection method based on image rotation period characteristics
CN113077453A (en) Circuit board component defect detection method based on deep learning
CN109583504B (en) Visual sense-based method for quickly and accurately identifying circular positioning hole of PCB
CN112036426B (en) Method and system for unsupervised anomaly detection and liability using majority voting of high-dimensional sensor data
CN110838145B (en) Visual positioning and mapping method for indoor dynamic scene
US20220076404A1 (en) Defect management apparatus, method and non-transitory computer readable medium
CN110146017A (en) Industrial robot repetitive positioning accuracy measurement method
CN111461113A (en) Large-angle license plate detection method based on deformed plane object detection network
CN115457391A (en) Magnetic flux leakage internal detection method and system for pipeline and related components
CN114092448B (en) Plug-in electrolytic capacitor mixed detection method based on deep learning
CN116310285B (en) Automatic pointer instrument reading method and system based on deep learning
WO2024044915A1 (en) Image comparison method and apparatus for error detection, and computer device
CN114387433A (en) High-precision positioning method for large-amplitude FPC
CN114299040A (en) Ceramic tile flaw detection method and device and electronic equipment
Sowah et al. An intelligent instrument reader: using computer vision and machine learning to automate meter reading
CN113705564A (en) Pointer type instrument identification reading method
CN117576379A (en) Target detection method based on meta-learning combined attention mechanism network model
CN116630990B (en) RPA flow element path intelligent restoration method and system
CN112329590A (en) Pipeline assembly detection system and detection method
CN111652244A (en) Pointer type meter identification method based on unsupervised feature extraction and matching
CN114529543B (en) Installation detection method and device for peripheral screw gasket of aero-engine
CN112001388B (en) Method for detecting circular target in PCB based on YOLOv3 improved model
CN114692887A (en) Semi-supervised learning system and semi-supervised learning method
CN111311721A (en) Image data set processing method, system, storage medium, program and device
CN114187294B (en) Regular wafer positioning method based on prior information