WO2022201451A1 - Detection device and detection method - Google Patents
- Publication number
- WO2022201451A1 (PCT/JP2021/012632)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
Definitions
- The present invention relates to a detection device and a detection method using an autoencoder (AE).
- Anomaly detection technology using artificial intelligence (AI) models, especially deep learning, has become mainstream.
- A technique called an autoencoder, which learns only from non-defective data, as well as anomaly detection techniques using variants such as the DAE (Denoising Autoencoder) and the VAE (Variational Autoencoder), have been proposed. In these methods, only normal data is used for training so that the output equals the input. During operation, the image data to be judged is input; if it is restored faithfully, the product is judged to be good, and if not, it is judged to be defective (to contain an abnormal part).
- Patent Document 1 discloses a detection device that improves detection accuracy by appropriately performing preprocessing, selection of learning data, and model selection when detecting log anomalies using an autoencoder.
- the detection device includes a preprocessing section, a generation section, and a detection section.
- the preprocessing unit processes learning data and detection target data.
- the generation unit generates a normal state model by deep learning based on the learning data processed by the preprocessing unit.
- the detection unit calculates the degree of abnormality based on the output data obtained by inputting the data to be detected processed by the preprocessing unit into the model, and detects an abnormality in the data to be detected based on the degree of abnormality.
- The present invention has been devised in view of such circumstances. Its object is to provide a detection device and a detection method that prevent an abnormal detection target from being overlooked and erroneously determined to be normal.
- A detection device for detecting the normality/abnormality of a detection target included in an image includes: an image acquisition unit that acquires a first image including the detection target; an image processing unit that outputs a second image obtained by processing the first image; an AE processing unit that outputs a third image obtained by processing, with a learned autoencoder, at least the second image out of the first image and the second image; a calculation processing unit that calculates the difference between the first image and the third image and outputs a fourth image; and a determination unit that determines normality/abnormality based on the fourth image.
- The image processing section may output a second image obtained by partially processing the first image. According to this, since the second image is obtained by partially processing the first image that is the determination target, a detection target present in the determination target appears highlighted in the third image, and it is possible to prevent such a detection target from being overlooked.
- The image processing section may output a plurality of second images by performing a plurality of different processing operations on the first image, and the AE processing section may process the plurality of second images and output a plurality of third images corresponding to each of them. The calculation processing unit may then calculate the differences between the first image and the plurality of third images and output one fourth image. According to this, one fourth image is generated by synthesizing the plurality of different third images obtained by processing, with the autoencoder, the plurality of differently processed second images, and a determination is made based on that fourth image; thus even an extremely small detection target is displayed in an emphasized manner because the corresponding portions overlap, and it is possible to prevent such a detection target from being overlooked.
- The calculation processing unit may output one fourth image by calculating the logical product (AND) of the plurality of calculated differences.
- One fourth image is generated by calculating the logical product of a plurality of differences, and a determination is made based on this fourth image; therefore even an extremely small detection target is emphasized because the relevant portions overlap, and it is possible to prevent such a detection target from being overlooked.
- The image processing unit may perform any two or more different processing operations selected from brightness conversion processing, blurring processing, edge enhancement processing, and alpha blending processing. According to this, because a plurality of different processing operations are performed on the first image that is the determination target, a detection target present in the determination target appears in all of the third images in common and is therefore displayed in an emphasized manner, making it possible to prevent such a detection target from being overlooked.
- The image acquisition unit may acquire the first image from an imaging device. According to this, it is possible to provide a detection device integrated with an imaging device.
- A detection method for detecting the normality/abnormality of a detection target contained in an image comprises: acquiring a first image containing the detection target; outputting a second image obtained by processing the first image; outputting a third image obtained by processing, with a learned autoencoder, at least the second image out of the first image and the second image; calculating the difference between the first image and the third image and outputting a fourth image; and making a normal/abnormal determination based on the fourth image.
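The four-image flow of this method can be sketched as follows. The `process` and `autoencode` callables are hypothetical stand-ins for the image processing unit and the learned autoencoder (neither is specified at this level of the disclosure), and the threshold is an illustrative assumption:

```python
import numpy as np

def detect(first_image, process, autoencode, threshold=0.1):
    """Four-image detection pipeline sketched from the method above.

    `process` and `autoencode` are hypothetical stand-ins for the image
    processing step and the learned autoencoder, respectively.
    """
    second_image = process(first_image)               # processed copy of the input
    third_image = autoencode(second_image)            # restoration by the autoencoder
    fourth_image = np.abs(first_image - third_image)  # pixel-wise difference
    is_abnormal = bool((fourth_image > threshold).any())
    return fourth_image, is_abnormal

clean = np.zeros((4, 4))
_, abnormal = detect(clean, lambda x: x, lambda x: x)  # identity stand-ins
print(abnormal)  # False: nothing differs between input and restoration
```

With identity stand-ins a clean image produces a zero fourth image and is judged normal; any restoration that deviates from the input beyond the threshold flags an abnormality.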
- FIG. 1 is a functional block diagram of an anomaly detection system according to a first embodiment of the present invention
- FIG. 3 is a functional block diagram of an image processing unit of the detection device according to the first embodiment of the present invention
- FIG. 4 is a diagram showing the relationship among an input image, a processed image, and an AE processing result image in the detection device of the first embodiment according to the present invention
- FIG. 5 is a diagram showing the relationship among an input image, an AE processing result image, a difference result image, and a synthesis result image in the detection device of the first embodiment according to the present invention
- A flowchart of processing in the calculation processing unit of the detection device according to the first embodiment of the present invention
- A hardware configuration diagram of the anomaly detection system according to the first embodiment of the present invention
- An abnormality detection system 101 is used, for example, in a production line 103 for screws 102 to detect an abnormality in a manufactured screw 102 and sort the screws 102 into non-defective products and defective products.
- An abnormality detection system 101 captures an image of a screw 102 in a production line 103 in which non-defective products and defective products coexist, and detects an abnormality based on the captured image.
- The abnormality detection system 101 distributes the screws 102 for which no abnormality has been detected to the non-defective production line 103, and distributes the screws 102 for which an abnormality has been detected to the defective production line 103.
- This figure shows that the defective screw 102 has an abnormality in which the tool groove (cruciform portion) of the head is crushed.
- the anomaly detection system 101 is not limited to the example shown in this figure, and can be used for various detection targets such as a structure itself, parts such as substrates constituting the structure, and products including food.
- The anomaly detection system 101 includes the detection device 100 according to the present invention; a recording control unit 204 that controls recording of the images and videos processed by the detection device 100; a recording device 205 that stores the images whose recording is instructed by the recording control unit 204; a display control unit 206 that controls display of the images and the like whose display is instructed by the recording control unit 204; a display output device 207 that displays and outputs the images and the like; and a device control device 208 that controls devices such as the production line 103 (switching the flow direction, etc.) according to the sorting control performed by the detection device 100.
- the recording control unit 204 uses the results determined by the detection device 100 to control the recording of images and videos, as well as the compression rate and recording interval of recorded videos.
- the recording device 205 records and retains images and the like obtained from the detection device 100 according to commands from the recording control unit 204 .
- the display control unit 206 controls display of an image or the like acquired by the detection device 100 , a result determined by the detection device 100 , and information saved in the recording device 205 .
- the display output device 207 actually displays these images, results, information, and the like.
- the detection device 100 is a device that detects normality/abnormality of a detection target included in an image.
- The detection device 100 includes an imaging device 201 that captures an image of the detection target, an image acquisition unit 202 that acquires an image or video (first image) of the detection target captured by the imaging device 201, and an image processing unit 203 that processes images based on the image or the like acquired by the image acquisition unit 202 and determines whether the target is good or bad.
- The imaging device 201 is, for example, one or more industrial cameras. This makes it possible to provide the detection device 100 integrated with the imaging device 201 and to quickly detect normality/abnormality from the images and the like obtained by the imaging device 201.
- The imaging device 201, the image acquisition unit 202, and the image processing unit 203 do not necessarily have to be integrated; some of them may be located remotely (e.g., in a control room) and communicatively connected to each other.
- the image acquisition unit 202 acquires the signal obtained from the imaging device 201 as an image or the like.
- The image acquisition unit 202 obtains one-dimensional, two-dimensional, or three-dimensional image data from video signals input either as real-time image data from a camera serving as the imaging device 201 or from a video recording device in which image data has been recorded.
- processing such as a smoothing filter may be appropriately performed as preprocessing in order to reduce the influence of flicker and the like.
- data formats such as RGB color, YUV, and monochrome may be selected according to the application.
- the image data may be reduced to a predetermined size. Note that the image processing unit 203 will be described later.
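The preprocessing mentioned above (smoothing against flicker, reduction to a predetermined size) might be sketched as follows; the box filter and block-averaging shrink are illustrative choices, not the specific filters used by the device:

```python
import numpy as np

def box_smooth(img, k=3):
    """Simple box-filter smoothing to suppress flicker (a sketch; production
    code would likely use an optimized library routine)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):          # accumulate the k x k neighborhood
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def shrink(img, factor=2):
    """Reduce the image to a predetermined size by block averaging."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))
```

Either step is optional and applied "as appropriate" per the text; the data format (RGB, YUV, monochrome) would be selected before this stage.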
- The anomaly detection system 101 includes a processing unit Prc composed of processors such as a CPU (Central Processing Unit) and an MPU (Micro Processing Unit); a storage unit Mem composed of memory devices such as ROM (Read Only Memory) and RAM (Random Access Memory) and storage devices such as a hard disk HD or DVD; and a communication unit Com with a network interface that communicates with the imaging device 201 and the device control device 208 to input and output signals. These are connected to each other via a transmission line such as a system bus including an expansion bus.
- the processing unit Prc has one or more processors (or cores) and their peripheral circuits capable of executing multiple programs in parallel.
- The processing unit Prc includes an overall control unit 209 that controls the overall operation of the abnormality detection system 101, transmits and receives control signals and information signals (data) to and from the other components described above, and performs the various kinds of arithmetic processing required for the processing, execution, and control of the detection system 101. The processing unit Prc is therefore configured to perform, in a storage area that can be accessed at high speed, arithmetic operations such as addition, subtraction, multiplication, and division using a numerical operation unit or the like, logical operations such as the logical product (AND), and vector operations according to the learned model.
- the storage unit Mem includes various types of memory devices and storage devices according to usage, and partly configures the recording device 205 .
- the ROM generally records an IPL (Initial Program Loader) that is executed first after the power is turned on.
- The programs, data, learned models, etc. stored in the storage device such as the hard disk HD are loaded into the RAM for temporary storage and executed by the overall control unit 209.
- the programs stored in the storage unit Mem are an operating system program, programs and modules necessary for the anomaly detection system 101, and trained models.
- the operating system program is MICROSOFT (registered trademark) WINDOWS (registered trademark), LINUX (registered trademark), UNIX (registered trademark), or the like, and is not particularly limited as long as the anomaly detection system 101 can be executed.
- The processing unit Prc reads the programs and the like necessary for the anomaly detection system 101 from the storage unit Mem and controls the image acquisition unit 202, the image processing unit 203, the recording control unit 204, the display control unit 206 that controls the display output device 207, and so on, thereby implementing the functions of the anomaly detection system 101. In this way, the hardware and software described above work together to realize the processing and operations unique to the anomaly detection system 101.
- The anomaly detection system 101 is not limited to the hardware described above, and may instead use hardware such as a Digital Signal Processor (DSP), a Field-Programmable Gate Array (FPGA), or a Graphics Processing Unit (GPU).
- the image processing unit 203 receives an image or the like (first image) from the image acquisition unit 202, outputs a determination result to the device control device 208, and outputs an image or the like to the recording control unit 204 using the determination result.
- The image processing unit 203 includes: an image processing unit 301 that outputs an image (second image) obtained by processing the image (first image) input from the image acquisition unit 202; an AE processing unit 303 that outputs an image (third image) processed by a learned autoencoder; a calculation processing unit 304 that outputs an image (fourth image) obtained by calculating the difference between the input image (first image) and the image processed by the AE processing unit 303 (third image); and a determination unit 305 that determines normality/abnormality based on the image (fourth image) for which the difference was calculated.
- the image processing unit 301 has a function of variously processing the input image.
- the image processing unit 301 may process the entire image or may process the image partially.
- the image processing unit 301 may process the image by various processing methods such as brightness conversion processing, blurring processing, edge enhancement processing, and alpha blending processing.
- Brightness conversion processing is processing for changing brightness, one of the three elements that constitute the color space of an image. The brightness conversion processing may also be more detailed processing, such as changing the saturation for each hue.
- the blurring process is a process of calculating pixel values (RGB values) of pixels in an image by combining them with surrounding pixels using various filters.
- Various filters include, for example, a Gaussian filter, an averaging filter, and a median filter. The direction and size of the neighborhood are determined appropriately.
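Two of the blur variants named above can be sketched in a few lines; the neighborhood-stacking approach here is an illustrative implementation choice (a Gaussian filter would additionally weight the neighborhood):

```python
import numpy as np

def blur(img, k=3, kind="average"):
    """Blurring: combine each pixel with its k x k neighborhood.

    Only the averaging and median variants are sketched; edge pixels are
    handled by replicating the border.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    # Gather every k x k neighborhood as a stack of shifted copies.
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(k) for dx in range(k)])
    return stack.mean(axis=0) if kind == "average" else np.median(stack, axis=0)
```

Note how the median variant deletes an isolated one-pixel impulse outright, while the averaging variant merely spreads it over the neighborhood; this difference matters when choosing a filter for the processing step.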
- Edge enhancement processing converts pixels using various filters so that the image becomes sharper; it emphasizes portions of the image where there is a large change (gradient) in pixel values (luminance).
- Various filters include, for example, a Prewitt filter, a Sobel filter, a Laplacian filter, and a sharpening filter.
- the direction for edge detection and the magnitude of the gradient are appropriately determined.
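As an illustration of one of the filters listed above, the Sobel gradient magnitude can be computed as follows (interior pixels only; the direction handling and border policy are simplifying assumptions):

```python
import numpy as np

def sobel_magnitude(img):
    """Edge enhancement sketch: Sobel gradient magnitude on interior pixels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                         # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):  # correlate each 3x3 kernel with the image
        for dx in range(3):
            patch = img[dy:dy + h - 2, dx:dx + w - 2]
            gx += kx[dy, dx] * patch
            gy += ky[dy, dx] * patch
    return np.hypot(gx, gy)  # large where brightness changes sharply
```

A flat image yields zero everywhere, while a vertical step edge produces a strong response along the step, which is exactly the behavior that makes gradient filters useful for emphasizing flaws.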
- the alpha blending process is a process of multiplying an image by an alpha value (transparency information) to superimpose a translucent image.
- The image to be blended can have any texture.
- the image processing unit 301 may apply the processing method described above to the entire image or to a part of the image.
- When processing a part of the image, the processed region may be a regular pattern such as vertical stripes, horizontal stripes, or a lattice, or it may be random dots, islands, a spiral shape, or the like; it is not particularly limited.
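Partial alpha blending with a patterned mask, as described above, can be sketched like this; the horizontal-stripe mask and the 0.5 alpha value are illustrative assumptions (any mask shape and texture would do):

```python
import numpy as np

def stripe_mask(shape, period=4, width=2):
    """Horizontal-stripe mask: 1 inside a stripe, 0 elsewhere."""
    rows = np.arange(shape[0]) % period < width
    return np.repeat(rows[:, None], shape[1], axis=1).astype(float)

def alpha_blend_partial(img, texture, mask, alpha=0.5):
    """Blend `texture` over `img` with transparency `alpha`, but only where
    `mask` is 1, mirroring the idea of processing only part of the image
    (stripes, lattice, spiral, etc.)."""
    blended = (1.0 - alpha) * img + alpha * texture
    return np.where(mask > 0, blended, img)
```

The unmasked portions pass through untouched, so the autoencoder later sees a mixture of original and deliberately perturbed regions.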
- By such processing, a detection target such as a flaw or defect is emphasized and displayed in the fourth image, and it is possible to prevent such a detection target from being overlooked.
- the image processing unit 301 may output a plurality of images (second images) by processing the input image using a plurality of different processing methods.
- In that case, the AE processing unit 303 at the next stage processes the multiple processed images and outputs multiple images (third images) corresponding to each of them.
- the image processing unit 301 may directly output the input image to the AE processing unit 303 without processing the input image.
- the AE processing unit 303 processes at least the processed image (second image) out of the unprocessed image (first image) and the processed image (second image).
- It is preferable that the image processing unit 301 perform any two or more different processing operations selected from brightness conversion, blurring, edge enhancement, and alpha blending. In this way, by performing a plurality of different processing operations on the input image (first image) to be determined, a plurality of differently processed images can be input to the AE processing unit 303.
- When the image processing unit 301 performs two or more different processing operations, it is preferable that the operations be applied to different portions of the image.
- For example, the image processing unit 301 applies alpha blending with a spiral mask to an input image 401 as processing 1, and blurs it in horizontal stripes as processing N-1; these are output, and the unprocessed input image is also output. In this manner, the image processing unit 301 performs processing such as intentionally adding noise to the input image and outputs the processed images 402.
- the AE processing unit 303 processes the processed image 402 output by the image processing unit 301 using a learned autoencoder.
- An autoencoder is composed of an encoder and a decoder, each implemented as a neural network.
- the encoder outputs dimensionally compressed features from the input data, and the decoder functions to recover the input data from the features.
- During training, the input to the autoencoder and its output are compared to calculate the error, and the neural network weights are adjusted, for example by error backpropagation, so that the error is minimized, i.e., so that the output of the autoencoder matches the input.
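The training loop described above can be sketched with a toy linear autoencoder: a deliberately minimal stand-in for the neural-network encoder/decoder (a real model would be a deep network trained with a framework; the dimensions, learning rate, and iteration count here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder: encoder W1 compresses 8-dim inputs to 2-dim
# features, decoder W2 tries to restore them.  Plain gradient descent on
# the mean squared reconstruction error stands in for backpropagation
# through a deep network.
W1 = rng.normal(scale=0.1, size=(2, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))

X = rng.normal(size=(64, 8))  # "normal product" training data
init_err = np.mean((X @ W1.T @ W2.T - X) ** 2)

lr = 0.01
for _ in range(500):
    Z = X @ W1.T            # encode: dimensionally compressed features
    Y = Z @ W2.T            # decode: attempt to restore the input
    E = Y - X               # reconstruction error
    gW2 = E.T @ Z / len(X)  # gradient of the squared error w.r.t. W2
    gW1 = (E @ W2).T @ X / len(X)  # ... and w.r.t. W1, via the chain rule
    W2 -= lr * gW2
    W1 -= lr * gW1

final_err = np.mean((X @ W1.T @ W2.T - X) ** 2)
```

After training, the reconstruction error is lower than at initialization: the weights have been adjusted so that the output matches the input as closely as the compressed feature dimension allows.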
- When the autoencoder is used to detect anomalies, as shown in FIG. 8, it is trained using only image data of normal products with no abnormality.
- When the learned autoencoder receives an image of a normal screw, it restores and outputs the normal screw. Even when an image showing an abnormal screw is input, it will try to output an image from which the abnormal part has been removed, that is, an image of a normal screw. Therefore, by comparing an image containing an abnormality with the seemingly normal output image and extracting the difference, it is possible to detect the abnormality and identify its location.
- However, when the abnormal portion to be detected occupies an extremely small area of the image, it is difficult to detect with an ordinary autoencoder.
- the present invention solves such problems.
- The structure of the intermediate layers in the learning model 302 of the trained autoencoder (the depth of the layers, the size and number of filters, and so on) is not particularly limited.
- Another autoencoder-based technique, such as a DAE or VAE, may also be used.
- the learning model 302 of the learned autoencoder is stored in the recording device 205 or the like, and is loaded into the processing unit Prc as part of the image processing unit 203 during operation of the detection device 100 .
- The AE processing unit 303 receives the processed image 402 that has undergone processing 1 and, using the above-described learned autoencoder, outputs the AE processing result image 403 of AE processing result 2.
- The AE processing result image 403 of AE processing result 2 is closer to a normal product than the processed image 402 subjected to processing 1, but it cannot be completely restored because of the spiral mask, and parts of it are restored with an unnatural appearance.
- Similarly, the AE processing unit 303 outputs the AE processing result image 403 of AE processing result N when the processed image 402 subjected to processing N-1 is input.
- The AE processing result image 403 of AE processing result N is closer to a normal product than the processed image 402 subjected to processing N-1, but it cannot be completely restored because of the horizontal-striped mask, and parts of it are restored with an unnatural appearance.
- An abnormal portion not present in a normal screw (in this figure, the crushed tool groove (cruciform part) of the screw head) is not completely restored, and is partially restored in an unnatural shape.
- Even for an unprocessed image, it is conceivable that the AE processing unit 303 cannot completely restore part of the abnormal portion, even when it is extremely small, so that part of it appears in an unnatural form; conversely, part of the abnormal portion may be restored to a natural, normal-looking shape.
- Thus, the AE processing unit 303 processes at least the processed image (second image) out of the unprocessed input image (first image) and the processed image. Also, as shown in FIG. 3, it is preferable to provide a plurality of AE processing units 303 in order to parallelize and speed up the calculation processing.
- The calculation processing unit 304 calculates the difference between the input image 401 (first image) acquired by the image acquisition unit 202 and the AE processing result image 403 (third image) restored and output by the AE processing unit 303, and outputs it as a difference result image 404 (fourth image).
- the difference result image 404 is obtained, for example, by measuring the color space distance for both corresponding pixels and highlighting those pixels where the distance is non-zero or above a predetermined threshold.
- the predetermined threshold value is appropriately determined so as to make the difference stand out.
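The color-space-distance comparison above can be sketched as follows; the Euclidean distance over RGB channels and the threshold value are illustrative assumptions (the text only requires some color-space distance and an appropriately chosen threshold):

```python
import numpy as np

def difference_image(first, third, threshold=0.1):
    """Fourth-image sketch: per-pixel color-space distance between the input
    image and the autoencoder's restoration, with pixels highlighted where
    the distance exceeds a predetermined threshold."""
    # Euclidean distance in color space for each corresponding pixel pair.
    dist = np.linalg.norm(first.astype(float) - third.astype(float), axis=-1)
    return (dist > threshold).astype(np.uint8)  # 1 = highlighted pixel
```

Pixels where the restoration matches the input fall below the threshold and stay dark; only the poorly restored (i.e., suspect) regions are highlighted.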
- the calculation processing unit 304 calculates the difference result image 404 of the difference result 2 by calculating the difference between the input image 401 and the AE process result image 403 of the AE process result 2.
- a difference result image 404 of the difference result 2 is an image showing the difference between the input image 401 and the restored image which cannot be completely restored by the spiral mask and which is partially unnatural.
- the difference result image 404 of the difference result N is an image showing the difference between the input image 401 and the restored image that cannot be completely restored by the horizontal striped mask and is partially unnatural.
- The difference result image 404 of difference result 1, which shows the case where the AE processing unit 303 also processes the unprocessed image, is an image of the difference from the input image 401: even if the abnormal portion is extremely small, part of it cannot be completely restored and appears in an unnatural form, while portions other than the abnormal portion are restored normally in a natural form.
- The difference included in the difference result image 404 of difference result 1 may also be included in the other difference result images 404, such as those of difference result 2 and difference result N.
- When the image processing unit 301 outputs a plurality of processed images 402 and the AE processing unit 303 processes them, a plurality of difference results are produced as difference result images 404; in this case, the calculation processing unit 304 outputs one fourth image.
- Specifically, the calculation processing unit 304 generates a plurality of difference result images 404 by calculating the differences between the input image 401 and the plurality of AE processing result images 403 restored by the AE processing unit 303, and outputs a single synthesis result image 405 (fourth image) by synthesizing the plurality of difference result images 404.
- It is preferable that the calculation processing unit 304 synthesize the calculated plurality of difference result images 404 by computing their logical product (AND) and output one synthesis result image 405 (fourth image). As a result, the synthesis result image 405 becomes an image in which even an extremely small detection target is emphasized and displayed because the relevant portions overlap.
- Because the difference included in the difference result image 404 of difference result 1, that is, the difference due to the abnormal portion, is also included in the difference result images 404 of difference results 2, 3, ..., and N, in the single synthesis result image 405 obtained by calculating the logical product of all these difference result images 404, even an extremely small difference in each difference result image 404 is computed as a large difference.
- The processing of the calculation processing unit 304 is executed as shown in the flowchart of FIG.
- In S101, the calculation processing unit 304 acquires the difference result images 404, which are the plurality of difference results 1 to N. In S102, the calculation processing unit 304 converts pixel values equal to or greater than a threshold A to 1 (white) and pixel values below the threshold A to 0 (black). In this way, the portions at or above the threshold A are emphasized.
- In S103, the calculation processing unit 304 computes the logical product (AND) of each corresponding pixel across the converted difference result images 404.
- the calculation processing unit 304 uses different thresholds B, and sets the logical AND result to 1 (white) if each pixel is equal to or greater than the threshold B, and to logical if the threshold B is not exceeded.
- the result of the product may be 0 (black).
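As an illustration only, and not the claimed implementation, the binarization with threshold A and the per-pixel logical product described above can be sketched with NumPy as follows; the array sizes and threshold values are arbitrary assumptions.

```python
import numpy as np

def binarize(diff_image, threshold_a):
    """Convert pixels >= threshold A to 1 (white) and all others to 0 (black)."""
    return (diff_image >= threshold_a).astype(np.uint8)

def synthesize_by_and(diff_images, threshold_a):
    """Binarize each difference result image and take the per-pixel logical product."""
    result = binarize(diff_images[0], threshold_a)
    for img in diff_images[1:]:
        result &= binarize(img, threshold_a)
    return result

# Two toy 3x3 "difference result images": only the centre pixel exceeds
# threshold A in both of them, so only it survives the AND.
a = np.array([[0, 0, 0], [0, 200, 0], [0, 90, 0]], dtype=np.uint8)
b = np.array([[0, 0, 0], [0, 180, 0], [0, 0, 90]], dtype=np.uint8)
out = synthesize_by_and([a, b], threshold_a=100)
print(out)  # only the centre pixel remains 1
```

A second threshold B, as described above, could be applied to the AND result in the same per-pixel manner.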
- When the AE processing unit 303 processes a single image, the determination unit 305 determines normality/abnormality based on the one difference result image 404 (fourth image); in this case, a single difference result image 404 is output. When the AE processing unit 303 processes a plurality of images, the determination unit 305 performs the normal/abnormal determination based on the one synthesized result image 405 (fourth image). In this way, the determination is made based on the difference image (fourth image) showing the difference between the input image 401 (first image) to be determined and the AE processing result image 403 (third image) obtained by the autoencoder processing the processed image 402 (second image) derived from the input image 401. Even when the target to be detected appears in an extremely small area of the image, it is therefore possible to provide the detection device 100 that prevents such a detection target from being overlooked in the normality/abnormality determination.
- In addition, a plurality of different AE processing result images 403, obtained by the autoencoder processing a plurality of differently processed images 402 (second images), are combined into one synthesized result image 405 (fourth image), and the determination is made based on that synthesized result image 405 (fourth image). As a result, even an extremely small detection target is calculated and displayed in an emphasized manner through the superimposition of the relevant portions, so overlooking such a detection target can be avoided.
- Since all the detection targets, such as scratches, that were present in the determination target appear in common in the AE processing result images 403 (third images), even a small detection target is calculated and displayed in an emphasized manner by superimposition in the synthesis result image 405 (fourth image), and overlooking such a detection target can be prevented.
- The above also constitutes a detection method for detecting the normality/abnormality of a detection target included in an image.
- This detection method comprises: a step of acquiring an input image 401 (first image) showing a detection target such as a screw; a step of outputting a processed image 402 (second image) obtained by processing the input image; a step of outputting an AE processing result image 403 (third image) obtained by the autoencoder processing at least the processed image 402, of the input image 401 and the processed image 402; a step of calculating the difference between the input image 401 and the AE processing result image 403 and outputting a difference result image 404 (fourth image); and a step of determining normality/abnormality based on the difference result image 404.
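The sequence of steps above can be outlined as a minimal sketch, assuming a toy stand-in for the trained autoencoder: the `reconstruct` stub below simply returns an assumed "normal" appearance, and the brightness offset and threshold are likewise illustrative, not taken from the patent.

```python
import numpy as np

def process(image):
    # Second image: one hypothetical processing step (a brightness shift).
    return np.clip(image.astype(np.int16) + 30, 0, 255).astype(np.uint8)

def reconstruct(image):
    # Third image: stand-in for the trained autoencoder, which maps any
    # input toward the learned normal appearance (assumed uniform here).
    return np.full_like(image, 158)

def detect(input_image, threshold=50):
    processed = process(input_image)                       # second image
    ae_result = reconstruct(processed)                     # third image
    diff = np.abs(input_image.astype(np.int16)
                  - ae_result.astype(np.int16))            # fourth image
    return "abnormal" if (diff >= threshold).any() else "normal"

good = np.full((4, 4), 128, dtype=np.uint8)
flawed = good.copy()
flawed[2, 2] = 250                                         # a tiny "scratch"
print(detect(good), detect(flawed))                        # → normal abnormal
```

Even a single anomalous pixel flips the decision, which is the property the difference-based determination relies on.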
- In the anomaly detection system 101 for manufactured products using an industrial camera or the like, it is thus possible to avoid overlooking flaws such as extremely small scratches, chips, and processing deviations mixed in with manufactured products.
- 100: Detection device, 101: Anomaly detection system, 102: Screw (detection target), 103: Manufacturing line, 201: Imaging device, 202: Image acquisition unit, 203: Image processing unit, 204: Recording control unit, 205: Recording device, 206: Display control unit, 207: Display output device, 208: Device control device, 209: Overall control unit, 301: Image processing unit, 302: Learning model, 303: AE processing unit, 304: Calculation processing unit, 305: Determination unit, 401: Input image (first image, determination target), 402: Processed image (second image), 403: AE processing result image (third image), 404: Difference result image (fourth image), 405: Synthesis result image (fourth image)
Abstract
To provide a detection device that prevents an object to be detected from being overlooked in normality/abnormality determination even when the object appears in an extremely small area of an image, a detection device 100 comprises: an image acquisition unit 202 that acquires a first image including the object to be detected; an image processing unit 301 that outputs a second image obtained by processing the first image; an AE processing unit 303 that outputs a third image obtained by processing at least the second image, of the first image and the second image, with a trained autoencoder; a calculation processing unit 304 that calculates the difference between the first image and the third image and outputs a fourth image; and a determination unit 305 that makes the normality/abnormality determination on the basis of the fourth image.
Description
The present invention relates to a detection device and a detection method using an autoencoder (AE).
Conventionally, techniques are known for anomaly detection that automatically judge whether a manufactured product is good or defective using an imaging device such as an industrial camera. In recent years, anomaly detection using artificial intelligence (AI) models, in particular deep learning, has become mainstream. In fields where it is difficult to collect a large amount of abnormal data, a technique called an autoencoder, which learns only from non-defective data, is used. Based on this technique, anomaly detection methods using various approaches, such as the DAE (Denoising Autoencoder) and VAE (Variational Autoencoder), have been proposed. These methods train on normal data only so that the output equals the input. During operation, the image data to be judged is input; if it is reconstructed normally, it is judged to be good, and if it is not, it is judged to be defective (to contain an abnormal part).
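To illustrate the principle only (this is not the patent's implementation), a linear stand-in for an autoencoder can be built with NumPy: training on normal samples amounts to learning a low-dimensional subspace, and the reconstruction error then separates normal from abnormal inputs. The synthetic data and the one-dimensional bottleneck are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Good product" samples: 2-D points lying near the line y = x.
t = rng.normal(size=(200, 1))
normal_data = np.hstack([t, t + 0.01 * rng.normal(size=(200, 1))])

# "Training": a linear autoencoder with a 1-D bottleneck is equivalent to
# projecting onto the top principal component of the normal data.
mean = normal_data.mean(axis=0)
_, _, vt = np.linalg.svd(normal_data - mean)
component = vt[0]                               # learned 1-D "code" direction

def reconstruction_error(x):
    centered = x - mean
    code = centered @ component                 # encode
    decoded = np.outer(code, component) + mean  # decode
    return np.linalg.norm(x - decoded, axis=-1)

print(reconstruction_error(np.array([[2.0, 2.0]])))   # on the normal manifold: small
print(reconstruction_error(np.array([[2.0, -2.0]])))  # off it: large
```

A point that resembles the training data is reconstructed almost exactly, while an anomalous point is pulled back onto the learned subspace and leaves a large residual, which is the judgment criterion described above.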
For example, Patent Document 1 discloses a detection device that improves detection accuracy when detecting log anomalies with an autoencoder by appropriately preprocessing and selecting the training data and selecting the model. This detection device includes a preprocessing unit, a generation unit, and a detection unit. The preprocessing unit processes the training data and the data to be analyzed. The generation unit generates a model of the normal state by deep learning based on the training data processed by the preprocessing unit. The detection unit calculates a degree of abnormality based on the output data obtained by inputting the processed target data into the model, and detects an abnormality in the target data based on that degree of abnormality.
However, the causes of defects mixed in with manufactured products are often extremely small scratches, chips, or processing deviations in the images in which they appear. Even with the technique described above, it is difficult to detect a cause shown in such a small area, and it is often overlooked.
The present invention has been devised in view of these circumstances, and provides a detection device and a detection method that prevent a detection target from being overlooked in normality/abnormality determination even when the target to be detected appears in an extremely small area of an image.
In order to solve the above problems, there is provided a detection device for detecting the normality/abnormality of a detection target included in an image, comprising: an image acquisition unit that acquires a first image including the detection target; an image processing unit that outputs a second image obtained by processing the first image; an AE processing unit that outputs a third image obtained by processing at least the second image, of the first image and the second image, with a trained autoencoder; a calculation processing unit that calculates the difference between the first image and the third image and outputs a fourth image; and a determination unit that determines normality/abnormality based on the fourth image.
According to this, by making the determination based on the fourth image, which shows the difference between the first image to be determined and the third image obtained by the autoencoder processing the second image derived from the first image, it is possible to provide a detection device that prevents the detection target from being overlooked in normality/abnormality determination even when the target to be detected appears in an extremely small area of the image.
Further, the image processing unit may output a second image obtained by partially processing the first image.
According to this, since the second image is obtained by partially processing the first image to be determined, a detection target present in the determination target appears in the third image, so that in the fourth image the detection target portion is displayed in an emphasized manner, and overlooking such a detection target can be prevented.
Further, the image processing unit may output a plurality of second images by performing a plurality of different processing operations on the first image; the AE processing unit may process the plurality of second images and output a plurality of third images corresponding to the respective second images; and the calculation processing unit may calculate the differences between the first image and the plurality of third images and output one fourth image.
According to this, one fourth image is generated by combining the plurality of different third images obtained by the autoencoder processing the plurality of differently processed second images, and the determination is made based on that fourth image. As a result, even an extremely small detection target is displayed in an emphasized manner because the corresponding portions are superimposed, and overlooking such a detection target can be prevented.
Further, the calculation processing unit may output one fourth image by calculating the logical product of the plurality of calculated differences.
According to this, one fourth image is generated by calculating the logical product of the plurality of differences, and the determination is made based on that fourth image, so that even an extremely small detection target is emphasized by the superimposition of the relevant portion, and overlooking such a detection target can be prevented.
Further, the image processing unit may perform any two or more different processing operations from the set of brightness conversion processing, blurring processing, edge enhancement processing, and alpha blending processing.
According to this, by performing a plurality of different processing operations on the first image to be determined, a detection target present in the determination target appears in common in all of the third images, so that in the fourth image the corresponding portion is emphasized by superimposition, and overlooking such a detection target can be prevented.
Further, the detection device may further comprise an imaging device that captures images of the detection target, and the image acquisition unit may acquire the first image from the imaging device.
According to this, it is possible to provide a detection device integrated with an imaging device.
In order to solve the above problems, there is also provided a detection method for detecting the normality/abnormality of a detection target included in an image, comprising: a step of acquiring a first image including the detection target; a step of outputting a second image obtained by processing the first image; a step of outputting a third image obtained by processing at least the second image, of the first image and the second image, with a trained autoencoder; a step of calculating the difference between the first image and the third image and outputting a fourth image; and a step of determining normality/abnormality based on the fourth image.
According to this, by making the determination based on the fourth image, which shows the difference between the first image to be determined and the third image obtained by the autoencoder processing the second image derived from the first image, it is possible to provide a detection method that prevents the detection target from being overlooked in normality/abnormality determination even when the target to be detected appears in an extremely small area of the image.
As described above, according to the present invention, it is possible to provide a detection device and a detection method that prevent a detection target from being overlooked in normality/abnormality determination even when the target to be detected appears in an extremely small area of an image.
Embodiments of the present invention will be described below with reference to the drawings.
<First embodiment>
The anomaly detection system 101 and the detection device 100 incorporated in it according to the present embodiment will be described with reference to FIGS. 1 to 7. As shown in FIG. 1, the anomaly detection system 101 is used, for example, on a manufacturing line 103 for screws 102 to detect abnormalities in the manufactured screws 102 and sort them into good and defective products. The anomaly detection system 101 captures images of the screws 102 on the manufacturing line 103, where good and defective products coexist, and detects abnormalities based on the captured images. The anomaly detection system 101 performs control that sorts screws 102 in which no abnormality was detected onto the good-product line 103 and screws 102 in which an abnormality was detected onto the defective-product line 103. In this figure, the defective screw 102 has an abnormality in which the tool groove (the cruciform recess) in its head is crushed. Of course, the anomaly detection system 101 is not limited to the example in this figure and can be used for various detection targets, for example, a structure itself, components such as boards constituting the structure, and products including foodstuffs.
As shown in FIG. 2, the anomaly detection system 101 comprises the detection device 100 according to the present invention; a recording control unit 204 that controls the recording of the images and video processed by the detection device 100; a recording device 205 that stores the images that the recording control unit 204 has instructed it to record; a display control unit 206 that controls the display of images and the like; a display output device 207 that displays the images instructed by the display control unit 206 or images from the recording device 205; and a device control device 208 that controls equipment such as the manufacturing line 103 (switching the flow direction, etc.) according to the sorting control performed by the detection device 100.
The recording control unit 204 uses the determination results of the detection device 100 to control the recording of images and video, as well as the compression rate and recording interval of recorded video. The recording device 205 records and retains images and the like obtained from the detection device 100 in accordance with commands from the recording control unit 204. The display control unit 206 controls the display of the images and the like acquired by the detection device 100, the results determined by the detection device 100, and the information stored in the recording device 205. The display output device 207 actually displays these images, results, and information.
The detection device 100 is a device that detects the normality/abnormality of a detection target included in an image. As shown in this figure, the detection device 100 comprises an imaging device 201 that captures images of the detection target, an image acquisition unit 202 that acquires images or video (first images) including the detection target captured by the imaging device 201, and an image processing unit 203 that processes the images acquired by the image acquisition unit 202 and performs good/defective determination. The imaging device 201 is, for example, one or more industrial cameras. This makes it possible to provide the detection device 100 integrated with the imaging device 201, so that normality/abnormality can be detected quickly from the images obtained by the imaging device 201. Note that the imaging device 201, the image acquisition unit 202, and the image processing unit 203 do not necessarily have to be integrated; for example, the imaging device 201 may be placed near the manufacturing line 103 while the image acquisition unit 202 and the image processing unit 203 are placed at a remote location (for example, a control room) and connected to it by communication.
The image acquisition unit 202 acquires the signal obtained from the imaging device 201 as images or the like. The image acquisition unit 202 obtains one-, two-, or three-dimensional array image data from real-time image data from the camera serving as the imaging device 201, or from a video signal input from a video recording device in which image data is recorded. This image data may be preprocessed as appropriate, for example with a smoothing filter, to reduce the influence of flicker and the like. A data format such as RGB color, YUV, or monochrome may be selected according to the application. Furthermore, the image data may be reduced to a predetermined size to lower the processing cost. The image processing unit 203 will be described later.
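By way of example only, the format selection and size reduction mentioned above could look like the following; the luma weights and the 2x scale factor are assumptions, since the patent leaves both to the application.

```python
import numpy as np

def to_grayscale(rgb):
    # Monochrome conversion using common luma weights (an assumption;
    # the data format is left to the application).
    return (rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def downscale_2x(img):
    # Reduce processing cost by 2x2 block averaging.
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    blocks = img[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3)).astype(np.uint8)

rgb = np.full((4, 4, 3), 120, dtype=np.uint8)
gray = to_grayscale(rgb)
small = downscale_2x(gray)
print(gray.shape, small.shape)
```

A smoothing filter against flicker could be applied in the same preprocessing stage before downscaling.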
Here, an example of the hardware configuration of the anomaly detection system 101 will be described with reference to FIG. 7. The anomaly detection system 101 includes: a processing unit Prc composed of processors such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit); a storage unit Mem including memory devices such as ROM (Read Only Memory) and RAM (Random Access Memory) and storage devices such as a hard disk HD or DVD; and a communication unit Com, a network interface for exchanging signals with the imaging device 201 and the device control device 208. These are connected to each other via transmission paths such as a system bus Bus, which includes internal, external, and expansion buses.
The processing unit Prc has one or more processors (or cores), together with their peripheral circuits, capable of executing multiple programs in parallel. The processing unit Prc includes an overall control unit 209 that supervises the overall operation of the anomaly detection system 101, exchanges control signals and information signals (data) with the other components described above, and performs the various kinds of arithmetic processing required for the processing, execution, and control of the anomaly detection system 101. To this end, the processing unit Prc is configured to perform, on a storage area that can be accessed at high speed, arithmetic operations such as addition, subtraction, multiplication, and division using a numerical operation unit or the like, logical operations such as the logical product, and vector operations according to a trained model.
The storage unit Mem includes various types of memory devices and storage devices according to use, and constitutes part of the recording device 205. For example, the ROM generally stores an IPL (Initial Program Loader) that is executed first after the power is turned on. When the IPL is read into the processing unit Prc and executed, the programs, data, trained models, and the like stored in a storage device such as the hard disk HD are written by the overall control unit 209 to the RAM, which stores them temporarily, and those programs are then executed by the overall control unit 209. The programs stored in the storage unit Mem are an operating system program and the programs, modules, and trained models required by the anomaly detection system 101. The operating system program may be MICROSOFT (registered trademark) WINDOWS (registered trademark), LINUX (registered trademark), UNIX (registered trademark), or the like, and is not particularly limited as long as the anomaly detection system 101 can be executed.
The processing unit Prc reads the programs required by the anomaly detection system 101 from the storage unit Mem and realizes the functions of the anomaly detection system 101, such as the image acquisition unit 202, the image processing unit 203, the recording control unit 204, and the display control unit 206 that controls the display output device 207. In this way, the hardware described above and the software required by the anomaly detection system 101 cooperate to implement the processing and operations specific to the anomaly detection system 101. Note that the anomaly detection system 101 is not limited to the hardware described above; it may instead be implemented with hardware other than a general-purpose computer system, such as a Digital Signal Processor (DSP), Field-Programmable Gate Array (FPGA), or Graphics Processing Unit (GPU).
The image processing unit 203 will be described with reference to FIG. 3. The image processing unit 203 receives an image or the like (first image) from the image acquisition unit 202, outputs a determination result to the device control device 208, and, using that determination result, outputs an image or the like to the recording control unit 204. The image processing unit 203 includes: an image processing unit 301 that outputs an image (second image) obtained by processing the image (first image) input from the image acquisition unit 202; an AE processing unit 303 that outputs an image (third image) processed by a trained autoencoder; a calculation processing unit 304 that outputs an image (fourth image) obtained by calculating the difference between the input image (first image) and the image processed by the AE processing unit 303 (third image); and a determination unit 305 that determines normality/abnormality based on the difference image (fourth image).
The image processing unit 301 has a function of modifying the input image in various ways. For example, the image processing unit 301 may process the entire image or only part of it. The image processing unit 301 may also process the image using various methods such as brightness conversion, blurring, edge enhancement, and alpha blending. Brightness conversion changes the brightness, one of the three components that constitute the image's color space; it may also be a more fine-grained operation, such as changing the saturation for each hue. Blurring computes each pixel's value (RGB value) together with its surrounding pixels using a filter such as a Gaussian filter, an averaging filter, or a median filter; the direction and extent of the neighborhood are chosen as appropriate.
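The blurring described above can be sketched as a minimal 3x3 averaging filter in NumPy (an illustrative implementation, not the patent's own code; the image values and kernel size are arbitrary):

```python
import numpy as np

def mean_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Blur a 2-D grayscale image by averaging each pixel with its k*k
    neighborhood (edges handled by replicating the border pixels)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# A sharp step edge is softened by the filter
img = np.zeros((5, 6))
img[:, 3:] = 90.0
blurred = mean_blur(img)
print(blurred[2])  # interior row: 0, 0, 30, 60, 90, 90
```

A Gaussian or median filter differs only in how the neighborhood values are weighted or combined; the sliding-window structure is the same.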
Edge enhancement converts pixels using filters so that the image becomes sharper, enlarging the portions of the image where the change (gradient) of pixel values (luminance) is large. Such filters include, for example, the Prewitt filter, Sobel filter, Laplacian filter, and sharpening filter; the direction of edge detection and the magnitude of the gradient are chosen as appropriate. Alpha blending multiplies an image by alpha values (transparency information) to superimpose a translucent image on it; the blended image may be any texture.
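The alpha blending operation can be sketched as a per-pixel weighted sum; the base image, texture, and alpha value below are arbitrary placeholders, not values taken from the patent:

```python
import numpy as np

def alpha_blend(base: np.ndarray, overlay: np.ndarray, alpha: float) -> np.ndarray:
    """Superimpose `overlay` on `base` with transparency `alpha`
    (alpha = 0 keeps only the base, alpha = 1 shows only the overlay)."""
    return (1.0 - alpha) * base.astype(float) + alpha * overlay.astype(float)

base = np.full((4, 4), 200.0)   # bright background
texture = np.zeros((4, 4))      # dark texture to blend in
half = alpha_blend(base, texture, alpha=0.5)
print(half[0, 0])  # 100.0
```

An alpha that varies per pixel (an alpha mask) gives the partial, patterned processing described later.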
The image processing unit 301 may apply the processing methods described above to the entire image or to part of it. When only part of the image is processed, the processed region may be a regular pattern such as vertical stripes, horizontal stripes, or a lattice; random spots such as dots or islands; or a circular or spiral shape; the shape is not particularly limited. When the image is partially processed, as described later, a detection target (such as a flaw or defect) present in the object under determination appears in the third image, so the corresponding portion is emphasized in the fourth image, which prevents such a detection target from being overlooked.
The image processing unit 301 may output a plurality of images (second images) by applying a plurality of different processing methods to the input image. In this case, the AE processing unit 303 in the next stage processes the plurality of processed images and outputs a plurality of images (third images), one corresponding to each of them. In addition to the processed images, the image processing unit 301 may also pass the input image to the AE processing unit 303 as-is, without processing it. In that case, the AE processing unit 303 processes at least the processed image (second image) out of the unprocessed image (first image) and the processed image (second image).
When the image processing unit 301 processes the input image using a plurality of processing methods, it preferably applies any two or more different methods from the set of brightness conversion, blurring, edge enhancement, and alpha blending. By applying a plurality of different processing methods to the input image (first image) under determination in this way, a plurality of differently processed images can be fed to the AE processing unit 303.
Furthermore, when the image processing unit 301 performs two or more different kinds of processing, the processing is preferably applied to different portions of the image. By processing different portions of the input image (first image) differently in this way, a plurality of images whose different portions have been processed differently can be fed to the AE processing unit 303. For example, as shown in FIG. 4, the image processing unit 301 applies alpha blending with a spiral mask to the input image 401 as processing 1, applies blurring in horizontal stripes as processing N-1, and outputs these images together with the unprocessed input image. In this way, the image processing unit 301 intentionally adds noise-like modifications to the input image and outputs the results as processed images 402.
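The horizontal-stripe partial processing shown in FIG. 4 can be sketched by combining a boolean stripe mask with a whole-image operation; the stripe period and the 3-tap box blur below are illustrative choices, not the patent's parameters:

```python
import numpy as np

def stripe_mask(shape, period: int = 4) -> np.ndarray:
    """Boolean mask that is True on every other horizontal band of `period` rows."""
    rows = np.arange(shape[0])
    return ((rows // period) % 2 == 0)[:, None] & np.ones(shape, dtype=bool)

def blur_rows(img: np.ndarray) -> np.ndarray:
    """Crude vertical 3-tap box blur (edge rows replicated)."""
    padded = np.pad(img.astype(float), ((1, 1), (0, 0)), mode="edge")
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

img = np.zeros((8, 8))
img[4, 4] = 90.0                                   # a tiny "defect" pixel
mask = stripe_mask(img.shape)
processed = np.where(mask, blur_rows(img), img)    # blur only the striped bands
```

A spiral or lattice mask works the same way: only the mask-generation function changes, while `np.where` selects between the processed and untouched pixels.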
The AE processing unit 303 processes the processed images 402 output by the image processing unit 301 using a trained autoencoder. An autoencoder consists of an encoder and a decoder, each built from a neural network. The encoder outputs a dimensionally compressed feature representation of the input data, and the decoder reconstructs the input data from that representation. During machine learning, the input to the autoencoder is compared with its output to compute an error, and the network weights are adjusted, for example by error backpropagation, so as to minimize that error, that is, so that the output of the autoencoder matches its input.
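The training loop just described can be sketched with a tiny fully connected autoencoder trained by backpropagation to reproduce its input. This is an illustrative toy model in NumPy, not the patent's learning model 302; the layer sizes, data, and learning rate are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny autoencoder: 8 inputs -> 3-unit latent code -> 8 outputs
n_in, n_hid = 8, 3
W_enc = rng.normal(0, 0.5, (n_in, n_hid))   # encoder weights
W_dec = rng.normal(0, 0.5, (n_hid, n_in))   # decoder weights

def forward(x):
    z = np.tanh(x @ W_enc)      # compressed feature (latent code)
    x_hat = z @ W_dec           # reconstruction of the input
    return z, x_hat

# "Normal" training data only, as in the anomaly-detection setting
X = rng.normal(0, 0.5, (64, n_in))

lr = 0.01
losses = []
for _ in range(300):
    z, X_hat = forward(X)
    err = X_hat - X                           # reconstruction error
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared reconstruction error
    g_dec = z.T @ err / len(X)
    g_z = err @ W_dec.T * (1 - z ** 2)        # tanh derivative
    g_enc = X.T @ g_z / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# The final loss is lower than the initial loss: the network has learned
# to reproduce "normal" inputs at its output.
print(losses[0], losses[-1])
```

Because the model is trained only on normal data, inputs containing an abnormality are reconstructed toward something normal-looking, which is what makes the later difference step informative.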
When an autoencoder is used for anomaly detection, as shown in FIG. 8, it is trained using only image data of normal products that contain no abnormality. When such a trained autoencoder receives an image of a normal screw, it reconstructs and outputs the normal screw; when it receives an image of a screw that partly contains an abnormality such as a flaw, it likewise attempts to output an image with the abnormal portion removed, that is, an image of a normal screw. Therefore, by comparing the image containing the abnormality with the normal-looking output image and extracting the difference, anomaly detection that pinpoints the abnormal location can be performed. However, when the abnormal portion to be detected occupies an extremely small region of the image, it is difficult to detect with an ordinary autoencoder. The present invention solves this problem.
Note that the structure of the intermediate layers in the trained autoencoder's learning model 302 is not restricted in terms of layer depth or the size and number of filters. The autoencoder may also be replaced with a related autoencoder-based technique such as a DAE (denoising autoencoder) or VAE (variational autoencoder). The learning model 302 of the trained autoencoder is stored in the recording device 205 or the like and, when the detection device 100 is in operation, is loaded into the processing unit Prc as part of the image processing unit 203.
As shown in FIG. 4, when the AE processing unit 303 receives the processed image 402 that underwent processing 1, the trained autoencoder described above outputs the AE processing result image 403 of AE result 2. Compared with the processed image 402 of processing 1, this image is closer to a normal product, but because of the spiral mask it cannot be reconstructed completely and parts of it look unnatural. Similarly, when the AE processing unit 303 receives the processed image 402 that underwent processing N-1, it outputs the AE processing result image 403 of AE result N; this image, too, is closer to a normal product than the processed image 402 of processing N-1, but because of the horizontal-stripe mask it cannot be reconstructed completely and parts of it look unnatural.
In addition, in these reconstructed AE processing result images 403, just as with the spiral and horizontal-stripe masks, the abnormal portion not found in a normal screw (in this figure, the crushed part of the cross-shaped tool groove in the screw head) cannot be reconstructed completely and is reconstructed in a partly unnatural form. Of course, even for the unprocessed image, the AE processing unit 303 is considered unable to completely reconstruct part of the abnormal portion, however small it is, and to reconstruct it in a partly unnatural form. Put the other way around, the AE processing unit 303 reconstructs part of the abnormal portion into a normal, natural-looking form.
Note that when the image processing unit 301 also outputs the unprocessed input image, the AE processing unit 303 processes at least the processed image out of the unprocessed input image and the processed image. Also, as shown in FIG. 3, a plurality of AE processing units 303 are preferably provided to parallelize and speed up the computation.
As shown in FIG. 5, the calculation processing unit 304 calculates the difference between the input image 401 (first image) acquired by the image acquisition unit 202 and the AE processing result image 403 (third image) reconstructed and output by the AE processing unit 303, producing a difference result image 404 (fourth image). The difference result image 404 is obtained, for example, by measuring the color-space distance between each pair of corresponding pixels and highlighting the pixels whose distance is nonzero or exceeds a predetermined threshold. The threshold is chosen so as to make the difference stand out.
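The per-pixel color-space distance and thresholding can be sketched as follows (Euclidean distance in RGB space; the threshold value and the example images are arbitrary, not taken from the patent):

```python
import numpy as np

def diff_image(a: np.ndarray, b: np.ndarray, threshold: float = 10.0) -> np.ndarray:
    """Per-pixel Euclidean distance in RGB space between images `a` and `b`
    (shape H x W x 3); pixels whose distance exceeds `threshold` are marked 1."""
    dist = np.linalg.norm(a.astype(float) - b.astype(float), axis=-1)
    return (dist > threshold).astype(np.uint8)

original = np.zeros((4, 4, 3))
restored = original.copy()
restored[1, 2] = [30.0, 0.0, 0.0]      # one pixel the autoencoder "repaired"
highlight = diff_image(original, restored)
print(highlight.sum())  # 1 -- only the differing pixel is marked
```

Other color-space distances (for example, in HSV or Lab space) could be substituted without changing the thresholding step.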
In this figure, the calculation processing unit 304 computes the difference between the input image 401 and the AE processing result image 403 of AE result 2, producing the difference result image 404 of difference result 2. This image shows the difference between the input image 401 and a reconstruction that, because of the spiral mask, could not be reconstructed completely and looks partly unnatural. Similarly, the difference result image 404 of difference result N shows the difference between the input image 401 and a reconstruction that, because of the horizontal-stripe mask, could not be reconstructed completely and looks partly unnatural.
The difference result image 404 of difference result 1, which corresponds to the case where the AE processing unit 303 also processes the unprocessed image, shows the difference between the input image 401 and a reconstruction in which part of the abnormal portion, however small, could not be reconstructed completely and looks partly unnatural, while another part of the abnormal portion was reconstructed into a normal, natural-looking form. When the input image 401 contains an abnormal portion, the difference contained in the difference result image 404 of difference result 1 can also be contained in the other difference result images 404, such as those of difference result 2 and difference result N. In this figure, the image processing unit 301 outputs a plurality of processed images 402 and the AE processing unit 303 processes them, producing a plurality of difference result images 404; when only one image is processed, the calculation processing unit 304 outputs a single difference result image 404 (fourth image).
When the AE processing unit 303 processes a plurality of images, the calculation processing unit 304 generates a plurality of difference result images 404 by calculating the differences between the input image 401 and the plurality of AE processing result images 403 reconstructed by the AE processing unit 303, and combines these difference result images 404 into a single combined result image 405 (fourth image). Various combining methods are conceivable, but if the goal is to emphasize the differences common to the plurality of difference result images 404, the calculation processing unit 304 preferably combines them by computing the logical product of the calculated difference result images 404 and outputs one combined result image 405 (fourth image). The combined result image 405 is then an image in which even an extremely small detection target is emphasized because the corresponding portions overlap.
For example, as shown in this figure, the difference contained in the difference result image 404 of difference result 1, that is, the difference caused by the abnormal portion, can also be contained in the difference result images 404 of difference results 2, 3, ..., and N. Consequently, in the single combined result image 405 obtained by computing the logical product of all these difference result images 404, even a difference that is extremely small in each individual difference result image 404 is computed as a large difference.
When the AE processing unit 303 has processed a plurality of images, the calculation processing unit 304 operates, for example, as in the flowchart of FIG. 6. In S100, the calculation processing unit 304 acquires the difference result images 404 of the plural difference results 1 to N output by the AE processing unit 303. In S102, for each difference result image 404, the calculation processing unit 304 converts, based on a predetermined threshold A, pixel values at or above threshold A to 1 (white) and pixel values below threshold A to 0 (black); the portions at or above threshold A are thereby emphasized. In S104, the calculation processing unit 304 computes the logical product (AND) of the corresponding pixels of the converted difference result images 404. To emphasize the differences further, the calculation processing unit 304 may use a different threshold B, setting the result of the logical product for each pixel to 1 (white) when the value is at or above threshold B and to 0 (black) when it is below threshold B.
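The S102/S104 flow described above (binarize each difference image with threshold A, then AND the corresponding pixels) can be sketched as follows; the threshold value and the example arrays are illustrative:

```python
import numpy as np

def combine_diffs(diffs, threshold_a: float) -> np.ndarray:
    """S102: binarize each difference image (>= threshold_a -> 1, else 0).
    S104: logical AND across all binarized images, keeping only the
    differences common to every difference result."""
    binary = [(d >= threshold_a).astype(np.uint8) for d in diffs]
    combined = binary[0]
    for b in binary[1:]:
        combined = np.logical_and(combined, b).astype(np.uint8)
    return combined

# Three difference images: the true defect at (1, 1) appears in all of them,
# while mask artifacts appear only in individual images.
d1 = np.zeros((3, 3)); d1[1, 1] = 50; d1[0, 0] = 40   # defect + artifact
d2 = np.zeros((3, 3)); d2[1, 1] = 45; d2[2, 2] = 60   # defect + other artifact
d3 = np.zeros((3, 3)); d3[1, 1] = 55                  # defect only
result = combine_diffs([d1, d2, d3], threshold_a=30)
# Only the common defect at (1, 1) survives the AND
```

The AND suppresses the per-mask reconstruction artifacts, which differ between images, while the real defect, present in every difference image, is preserved.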
When the AE processing unit 303 processes a single image, the determination unit 305 determines normality/abnormality based on the single difference result image 404 (fourth image); in this case, a single difference result image 404 other than difference result 1 is output. When the AE processing unit 303 processes a plurality of images, the determination unit 305 determines normality/abnormality based on the single combined result image 405 (fourth image). By determining in this way based on a difference result image (fourth image) that shows the difference between the input image 401 (first image) under determination and the AE processing result image 403 (third image) obtained by having the autoencoder process the processed image 402 (second image) derived from it, a detection device 100 can be provided that prevents a detection target from being overlooked in the normal/abnormal determination even when the target to be detected occupies an extremely small region of the image.
In particular, by combining a plurality of different AE processing result images 403 (third images), obtained by having the autoencoder process a plurality of differently processed images 402 (second images), into a single combined result image 405 (fourth image) and determining based on that image, even an extremely small detection target is emphasized in calculation and display because the corresponding portions overlap, and such a detection target is prevented from being overlooked. Moreover, as described above, because the input image 401 (first image) under determination is processed in a plurality of different ways, a detection target such as a flaw present in the object under determination appears in common in all the AE processing result images 403 (third images); even a small detection target is therefore emphasized in the combined result image 405 (fourth image) through the overlap of the corresponding portions, preventing it from being overlooked.
What has been described above is also a detection method for detecting the normality/abnormality of a detection target contained in an image. This detection method includes: a step of acquiring an input image 401 (first image) showing a detection target such as a screw; a step of outputting a processed image 402 (second image) obtained by processing the input image; a step of outputting, with a trained autoencoder, an AE processing result image 403 (third image) obtained by processing at least the processed image 402 out of the input image 401 and the processed image 402; a step of calculating the difference between the input image 401 and the AE processing result image 403 and outputting a difference result image 404 (fourth image); and a step of determining normality/abnormality based on the difference result image 404. By determining based on the difference result image 404, which shows the difference between the input image 401 under determination and the AE processing result image 403 obtained by having the autoencoder process the processed image 402 derived from it, this detection method prevents a detection target from being overlooked in the normal/abnormal determination even when the target to be detected occupies an extremely small region of the image.
According to the present invention, in an anomaly detection system 101 for manufactured products that uses an industrial camera or the like, defects such as extremely small flaws, chips, and machining deviations hidden among the products can be detected without being overlooked.
The present invention is not limited to the illustrated embodiments and can be implemented with configurations that do not depart from the scope of the claims. That is, although the present invention has been particularly shown and described mainly with respect to specific embodiments, those skilled in the art can make various modifications to the embodiments described above in terms of quantities and other details of the configuration without departing from the technical spirit and purpose of the present invention.
100: detection device, 101: anomaly detection system, 102: screw (detection target), 103: manufacturing line, 201: imaging device, 202: image acquisition unit, 203: image processing unit, 204: recording control unit, 205: recording device, 206: display control unit, 207: display output device, 208: device control device, 209: overall control unit, 301: image processing unit, 302: learning model, 303: AE processing unit, 304: calculation processing unit, 305: determination unit, 401: input image (first image, determination target), 402: processed image (second image), 403: AE processing result image (third image), 404: difference result image (fourth image), 405: combined result image (fourth image)
Claims (7)
- A detection device for detecting the normality/abnormality of a detection target contained in an image, comprising:
an image acquisition unit that acquires a first image containing the detection target;
an image processing unit that outputs a second image obtained by processing the first image;
an AE processing unit that outputs, with a trained autoencoder, a third image obtained by processing at least the second image out of the first image and the second image;
a calculation processing unit that calculates a difference between the first image and the third image and outputs a fourth image; and
a determination unit that determines normality/abnormality based on the fourth image.
- The detection device according to claim 1, wherein the image processing unit outputs the second image obtained by partially processing the first image.
- The detection device according to claim 1 or 2, wherein the image processing unit outputs a plurality of second images by applying a plurality of different kinds of processing to the first image, the AE processing unit processes the plurality of second images and outputs a plurality of third images corresponding respectively to the plurality of second images, and the calculation processing unit calculates differences between the first image and the plurality of third images and outputs a single fourth image.
- The detection device according to claim 3, wherein the calculation processing unit outputs the single fourth image by computing the logical product of the plurality of calculated differences.
- The detection device according to claim 3 or 4, wherein the image processing unit performs any two or more different kinds of processing from the set of brightness conversion processing, blurring processing, edge enhancement processing, and alpha blending processing.
- The detection device according to any one of claims 1 to 5, further comprising an imaging device that images the detection target, wherein the image acquisition unit acquires the first image from the imaging device.
- A detection method for detecting the normality/abnormality of a detection target contained in an image, comprising:
a step of acquiring a first image containing the detection target;
a step of outputting a second image obtained by processing the first image;
a step of outputting, with a trained autoencoder, a third image obtained by processing at least the second image out of the first image and the second image;
a step of calculating a difference between the first image and the third image and outputting a fourth image; and
a step of determining normality/abnormality based on the fourth image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/012632 WO2022201451A1 (en) | 2021-03-25 | 2021-03-25 | Detection device and detection method |
JP2023508334A JP7436752B2 (en) | 2021-03-25 | 2021-03-25 | Detection device and detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/012632 WO2022201451A1 (en) | 2021-03-25 | 2021-03-25 | Detection device and detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022201451A1 true WO2022201451A1 (en) | 2022-09-29 |
Family
ID=83395468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/012632 WO2022201451A1 (en) | 2021-03-25 | 2021-03-25 | Detection device and detection method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7436752B2 (en) |
WO (1) | WO2022201451A1 (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019188040A1 (en) * | 2018-03-29 | 2019-10-03 | NEC Corporation | Image processing device, image processing method, and image processing program |
WO2020031984A1 (en) * | 2018-08-08 | 2020-02-13 | Blue Tag Co., Ltd. | Component inspection method and inspection system |
JP2020067865A (en) * | 2018-10-25 | 2020-04-30 | Allm Inc. | Image processing apparatus, image processing system, and image processing program |
US20200250812A1 (en) * | 2019-01-31 | 2020-08-06 | Siemens Healthcare Limited | Method and system for image analysis |
JP2020140580A (en) * | 2019-02-28 | 2020-09-03 | Nippon Telegraph and Telephone Corporation | Detection device and detection program |
WO2020184069A1 (en) * | 2019-03-08 | 2020-09-17 | NEC Corporation | Image processing method, image processing device, and program |
JP2020187735A (en) * | 2019-05-13 | 2020-11-19 | Fujitsu Limited | Surface defect identification method and apparatus |
2021
- 2021-03-25 WO PCT/JP2021/012632 patent/WO2022201451A1/en active Application Filing
- 2021-03-25 JP JP2023508334A patent/JP7436752B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
JPWO2022201451A1 (en) | 2022-09-29 |
JP7436752B2 (en) | 2024-02-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020031984A1 (en) | Component inspection method and inspection system | |
US8437566B2 (en) | Software methodology for autonomous concealed object detection and threat assessment | |
US11386549B2 (en) | Abnormality inspection device and abnormality inspection method | |
US20170103510A1 (en) | Three-dimensional object model tagging | |
CN111402146A (en) | Image processing method and image processing apparatus | |
JP6046927B2 (en) | Image processing apparatus and control method thereof | |
JP2005122361A (en) | Image processor, its processing method, computer program, and recording medium | |
US11074742B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US11615520B2 (en) | Information processing apparatus, information processing method of information processing apparatus, and storage medium | |
US20100215266A1 (en) | Image processing device and method, and program recording medium | |
KR102374840B1 (en) | Defect image generation method for deep learning and system therefor | |
US20120206593A1 (en) | Defect Detection Apparatus, Defect Detection Method, And Computer Program | |
US20200118250A1 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable storage medium | |
US9836818B2 (en) | Method and device for color interpolation | |
WO2022130814A1 (en) | Index selection device, information processing device, information processing system, inspection device, inspection system, index selection method, and index selection program | |
CN112073718A (en) | Television screen splash detection method and device, computer equipment and storage medium | |
WO2021124791A1 (en) | State determination device and state determination method | |
WO2022201451A1 (en) | Detection device and detection method | |
US8452090B1 (en) | Bayer reconstruction of images using a GPU | |
KR20070063781A (en) | Method and apparatus for image adaptive color adjustment of pixel in color gamut | |
JP2007013231A (en) | Device, method and program for compensating shading of image | |
WO2019106877A1 (en) | Image processing device, image processing method, and program | |
JP5400087B2 (en) | Image processing apparatus, image processing method, and program | |
JP4629629B2 (en) | Digital camera false color evaluation method, digital camera false color evaluation apparatus, and digital camera false color evaluation program | |
JP4419726B2 (en) | Image processing apparatus and extraction color setting method suitable for the same |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21933058 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2023508334 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21933058 Country of ref document: EP Kind code of ref document: A1 |