WO2023162940A1 - Inspection device, inspection method, and inspection program - Google Patents

Inspection device, inspection method, and inspection program

Info

Publication number
WO2023162940A1
Authority
WO
WIPO (PCT)
Prior art keywords
inspection
region
captured image
area
image
Prior art date
Application number
PCT/JP2023/006090
Other languages
French (fr)
Japanese (ja)
Inventor
麻理恵 神田
和久 大沼
貴之 藤堂
Original Assignee
i-PRO Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by i-PRO Co., Ltd.
Publication of WO2023162940A1 publication Critical patent/WO2023162940A1/en


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84: Systems specially adapted for particular applications
    • G01N21/88: Investigating the presence of flaws or contamination
    • G01N21/95: Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956: Inspecting patterns on the surface of objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • the present invention relates to an inspection device, an inspection method, and an inspection program.
  • Patent Literature 1 discloses a technique for obtaining an optimal illumination pattern for detecting so-called defects in an inspection object, such as damage and defective products, based on a group of images captured using a plurality of illumination patterns. Further, Patent Literature 2 discloses a technique for obtaining an illumination pattern that emphasizes defects of inspection objects while suppressing non-defective product variations and individual variations among the inspection objects.
  • the techniques of Patent Documents 1 and 2 are effective for detecting defects in an inspection object included in a group of captured images.
  • when Patent Documents 1 and 2 are applied to an entire board treated as one inspection target, a single illumination pattern is obtained for the whole board.
  • however, the illumination pattern obtained when the entire board is treated as one inspection object is not necessarily optimal for each of the plurality of components mounted on the board; therefore, the desired inspection accuracy may not be obtained.
  • the present invention has been made to solve such problems, and aims to improve the accuracy of appearance inspection for a plurality of inspection objects.
  • An inspection apparatus includes one or more processors, a memory, and a program stored in the memory. When executed by the one or more processors, the program performs: setting, in an inspection target area in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object; causing a camera that captures the inspection target area to output, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition; and inspecting the first inspection object based on a learning model for detecting abnormalities in the plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first attention area, and inspecting the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • An inspection method in an inspection device includes: setting, in an inspection target area in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object; imaging the inspection target area and outputting, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition; and inspecting the first inspection object based on a learning model for detecting abnormalities in the plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first attention area, and inspecting the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • An inspection program causes a computer to execute: setting, in an inspection target area in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object; causing a camera that captures the inspection target area to output, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition; and inspecting the first inspection object based on a learning model for detecting abnormalities in the plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first attention area, and inspecting the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • According to the present invention, the accuracy of appearance inspection for a plurality of inspection objects is improved.
  • FIG. 1 is a block diagram showing a configuration example of an inspection system according to Embodiment 1.
  • FIG. 2 is a block diagram showing an example of the functional configuration of the inspection apparatus according to Embodiment 1.
  • FIG. 3 is a block diagram illustrating an example of a functional configuration of a management device according to Embodiment 1;
  • FIG. 4 is a diagram showing an example of an attention area set on a work according to the first embodiment.
  • FIG. 5 is a diagram showing an example of an imaging pattern according to Embodiment 1.
  • FIG. 6 is a diagram showing an example of an illumination pattern according to Embodiment 1.
  • FIG. 7 is a sequence chart showing pre-inspection operations in the inspection system according to the first embodiment.
  • FIG. 8 is a sequence chart showing inspection operations in the inspection system according to the first embodiment.
  • FIG. 9 is a flowchart illustrating an example of pre-inspection processing according to Embodiment 1.
  • FIG. 10 is a flow chart showing a detailed example of the optimum condition determination process shown in FIG. 9.
  • FIG. 11 is a flowchart illustrating an example of inspection processing according to Embodiment 1.
  • FIG. 12 is a schematic diagram showing an example of a UI screen for generating and adjusting an imaging pattern and an illumination pattern according to Embodiment 1.
  • FIG. 13 is a schematic diagram showing an example of a UI screen for setting an attention area according to Embodiment 1.
  • FIG. 14 is a diagram showing a configuration example of attention area information according to Embodiment 1.
  • FIG. 15 is a schematic diagram showing an example of a UI screen for confirming a list of work inspection results according to the first embodiment.
  • FIG. 16 is a schematic diagram showing an example of a UI screen for confirming inspection results in detail according to the first embodiment.
  • FIG. 17 is a schematic diagram showing an example of reducing the size of the work image to the size of the image that can be input to the learning model.
  • FIG. 18 is a schematic diagram showing an example in which a plurality of cameras divides a workpiece into image sizes that can be input to a learning model and captures the images.
  • FIG. 19 is a block diagram illustrating an example of a functional configuration of an inspection apparatus according to Embodiment 2.
  • FIGS. 20A and 20B are diagrams illustrating an example of a first method for generating a synthesized image according to Embodiment 2.
  • FIGS. 21A and 21B are diagrams illustrating an example of a second method for generating a synthesized image according to Embodiment 2.
  • FIG. 22 is a flowchart illustrating an example of pre-inspection processing according to Embodiment 2.
  • FIG. 23 is a flow chart showing an example of the optimum condition determination process shown in FIG. 22.
  • FIG. 24 is a flowchart illustrating an example of inspection processing according to Embodiment 2.
  • FIG. 25 is a flowchart showing a modification of inspection processing according to the second embodiment.
  • FIG. 1 is a block diagram showing a configuration example of an inspection system 10 according to Embodiment 1.
  • as shown in FIG. 1, the inspection system 10 is a system for automatically inspecting whether or not the component 2 is normally mounted on the board 1.
  • the board 1 to be inspected, on which the component 2 is mounted, is called a work 3 (see FIG. 4).
  • one component 2a may be referred to as a first inspection object, and another component 2b may be referred to as a second inspection object.
  • the inspection system 10 includes an inspection device 20, an illumination device 30, a detection sensor 19, a management device 40, an input device 50, a speaker 52, a display device 60, and a patrol light (Patlite (registered trademark)) 62.
  • the inspection device 20 captures an image of the workpiece 3 to be inspected, and inspects whether or not the component 2 is normally mounted on the board 1 based on the captured image.
  • a captured image of the workpiece 3 is hereinafter referred to as a workpiece image 4 (see FIG. 4).
  • the illumination device 30 illuminates the work 3 when the inspection device 20 images the work 3 .
  • the lighting device 30 is connected to the inspection device 20 via a predetermined cable 12 . Note that the illumination device 30 may be read as a light.
  • the management device 40 operates the inspection device 20 and displays inspection results.
  • the management device 40 is, for example, an information processing device represented by a PC (Personal Computer).
  • the management device 40 is connected to the inspection device 20 via a predetermined communication network 11 .
  • the communication network 11 may be either a wired LAN or a wireless LAN.
  • the input device 50 receives input operations from the user. Examples of the input device 50 include a keyboard, mouse, touchpad, microphone, or the like.
  • the input device 50 is connected to the management device 40 via a predetermined cable 13 or wireless communication (for example, Bluetooth (registered trademark)). Note that the input device 50 may be connected to the inspection device 20 .
  • the speaker 52 outputs audio related to the examination.
  • the speaker 52 is connected to the management device 40 via a predetermined cable 16.
  • the display device 60 displays a screen regarding inspection.
  • Examples of the display device 60 include an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display.
  • a display device 60 is connected to the management device 40 via a predetermined cable 14 .
  • the display device 60 may be connected to the inspection device 20 .
  • the input device 50, the speaker 52, and the display device 60 may be an integrated device (such as a tablet terminal, for example).
  • the management device 40, the input device 50, the speaker 52, and the display device 60 may be an integrated device.
  • the detection sensor 19 is a sensor for detecting the arrival of the workpiece 3 to be inspected.
  • the detection sensor 19 is connected to the inspection device 20 via a predetermined cable 15 . Note that the detection sensor 19 may be connected to the management device 40 .
  • the patrol light 62 is connected to the inspection device 20 via a predetermined cable 17.
  • the patrol light 62 blinks when receiving a predetermined signal transmitted according to the inspection result from the inspection device 20 .
  • the inspection apparatus 20 transmits a predetermined signal to the patrol light 62 when the abnormally attached component 2 is detected.
  • the patrol light 62 receives the signal and blinks. Accordingly, the user can immediately know that the inspection apparatus 20 has detected an abnormally mounted component 2 by seeing the flashing of the patrol light 62 .
  • in the present embodiment, the inspection device 20, the lighting device 30, and the management device 40 are separate devices.
  • inspection device 20 may include illumination device 30 .
  • inspection device 20 may include management device 40 .
  • inspection device 20 may include illumination device 30 and management device 40 .
  • the inspection apparatus 20 includes a camera 21, one or more processors 22, a ROM (Read Only Memory) 23, a RAM (Random Access Memory) 24, a storage 25, a communication I/F (Interface) 26, and an input/output I/F 27.
  • the camera 21, the processor 22, the ROM 23, the RAM 24, the storage 25, the communication I/F 26, and the input/output I/F 27 are connected via a bidirectional communicable bus (not shown).
  • the camera 21 includes, for example, a lens and an imaging device.
  • the camera 21 captures an image of the workpiece 3 to be inspected and generates a workpiece image 4 .
  • the camera 21 may be read as another term such as an imaging device or an imaging unit.
  • the processor 22 controls the operation of the inspection device 20 as a whole.
  • the processor 22 may be read as other terms such as computing means, CPU (Central Processing Unit), or controller.
  • the ROM 23 is a read-only non-volatile storage medium in which programs such as firmware are stored.
  • the RAM 24 is a volatile storage medium that enables high-speed reading and writing of information, and is used as a work area when the processor 22 processes information. Note that the RAM 24 may simply be read as a memory.
  • the storage 25 is a non-volatile storage medium from which information can be read and written, and stores various control programs, application programs, learning models 110 (see FIG. 2), and the like.
  • the storage 25 is configured by, for example, a flash memory or an SD card.
  • the communication I/F 26 is an interface for connecting the inspection device 20 to the communication network 11.
  • the communication I/F 26 may be an interface compatible with either a wired LAN or a wireless LAN.
  • the input/output I/F 27 is an interface for connecting the lighting device 30 and the detection sensor 19 .
  • the input device 50 and/or the display device 60 may be connected to the input/output I/F 27 .
  • Management device 40 includes one or more processors 41 , ROM 42 , RAM 43 , storage 44 , communication I/F 45 and input/output I/F 46 .
  • Processor 41, ROM 42, RAM 43, storage 44, communication I/F 45, and input/output I/F 46 are connected via a bus (not shown) capable of two-way communication.
  • the processor 41 controls the overall operation of the management device 40.
  • the ROM 42 is a read-only non-volatile storage medium in which programs such as firmware are stored.
  • the RAM 43 is a volatile storage medium that enables high-speed reading and writing of information, and is used as a work area when the processor 41 processes information. Note that the RAM 43 may simply be read as a memory.
  • the storage 44 is a non-volatile storage medium from which information can be read and written, and stores an OS (Operating System), various control programs, application programs, learning models 110 (see FIG. 3), and the like.
  • the storage 44 is configured by, for example, flash memory, SSD (Solid State Drive), or HDD (Hard Disk Drive).
  • the communication I/F 45 is an interface for connecting the management device 40 to the communication network 11.
  • the communication I/F 45 may be an interface compatible with either a wired LAN or a wireless LAN.
  • the input/output I/F 46 is an interface for connecting the input device 50 and/or the display device 60.
  • the management device 40 may include a GPU (Graphics Processing Unit) for processing image drawing at high speed.
  • the illumination device 30 includes an LED (Light Emitting Diode) light source 31 , an input/output I/F 32 and a dimming control circuit 33 .
  • the LED light source 31 includes a plurality of LEDs and is capable of emitting light.
  • the illumination device 30 may have a plurality of LED light sources 31 with different shapes.
  • the illumination device 30 may include a bar-shaped LED light source 31 , a multi-angle-shaped LED light source 31 , a dome-shaped LED light source 31 , and a backlight-shaped LED light source 31 .
  • the illumination device 30 may include an LED light source 31 that emits infrared rays.
  • the input/output I/F 32 is an interface for connecting the inspection device 20 .
  • the dimming control circuit 33 controls light emission of the LED light source 31 based on instructions from the inspection device 20 received through the input/output I/F 32 .
  • the dimming control circuit 33 controls which LED light source 31 is to emit light, the color and intensity of illumination of the LED light source 31, and the like.
  • FIG. 2 is a block diagram showing an example of the functional configuration of the inspection device 20 according to Embodiment 1.
  • as shown in FIG. 2, the inspection apparatus 20 includes an imaging control unit 101, an optimum condition determination unit 102, an inspection execution unit 103, an attention area storage unit 104, an imaging condition storage unit 105, an optimum imaging condition storage unit 106, and a learning model storage unit 107.
  • the functions of the imaging control unit 101, the optimum condition determination unit 102, and the inspection execution unit 103 may be realized by the processor 22 cooperating with the RAM 24 (memory) and the like to execute a computer program (inspection program).
  • functions of the attention area storage unit 104, the imaging condition storage unit 105, the optimum imaging condition storage unit 106, and the learning model storage unit 107 may be realized by the RAM 24 (memory) and/or the storage 25.
  • the attention area storage unit 104 stores information (hereinafter referred to as attention area information) regarding a plurality of attention areas 5 (see FIG. 4) set on the workpiece 3 to be inspected.
  • the attention area 5 is a region set so as to surround a part 2, which is an inspection object, on the work 3 to be inspected.
  • a region on the work 3 in which a plurality of parts 2, which are inspection objects, exist may be referred to as an inspection target region. Details of the attention area 5 will be described later (see FIG. 4).
  • the imaging condition storage unit 105 stores information regarding a plurality of imaging conditions.
  • the imaging conditions include information about imaging by the camera 21 and information about lighting by the lighting device 30 . Details of the imaging conditions will be described later (see FIGS. 6 and 7).
  • the optimum imaging condition storage unit 106 stores, among the plurality of imaging conditions stored in the imaging condition storage unit 105, the imaging condition that is optimum for the workpiece 3 to be inspected (hereinafter referred to as the optimum imaging condition). Details of the optimum imaging conditions will be described later (see FIGS. 9 and 10).
  • the learning model storage unit 107 stores a learning model 110 used for detecting whether or not the component 2, which is the inspection object, is normally mounted in the attention area 5 of the work image 4 obtained by imaging the work 3.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 to image the workpiece 3 to be inspected and generate the workpiece image 4 .
  • the imaging control unit 101 may adjust the image quality of the workpiece image 4 .
  • the optimum condition determining unit 102 determines the optimum imaging condition (optimum imaging condition) for each attention area 5 of the workpiece image 4 to be inspected from among the plurality of imaging conditions stored in the imaging condition storage unit 105 .
  • the optimal condition determination unit 102 stores the determined optimal imaging conditions in the optimal imaging condition storage unit 106 .
  • the inspection execution unit 103 uses the work image 4 captured by the imaging control unit 101 under the optimum imaging conditions stored in the optimum imaging condition storage unit 106 to inspect whether or not the component 2 is normally mounted in the region corresponding to each attention area 5. The inspection execution unit 103 executes the inspection by inputting the image of the region corresponding to the attention area 5 of the workpiece image 4 to the learning model 110 stored in the learning model storage unit 107.
  • FIG. 3 is a block diagram showing an example of functional configuration of the management device 40 according to the first embodiment.
  • the management device 40 has, as functions, an attention area setting unit 201, an imaging pattern generation unit 202, an illumination pattern generation unit 203, an imaging condition generation unit 204, a learning model generation unit 205, a UI control unit 206, an attention area storage unit 207, an imaging pattern storage unit 208, an illumination pattern storage unit 209, an imaging condition storage unit 210, a learning model storage unit 211, and an inspection result storage unit 212.
  • the functions of the attention area setting unit 201, the imaging pattern generation unit 202, the illumination pattern generation unit 203, the imaging condition generation unit 204, the learning model generation unit 205, and the UI control unit 206 may be realized by the processor 41 cooperating with the RAM 43 (memory) and the like to execute a computer program.
  • the functions of the attention area storage unit 207, the imaging pattern storage unit 208, the illumination pattern storage unit 209, the imaging condition storage unit 210, the learning model storage unit 211, and the inspection result storage unit 212 may be realized by the RAM 43 (memory) and/or the storage 44.
  • the region-of-interest storage unit 207 stores information (region-of-interest information) on a plurality of regions of interest 5 set for each work 3 .
  • the imaging pattern storage unit 208 stores a plurality of imaging patterns. Details of the imaging pattern will be described later (see FIG. 5).
  • the illumination pattern storage unit 209 stores a plurality of illumination patterns. Details of the illumination pattern will be described later (see FIG. 6).
  • the imaging condition storage unit 210 stores information regarding a plurality of imaging conditions. Details of the imaging conditions will be described later.
  • the learning model storage unit 211 stores the learning model 110 used to detect whether or not the component 2 is normally mounted in the attention area 5 of the work image 4 .
  • the inspection result storage unit 212 stores information indicating the inspection result of the work 3 inspected by the inspection device 20 (hereinafter referred to as work inspection result information).
  • the attention area setting unit 201 sets the attention area 5 so as to surround the part 2 which is the inspection object of the workpiece 3, and stores information (attention area information) on the attention area 5 in the attention area storage unit 207.
  • the attention area setting unit 201 also acquires attention area information associated with the workpiece 3 to be inspected from the attention area storage unit 207 and transmits the attention area information to the inspection apparatus 20 .
  • the inspection apparatus 20 stores the transmitted attention area information in the attention area storage unit 104 .
  • the imaging pattern generation unit 202 generates a plurality of imaging patterns and stores them in the imaging pattern storage unit 208 . Details of the imaging pattern will be described later (see FIG. 5).
  • the illumination pattern generation unit 203 generates a plurality of illumination patterns and stores them in the illumination pattern storage unit 209. Details of the illumination pattern will be described later (see FIG. 6).
  • the imaging condition generation unit 204 generates imaging conditions by combining the imaging pattern stored in the imaging pattern storage unit 208 and the illumination pattern stored in the illumination pattern storage unit 209 .
  • the imaging condition generation unit 204 stores the generated imaging conditions in the imaging condition storage unit 210 .
  • the imaging condition generation unit 204 also transmits a plurality of imaging conditions stored in the imaging condition storage unit 210 to the inspection device 20 .
  • the inspection device 20 stores the imaging conditions received from the management device 40 in the imaging condition storage unit 105 .
  • the learning model generation unit 205 generates and trains the learning model 110 used to check whether the component 2 is normally mounted in the attention area 5.
  • the learning model generation unit 205 trains the learning model 110 using the feature amounts of the images (hereinafter referred to as attention area images) in the attention areas 5 of a plurality of work images 4 on which the parts 2 are normally mounted.
  • when an attention area image is input, the learning model 110 outputs, as an evaluation value, how much the feature amount of the input attention area image differs from the feature amount of an attention area image in which the part 2 is normally mounted.
  • in other words, when an attention area image is input, the learning model 110 outputs (that is, infers) a smaller evaluation value as the probability that the component 2 included in the attention area image is normally mounted is higher, and a larger evaluation value as the possibility that the component 2 is abnormally mounted is higher.
  • the learning model generation unit 205 stores the learned learning model 110 in the learning model storage unit 211 . Also, the learning model generation unit 205 transmits the learned learning model 110 to the inspection device 20 .
  • the inspection device 20 stores the learning model 110 received from the management device 40 in the learning model storage unit 107 .
  • the learning model 110 may be configured as a neural network for image analysis, a deep neural network, or a CNN (Convolutional Neural Network). However, the learning model 110 is not limited to these, and may be configured based on various artificial intelligence or machine learning techniques. As described above, the present embodiment exemplifies the case of using a learning model 110 that outputs a larger evaluation value as the possibility that the component 2 is abnormally mounted is higher. However, the present embodiment may instead be configured using a learning model 110 that outputs a larger evaluation value as the possibility that the component 2 is normally mounted is higher. Moreover, although the management device 40 has the learning model generation unit 205 in the present embodiment, the inspection device 20 may have the learning model generation unit 205.
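  • the evaluation value computation described above can be pictured with a short sketch. This is a minimal illustration, not the patent's actual model: it assumes a hypothetical extract_features embedding function and scores an attention area image by the distance of its features from the mean features of normally mounted samples, so that a larger value indicates a larger departure from normal, in line with the first convention exemplified above.

```python
import numpy as np

def fit_normal_features(normal_images, extract_features):
    """Learn feature statistics from attention area images of normally
    mounted parts (the training data described for learning model 110)."""
    feats = np.stack([extract_features(img) for img in normal_images])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-8

def evaluation_value(image, normal_mean, normal_std, extract_features):
    """Return how much the input image's features differ from the normal
    features; larger values suggest abnormal mounting."""
    feat = extract_features(image)
    # Normalized L2 distance from the mean of the normal features.
    return float(np.linalg.norm((feat - normal_mean) / normal_std))
```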
  • the UI control unit 206 generates a UI screen regarding examination and displays it on the display device 60 . Also, the UI control unit 206 receives input from the input device 50 and controls input and display of various information. For example, the UI control unit 206 generates a UI screen showing the inspection result of the work 3 based on the work inspection result information transmitted from the inspection device 20 and displays it on the display device 60 . Thereby, the user can visually recognize the inspection result for the workpiece 3 . Also, the UI control unit 206 stores workpiece inspection result information transmitted from the inspection apparatus 20 in the inspection result storage unit 212 . Details of the UI control unit 206 will be described later (see FIGS. 12, 13, 15, and 16).
  • FIG. 4 is a diagram showing an example of the attention area 5 set on the workpiece 3 according to the first embodiment.
  • the attention area setting unit 201 sets an attention area 5 (for example, attention areas 5a and 5b) so as to surround a part 2 (for example, parts 2a and 2b) that is an inspection target on the workpiece 3.
  • the part 2a may be read as the first inspection object, and the attention area 5a surrounding the part 2a may be read as the first attention area.
  • the part 2b may be read as the second inspection object, and the attention area 5b surrounding the part 2b may be read as the second attention area.
  • the shape of the attention area 5 may be rectangular. However, the shape of the attention area 5 is not limited to a rectangle, and may be polygonal, elliptical, or the like.
  • the attention area setting unit 201 sets the attention area 5 by, for example, one of the following methods (A1) and (A2).
  • (A1) the attention area setting unit 201 sets the attention area 5 based on the design information of the workpiece 3.
  • the design information is information that determines the position of the component 2 that is the object to be inspected on the board 1 . Therefore, the attention area setting unit 201 automatically identifies the position on the board 1 where the inspection target component 2 is mounted based on the design information, and sets the attention area 5 at the identified position.
  • (A2) the user manually sets the attention area 5 on the work image 4 through the UI screen provided by the UI control unit 206.
  • specifically, the user encloses the inspection target part 2 in the workpiece image 4 through the UI screen.
  • the attention area setting unit 201 sets the enclosed area as the attention area 5 of the component 2 .
  • the attention area setting unit 201 generates attention area information indicating the attention area 5 set by the method (A1) or (A2) above, and stores it in the attention area storage unit 207 .
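  • as a rough illustration of method (A1), the sketch below derives a rectangular attention area from a component's mounting position in hypothetical design information; the record fields, the pixels-per-millimetre scale, and the fixed margin are assumptions for illustration, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AttentionArea:
    part_name: str
    x1: int  # upper left corner on the work image, in pixels
    y1: int
    x2: int  # lower right corner on the work image, in pixels
    y2: int

def area_from_design(entry, px_per_mm, margin_px=10):
    """Set an attention area surrounding a component, using a hypothetical
    design record with the component's center and body size in millimetres."""
    cx, cy = entry["center_mm"]
    w, h = entry["size_mm"]
    return AttentionArea(
        part_name=entry["part_name"],
        x1=int((cx - w / 2) * px_per_mm) - margin_px,
        y1=int((cy - h / 2) * px_per_mm) - margin_px,
        x2=int((cx + w / 2) * px_per_mm) + margin_px,
        y2=int((cy + h / 2) * px_per_mm) + margin_px,
    )
```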
  • FIG. 5 is a diagram showing an example of an imaging pattern according to Embodiment 1.
  • An imaging pattern is a combination of multiple different imaging parameters.
  • examples of imaging parameters, as shown in FIG. 5, include shutter speed, maximum exposure time, lens aperture value, maximum gain, camera sensitivity, brightness, white balance red volume, white balance blue volume, contrast intensity, dark area correction, bright area correction, and pedestal level.
  • the imaging pattern generation unit 202 generates a plurality of imaging patterns with at least one different imaging parameter, for example, by either method (B1) or (B2) below.
  • (B1) the user manually adjusts each imaging parameter through the UI screen provided by the UI control unit 206 to generate a plurality of imaging patterns. For example, the user generates a first imaging pattern in which each imaging parameter is adjusted so that a dark-colored component 2 mounted in the first attention area 5 is appropriately imaged, and a second imaging pattern in which each imaging parameter is adjusted so that a bright-colored component 2 mounted in the second attention area 5 is appropriately imaged.
  • the imaging pattern generation unit 202 stores the plurality of imaging patterns generated in this manner in the imaging pattern storage unit 208 .
  • (B2) the imaging pattern generation unit 202 analyzes the work image 4 captured by the camera 21 of the inspection device 20, automatically adjusts the imaging parameters, and generates a plurality of imaging patterns. For example, the imaging pattern generation unit 202 analyzes the features of the attention area image of the first attention area 5 of the work image 4, determines each imaging parameter appropriate for that attention area 5 from the analysis result, and generates a first imaging pattern. Examples of features of the attention area 5 include the color, reflectance, transmittance, material, and height of the component 2.
  • similarly, the imaging pattern generation unit 202 analyzes the features of the attention area image of the second attention area 5 of the work image 4, determines each imaging parameter appropriate for that attention area 5 from the analysis result, and generates a second imaging pattern.
  • the imaging pattern generation unit 202 stores the plurality of imaging patterns generated in this manner in the imaging pattern storage unit 208 .
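  • one way to picture the automatic adjustment in (B2) is the sketch below, which inspects the mean brightness of an attention area image and nudges exposure-related parameters accordingly. The parameter names and the brightness thresholds are illustrative assumptions; the patent does not specify the analysis method.

```python
import numpy as np

def auto_imaging_pattern(region_image, base_pattern):
    """Derive an imaging pattern from features of an attention area image.

    region_image: grayscale attention area image as a 2-D array (0-255).
    base_pattern: dict of imaging parameters to start from (names assumed).
    """
    pattern = dict(base_pattern)
    brightness = float(np.mean(region_image))
    if brightness < 64:       # dark-colored component: expose longer
        pattern["max_exposure_time_ms"] = base_pattern["max_exposure_time_ms"] * 2
        pattern["dark_area_correction"] = "on"
    elif brightness > 192:    # bright or reflective component: expose shorter
        pattern["max_exposure_time_ms"] = base_pattern["max_exposure_time_ms"] / 2
        pattern["bright_area_correction"] = "on"
    return pattern
```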
  • although the management device 40 has the imaging pattern generation unit 202 in the present embodiment, the inspection device 20 may have the imaging pattern generation unit 202.
  • ⁠<Details of illumination pattern> FIG. 6 is a diagram showing an example of an illumination pattern according to Embodiment 1.
  • a lighting pattern is a combination of multiple different lighting parameters.
  • lighting parameters include lighting shape, lighting method, lighting color, use of polarizing filter, use of infrared lighting, and lighting intensity, as shown in FIG.
  • Examples of lighting parameters related to lighting shape include bar, multi-angle, dome, and backlight.
  • when the imaging control unit 101 captures an image of the workpiece 3 based on an illumination pattern in which the illumination parameter related to the illumination shape is "bar", it lights the bar-shaped LED light source 31 provided in the illumination device 30.
  • Specular reflection, diffuse reflection, and transmission are examples of lighting parameters related to how to apply lighting.
  • Examples of lighting parameters related to lighting colors include blue, red, and green.
  • the illumination pattern generation unit 203 may generate a plurality of illumination patterns with at least one illumination parameter different from each other by the following method. That is, the user manually adjusts the lighting parameters through the UI screen provided by the UI control unit 206 to generate a plurality of lighting patterns.
  • the illumination pattern generation unit 203 stores the plurality of illumination patterns generated in this way in the illumination pattern storage unit 209 .
  • although the management device 40 has the illumination pattern generation unit 203 in the present embodiment, the inspection device 20 may have the illumination pattern generation unit 203.
  • the imaging condition generation unit 204 combines one imaging pattern stored in the imaging pattern storage unit 208 and one illumination pattern stored in the illumination pattern storage unit 209 to generate one imaging condition.
  • the imaging condition generation unit 204 generates a plurality of imaging conditions by combining imaging patterns and illumination patterns in different ways.
  • the imaging condition generation unit 204 transmits a plurality of imaging conditions stored in the imaging condition storage unit 210 to the inspection device 20 .
  • the inspection device 20 stores the plurality of imaging conditions received from the management device 40 in the imaging condition storage unit 105 .
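  • the combination step can be written compactly: one imaging condition is one (imaging pattern, illumination pattern) pair. A minimal sketch with hypothetical pattern values:

```python
from itertools import product

imaging_patterns = [
    {"name": "dark-part", "shutter_speed": "1/30", "max_gain_db": 12},
    {"name": "bright-part", "shutter_speed": "1/250", "max_gain_db": 0},
]
illumination_patterns = [
    {"name": "bar-blue", "shape": "bar", "color": "blue", "intensity": 80},
    {"name": "dome-red", "shape": "dome", "color": "red", "intensity": 60},
]

# One imaging condition per (imaging pattern, illumination pattern) pair,
# as generated by the imaging condition generation unit 204.
imaging_conditions = [
    {"name": f'{ip["name"]}+{lp["name"]}', "imaging": ip, "illumination": lp}
    for ip, lp in product(imaging_patterns, illumination_patterns)
]
# 2 imaging patterns x 2 illumination patterns -> 4 imaging conditions.
```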
  • FIG. 7 is a sequence chart showing pre-inspection operations in the inspection system 10 according to the first embodiment.
  • the management device 40 displays an inspection main UI screen (not shown) on the display device 60 (step S11).
  • the user inputs the user name on the examination main UI screen through the input device 50 (steps S12 and S13).
  • the management device 40 transmits a request for setting the input user name to the inspection device 20 (step S14).
  • the inspection device 20 receives the user name setting request, completes the setting of the user name, and transmits a completion response to the management device 40 (step S15).
  • the management device 40 receives the completion response from the inspection device 20 and displays on the display device 60 that the input of the user name has been completed (step S16).
  • the user selects the workpiece 3 to be inspected through the input device 50 (steps S17 and S18).
  • the management device 40 transmits a selection request for the selected work 3 to the inspection device 20 (step S19).
  • the inspection device 20 receives the selection request for the work 3, selects the work 3 to be inspected, and transmits a completion response to the management device 40 (step S20).
  • the management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that the selection of the work 3 to be inspected has been completed (step S21).
  • the management device 40 transmits to the inspection device 20 a request for distribution of the video stream being imaged by the camera 21 (step S22).
  • the inspection device 20 receives the video stream distribution request and transmits a response to the request to the management device 40 (step S23). Also, the inspection device 20 transmits the video stream being imaged by the camera 21 to the management device 40 (step S24).
  • the management device 40 displays the video stream received from the inspection device 20 in real time on the display device 60 (step S25).
  • the user can set the workpiece 3 to be inspected in the inspection device 20 . Also, the user can view the video stream being imaged by the camera 21 of the inspection device 20 in real time.
  • FIG. 8 is a sequence chart showing inspection operations in the inspection system 10 according to the first embodiment.
  • when the user starts the inspection, the user inputs inspection mode ON through the input device 50 (steps S31 and S32).
  • the management device 40 transmits the input inspection mode ON request to the inspection device 20 (step S33).
  • the inspection device 20 receives the request to turn on the inspection mode, changes the inspection mode to ON, and transmits a completion response to the management device 40 (step S34).
  • the management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that switching to inspection mode ON has been completed (step S35).
  • the management device 40 transmits a request for distribution of the video being captured by the camera 21 to the inspection device 20 (step S36).
  • the inspection device 20 receives the video distribution request and transmits an acknowledgment of the distribution request to the management device 40 (step S37). Also, the inspection device 20 transmits the video stream being imaged by the camera 21 to the management device 40 (step S38).
  • the management device 40 displays the video stream received from the inspection device 20 in real time on the display device 60 (step S39).
  • the detection sensor 19 transmits a work detection notification to the inspection device 20 (step S40).
  • when the inspection apparatus 20 receives the workpiece detection notification, it transmits an inspection start notification for the workpiece 3 to the management apparatus 40 (step S41).
  • when the management device 40 receives the inspection start notification of the work 3 from the inspection device 20, it displays on the display device 60 that the inspection of the work 3 will start (step S42).
  • the inspection device 20 executes inspection processing of the workpiece 3 (step S43). The details of the inspection process for the workpiece 3 will be described later (see FIG. 11).
  • after completing the inspection process of the work 3, the inspection device 20 transmits work inspection result information including the inspection result of the work 3 to the management device 40 (step S44).
  • the management device 40 receives the workpiece inspection result information from the inspection device 20, and displays the content of the workpiece inspection result information on the display device 60 (step S45).
  • the inspection system 10 repeatedly performs the above-described processing of steps S40 to S45 on the workpieces 3 to be inspected that are sequentially conveyed.
  • when the user ends the inspection, the user inputs inspection mode OFF through the input device 50 (steps S46 and S47).
  • the management device 40 transmits the input request to turn off the inspection mode to the inspection device 20 (step S48).
  • the inspection device 20 receives the request to turn off the inspection mode, changes the inspection mode to OFF, and transmits a completion response to the management device 40 (step S49).
  • the management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that switching to inspection mode OFF has been completed (step S50).
  • pre-inspection processing performed before inspection processing of the workpiece 3 will be described with reference to FIGS. 9 and 10.
  • in the pre-inspection processing, setting of the attention areas 5 for the workpiece 3 and determination of the optimum imaging conditions are performed.
  • FIG. 9 is a flowchart showing an example of the pre-inspection processing according to the first embodiment.
  • the imaging control unit 101 controls the camera 21 to image the workpiece 3 and generate the workpiece image 4 (step S101).
  • the attention area setting unit 201 sets a plurality of attention areas 5 on the work image 4 (step S102).
  • the attention area setting unit 201 transmits information (attention area information) indicating a plurality of set attention areas 5 to the inspection device 20 .
  • the inspection apparatus 20 stores the received plural pieces of attention area information in the attention area storage unit 104 .
  • the imaging condition generation unit 204 generates a plurality of imaging conditions by combining a plurality of imaging patterns stored in the imaging pattern storage unit 208 and a plurality of illumination patterns stored in the illumination pattern storage unit 209 (step S103).
  • the imaging condition generation unit 204 transmits the plurality of generated imaging conditions to the inspection device 20 .
  • the inspection apparatus 20 stores the received multiple imaging conditions in the imaging condition storage unit 105 .
  • the optimum condition determining unit 102 selects one unselected imaging condition from among the imaging conditions stored in the imaging condition storage unit 105 (step S104).
  • the selected imaging conditions are referred to as selected imaging conditions.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S105).
  • the imaging control unit 101 stores the generated work image 4 in the RAM 24 or storage 25 .
  • the optimum condition determination unit 102 determines whether or not all imaging conditions stored in the imaging condition storage unit 105 have been selected (step S106).
  • if an unselected imaging condition remains (step S106: NO), the inspection device 20 returns the process to step S104.
  • when all imaging conditions have been selected (step S106: YES), the optimum condition determination unit 102 executes the optimum condition determination process (step S107). Details of the optimum condition determination process will be described later (see FIG. 10). Then, the process ends.
  • FIG. 10 is a flow chart showing an example of the optimum condition determination process (step S107) shown in FIG.
  • the optimum condition determination unit 102 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S201).
  • the selected attention area 5 is called a selected attention area 5 .
  • the optimum condition determination unit 102 acquires an image of the selected region of interest 5 from each of the plurality of work images 4 captured under different imaging conditions, stored in the RAM 24 or storage 25 in step S105. In the description of FIG. 10, this image is called a selected region-of-interest image.
  • the optimum condition determination unit 102 uses the learning model 110 stored in the learning model storage unit 107 to calculate an evaluation value for each of the acquired selected region-of-interest images (step S202).
  • the optimum condition determination unit 102 determines the optimum imaging conditions for the selected region of interest 5 based on the evaluation value calculated in step S202 (step S203). For example, the optimum condition determining unit 102 determines the imaging condition with the highest calculated evaluation value as the optimum imaging condition for the selected region of interest 5 .
  • the optimal condition determination unit 102 associates the selected attention area 5 stored in the attention area storage unit 104 with the optimal imaging condition determined in step S203 (step S204).
  • the optimum condition determination unit 102 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S205).
  • if an unselected attention area 5 remains (step S205: NO), the inspection device 20 returns the process to step S201. If all the attention areas 5 have been selected (step S205: YES), the inspection device 20 terminates this process.
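  • steps S104 to S107 and S201 to S205 amount to the nested loop sketched below, under two assumptions: capture() abstracts the control of the camera 21 and illumination device 30 for one imaging condition, and, following step S203 as written, the condition with the highest evaluation value is taken as optimal (if the learning model instead outputs larger values for likely abnormality, min() would be used in place of max()).

```python
def determine_optimum_conditions(conditions, attention_areas, capture, evaluate):
    """Determine, per attention area, the optimum imaging condition.

    conditions: imaging conditions, each a dict with a "name" key (assumed).
    attention_areas: AttentionArea-like objects with pixel coordinates.
    capture: hypothetical function imaging the workpiece under one condition
             and returning the work image as a NumPy-style 2-D array
             (steps S104-S105).
    evaluate: learning-model scoring of one attention area image (step S202).
    """
    # Image the workpiece once under every imaging condition.
    work_images = {c["name"]: capture(c) for c in conditions}

    optimum = {}
    for area in attention_areas:
        scores = {
            c["name"]: evaluate(work_images[c["name"]][area.y1:area.y2,
                                                       area.x1:area.x2])
            for c in conditions
        }
        # Step S203 as written: the highest evaluation value wins.
        optimum[area.part_name] = max(scores, key=scores.get)
    return optimum
```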
  • FIG. 11 is a flowchart showing an example of inspection processing according to the first embodiment.
  • the imaging control unit 101 selects the attention area information in the attention area storage unit 104 and the optimum imaging conditions in the optimum imaging condition storage unit 106 that are associated with the workpiece 3 to be inspected (step S301).
  • the imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S302). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
  • if the detection sensor 19 has not detected the workpiece 3 (step S302: NO), the inspection device 20 returns the process to step S302. If the detection sensor 19 detects the workpiece 3 (step S302: YES), the inspection apparatus 20 advances the process to the next step S303.
  • the imaging control unit 101 selects one unselected optimum imaging condition from the optimum imaging condition storage unit 106 (step S303).
  • the selected optimum imaging conditions are referred to as selected optimum imaging conditions.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected optimum imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S304).
  • the imaging control unit 101 stores the generated work image 4 in the RAM 24 or storage 25.
  • the imaging control unit 101 determines whether or not all the optimum imaging conditions included in the optimum imaging condition storage unit 106 have been selected (step S305).
  • if unselected optimum imaging conditions remain (step S305: NO), the inspection apparatus 20 returns the process to step S303. If all the optimum imaging conditions have been selected (step S305: YES), the inspection apparatus 20 advances the process to the next step S306.
  • the inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S306).
  • the selected attention area 5 is called a selected attention area 5 .
  • the inspection execution unit 103 selects, from among the plurality of work images 4 captured under different optimum imaging conditions and stored in the RAM 24 or storage 25 in step S304, the work image 4 captured under the optimum imaging condition associated with the selected attention area 5 (step S307). In the description of FIG. 11, the selected work image 4 is referred to as the selected work image 4.
  • the inspection executing unit 103 uses the learning model 110 stored in the learning model storage unit 107 to calculate the evaluation value of the image in the selected attention area 5 of the selected work image 4 (step S308).
  • the image is referred to as a selected region-of-interest image.
  • the selected work image 4 is captured under the optimum imaging conditions for imaging the selected attention area 5. Therefore, the evaluation value calculated from the selected attention area image obtained from the selected work image 4 can be more accurate than an evaluation value calculated from an attention area image obtained from a work image captured under a single common imaging condition.
  • the inspection execution unit 103 determines the inspection result of the selected attention area 5 based on the evaluation value calculated in step S308 (step S309). For example, when the evaluation value is less than a predetermined threshold value Th, the inspection execution unit 103 determines that the component 2 is abnormally mounted (NG) in the selected attention area 5, and when the evaluation value is equal to or greater than the threshold Th, it determines that the component 2 is normally mounted (OK) in the selected attention area 5. The inspection execution unit 103 associates the inspection result with the selected attention area 5 and stores it in the RAM 24 or storage 25.
  • the inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S310). If there remains an unselected region of interest 5, the inspection apparatus 20 returns the process to step S306.
  • when all the attention areas 5 have been selected, the inspection execution unit 103 collects the inspection results associated with the attention areas 5 and stored in the RAM 24 or storage 25 in step S309, generates work inspection result information, and transmits it to the management device 40 (step S311). The inspection apparatus 20 then returns the process to step S302 and inspects the work 3 that is conveyed next.
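  • the inspection loop of steps S303 to S311 can be sketched in the same terms as the previous sketch; the OK/NG comparison mirrors step S309 as written (an evaluation value below the threshold Th treated as abnormal mounting).

```python
def inspect_workpiece(optimum, conditions_by_name, attention_areas,
                      capture, evaluate, threshold_th):
    """Inspect one workpiece using the per-area optimum imaging conditions.

    optimum: part_name -> optimum condition name, from pre-inspection.
    threshold_th: the threshold Th compared against each evaluation value.
    Returns work inspection result information as part_name -> "OK"/"NG".
    """
    # Steps S303-S305: capture one work image per optimum imaging condition.
    needed = set(optimum.values())
    work_images = {name: capture(conditions_by_name[name]) for name in needed}

    results = {}
    for area in attention_areas:
        img = work_images[optimum[area.part_name]]
        region = img[area.y1:area.y2, area.x1:area.x2]
        value = evaluate(region)                         # step S308
        # Step S309 as written: below Th -> abnormal mounting (NG).
        results[area.part_name] = "NG" if value < threshold_th else "OK"
    return results
```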
  • the UI control unit 206 of the management device 40 receives work inspection result information from the inspection device 20 and stores it in the inspection result storage unit 212 .
  • the UI control unit 206 also displays the content of the work inspection result information on the display device 60.
  • the user can see the contents of the workpiece inspection result information displayed on the display device 60 and can confirm whether or not the component 2 is normally mounted in each attention area 5 of the workpiece 3 .
  • ⁠<UI screen for patterns> FIG. 12 is a schematic diagram showing an example of a UI screen for generating and adjusting an imaging pattern and an illumination pattern according to Embodiment 1.
  • the UI control unit 206 of the management device 40 displays, on the display device 60, a UI screen for generating or adjusting the imaging patterns and the illumination patterns (hereinafter referred to as a pattern UI screen 300), as shown in FIG. 12.
  • the pattern UI screen 300 includes an imaging pattern list area 301, an illumination pattern list area 302, an imaging condition-specific area 303, and a workpiece image confirmation area 304, as shown in FIG. 12.
  • the UI control unit 206 lists the imaging patterns stored in the imaging pattern storage unit 208 in the imaging pattern list area 301 .
  • the user can generate a new imaging pattern by adding a new imaging pattern to the imaging pattern list area 301 . Further, the user can adjust the imaging parameters of the imaging patterns stored in the imaging pattern storage unit 208 by adjusting the imaging parameters displayed in the imaging pattern list area 301 .
  • the UI control unit 206 lists the illumination patterns stored in the illumination pattern storage unit 209 in the illumination pattern list area 302 .
  • a user can generate a new lighting pattern by adding a new lighting pattern to the lighting pattern list area 302 .
  • the user can adjust the illumination parameters of the illumination patterns stored in the illumination pattern storage unit 209 by adjusting the illumination parameters displayed in the illumination pattern list area 302 .
  • the UI control unit 206 displays a plurality of workpiece images 4 captured under different imaging conditions in the imaging condition-specific area 303 . Accordingly, the user can confirm what kind of work image 4 is captured for each imaging condition in the imaging condition-specific area 303 .
  • the UI control unit 206 displays the workpiece image 4 selected by the user from the imaging condition-specific area 303 in the workpiece image confirmation area 304 .
  • the UI control unit 206 enlarges or reduces and displays the work image 4 in the work image confirmation area 304 according to the user's operation. This allows the user to check the selected work image 4 in more detail.
  • FIG. 13 is a schematic diagram showing an example of a UI screen for setting the attention area 5 according to the first embodiment.
  • FIG. 14 is a diagram showing a configuration example of attention area information according to Embodiment 1.
  • the UI control unit 206 of the management device 40 displays a UI screen for inputting or correcting the attention area 5 (hereinafter referred to as an attention area UI screen 320) on the display device 60, as shown in FIG.
  • the attention area UI screen 320 includes an attention area list area 321 , an attention area confirmation area 322 , and an evaluation value confirmation area 323 .
  • the UI control unit 206 lists the attention area information stored in the attention area storage unit 207 in the attention area list area 321 .
  • the attention area information has, as parameters, the part name, part number, inspection details, upper left coordinates of the attention area, lower right coordinates of the attention area, preprocessing position correction, imaging pattern, and illumination pattern.
  • "Part name" indicates the name of the part 2.
  • "Part number" indicates the part number of the part 2.
  • "Inspection content" indicates whether or not the component 2 is to be inspected, and also indicates in what manner the part 2 may be mounted abnormally.
  • “Upper left coordinates of attention area” indicates the X and Y coordinates of the upper left point of the rectangular attention area 5 on the workpiece image 4 .
  • “Lower right coordinates of attention area” indicates the X and Y coordinates of the lower right point of the rectangular attention area 5 on the workpiece image 4.
  • “Preprocessing position correction” indicates whether or not the position of the attention area 5 is corrected in preprocessing.
  • a setting button 324 may be included in the “preprocessing position correction” field. When the setting button 324 is selected, a UI screen is displayed for setting the correction amount of the position of the attention area 5 by preprocessing and the threshold value Th to be compared with the evaluation value of the attention area 5.
  • "Imaging pattern” indicates the imaging pattern of the region of interest 5 under the optimal imaging conditions.
  • "Illumination pattern” indicates the illumination pattern of the target area 5 under the optimum imaging conditions.
  • the user can set new attention area information by adding a new attention area 5 to the attention area list area 321 . Further, the user can adjust the parameters of the attention area information stored in the attention area storage unit 207 by adjusting the parameters of the attention area information displayed in the attention area list area 321 .
  • the UI control unit 206 displays the work image 4 in the attention area confirmation area 322, and superimposes on the work image 4 a rectangular frame indicating the attention area 5 corresponding to the attention area information selected in the attention area list area 321. Thereby, the user can confirm which position on the workpiece image 4 corresponds to the attention area 5 of the attention area information being selected in the attention area list area 321.
  • for the attention area information selected in the attention area list area 321, the UI control unit 206 displays, in the evaluation value confirmation area 323, the evaluation values calculated from the attention area images of the attention area 5 captured under different imaging conditions, together with those attention area images. Thereby, the user can confirm the relationship between the imaging condition and the evaluation value for each attention area 5.
  • the attention area setting unit 201 may store the contents input or modified in the attention area list area 321 in the attention area storage unit 207 as attention area information.
  • the attention area information may be configured as CSV data as shown in FIG. 14 .
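  • as a rough illustration of how such CSV-formatted attention area information could be loaded, a minimal Python sketch follows; the column order, field names, and boolean encoding are assumptions, since the layout of FIG. 14 is not reproduced here.

```python
import csv
from dataclasses import dataclass

@dataclass
class AttentionAreaInfo:
    part_name: str
    part_number: str
    inspection_content: str         # whether/how the part is inspected
    top_left: tuple[int, int]       # (X, Y) of the upper-left point on the work image
    bottom_right: tuple[int, int]   # (X, Y) of the lower-right point
    position_correction: bool       # whether preprocessing corrects the area position
    imaging_pattern: str            # imaging pattern under the optimal imaging condition
    illumination_pattern: str       # illumination pattern under the optimal imaging condition

def load_attention_areas(path: str) -> list[AttentionAreaInfo]:
    """Parse one attention area per CSV row (column layout is an assumption)."""
    areas = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            name, number, content, x1, y1, x2, y2, corr, img_pat, ill_pat = row
            areas.append(AttentionAreaInfo(
                name, number, content,
                (int(x1), int(y1)), (int(x2), int(y2)),
                corr.strip().lower() in ("1", "true", "yes"),
                img_pat, ill_pat,
            ))
    return areas
```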
  • FIG. 15 is a schematic diagram showing an example of a UI screen for checking inspection results of the workpiece 3 according to the first embodiment.
  • the UI control unit 206 uses the plurality of workpiece inspection results stored in the inspection result storage unit 212 to display, on the display device 60, a UI screen for checking the inspection results of the workpieces 3 in a list (hereinafter referred to as the inspection result list UI screen 340).
  • the inspection result list UI screen 340 includes a plurality of work areas 341 .
  • the inspection result list UI screen 340 also includes a work image 4 , an inspection result 342 , and a details button 343 for each work area 341 .
  • the UI control unit 206 displays a workpiece image 4 of one inspected workpiece 3 in one workpiece area 341 .
  • when an abnormality is detected in at least one of the plurality of attention areas 5 in the work image 4 displayed in the work area 341, the UI control unit 206 displays abnormal (NG) as the inspection result 342. When no abnormality is detected in any of the plurality of attention areas 5 of the work image 4 displayed in the work area 341, the UI control unit 206 displays normal (OK) as the inspection result 342.
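  • a minimal sketch of this pass/fail aggregation, assuming each attention area's result has been reduced to an abnormal/normal flag:

```python
def work_inspection_result(area_results: dict[str, bool]) -> str:
    """Return "NG" if any attention area was judged abnormal, else "OK".

    area_results maps an attention area identifier to True when an
    abnormality was detected in that area.
    """
    return "NG" if any(area_results.values()) else "OK"

# One abnormal attention area makes the whole work NG.
assert work_inspection_result({"5a": False, "5b": True}) == "NG"
assert work_inspection_result({"5a": False, "5b": False}) == "OK"
```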
  • when the details button 343 is selected, the UI control unit 206 displays the inspection result details UI screen 360 (see FIG. 16) for the work image 4 of the selected work area 341.
  • FIG. 16 is a schematic diagram showing an example of a UI screen for confirming inspection results in detail according to the first embodiment.
  • the UI control unit 206 displays a UI screen for confirming the inspection results in detail (hereinafter referred to as the inspection result details UI screen 360) for the work 3 for which the details button 343 was selected (pressed) in FIG. 15 (hereinafter referred to as the selected work 3 in the description of FIG. 16).
  • the inspection result details UI screen 360 includes an explanation area 361 , a work area 362 and a parts list area 363 .
  • the UI control unit 206 displays, in the explanation area 361, the content explaining how the attention area 5 determined to be abnormal and the attention area 5 determined to be normal are displayed in a distinguishable manner.
  • the UI control unit 206 displays the work image 4 of the selected work 3 in the work area 362 .
  • the UI control unit 206 also superimposes the attention area 5 on each part 2 of the work image 4 .
  • the UI control unit 206 displays the attention area 5 of the part 2 determined to be abnormal and the attention area 5 of the part 2 determined to be normal in a distinguishable manner.
  • the UI control unit 206 displays the attention area 5 of the part 2 determined to be abnormal in red, and displays the attention area 5 of the part 2 determined to be normal in green. Thereby, the user can easily confirm which component 2 is determined to be abnormal.
  • the UI control unit 206 displays a list of the parts 2 attached to the selected workpiece 3 in the parts list area 363. In addition, the UI control unit 206 displays the parts 2 determined to be abnormal and the parts 2 determined to be normal in a distinguishable manner in the parts list area 363 . Thereby, the user can easily confirm which component 2 is determined to be abnormal.
  • Embodiment 1 (Summary of Embodiment 1) The contents of Embodiment 1 can be expressed as the following items.
  • Inspection device 20 includes one or more processors 22, a memory (e.g., RAM 24), and a program stored in the memory.
  • the program causes the processor 22 to do the following.
  • in an inspection target area in which a plurality of inspection objects exist, including a first inspection object (for example, a part 2a) and a second inspection object (for example, a part 2b) different from the first inspection object, the program sets a first attention area 5 (5a) for inspecting the first inspection object and a second attention area 5 (5b) for inspecting the second inspection object.
  • the program causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4), to capture, as the captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
  • the program executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and a first region of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • the program executes a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second attention area.
  • the program outputs the results of the first test and the results of the second test.
  • the inspection apparatus 20 inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects the first inspection object and the second inspection object using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection apparatus 20 can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
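  • a minimal Python sketch of this two-condition flow; `camera.capture`, `model.evaluate`, and the area objects are hypothetical stand-ins for the camera control, the learning model 110, and the attention area information, and images are assumed to be NumPy arrays indexed as image[y, x].

```python
import numpy as np

def extract_region(image: np.ndarray, area) -> np.ndarray:
    """Crop the rectangle of an attention area from a captured image."""
    (x1, y1), (x2, y2) = area.top_left, area.bottom_right
    return image[y1:y2, x1:x2]

def inspect_two_conditions(camera, model, areas, condition1, condition2, threshold):
    """First and second inspections: one captured image per imaging condition,
    every attention area evaluated on its region of each image."""
    results = {}
    for label, condition in (("first", condition1), ("second", condition2)):
        image = camera.capture(condition)      # first/second captured image
        for area in areas:
            score = model.evaluate(extract_region(image, area))
            results[(label, area.part_name)] = "OK" if score >= threshold else "NG"
    return results
```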
  • the program causes the processor 22 to execute the following.
  • the program defines a first imaging pattern including at least one imaging parameter as a first imaging condition based on a first region of the captured image corresponding to the first region of interest.
  • the program defines a second imaging pattern including at least one imaging parameter as a second imaging condition based on a second region of the captured image corresponding to the second region of interest.
  • each imaging condition is determined by an imaging pattern having different imaging parameters. Therefore, the inspection apparatus 20 can obtain more accurate inspection results by inspecting each inspection object using captured images captured under imaging conditions with different imaging parameters.
  • the inspection apparatus 20 described in item 2 includes an illumination device 30 that illuminates an inspection target area.
  • the program defines an illumination pattern including at least one illumination parameter for illumination device 30 to illuminate a region to be inspected. Thereby, the inspection device 20 can cause the illumination device 30 to illuminate the inspection target region with illumination patterns having different illumination parameters.
  • the program causes the processor 22 to perform the following.
  • the program causes the camera to output, as the first captured image, a captured image of the inspection target area captured by applying the first imaging condition for each of the at least one illumination pattern.
  • the program causes the camera to output, as the second captured image, a captured image of the inspection target area captured by applying the second imaging condition for each of the at least one illumination pattern.
  • the program causes the processor 22 (or processor 41) to execute the following.
  • the program generates the learning model 110 by learning based on the first captured image and the second captured image output by the camera. Thereby, the inspection apparatus 20 can generate the learning model 110 for inspecting the inspection object using the captured image.
  • the program causes the processor 22 (or processor 41) to execute the following.
  • the program sets the first region of interest and the second region of interest based on either design information defining the positions of a plurality of inspection objects or a captured image of the inspection target region. Thereby, the inspection apparatus 20 can set a plurality of attention areas in the inspection target area.
  • the inspection device 20 includes a camera 21 . Thereby, the inspection apparatus 20 can control the camera 21 to capture a captured image of the inspection target area.
  • the inspection device 20 implements the following inspection method.
  • the inspection apparatus 20 sets, in an inspection target area in which a plurality of inspection objects exist, including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object, a first attention area 5 for inspecting the first inspection object and a second attention area 5 for inspecting the second inspection object.
  • the inspection apparatus 20 causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4), to capture, as the captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
  • the inspection apparatus 20 executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and a first region of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • the inspection apparatus 20 executes a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second attention area.
  • the inspection device 20 outputs the result of the first inspection and the result of the second inspection.
  • the inspection apparatus 20 inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects the first inspection object and the second inspection object using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection apparatus 20 can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
  • the inspection program causes the processor 22 to do the following.
  • the inspection program sets, in an inspection target area in which a plurality of inspection objects exist, including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object, a first attention area 5 for inspecting the first inspection object and a second attention area 5 for inspecting the second inspection object.
  • the inspection program causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4), to capture, as the captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
  • the inspection program executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and a first region of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area.
  • the inspection program executes a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second attention area.
  • the inspection program outputs the result of the first inspection and the result of the second inspection.
  • the inspection program inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects them using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection program can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
  • FIG. 17 is a schematic diagram showing an example of reducing the size of the workpiece image 4 to the size of an image that can be input to the learning model 110.
  • FIG. 18 is a schematic diagram showing an example in which a plurality of cameras 21 divide the workpiece 3 into image sizes that can be input to the learning model 110 and capture the images.
  • when the size of the image that can be input to the learning model 110 is smaller than the size of the work image 4 captured by the camera 21, the following countermeasure 1 or countermeasure 2 can be considered.
  • in countermeasure 1, the inspection apparatus reduces the workpiece image 4 to the size of an image that can be input to the learning model 110 to generate a reduced workpiece image 391, and inputs the reduced workpiece image 391 to the learning model 110 to calculate the evaluation values.
  • however, with countermeasure 1, the resolution of the part 2 becomes insufficient due to the reduction of the workpiece image 4, and the problem arises that the learning model 110 cannot accurately calculate the evaluation value of a small part 2.
  • in countermeasure 2, a plurality of cameras 21 divide the workpiece 3 into image sizes that can be input to the learning model 110 and capture it, and the divided workpiece images 392 captured by the respective cameras 21 are each input to the learning model 110. However, countermeasure 2 increases the number of cameras and the number of images to be processed, which increases processing time and cost; a sketch contrasting the two countermeasures follows.
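  • a minimal sketch of the two countermeasures, assuming NumPy images, OpenCV for resizing, and a hypothetical model input size; countermeasure 2 is emulated on a single image by tiling it, as if each tile came from a separate camera.

```python
import cv2
import numpy as np

MODEL_INPUT = (512, 512)  # assumed (width, height) accepted by the learning model

def countermeasure1(work_image: np.ndarray) -> np.ndarray:
    """Shrink the whole work image to the model input size.
    Small parts may fall below the resolution the model needs."""
    return cv2.resize(work_image, MODEL_INPUT)

def countermeasure2(work_image: np.ndarray) -> list[np.ndarray]:
    """Tile the work image into model-sized pieces.
    Resolution is preserved, but the number of model inputs grows."""
    h, w = work_image.shape[:2]
    tw, th = MODEL_INPUT
    return [work_image[y:y + th, x:x + tw]
            for y in range(0, h, th) for x in range(0, w, tw)]
```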
  • the present embodiment describes an inspection system 10 that suppresses an increase in processing time and cost while maintaining the accuracy of abnormality detection related to component mounting.
  • the hardware configuration of the inspection system 10 according to the second embodiment is the same as that of FIG. 1, and thus description thereof is omitted.
  • FIG. 19 is a block diagram showing an example of the functional configuration of the inspection device 20 according to the second embodiment.
  • the inspection apparatus 20 includes an imaging control unit 101, an optimum condition determination unit 102, an inspection execution unit 103, an attention area extraction unit 111, an attention area storage unit 104, an imaging condition storage unit 105, an optimum imaging condition storage unit 106, a learning model storage unit 107, and a synthetic image storage unit 112.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 to image the workpiece 3 and generate the workpiece image 4 .
  • the region-of-interest extraction unit 111 extracts an image (region-of-interest image) of each region of interest 5 from the work image 4 .
  • the attention area extraction unit 111 combines the extracted attention area images to generate a composite image 400 (see FIGS. 20 and 21).
  • the attention area extraction unit 111 stores the generated synthetic image 400 in the synthetic image storage unit 112 . Details of the attention area extraction unit 111 and the synthesized image 400 will be described later (see FIGS. 20 and 21).
  • the optimum condition determination unit 102 determines, from among a plurality of imaging conditions, the optimum imaging condition for imaging each attention area 5. In addition, the optimum condition determination unit 102 determines the optimum condition for extracting the attention area image corresponding to each attention area 5 from the work image 4 (hereinafter referred to as the optimum extraction condition).
  • the inspection execution unit 103 uses the synthesized image 400 to inspect whether or not the component 2 is normally mounted in the area corresponding to the attention area 5 .
  • the inspection execution unit 103 inputs the image of the area corresponding to the attention area 5 of the synthesized image 400 to the learning model 110 stored in the learning model storage unit 107, thereby executing the inspection.
  • the inspection device 20 may generate the composite image 400 using either the first generation method or the second generation method.
  • FIG. 20 is a diagram showing an example of a first method for generating the synthesized image 400 according to the second embodiment.
  • in the first generation method, the attention area extraction unit 111 enlarges or reduces the attention area images (pixels) corresponding to the respective attention areas 5 (for example, the attention areas 5a and 5b) extracted from the work image 4, according to the part 2 in each attention area 5.
  • for example, the attention area extraction unit 111 enlarges the attention area image of a part 2a whose abnormality detection accuracy in the learning model 110 improves when enlarged, and reduces the attention area image of a part 2b whose abnormality detection accuracy changes little even when reduced.
  • the region-of-interest extraction unit 111 may adjust the surplus region in addition to the enlargement or reduction.
  • the surplus area indicates an area of peripheral pixels of the part 2 in the attention area image.
  • for example, the attention area extraction unit 111 extracts the attention area image with a large surplus region for a part 2 for which a larger surplus region (that is, more peripheral pixels) increases the accuracy of the evaluation value calculated by the learning model 110, and extracts the attention area image with a small surplus region for a part 2 for which a smaller surplus region increases that accuracy.
  • the attention area extraction unit 111 synthesizes the attention area images corresponding to the respective attention areas 5 of one workpiece image 4, obtained by the enlargement or reduction and the surplus region adjustment described above, and generates a single composite image 400 as shown in FIG. 20.
  • the attention area extraction unit 111 stores the generated synthetic image 400 in the synthetic image storage unit 112 .
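  • a minimal sketch of this per-area extraction, assuming NumPy images and OpenCV; the `scale` and `margin` parameters are hypothetical stand-ins for the per-part enlargement/reduction ratio and the surplus region setting.

```python
import cv2
import numpy as np

def extract_attention_image(work_image: np.ndarray, area,
                            scale: float = 1.0, margin: int = 0) -> np.ndarray:
    """Crop an attention area with `margin` surrounding pixels, then scale it."""
    (x1, y1), (x2, y2) = area.top_left, area.bottom_right
    h, w = work_image.shape[:2]
    x1, y1 = max(x1 - margin, 0), max(y1 - margin, 0)
    x2, y2 = min(x2 + margin, w), min(y2 + margin, h)
    patch = work_image[y1:y2, x1:x2]
    if scale != 1.0:
        patch = cv2.resize(patch, None, fx=scale, fy=scale)
    return patch
```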
  • FIG. 21 is a diagram showing an example of a second method for generating the synthesized image 400 according to the second embodiment.
  • the attention area extraction unit 111 enlarges or reduces attention area images (pixels) corresponding to each attention area 5 (for example, attention areas 5a and 5b) extracted from the work image 4 so as to have a predetermined size. For example, the attention area extraction unit 111 enlarges the first attention area image corresponding to the first attention area 5a smaller than a predetermined size so as to have a predetermined size. For example, the attention area extraction unit 111 reduces the attention area image corresponding to the second attention area 5b larger than a predetermined size to a predetermined size.
  • for example, the attention area extraction unit 111 extracts the peripheral pixels of the second attention area 5b from the work image 4 so that the pixels corresponding to the second attention area 5b have the same size as the pixels corresponding to the first attention area 5a.
  • the region-of-interest extraction unit 111 may adjust the surplus region in addition to the enlargement or reduction.
  • the attention area extracting unit 111 selects a large surplus area for the part 2a, for which the calculation accuracy of the evaluation value of the learning model 110 increases when the surplus area (that is, the surrounding pixels of the part 2) is large, and extracts the attention area image.
  • the attention area extracting unit 111 selects a small surplus area for the part 2b for which the calculation accuracy of the evaluation value of the learning model 110 increases when the surplus area (that is, the surrounding pixels of the part 2) is small, and extracts the attention area image.
  • in this way, the attention area extraction unit 111 synthesizes the attention area images of the same size obtained from one work image 4 by the enlargement or reduction and the surplus region adjustment, and generates a single composite image 400 as shown in FIG. 21.
  • the attention area extraction unit 111 stores the generated synthetic image 400 in the synthetic image storage unit 112 .
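  • a minimal sketch of the second generation method, assuming 3-channel NumPy images and a hypothetical common tile size; the grid layout is an assumption, since the actual arrangement of FIG. 21 is not reproduced here.

```python
import cv2
import numpy as np

def compose_fixed_size(patches: list[np.ndarray],
                       tile: tuple[int, int] = (128, 128),
                       cols: int = 4) -> np.ndarray:
    """Resize every attention area image to one tile size and pack the tiles
    into a single composite image to be fed to the learning model."""
    tiles = [cv2.resize(p, tile) for p in patches]
    rows = -(-len(tiles) // cols)  # ceiling division
    canvas = np.zeros((rows * tile[1], cols * tile[0], 3), dtype=np.uint8)
    for i, t in enumerate(tiles):
        r, c = divmod(i, cols)
        canvas[r * tile[1]:(r + 1) * tile[1], c * tile[0]:(c + 1) * tile[0]] = t
    return canvas
```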
  • the data amount of the composite image 400 in this embodiment is smaller than that of the divided workpiece images 392 in the above-described countermeasure 2, so the amount of processing and the memory usage can be reduced.
  • in addition, unlike the above-described countermeasure 1, the resolution of a small component 2 is not insufficient, so the inspection apparatus 20 according to the present embodiment can detect abnormalities with high accuracy.
  • <Pre-inspection processing flow> FIG. 22 is a flowchart illustrating an example of pre-inspection processing according to Embodiment 2.
  • the imaging control unit 101 controls the camera 21 to image the workpiece 3 and generate the workpiece image 4 (step S401).
  • the attention area setting unit 201 sets a plurality of attention areas 5 on the workpiece 3 (step S402).
  • the attention area setting unit 201 transmits information (attention area information) indicating a plurality of set attention areas 5 to the inspection device 20 .
  • the inspection apparatus 20 stores the received plural pieces of attention area information in the attention area storage unit 104 .
  • the imaging condition generation unit 204 generates a plurality of imaging conditions by combining a plurality of imaging patterns stored in the imaging pattern storage unit 208 and a plurality of illumination patterns stored in the illumination pattern storage unit 209 (step S403).
  • the imaging condition generation unit 204 transmits the plurality of generated imaging conditions to the inspection device 20 .
  • the inspection apparatus 20 stores the received multiple imaging conditions in the imaging condition storage unit 105 .
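  • a minimal sketch of step S403, with hypothetical imaging and illumination parameter sets standing in for the contents of the imaging pattern storage unit 208 and the illumination pattern storage unit 209; every imaging condition is one (imaging pattern, illumination pattern) pair.

```python
from itertools import product

imaging_patterns = [            # assumed example parameters
    {"exposure_ms": 10, "gain": 1.0},
    {"exposure_ms": 30, "gain": 2.0},
]
illumination_patterns = [       # assumed example parameters
    {"direction": "top", "intensity": 0.5},
    {"direction": "side", "intensity": 1.0},
]

# Step S403: combine every imaging pattern with every illumination pattern.
imaging_conditions = [
    {"imaging": ip, "illumination": lp}
    for ip, lp in product(imaging_patterns, illumination_patterns)
]
print(len(imaging_conditions))  # 4 conditions from 2 x 2 patterns
```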
  • the optimum condition determination unit 102 selects one unselected imaging condition from the plurality of imaging conditions stored in the imaging condition storage unit 105 (step S404).
  • the selected imaging conditions are referred to as selected imaging conditions.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected imaging conditions to capture an image of the workpiece 3 and generate a workpiece image 4 (step S405).
  • the imaging control unit 101 stores the generated work image 4 in the RAM 43 or storage 44 .
  • the optimum condition determination unit 102 determines whether or not all imaging conditions stored in the imaging condition storage unit 105 have been selected (step S406).
  • when an unselected imaging condition remains (S406: NO), the inspection apparatus 20 returns the process to step S404. When all the imaging conditions have been selected (S406: YES), the inspection apparatus 20 advances the process to step S407.
  • the attention area extraction unit 111 extracts, by the above-described first composite image generation method (see FIG. 20) or second composite image generation method (see FIG. 21), each attention area 5 under various extraction conditions from each workpiece image 4 stored in the RAM 43 or the storage 44.
  • the extraction conditions include an enlargement rate or a reduction rate, a surplus area ratio, and the like.
  • the attention area extraction unit 111 extracts an image of an area corresponding to each attention area 5 in the work image 4, and enlarges or reduces it based on the extraction conditions. Further, the attention area extracting unit 111 extracts an image by increasing or decreasing the surplus area based on the extraction condition from the area corresponding to each attention area 5 in the work image 4 .
  • the attention area extracting unit 111 combines the attention area images corresponding to the extracted attention areas 5 to generate the composite image 400 (step S407).
  • the region-of-interest extraction unit 111 stores the plurality of generated synthetic images 400 in the synthetic image storage unit 112 .
  • the composite image storage unit 112 thus stores a plurality of composite images 400 obtained by combining attention area images extracted under various extraction conditions from the work images 4 captured under various imaging conditions.
  • the inspection device 20 executes optimum condition determination processing (step S408). Details of the optimum condition determination process will be described later (see FIG. 23). Then, the process ends.
  • FIG. 23 is a flow chart showing an example of the optimum condition determination process (step S408) shown in FIG.
  • the optimum condition determination unit 102 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S501).
  • the selected attention area 5 is called a selected attention area 5 .
  • the optimum condition determination unit 102 acquires, from the plurality of composite images 400 stored in the synthetic image storage unit 112, the images of the selected attention area 5 (hereinafter referred to as the selected attention area images).
  • the optimum condition determination unit 102 uses the learning model 110 stored in the learning model storage unit 107 to calculate an evaluation value for each of the acquired selected region-of-interest images (step S502).
  • the optimum condition determination unit 102 determines the optimum imaging conditions and optimum extraction conditions for the selected region of interest 5 based on the evaluation values calculated in step S502 (step S503). For example, the optimum condition determining unit 102 determines the imaging condition with the highest calculated evaluation value as the optimum imaging condition for the selected region of interest 5 . The optimum condition determination unit 102 also determines the extraction condition with the highest calculated evaluation value as the optimum extraction condition for the selected region of interest 5 .
  • the optimum condition determination unit 102 associates the selected attention area 5 with the optimum imaging conditions and optimum extraction conditions determined in step S503, and stores them in the attention area storage unit 104 (step S504).
  • the optimal condition determination unit 102 stores the optimal imaging conditions determined in step S503 in the optimal imaging condition storage unit 106 (step S505).
  • the optimum condition determination unit 102 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S506).
  • if an unselected attention area 5 remains (step S506: NO), the inspection device 20 returns the process to step S501. If all the attention areas 5 have been selected (step S506: YES), the inspection device 20 terminates this process.
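  • a minimal sketch of this determination loop; `model.evaluate` and the `composites` mapping are hypothetical stand-ins, and for simplicity the composite is assumed to keep each attention area at its original coordinates (a real layout would need a placement map).

```python
def crop_area(composite, area):
    """Simplified lookup of an attention area inside a composite image."""
    (x1, y1), (x2, y2) = area.top_left, area.bottom_right
    return composite[y1:y2, x1:x2]

def determine_optimum_conditions(areas, composites, model):
    """Steps S501-S505: for each attention area, keep the (imaging condition,
    extraction condition) pair whose composite image yields the highest
    evaluation value from the learning model."""
    optimum = {}
    for area in areas:                                          # S501
        best = None
        for (img_cond, ext_cond), composite in composites.items():
            score = model.evaluate(crop_area(composite, area))  # S502
            if best is None or score > best[0]:
                best = (score, img_cond, ext_cond)
        optimum[area.part_name] = {"imaging": best[1],          # S503-S505
                                   "extraction": best[2]}
    return optimum
```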
  • <Inspection processing flow> FIG. 24 is a flowchart illustrating an example of inspection processing according to Embodiment 2.
  • the imaging control unit 101 selects the attention area storage unit 104 and the optimum imaging condition storage unit 106 associated with the workpiece 3 to be inspected (step S601).
  • the imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S602). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
  • when the detection sensor 19 has not detected the workpiece 3 (step S602: NO), the inspection device 20 returns the process to step S602. If the detection sensor 19 detects the workpiece 3 (step S602: YES), the inspection apparatus 20 advances the process to the next step S603.
  • the imaging control unit 101 selects one unselected optimum imaging condition from the optimum imaging condition storage unit 106 (step S603).
  • the selected optimum imaging conditions are referred to as selected optimum imaging conditions.
  • the imaging control unit 101 controls the camera 21 and the illumination device 30 based on the selected optimum imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S604).
  • the attention area extraction unit 111 generates a composite image 400 from the work image 4 (step S605). Specifically, the attention area extraction unit 111 extracts each attention area 5 of the workpiece image 4 under the optimum extraction condition associated with that attention area 5, and synthesizes the extracted attention area images to generate the composite image 400. The attention area extraction unit 111 stores the generated composite image 400 in the synthetic image storage unit 112. As a result, the amount of memory used can be reduced compared to the first embodiment, in which the work image 4 is stored as it is.
  • the imaging control unit 101 determines whether or not all the optimum imaging conditions included in the optimum imaging condition storage unit 106 have been selected (step S606).
  • if unselected optimum imaging conditions remain (step S606: NO), the inspection device 20 returns the process to step S603. If all the optimum imaging conditions have been selected (step S606: YES), the inspection apparatus 20 advances the process to the next step S607.
  • the inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S607).
  • the selected attention area 5 is called a selected attention area 5 .
  • the inspection execution unit 103 selects, from among the plurality of composite images 400 stored in the synthetic image storage unit 112, the composite image 400 generated based on the optimum imaging condition and the optimum extraction condition associated with the selected attention area 5 (step S608).
  • the selected synthetic image 400 will be referred to as the selected synthetic image 400 .
  • the inspection execution unit 103 uses the learning model 110 stored in the learning model storage unit 107 to calculate the image evaluation value of the area corresponding to the selected attention area 5 in the selected combined image 400 (step S609).
  • the selected composite image 400 is generated by synthesizing the attention area image extracted, under the optimum extraction condition for the selected attention area 5, from the work image 4 captured under the optimum imaging condition for imaging the selected attention area 5. Therefore, the evaluation value calculated from the selected attention area image obtained from the selected composite image 400 can be more accurate than an evaluation value calculated from a selected attention area image obtained from a work image captured under a single imaging condition.
  • the inspection execution unit 103 determines the inspection result of the selected attention area 5 based on the evaluation value calculated in step S609 (step S610). For example, when the evaluation value is less than a predetermined threshold value Th, the inspection execution unit 103 determines that the component 2 is abnormally mounted (NG) in the selected attention area 5. When the evaluation value is equal to or greater than the threshold Th, the inspection execution unit 103 determines that the component 2 is normally mounted (OK) in the selected attention area 5. The inspection execution unit 103 associates the inspection result with the selected attention area 5 and stores it in the RAM 43 or the storage 44.
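  • a minimal sketch of the threshold decision of step S610, assuming the evaluation value and the threshold Th are plain floats:

```python
def judge_attention_area(evaluation: float, threshold: float) -> str:
    """Below Th: the part is judged abnormally mounted (NG);
    at or above Th: judged normally mounted (OK)."""
    return "NG" if evaluation < threshold else "OK"

assert judge_attention_area(0.42, threshold=0.7) == "NG"
assert judge_attention_area(0.93, threshold=0.7) == "OK"
```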
  • the inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S611). If there remains an unselected region of interest 5, the inspection apparatus 20 returns the process to step S607.
  • the inspection execution unit 103 collects the inspection results associated with the attention areas 5 and stored in the RAM 43 or the storage 44 in step S610 to generate the workpiece inspection result information, and transmits it to the management device 40 (step S612). The inspection apparatus 20 then returns the process to step S602 and inspects the workpiece 3 that is transported next.
  • the UI control unit 206 of the management device 40 receives work inspection result information from the inspection device 20 and stores it in the inspection result storage unit 212 . Also, the UI control unit 206 displays the content of the work inspection result information on the display device 60 . The user can see the contents of the workpiece inspection result information displayed on the display device 60 and can confirm whether or not the component 2 is normally mounted in each attention area 5 of the workpiece 3 .
  • FIG. 25 is a flowchart showing a modification of inspection processing according to the second embodiment.
  • the imaging control unit 101 selects the attention area storage unit 104 and the optimum imaging condition storage unit 106 associated with the workpiece 3 to be inspected (step S701).
  • the imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S702). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
  • if the detection sensor 19 has not detected the workpiece 3 (step S702: NO), the inspection device 20 returns the process to step S702. If the detection sensor 19 detects the workpiece 3 (step S702: YES), the inspection apparatus 20 advances the process to the next step S703.
  • the imaging control unit 101 controls the camera 21 and the lighting device 30 based on predetermined imaging conditions to capture an image of the workpiece 3 and generate a workpiece image 4 (step S703).
  • the attention area extraction unit 111 generates a composite image 400 from the work image 4 (step S704). Specifically, the attention area extraction unit 111 extracts each attention area 5 of the workpiece image 4 under the optimum extraction condition associated with that attention area 5, and synthesizes the extracted attention area images to generate the composite image 400. The attention area extraction unit 111 stores the generated composite image 400 in the synthetic image storage unit 112. As a result, the amount of memory used can be reduced compared to the first embodiment, in which the work image 4 is stored as it is.
  • the inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S705).
  • the selected attention area 5 is called a selected attention area 5 .
  • the inspection execution unit 103 uses the learning model 110 stored in the learning model storage unit 107 to calculate the evaluation value of the image of the region corresponding to the selected attention area 5 in the composite image 400 generated in step S704 (step S706).
  • the composite image 400 is generated by synthesizing attention area images extracted under the optimum extraction condition for each attention area 5. Therefore, the evaluation value calculated from the selected attention area image obtained from the composite image 400 can be more accurate than an evaluation value calculated from an attention area image extracted under a single, fixed extraction condition.
  • the inspection execution unit 103 determines the inspection result of the selected attention area 5 based on the evaluation value calculated in step S706 (step S707). For example, when the evaluation value is less than a predetermined threshold value Th, the inspection execution unit 103 determines that the component 2 is abnormally mounted (NG) in the selected attention area 5. When the evaluation value is equal to or greater than the threshold Th, the inspection execution unit 103 determines that the component 2 is normally mounted (OK) in the selected attention area 5. The inspection execution unit 103 associates the inspection result with the selected attention area 5 and stores it in the RAM 43 or the storage 44.
  • the inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S708). If an unselected region of interest 5 remains, the inspection apparatus 20 returns the process to step S705.
  • the inspection execution unit 103 collects the inspection results associated with the attention areas 5 and stored in the RAM 43 or the storage 44 in step S707 to generate the workpiece inspection result information, and transmits it to the management device 40 (step S709).
  • the inspection apparatus 20 then returns the process to step S702 and inspects the work 3 that is next transported.
  • Inspection device 20 includes one or more processors 22, a memory (e.g., RAM 24), and a program stored in the memory.
  • the program causes the processor 22 to do the following.
  • the program sets, in an inspection target area including a plurality of inspection objects including a first inspection object (for example, a part 2a) and a second inspection object (for example, a part 2b) different from the first inspection object, a first attention area 5 (5a) for inspecting the first inspection object and a second attention area 5 (5b) for inspecting the second inspection object.
  • the program causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4) to capture the inspection target area.
  • the program extracts a first image area corresponding to the first attention area and a second image area corresponding to the second attention area from the captured image.
  • the program executes a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
  • the program performs a second inspection that inspects a second inspection object based on the learning model and the second image region.
  • the program outputs the results of the first test and the results of the second test.
  • the inspection apparatus 20 inspects the first inspection object using the first image area extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image area extracted from the captured image and outputs the result of the second inspection.
  • the inspection apparatus 20 can inspect each inspection object using the appropriately extracted image area, and can obtain a more accurate inspection result.
  • the inspection apparatus 20 can reduce the processing load of the processor 22 and the amount of memory (for example, RAM 23) usage compared to the case where the first inspection object and the second inspection object are inspected directly from the captured image.
  • the program causes the processor 22 to execute the following.
  • the program generates a first composite image 400 that includes a first image region and a second image region.
  • the program uses the first composite image and the learning model to perform the first inspection and the second inspection.
  • the inspection apparatus 20 can perform the first inspection and the second inspection using the first synthesized image. Therefore, the inspection apparatus 20 can reduce the processing load of the processor 22 and the amount of memory used compared to the case where the first inspection object and the second inspection object are inspected directly from the captured image.
  • the program causes the processor 22 to perform the following.
  • the program extracts the first image area by enlarging or reducing the pixels corresponding to the first attention area from the captured image.
  • the program enlarges or reduces pixels corresponding to the second attention area from the captured image to extract the second image area.
  • the inspection device 20 can generate the first synthetic image of a size that can be input to the learning model.
  • the program causes the processor 22 to perform the following.
  • the program extracts the first region of interest and peripheral pixels of the first region of interest from the captured image as pixels corresponding to the first region of interest.
  • the program extracts the second region of interest and peripheral pixels of the second region of interest from the captured image as pixels corresponding to the second region of interest. In this way, the inspection apparatus 20 can improve the inspection accuracy in the learning model by also extracting the peripheral pixels of each attention area.
  • the program causes the processor 22 to perform the following.
  • the program extracts peripheral pixels of the second region of interest from the captured image so that pixels corresponding to the second region of interest have the same size as pixels corresponding to the first region of interest.
  • the inspection apparatus 20 can combine images extracted in the same size for each attention area of the captured image to generate a combined image.
  • the program causes the processor 22 to execute the following.
  • the program generates a learning model 110 by learning based on the first image region and the second image region.
  • the inspection apparatus 20 can generate the learning model 110 for inspecting the inspection object using the captured image.
  • the program causes the processor 22 to execute the following.
  • the program defines first imaging conditions including a first imaging pattern including a plurality of imaging parameters based on a first region of the captured image corresponding to the first region of interest.
  • the program defines second imaging conditions including a second imaging pattern including a plurality of imaging parameters based on a second region of the captured image corresponding to the second region of interest.
  • each imaging condition is determined by an imaging pattern having different imaging parameters. Therefore, the inspection apparatus 20 can obtain more accurate inspection results by inspecting each inspection object using captured images captured under imaging conditions with different imaging parameters.
  • the inspection apparatus 20 includes an illumination device 30 that illuminates an inspection target area.
  • the program causes the processor 22 to do the following.
  • the program defines an illumination pattern including a plurality of illumination parameters for illuminating an area to be inspected by the illuminator. Thereby, the inspection device 20 can cause the illumination device 30 to illuminate the inspection target region with illumination patterns having different illumination parameters.
  • the program causes the processor 22 to perform the following.
  • the program causes the camera 21 to output, as the captured images of the inspection target area, a captured image of the inspection target area obtained by applying the first imaging condition for each of a plurality of illumination patterns and a captured image obtained by applying the second imaging condition for each of the plurality of illumination patterns.
  • the inspection apparatus 20 can inspect each inspection object using captured images captured with different illumination patterns, and can obtain more accurate inspection results.
  • the program causes the processor 22 to execute the following.
  • the program sets the first region of interest and the second region of interest based on either design information that defines the positions of a plurality of inspection objects or a captured image of the inspection object region. Thereby, the inspection apparatus 20 can set a plurality of attention areas in the inspection target area.
  • the inspection device 20 includes a camera 21 . Thereby, the inspection apparatus 20 can control the camera 21 to capture a captured image of the inspection target area.
  • the device (for example, inspection device 20) implements the following image processing method.
  • the apparatus sets, in an inspection target area including a plurality of inspection objects including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object.
  • the apparatus causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4) to capture the inspection target area.
  • the device extracts a first image region corresponding to the first region of interest 5 and a second image region corresponding to the second region of interest 5 from the captured image.
  • the apparatus performs a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
  • the apparatus performs a second inspection that inspects the second inspection object based on the learning model and the second image region.
  • the device outputs the result of the first inspection and the result of the second inspection.
  • the apparatus inspects the first inspection object using the first image area extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image area extracted from the captured image and outputs the result of the second inspection. Therefore, the apparatus can reduce the processing load of the processor 22 and the usage of the memory (for example, the RAM 23) compared to the case of directly inspecting the first inspection object and the second inspection object from the captured image.
  • the image processing program causes the processor 22 to do the following.
  • the image processing program sets, in an inspection target area including a plurality of inspection objects including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object.
  • the image processing program causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4) to capture the inspection target area.
  • the image processing program extracts a first image area corresponding to the first attention area and a second image area corresponding to the second attention area from the captured image.
  • the image processing program executes a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
  • the image processing program performs a second inspection for inspecting a second inspection object based on the learning model and the second image region.
  • the image processing program outputs the result of the first inspection and the result of the second inspection.
  • the image processing program inspects the first inspection object using the first image area extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image area extracted from the captured image and outputs the result of the second inspection. Therefore, the image processing program can reduce the processing load of the processor 22 and the usage of the memory (for example, the RAM 23) compared to the case of directly inspecting the first inspection object and the second inspection object from the captured image.
  • steps included in the processing disclosed in this specification do not necessarily have to be executed in chronological order according to the order described in the sequence diagrams and/or flowcharts.
  • steps in a process may be performed in an order different from the order illustrated in the sequence diagrams and/or flowcharts, and/or in parallel. It is also possible to omit some of the steps included in a process and/or to add additional steps to a process.
  • an apparatus or a module thereof (for example, an imaging control module, an optimum condition determination module, an inspection execution module, and/or an attention area extraction module) may be provided. Such a module may be implemented by a processor and/or a processing element equivalent to a processor (for example, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a CPLD (Complex Programmable Logic Device)). A computer-readable non-transitory recording medium storing a program that causes a processor to execute the processing described above may also be provided.
  • the technology of the present disclosure is useful for devices and the like that inspect whether or not each component is normally mounted on a board.

Abstract

An inspection device according to the present invention: executes a first inspection in which a first inspection object is inspected on the basis of a training model for detecting abnormalities in inspection objects, and a first region in a first captured image, the first region corresponding to the first region of interest, and in which a second inspection object is inspected on the basis of the training model and a second region in the first captured image, the second region corresponding to the second region of interest; executes a second inspection in which the first inspection object is inspected on the basis of the training model and a first region in a second captured image, the first region corresponding to the first region of interest, and in which the second inspection object is inspected on the basis of the training model and a second region in the second captured image, the second region corresponding to the second region of interest; and outputs the results of the first inspection and the second inspection.

Description

検査装置、検査方法、及び検査プログラムInspection device, inspection method, and inspection program
 本発明は、検査装置、検査方法、及び検査プログラムに関する。 The present invention relates to an inspection device, an inspection method, and an inspection program.
 近年、製造現場における検査の自動化を目的として、カメラで撮像した検査対象物の画像に基づいた製品の外観検査が行われている(例えば、特許文献1及び特許文献2参照)。 In recent years, with the aim of automating inspections at manufacturing sites, product appearance inspections have been performed based on images of inspection objects captured by cameras (see Patent Documents 1 and 2, for example).
 特許文献1には、複数の照明パターンを用いて撮像された画像群に基づいて、検査対象物の損傷や悪品等、いわゆる検査対象物の欠陥を検出するための最適な照明パターンを得る技術が開示されている。また、特許文献2には、検査対象物の欠陥を強調しつつ、検査対象物の良品ばらつきや個体ばらつきを抑制するための照明パターンを得る技術が開示されている。 Patent Literature 1 discloses a technique for obtaining an optimal illumination pattern for detecting so-called defects in an inspection object, such as damage and defective products, based on a group of images captured using a plurality of illumination patterns. is disclosed. Further, Patent Literature 2 discloses a technique of obtaining an illumination pattern for suppressing non-defective product variations and individual variations of inspection objects while emphasizing defects of the inspection objects.
欧州特許出願公開第2887055号明細書EP-A-2887055 日本国特開2021-507131号公報Japanese Patent Application Laid-Open No. 2021-507131
 特許文献1及び特許文献2に開示されている技術は、撮像された画像群に含まれる検査対象物の欠陥を検出するため、撮像された画像群に含まれる検査対象物の数が少ない画像に対して有効である。ところで、製造現場では、たくさんの部品を実装した基板に対する検査の自動化が望まれている。ここで、このような要望に対して、特許文献1及び特許文献2を適用して基板全体を一つの検査対象物とした照明パターンを取得すると仮定する。このような場合、基板に実装されている部品は色や素材、形状などが様々であることから、基板全体を一つの検査対象物としたときに得られる照明パターンが基板に実装されている各々の部品の検査に好適であるとは限らないため、所望の検査精度を得られない可能性がある。 The techniques disclosed in Patent Documents 1 and 2 detect defects in an inspection object included in a group of captured images. It is valid. By the way, at the manufacturing site, it is desired to automate the inspection of boards on which many components are mounted. Here, it is assumed that, in response to such a request, an illumination pattern is obtained by applying Patent Documents 1 and 2 to the entire substrate as one inspection target. In such a case, since the components mounted on the board vary in color, material, shape, etc., the illumination pattern obtained when the entire board is treated as one inspection object is Therefore, the desired inspection accuracy may not be obtained.
 本発明は、このような課題を解決するためになされたものであり、複数の検査対象物に対する外観検査の精度を向上することを目的とする。 The present invention has been made to solve such problems, and aims to improve the accuracy of appearance inspection for a plurality of inspection objects.
 本発明の一態様に係る検査装置は、1つ以上のプロセッサと、メモリと、前記メモリに保存されているプログラムと、を備え、前記プログラムは、第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定することと、前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させることと、前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行することと、前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行することと、前記第1検査の結果と前記第2検査の結果とを出力することと、を、前記1つ以上のプロセッサに実行させる。 An inspection apparatus according to an aspect of the present invention includes one or more processors, a memory, and a program stored in the memory, wherein the program stores a first inspection object and the first inspection object. In an inspection target area in which a plurality of inspection objects including a second inspection object different from the and a camera that captures the inspection target area and outputs the captured image of the inspection target area as the captured image of the inspection target area according to a first imaging condition capturing a first captured image and a second captured image captured under a second imaging condition different from the first imaging condition; and including the first inspection object and the second inspection object. inspecting the first inspection object based on a learning model for detecting anomalies in a plurality of inspection objects and a first region of the first captured image corresponding to the first region of interest; executing a first inspection for inspecting the second inspection object based on a model and a second region of the first captured image corresponding to the second region of interest; and the learning model and the first region of interest. and inspecting the first inspection object based on the first region of the second captured image corresponding to the learning model and the second region of the second captured image corresponding to the second region of interest and outputting results of the first inspection and results of the second inspection to the one or more processors. Let
 本発明の一態様に係る検査方法は、検査装置において、第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定し、前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させ、前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行し、前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行し、前記第1検査の結果と前記第2検査の結果とを出力する。 In an inspection method according to one aspect of the present invention, an inspection device: sets, in an inspection target area in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object; causes a camera that images the inspection target area and outputs a captured image of the inspection target area to capture, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition; executes a first inspection that inspects the first inspection object based on a learning model for detecting anomalies in the plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area; executes a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second attention area; and outputs a result of the first inspection and a result of the second inspection.
 本発明の一態様に係る検査プログラムは、第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定することと、前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させることと、前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行することと、前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行することと、前記第1検査の結果と前記第2検査の結果とを出力することと、を、プロセッサに実行させる。 An inspection program according to one aspect of the present invention causes a processor to execute: setting, in an inspection target area in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first attention area for inspecting the first inspection object and a second attention area for inspecting the second inspection object; causing a camera that images the inspection target area and outputs a captured image of the inspection target area to capture, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition; executing a first inspection that inspects the first inspection object based on a learning model for detecting anomalies in the plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second attention area; executing a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second attention area; and outputting a result of the first inspection and a result of the second inspection.
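For illustration only, the processing flow recited above can be sketched in Python as follows. This is a minimal sketch, not the claimed implementation; all names (capture, score, and so on) are hypothetical stand-ins for the camera control and the learning model.

    from typing import Callable, Dict, List, Tuple

    Region = Tuple[int, int, int, int]  # attention area as (x, y, width, height)

    def run_inspections(
            capture: Callable[[str], object],          # hypothetical: image the area under one condition
            score: Callable[[object, Region], float],  # hypothetical: learning model applied to one region
            conditions: List[str],                     # e.g. ["first condition", "second condition"]
            attention_areas: Dict[str, Region],        # e.g. {"first": ..., "second": ...}
    ) -> Dict[Tuple[str, str], float]:
        """For each imaging condition, capture one image and score every attention area."""
        results: Dict[Tuple[str, str], float] = {}
        for condition in conditions:          # first inspection, then second inspection
            image = capture(condition)        # first / second captured image
            for name, region in attention_areas.items():
                results[(condition, name)] = score(image, region)
        return results                        # results of the first and second inspections

Calling run_inspections with two conditions and two attention areas yields one evaluation per (imaging condition, attention area) pair, corresponding to the first and second inspections described above.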
 なお、これらの包括的又は具体的な態様は、システム、装置、方法、集積回路、コンピュータプログラム又は記録媒体で実現されてもよく、システム、装置、方法、集積回路、コンピュータプログラム及び記録媒体の任意な組み合わせで実現されてもよい。 These generic or specific aspects may be realized by a system, a device, a method, an integrated circuit, a computer program, or a recording medium, or by any combination of a system, a device, a method, an integrated circuit, a computer program, and a recording medium.
 本発明によれば、複数の検査対象物に対する外観検査の精度が向上する。 According to the present invention, the accuracy of appearance inspection for multiple inspection objects is improved.
図1は、実施の形態1に係る検査システムの構成例を示すブロック図である。 FIG. 1 is a block diagram showing a configuration example of an inspection system according to Embodiment 1.
図2は、実施の形態1に係る検査装置の機能構成の一例を示すブロック図である。 FIG. 2 is a block diagram showing an example of the functional configuration of an inspection device according to Embodiment 1.
図3は、実施の形態1に係る管理装置の機能構成の一例を示すブロック図である。 FIG. 3 is a block diagram showing an example of the functional configuration of a management device according to Embodiment 1.
図4は、実施の形態1に係るワークに設定される注目領域の一例を示す図である。 FIG. 4 is a diagram showing an example of attention areas set on a workpiece according to Embodiment 1.
図5は、実施の形態1に係る撮像パターンの一例を示す図である。 FIG. 5 is a diagram showing an example of an imaging pattern according to Embodiment 1.
図6は、実施の形態1に係る照明パターンの一例を示す図である。 FIG. 6 is a diagram showing an example of an illumination pattern according to Embodiment 1.
図7は、実施の形態1に係る検査システムにおける検査前の動作を示すシーケンスチャートである。 FIG. 7 is a sequence chart showing pre-inspection operations in the inspection system according to Embodiment 1.
図8は、実施の形態1に係る検査システムにおける検査の動作を示すシーケンスチャートである。 FIG. 8 is a sequence chart showing inspection operations in the inspection system according to Embodiment 1.
図9は、実施の形態1に係る検査前処理の一例を示すフローチャートである。 FIG. 9 is a flowchart showing an example of pre-inspection processing according to Embodiment 1.
図10は、図9に示す最適条件決定処理の詳細例を示すフローチャートである。 FIG. 10 is a flowchart showing a detailed example of the optimum condition determination processing shown in FIG. 9.
図11は、実施の形態1に係る検査処理の一例を示すフローチャートである。 FIG. 11 is a flowchart showing an example of inspection processing according to Embodiment 1.
図12は、実施の形態1に係る撮像パターン及び照明パターンを生成及び調整するためのUI画面の一例を示す模式図である。 FIG. 12 is a schematic diagram showing an example of a UI screen for generating and adjusting imaging patterns and illumination patterns according to Embodiment 1.
図13は、実施の形態1に係る注目領域を設定するためのUI画面の一例を示す模式図である。 FIG. 13 is a schematic diagram showing an example of a UI screen for setting attention areas according to Embodiment 1.
図14は、実施の形態1に係る注目領域情報の構成例を示す図である。 FIG. 14 is a diagram showing a configuration example of attention area information according to Embodiment 1.
図15は、実施の形態1に係るワークの検査結果を一覧で確認するためのUI画面の一例を示す模式図である。 FIG. 15 is a schematic diagram showing an example of a UI screen for confirming a list of workpiece inspection results according to Embodiment 1.
図16は、実施の形態1に係る検査結果を詳細に確認するためのUI画面の一例を示す模式図である。 FIG. 16 is a schematic diagram showing an example of a UI screen for confirming inspection results in detail according to Embodiment 1.
図17は、ワーク画像のサイズを、学習モデルに入力可能な画像のサイズに縮小する例を示す模式図である。 FIG. 17 is a schematic diagram showing an example of reducing the size of a workpiece image to an image size that can be input to the learning model.
図18は、複数のカメラでワークを、学習モデルに入力可能な画像のサイズに分割して撮像する例を示す模式図である。 FIG. 18 is a schematic diagram showing an example in which a plurality of cameras image a workpiece divided into image sizes that can be input to the learning model.
図19は、実施の形態2に係る検査装置の機能構成の一例を示すブロック図である。 FIG. 19 is a block diagram showing an example of the functional configuration of an inspection device according to Embodiment 2.
図20は、実施の形態2に係る合成画像の第1の生成方法の一例を示す図である。 FIG. 20 is a diagram showing an example of a first method for generating a synthesized image according to Embodiment 2.
図21は、実施の形態2に係る合成画像の第2の生成方法の一例を示す図である。 FIG. 21 is a diagram showing an example of a second method for generating a synthesized image according to Embodiment 2.
図22は、実施の形態2に係る検査前処理の一例を示すフローチャートである。 FIG. 22 is a flowchart showing an example of pre-inspection processing according to Embodiment 2.
図23は、図22に示す最適条件決定処理の一例を示すフローチャートである。 FIG. 23 is a flowchart showing an example of the optimum condition determination processing shown in FIG. 22.
図24は、実施の形態2に係る検査処理の一例を示すフローチャートである。 FIG. 24 is a flowchart showing an example of inspection processing according to Embodiment 2.
図25は、実施の形態2に係る検査処理の変形例を示すフローチャートである。 FIG. 25 is a flowchart showing a modification of the inspection processing according to Embodiment 2.
 以下、図面を適宜参照して、本開示の実施の形態について、詳細に説明する。ただし、必要以上に詳細な説明は省略する場合がある。例えば、すでによく知られた事項の詳細説明及び実質的に同一の構成に対する重複説明を省略する場合がある。これは、以下の説明が不必要に冗長になるのを避け、当業者の理解を容易にするためである。なお、添付図面及び以下の説明は、当業者が本開示を十分に理解するために提供されるのであって、これらにより特許請求の記載の主題を限定することは意図されていない。 Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings as appropriate. However, more detailed description than necessary may be omitted. For example, detailed descriptions of well-known matters and redundant descriptions of substantially the same configurations may be omitted. This is to avoid unnecessary verbosity in the following description and to facilitate understanding by those skilled in the art. It should be noted that the accompanying drawings and the following description are provided to allow those skilled in the art to fully understand the present disclosure and are not intended to limit the subject matter of the claims.
(実施の形態1) (Embodiment 1)
<検査システムの構成> <Configuration of inspection system>
 図1は、実施の形態1に係る検査システム10の構成例を示すブロック図である。 FIG. 1 is a block diagram showing a configuration example of an inspection system 10 according to Embodiment 1.
 従来、基板1(図4参照)に部品2(図4参照)が正常に装着されているか否かを検査員が目視で検査する外観検査が行われている。しかし、検査員による目視での外観検査は、検査員の負担が大きく、検査員によるばらつきも発生し、時間もかかる。そのため、外観検査の自動化が求められている。本実施の形態に係る検査システム10は、基板1に部品2が正常に装着されているか否かを自動的に検査するためのシステムである。本実施の形態において、検査対象である、部品2が装着された基板1をワーク3(図4参照)と称する。なお、基板1に装着された検査対象物である複数の部品2のうち、1つの部品2aを、第1検査対象物と称し、他の1つの部品2bを、第2検査対象物と称してもよい。 Conventionally, appearance inspection has been performed in which an inspector visually checks whether components 2 (see FIG. 4) are normally mounted on a board 1 (see FIG. 4). However, visual appearance inspection by an inspector places a heavy burden on the inspector, produces variation between inspectors, and takes time. Therefore, automation of appearance inspection is required. The inspection system 10 according to the present embodiment is a system for automatically inspecting whether or not the components 2 are normally mounted on the board 1. In the present embodiment, the board 1 to be inspected, on which the components 2 are mounted, is referred to as a workpiece 3 (see FIG. 4). Among the plurality of components 2 mounted on the board 1 as inspection objects, one component 2a may be referred to as a first inspection object, and another component 2b may be referred to as a second inspection object.
 図1に示すように、検査システム10は、検査装置20と、照明装置30と、検出センサ19と、管理装置40と、入力装置50と、スピーカ52と、表示装置60と、パトライト(登録商標)62とを含む。 As shown in FIG. 1, the inspection system 10 includes an inspection device 20, an illumination device 30, a detection sensor 19, a management device 40, an input device 50, a speaker 52, a display device 60, and a Patlite (registered trademark) 62.
 検査装置20は、検査対象のワーク3を撮像し、その撮像画像に基づいて、基板1に部品2が正常に装着されているか否かを検査する。以下、ワーク3を撮像した撮像画像を、ワーク画像4(図4参照)と称する。 The inspection device 20 captures an image of the workpiece 3 to be inspected, and inspects whether or not the component 2 is normally mounted on the board 1 based on the captured image. A captured image of the workpiece 3 is hereinafter referred to as a workpiece image 4 (see FIG. 4).
 照明装置30は、検査装置20がワーク3を撮像する際に当該ワーク3を照射する。照明装置30は、所定のケーブル12を介して、検査装置20に接続される。なお、照明装置30は、ライトと読み替えられてもよい。 The illumination device 30 illuminates the work 3 when the inspection device 20 images the work 3 . The lighting device 30 is connected to the inspection device 20 via a predetermined cable 12 . Note that the illumination device 30 may be read as a light.
 管理装置40は、検査装置20の操作及び検査結果の表示等を行う。管理装置40は、例えば、PC(Personal Computer)に代表される情報処理装置である。管理装置40は、所定の通信ネットワーク11を介して検査装置20に接続される。通信ネットワーク11は、有線LAN又は無線LANのいずれであってもよい。 The management device 40 operates the inspection device 20 and displays inspection results. The management device 40 is, for example, an information processing device represented by a PC (Personal Computer). The management device 40 is connected to the inspection device 20 via a predetermined communication network 11 . The communication network 11 may be either a wired LAN or a wireless LAN.
 入力装置50は、ユーザから入力操作を受け付ける。入力装置50の例として、キーボード、マウス、タッチパッド、又は、マイク等が挙げられる。入力装置50は、所定のケーブル13又は無線通信(例えばBluetooth(登録商標))を介して管理装置40に接続される。なお、入力装置50は、検査装置20に接続されてもよい。 The input device 50 receives input operations from the user. Examples of the input device 50 include a keyboard, mouse, touchpad, microphone, or the like. The input device 50 is connected to the management device 40 via a predetermined cable 13 or wireless communication (for example, Bluetooth (registered trademark)). Note that the input device 50 may be connected to the inspection device 20 .
 スピーカ52は、検査に関する音声を出力する。スピーカ52は、所定のケーブル16を介して管理装置40に接続される。なお、スピーカ52は、検査装置20に接続されてもよい。 The speaker 52 outputs audio related to the inspection. The speaker 52 is connected to the management device 40 via a predetermined cable 16. Note that the speaker 52 may be connected to the inspection device 20.
 表示装置60は、検査に関する画面を表示する。表示装置60の例として、LCD(Liquid Crystal Display)、又は、有機EL(Electro Luminescence)ディスプレイ等が挙げられる。表示装置60は、所定のケーブル14を介して管理装置40に接続される。なお、表示装置60は、検査装置20に接続されてもよい。なお、入力装置50、スピーカ52及び表示装置60は、一体の装置(例えばタブレット端末等)であってもよい。あるいは、管理装置40、入力装置50、スピーカ52、及び、表示装置60は、一体の装置であってもよい。 The display device 60 displays a screen regarding inspection. Examples of the display device 60 include an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display. A display device 60 is connected to the management device 40 via a predetermined cable 14 . Note that the display device 60 may be connected to the inspection device 20 . Note that the input device 50, the speaker 52, and the display device 60 may be an integrated device (such as a tablet terminal, for example). Alternatively, the management device 40, the input device 50, the speaker 52, and the display device 60 may be an integrated device.
 検出センサ19は、検査対象のワーク3の到着を検出するためのセンサである。検出センサ19は、所定のケーブル15を介して検査装置20に接続される。なお、検出センサ19は、管理装置40に接続されてもよい。 The detection sensor 19 is a sensor for detecting the arrival of the workpiece 3 to be inspected. The detection sensor 19 is connected to the inspection device 20 via a predetermined cable 15 . Note that the detection sensor 19 may be connected to the management device 40 .
 パトライト62は、所定のケーブル17を介して検査装置20に接続される。パトライト62は、検査装置20から検査結果に応じて送信される所定の信号を受信した場合、点滅する。例えば、検査装置20は、異常に装着されている部品2を検出した場合、所定の信号をパトライト62に送信する。パトライト62は、当該信号を受信し、点滅する。これにより、ユーザは、パトライト62の点滅を見て、検査装置20において異常に装着されている部品2が検出されたことを直ちに知ることができる。 The patrol light 62 is connected to the inspection device 20 via a predetermined cable 17. The patrol light 62 blinks when receiving a predetermined signal transmitted according to the inspection result from the inspection device 20 . For example, the inspection apparatus 20 transmits a predetermined signal to the patrol light 62 when the abnormally attached component 2 is detected. The patrol light 62 receives the signal and blinks. Accordingly, the user can immediately know that the inspection apparatus 20 has detected an abnormally mounted component 2 by seeing the flashing of the patrol light 62 .
 なお、本実施の形態では、検査装置20、照明装置30、及び、管理装置40を別の装置としている。しかし、検査装置20は照明装置30を含んでもよい。あるいは、検査装置20は管理装置40を含んでもよい。あるいは、検査装置20は、照明装置30及び管理装置40を含んでもよい。 Note that, in the present embodiment, the inspection device 20, the lighting device 30, and the management device 40 are separate devices. However, inspection device 20 may include illumination device 30 . Alternatively, inspection device 20 may include management device 40 . Alternatively, inspection device 20 may include illumination device 30 and management device 40 .
<検査装置のハードウェア構成> <Hardware configuration of inspection device>
 検査装置20は、カメラ21と、1つ以上のプロセッサ22と、ROM(Read Only Memory)23と、RAM(Random Access Memory)24と、ストレージ25と、通信I/F(Interface)26と、入出力I/F27とを含む。カメラ21と、プロセッサ22と、ROM23と、RAM24と、ストレージ25と、通信I/F26と、入出力I/F27とは、双方向通信可能なバス(図示しない)を介して接続される。 The inspection device 20 includes a camera 21, one or more processors 22, a ROM (Read Only Memory) 23, a RAM (Random Access Memory) 24, a storage 25, a communication I/F (Interface) 26, and an input/output I/F 27. The camera 21, the processor 22, the ROM 23, the RAM 24, the storage 25, the communication I/F 26, and the input/output I/F 27 are connected via a bus (not shown) capable of two-way communication.
 カメラ21は、例えばレンズと撮像素子とを含む。カメラ21は、検査対象のワーク3を撮像し、ワーク画像4を生成する。カメラ21は、撮像装置又は撮像部といった他の用語に読み替えられてもよい。 The camera 21 includes, for example, a lens and an imaging device. The camera 21 captures an image of the workpiece 3 to be inspected and generates a workpiece image 4 . The camera 21 may be read as another term such as an imaging device or an imaging unit.
 プロセッサ22は、検査装置20全体の動作を制御する。プロセッサ22は、演算手段、CPU(Central Processing Unit)、又は、コントローラといった他の用語に読み替えられてもよい。 The processor 22 controls the operation of the inspection device 20 as a whole. The processor 22 may be read as other terms such as computing means, CPU (Central Processing Unit), or controller.
 ROM23は、読み出し専用の不揮発性の記憶媒体であり、ファームウェア等のプログラムが格納される。 The ROM 23 is a read-only non-volatile storage medium in which programs such as firmware are stored.
 RAM24は、情報の高速な読み書きが可能な揮発性の記憶媒体であり、プロセッサ22が情報を処理する際の作業領域として用いられる。なお、RAM24は、単にメモリと読み替えられてもよい。 The RAM 24 is a volatile storage medium that enables high-speed reading and writing of information, and is used as a work area when the processor 22 processes information. Note that the RAM 24 may simply be read as a memory.
 ストレージ25は、情報の読み書きが可能な不揮発性の記憶媒体であり、各種の制御プログラム、アプリケーションプログラム、及び、学習モデル110(図2参照)等が格納される。ストレージ25は、例えば、フラッシュメモリ、又は、SDカード等によって構成される。 The storage 25 is a non-volatile storage medium from which information can be read and written, and stores various control programs, application programs, learning models 110 (see FIG. 2), and the like. The storage 25 is configured by, for example, a flash memory or an SD card.
 通信I/F26は、検査装置20を通信ネットワーク11に接続するためのインタフェースである。通信I/F26は、有線LAN又は無線LANのいずれに対応するインタフェースであってもよい。 The communication I/F 26 is an interface for connecting the inspection device 20 to the communication network 11. The communication I/F 26 may be an interface compatible with either a wired LAN or a wireless LAN.
 入出力I/F27は、照明装置30及び検出センサ19を接続するためのインタフェースである。なお、入出力I/F27には、入力装置50及び/又は表示装置60が接続されてもよい。 The input/output I/F 27 is an interface for connecting the lighting device 30 and the detection sensor 19 . In addition, the input device 50 and/or the display device 60 may be connected to the input/output I/F 27 .
 プロセッサ22が、ROM23に格納されたプログラム、又は、ストレージ25からRAM24にロードされたプログラムを実行することにより、後述する、検査装置20が有する各種機能が実現される。 By the processor 22 executing the program stored in the ROM 23 or the program loaded from the storage 25 to the RAM 24, various functions of the inspection device 20, which will be described later, are realized.
<管理装置のハードウェア構成> <Hardware configuration of management device>
 管理装置40は、1つ以上のプロセッサ41と、ROM42と、RAM43と、ストレージ44と、通信I/F45と、入出力I/F46とを含む。プロセッサ41と、ROM42と、RAM43と、ストレージ44と、通信I/F45と、入出力I/F46とは、双方向通信可能なバス(図示しない)を介して接続される。 The management device 40 includes one or more processors 41, a ROM 42, a RAM 43, a storage 44, a communication I/F 45, and an input/output I/F 46. The processor 41, the ROM 42, the RAM 43, the storage 44, the communication I/F 45, and the input/output I/F 46 are connected via a bus (not shown) capable of two-way communication.
 プロセッサ41は、管理装置40全体の動作を制御する。 The processor 41 controls the overall operation of the management device 40.
 ROM42は、読み出し専用の不揮発性の記憶媒体であり、ファームウェア等のプログラムが格納されている。 The ROM 42 is a read-only non-volatile storage medium in which programs such as firmware are stored.
 RAM43は、情報の高速な読み書きが可能な揮発性の記憶媒体であり、プロセッサ41が情報を処理する際の作業領域として用いられる。なお、RAM43は、単にメモリと読み替えられてもよい。 The RAM 43 is a volatile storage medium that enables high-speed reading and writing of information, and is used as a work area when the processor 41 processes information. Note that the RAM 43 may simply be read as a memory.
 ストレージ44は、情報の読み書きが可能な不揮発性の記憶媒体であり、OS(Operating System)、各種の制御プログラム、アプリケーションプログラム、及び、学習モデル110(図3参照)等が格納される。ストレージ44は、例えば、フラッシュメモリ、SSD(Solid State Drive)、又は、HDD(Hard Disk Drive)等によって構成される。 The storage 44 is a non-volatile storage medium from which information can be read and written, and stores an OS (Operating System), various control programs, application programs, learning models 110 (see FIG. 3), and the like. The storage 44 is configured by, for example, flash memory, SSD (Solid State Drive), or HDD (Hard Disk Drive).
 通信I/F45は、管理装置40を通信ネットワーク11に接続するためのインタフェースである。通信I/F45は、有線LAN又は無線LANのいずれに対応するインタフェースであってもよい。 The communication I/F 45 is an interface for connecting the management device 40 to the communication network 11. The communication I/F 45 may be an interface compatible with either a wired LAN or a wireless LAN.
 入出力I/F46は、入力装置50及び/又は表示装置60を接続するためのインタフェースである。 The input/output I/F 46 is an interface for connecting the input device 50 and/or the display device 60.
 プロセッサ41が、ROM42に格納されたプログラム、又は、ストレージ44からRAM43にロードされたプログラムを実行することにより、後述する、管理装置40が有する各種機能が実現される。なお、管理装置40は、画像描画を高速に処理するためのGPU(Graphics Processing Unit)を備えてもよい。 By the processor 41 executing a program stored in the ROM 42 or a program loaded from the storage 44 to the RAM 43, various functions of the management device 40, which will be described later, are realized. Note that the management device 40 may include a GPU (Graphics Processing Unit) for processing image drawing at high speed.
<照明装置のハードウェア構成> <Hardware Configuration of Lighting Device>
 照明装置30は、LED(Light Emitting Diode)光源31と、入出力I/F32と、調光制御回路33とを含む。 The illumination device 30 includes an LED (Light Emitting Diode) light source 31, an input/output I/F 32, and a dimming control circuit 33.
 LED光源31は、複数のLEDを含んで構成され、発光可能である。照明装置30は、互いに形状の異なる複数のLED光源31を有してよい。例えば、照明装置30は、バー形状のLED光源31と、マルチアングル形状のLED光源31と、ドーム形状のLED光源31と、バックライト形状のLED光源31とを備えてよい。さらに、照明装置30は、赤外線を発光するLED光源31を備えてもよい。 The LED light source 31 includes a plurality of LEDs and is capable of emitting light. The illumination device 30 may have a plurality of LED light sources 31 with different shapes. For example, the illumination device 30 may include a bar-shaped LED light source 31 , a multi-angle-shaped LED light source 31 , a dome-shaped LED light source 31 , and a backlight-shaped LED light source 31 . Furthermore, the illumination device 30 may include an LED light source 31 that emits infrared rays.
 入出力I/F32は、検査装置20を接続するためのインタフェースである。 The input/output I/F 32 is an interface for connecting the inspection device 20 .
 調光制御回路33は、入出力I/F32を通じて受信した検査装置20からの指示に基づいて、LED光源31の発光を制御する。例えば、調光制御回路33は、いずれのLED光源31を発光させるか、LED光源31の照明の色及び強度等を制御する。 The dimming control circuit 33 controls light emission of the LED light source 31 based on instructions from the inspection device 20 received through the input/output I/F 32 . For example, the dimming control circuit 33 controls which LED light source 31 is to emit light, the color and intensity of illumination of the LED light source 31, and the like.
<検査装置の機能構成> <Functional configuration of inspection device>
 図2は、実施の形態1に係る検査装置20の機能構成の一例を示すブロック図である。 FIG. 2 is a block diagram showing an example of the functional configuration of the inspection device 20 according to Embodiment 1.
 検査装置20は、撮像制御部101と、最適条件決定部102と、検査実行部103と、注目領域格納部104と、撮像条件格納部105と、最適撮像条件格納部106と、学習モデル格納部107とを有する。撮像制御部101、最適条件決定部102、及び、検査実行部103の機能は、プロセッサ22がRAM24(メモリ)等と協働してコンピュータプログラム(検査プログラム)を実行することにより、実現されてよい。撮像条件格納部105、最適撮像条件格納部106、及び、学習モデル格納部107の機能は、RAM24(メモリ)及び/又はストレージ25によって実現されてよい。 The inspection device 20 includes an imaging control unit 101, an optimum condition determination unit 102, an inspection execution unit 103, an attention area storage unit 104, an imaging condition storage unit 105, an optimum imaging condition storage unit 106, and a learning model storage unit 107. The functions of the imaging control unit 101, the optimum condition determination unit 102, and the inspection execution unit 103 may be realized by the processor 22 executing a computer program (inspection program) in cooperation with the RAM 24 (memory) and the like. The functions of the imaging condition storage unit 105, the optimum imaging condition storage unit 106, and the learning model storage unit 107 may be realized by the RAM 24 (memory) and/or the storage 25.
 注目領域格納部104は、検査対象のワーク3に設定される複数の注目領域5(図4参照)に関する情報(以下、注目領域情報と称する)を格納する。注目領域5は、検査対象であるワーク3の検査対象物である部品2を囲んで設定される領域である。ワーク3における検査対象物である複数の部品2が存在する領域を、検査対象領域と称してもよい。なお、注目領域5の詳細については後述する(図4参照)。 The attention area storage unit 104 stores information (hereinafter referred to as attention area information) regarding a plurality of attention areas 5 (see FIG. 4) set on the workpiece 3 to be inspected. An attention area 5 is a region set so as to surround a component 2, which is an inspection object, on the workpiece 3 to be inspected. A region of the workpiece 3 in which the plurality of components 2 as inspection objects exist may be referred to as an inspection target area. Details of the attention areas 5 will be described later (see FIG. 4).
 撮像条件格納部105は、複数の撮像条件に関する情報を格納する。撮像条件には、カメラ21の撮像に関する情報と、照明装置30の照明に関する情報とが含まれる。なお、撮像条件の詳細については後述する(図5及び図6参照)。 The imaging condition storage unit 105 stores information regarding a plurality of imaging conditions. The imaging conditions include information about imaging by the camera 21 and information about illumination by the illumination device 30. Details of the imaging conditions will be described later (see FIGS. 5 and 6).
 最適撮像条件格納部106は、撮像条件格納部105に格納されている複数の撮像条件のうち、検査対象のワーク3に最適な撮像条件(以下、最適撮像条件と称する)を格納する。なお、最適撮像条件の詳細については後述する(図9及び図10参照)。 The optimum imaging condition storage unit 106 stores the optimum imaging condition (hereinafter referred to as the optimum imaging condition) for the workpiece 3 to be inspected among the plurality of imaging conditions stored in the imaging condition storage unit 105 . Details of the optimum imaging conditions will be described later (see FIGS. 9 and 10).
 学習モデル格納部107は、ワーク3を撮像したワーク画像4の注目領域5において、検査対象物である部品2が正常に装着されているか否かを検出するために用いられる学習モデル110を格納する。 The learning model storage unit 107 stores a learning model 110 used for detecting whether or not the component 2, which is the inspection object, is normally mounted in the attention area 5 of the work image 4 obtained by imaging the work 3. .
 撮像制御部101は、カメラ21及び照明装置30を制御して、検査対象のワーク3を撮像し、ワーク画像4を生成する。撮像制御部101は、ワーク画像4の画質調整を行ってもよい。 The imaging control unit 101 controls the camera 21 and the lighting device 30 to image the workpiece 3 to be inspected and generate the workpiece image 4 . The imaging control unit 101 may adjust the image quality of the workpiece image 4 .
 最適条件決定部102は、撮像条件格納部105に格納されている複数の撮像条件のうち、検査対象のワーク画像4の各注目領域5に最適な撮像条件(最適撮像条件)を決定する。最適条件決定部102は、決定した最適撮像条件を最適撮像条件格納部106に格納する。 The optimum condition determining unit 102 determines the optimum imaging condition (optimum imaging condition) for each attention area 5 of the workpiece image 4 to be inspected from among the plurality of imaging conditions stored in the imaging condition storage unit 105 . The optimal condition determination unit 102 stores the determined optimal imaging conditions in the optimal imaging condition storage unit 106 .
 検査実行部103は、最適撮像条件格納部106に格納されている最適撮像条件に基づいて撮像制御部101が撮像したワーク画像4を用いて、各注目領域5に対応する領域において部品2が正常に装着されているか否かを検査する。検査実行部103は、学習モデル格納部107に格納されている学習モデル110に対して、ワーク画像4の注目領域5に対応する領域の画像を入力することにより、当該検査を実行する。 The inspection execution unit 103 uses the workpiece images 4 captured by the imaging control unit 101 based on the optimum imaging conditions stored in the optimum imaging condition storage unit 106 to inspect whether or not the component 2 is normally mounted in the region corresponding to each attention area 5. The inspection execution unit 103 executes this inspection by inputting the image of the region corresponding to the attention area 5 of the workpiece image 4 into the learning model 110 stored in the learning model storage unit 107.
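As a non-authoritative illustration of this step, the sketch below crops the region of the workpiece image 4 corresponding to one attention area 5 and passes it to a model. The (x, y, w, h) region format, the callable model interface, and the threshold judgment are assumptions, since the publication does not fix them.

    import numpy as np

    def inspect_region(work_image: np.ndarray, region, model, threshold: float):
        """Crop the part of workpiece image 4 corresponding to one attention area 5
        and let the learning model judge it. `region` is assumed to be (x, y, w, h)."""
        x, y, w, h = region
        attention_image = work_image[y:y + h, x:x + w]   # attention area image
        evaluation = model(attention_image)              # larger value = more likely abnormal
        is_normal = evaluation <= threshold              # judgment against a threshold (assumption)
        return evaluation, is_normal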
<管理装置の機能構成> <Functional configuration of management device>
 図3は、実施の形態1に係る管理装置40の機能構成の一例を示すブロック図である。 FIG. 3 is a block diagram showing an example of the functional configuration of the management device 40 according to Embodiment 1.
 管理装置40は、機能として、注目領域設定部201と、撮像パターン生成部202と、照明パターン生成部203と、撮像条件生成部204と、学習モデル生成部205と、UI制御部206と、注目領域格納部207と、撮像パターン格納部208と、照明パターン格納部209と、撮像条件格納部210と、学習モデル格納部211と、検査結果格納部212とを有する。注目領域設定部201、撮像パターン生成部202、照明パターン生成部203、撮像条件生成部204、学習モデル生成部205、及び、UI制御部206の機能は、プロセッサ41がRAM43(メモリ)等と協働してコンピュータプログラム(検査プログラム)を実行することにより、実現されてよい。注目領域格納部207、撮像パターン格納部208、照明パターン格納部209、撮像条件格納部210、学習モデル格納部211、及び、検査結果格納部212の機能は、RAM43(メモリ)及び/又はストレージ44によって実現されてよい。 The management device 40 has, as functions, an attention area setting unit 201, an imaging pattern generation unit 202, an illumination pattern generation unit 203, an imaging condition generation unit 204, a learning model generation unit 205, a UI control unit 206, an attention area storage unit 207, an imaging pattern storage unit 208, an illumination pattern storage unit 209, an imaging condition storage unit 210, a learning model storage unit 211, and an inspection result storage unit 212. The functions of the attention area setting unit 201, the imaging pattern generation unit 202, the illumination pattern generation unit 203, the imaging condition generation unit 204, the learning model generation unit 205, and the UI control unit 206 may be realized by the processor 41 executing a computer program (inspection program) in cooperation with the RAM 43 (memory) and the like. The functions of the attention area storage unit 207, the imaging pattern storage unit 208, the illumination pattern storage unit 209, the imaging condition storage unit 210, the learning model storage unit 211, and the inspection result storage unit 212 may be realized by the RAM 43 (memory) and/or the storage 44.
 注目領域格納部207は、各ワーク3に設定される複数の注目領域5に関する情報(注目領域情報)を格納する。 The region-of-interest storage unit 207 stores information (region-of-interest information) on a plurality of regions of interest 5 set for each work 3 .
 撮像パターン格納部208は、複数の撮像パターンを格納する。なお、撮像パターンの詳細については後述する(図5参照)。 The imaging pattern storage unit 208 stores a plurality of imaging patterns. Details of the imaging pattern will be described later (see FIG. 5).
 照明パターン格納部209は、複数の照明パターンを格納する。なお、照明パターンの詳細については後述する(図6参照)。 The illumination pattern storage unit 209 stores a plurality of illumination patterns. Details of the illumination pattern will be described later (see FIG. 6).
 撮像条件格納部210は、複数の撮像条件に関する情報を格納する。なお、撮像条件の詳細については後述する。 The imaging condition storage unit 210 stores information regarding a plurality of imaging conditions. Details of the imaging conditions will be described later.
 学習モデル格納部211は、ワーク画像4の注目領域5において部品2が正常に装着されているか否かを検出するために用いられる学習モデル110を格納する。 The learning model storage unit 211 stores the learning model 110 used to detect whether or not the component 2 is normally mounted in the attention area 5 of the work image 4 .
 検査結果格納部212は、検査装置20によって検査されたワーク3の検査結果を示す情報(以下、ワーク検査結果情報と称する)を格納する。 The inspection result storage unit 212 stores information indicating the inspection result of the work 3 inspected by the inspection device 20 (hereinafter referred to as work inspection result information).
 注目領域設定部201は、ワーク3の検査対象物である部品2を囲むように注目領域5を設定し、その注目領域5に関する情報(注目領域情報)を注目領域格納部207に格納する。また、注目領域設定部201は、注目領域格納部207から、検査対象のワーク3に対応付けられている注目領域情報を取得し、検査装置20に送信する。検査装置20は、送信された注目領域情報を注目領域格納部104に格納する。 The attention area setting unit 201 sets the attention area 5 so as to surround the part 2 which is the inspection object of the workpiece 3, and stores information (attention area information) on the attention area 5 in the attention area storage unit 207. The attention area setting unit 201 also acquires attention area information associated with the workpiece 3 to be inspected from the attention area storage unit 207 and transmits the attention area information to the inspection apparatus 20 . The inspection apparatus 20 stores the transmitted attention area information in the attention area storage unit 104 .
 撮像パターン生成部202は、複数の撮像パターンを生成し、撮像パターン格納部208に格納する。なお、撮像パターンの詳細については後述する(図5参照)。 The imaging pattern generation unit 202 generates a plurality of imaging patterns and stores them in the imaging pattern storage unit 208 . Details of the imaging pattern will be described later (see FIG. 5).
 照明パターン生成部203は、複数の照明パターンを生成し、照明パターン格納部209に格納する。なお、照明パターンの詳細については後述する(図6参照)。 The illumination pattern generation unit 203 generates a plurality of illumination patterns and stores them in the illumination pattern storage unit 209. Details of the illumination patterns will be described later (see FIG. 6).
 撮像条件生成部204は、撮像パターン格納部208に格納されている撮像パターンと、照明パターン格納部209に格納されている照明パターンとを組み合わせて撮像条件を生成する。撮像条件生成部204は、生成した複数の撮像条件を撮像条件格納部210に格納する。また、撮像条件生成部204は、撮像条件格納部210に格納されている複数の撮像条件を、検査装置20に送信する。検査装置20は、管理装置40から受信した撮像条件を撮像条件格納部105に格納する。 The imaging condition generation unit 204 generates imaging conditions by combining the imaging pattern stored in the imaging pattern storage unit 208 and the illumination pattern stored in the illumination pattern storage unit 209 . The imaging condition generation unit 204 stores the generated imaging conditions in the imaging condition storage unit 210 . The imaging condition generation unit 204 also transmits a plurality of imaging conditions stored in the imaging condition storage unit 210 to the inspection device 20 . The inspection device 20 stores the imaging conditions received from the management device 40 in the imaging condition storage unit 105 .
 学習モデル生成部205は、注目領域5において部品2が正常に装着されているか否かを検査するために用いられる学習モデル110の生成及び学習を行う。例えば、学習モデル生成部205は、部品2が正常に装着されている複数のワーク画像4の注目領域5における画像(以下、注目領域画像と称する)の特徴量を用いて、学習モデル110の学習を行う。学習モデル110は、注目領域画像が入力された場合、その入力された注目領域画像の特徴量が、部品2が正常に装着されている注目領域画像の特徴量と比較してどのくらい異なっているかを評価値として出力する。学習モデル110は、注目領域画像が入力されると、当該注目領域画像に含まれる部品2が正常に装着されている可能性が高いほど、小さな評価値を出力(つまり推論)する。別言すると、学習モデル110は、注目領域画像が入力されると、当該注目領域画像に含まれる部品2が異常に装着されている可能性が高いほど、大きな評価値を出力(つまり推論)する。学習モデル生成部205は、学習済みの学習モデル110を、学習モデル格納部211に格納する。また、学習モデル生成部205は、学習済みの学習モデル110を検査装置20に送信する。検査装置20は、管理装置40から受信した学習モデル110を学習モデル格納部107に格納する。学習モデル110は、画像解析に係るニューラルネットワーク、ディープニューラルネットワーク、又は、CNN(Convolutional Neural Network)として構成されてよい。ただし、学習モデル110は、これらに限られず、様々な人工知能技術又は機械学習技術に基づいて構成されてよい。なお、本実施の形態では、上述のとおり、部品2が異常に装着されている可能性が高いほど大きな評価値を出力する学習モデル110を用いる場合を例示している。しかし、本実施の形態は、部品2が正常に装着されている可能性が高いほど大きな評価値を出力する学習モデル110を用いて構成されてもよい。また、本実施の形態では、管理装置40が学習モデル生成部205を有しているが、検査装置20が学習モデル生成部205を有してもよい。 The learning model generation unit 205 generates and trains the learning model 110 used to inspect whether or not the component 2 is normally mounted in the attention area 5. For example, the learning model generation unit 205 trains the learning model 110 using feature amounts of images in the attention areas 5 of a plurality of workpiece images 4 in which the components 2 are normally mounted (hereinafter referred to as attention area images). When an attention area image is input, the learning model 110 outputs, as an evaluation value, how much the feature amount of the input attention area image differs from the feature amount of an attention area image in which the component 2 is normally mounted. When an attention area image is input, the learning model 110 outputs (that is, infers) a smaller evaluation value as the probability that the component 2 included in the attention area image is normally mounted is higher. In other words, when an attention area image is input, the learning model 110 outputs (that is, infers) a larger evaluation value as the probability that the component 2 included in the attention area image is abnormally mounted is higher. The learning model generation unit 205 stores the trained learning model 110 in the learning model storage unit 211, and also transmits the trained learning model 110 to the inspection device 20. The inspection device 20 stores the learning model 110 received from the management device 40 in the learning model storage unit 107. The learning model 110 may be configured as a neural network for image analysis, a deep neural network, or a CNN (Convolutional Neural Network). However, the learning model 110 is not limited to these, and may be configured based on various artificial intelligence or machine learning techniques. As described above, the present embodiment exemplifies the case of using a learning model 110 that outputs a larger evaluation value as the probability that the component 2 is abnormally mounted is higher. However, the present embodiment may also be configured using a learning model 110 that outputs a larger evaluation value as the probability that the component 2 is normally mounted is higher. Further, although the management device 40 has the learning model generation unit 205 in the present embodiment, the inspection device 20 may have the learning model generation unit 205.
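The publication leaves the internals of the learning model 110 open. Purely as an assumption, one simple scorer with the stated behavior (small evaluation value near normal examples, large otherwise) compares a feature vector of the input attention area image with the mean feature vector of the normal training images:

    import numpy as np

    class MeanFeatureScorer:
        """Toy stand-in for the learning model 110: the evaluation value is the distance
        between an input feature vector and the mean feature vector of normal images."""

        def fit(self, normal_features: np.ndarray) -> None:
            # normal_features: one row per attention area image with a normally mounted component
            self.mean_feature = normal_features.mean(axis=0)

        def evaluation_value(self, features: np.ndarray) -> float:
            # Small when the input resembles the normal examples, large otherwise.
            return float(np.linalg.norm(features - self.mean_feature))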
 UI制御部206は、検査に関するUI画面を生成し、表示装置60に表示する。また、UI制御部206は、入力装置50からの入力を受け付けて、各種情報の入力及び表示を制御する。例えば、UI制御部206は、検査装置20から送信されるワーク検査結果情報に基づいて、ワーク3の検査結果を示すUI画面を生成し、表示装置60に表示する。これにより、ユーザは、ワーク3に対する検査結果を視認できる。また、UI制御部206は、検査装置20から送信されるワーク検査結果情報を、検査結果格納部212に格納する。なお、UI制御部206の詳細については後述する(図12、図13、図15、図16参照)。 The UI control unit 206 generates a UI screen regarding examination and displays it on the display device 60 . Also, the UI control unit 206 receives input from the input device 50 and controls input and display of various information. For example, the UI control unit 206 generates a UI screen showing the inspection result of the work 3 based on the work inspection result information transmitted from the inspection device 20 and displays it on the display device 60 . Thereby, the user can visually recognize the inspection result for the workpiece 3 . Also, the UI control unit 206 stores workpiece inspection result information transmitted from the inspection apparatus 20 in the inspection result storage unit 212 . Details of the UI control unit 206 will be described later (see FIGS. 12, 13, 15, and 16).
<注目領域の詳細> <Details of attention area>
 図4は、実施の形態1に係るワーク3に設定される注目領域5の一例を示す図である。 FIG. 4 is a diagram showing an example of the attention areas 5 set on the workpiece 3 according to Embodiment 1.
 図4に示すように、基板1には複数の部品2が装着される。注目領域設定部201は、ワーク3における検査対象物である部品2(例えば部品2a、2b)を囲むように注目領域5(例えば注目領域5a、5b)を設定する。部品2aは第1検査対象物と読み替え、部品2aを囲む注目領域5aは第1注目領域と読み替えられてもよい。部品2bは第2検査対象物と読み替え、部品2bを囲む注目領域5bは第2注目領域と読み替えられてもよい。注目領域5の形状は矩形であってよい。ただし、注目領域5の形状は矩形に限られず、多角形又は楕円形等であってもよい。注目領域設定部201は、例えば、次の(A1)又は(A2)のいずれかの方法によって注目領域5を設定する。 As shown in FIG. 4, a plurality of components 2 are mounted on the board 1. The attention area setting unit 201 sets attention areas 5 (for example, attention areas 5a and 5b) so as to surround the components 2 (for example, components 2a and 2b) that are inspection objects on the workpiece 3. The component 2a may be read as the first inspection object, and the attention area 5a surrounding the component 2a may be read as the first attention area. The component 2b may be read as the second inspection object, and the attention area 5b surrounding the component 2b may be read as the second attention area. The shape of an attention area 5 may be rectangular. However, the shape of an attention area 5 is not limited to a rectangle, and may be a polygon, an ellipse, or the like. The attention area setting unit 201 sets the attention areas 5 by, for example, either of the following methods (A1) and (A2).
(A1)注目領域設定部201は、ワーク3の設計情報に基づいて注目領域5を設定する。設計情報は、基板1における検査対象物である部品2の位置を定める情報である。そこで、注目領域設定部201は、設計情報に基づいて、検査対象の部品2が装着される基板1上の位置を自動的に特定し、その特定した位置に注目領域5を設定する。 (A1) The attention area setting unit 201 sets the attention area 5 based on the design information of the workpiece 3 . The design information is information that determines the position of the component 2 that is the object to be inspected on the board 1 . Therefore, the attention area setting unit 201 automatically identifies the position on the board 1 where the inspection target component 2 is mounted based on the design information, and sets the attention area 5 at the identified position.
(A2)ユーザは、UI制御部206が提供するUI画面を通じて、ワーク画像4に対して手動で注目領域5を設定する。例えば、ユーザが、UI画面を通じて、ワーク画像4における検査対象物の部品2を囲む。注目領域設定部201は、囲まれた領域を当該部品2の注目領域5に設定する。 (A2) The user manually sets the attention area 5 on the work image 4 through the UI screen provided by the UI control unit 206 . For example, the user surrounds the part 2 of the inspection object in the workpiece image 4 through the UI screen. The attention area setting unit 201 sets the enclosed area as the attention area 5 of the component 2 .
 注目領域設定部201は、上記の(A1)又は(A2)の方法によって設定した注目領域5を示す注目領域情報を生成し、注目領域格納部207に格納する。 The attention area setting unit 201 generates attention area information indicating the attention area 5 set by the method (A1) or (A2) above, and stores it in the attention area storage unit 207 .
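A minimal sketch of method (A1), assuming design information that lists each component's position and size (the field names and the margin are hypothetical):

    def attention_areas_from_design_info(design_info, margin: int = 4):
        """Method (A1): derive one attention area 5 per component 2 from design information.
        Each entry is assumed to carry the component's position and size on the board."""
        areas = {}
        for part in design_info:  # e.g. {"id": "2a", "x": 120, "y": 80, "width": 30, "height": 15}
            areas[part["id"]] = (part["x"] - margin, part["y"] - margin,
                                 part["width"] + 2 * margin, part["height"] + 2 * margin)
        return areas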
<撮像パターンの詳細> <Details of imaging pattern>
 図5は、実施の形態1に係る撮像パターンの一例を示す図である。 FIG. 5 is a diagram showing an example of an imaging pattern according to Embodiment 1.
 撮像パターンは、複数の異なる撮像パラメータを組み合わせたものである。撮像パラメータの例として、図5に示すように、シャッタースピード、最長露光時間、レンズの絞り値、最大ゲイン、カメラ感度、明度、ホワイトバランスのRedボリューム、ホワイトバランスのBlueボリューム、コントラスト強度、暗部補正、明部補正、及び、ペデスタルレベルが挙げられる。 An imaging pattern is a combination of a plurality of different imaging parameters. Examples of the imaging parameters include, as shown in FIG. 5, shutter speed, maximum exposure time, lens aperture value, maximum gain, camera sensitivity, brightness, white balance red volume, white balance blue volume, contrast intensity, dark area correction, bright area correction, and pedestal level.
 撮像パターン生成部202は、例えば、次の(B1)又は(B2)のいずれかの方法により、少なくとも1つの撮像パラメータが異なる複数の撮像パターンを生成する。 The imaging pattern generation unit 202 generates a plurality of imaging patterns with at least one different imaging parameter, for example, by either method (B1) or (B2) below.
(B1)ユーザは、UI制御部206が提供するUI画面を通じて、手動で各撮像パラメータを調整し、複数の撮像パターンを生成する。例えば、ユーザは、第1の注目領域5に装着される暗い色の部品2が適切に撮像されるよう各撮像パラメータを調整した第1の撮像パターンを生成する。また、ユーザは、第2の注目領域5に装着される明るい色の部品2が適切に撮像されるよう各撮像パラメータを調整した第2の撮像パターンを生成する。撮像パターン生成部202は、このように生成された複数の撮像パターンを、撮像パターン格納部208に格納する。 (B1) The user manually adjusts each imaging parameter through the UI screen provided by the UI control unit 206 to generate a plurality of imaging patterns. For example, the user generates a first imaging pattern in which each imaging parameter is adjusted so that the dark-colored component 2 attached to the first attention area 5 is appropriately imaged. Also, the user generates a second imaging pattern in which each imaging parameter is adjusted so that the bright-colored component 2 attached to the second attention area 5 is appropriately imaged. The imaging pattern generation unit 202 stores the plurality of imaging patterns generated in this manner in the imaging pattern storage unit 208 .
(B2)撮像パターン生成部202は、検査装置20のカメラ21が撮像したワーク画像4を分析して撮像パラメータを自動的に調整し、複数の撮像パターンを生成する。例えば、撮像パターン生成部202は、ワーク画像4の第1の注目領域5の注目領域画像の特徴を分析し、その分析結果から当該注目領域5に適切な各撮像パラメータを決定し、第1の撮像パターンを生成する。注目領域5の特徴の例として、部品2の色、部品2の反射率、部品2の透過率、部品2の材質、及び、部品2の高さ等が挙げられる。また、撮像パターン生成部202は、ワーク画像4の第2の注目領域5の注目領域画像の特徴を分析し、その分析結果から当該注目領域5に適切な各撮像パラメータを決定し、第2の撮像パターンを生成する。撮像パターン生成部202は、このように生成した複数の撮像パターンを、撮像パターン格納部208に格納する。 (B2) The imaging pattern generation unit 202 analyzes the workpiece image 4 captured by the camera 21 of the inspection device 20, automatically adjusts the imaging parameters, and generates a plurality of imaging patterns. For example, the imaging pattern generation unit 202 analyzes the features of the attention area image of a first attention area 5 of the workpiece image 4, determines each imaging parameter suitable for that attention area 5 from the analysis result, and generates a first imaging pattern. Examples of the features of an attention area 5 include the color of the component 2, the reflectance of the component 2, the transmittance of the component 2, the material of the component 2, and the height of the component 2. The imaging pattern generation unit 202 also analyzes the features of the attention area image of a second attention area 5 of the workpiece image 4, determines each imaging parameter suitable for that attention area 5 from the analysis result, and generates a second imaging pattern. The imaging pattern generation unit 202 stores the plurality of imaging patterns generated in this manner in the imaging pattern storage unit 208.
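As a sketch only, an imaging pattern can be represented as a bundle of the parameters listed in FIG. 5; the parameter subset and all values below are invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class ImagingPattern:
        # A subset of the imaging parameters of FIG. 5; the default values are invented.
        shutter_speed: str = "1/250"
        max_exposure_ms: float = 4.0
        aperture: float = 5.6
        max_gain_db: float = 12.0
        brightness: int = 0
        wb_red: int = 128
        wb_blue: int = 128

    # (B1)-style manual tuning: one pattern for dark components, one for bright components.
    dark_component_pattern = ImagingPattern(shutter_speed="1/60", max_gain_db=24.0)
    bright_component_pattern = ImagingPattern(shutter_speed="1/500", max_gain_db=6.0)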
 なお、本実施の形態では、管理装置40が撮像パターン生成部202を有する例を説明するが、検査装置20が撮像パターン生成部202を有してもよい。 In this embodiment, an example in which the management device 40 has the imaging pattern generation unit 202 will be described, but the inspection device 20 may have the imaging pattern generation unit 202 .
<照明パターンの詳細> <Details of illumination pattern>
 図6は、実施の形態1に係る照明パターンの一例を示す図である。 FIG. 6 is a diagram showing an example of an illumination pattern according to Embodiment 1.
 照明パターンは、複数の異なる照明パラメータを組み合わせたものである。照明パラメータの例として、図6に示すように、照明の形状、照明の当て方、照明の色、偏光フィルタの使用有無、赤外線照明の使用有無、及び、照明の強度が挙げられる。 A lighting pattern is a combination of multiple different lighting parameters. Examples of lighting parameters include lighting shape, lighting method, lighting color, use of polarizing filter, use of infrared lighting, and lighting intensity, as shown in FIG.
 照明の形状に関する照明パラメータの例として、バー、マルチアングル、ドーム、及び、バックライトが挙げられる。例えば、撮像制御部101は、照明の形状に関する照明パラメータが「バー」の照明パターンに基づいてワーク3を撮像する場合、照明装置30が備えるバー形状のLED光源31を点灯させる。 Examples of lighting parameters related to lighting shape include bar, multi-angle, dome, and backlight. For example, when the imaging control unit 101 captures an image of the workpiece 3 based on an illumination pattern in which the illumination parameter related to the shape of the illumination is “bar”, the imaging control unit 101 lights the bar-shaped LED light source 31 provided in the illumination device 30 .
 照明の当て方に関する照明パラメータの例として、正反射、拡散反射、及び、透過が挙げられる。  Specular reflection, diffuse reflection, and transmission are examples of lighting parameters related to how to apply lighting.
 照明の色に関する照明パラメータの例として、青色、赤色、及び、緑色が挙げられる。 Examples of lighting parameters related to lighting colors include blue, red, and green.
 照明パターン生成部203は、次の方法により、少なくとも1つの照明パラメータが互いに異なる複数の照明パターンを生成してよい。すなわち、ユーザは、UI制御部206が提供するUI画面を通じて、手動で照明パラメータを調整し、複数の照明パターンを生成する。照明パターン生成部203は、このように生成された複数の照明パターンを、照明パターン格納部209に格納する。 The illumination pattern generation unit 203 may generate a plurality of illumination patterns with at least one illumination parameter different from each other by the following method. That is, the user manually adjusts the lighting parameters through the UI screen provided by the UI control unit 206 to generate a plurality of lighting patterns. The illumination pattern generation unit 203 stores the plurality of illumination patterns generated in this way in the illumination pattern storage unit 209 .
 なお、本実施の形態では、管理装置40が照明パターン生成部203を有する例を説明するが、検査装置20が照明パターン生成部203を有してもよい。 In this embodiment, an example in which the management device 40 has the illumination pattern generation unit 203 will be described, but the inspection device 20 may have the illumination pattern generation unit 203 .
<撮像条件の詳細> <Details of imaging conditions>
 撮像条件生成部204は、撮像パターン格納部208に格納されている1つの撮像パターンと、照明パターン格納部209に格納されている1つの照明パターンとを組み合せて、1つの撮像条件を生成する。撮像条件生成部204は、撮像パターンと照明パターンとを互いに異なるように組み合わせて複数の撮像条件を生成する。撮像条件生成部204は、生成した複数の撮像条件を、撮像条件格納部210に格納する。例えば、撮像パターン格納部208に4つの撮像パターンが格納されており、照明パターン格納部209に4つの照明パターンが格納されている場合、撮像条件生成部204は、4×4=16個の撮像条件を生成する。撮像条件生成部204は、撮像条件格納部210に格納されている複数の撮像条件を検査装置20に送信する。検査装置20は、管理装置40から受信した複数の撮像条件を撮像条件格納部105に格納する。 The imaging condition generation unit 204 combines one imaging pattern stored in the imaging pattern storage unit 208 with one illumination pattern stored in the illumination pattern storage unit 209 to generate one imaging condition. The imaging condition generation unit 204 generates a plurality of imaging conditions by combining the imaging patterns and the illumination patterns in mutually different ways, and stores the generated imaging conditions in the imaging condition storage unit 210. For example, when four imaging patterns are stored in the imaging pattern storage unit 208 and four illumination patterns are stored in the illumination pattern storage unit 209, the imaging condition generation unit 204 generates 4×4=16 imaging conditions. The imaging condition generation unit 204 transmits the plurality of imaging conditions stored in the imaging condition storage unit 210 to the inspection device 20. The inspection device 20 stores the plurality of imaging conditions received from the management device 40 in the imaging condition storage unit 105.
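The combination step amounts to a Cartesian product of the two pattern sets; a minimal sketch (function name hypothetical):

    from itertools import product

    def generate_imaging_conditions(imaging_patterns, illumination_patterns):
        """One imaging condition per (imaging pattern, illumination pattern) pair.
        With 4 imaging patterns and 4 illumination patterns this yields 4 x 4 = 16 conditions."""
        return list(product(imaging_patterns, illumination_patterns))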
<検査前シーケンス> <Pre-inspection sequence>
 図7は、実施の形態1に係る検査システム10における検査前の動作を示すシーケンスチャートである。 FIG. 7 is a sequence chart showing pre-inspection operations in the inspection system 10 according to Embodiment 1.
 管理装置40は、表示装置60に検査メインUI画面(図示しない)を表示する(ステップS11)。 The management device 40 displays an inspection main UI screen (not shown) on the display device 60 (step S11).
 ユーザは、入力装置50を通じて、検査メインUI画面にユーザ名を入力する(ステップS12、ステップS13)。 The user inputs the user name on the examination main UI screen through the input device 50 (steps S12 and S13).
 管理装置40は、入力されたユーザ名の設定要求を検査装置20に送信する(ステップS14)。 The management device 40 transmits a request for setting the input user name to the inspection device 20 (step S14).
 検査装置20は、ユーザ名の設定要求を受信し、当該ユーザ名の設定を完了し、完了応答を管理装置40へ送信する(ステップS15)。 The inspection device 20 receives the user name setting request, completes the setting of the user name, and transmits a completion response to the management device 40 (step S15).
 管理装置40は、検査装置20から完了応答を受信し、ユーザ名の入力が完了した旨を表示装置60に表示する(ステップS16)。 The management device 40 receives the completion response from the inspection device 20 and displays on the display device 60 that the input of the user name has been completed (step S16).
 ユーザは、入力装置50を通じて、検査対象とするワーク3を選択する(ステップS17、ステップS18)。 The user selects the workpiece 3 to be inspected through the input device 50 (steps S17 and S18).
 管理装置40は、選択されたワーク3の選択要求を検査装置20に送信する(ステップS19)。 The management device 40 transmits a selection request for the selected work 3 to the inspection device 20 (step S19).
 検査装置20は、ワーク3の選択要求を受信し、検査対象とするワーク3を選択し、完了応答を管理装置40へ送信する(ステップS20)。 The inspection device 20 receives the selection request for the work 3, selects the work 3 to be inspected, and transmits a completion response to the management device 40 (step S20).
 管理装置40は、検査装置20から完了応答を受信し、検査対象とするワーク3の選択が完了した旨を表示装置60に表示する(ステップS21)。 The management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that the selection of the work 3 to be inspected has been completed (step S21).
 また、管理装置40は、カメラ21が撮像中の映像ストリームの配信要求を検査装置20に送信する(ステップS22)。 In addition, the management device 40 transmits to the inspection device 20 a request for distribution of the video stream being imaged by the camera 21 (step S22).
 検査装置20は、映像ストリームの配信要求を受信し、当該要求の受領応答を管理装置40に送信する(ステップS23)。また、検査装置20は、カメラ21が撮像中の映像ストリームを管理装置40に送信する(ステップS24)。 The inspection device 20 receives the video stream distribution request and transmits a response to the request to the management device 40 (step S23). Also, the inspection device 20 transmits the video stream being imaged by the camera 21 to the management device 40 (step S24).
 管理装置40は、検査装置20から受信した映像ストリームをリアルタイムで表示装置60に表示する(ステップS25)。 The management device 40 displays the video stream received from the inspection device 20 in real time on the display device 60 (step S25).
 これにより、ユーザは、検査対象とするワーク3を、検査装置20に設定できる。また、ユーザは、検査装置20のカメラ21が撮像中の映像ストリームをリアルタイムに視聴できる。 Thereby, the user can set the workpiece 3 to be inspected in the inspection device 20 . Also, the user can view the video stream being imaged by the camera 21 of the inspection device 20 in real time.
<検査シーケンス> <Inspection sequence>
 図8は、実施の形態1に係る検査システム10における検査の動作を示すシーケンスチャートである。 FIG. 8 is a sequence chart showing inspection operations in the inspection system 10 according to Embodiment 1.
 ユーザは、検査を開始する場合、入力装置50を通じて、検査モードONを入力する(ステップS31、ステップS32)。 When the user starts the inspection, the user inputs inspection mode ON through the input device 50 (steps S31 and S32).
 管理装置40は、入力された検査モードONの要求を検査装置20に送信する(ステップS33)。 The management device 40 transmits the input inspection mode ON request to the inspection device 20 (step S33).
 検査装置20は、検査モードONの要求を受信し、検査モードをONに変更し、完了応答を管理装置40へ送信する(ステップS34)。 The inspection device 20 receives the request to turn on the inspection mode, changes the inspection mode to ON, and transmits a completion response to the management device 40 (step S34).
 管理装置40は、検査装置20から完了応答を受信し、検査モードONへの切り替えが完了した旨を表示装置60に表示する(ステップS35)。 The management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that switching to inspection mode ON has been completed (step S35).
 また、管理装置40は、カメラ21が撮像中の映像の配信要求を検査装置20へ送信する(ステップS36)。 Also, the management device 40 transmits a request for distribution of the video being captured by the camera 21 to the inspection device 20 (step S36).
 検査装置20は、映像の配信要求を受信し、当該配信要求の受領応答を管理装置40に送信する(ステップS37)。また、検査装置20は、カメラ21が撮像中の映像ストリームを管理装置40に送信する(ステップS38)。 The inspection device 20 receives the video distribution request and transmits an acknowledgment of the distribution request to the management device 40 (step S37). Also, the inspection device 20 transmits the video stream being imaged by the camera 21 to the management device 40 (step S38).
 管理装置40は、検査装置20から受信した映像ストリームをリアルタイムで表示装置60に表示する(ステップS39)。 The management device 40 displays the video stream received from the inspection device 20 in real time on the display device 60 (step S39).
 検出センサ19は、ワーク3を検出した場合、ワーク検出通知を検査装置20に送信する(ステップS40)。 When detecting the work 3, the detection sensor 19 transmits a work detection notification to the inspection device 20 (step S40).
 検査装置20は、ワーク検出通知を受信した場合、ワーク3の検査開始通知を管理装置40に送信する(ステップS41)。 When the inspection apparatus 20 receives the workpiece detection notification, it transmits an inspection start notification for the workpiece 3 to the management apparatus 40 (step S41).
 管理装置40は、検査装置20からワーク3の検査開始通知を受信した場合、ワーク3の検査を開始する旨を表示装置60に表示する(ステップS42)。 When the management device 40 receives the inspection start notification of the work 3 from the inspection device 20, it displays on the display device 60 that the inspection of the work 3 will start (step S42).
 また、検査装置20は、ワーク3の検査処理を実行する(ステップS43)。当該ワーク3の検査処理の詳細については後述する(図11参照)。 Also, the inspection device 20 executes inspection processing of the workpiece 3 (step S43). The details of the inspection process for the workpiece 3 will be described later (see FIG. 11).
 検査装置20は、ワーク3の検査処理を完了した後、当該ワーク3の検査結果を含むワーク検査結果情報を管理装置40に送信する(ステップS44)。 After completing the inspection process of the work 3, the inspection device 20 transmits work inspection result information including the inspection result of the work 3 to the management device 40 (step S44).
 管理装置40は、ワーク検査結果情報を検査装置20から受信し、当該ワーク検査結果情報の内容を表示装置60に表示する(ステップS45)。 The management device 40 receives the workpiece inspection result information from the inspection device 20, and displays the content of the workpiece inspection result information on the display device 60 (step S45).
 検査システム10は、順次搬送されてくる検査対象のワーク3に対して、上述したステップS40~ステップS45の処理を繰り返し行う。 The inspection system 10 repeatedly performs the above-described processing of steps S40 to S45 on the workpieces 3 to be inspected that are sequentially conveyed.
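Seen from the inspection device 20, steps S40 to S45 form a simple event loop; the sketch below is an assumption about how such a loop could look, with every callable hypothetical:

    def inspection_loop(inspection_mode_on, wait_for_work, run_inspection, send_result):
        """Repeated part of FIG. 8: on each workpiece detection (step S40), run the
        inspection processing (step S43) and send the result information (step S44)."""
        while inspection_mode_on():
            work = wait_for_work()          # notification from the detection sensor 19
            result = run_inspection(work)   # inspection processing of the workpiece 3
            send_result(result)             # workpiece inspection result information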
 ユーザは、検査を終了する場合、入力装置50を通じて、検査モードOFFを入力する(ステップS46、ステップS47)。 When ending the inspection, the user inputs inspection mode OFF through the input device 50 (steps S46 and S47).
 管理装置40は、入力された検査モードOFFの要求を検査装置20に送信する(ステップS48)。 The management device 40 transmits the input request to turn off the inspection mode to the inspection device 20 (step S48).
 検査装置20は、検査モードOFFの要求を受信し、検査モードをOFFに変更し、完了応答を管理装置40へ送信する(ステップS49)。 The inspection device 20 receives the request to turn off the inspection mode, changes the inspection mode to OFF, and transmits a completion response to the management device 40 (step S49).
 管理装置40は、完了応答を検査装置20から受信し、検査モードOFFへの切り替えが完了した旨を表示装置60に表示する(ステップS50)。 The management device 40 receives the completion response from the inspection device 20, and displays on the display device 60 that switching to inspection mode OFF has been completed (step S50).
<検査前処理フロー>
 次に、図9及び図10を参照して、ワーク3の検査処理の前に行われる検査前処理について説明する。検査前処理では、ワーク3に対する注目領域5の設定及び最適撮像条件の決定が行われる。
<Pre-test processing flow>
Next, pre-inspection processing performed before inspection processing of the workpiece 3 will be described with reference to FIGS. 9 and 10. In the pre-inspection processing, setting of the attention areas 5 for the workpiece 3 and determination of the optimum imaging conditions are performed.
 図9は、実施の形態1に係る検査前処理の一例を示すフローチャートである。 FIG. 9 is a flowchart showing an example of pre-examination processing according to the first embodiment.
 撮像制御部101は、カメラ21を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS101)。 The imaging control unit 101 controls the camera 21 to image the workpiece 3 and generate the workpiece image 4 (step S101).
 注目領域設定部201は、ワーク3に複数の注目領域5を設定する(ステップS102)。注目領域設定部201は、設定した複数の注目領域5を示す情報(注目領域情報)を検査装置20に送信する。検査装置20は、受信した複数の注目領域情報を注目領域格納部104に格納する。 The attention area setting unit 201 sets a plurality of attention areas 5 on the workpiece 3 (step S102). The attention area setting unit 201 transmits information (attention area information) indicating the plurality of set attention areas 5 to the inspection device 20. The inspection apparatus 20 stores the received plural pieces of attention area information in the attention area storage unit 104.
 撮像条件生成部204は、撮像パターン格納部208に格納されている複数の撮像パターンと、照明パターン格納部209に格納されている複数の照明パターンとを組み合わせて、複数の撮像条件を生成する(ステップS103)。撮像条件生成部204は、生成した複数の撮像条件を検査装置20に送信する。検査装置20は、受信した複数の撮像条件を撮像条件格納部105に格納する。 The imaging condition generation unit 204 generates a plurality of imaging conditions by combining a plurality of imaging patterns stored in the imaging pattern storage unit 208 and a plurality of illumination patterns stored in the illumination pattern storage unit 209 ( step S103). The imaging condition generation unit 204 transmits the plurality of generated imaging conditions to the inspection device 20 . The inspection apparatus 20 stores the received multiple imaging conditions in the imaging condition storage unit 105 .
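Step S103 amounts to taking all combinations of the stored imaging patterns and illumination patterns. The following is a minimal sketch, assuming simple parameter dictionaries; the parameter names below are illustrative and not taken from this disclosure.

```python
from itertools import product

# Hypothetical pattern tables; the real parameters live in the imaging
# pattern storage 208 and the illumination pattern storage 209.
imaging_patterns = [
    {"exposure_ms": 10, "gain_db": 0},
    {"exposure_ms": 30, "gain_db": 6},
]
illumination_patterns = [
    {"leds": "all", "intensity": 0.5},
    {"leds": "left_half", "intensity": 1.0},
]

# One imaging condition = one imaging pattern combined with one
# illumination pattern (step S103); here len(imaging_conditions) == 4.
imaging_conditions = [
    {"imaging": ip, "illumination": lp}
    for ip, lp in product(imaging_patterns, illumination_patterns)
]
```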
 最適条件決定部102は、撮像条件格納部105に格納されている撮像条件のうち、未選択の撮像条件を1つ選択する(ステップS104)。図9の説明において、当該選択された撮像条件を、選択撮像条件と称する。 The optimum condition determining unit 102 selects one unselected imaging condition from among the imaging conditions stored in the imaging condition storage unit 105 (step S104). In the description of FIG. 9, the selected imaging condition is referred to as the selected imaging condition.
 撮像制御部101は、選択撮像条件に基づいてカメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS105)。撮像制御部101は、生成したワーク画像4を、RAM24又はストレージ25に格納する。 The imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S105). The imaging control unit 101 stores the generated work image 4 in the RAM 24 or storage 25 .
 最適条件決定部102は、撮像条件格納部105に格納されているすべての撮像条件を選択したか否かを判定する(ステップS106)。 The optimum condition determination unit 102 determines whether or not all imaging conditions stored in the imaging condition storage unit 105 have been selected (step S106).
 未選択の撮像条件が残っている場合(ステップS106:NO)、検査装置20は、処理をステップS104に戻す。 If unselected imaging conditions remain (step S106: NO), the inspection device 20 returns the process to step S104.
 すべての撮像条件を選択した場合(ステップS106:YES)、最適条件決定部102は、最適条件決定処理を実行する(ステップS107)。なお、最適条件決定処理の詳細については後述する(図10参照)。そして、本処理は終了する。 When all imaging conditions have been selected (step S106: YES), the optimum condition determination unit 102 executes optimum condition determination processing (step S107). Details of the optimum condition determination process will be described later (see FIG. 10). Then, the process ends.
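Continuing under the same assumptions as the sketch above, the selection loop of steps S104 to S106 captures one work image 4 per generated imaging condition. camera.apply()/camera.capture() and light.apply() are hypothetical controls for the camera 21 and the illumination device 30.

```python
# Sketch of steps S104-S106: capture one work image 4 per imaging condition.
def capture_all(camera, light, imaging_conditions):
    images = {}  # condition index -> work image 4
    for i, cond in enumerate(imaging_conditions):
        light.apply(cond["illumination"])   # configure illumination device 30
        camera.apply(cond["imaging"])       # configure camera 21
        images[i] = camera.capture()        # stored in RAM 24 / storage 25
    return images
```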
 図10は、図9に示す最適条件決定処理(ステップS107)の一例を示すフローチャートである。 FIG. 10 is a flow chart showing an example of the optimum condition determination process (step S107) shown in FIG.
 最適条件決定部102は、注目領域格納部104に格納されている複数の注目領域5のうち、未選択の注目領域5を1つ選択する(ステップS201)。図10の説明において、当該選択された注目領域5を、選択注目領域5と称する。 The optimum condition determination unit 102 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S201). In the description of FIG. 10 , the selected attention area 5 is called a selected attention area 5 .
 最適条件決定部102は、ステップS105においてRAM24又はストレージ25に格納された、異なる撮像条件で撮像された複数のワーク画像4のそれぞれから、選択注目領域5における画像を取得する。図10の説明において、当該画像を選択注目領域画像と称する。最適条件決定部102は、学習モデル格納部107に格納されている学習モデル110を用いて、取得した複数の選択注目領域画像のそれぞれについて評価値を算出する(ステップS202)。 The optimum condition determination unit 102 acquires an image of the selected region of interest 5 from each of the plurality of work images 4 captured under different imaging conditions, stored in the RAM 24 or storage 25 in step S105. In the description of FIG. 10, this image is called a selected region-of-interest image. The optimum condition determination unit 102 uses the learning model 110 stored in the learning model storage unit 107 to calculate an evaluation value for each of the acquired selected region-of-interest images (step S202).
 最適条件決定部102は、ステップS202で算出した評価値に基づいて、選択注目領域5の最適撮像条件を決定する(ステップS203)。例えば、最適条件決定部102は、評価値が最も高く算出された撮像条件を、選択注目領域5の最適撮像条件に決定する。 The optimum condition determination unit 102 determines the optimum imaging conditions for the selected region of interest 5 based on the evaluation value calculated in step S202 (step S203). For example, the optimum condition determining unit 102 determines the imaging condition with the highest calculated evaluation value as the optimum imaging condition for the selected region of interest 5 .
 最適条件決定部102は、注目領域格納部104に格納されている選択注目領域5に、ステップS203で決定した最適撮像条件を対応付ける(ステップS204)。 The optimal condition determination unit 102 associates the selected attention area 5 stored in the attention area storage unit 104 with the optimal imaging condition determined in step S203 (step S204).
 最適条件決定部102は、注目領域格納部104に格納されているすべての注目領域5を選択したか否かを判定する(ステップS205)。 The optimum condition determination unit 102 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S205).
 未選択の注目領域5が残っている場合(ステップS205:NO)、検査装置20は、処理をステップS201に戻す。すべての注目領域5を選択した場合(ステップS205:YES)、検査装置20は、本処理を終了する。 If an unselected region of interest 5 remains (step S205: NO), the inspection device 20 returns the process to step S201. If all the attention areas 5 have been selected (step S205: YES), the inspection device 20 terminates this process.
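The determination of FIG. 10 is, in effect, an argmax over imaging conditions of a model score computed on each attention area crop. The following is a minimal sketch, assuming images are numpy arrays indexed as image[y, x] and that model.evaluate() is a hypothetical scalar-scoring wrapper around the learning model 110.

```python
# Sketch of FIG. 10: for each attention area 5, score its crop in every
# captured work image 4 and keep the condition with the highest score.
def decide_optimal_conditions(model, images, regions):
    optimal = {}  # attention area id -> index of optimal imaging condition
    for rid, (x1, y1, x2, y2) in regions.items():          # S201
        scores = {
            cond: model.evaluate(img[y1:y2, x1:x2])        # S202
            for cond, img in images.items()
        }
        optimal[rid] = max(scores, key=scores.get)         # S203-S204
    return optimal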
<検査処理フロー>
 次に、図11を参照して、ワーク3の検査処理について説明する。
<Inspection processing flow>
Next, referring to FIG. 11, inspection processing of the workpiece 3 will be described.
 図11は、実施の形態1に係る検査処理の一例を示すフローチャートである。 FIG. 11 is a flowchart showing an example of inspection processing according to the first embodiment.
 撮像制御部101は、検査対象のワーク3に対応付けられている注目領域格納部104及び最適撮像条件格納部106を選択する(ステップS301)。 The imaging control unit 101 selects the attention area storage unit 104 and the optimum imaging condition storage unit 106 associated with the workpiece 3 to be inspected (step S301).
 撮像制御部101は、検出センサ19がワーク3を検出したか否かを判定する(ステップS302)。例えば、撮像制御部101は、検出センサ19からワーク検出通知を受信したか否かを判定する。 The imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S302). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
 検出センサ19がワーク3を未検出である場合(ステップS302:NO)、検査装置20は、処理をステップS302に戻す。検出センサ19がワーク3を検出した場合(ステップS302:YES)、検査装置20は、処理を次のステップS303に進める。 If the detection sensor 19 has not detected the workpiece 3 (step S302: NO), the inspection device 20 returns the process to step S302. If the detection sensor 19 detects the workpiece 3 (step S302: YES), the inspection apparatus 20 advances the process to the next step S303.
 撮像制御部101は、最適撮像条件格納部106から、未選択の最適撮像条件を1つ選択する(ステップS303)。図11の説明において、当該選択された最適撮像条件を、選択最適撮像条件と称する。 The imaging control unit 101 selects one unselected optimum imaging condition from the optimum imaging condition storage unit 106 (step S303). In the description of FIG. 11, the selected optimum imaging conditions are referred to as selected optimum imaging conditions.
 撮像制御部101は、選択最適撮像条件に基づいてカメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS304)。撮像制御部101は、生成したワーク画像4をRAM43又はストレージ44に格納する。 The imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected optimum imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S304). The imaging control unit 101 stores the generated work image 4 in the RAM 43 or storage 44 .
 撮像制御部101は、最適撮像条件格納部106に含まれるすべての最適撮像条件を選択したか否かを判定する(ステップS305)。 The imaging control unit 101 determines whether or not all the optimum imaging conditions included in the optimum imaging condition storage unit 106 have been selected (step S305).
 未選択の最適撮像条件が残っている場合(ステップS305:NO)、検査装置20は、処理をステップS303に戻す。すべての最適撮像条件を選択した場合(ステップS305:YES)、検査装置20は、処理を次のステップS306に進める。 If unselected optimum imaging conditions remain (step S305: NO), the inspection apparatus 20 returns the process to step S303. If all the optimum imaging conditions have been selected (step S305: YES), the inspection apparatus 20 advances the process to the next step S306.
 検査実行部103は、注目領域格納部104に格納されている複数の注目領域5のうち、未選択の注目領域5を1つ選択する(ステップS306)。図11の説明において、当該選択された注目領域5を、選択注目領域5と称する。 The inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S306). In the description of FIG. 11 , the selected attention area 5 is called a selected attention area 5 .
 検査実行部103は、ステップS304においてRAM43又はストレージ44に格納された、異なる最適撮像条件で撮像された複数のワーク画像4のうち、選択注目領域5に対応付けられている最適撮像条件にて撮像されたワーク画像4を選択する(ステップS307)。図11の説明において、当該選択されたワーク画像4を、選択ワーク画像4と称する。 The inspection executing unit 103 picks up images under the optimum imaging conditions associated with the selected region of interest 5 among the plurality of workpiece images 4 picked up under different optimum imaging conditions stored in the RAM 43 or the storage 44 in step S304. The work image 4 that has been displayed is selected (step S307). In the description of FIG. 11, the selected work image 4 will be referred to as a selected work image 4. FIG.
 検査実行部103は、学習モデル格納部107に格納されている学習モデル110を用いて、選択ワーク画像4の選択注目領域5における画像の評価値を算出する(ステップS308)。図11の説明において、当該画像を選択注目領域画像と称する。選択ワーク画像4は、選択注目領域5を撮像するに最適な撮像条件にて撮像されたものである。よって、当該選択ワーク画像4から取得された選択注目領域画像から算出される評価値は、単一の撮像条件で撮像されたワーク画像から取得された選択注目領域画像から算出される評価値よりも、精度が高くなり得る。 The inspection executing unit 103 uses the learning model 110 stored in the learning model storage unit 107 to calculate the evaluation value of the image in the selected attention area 5 of the selected work image 4 (step S308). In the description of FIG. 11, the image is referred to as a selected region-of-interest image. The selected workpiece image 4 is captured under optimum imaging conditions for imaging the selected attention area 5 . Therefore, the evaluation value calculated from the selected region-of-interest image obtained from the selected work image 4 is higher than the evaluation value calculated from the selected region-of-interest image obtained from the work image captured under a single imaging condition. , can be more accurate.
 検査実行部103は、ステップS308で算出した評価値に基づいて、選択注目領域5の検査結果を決定する(ステップS309)。例えば、検査実行部103は、評価値が所定の閾値Th未満である場合、選択注目領域5において部品2が異常に装着されている(NG)と決定する。例えば、検査実行部103は、評価値が閾値Th以上である場合、選択注目領域5において部品2が正常に装着されている(OK)と決定する。検査実行部103は、選択注目領域5に検査結果を対応付けてRAM43又はストレージ44に格納する。 The inspection execution unit 103 determines the inspection result of the selected region of interest 5 based on the evaluation value calculated in step S308 (step S309). For example, when the evaluation value is less than a predetermined threshold value Th, the inspection execution unit 103 determines that the component 2 is abnormally attached (NG) in the selected attention area 5 . For example, when the evaluation value is equal to or greater than the threshold Th, the inspection execution unit 103 determines that the component 2 is normally mounted (OK) in the selected attention area 5 . The inspection execution unit 103 associates the inspection results with the selected region of interest 5 and stores them in the RAM 43 or the storage 44 .
 検査実行部103は、注目領域格納部104に格納されているすべての注目領域5を選択したか否かを判定する(ステップS310)。未選択の注目領域5が残っている場合、検査装置20は、処理をステップS306に戻す。 The inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S310). If there remains an unselected region of interest 5, the inspection apparatus 20 returns the process to step S306.
 すべての注目領域5を選択した場合、検査実行部103は、ステップS309にてRAM43又はストレージ44に格納された、各注目領域5に対応付けられている検査結果をまとめてワーク検査結果情報を生成し、管理装置40へ送信する(ステップS311)。そして、検査装置20は、処理をステップS302に戻し、次に搬送されてくるワーク3の検査を行う。 When all the attention areas 5 have been selected, the inspection execution unit 103 collects the inspection results associated with the attention areas 5 and stored in the RAM 43 or the storage 44 in step S309 to generate workpiece inspection result information, and transmits it to the management device 40 (step S311). The inspection apparatus 20 then returns the process to step S302 and inspects the workpiece 3 that is conveyed next.
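Under the same assumptions as the sketches above, the inspection flow of steps S303 to S311 can be summarized as follows; Th is the threshold compared with the evaluation value in step S309, and conditions/optimal/regions reuse the hypothetical structures from the earlier sketches.

```python
# Sketch of steps S303-S311: capture one image per optimal imaging
# condition, then judge each attention area 5 on the image captured
# under its own optimal condition.
def inspect_workpiece(camera, light, model, regions, optimal, conditions, Th):
    images = {}
    for cond in set(optimal.values()):                     # S303, S305
        light.apply(conditions[cond]["illumination"])
        camera.apply(conditions[cond]["imaging"])
        images[cond] = camera.capture()                    # S304
    results = {}
    for rid, (x1, y1, x2, y2) in regions.items():          # S306
        img = images[optimal[rid]]                         # S307
        score = model.evaluate(img[y1:y2, x1:x2])          # S308
        results[rid] = "OK" if score >= Th else "NG"       # S309
    return results  # collected into workpiece inspection result info, S311
```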
 なお、管理装置40のUI制御部206は、検査装置20からワーク検査結果情報を受信し、検査結果格納部212に格納する。また、UI制御部206は、ワーク検査結果情報の内容を表示装置60に表示する。ユーザは、表示装置60に表示されたワーク検査結果情報の内容を見て、ワーク3の各注目領域5において部品2が正常に装着されているか否かを確認できる。 The UI control unit 206 of the management device 40 receives the workpiece inspection result information from the inspection device 20 and stores it in the inspection result storage unit 212. The UI control unit 206 also displays the content of the workpiece inspection result information on the display device 60. By viewing the content of the workpiece inspection result information displayed on the display device 60, the user can confirm whether or not the component 2 is normally mounted in each attention area 5 of the workpiece 3.
<パターン用UI画面>
 図12は、実施の形態1に係る撮像パターン及び照明パターンを生成及び調整するためのUI画面の一例を示す模式図である。
<UI screen for patterns>
FIG. 12 is a schematic diagram showing an example of a UI screen for generating and adjusting an imaging pattern and an illumination pattern according to Embodiment 1.
 管理装置40のUI制御部206は、図12に示すように、撮像パターン及び照明パターンを生成又は調整するためのUI画面(以下、パターン用UI画面300と称する)を表示装置60に表示する。 The UI control unit 206 of the management device 40 displays, on the display device 60, a UI screen (hereinafter referred to as a pattern UI screen 300) for generating or adjusting the imaging pattern and the illumination pattern, as shown in FIG.
 パターン用UI画面300は、図12に示すように、撮像パターンリスト領域301と、照明パターンリスト領域302と、撮像条件別領域303と、ワーク画像確認領域304とを含む。 The pattern UI screen 300 includes an imaging pattern list area 301, an illumination pattern list area 302, an imaging condition-specific area 303, and a workpiece image confirmation area 304, as shown in FIG.
 UI制御部206は、撮像パターンリスト領域301に、撮像パターン格納部208に格納されている撮像パターンをリスト表示する。ユーザは、撮像パターンリスト領域301に、新たな撮像パターンを追加することにより、新たな撮像パターンを生成できる。また、ユーザは、撮像パターンリスト領域301に表示されている撮像パラメータを調整することにより、撮像パターン格納部208に格納されている撮像パターンの撮像パラメータを調整できる。 The UI control unit 206 lists the imaging patterns stored in the imaging pattern storage unit 208 in the imaging pattern list area 301 . The user can generate a new imaging pattern by adding a new imaging pattern to the imaging pattern list area 301 . Further, the user can adjust the imaging parameters of the imaging patterns stored in the imaging pattern storage unit 208 by adjusting the imaging parameters displayed in the imaging pattern list area 301 .
 UI制御部206は、照明パターンリスト領域302に、照明パターン格納部209に格納されている照明パターンをリスト表示する。ユーザは、照明パターンリスト領域302に、新たな照明パターンを追加することにより、新たな照明パターンを生成できる。また、ユーザは、照明パターンリスト領域302に表示されている照明パラメータを調整することにより、照明パターン格納部209に格納されている照明パターンの照明パラメータを調整できる。 The UI control unit 206 lists the illumination patterns stored in the illumination pattern storage unit 209 in the illumination pattern list area 302 . A user can generate a new lighting pattern by adding a new lighting pattern to the lighting pattern list area 302 . Also, the user can adjust the illumination parameters of the illumination patterns stored in the illumination pattern storage unit 209 by adjusting the illumination parameters displayed in the illumination pattern list area 302 .
 UI制御部206は、異なる撮像条件で撮像した複数のワーク画像4を撮像条件別領域303に表示する。これにより、ユーザは、撮像条件別領域303において、撮像条件毎にどのようなワーク画像4が撮像されるのかを確認できる。 The UI control unit 206 displays a plurality of workpiece images 4 captured under different imaging conditions in the imaging condition-specific area 303 . Accordingly, the user can confirm what kind of work image 4 is captured for each imaging condition in the imaging condition-specific area 303 .
 UI制御部206は、ユーザが撮像条件別領域303から選択したワーク画像4を、ワーク画像確認領域304に表示する。UI制御部206は、ユーザの操作に応じて、ワーク画像確認領域304におけるワーク画像4を拡大又は縮小して表示する。これにより、ユーザは、選択したワーク画像4をより詳細に確認できる。 The UI control unit 206 displays the workpiece image 4 selected by the user from the imaging condition-specific area 303 in the workpiece image confirmation area 304 . The UI control unit 206 enlarges or reduces and displays the work image 4 in the work image confirmation area 304 according to the user's operation. This allows the user to check the selected work image 4 in more detail.
<注目領域用UI画面>
 図13は、実施の形態1に係る注目領域5を設定するためのUI画面の一例を示す模式図である。図14は、実施の形態1に係る注目領域情報の構成例を示す図である。
<UI screen for attention area>
FIG. 13 is a schematic diagram showing an example of a UI screen for setting the attention area 5 according to Embodiment 1. FIG. 14 is a diagram showing a configuration example of attention area information according to Embodiment 1.
 管理装置40のUI制御部206は、図13に示すように、注目領域5を入力又は修正するためのUI画面(以下、注目領域用UI画面320と称する)を表示装置60に表示する。 The UI control unit 206 of the management device 40 displays a UI screen for inputting or correcting the attention area 5 (hereinafter referred to as an attention area UI screen 320) on the display device 60, as shown in FIG.
 注目領域用UI画面320は、注目領域リスト領域321と、注目領域確認領域322と、評価値確認領域323とを含む。 The attention area UI screen 320 includes an attention area list area 321 , an attention area confirmation area 322 , and an evaluation value confirmation area 323 .
 UI制御部206は、注目領域リスト領域321に、注目領域格納部207に格納されている注目領域情報をリスト表示する。注目領域情報は、例えば、各部品2について、パラメータとして、部品名、部品品番、検査内容、注目領域の左上座標、注目領域の右下座標、前処理位置補正、撮像パターン、及び、照明パターンを有する。 The UI control unit 206 lists the attention area information stored in the attention area storage unit 207 in the attention area list area 321. The attention area information has, for example, for each component 2, the following parameters: part name, part number, inspection content, upper left coordinates of the attention area, lower right coordinates of the attention area, preprocessing position correction, imaging pattern, and illumination pattern.
・『部品名』は、部品2の名称を示す。
・『部品品番』は、部品2の品番を示す。
・『検査内容』は、部品2を検査対象とするか否かを示す。また、検査内容は、部品2がどのように異常に装着される可能性があるかを示す。
・『注目領域の左上座標』は、矩形の注目領域5の左上の点のワーク画像4上でのX座標及びY座標を示す。
・『注目領域の右下座標』は、矩形の注目領域5の右下の点のワーク画像4上でのX座標及びY座標を示す。
・『前処理位置補正』は、前処理で注目領域5の位置を補正するか否かを示す。また、『前処理位置補正』には、設定ボタン324が含まれてよい。設定ボタン324が選択(押下)されると、前処理による注目領域5の位置の補正量、及び、注目領域5の評価値と比較される閾値Thなどを設定するためのUI画面が表示されてもよい。
・『撮像パターン』は、注目領域5の最適撮像条件の撮像パターンを示す。
・『照明パターン』は、注目領域5の最適撮像条件の照明パターンを示す。
- "Part name" indicates the name of the part 2.
- "Part number" indicates the part number of the part 2.
- "Inspection content" indicates whether or not the component 2 is to be inspected. The inspection content also indicates how the part 2 may be mounted abnormally.
"Upper left coordinates of attention area" indicates the X and Y coordinates of the upper left point of the rectangular attention area 5 on the workpiece image 4 .
"Lower right coordinates of attention area" indicates the X and Y coordinates of the lower right point of the rectangular attention area 5 on the workpiece image 4. FIG.
"Pre-processing position correction" indicates whether or not the position of the attention area 5 is corrected in pre-processing. In addition, a setting button 324 may be included in the “preprocessing position correction”. When the setting button 324 is selected (pressed), a UI screen for setting a correction amount of the position of the attention area 5 by preprocessing and a threshold value Th to be compared with the evaluation value of the attention area 5 is displayed. good too.
"Imaging pattern" indicates the imaging pattern of the region of interest 5 under the optimal imaging conditions.
- "Illumination pattern" indicates the illumination pattern of the target area 5 under the optimum imaging conditions.
 ユーザは、注目領域リスト領域321に、新たな注目領域5を追加することにより、新たな注目領域情報を設定できる。また、ユーザは、注目領域リスト領域321に表示されている注目領域情報のパラメータを調整することにより、注目領域格納部207に格納されている注目領域情報のパラメータを調整できる。 The user can set new attention area information by adding a new attention area 5 to the attention area list area 321 . Further, the user can adjust the parameters of the attention area information stored in the attention area storage unit 207 by adjusting the parameters of the attention area information displayed in the attention area list area 321 .
 UI制御部206は、注目領域確認領域322に、ワーク画像4を表示すると共に、注目領域リスト領域321にて選択された注目領域情報に対応する注目領域5を示す矩形枠を、ワーク画像4に重畳表示する。これにより、ユーザは、注目領域リスト領域321にて選択中の注目領域情報が、ワーク画像4のいずれの位置の部品2の注目領域5に対応しているかを確認できる。 The UI control unit 206 displays the work image 4 in the attention area confirmation area 322, and superimposes on the work image 4 a rectangular frame indicating the attention area 5 corresponding to the attention area information selected in the attention area list area 321. This allows the user to confirm to which position on the work image 4, that is, to the attention area 5 of which component 2, the attention area information selected in the attention area list area 321 corresponds.
 UI制御部206は、評価値確認領域323に、注目領域リスト領域321にて選択された注目領域情報について、異なる撮像条件にて撮像した当該注目領域5の各注目領域画像から算出した評価値を、当該注目領域画像と共に表示する。これにより、ユーザは、注目領域5ごとに、撮像条件と評価値との関係を確認できる。 In the evaluation value confirmation area 323, the UI control unit 206 displays, for the attention area information selected in the attention area list area 321, the evaluation values calculated from the attention area images of that attention area 5 captured under the different imaging conditions, together with those attention area images. This allows the user to confirm, for each attention area 5, the relationship between the imaging conditions and the evaluation values.
 注目領域設定部201は、注目領域リスト領域321にて入力又は修正された内容を注目領域情報として注目領域格納部207に格納してよい。なお、注目領域情報は、図14に示すように、CSVデータとして構成されてもよい。 The attention area setting unit 201 may store the contents input or modified in the attention area list area 321 in the attention area storage unit 207 as attention area information. Note that the attention area information may be configured as CSV data as shown in FIG. 14 .
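Since the attention area information may be held as CSV data, it can be loaded with a standard CSV reader. The column names in the sketch below are illustrative stand-ins for the parameters listed above (part name, coordinates, and so on); the actual file layout of FIG. 14 may differ.

```python
import csv

# Illustrative loader for attention area information stored as CSV.
# Column names are hypothetical; adapt them to the real FIG. 14 layout.
def load_regions(path):
    regions = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            regions[row["part_name"]] = (
                int(row["x1"]), int(row["y1"]),   # upper-left corner
                int(row["x2"]), int(row["y2"]),   # lower-right corner
            )
    return regions
```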
<検査結果一覧用UI画面>
 図15は、実施の形態1に係るワーク3の検査結果を一覧で確認するためのUI画面の一例を示す模式図である。
<UI screen for inspection result list>
FIG. 15 is a schematic diagram showing an example of a UI screen for checking inspection results of the workpiece 3 according to the first embodiment.
 UI制御部206は、検査結果格納部212に格納されている複数のワーク検査結果を用いて、ワーク3の検査結果を一覧で確認するためのUI画面(以下、検査結果一覧用UI画面340と称する)を表示装置60に表示する。 The UI control unit 206 uses the plurality of workpiece inspection results stored in the inspection result storage unit 212 to display, on the display device 60, a UI screen for checking the inspection results of the workpieces 3 in a list (hereinafter referred to as the inspection result list UI screen 340).
 検査結果一覧用UI画面340は、複数のワーク領域341を含む。また、検査結果一覧用UI画面340は、ワーク領域341ごとに、ワーク画像4と、検査結果342と、詳細ボタン343とを含む。 The inspection result list UI screen 340 includes a plurality of work areas 341 . The inspection result list UI screen 340 also includes a work image 4 , an inspection result 342 , and a details button 343 for each work area 341 .
 UI制御部206は、1つのワーク領域341に、検査を行った1つのワーク3のワーク画像4を表示する。 The UI control unit 206 displays a workpiece image 4 of one inspected workpiece 3 in one workpiece area 341 .
 UI制御部206は、ワーク領域341に表示されているワーク画像4について、複数の注目領域5のうち、少なくとも1つの注目領域5において異常が検出された場合、検査結果342として異常(NG)を表示する。UI制御部206は、ワーク領域341に表示されているワーク画像4について、複数の注目領域5のすべてにおいて異常が検出されなかった場合、検査結果342として正常(OK)を表示する。 For the work image 4 displayed in a work area 341, the UI control unit 206 displays abnormal (NG) as the inspection result 342 when an abnormality is detected in at least one of the plurality of attention areas 5. For the work image 4 displayed in a work area 341, the UI control unit 206 displays normal (OK) as the inspection result 342 when no abnormality is detected in any of the plurality of attention areas 5.
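The verdict shown as the inspection result 342 therefore reduces to "NG if any attention area is NG". A one-function sketch, reusing the per-area results dictionary from the inspection sketch above:

```python
# Workpiece-level verdict for the list screen: NG if any attention
# area 5 was judged NG, OK otherwise.
def workpiece_verdict(results):
    return "NG" if any(v == "NG" for v in results.values()) else "OK"
```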
 UI制御部206は、ワーク領域341の詳細ボタン343が選択(押下)された場合、選択されたワーク領域341のワーク画像4についての検査結果詳細用UI画面360(図16参照)を表示する。 When the details button 343 of the work area 341 is selected (depressed), the UI control unit 206 displays the inspection result details UI screen 360 (see FIG. 16) for the work image 4 of the selected work area 341 .
<検査結果詳細用UI画面>
 図16は、実施の形態1に係る検査結果を詳細に確認するためのUI画面の一例を示す模式図である。
<UI screen for inspection result details>
FIG. 16 is a schematic diagram showing an example of a UI screen for confirming inspection results in detail according to the first embodiment.
 UI制御部206は、図15で詳細ボタン343が選択(押下)されたワーク3(以下、図16の説明において選択ワーク3と称する)について、当該ワーク3の検査結果を詳細に確認するためのUI画面(以下、検査結果詳細用UI画面360と称する)を表示する。 For the workpiece 3 whose details button 343 was selected (pressed) in FIG. 15 (hereinafter referred to as the selected workpiece 3 in the description of FIG. 16), the UI control unit 206 displays a UI screen for checking the inspection results of that workpiece 3 in detail (hereinafter referred to as the inspection result details UI screen 360).
 検査結果詳細用UI画面360は、説明領域361と、ワーク領域362と、部品リスト領域363とを含む。 The inspection result details UI screen 360 includes an explanation area 361 , a work area 362 and a parts list area 363 .
 UI制御部206は、説明領域361に、異常と判定された注目領域5と、正常と判定された注目領域5とをどのように区別可能に表示しているかを説明する内容を表示する。 The UI control unit 206 displays, in the explanation area 361, the content explaining how the attention area 5 determined to be abnormal and the attention area 5 determined to be normal are displayed in a distinguishable manner.
 UI制御部206は、ワーク領域362に、選択ワーク3のワーク画像4を表示する。また、UI制御部206は、当該ワーク画像4の各部品2上に注目領域5を重畳表示する。このとき、UI制御部206は、異常と判定された部品2の注目領域5と、正常と判定された部品2の注目領域5とを区別可能に表示する。例えば、UI制御部206は、異常と判定された部品2の注目領域5を赤色で表示し、正常と判定された部品2の注目領域5を緑色で表示する。これにより、ユーザは、いずれの部品2が異常と判定されたかを容易に確認できる。 The UI control unit 206 displays the work image 4 of the selected work 3 in the work area 362 . The UI control unit 206 also superimposes the attention area 5 on each part 2 of the work image 4 . At this time, the UI control unit 206 displays the attention area 5 of the part 2 determined to be abnormal and the attention area 5 of the part 2 determined to be normal in a distinguishable manner. For example, the UI control unit 206 displays the attention area 5 of the part 2 determined to be abnormal in red, and displays the attention area 5 of the part 2 determined to be normal in green. Thereby, the user can easily confirm which component 2 is determined to be abnormal.
 UI制御部206は、部品リスト領域363に、選択ワーク3に装着されている部品2のリストを表示する。また、UI制御部206は、部品リスト領域363において、異常と判定された部品2と、正常と判定された部品2とを区別可能に表示する。これにより、ユーザは、いずれの部品2が異常と判定されたかを容易に確認できる。 The UI control unit 206 displays a list of the parts 2 attached to the selected workpiece 3 in the parts list area 363. In addition, the UI control unit 206 displays the parts 2 determined to be abnormal and the parts 2 determined to be normal in a distinguishable manner in the parts list area 363 . Thereby, the user can easily confirm which component 2 is determined to be abnormal.
(実施の形態1のまとめ)
 実施の形態1の内容は、以下の項目のように表現できる。
(Summary of Embodiment 1)
The contents of Embodiment 1 can be expressed as the following items.
<項目1>
 検査装置20は、1つ以上のプロセッサ22と、メモリ(例えばRAM24)と、メモリに保存されているプログラムと、を備える。プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1検査対象物(例えば部品2a)と第1検査対象物とは異なる第2検査対象物(例えば部品2b)とを含む複数の検査対象物が存在する検査対象領域において、第1検査対象物の検査を行うための第1注目領域5(5a)と、第2検査対象物の検査を行うための第2注目領域5(5b)とを設定する。
 プログラムは、検査対象領域を撮像して検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に、検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させる。
 プログラムは、第1検査対象物と第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデル110と、第1注目領域に対応する第1撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ学習モデルと第2注目領域に対応する第1撮像画像の第2領域とに基づいて第2検査対象物を検査する第1検査を実行する。
 プログラムは、学習モデルと第1注目領域に対応する第2撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ学習モデルと第2注目領域に対応する第2撮像画像の第2領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 プログラムは、第1検査の結果と第2検査の結果とを出力する。
 これにより、検査装置20は、第1撮像条件で撮像された第1撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第1検査の結果を出力し、第2撮像条件で撮像された第2撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第2検査の結果を出力する。よって、検査装置20は、異なる撮像条件で撮像した撮像画像を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。
<Item 1>
Inspection device 20 includes one or more processors 22, memory (eg, RAM 24), and programs stored in the memory. The program causes the processor 22 to do the following.
In an inspection target area in which a plurality of inspection objects including a first inspection object (for example, a part 2a) and a second inspection object (for example, a part 2b) different from the first inspection object exist, the program sets a first attention area 5 (5a) for inspecting the first inspection object and a second attention area 5 (5b) for inspecting the second inspection object.
The program causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4), to capture, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
The program executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and on a first area of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the first captured image corresponding to the second attention area.
The program executes a second inspection that inspects the first inspection object based on the learning model and a first area of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the second captured image corresponding to the second attention area.
The program outputs the result of the first inspection and the result of the second inspection.
Thereby, the inspection apparatus 20 inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects the first inspection object and the second inspection object using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection apparatus 20 can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
<項目2>
 項目1に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1注目領域に対応する撮像画像の第1領域に基づいて、第1撮像条件として少なくとも1つの撮像パラメータを含む第1撮像パターンを定める。
 プログラムは、第2注目領域に対応する撮像画像の第2領域に基づいて、第2撮像条件として少なくとも1つの撮像パラメータを含む第2撮像パターンを定める。
 これにより、各撮像条件は、撮像パラメータの異なる撮像パターンによって定められる。よって、検査装置20は、撮像パラメータの異なる撮像条件で撮像した撮像画像を用いて、各検査対象物を検査することにより、より精度の高い検査結果を得ることができる。
<Item 2>
In the inspection device 20 described in item 1, the program causes the processor 22 to execute the following.
The program defines a first imaging pattern including at least one imaging parameter as a first imaging condition based on a first region of the captured image corresponding to the first region of interest.
The program defines a second imaging pattern including at least one imaging parameter as a second imaging condition based on a second region of the captured image corresponding to the second region of interest.
Thereby, each imaging condition is determined by an imaging pattern having different imaging parameters. Therefore, the inspection apparatus 20 can obtain more accurate inspection results by inspecting each inspection object using captured images captured under imaging conditions with different imaging parameters.
<項目3>
 項目2に記載の検査装置20は、検査対象領域を照射する照明装置30を備える。
 プログラムは、照明装置30が検査対象領域を照射するための少なくとも1つの照射パラメータを含む照射パターンを定める。
 これにより、検査装置20は、照明装置30に、照明パラメータの異なる照明パターンによって検査対象領域を照射させることができる。
<Item 3>
The inspection apparatus 20 described in item 2 includes an illumination device 30 that illuminates an inspection target area.
The program defines an illumination pattern including at least one illumination parameter for illumination device 30 to illuminate a region to be inspected.
Thereby, the inspection device 20 can cause the illumination device 30 to illuminate the inspection target region with illumination patterns having different illumination parameters.
<項目4>
 項目3に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1撮像画像として、少なくとも1つの照射パターンごとに第1撮像条件を適用して撮像した検査対象領域の撮像画像をカメラに出力させる。
 プログラムは、第2撮像画像として、少なくとも1つの照射パターンごとに第2撮像条件を適用して撮像した検査対象領域の撮像画像をカメラに出力させる。
 これにより、検査装置20は、異なる照明パターンで撮像した撮像画像を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。
<Item 4>
In the inspection device 20 described in item 3, the program causes the processor 22 to perform the following.
The program causes the camera to output, as the first captured image, a captured image of the inspection target area captured by applying the first imaging condition for each of the at least one irradiation pattern.
The program causes the camera to output, as the second captured image, a captured image of the inspection target area captured by applying the second imaging condition for each of the at least one irradiation pattern.
As a result, the inspection apparatus 20 can inspect each inspection object using captured images captured with different illumination patterns, and can obtain more accurate inspection results.
<項目5>
 項目1から4のいずれか1項に記載の検査装置20において、プログラムは、次のことをプロセッサ22(又はプロセッサ41)に実行させる。
 プログラムは、カメラが出力した第1撮像画像と第2撮像画像とに基づいた学習により、学習モデル110を生成する。
 これにより、検査装置20は、撮像画像を用いて、検査対象物を検査するための学習モデル110を生成できる。
<Item 5>
In the inspection apparatus 20 according to any one of items 1 to 4, the program causes the processor 22 (or processor 41) to execute the following.
The program generates the learning model 110 by learning based on the first captured image and the second captured image output by the camera.
Thereby, the inspection apparatus 20 can generate the learning model 110 for inspecting the inspection object using the captured image.
<項目6>
 項目1から5のいずれか1項に記載の検査装置20において、プログラムは、次のことをプロセッサ22(又はプロセッサ41)に実行させる。
 プログラムは、複数の検査対象物の位置を定めた設計情報、又は検査対象領域の撮像画像のいずれかに基づいて第1注目領域と第2注目領域とを設定する。
 これにより、検査装置20は、検査対象領域において、複数の注目領域を設定できる。
<Item 6>
In the inspection apparatus 20 according to any one of items 1 to 5, the program causes the processor 22 (or processor 41) to execute the following.
The program sets the first region of interest and the second region of interest based on either design information defining the positions of a plurality of inspection objects or a captured image of the inspection target region.
Thereby, the inspection apparatus 20 can set a plurality of attention areas in the inspection target area.
<項目7>
 項目1から6のいずれか1項に記載の検査装置20は、カメラ21を備える。
 これにより、検査装置20は、カメラ21を制御して、検査対象領域の撮像画像を撮像できる。
<Item 7>
The inspection device 20 according to any one of items 1 to 6 includes a camera 21 .
Thereby, the inspection apparatus 20 can control the camera 21 to capture a captured image of the inspection target area.
<項目8>
 検査装置20は、次の検査方法を実施する。
 検査装置20は、第1検査対象物(例えば部品2)と第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、第1検査対象物の検査を行うための第1注目領域5と、第2検査対象物の検査を行うための第2注目領域5とを設定する。
 検査装置20は、検査対象領域を撮像して前記検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に、検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させる。
 検査装置20は、第1検査対象物と第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデル110と、第1注目領域に対応する第1撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ前記学習モデルと第2注目領域に対応する第1撮像画像の第2領域とに基づいて第2検査対象物を検査する第1検査を実行する。
 検査装置20は、学習モデルと第1注目領域に対応する第2撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ学習モデルと第2注目領域に対応する第2撮像画像の第2領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 検査装置20は、第1検査の結果と前記第2検査の結果とを出力する。
 これにより、検査装置20は、第1撮像条件で撮像された第1撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第1検査の結果を出力し、第2撮像条件で撮像された第2撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第2検査の結果を出力する。よって、検査装置20は、異なる撮像条件で撮像した撮像画像を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。
<Item 8>
The inspection device 20 implements the following inspection method.
In an inspection target area in which a plurality of inspection objects including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object exist, the inspection apparatus 20 sets a first attention area 5 for inspecting the first inspection object and a second attention area 5 for inspecting the second inspection object.
The inspection apparatus 20 causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4), to capture, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
The inspection apparatus 20 executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and on a first area of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the first captured image corresponding to the second attention area.
The inspection apparatus 20 executes a second inspection that inspects the first inspection object based on the learning model and a first area of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the second captured image corresponding to the second attention area.
The inspection device 20 outputs the result of the first inspection and the result of the second inspection.
Thereby, the inspection apparatus 20 inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects the first inspection object and the second inspection object using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection apparatus 20 can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
<項目9>
 検査プログラムは、次のことをプロセッサ22に実行させる。
 検査プログラムは、第1検査対象物(例えば部品2)と第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、第1検査対象物の検査を行うための第1注目領域5と、第2検査対象物の検査を行うための第2注目領域5とを設定する。
 検査プログラムは、検査対象領域を撮像して検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に、検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させる。
 検査プログラムは、第1検査対象物と第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデル110と、第1注目領域に対応する第1撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ学習モデルと第2注目領域に対応する第1撮像画像の第2領域とに基づいて第2検査対象物を検査する第1検査を実行する。
 検査プログラムは、学習モデルと第1注目領域に対応する第2撮像画像の第1領域とに基づいて第1検査対象物を検査し、かつ学習モデルと第2注目領域に対応する第2撮像画像の第2領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 検査プログラムは、第1検査の結果と第2検査の結果とを出力する。
 これにより、検査プログラムは、第1撮像条件で撮像された第1撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第1検査の結果を出力し、第2撮像条件で撮像された第2撮像画像を用いて、第1検査対象物と第2検査対象物とを検査して第2検査の結果を出力する。よって、検査プログラムは、異なる撮像条件で撮像した撮像画像を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。
<Item 9>
The inspection program causes the processor 22 to do the following.
In an inspection target area in which a plurality of inspection objects including a first inspection object (for example, a part 2) and a second inspection object different from the first inspection object exist, the inspection program sets a first attention area 5 for inspecting the first inspection object and a second attention area 5 for inspecting the second inspection object.
The inspection program causes the camera 21, which captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4), to capture, as captured images of the inspection target area, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition.
The inspection program executes a first inspection that inspects the first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects including the first inspection object and the second inspection object and on a first area of the first captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the first captured image corresponding to the second attention area.
The inspection program executes a second inspection that inspects the first inspection object based on the learning model and a first area of the second captured image corresponding to the first attention area, and that inspects the second inspection object based on the learning model and a second area of the second captured image corresponding to the second attention area.
The inspection program outputs the result of the first inspection and the result of the second inspection.
Thereby, the inspection program inspects the first inspection object and the second inspection object using the first captured image captured under the first imaging condition and outputs the result of the first inspection, and inspects the first inspection object and the second inspection object using the second captured image captured under the second imaging condition and outputs the result of the second inspection. Therefore, the inspection program can inspect each inspection object using captured images captured under different imaging conditions, and can obtain more accurate inspection results.
(実施の形態2)
 実施の形態2では、実施の形態1で説明済みの構成要素については共通の参照符号を付し、説明を省略する場合がある。
(Embodiment 2)
In the second embodiment, the same reference numerals are given to the constituent elements that have already been explained in the first embodiment, and the explanation may be omitted.
<本実施の形態に至る経緯>
 本実施の形態に至る経緯について、図17及び図18を参照しながら説明する。図17は、ワーク画像4のサイズを、学習モデル110に入力可能な画像のサイズに縮小する例を示す模式図である。図18は、複数のカメラ21でワーク3を、学習モデル110に入力可能な画像のサイズに分割して撮像する例を示す模式図である。
<Background leading up to the present embodiment>
The background leading to the present embodiment will be described with reference to FIGS. 17 and 18. FIG. 17 is a schematic diagram showing an example of reducing the size of the work image 4 to the size of an image that can be input to the learning model 110. FIG. 18 is a schematic diagram showing an example in which a plurality of cameras 21 divide the workpiece 3 into image sizes that can be input to the learning model 110 and capture the images.
 学習モデル110に入力可能な画像のサイズが、カメラ21によって撮像されるワーク画像4のサイズよりも小さい場合、次の対策1又は対策2が考えられる。 If the size of the image that can be input to the learning model 110 is smaller than the size of the work image 4 captured by the camera 21, the following countermeasure 1 or countermeasure 2 can be considered.
<<対策1>>
 図17に示すように、検査装置が、ワーク画像4のサイズを、学習モデル110に入力可能な画像のサイズに縮小して縮小ワーク画像391を生成し、当該縮小ワーク画像391を学習モデル110に入力して評価値を算出する対策が考えられる。しかし、この場合、ワーク画像4のサイズの縮小によって部品2の解像度が不足し、学習モデル110が、小さな部品2の評価値を精度良く算出できないという問題が生じる。
<<Countermeasure 1>>
As shown in FIG. 17, a possible countermeasure is for the inspection apparatus to reduce the size of the work image 4 to the size of an image that can be input to the learning model 110 to generate a reduced work image 391, and to input the reduced work image 391 to the learning model 110 to calculate the evaluation value. However, in this case, the resolution of the parts 2 becomes insufficient due to the reduction in the size of the work image 4, and the problem arises that the learning model 110 cannot accurately calculate the evaluation value of a small part 2.
<<対策2>>
 図18に示すように、複数のカメラ21でワーク3を分割して撮像し、各カメラ21が撮像した分割ワーク画像392をそれぞれ学習モデル110に入力する対策が考えられる。しかし、この場合、検査装置20に複数のカメラ21を搭載する必要があり、検査システム10の構成が煩雑になる。
<<Countermeasure 2>>
As shown in FIG. 18, a countermeasure may be considered in which a plurality of cameras 21 divide and capture images of the workpiece 3, and the divided workpiece images 392 captured by each camera 21 are input to the learning model 110, respectively. However, in this case, it is necessary to mount a plurality of cameras 21 on the inspection apparatus 20, and the configuration of the inspection system 10 becomes complicated.
 あるいは、1台のカメラ21でワーク3全体を撮像し、その撮像したワーク画像4を複数の分割ワーク画像392に分割し、分割ワーク画像392をそれぞれ学習モデル110に入力する対策が考えられる。しかし、この場合、検査装置は、複数の分割ワーク画像392を処理する必要があるため、処理量及びメモリ使用量が大きくなり、検査装置20の処理時間及びコストの増大を招く。 Alternatively, it is conceivable to take an image of the entire work 3 with one camera 21, divide the imaged work image 4 into a plurality of divided work images 392, and input each of the divided work images 392 to the learning model 110. However, in this case, since the inspection apparatus needs to process a plurality of divided work images 392, the amount of processing and the amount of memory used are increased, and the processing time and cost of the inspection apparatus 20 are increased.
 このような問題に鑑みて、本実施の形態は、部品装着に係る異常検出の精度を保ちつつ、処理時間及びコストの増大を抑制する検査システム10について説明する。なお、実施の形態2に係る検査システム10のハードウェア構成については、図1と同様であるため、説明を省略する。 In view of such problems, the present embodiment describes an inspection system 10 that suppresses an increase in processing time and cost while maintaining the accuracy of abnormality detection related to component mounting. Note that the hardware configuration of the inspection system 10 according to the second embodiment is the same as that of FIG. 1, and thus description thereof is omitted.
<検査装置の機能構成>
 図19は、実施の形態2に係る検査装置20の機能構成の一例を示すブロック図である。
<Functional configuration of inspection device>
FIG. 19 is a block diagram showing an example of the functional configuration of the inspection device 20 according to the second embodiment.
 検査装置20は、撮像制御部101と、最適条件決定部102と、検査実行部103と、注目領域抽出部111と、注目領域格納部104と、撮像条件格納部105と、最適撮像条件格納部106と、学習モデル格納部107と、合成画像格納部112とを有する。 The inspection apparatus 20 includes an imaging control unit 101, an optimum condition determination unit 102, an inspection execution unit 103, an attention area extraction unit 111, an attention area storage unit 104, an imaging condition storage unit 105, and an optimum imaging condition storage unit. 106 , a learning model storage unit 107 , and a synthetic image storage unit 112 .
 撮像制御部101は、カメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する。 The imaging control unit 101 controls the camera 21 and the lighting device 30 to image the workpiece 3 and generate the workpiece image 4 .
 注目領域抽出部111は、ワーク画像4から各注目領域5の画像(注目領域画像)を抽出する。注目領域抽出部111は、抽出した各注目領域画像を合成し、合成画像400(図20、図21参照)を生成する。注目領域抽出部111は、生成した合成画像400を合成画像格納部112に格納する。なお、注目領域抽出部111及び合成画像400の詳細については後述する(図20及び図21参照)。 The region-of-interest extraction unit 111 extracts an image (region-of-interest image) of each region of interest 5 from the work image 4 . The attention area extraction unit 111 combines the extracted attention area images to generate a composite image 400 (see FIGS. 20 and 21). The attention area extraction unit 111 stores the generated synthetic image 400 in the synthetic image storage unit 112 . Details of the attention area extraction unit 111 and the synthesized image 400 will be described later (see FIGS. 20 and 21).
 最適条件決定部102は、複数の撮像条件のうち、各注目領域5の撮像に最適な最適撮像条件を決定する。加えて、最適条件決定部102は、ワーク画像4から注目領域5に対応する注目領域画像を抽出する際に最適な抽出条件(以下、最適抽出条件と称する)を決定する。なお、抽出条件及び最適抽出条件については後述する。 The optimum condition determination unit 102 determines the optimum imaging conditions for imaging each attention area 5 among a plurality of imaging conditions. In addition, the optimum condition determination unit 102 determines optimum extraction conditions (hereinafter referred to as optimum extraction conditions) when extracting an attention area image corresponding to the attention area 5 from the work image 4 . The extraction conditions and optimum extraction conditions will be described later.
 検査実行部103は、合成画像400を用いて、注目領域5に対応する領域において部品2が正常に装着されているか否かを検査する。検査実行部103は、学習モデル格納部107に格納されている学習モデル110に対して、合成画像400の注目領域5に対応する領域の画像を入力することにより、当該検査を実行する。 The inspection execution unit 103 uses the synthesized image 400 to inspect whether or not the component 2 is normally mounted in the area corresponding to the attention area 5 . The inspection execution unit 103 inputs the image of the area corresponding to the attention area 5 of the synthesized image 400 to the learning model 110 stored in the learning model storage unit 107, thereby executing the inspection.
<合成画像の詳細>
 次に、合成画像400の第1の生成方法、及び、第2の生成方法について説明する。検査装置20は、第1の生成方法、及び、第2の生成方法のいずれで合成画像400を生成してもよい。
<Details of composite image>
Next, a first generation method and a second generation method of the synthetic image 400 will be described. The inspection device 20 may generate the composite image 400 using either the first generation method or the second generation method.
<<合成画像の第1の生成方法>>
 図20は、実施の形態2に係る合成画像400の第1の生成方法の一例を示す図である。
<<First Synthetic Image Generation Method>>
FIG. 20 is a diagram showing an example of a first method for generating the synthesized image 400 according to the second embodiment.
 注目領域抽出部111は、ワーク画像4から抽出した各注目領域5(例えば、注目領域5a、5b)に対応する注目領域画像(画素)を、当該注目領域5(例えば注目領域5a、5b)に含まれる部品2(例えば部品2a、2b)に応じて、最適なサイズに拡大又は縮小する。例えば、注目領域抽出部111は、拡大した方が学習モデル110での異常検出精度が高くなる部品2aの注目領域画像を拡大する。例えば、注目領域抽出部111は、縮小しても学習モデル110での異常検出精度があまり変わらない部品2bの注目領域画像を縮小する。 The attention area extracting unit 111 extracts attention area images (pixels) corresponding to each attention area 5 (for example, attention areas 5a and 5b) extracted from the work image 4, to the attention area 5 (for example, attention areas 5a and 5b). Depending on the part 2 involved (eg part 2a, 2b), it is scaled up or down to an optimal size. For example, the attention area extracting unit 111 enlarges the attention area image of the part 2a for which the abnormality detection accuracy in the learning model 110 is improved by enlarging it. For example, the attention area extracting unit 111 reduces the attention area image of the part 2b for which the abnormality detection accuracy in the learning model 110 does not change much even if it is reduced.
 注目領域抽出部111は、拡大又は縮小に加えて、余剰領域も調整してよい。余剰領域とは、注目領域画像における部品2の周辺画素の領域を示す。例えば、注目領域抽出部111は、余剰領域(つまり周辺画素)を大きくとった方が学習モデル110の評価値の算出精度が高くなる部品2については余剰領域を大きくとって注目領域画像を抽出する。例えば、注目領域抽出部111は、余剰領域(つまり周辺画素)を小さくとった方が学習モデル110の評価値の算出精度が高くなる部品2aについては余剰領域を小さくとって注目領域画像を抽出する。 The attention area extraction unit 111 may adjust the surplus area in addition to the enlargement or reduction. The surplus area is the area of peripheral pixels of the part 2 in the attention area image. For example, the attention area extraction unit 111 takes a large surplus area when extracting the attention area image of a part 2 for which a larger surplus area (that is, more peripheral pixels) improves the calculation accuracy of the evaluation value of the learning model 110. For example, the attention area extraction unit 111 takes a small surplus area when extracting the attention area image of a part 2a for which a smaller surplus area (that is, fewer peripheral pixels) improves the calculation accuracy of the evaluation value of the learning model 110.
 注目領域抽出部111は、1つのワーク画像4について、このように拡大又は縮小、並びに、余剰領域を調整した各注目領域5に対応する注目領域画像を合成して、図20に示すような1つの合成画像400を生成する。注目領域抽出部111は、生成した合成画像400を、合成画像格納部112に格納する。 For one work image 4, the attention area extraction unit 111 synthesizes the attention area images corresponding to the attention areas 5 that have been enlarged or reduced and whose surplus areas have been adjusted in this way, and generates one composite image 400 as shown in FIG. 20. The attention area extraction unit 111 stores the generated composite image 400 in the composite image storage unit 112.
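A minimal sketch of this first generation method, assuming H x W x 3 numpy/OpenCV images and per-area scale and margin settings determined beforehand. The horizontal packing into the composite canvas is a simplification for illustration; this disclosure does not fix a particular layout.

```python
import cv2
import numpy as np

# Sketch of the first generation method: crop each attention area 5 with a
# per-part margin, rescale it by a per-part factor, and pack the crops
# into one composite image 400.
def compose_v1(work_image, regions, scale, margin):
    h_img, w_img = work_image.shape[:2]
    tiles = []
    for rid, (x1, y1, x2, y2) in regions.items():
        m = margin[rid]  # extra peripheral pixels kept around the part 2
        crop = work_image[max(y1 - m, 0):min(y2 + m, h_img),
                          max(x1 - m, 0):min(x2 + m, w_img)]
        s = scale[rid]   # >1 enlarges small parts, <1 shrinks large ones
        tiles.append(cv2.resize(crop, None, fx=s, fy=s))
    height = max(t.shape[0] for t in tiles)
    padded = [np.pad(t, ((0, height - t.shape[0]), (0, 0), (0, 0)))
              for t in tiles]
    return np.hstack(padded)  # composite image 400
```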
<<合成画像の第2の生成方法>>
 図21は、実施の形態2に係る合成画像400の第2の生成方法の一例を示す図である。
<<Second Generation Method of Synthetic Image>>
FIG. 21 is a diagram showing an example of a second method for generating the synthesized image 400 according to the second embodiment.
 注目領域抽出部111は、ワーク画像4から抽出した各注目領域5(例えば、注目領域5a、5b)に対応する注目領域画像(画素)について、所定のサイズとなるように、拡大又は縮小する。例えば、注目領域抽出部111は、所定のサイズよりも小さな第1の注目領域5aに対応する第1の注目領域画像については所定のサイズとなるように拡大する。例えば、注目領域抽出部111は、所定のサイズよりも大きな第2の注目領域5bに対応する注目領域画像については所定のサイズとなるように縮小する。例えば、注目領域抽出部111は、第2の注目領域5bに対応する画素が、第1の注目領域5aに対応する画素と同じになるように、第2の注目領域5bの周辺画素をワーク画像4から抽出する。 The attention area extraction unit 111 enlarges or reduces the attention area images (pixels) corresponding to the attention areas 5 (for example, attention areas 5a and 5b) extracted from the work image 4 so that each has a predetermined size. For example, the attention area extraction unit 111 enlarges the first attention area image, corresponding to the first attention area 5a smaller than the predetermined size, to the predetermined size. For example, the attention area extraction unit 111 reduces the attention area image, corresponding to the second attention area 5b larger than the predetermined size, to the predetermined size. For example, the attention area extraction unit 111 extracts the peripheral pixels of the second attention area 5b from the work image 4 so that the pixels corresponding to the second attention area 5b become the same as the pixels corresponding to the first attention area 5a.
 注目領域抽出部111は、拡大又は縮小に加えて、余剰領域も調整してよい。例えば、注目領域抽出部111は、余剰領域(つまり部品2の周辺画素)を大きくとった方が学習モデル110の評価値の算出精度が高くなる部品2aについては余剰領域を大きくとって注目領域画像を抽出する。例えば、注目領域抽出部111は、余剰領域(つまり部品2の周辺画素)を小さくとった方が学習モデル110の評価値の算出精度が高くなる部品2bについては余剰領域を小さくとって注目領域画像を抽出する。 The attention area extraction unit 111 may adjust the surplus area in addition to the enlargement or reduction. For example, the attention area extraction unit 111 takes a large surplus area when extracting the attention area image of the part 2a, for which a larger surplus area (that is, more peripheral pixels of the part 2) improves the calculation accuracy of the evaluation value of the learning model 110. For example, the attention area extraction unit 111 takes a small surplus area when extracting the attention area image of the part 2b, for which a smaller surplus area improves the calculation accuracy of the evaluation value of the learning model 110.
 そして、注目領域抽出部111は、1つのワーク画像4について、このように拡大又は縮小、並びに、余剰領域を調整して同じサイズにした各注目領域画像を合成して、図21に示すような1つの合成画像400を生成する。注目領域抽出部111は、生成した合成画像400を、合成画像格納部112に格納する。 Then, for one work image 4, the attention area extraction unit 111 synthesizes the attention area images that have been enlarged or reduced and whose surplus areas have been adjusted so that all have the same size, and generates one composite image 400 as shown in FIG. 21. The attention area extraction unit 111 stores the generated composite image 400 in the composite image storage unit 112.
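A corresponding sketch of the second generation method, under the same image-format assumptions: every crop is normalized to one tile size and packed into a grid, so that all attention areas occupy identical pixel budgets.

```python
import cv2
import numpy as np

# Sketch of the second generation method: every attention area image is
# resized (after margin adjustment) to the same tile size and packed
# into a grid to form the composite image 400.
def compose_v2(work_image, regions, margin, tile=(64, 64), cols=4):
    h_img, w_img = work_image.shape[:2]
    tiles = []
    for rid, (x1, y1, x2, y2) in regions.items():
        m = margin[rid]
        crop = work_image[max(y1 - m, 0):min(y2 + m, h_img),
                          max(x1 - m, 0):min(x2 + m, w_img)]
        tiles.append(cv2.resize(crop, tile))  # tile is (width, height)
    # pad with black tiles so the grid is rectangular
    blank = np.zeros((tile[1], tile[0], 3), dtype=work_image.dtype)
    while len(tiles) % cols:
        tiles.append(blank)
    rows = [np.hstack(tiles[i:i + cols]) for i in range(0, len(tiles), cols)]
    return np.vstack(rows)  # composite image 400
```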
 このように合成画像400を生成することにより、上述した対策2と比較して、本実施の形態は、画像のデータ量が小さくなるため、処理量及びメモリ使用量を低減できる。また、上述した対策1と比較して、小さな部品2の解像度が不足しなくなるので、本実施の形態に係る検査装置20は、高い精度で異常を検出できる。 By generating the composite image 400 in this way, the data amount of the image is smaller in this embodiment than in the above-described countermeasure 2, so the amount of processing and memory usage can be reduced. In addition, as compared with measure 1 described above, the resolution of the small component 2 is no longer insufficient, so the inspection apparatus 20 according to the present embodiment can detect abnormalities with high accuracy.
<検査前処理フロー>
 図22は、実施の形態2に係る検査前処理の一例を示すフローチャートである。
<Pre-test processing flow>
FIG. 22 is a flowchart showing an example of pre-inspection processing according to Embodiment 2.
 撮像制御部101は、カメラ21を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS401)。 The imaging control unit 101 controls the camera 21 to image the workpiece 3 and generate the workpiece image 4 (step S401).
 注目領域設定部201は、ワーク3に複数の注目領域5を設定する(ステップS402)。注目領域設定部201は、設定した複数の注目領域5を示す情報(注目領域情報)を検査装置20に送信する。検査装置20は、受信した複数の注目領域情報を注目領域格納部104に格納する。 The attention area setting unit 201 sets a plurality of attention areas 5 on the workpiece 3 (step S402). The attention area setting unit 201 transmits information (attention area information) indicating a plurality of set attention areas 5 to the inspection device 20 . The inspection apparatus 20 stores the received plural pieces of attention area information in the attention area storage unit 104 .
 撮像条件生成部204は、撮像パターン格納部208に格納されている複数の撮像パターンと、照明パターン格納部209に格納されている複数の照明パターンとを組み合わせて、複数の撮像条件を生成する(ステップS403)。撮像条件生成部204は、生成した複数の撮像条件を検査装置20に送信する。検査装置20は、受信した複数の撮像条件を撮像条件格納部105に格納する。 The imaging condition generation unit 204 generates a plurality of imaging conditions by combining a plurality of imaging patterns stored in the imaging pattern storage unit 208 and a plurality of illumination patterns stored in the illumination pattern storage unit 209 ( step S403). The imaging condition generation unit 204 transmits the plurality of generated imaging conditions to the inspection device 20 . The inspection apparatus 20 stores the received multiple imaging conditions in the imaging condition storage unit 105 .
 最適条件決定部102は、撮像条件格納部105に格納されている複数の撮像条件のうち、未選択の撮像条件を1つ選択する(ステップS404)。図22の説明において、当該選択された撮像条件を、選択撮像条件と称する。 The optimum condition determination unit 102 selects one unselected imaging condition from the plurality of imaging conditions stored in the imaging condition storage unit 105 (step S404). In the description of FIG. 22, the selected imaging conditions are referred to as selected imaging conditions.
 撮像制御部101は、選択撮像条件に基づいてカメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS405)。撮像制御部101は、生成したワーク画像4をRAM43又はストレージ44に格納する。 The imaging control unit 101 controls the camera 21 and the lighting device 30 based on the selected imaging conditions to capture an image of the workpiece 3 and generate a workpiece image 4 (step S405). The imaging control unit 101 stores the generated work image 4 in the RAM 43 or storage 44 .
 最適条件決定部102は、撮像条件格納部105に格納されているすべての撮像条件を選択したか否かを判定する(ステップS406)。 The optimum condition determination unit 102 determines whether or not all imaging conditions stored in the imaging condition storage unit 105 have been selected (step S406).
 未選択の撮像条件が残っている場合(S406:NO)、検査装置20は、処理をステップS404に戻す。すべての撮像条件を選択した場合(S406:YES)、検査装置20は、処理をステップS407に進める。 When unselected imaging conditions remain (S406: NO), the inspection apparatus 20 returns the process to step S404. If all imaging conditions have been selected (S406: YES), the inspection apparatus 20 advances the process to step S407.
 注目領域抽出部111は、上述した合成画像の第1の生成方法(図20参照)又は合成画像の第2の生成方法(図21参照)によって、RAM43又はストレージ44に格納されている各ワーク画像4から、様々な抽出条件で各注目領域5を抽出する。抽出条件には、拡大率又は縮小率、並びに、余剰領域の割合等が含まれている。例えば、注目領域抽出部111は、ワーク画像4内の各注目領域5に対応する領域の画像を抽出し、抽出条件に基づいて拡大又は縮小する。また、注目領域抽出部111は、ワーク画像4内の各注目領域5に対応する領域から、抽出条件に基づいて余剰領域を増加又は減少させて画像を抽出する。注目領域抽出部111は、抽出した各注目領域5に対応する注目領域画像を合成して合成画像400を生成する(ステップS407)。注目領域抽出部111は、生成した複数の合成画像400を、合成画像格納部112に格納する。これにより、合成画像格納部112には、様々な撮像条件にて撮像された各ワーク画像4から、様々な抽出条件にて抽出された注目領域画像を合成した複数の合成画像400が格納される。 Using the above-described first composite image generation method (see FIG. 20) or second composite image generation method (see FIG. 21), the region-of-interest extraction unit 111 extracts each region of interest 5 from each work image 4 stored in the RAM 43 or the storage 44 under various extraction conditions. The extraction conditions include an enlargement or reduction ratio, a surplus-region ratio, and the like. For example, the region-of-interest extraction unit 111 extracts the image of the region corresponding to each region of interest 5 in the work image 4 and enlarges or reduces it based on the extraction conditions. The region-of-interest extraction unit 111 also extracts an image from the region corresponding to each region of interest 5 in the work image 4 while increasing or decreasing the surplus region based on the extraction conditions. The region-of-interest extraction unit 111 combines the region-of-interest images corresponding to the extracted regions of interest 5 to generate a composite image 400 (step S407). The region-of-interest extraction unit 111 stores the plurality of generated composite images 400 in the composite image storage unit 112. As a result, the composite image storage unit 112 stores a plurality of composite images 400, each obtained by combining region-of-interest images extracted under various extraction conditions from the work images 4 captured under various imaging conditions.
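 As an illustrative sketch of step S407 (the scale factors, margin ratios, and helper names are assumptions; make_composite is the sketch given after FIG. 21 above), the extraction conditions can be swept as a grid of scale factor × surplus-region ratio over every captured work image, yielding one composite per (imaging condition, extraction condition) pair.

from itertools import product

scales = [0.5, 1.0, 2.0]    # assumed enlargement/reduction factors
margins = [0.0, 0.1, 0.25]  # assumed surplus-region ratios

def build_all_composites(work_images, rois):
    # work_images: dict mapping imaging condition -> work image captured under it.
    composites = {}
    for (cond, img), s, m in product(work_images.items(), scales, margins):
        composites[(cond, (s, m))] = make_composite(
            img, rois, margin_ratios=[m] * len(rois), tile=int(64 * s))
    return composites  # keyed by (imaging condition, extraction condition)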
 検査装置20は、最適条件決定処理を実行する(ステップS408)。なお、最適条件決定処理の詳細については後述する(図23参照)。そして、本処理は終了する。 The inspection device 20 executes optimum condition determination processing (step S408). Details of the optimum condition determination process will be described later (see FIG. 23). Then, the process ends.
 図23は、図22に示す最適条件決定処理(ステップS408)の一例を示すフローチャートである。 FIG. 23 is a flow chart showing an example of the optimum condition determination process (step S408) shown in FIG.
 最適条件決定部102は、注目領域格納部104に格納されている複数の注目領域5のうち、未選択の注目領域5を1つ選択する(ステップS501)。図23の説明において、当該選択された注目領域5を、選択注目領域5と称する。 The optimum condition determination unit 102 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S501). In the description of FIG. 23 , the selected attention area 5 is called a selected attention area 5 .
 最適条件決定部102は、合成画像格納部112に格納された、異なる撮像条件で撮像され、異なる抽出条件で抽出された複数の合成画像400のそれぞれから、選択注目領域5における画像(以下、選択注目領域画像と称する)を取得する。最適条件決定部102は、学習モデル格納部107に格納されている学習モデル110を用いて、取得した複数の選択注目領域画像のそれぞれについて評価値を算出する(ステップS502)。 From each of the plurality of composite images 400 stored in the composite image storage unit 112, which were captured under different imaging conditions and extracted under different extraction conditions, the optimum condition determination unit 102 acquires the image in the selected region of interest 5 (hereinafter referred to as a selected region-of-interest image). Using the learning model 110 stored in the learning model storage unit 107, the optimum condition determination unit 102 calculates an evaluation value for each of the acquired selected region-of-interest images (step S502).
 最適条件決定部102は、ステップS502で算出した評価値に基づいて、選択注目領域5の最適撮像条件及び最適抽出条件を決定する(ステップS503)。例えば、最適条件決定部102は、評価値が最も高く算出された撮像条件を、選択注目領域5の最適撮像条件に決定する。また、最適条件決定部102は、評価値が最も高く算出された抽出条件を、選択注目領域5の最適抽出条件に決定する。 The optimum condition determination unit 102 determines the optimum imaging conditions and optimum extraction conditions for the selected region of interest 5 based on the evaluation values calculated in step S502 (step S503). For example, the optimum condition determining unit 102 determines the imaging condition with the highest calculated evaluation value as the optimum imaging condition for the selected region of interest 5 . The optimum condition determination unit 102 also determines the extraction condition with the highest calculated evaluation value as the optimum extraction condition for the selected region of interest 5 .
 最適条件決定部102は、選択注目領域5に、ステップS503で決定した最適撮像条件及び最適抽出条件を対応付けて、注目領域格納部104に格納する(ステップS504)。 The optimum condition determination unit 102 associates the selected attention area 5 with the optimum imaging conditions and optimum extraction conditions determined in step S503, and stores them in the attention area storage unit 104 (step S504).
 最適条件決定部102は、ステップS503で決定した最適撮像条件を、最適撮像条件格納部106に格納する(ステップS505)。 The optimal condition determination unit 102 stores the optimal imaging conditions determined in step S503 in the optimal imaging condition storage unit 106 (step S505).
 最適条件決定部102は、注目領域格納部104に格納されているすべての注目領域5を選択したか否かを判定する(ステップS506)。 The optimum condition determination unit 102 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S506).
 未選択の注目領域5が残っている場合(ステップS506:NO)、検査装置20は、処理をステップS501に戻す。すべての注目領域5を選択した場合(ステップS506:YES)、検査装置20は、本処理を終了する。 If an unselected region of interest 5 remains (step S506: NO), the inspection device 20 returns the process to step S501. If all the attention areas 5 have been selected (step S506: YES), the inspection device 20 terminates this process.
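 The loop of FIG. 23 amounts to an argmax search per region of interest; a hedged sketch follows (score_roi stands in for the evaluation-value computation of the learning model 110 and, like the data layout, is an assumption for illustration).

def choose_optimum(rois, composites, score_roi):
    # composites: dict mapping (imaging condition, extraction condition) -> composite image.
    # For each ROI, keep the condition pair whose composite yields the highest
    # evaluation value (steps S501-S506).
    best = {}
    for roi in rois:
        best[roi] = max(composites,
                        key=lambda cond: score_roi(composites[cond], roi))
    return best  # roi -> (optimum imaging condition, optimum extraction condition)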
<検査処理フロー>
 図24は、実施の形態2に係る検査処理の一例を示すフローチャートである。
<Inspection processing flow>
 FIG. 24 is a flowchart illustrating an example of the inspection processing according to Embodiment 2.
 撮像制御部101は、検査対象のワーク3に対応付けられている注目領域格納部104及び最適撮像条件格納部106を選択する(ステップS601)。 The imaging control unit 101 selects the attention area storage unit 104 and the optimum imaging condition storage unit 106 associated with the workpiece 3 to be inspected (step S601).
 撮像制御部101は、検出センサ19がワーク3を検出したか否かを判定する(ステップS602)。例えば、撮像制御部101は、検出センサ19からワーク検出通知を受信したか否かを判定する。 The imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S602). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
 検出センサ19がワーク3を未検出である場合(ステップS602:NO)、検査装置20は、処理をステップS602に戻す。検出センサ19がワーク3を検出した場合(ステップS602:YES)、検査装置20は、処理を次のステップS603に進める。 When the detection sensor 19 has not detected the workpiece 3 (step S602: NO), the inspection device 20 returns the process to step S602. If the detection sensor 19 detects the workpiece 3 (step S602: YES), the inspection apparatus 20 advances the process to the next step S603.
 撮像制御部101は、最適撮像条件格納部106から、未選択の最適撮像条件を1つ選択する(ステップS603)。図24の説明において、当該選択された最適撮像条件を、選択最適撮像条件と称する。 The imaging control unit 101 selects one unselected optimum imaging condition from the optimum imaging condition storage unit 106 (step S603). In the description of FIG. 24, the selected optimum imaging conditions are referred to as selected optimum imaging conditions.
 撮像制御部101は、選択最適撮像条件に基づいてカメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS604)。 The imaging control unit 101 controls the camera 21 and the illumination device 30 based on the selected optimum imaging conditions to image the workpiece 3 and generate the workpiece image 4 (step S604).
 注目領域抽出部111は、上述した合成画像の第1の生成方法(図20参照)又は合成画像の第2の生成方法(図21参照)により、ワーク画像4から合成画像400を生成する(ステップS605)。具体的には、注目領域抽出部111は、ワーク画像4の各注目領域5を当該注目領域5に対応付けられている最適抽出条件で抽出し、抽出した注目領域画像を合成して合成画像400を生成する。注目領域抽出部111は、生成した合成画像400を、合成画像格納部112に格納する。これにより、ワーク画像4をそのまま格納する実施の形態1と比較して、メモリの使用量を削減できる。 The region-of-interest extraction unit 111 generates a composite image 400 from the work image 4 by the above-described first composite image generation method (see FIG. 20) or second composite image generation method (see FIG. 21) (step S605). Specifically, the region-of-interest extraction unit 111 extracts each region of interest 5 of the work image 4 under the optimum extraction condition associated with that region of interest 5, and combines the extracted region-of-interest images to generate the composite image 400. The region-of-interest extraction unit 111 stores the generated composite image 400 in the composite image storage unit 112. This reduces memory usage compared with Embodiment 1, in which the work image 4 is stored as-is.
 撮像制御部101は、最適撮像条件格納部106に含まれるすべての最適撮像条件を選択したか否かを判定する(ステップS606)。 The imaging control unit 101 determines whether or not all the optimum imaging conditions included in the optimum imaging condition storage unit 106 have been selected (step S606).
 未選択の最適撮像条件が残っている場合(ステップS606:NO)、検査装置20は、処理をステップS603に戻す。すべての最適撮像条件を選択した場合(ステップS606:YES)、検査装置20は、処理を次のステップS607に進める。 If unselected optimum imaging conditions remain (step S606: NO), the inspection device 20 returns the process to step S603. If all the optimum imaging conditions have been selected (step S606: YES), the inspection apparatus 20 advances the process to the next step S607.
 検査実行部103は、注目領域格納部104に格納されている複数の注目領域5のうち、未選択の注目領域5を1つ選択する(ステップS607)。図24の説明において、当該選択された注目領域5を、選択注目領域5と称する。 The inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S607). In the description of FIG. 24 , the selected attention area 5 is called a selected attention area 5 .
 検査実行部103は、合成画像格納部112に格納されている複数の合成画像400のうち、選択注目領域5に対応付けられている最適撮像条件及び最適抽出条件に基づいて生成された合成画像400を選択する(ステップS608)。図24の説明において、当該選択された合成画像400を、選択合成画像400と称する。 From among the plurality of composite images 400 stored in the composite image storage unit 112, the inspection execution unit 103 selects the composite image 400 generated based on the optimum imaging condition and the optimum extraction condition associated with the selected region of interest 5 (step S608). In the description of FIG. 24, this selected composite image 400 is referred to as the selected composite image 400.
 検査実行部103は、学習モデル格納部107に格納されている学習モデル110を用いて、選択合成画像400における選択注目領域5に対応する領域の画像の評価値を算出する(ステップS609)。選択合成画像400は、選択注目領域5を撮像するに最適な撮像条件で撮像されたワーク画像4から、選択注目領域5を抽出するに最適な抽出条件で抽出された注目領域画像を合成して生成されたものである。よって、当該選択合成画像400から取得された選択注目領域画像から算出される評価値は、単一の撮像条件で撮像されたワーク画像から取得された選択注目領域画像から算出される評価値よりも、精度が高くなり得る。 Using the learning model 110 stored in the learning model storage unit 107, the inspection execution unit 103 calculates an evaluation value for the image of the region corresponding to the selected region of interest 5 in the selected composite image 400 (step S609). The selected composite image 400 is generated by combining region-of-interest images extracted, under the extraction conditions optimal for extracting the selected region of interest 5, from the work image 4 captured under the imaging conditions optimal for imaging the selected region of interest 5. Therefore, the evaluation value calculated from the selected region-of-interest image obtained from the selected composite image 400 can be more accurate than an evaluation value calculated from a selected region-of-interest image obtained from a work image captured under a single imaging condition.
 検査実行部103は、ステップS609で算出した評価値に基づいて、選択注目領域5の検査結果を決定する(ステップS610)。例えば、検査実行部103は、評価値が所定の閾値Th未満である場合、選択注目領域5において部品2が異常に装着されている(NG)と決定する。例えば、検査実行部103は、評価値が閾値Th以上である場合、選択注目領域5において部品2が正常に装着されている(OK)と決定する。検査実行部103は、選択注目領域5に検査結果を対応付けてRAM43又はストレージ44に格納する。 The inspection execution unit 103 determines the inspection result of the selected region of interest 5 based on the evaluation value calculated in step S609 (step S610). For example, when the evaluation value is less than a predetermined threshold Th, the inspection execution unit 103 determines that the component 2 is abnormally mounted (NG) in the selected region of interest 5. When the evaluation value is equal to or greater than the threshold Th, the inspection execution unit 103 determines that the component 2 is normally mounted (OK) in the selected region of interest 5. The inspection execution unit 103 associates the inspection result with the selected region of interest 5 and stores it in the RAM 43 or the storage 44.
 検査実行部103は、注目領域格納部104に格納されているすべての注目領域5を選択したか否かを判定する(ステップS611)。未選択の注目領域5が残っている場合、検査装置20は、処理をステップS607に戻す。 The inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S611). If there remains an unselected region of interest 5, the inspection apparatus 20 returns the process to step S607.
 すべての注目領域5を選択した場合、検査実行部103は、ステップS610にてRAM43又はストレージ44に格納された、各注目領域5に対応付けられている検査結果をまとめてワーク検査結果情報を生成し、管理装置40へ送信する(ステップS612)。そして、検査装置20は、処理をステップS602に戻し、次に搬送されてくるワーク3の検査を行う。 When all the attention areas 5 are selected, the inspection execution unit 103 collects the inspection results associated with each attention area 5 and stored in the RAM 43 or the storage 44 in step S610 to generate workpiece inspection result information. and transmitted to the management device 40 (step S612). The inspection apparatus 20 then returns the process to step S602 and inspects the work 3 that is next transported.
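 Steps S607 through S612 can be summarized by the following hedged sketch (model_score, the dictionary layout, and the threshold value are assumptions for illustration, not the disclosed implementation).

def inspect_work(rois, composites, optimum, model_score, th=0.8):
    # optimum: roi -> (optimum imaging condition, optimum extraction condition),
    # as determined by the pre-inspection processing.
    results = {}
    for roi in rois:
        comp = composites[optimum[roi]]               # select the matching composite (S608)
        value = model_score(comp, roi)                # evaluation value from learning model 110 (S609)
        results[roi] = "OK" if value >= th else "NG"  # threshold decision (S610)
    return results  # aggregated into the work inspection result information (S612)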
 なお、管理装置40のUI制御部206は、検査装置20からワーク検査結果情報を受信し、検査結果格納部212に格納する。また、UI制御部206は、ワーク検査結果情報の内容を表示装置60に表示する。ユーザは、表示装置60に表示されたワーク検査結果情報の内容を見て、ワーク3の各注目領域5において部品2が正常に装着されているか否かを確認できる。 The UI control unit 206 of the management device 40 receives work inspection result information from the inspection device 20 and stores it in the inspection result storage unit 212 . Also, the UI control unit 206 displays the content of the work inspection result information on the display device 60 . The user can see the contents of the workpiece inspection result information displayed on the display device 60 and can confirm whether or not the component 2 is normally mounted in each attention area 5 of the workpiece 3 .
<検査処理フローの変形例>
 次に、検査処理の変形例として、検査装置20が、単一の撮像条件によって撮像されたワーク画像から合成画像400を生成して検査を行う場合の処理について説明する。図25は、実施の形態2に係る検査処理の変形例を示すフローチャートである。
<Modified example of inspection processing flow>
Next, as a modified example of the inspection process, a process in which the inspection apparatus 20 generates a composite image 400 from a workpiece image captured under a single imaging condition and performs inspection will be described. FIG. 25 is a flowchart showing a modification of inspection processing according to the second embodiment.
 撮像制御部101は、検査対象のワーク3に対応付けられている注目領域格納部104及び最適撮像条件格納部106を選択する(ステップS701)。 The imaging control unit 101 selects the attention area storage unit 104 and the optimum imaging condition storage unit 106 associated with the workpiece 3 to be inspected (step S701).
 撮像制御部101は、検出センサ19がワーク3を検出したか否かを判定する(ステップS702)。例えば、撮像制御部101は、検出センサ19からワーク検出通知を受信したか否かを判定する。 The imaging control unit 101 determines whether or not the detection sensor 19 has detected the workpiece 3 (step S702). For example, the imaging control unit 101 determines whether or not a workpiece detection notification has been received from the detection sensor 19 .
 検出センサ19がワーク3を未検出である場合(ステップS702:NO)、検査装置20は、処理をステップS702に戻す。検出センサ19がワーク3を検出した場合(ステップS702:YES)、検査装置20は、処理を次のステップS703に進める。 If the detection sensor 19 has not detected the workpiece 3 (step S702: NO), the inspection device 20 returns the process to step S702. If the detection sensor 19 detects the workpiece 3 (step S702: YES), the inspection apparatus 20 advances the process to the next step S703.
 撮像制御部101は、所定の撮像条件に基づいてカメラ21及び照明装置30を制御してワーク3を撮像し、ワーク画像4を生成する(ステップS703)。 The imaging control unit 101 controls the camera 21 and the lighting device 30 based on a predetermined imaging condition to image the workpiece 3 and generate the workpiece image 4 (step S703).
 注目領域抽出部111は、上述した合成画像の第1の生成方法(図20参照)又は合成画像の第2の生成方法(図21参照)により、ワーク画像4から合成画像400を生成する(ステップS704)。具体的には、注目領域抽出部111は、ワーク画像4の各注目領域5を当該注目領域5に対応付けられている最適抽出条件で抽出し、抽出した注目領域画像を合成して合成画像400を生成する。注目領域抽出部111は、生成した合成画像400を、合成画像格納部112に格納する。これにより、ワーク画像4をそのまま格納する実施の形態1と比較して、メモリの使用量を削減できる。 The region-of-interest extraction unit 111 generates a composite image 400 from the work image 4 by the above-described first composite image generation method (see FIG. 20) or second composite image generation method (see FIG. 21) (step S704). Specifically, the region-of-interest extraction unit 111 extracts each region of interest 5 of the work image 4 under the optimum extraction condition associated with that region of interest 5, and combines the extracted region-of-interest images to generate the composite image 400. The region-of-interest extraction unit 111 stores the generated composite image 400 in the composite image storage unit 112. This reduces memory usage compared with Embodiment 1, in which the work image 4 is stored as-is.
 検査実行部103は、注目領域格納部104に格納されている複数の注目領域5のうち、未選択の注目領域5を1つ選択する(ステップS705)。図25の説明において、当該選択された注目領域5を、選択注目領域5と称する。 The inspection execution unit 103 selects one unselected attention area 5 from among the plurality of attention areas 5 stored in the attention area storage unit 104 (step S705). In the description of FIG. 25 , the selected attention area 5 is called a selected attention area 5 .
 検査実行部103は、学習モデル格納部107に格納されている学習モデル110を用いて、ステップS704にて生成された合成画像400における選択注目領域5に対応する領域の画像の評価値を算出する(ステップS706)。当該合成画像400は、選択注目領域5を抽出するに最適な抽出条件で抽出された注目領域画像を合成して生成されたものである。よって、当該合成画像400から取得された選択注目領域画像から算出される評価値は、単一の撮像条件で撮像されたワーク画像から取得された選択注目領域画像から算出される評価値よりも、精度が高くなり得る。 Using the learning model 110 stored in the learning model storage unit 107, the inspection execution unit 103 calculates an evaluation value for the image of the region corresponding to the selected region of interest 5 in the composite image 400 generated in step S704 (step S706). This composite image 400 is generated by combining region-of-interest images extracted under the extraction conditions optimal for extracting the selected region of interest 5. Therefore, the evaluation value calculated from the selected region-of-interest image obtained from this composite image 400 can be more accurate than an evaluation value calculated from a selected region-of-interest image obtained from a work image captured under a single imaging condition.
 検査実行部103は、ステップS706で算出した評価値に基づいて、選択注目領域5の検査結果を決定する(ステップS707)。例えば、検査実行部103は、評価値が所定の閾値Th未満である場合、選択注目領域5において部品2が異常に装着されている(NG)と決定する。例えば、検査実行部103は、評価値が閾値Th以上である場合、選択注目領域5において部品2が正常に装着されている(OK)と決定する。検査実行部103は、選択注目領域5に検査結果を対応付けてRAM43又はストレージ44に格納する。 The inspection execution unit 103 determines the inspection result of the selected region of interest 5 based on the evaluation value calculated in step S706 (step S707). For example, when the evaluation value is less than a predetermined threshold Th, the inspection execution unit 103 determines that the component 2 is abnormally mounted (NG) in the selected region of interest 5. When the evaluation value is equal to or greater than the threshold Th, the inspection execution unit 103 determines that the component 2 is normally mounted (OK) in the selected region of interest 5. The inspection execution unit 103 associates the inspection result with the selected region of interest 5 and stores it in the RAM 43 or the storage 44.
 検査実行部103は、注目領域格納部104に格納されているすべての注目領域5を選択したか否かを判定する(ステップS708)。未選択の注目領域5が残っている場合、検査装置20は、処理をステップS705に戻す。 The inspection execution unit 103 determines whether or not all the attention areas 5 stored in the attention area storage unit 104 have been selected (step S708). If an unselected region of interest 5 remains, the inspection apparatus 20 returns the process to step S705.
 すべての注目領域5を選択した場合、検査実行部103は、ステップS707にてRAM43又はストレージ44に格納された、各注目領域5に対応付けられている検査結果をまとめてワーク検査結果情報を生成し、管理装置40へ送信する(ステップS709)。そして、検査装置20は、処理をステップS702に戻し、次に搬送されてくるワーク3の検査を行う。 When all the attention areas 5 are selected, the inspection execution unit 103 collects the inspection results associated with each attention area 5 stored in the RAM 43 or the storage 44 in step S707 to generate workpiece inspection result information. and transmitted to the management device 40 (step S709). The inspection apparatus 20 then returns the process to step S702 and inspects the work 3 that is next transported.
<実施の形態2のまとめ>
 実施の形態2の内容は、以下の項目のように表現できる。
<Summary of Embodiment 2>
The contents of the second embodiment can be expressed as the following items.
<項目1>
 検査装置20は、1つ以上のプロセッサ22と、メモリ(例えばRAM24)と、メモリに保存されているプログラムと、を備える。プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1検査対象物(例えば部品2a)と第1検査対象物とは異なる第2検査対象物(例えば部品2b)とを含む複数の検査対象物を含む検査対象領域において、第1検査対象物の検査を行うための第1注目領域5(5a)と、第2検査対象物の検査を行うための第2注目領域5(5b)とを設定する。
 プログラムは、検査対象領域を撮像して検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に検査対象領域を撮像させる。
 プログラムは、撮像画像から、第1注目領域に対応する第1画像領域と、第2注目領域に対応する第2画像領域とを抽出する。
 プログラムは、複数の検査対象物の異常を検知するための学習モデル110と、第1画像領域とに基づいて第1検査対象物を検査する第1検査を実行する。
 プログラムは、学習モデルと、第2画像領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 プログラムは、第1検査の結果と、第2検査の結果とを出力する。
 これにより、検査装置20は、撮像画像から抽出した第1画像領域を用いて、第1検査対象物を検査して第1検査の結果を出力し、撮像画像から抽出した第2画像領域を用いて、第2検査対象物を検査した第2検査の結果を出力する。よって、検査装置20は、適切に抽出した画像領域を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。加えて、検査装置20は、撮像画像からそのまま第1検査対象物及び第2検査対象物を検査する場合と比較して、プロセッサ22の処理負荷及びメモリ(例えばRAM23)の使用量を軽減できる。
<Item 1>
The inspection apparatus 20 includes one or more processors 22, a memory (for example, the RAM 24), and a program stored in the memory. The program causes the processor 22 to execute the following.
In an inspection target region containing a plurality of inspection objects including a first inspection object (for example, the component 2a) and a second inspection object (for example, the component 2b) different from the first inspection object, the program sets a first region of interest 5 (5a) for inspecting the first inspection object and a second region of interest 5 (5b) for inspecting the second inspection object.
The program causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4) to capture the inspection target area.
The program extracts a first image area corresponding to the first attention area and a second image area corresponding to the second attention area from the captured image.
The program executes a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
The program performs a second inspection that inspects a second inspection object based on the learning model and the second image region.
The program outputs the result of the first inspection and the result of the second inspection.
As a result, the inspection apparatus 20 inspects the first inspection object using the first image region extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image region extracted from the captured image and outputs the result of the second inspection. The inspection apparatus 20 can therefore inspect each inspection object using an appropriately extracted image region and obtain more accurate inspection results. In addition, the inspection apparatus 20 can reduce the processing load on the processor 22 and the usage of the memory (for example, the RAM 24) compared with inspecting the first inspection object and the second inspection object directly from the captured image.
<項目2>
 項目1に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1画像領域と第2画像領域とを含む第1合成画像400を生成する。
 プログラムは、第1合成画像と学習モデルとを用いて第1検査と前記第2検査とを実行する。
 これにより、検査装置20は、第1合成画像を用いて、第1検査及び第2検査を実行できる。よって、検査装置20は、撮像画像からそのまま第1検査対象物及び第2検査対象物を検査する場合と比較して、プロセッサ22の処理負荷及びメモリの使用量を軽減できる。
<Item 2>
In the inspection device 20 described in item 1, the program causes the processor 22 to execute the following.
The program generates a first composite image 400 that includes a first image region and a second image region.
The program executes the first inspection and the second inspection using the first composite image and the learning model.
Thereby, the inspection apparatus 20 can perform the first inspection and the second inspection using the first synthesized image. Therefore, the inspection apparatus 20 can reduce the processing load of the processor 22 and the amount of memory used compared to the case where the first inspection object and the second inspection object are inspected directly from the captured image.
<項目3>
 項目1又は2に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、撮像画像から第1注目領域に対応する画素を拡大又は縮小して第1画像領域を抽出する。
 プログラムは、撮像画像から第2注目領域に対応する画素を拡大又は縮小して第2画像領域を抽出する。
 これにより、検査装置20は、学習モデルに入力可能なサイズの第1合成画像を生成できる。
<Item 3>
In the inspection device 20 according to item 1 or 2, the program causes the processor 22 to perform the following.
The program extracts the first image area by enlarging or reducing the pixels corresponding to the first attention area from the captured image.
The program enlarges or reduces pixels corresponding to the second attention area from the captured image to extract the second image area.
As a result, the inspection device 20 can generate the first synthetic image of a size that can be input to the learning model.
<項目4>
 項目3に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1注目領域に対応する画素として、第1注目領域と、第1注目領域の周辺画素とを撮像画像から抽出する。
 プログラムは、第2注目領域に対応する画素として、第2注目領域と、第2注目領域の周辺画素とを撮像画像から抽出する。
 このように、検査装置20は、各注目領域の周辺画素も抽出することにより、学習モデルにおける検査精度を高めることができる。
<Item 4>
In the inspection device 20 described in item 3, the program causes the processor 22 to perform the following.
The program extracts the first region of interest and peripheral pixels of the first region of interest from the captured image as pixels corresponding to the first region of interest.
The program extracts the second region of interest and peripheral pixels of the second region of interest from the captured image as pixels corresponding to the second region of interest.
In this way, the inspection apparatus 20 can improve the inspection accuracy in the learning model by also extracting the peripheral pixels of each attention area.
<項目5>
 項目4に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第2注目領域に対応する画素が第1注目領域に対応する画素と同じ大きさになるように第2注目領域の周辺画素を撮像画像から抽出する。
 これにより、検査装置20は、撮像画像の各注目領域について同じサイズで抽出した画像を合成して、合成画像を生成できる。
<Item 5>
In the inspection device 20 described in item 4, the program causes the processor 22 to perform the following.
The program extracts peripheral pixels of the second region of interest from the captured image so that pixels corresponding to the second region of interest have the same size as pixels corresponding to the first region of interest.
As a result, the inspection apparatus 20 can combine images extracted in the same size for each attention area of the captured image to generate a combined image.
<項目6>
 項目1から5のいずれか1項に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1画像領域と、第2画像領域とに基づいた学習により、学習モデル110を生成する。
 これにより、検査装置20は、撮像画像を用いて、検査対象物を検査するための学習モデル110を生成できる。
<Item 6>
In the inspection apparatus 20 according to any one of items 1 to 5, the program causes the processor 22 to execute the following.
The program generates a learning model 110 by learning based on the first image region and the second image region.
Thereby, the inspection apparatus 20 can generate the learning model 110 for inspecting the inspection object using the captured image.
<項目7>
 項目1から6のいずれか1項に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、第1注目領域に対応する撮像画像の第1領域に基づいて、複数の撮像パラメータを含む第1撮像パターンを含む第1撮像条件を定める。
 プログラムは、第2注目領域に対応する撮像画像の第2領域に基づいて、複数の撮像パラメータを含む第2撮像パターンを含む第2撮像条件を定める。
 これにより、各撮像条件は、撮像パラメータの異なる撮像パターンによって定められる。よって、検査装置20は、撮像パラメータの異なる撮像条件で撮像した撮像画像を用いて、各検査対象物を検査することにより、より精度の高い検査結果を得ることができる。
<Item 7>
In the inspection device 20 according to any one of items 1 to 6, the program causes the processor 22 to execute the following.
The program defines first imaging conditions including a first imaging pattern including a plurality of imaging parameters based on a first region of the captured image corresponding to the first region of interest.
The program defines second imaging conditions including a second imaging pattern including a plurality of imaging parameters based on a second region of the captured image corresponding to the second region of interest.
Thereby, each imaging condition is determined by an imaging pattern having different imaging parameters. Therefore, the inspection apparatus 20 can obtain more accurate inspection results by inspecting each inspection object using captured images captured under imaging conditions with different imaging parameters.
<項目8>
 項目7に記載の検査装置20は、検査対象領域を照射する照明装置30を備える。
 検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、照明装置が検査対象領域を照射するための複数の照射パラメータを含む照射パターンを定める。
 これにより、検査装置20は、照明装置30に、照明パラメータの異なる照明パターンによって検査対象領域を照射させることができる。
<Item 8>
The inspection apparatus 20 according to item 7 includes an illumination device 30 that illuminates an inspection target area.
In the inspection device 20, the program causes the processor 22 to do the following.
The program defines an illumination pattern including a plurality of illumination parameters for illuminating an area to be inspected by the illuminator.
Thereby, the inspection device 20 can cause the illumination device 30 to illuminate the inspection target region with illumination patterns having different illumination parameters.
<項目9>
 項目8に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、検査対象物の撮像画像として、複数の照射条件ごとに第1撮像条件を適用して撮像した検査対象領域の撮像画像と、複数の照射条件ごとに第2撮像条件を適用して撮像した検査対象領域の撮像画像とをカメラ21に出力させる。
 これにより、検査装置20は、異なる照明パターンで撮像した撮像画像を用いて、各検査対象物を検査することができ、より精度の高い検査結果を得ることができる。
<Item 9>
In the inspection device 20 according to item 8, the program causes the processor 22 to perform the following.
The program causes the camera 21 to output, as captured images of the inspection target region, a captured image of the inspection target region captured by applying the first imaging condition for each of the plurality of illumination conditions, and a captured image of the inspection target region captured by applying the second imaging condition for each of the plurality of illumination conditions.
As a result, the inspection apparatus 20 can inspect each inspection object using captured images captured with different illumination patterns, and can obtain more accurate inspection results.
<項目10>
 項目1から9のいずれか1項に記載の検査装置20において、プログラムは、次のことをプロセッサ22に実行させる。
 プログラムは、複数の検査対象物の位置を定めた設計情報、又は検査対象物領域の撮像画像のいずれかに基づいて第1注目領域と第2注目領域とを設定する。
 これにより、検査装置20は、検査対象領域において、複数の注目領域を設定できる。
<Item 10>
In the inspection apparatus 20 according to any one of items 1 to 9, the program causes the processor 22 to execute the following.
The program sets the first region of interest and the second region of interest based on either design information that defines the positions of a plurality of inspection objects or a captured image of the inspection object region.
Thereby, the inspection apparatus 20 can set a plurality of attention areas in the inspection target area.
<項目11>
 項目1から10のいずれか1項に記載の検査装置20は、カメラ21を備える。
 これにより、検査装置20は、カメラ21を制御して、検査対象領域の撮像画像を撮像できる。
<Item 11>
The inspection device 20 according to any one of items 1 to 10 includes a camera 21 .
Thereby, the inspection apparatus 20 can control the camera 21 to capture a captured image of the inspection target area.
<項目12>
 装置(例えば検査装置20)は、次の画像処理方法を実施する。
 装置は、第1検査対象物(例えば部品2)と第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物を含む検査対象領域において、第1検査対象物の検査を行うための第1注目領域と、第2検査対象物の検査を行うための第2注目領域とを設定する。
 装置は、検査対象領域を撮像して検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に検査対象領域を撮像させる。
 装置は、撮像画像から、第1注目領域5に対応する第1画像領域と、第2注目領域5に対応する第2画像領域とを抽出する。
 装置は、複数の検査対象物の異常を検知するための学習モデル110と、第1画像領域とに基づいて第1検査対象物を検査する第1検査を実行する。
 装置は、学習モデルと、第2画像領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 装置は、第1検査の結果と、前記第2検査の結果とを出力する。
 これにより、装置は、撮像画像から抽出した第1画像領域を用いて、第1検査対象物を検査して第1検査の結果を出力し、撮像画像から抽出した第2画像領域を用いて、第2検査対象物を検査した第2検査の結果を出力する。よって、装置は、撮像画像からそのまま第1検査対象物及び第2検査対象物を検査する場合と比較して、プロセッサ22の処理負荷及びメモリ(例えばRAM23)の使用量を軽減できる。
<Item 12>
The device (for example, inspection device 20) implements the following image processing method.
In an inspection target region containing a plurality of inspection objects including a first inspection object (for example, the component 2) and a second inspection object different from the first inspection object, the apparatus sets a first region of interest for inspecting the first inspection object and a second region of interest for inspecting the second inspection object.
The apparatus causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the workpiece image 4) to capture the inspection target area.
The device extracts a first image region corresponding to the first region of interest 5 and a second image region corresponding to the second region of interest 5 from the captured image.
The apparatus performs a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
The apparatus executes a second inspection that inspects the second inspection object based on the learning model and the second image region.
The apparatus outputs the result of the first inspection and the result of the second inspection.
As a result, the apparatus inspects the first inspection object using the first image region extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image region extracted from the captured image and outputs the result of the second inspection. The apparatus can therefore reduce the processing load on the processor 22 and the usage of the memory (for example, the RAM 24) compared with inspecting the first inspection object and the second inspection object directly from the captured image.
<項目13>
 画像処理プログラムは、次のことをプロセッサ22に実行させる。
 画像処理プログラムは、第1検査対象物(例えば部品2)と第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物を含む検査対象領域において、第1検査対象物の検査を行うための第1注目領域と、第2検査対象物の検査を行うための第2注目領域とを設定する。
 画像処理プログラムは、検査対象領域を撮像して前記検査対象領域の撮像画像(例えばワーク画像4)を出力するカメラ21に検査対象領域を撮像させる。
 画像処理プログラムは、撮像画像から、第1注目領域に対応する第1画像領域と、第2注目領域に対応する第2画像領域とを抽出する。
 画像処理プログラムは、複数の検査対象物の異常を検知するための学習モデル110と、第1画像領域とに基づいて第1検査対象物を検査する第1検査を実行する。
 画像処理プログラムは、学習モデルと、第2画像領域とに基づいて第2検査対象物を検査する第2検査を実行する。
 画像処理プログラムは、第1検査の結果と、第2検査の結果とを出力する。
 これにより、画像処理プログラムは、撮像画像から抽出した第1画像領域を用いて、第1検査対象物を検査して第1検査の結果を出力し、撮像画像から抽出した第2画像領域を用いて、第2検査対象物を検査した第2検査の結果を出力する。よって、画像処理プログラムは、撮像画像からそのまま第1検査対象物及び第2検査対象物を検査する場合と比較して、プロセッサ22の処理負荷及びメモリ(例えばRAM23)の使用量を軽減できる。
<Item 13>
The image processing program causes the processor 22 to do the following.
In an inspection target region containing a plurality of inspection objects including a first inspection object (for example, the component 2) and a second inspection object different from the first inspection object, the image processing program sets a first region of interest for inspecting the first inspection object and a second region of interest for inspecting the second inspection object.
The image processing program causes the camera 21 that captures an image of the inspection target area and outputs a captured image of the inspection target area (for example, the work image 4) to capture the inspection target area.
The image processing program extracts a first image area corresponding to the first attention area and a second image area corresponding to the second attention area from the captured image.
The image processing program executes a first inspection for inspecting a first inspection object based on a learning model 110 for detecting anomalies in a plurality of inspection objects and a first image region.
The image processing program performs a second inspection for inspecting a second inspection object based on the learning model and the second image region.
The image processing program outputs the result of the first inspection and the result of the second inspection.
Thereby, the image processing program inspects the first inspection object using the first image region extracted from the captured image and outputs the result of the first inspection, and inspects the second inspection object using the second image region extracted from the captured image and outputs the result of the second inspection. Therefore, the image processing program can reduce the processing load on the processor 22 and the usage of the memory (for example, the RAM 24) compared with inspecting the first inspection object and the second inspection object directly from the captured image.
 以上、説明した本発明の実施形態は、当業者であれば、本発明が上述した実施形態に限定されるものではないこと、上述した実施形態は例示にすぎないということ、及び、本発明の目的から逸脱しない範囲において様々な変形が可能であるということは理解される。 Those skilled in the art will understand that the present invention is not limited to the embodiments described above, that the embodiments described above are merely examples, and that various modifications are possible without departing from the purpose of the present invention.
 例えば、本明細書にて開示される処理に含まれるステップは、必ずしもシーケンス図及び/又はフローチャートに記載された順序に従って時系列に実行せずともよい。例えば、処理に含まれるステップは、シーケンス図及び/又はフローチャートに記載した順序と異なる順序で実行すること、及び/又は、並列的に実行することができる。また、処理に含まれるステップの一部を削除すること、及び/又は、さらなるステップを処理に追加することも可能である。 For example, the steps included in the processing disclosed in this specification do not necessarily have to be executed chronologically in the order described in the sequence diagrams and/or flowcharts. For example, the steps included in the processing may be executed in an order different from that described in the sequence diagrams and/or flowcharts, and/or may be executed in parallel. It is also possible to omit some of the steps included in the processing and/or to add further steps to the processing.
 また、本明細書において説明した検査装置20の構成要素(例えば、撮像制御部101、最適条件決定部102、検査実行部103、及び/又は、注目領域抽出部111)を備える装置又はそのモジュール(例えば、撮像制御モジュール、最適条件決定モジュール、検査実行モジュール、及び/又は、注目領域抽出モジュール)が提供されてもよい。さらに、上述した構成要素の処理をプロセッサ、及び/又は、プロセッサに相当する処理要素
(例えば、ASIC:Application Specific Integrated Circuit,FPGA:Field Programmable Gate Array,CPLD:Complex Programmable Logic Device等)に実行させるためのプログラムが提供されてもよい。また、当該プログラムを記録した記録媒体(コンピュータに読み取り可能な非一時的記録媒体)が提供されてもよい。このような装置、モジュール、方法、プログラム及び記録媒体も本発明に含まれる。
In addition, an apparatus including the components of the inspection apparatus 20 described in this specification (for example, the imaging control unit 101, the optimum condition determination unit 102, the inspection execution unit 103, and/or the region-of-interest extraction unit 111), or modules thereof (for example, an imaging control module, an optimum condition determination module, an inspection execution module, and/or a region-of-interest extraction module), may be provided. Furthermore, a program for causing a processor and/or a processing element equivalent to a processor (for example, an ASIC: Application Specific Integrated Circuit, an FPGA: Field Programmable Gate Array, or a CPLD: Complex Programmable Logic Device) to execute the processing of the above-described components may be provided. A recording medium on which the program is recorded (a computer-readable non-transitory recording medium) may also be provided. Such apparatuses, modules, methods, programs, and recording media are also included in the present invention.
 なお、本出願は、2022年2月25日出願の日本特許出願(特願2022-028557)に基づくものであり、その内容は本出願の中に参照として援用される。 This application is based on a Japanese patent application (Japanese Patent Application No. 2022-028557) filed on February 25, 2022, the content of which is incorporated herein by reference.
 本開示の技術は、基板に各部品が正常に装着されているか否かを検査する装置等に有用である。 The technology of the present disclosure is useful for devices and the like that inspect whether or not each component is normally mounted on a board.
1 基板
2、2a、2b 部品
3 ワーク
4 ワーク画像
5、5a、5b 注目領域
10 検査システム
11 通信ネットワーク
12、13、14、15、16、17 ケーブル
19 検出センサ
20 検査装置
21 カメラ
22 プロセッサ
23 ROM
24 RAM
25 ストレージ
26 通信I/F
27 入出力I/F
30 照明装置
31 LED光源
32 入出力I/F
33 調光制御回路
40 管理装置
41 プロセッサ
42 ROM
43 RAM
44 ストレージ
45 通信I/F
46 入出力I/F
50 入力装置
52 スピーカ
60 表示装置
62 パトライト
101 撮像制御部
102 最適条件決定部
103 検査実行部
104 注目領域格納部
105 撮像条件格納部
106 最適撮像条件格納部1
107 学習モデル格納部
110 学習モデル
111 注目領域抽出部
112 合成画像格納部
201 注目領域設定部
202 撮像パターン生成部
203 照明パターン生成部
204 撮像条件生成部
205 学習モデル生成部
206 UI制御部
207 注目領域格納部
208 撮像パターン格納部
209 照明パターン格納部
210 撮像条件格納部
211 学習モデル格納部
212 検査結果格納部
300 パターン用UI画面
301 撮像パターンリスト領域
302 照明パターンリスト領域
303 撮像条件別領域
304 ワーク画像確認領域
320 注目領域用UI画面
321 注目領域リスト領域
322 注目領域確認領域
323 評価値確認領域
324 設定ボタン
340 検査結果一覧用UI画面
341 ワーク領域
342 検査結果
343 詳細ボタン
360 検査結果詳細用UI画面
361 説明領域
362 ワーク領域
363 部品リスト領域
391 縮小ワーク画像
392 分割ワーク画像
400 合成画像
 
1 board 2, 2a, 2b part 3 work 4 work image 5, 5a, 5b attention area 10 inspection system 11 communication network 12, 13, 14, 15, 16, 17 cable 19 detection sensor 20 inspection device 21 camera 22 processor 23 ROM
24 RAMs
25 Storage 26 Communication I/F
27 input/output I/F
30 lighting device 31 LED light source 32 input/output I/F
33 dimming control circuit 40 management device 41 processor 42 ROM
43 RAM
44 Storage 45 Communication I/F
46 input/output I/F
50 input device 52 speaker 60 display device 62 patrol light 101 imaging control unit 102 optimum condition determination unit 103 inspection execution unit 104 attention area storage unit 105 imaging condition storage unit 106 optimum imaging condition storage unit 1
107 learning model storage unit 110 learning model 111 attention area extraction unit 112 synthetic image storage unit 201 attention area setting unit 202 imaging pattern generation unit 203 illumination pattern generation unit 204 imaging condition generation unit 205 learning model generation unit 206 UI control unit 207 attention area Storage unit 208 Imaging pattern storage unit 209 Illumination pattern storage unit 210 Imaging condition storage unit 211 Learning model storage unit 212 Inspection result storage unit 300 Pattern UI screen 301 Imaging pattern list area 302 Illumination pattern list area 303 Imaging condition-specific area 304 Work image Confirmation area 320 Attention area UI screen 321 Attention area list area 322 Attention area confirmation area 323 Evaluation value confirmation area 324 Setting button 340 Inspection result list UI screen 341 Work area 342 Inspection result 343 Details button 360 Inspection result details UI screen 361 Description area 362 Work area 363 Parts list area 391 Reduced work image 392 Divided work image 400 Composite image

Claims (9)

  1.  1つ以上のプロセッサと、
     メモリと、
     前記メモリに保存されているプログラムと、を備え、
     前記プログラムは、
     第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定することと、
     前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させることと、
     前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行することと、
     前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行することと、
     前記第1検査の結果と前記第2検査の結果とを出力することと、
     を、前記1つ以上のプロセッサに実行させる、
     検査装置。
    one or more processors;
    memory;
    a program stored in the memory;
    Said program
    setting, in an inspection target region in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first region of interest for inspecting the first inspection object and a second region of interest for inspecting the second inspection object;
    causing a camera, which images the inspection target region and outputs a captured image of the inspection target region, to capture, as captured images of the inspection target region, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition;
    executing a first inspection that inspects the first inspection object based on a learning model for detecting abnormalities of a plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second region of interest;
    executing a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second region of interest;
    outputting the result of the first inspection and the result of the second inspection;
    causes the one or more processors to execute
    inspection equipment.
  2.  前記プログラムは、
     前記第1注目領域に対応する前記撮像画像の第1領域に基づいて、前記第1撮像条件として少なくとも1つの撮像パラメータを含む第1撮像パターンを定めることと、
     前記第2注目領域に対応する前記撮像画像の第2領域に基づいて、前記第2撮像条件として少なくとも1つの撮像パラメータを含む第2撮像パターンを定めることと、
     を、前記1つ以上のプロセッサに実行させる、
     請求項1に記載の検査装置。
    Said program
    Determining a first imaging pattern including at least one imaging parameter as the first imaging condition based on a first region of the captured image corresponding to the first region of interest;
    Determining a second imaging pattern including at least one imaging parameter as the second imaging condition based on a second region of the captured image corresponding to the second region of interest;
    causes the one or more processors to execute
    The inspection device according to claim 1.
  3.  前記検査対象領域を照射する照明装置を備え、
     前記プログラムは、
     前記照明装置が前記検査対象領域を照射するための少なくとも1つの照射パラメータを含む照射パターンを定めること、
     を、前記1つ以上のプロセッサに実行させる、
     請求項2に記載の検査装置。
    A lighting device that irradiates the inspection target area,
    Said program
    defining an illumination pattern including at least one illumination parameter for the illumination device to illuminate the inspection target area;
    causes the one or more processors to execute
    The inspection device according to claim 2.
  4.  前記プログラムは、
     前記第1撮像画像として、前記少なくとも1つの照射パターンごとに前記第1撮像条件を適用して撮像した前記検査対象領域の撮像画像を前記カメラに出力させることと、
     前記第2撮像画像として、前記少なくとも1つの照射パターンごとに前記第2撮像条件を適用して撮像した前記検査対象領域の撮像画像を前記カメラに出力させることと、
     を前記1つ以上のプロセッサに実行させる、
     請求項3に記載の検査装置。
    Said program
    causing the camera to output, as the first captured image, a captured image of the inspection target region captured by applying the first imaging condition for each of the at least one irradiation pattern;
    causing the camera to output, as the second captured image, a captured image of the inspection target region captured by applying the second imaging condition for each of the at least one irradiation pattern;
    causes the one or more processors to execute
    The inspection device according to claim 3.
  5.  前記プログラムは、
     前記カメラが出力した前記第1撮像画像と前記第2撮像画像とに基づいた学習により、
    前記学習モデルを生成することと、
     を、前記1つ以上のプロセッサに実行させる、
     請求項1から4のいずれか1項に記載の検査装置。
    Said program
    By learning based on the first captured image and the second captured image output by the camera,
    generating the learning model;
    causes the one or more processors to execute
    The inspection device according to any one of claims 1 to 4.
  6.  前記プログラムは、
     前記複数の検査対象物の位置を定めた設計情報、又は前記検査対象領域の撮像画像のいずれかに基づいて前記第1注目領域と前記第2注目領域とを設定すること、
     を前記1つ以上のプロセッサに実行させる、
     請求項1から5のいずれか1項に記載の検査装置。
    Said program
    setting the first region of interest and the second region of interest based on either design information defining the positions of the plurality of inspection objects or a captured image of the inspection target region;
    causes the one or more processors to execute
    The inspection device according to any one of claims 1 to 5.
  7.  前記カメラを備える、
     請求項1から6のいずれか1項に記載の検査装置。
    comprising the camera;
    The inspection device according to any one of claims 1 to 6.
  8.  検査装置において、
     第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定し、
     前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させ、
     前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行し、
     前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行し、
     前記第1検査の結果と前記第2検査の結果とを出力する、
     検査方法。
    In the inspection device,
    setting, in an inspection target region in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first region of interest for inspecting the first inspection object and a second region of interest for inspecting the second inspection object,
    causing a camera, which images the inspection target region and outputs a captured image of the inspection target region, to capture, as captured images of the inspection target region, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition,
    executing a first inspection that inspects the first inspection object based on a learning model for detecting abnormalities of a plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second region of interest,
    executing a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second region of interest, and
    outputting the results of the first inspection and the results of the second inspection;
    Inspection method.
  9.  第1検査対象物と前記第1検査対象物とは異なる第2検査対象物とを含む複数の検査対象物が存在する検査対象領域において、前記第1検査対象物の検査を行うための第1注目領域と、前記第2検査対象物の検査を行うための第2注目領域とを設定することと、
     前記検査対象領域を撮像して前記検査対象領域の撮像画像を出力するカメラに、前記検査対象領域の撮像画像として、第1撮像条件により撮像した第1撮像画像と、前記第1撮像条件とは異なる第2撮像条件により撮像した第2撮像画像とを撮像させることと、
     前記第1検査対象物と前記第2検査対象物とを含む複数の検査対象物の異常を検知するための学習モデルと、前記第1注目領域に対応する前記第1撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第1撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第1検査を実行することと、
     前記学習モデルと前記第1注目領域に対応する前記第2撮像画像の第1領域とに基づいて前記第1検査対象物を検査し、かつ前記学習モデルと前記第2注目領域に対応する前記第2撮像画像の第2領域とに基づいて前記第2検査対象物を検査する第2検査を実行することと、
     前記第1検査の結果と前記第2検査の結果とを出力することと、
     を、プロセッサに実行させる、
     検査プログラム。
     
    setting, in an inspection target region in which a plurality of inspection objects including a first inspection object and a second inspection object different from the first inspection object exist, a first region of interest for inspecting the first inspection object and a second region of interest for inspecting the second inspection object;
    causing a camera, which images the inspection target region and outputs a captured image of the inspection target region, to capture, as captured images of the inspection target region, a first captured image captured under a first imaging condition and a second captured image captured under a second imaging condition different from the first imaging condition;
    executing a first inspection that inspects the first inspection object based on a learning model for detecting abnormalities of a plurality of inspection objects including the first inspection object and the second inspection object and on a first region of the first captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the first captured image corresponding to the second region of interest;
    executing a second inspection that inspects the first inspection object based on the learning model and a first region of the second captured image corresponding to the first region of interest, and that inspects the second inspection object based on the learning model and a second region of the second captured image corresponding to the second region of interest;
    outputting the result of the first inspection and the result of the second inspection;
    causes the processor to execute
    inspection program.
PCT/JP2023/006090 2022-02-25 2023-02-20 Inspection device, inspection method, and inspection program WO2023162940A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022028557 2022-02-25
JP2022-028557 2022-02-25

Publications (1)

Publication Number Publication Date
WO2023162940A1 true WO2023162940A1 (en) 2023-08-31

Family

ID=87765849

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/006090 WO2023162940A1 (en) 2022-02-25 2023-02-20 Inspection device, inspection method, and inspection program

Country Status (1)

Country Link
WO (1) WO2023162940A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016045019A (en) * 2014-08-20 2016-04-04 オムロン株式会社 Teaching device for substrate inspection device, and teaching method
JP2019100917A (en) * 2017-12-05 2019-06-24 パナソニックIpマネジメント株式会社 Inspection program generation system, generation method of inspection program and generation program of inspection program
JP2021120631A (en) * 2020-01-30 2021-08-19 株式会社デンソーテン Image generator and image generation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016045019A (en) * 2014-08-20 2016-04-04 オムロン株式会社 Teaching device for substrate inspection device, and teaching method
JP2019100917A (en) * 2017-12-05 2019-06-24 パナソニックIpマネジメント株式会社 Inspection program generation system, generation method of inspection program and generation program of inspection program
JP2021120631A (en) * 2020-01-30 2021-08-19 株式会社デンソーテン Image generator and image generation method

Similar Documents

Publication Publication Date Title
US10489900B2 (en) Inspection apparatus, inspection method, and program
US20050207655A1 (en) Inspection system and method for providing feedback
JP2019100917A (en) Inspection program generation system, generation method of inspection program and generation program of inspection program
TWI484164B (en) Optical re - inspection system and its detection method
JP2015143656A (en) Inspection apparatus and inspection method
JP5239561B2 (en) Substrate appearance inspection method and substrate appearance inspection apparatus
KR20100124653A (en) Apparatus and method for visual inspection
KR20140091916A (en) Inspection Method For Display Panel
US11682113B2 (en) Multi-camera visual inspection appliance and method of use
JP2013079880A (en) Image quality inspection method, image quality inspection device, and image quality inspection program
KR20120048748A (en) Pattern image transmitter for inspecting display and method of inspecting display using the same
JP2014055915A (en) Appearance inspection device, appearance inspection method, and program
WO2023162940A1 (en) Inspection device, inspection method, and inspection program
WO2023162941A1 (en) Inspection device, image processing method, and image processing program
CN106845466A (en) A kind of product identification method and its system based on image
JP2007033126A (en) Substrate inspection device, parameter adjusting method thereof and parameter adjusting device
JP2009080004A (en) Inspection device
KR100675766B1 (en) Image sensor inspection system
CN112213081A (en) Screen body detection equipment
JP2003216930A (en) Method and apparatus for inspecting discoloration
KR200386330Y1 (en) System for testing a electronic scales using vision system
JP2002031605A (en) Defect-confirming apparatus and automatic visual inspection apparatus
JP2005003474A (en) Method, apparatus and program for evaluating color display panel
JP5947168B2 (en) Appearance inspection apparatus, control method and program for appearance inspection apparatus
WO2020184567A1 (en) Image inspection device and image inspection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23759941

Country of ref document: EP

Kind code of ref document: A1