WO2020158630A1 - Detecting device, learner, computer program, detecting method, and method for generating learner - Google Patents


Info

Publication number
WO2020158630A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
detection
learning device
type
detection target
Prior art date
Application number
PCT/JP2020/002630
Other languages
French (fr)
Japanese (ja)
Inventor
中村 聡
Original Assignee
株式会社カネカ (Kaneka Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社カネカ (Kaneka Corporation)
Priority to JP2020569594A, published as JPWO2020158630A1
Publication of WO2020158630A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • The present invention relates to a detection device, a learning device, a computer program, a detection method, and a learning device generation method.
  • EL inspection, which uses electroluminescence (EL), is one of the methods for inspecting solar cell modules for defects.
  • In EL inspection, an electric current is supplied to each cell constituting the solar cell module to make the cell itself emit light, and cracks, disconnections, and the like are confirmed from an image of the cell surface captured by a camera (see Patent Document 1).
  • In a cell image, the brightness distribution and distortion differ from cell to cell, and there are minute detection targets (for example, defects) that are difficult to confirm visually. Further, minute detection targets come in various shapes, and the quality determination of the cell also differs depending on the type of detection target. It is therefore desirable to detect minute detection targets accurately.
  • The present invention has been made in view of such circumstances, and aims to provide a detection device, a learning device, a computer program, a detection method, and a learning device generation method capable of accurately detecting a minute detection target.
  • A detection device includes: a candidate area extraction unit that performs image processing on a first image to extract one or more candidate areas each including a detection target candidate; an image generation unit that generates one or more second images each including at least one of the candidate areas extracted by the candidate area extraction unit; a first learning device; and a detection unit that inputs the second images generated by the image generation unit to the first learning device and detects the presence or absence of a detection target.
  • A learning device includes a first learning device generated using, as learning data, a first label image including a detection target and a second label image including no detection target, and a second learning device generated using, as learning data, a plurality of types of label images obtained by dividing the detection targets included in the first label image by type.
  • A computer program causes a computer to execute: a process of acquiring a first image; a process of performing image processing on the acquired first image to extract one or more candidate regions each including a detection target candidate; a process of generating one or more second images each including at least one of the extracted candidate regions; and a process of inputting the generated second images to a first learning device to detect the presence or absence of a detection target.
  • A detection method acquires a first image, performs image processing on the acquired first image to extract one or more candidate regions each including a detection target candidate, generates one or more second images each including at least one of the extracted candidate regions, and inputs the generated second images to a first learning device to detect the presence or absence of a detection target.
  • A learning device generation method acquires a first label image that includes a detection target and a second label image that does not include a detection target, generates a first learning device using the first label image and the second label image as learning data, and generates a second learning device using, as learning data, a plurality of types of label images obtained by dividing the detection targets included in the first label image by type.
  • According to the present invention, a minute detection target can be detected accurately.
  • FIG. 1 is a block diagram showing an example of the configuration of the detection device 50 according to the present embodiment.
  • the detection device 50 includes a control unit 51 that controls the entire device, an input unit 52, a defect determination unit 53, a candidate area extraction unit 54, a storage unit 55, an image generation unit 56, a learning device 57, and a detection unit 58.
  • the learning device 57 includes a first learning device 571 and a second learning device 572.
  • the storage unit 55 can store required data such as an input image described later and a result of detection processing.
  • the input unit 52 has a function as an acquisition unit, and acquires the input image in which the imaging target is captured.
  • The imaging target is an inspection target; for example, in the case of a solar cell module, each cell constituting the module can be used, but the imaging target is not limited to this.
  • the defect judgment unit 53 has a function as a judgment unit and judges whether or not a predetermined defect target is included in the input image.
  • the defect determination unit 53 has an image processing function and can determine the presence or absence of a predetermined defect target based on the input image.
  • The predetermined defect target includes, but is not limited to, a single-cell abnormality, a disconnection (also referred to as a finger disconnection), a cleaning mark (also referred to as a water mark), and a crack accompanied by a disconnection.
  • FIG. 2 is a schematic diagram showing an example of a defect target.
  • FIG. 2 shows an image (also referred to as a cell image) obtained by picking up an image of a cell constituting a solar cell module. Horizontal black lines in the cell image indicate metal lines in the cell.
  • FIG. 2A schematically shows a case where a finger disconnection exists in the cell. A finger disconnection is a disconnection with a relatively large width; if even one exists in the cell, the cell is determined to be defective.
  • FIG. 2B schematically shows a case where a water mark exists in the cell. A water mark is a cleaning trace remaining after the cleaning step; if even one exists in the cell, the cell is determined to be defective.
  • FIG. 2C schematically shows a case where a crack accompanied by a disconnection exists in the cell. If even one exists in the cell, the cell is determined to be defective.
  • FIG. 2D schematically shows a cell abnormality. A cell abnormality is a state in which the cell does not emit light or emits extremely little light, and such a cell is determined to be defective.
  • the defect determination unit 53 outputs the input image determined to have no predetermined defect target to the candidate region extraction unit 54 as the first image. As a result, it is possible to exclude in advance defect targets that are relatively easy to detect, narrow down only the images that may include minute detection targets that are difficult to detect, and perform processing after the candidate region extraction unit 54.
  • When the defect determination unit 53 determines that a predetermined defect target is present, the detection unit 58 detects the cell as a defective product.
  • FIG. 3 is a schematic diagram showing an example of a detection target detected by the detection device 50 of the present embodiment.
  • the detection target is a minute defect target that is difficult to determine by the defect determination unit 53, and various types such as shape and size may exist.
  • the detection target is divided into four types A, B, C, and D for convenience according to the shape and the type, for example.
  • the types are not limited to four types.
  • The detection target can be cracks of various shapes and sizes. Although the detection target is described as "a target called a crack" in this specification, the detection target is not limited to cracks.
  • the shape and size of the crack shown in FIG. 3 are examples, and the invention is not limited to the example of FIG.
  • the candidate area extraction unit 54 performs image processing on the first image output by the defect determination unit 53 to extract one or more candidate areas including the detection target candidate.
  • the image processing includes normal image processing such as density correction, noise removal, edge detection, and feature amount extraction.
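  • As a rough illustration of this candidate extraction, the following is a minimal NumPy-only sketch that thresholds dark pixels and collects connected components into bounding boxes. The threshold value, 4-connectivity, and minimum component size are illustrative assumptions, not the patent's actual image processing.

```python
import numpy as np

def extract_candidate_regions(image, dark_thresh=80, min_pixels=2):
    """Extract bounding boxes of dark blobs as detection-target candidates.

    image: 2-D uint8 array (grayscale cell image). Returns a list of
    (row0, col0, row1, col1) boxes, end-exclusive.
    """
    mask = image < dark_thresh          # candidate pixels are darker than background
    visited = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not visited[r, c]:
                # flood-fill one 4-connected component
                stack, pixels = [(r, c)], []
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_pixels:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return boxes
```

In practice a library routine such as OpenCV's connected-component labeling would replace the hand-written flood fill; the point here is only the output shape: one bounding box per candidate region.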
  • the detection target candidate is, for example, a region including a minute defect that is difficult to detect by visual inspection or the determination by the defect determination unit 53, and is a region including a minute detection target candidate.
  • the detection unit 58 can determine that the cell corresponding to the first image from which the detection target candidate cannot be extracted by the candidate area extraction unit 54 is a good product.
  • The image generation unit 56 generates one or more second images each including at least one of the candidate regions when the candidate region extraction unit 54 extracts a plurality of candidate regions in one first image. Here, "at least one" means a small number of candidate regions rather than many, for example one, two, or three. In this specification, the case where the second image includes one candidate region is described, but the number of candidate regions included in one second image is not limited to one.
  • FIG. 4 is a schematic diagram showing an example of generation of the second image.
  • the candidate area extraction unit 54 extracts three candidate areas of types A, C, and D as detection targets in one first image.
  • In this case, the image generation unit 56 generates three second images: one including the type A candidate area, one including the type C candidate area, and one including the type D candidate area.
  • The image generation unit 56 can arrange at least one of the candidate regions extracted by the candidate region extraction unit 54 at a predetermined position of a predetermined image to generate the second image.
  • the predetermined image can be, for example, an image having a constant brightness value (an image having a high brightness value, a bright image).
  • the candidate area can be a rectangular area surrounding the detection target candidate according to the shape and size of the detection target candidate.
  • the predetermined position can be, for example, the center (midpoint) of the predetermined image.
  • the candidate areas can be arranged so that the center of the candidate area matches the center of the predetermined image.
  • Since the position of the candidate area on the second image is the same for every second image, a minute detection target can be detected with higher accuracy than when the position of the candidate area varies from image to image.
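  • The placement of a candidate region at a fixed position on a constant-brightness canvas can be sketched as follows; the canvas size of 64 pixels and the brightness value 255 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def make_second_image(first_image, box, canvas_size=64, bright=255):
    """Place one candidate region at the centre of a constant-brightness canvas.

    box is (row0, col0, row1, col1), end-exclusive. Centring every candidate
    at the same position keeps the learner's input layout uniform.
    """
    r0, c0, r1, c1 = box
    crop = first_image[r0:r1, c0:c1]
    canvas = np.full((canvas_size, canvas_size), bright, dtype=first_image.dtype)
    top = (canvas_size - crop.shape[0]) // 2
    left = (canvas_size - crop.shape[1]) // 2
    canvas[top:top + crop.shape[0], left:left + crop.shape[1]] = crop
    return canvas
```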
  • The first learning device 571 and the second learning device 572 can be configured by, for example, a multilayer neural network (deep learning); for example, a convolutional neural network (CNN) can be used, but other machine learning methods may also be used.
  • the first learning device 571 and the second learning device 572 are described as being configured by a convolutional neural network, but the present invention is not limited to this.
  • FIG. 5 is a schematic diagram showing an example of the configuration of the first learning device 571 and the second learning device 572.
  • the convolutional neural network comprises a hidden layer between the input layer and the output layer.
  • The hidden layer includes a plurality of stages of convolutional layers and pooling layers, and a fully-connected layer at the final stage. The numbers of convolutional layers, pooling layers, and fully-connected layers can be determined as appropriate.
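  • For illustration only, the two building blocks of such hidden layers can be written out in plain NumPy: a single "valid" 2-D convolution (cross-correlation) and a non-overlapping 2x2 max pool. This is not the patent's network, just the operations its layers stack.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation of one feature map with one kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; for size=2 it halves each spatial dimension."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

A real implementation would use a framework's vectorized layers; the loops above only make the sliding-window arithmetic explicit.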
  • the control unit 51 can perform the learning process of the first learning device 571 and the second learning device 572.
  • The control unit 51 and the learning device 57 can be configured by combining hardware such as a CPU (for example, a multi-processor with a plurality of processor cores), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), and an FPGA (Field-Programmable Gate Array).
  • The first learning device 571 can be generated using, as learning data, the first label images including a detection target and the second label images including no detection target. First, the case of using the first label images will be described.
  • FIG. 6 is a schematic diagram showing an example of a generation method of the first learning device 571 using the first label image.
  • The first label images can be a set of images containing only type A, a set of images containing only type B, a set of images containing only type C, and a set of images containing only type D.
  • The first learning device 571 can be generated by giving the set of first label images to the input layer and a label indicating that a detection target is present (for example, label 11) to the output layer.
  • FIG. 7 is a schematic diagram showing an example of a method of generating the first learning device 571 using the second label image.
  • the second label image can be a set of images that do not include any of types A, B, C, and D.
  • The first learning device 571 can be generated by giving the set of second label images to the input layer and a label indicating that no detection target is present (for example, label 12) to the output layer.
  • When the second image includes any of types A, B, C, and D as a detection target, the first learning device 571 can detect that a detection target is present. If the second image includes none of types A, B, C, and D, the first learning device 571 can detect that there is no detection target.
  • The second learning device 572 can be generated using, as learning data, a plurality of types of label images obtained by dividing the detection targets included in the first label images by type.
  • FIG. 8 is a schematic diagram showing an example of a method of generating the second learning device 572 using the type label image.
  • The type label images are divided into four sets: type A label images including type A, type B label images including type B, type C label images including type C, and type D label images including type D.
  • The second learning device 572 can be generated by giving a set of type A label images to the input layer and a label indicating type A (for example, label 21) to the output layer, giving a set of type B label images to the input layer and a label indicating type B (for example, label 23) to the output layer, giving a set of type C label images to the input layer and a label indicating type C (for example, label 25) to the output layer, and giving a set of type D label images to the input layer and a label indicating type D (for example, label 27) to the output layer.
  • When label 21 is given to the output layer, labels 24, 26, and 28 may be given together; the same applies when labels 23, 25, and 27 are given to the output layer.
  • the second learning device 572 can detect the type of the detection target.
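  • The correspondence between crack types and output-layer labels described above might be organized as a simple lookup table. The helper below is hypothetical; only the label numbers 21, 23, 25, and 27 are taken from the description, and the function merely assembles (image, label) training pairs.

```python
# Hypothetical mapping between crack types and the output-layer labels
# mentioned in the description (labels 21, 23, 25, 27). The inverse map
# turns a predicted label back into a type name.
TYPE_TO_LABEL = {"A": 21, "B": 23, "C": 25, "D": 27}
LABEL_TO_TYPE = {v: k for k, v in TYPE_TO_LABEL.items()}

def training_pairs(images_by_type):
    """Flatten {type: [image, ...]} into (image, label) training pairs."""
    return [(img, TYPE_TO_LABEL[t])
            for t, imgs in images_by_type.items()
            for img in imgs]
```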
  • In some cases, the quality of the cell is determined according to the number of detection targets existing in the cell.
  • the second learning device 572 can also perform learning so that the number of predetermined detection targets (for example, type D) can be detected.
  • In this case, type label images each including one or more type D detection targets are given to the input layer of the second learning device 572, and labels indicating the number of type D detection targets (for example, labels 0 to N) are given to the output layer, whereby the second learning device 572 can be generated.
  • FIG. 9 is a schematic diagram showing an example of a method of detecting a detection target by the detection device 50 of the present embodiment.
  • the defect determination unit 53 determines whether or not the input image includes a predetermined defect target. When there is a defect target, the detection unit 58 detects that there is a defect, that is, a defective product. The defect determination unit 53 outputs the input image having no defect target to the candidate region extraction unit 54 as the first image.
  • the candidate area extraction unit 54 performs image processing on the first image and extracts one or more candidate areas including the detection target candidate. When there is no candidate area, the detection unit 58 detects that it is a good product. When there is a candidate area, the candidate area extraction unit 54 outputs the first image including the candidate area to the image generation unit 56.
  • The image generation unit 56 generates as many second images as there are extracted candidate regions, each with a candidate region arranged at the predetermined position, and outputs the generated second images to the first learning device 571 in order.
  • the first learning device 571 is generated so as to be able to detect the presence or absence of the detection target by using the teacher data.
  • When the first learning device 571 detects no detection target, the detection unit 58 detects that the cell is a good product.
  • When the first learning device 571 detects a detection target, the detection unit 58 detects that the cell is defective. In this way, the detection unit 58 can input the second image to the first learning device 571 and detect the presence or absence of the detection target. Since each second image input to the first learning device 571 includes only a small number of candidate regions, the detection accuracy of the first learning device 571 can be increased.
  • The detection unit 58 can sequentially input each of the second images to the first learning device 571 to detect the presence or absence of a detection target. For example, when five candidate regions including detection candidate targets are extracted, the five second images are sequentially input to the first learning device 571, and the first learning device 571 detects the presence or absence of a detection target for each second image. As a result, it is possible to determine without omission whether or not each detection candidate target is a detection target.
  • the second learning device 572 is generated so as to be able to detect the type of the detection target using the teacher data.
  • The detection unit 58 has a function as a type detection unit, and inputs, to the second learning device 572 as a third image, each second image in which a detection target was detected among the second images input to the first learning device 571.
  • the second learning device 572 detects the type of the detection target.
  • The third images are the second images excluding those determined by the first learning device 571 to have no detection target; that is, they include only the second images determined by the first learning device 571 to have a detection target.
  • the detection unit 58 can output the detection result for each type of types A to C. For example, when any one of types A to C is included, the detection unit 58 can detect the cell as a defective product.
  • The detection unit 58 can input the third images to the second learning device 572 and detect detection targets of different types. For example, when four third images including types A, B, C, and D are input to the second learning device 572, the detection unit 58 can detect the four different types A, B, C, and D. Thus, even when there are detection targets of various different types, the type of each detection target can be detected.
  • the detection unit 58 can output the determination result of non-defective product/defective product according to the number of type D. For example, when four or more type D cracks are present in one cell, the cell can be determined as a defective product.
  • The detection unit 58 can sequentially input each of the third images to the second learning device 572 and detect the number of detection targets of the same type. For example, when three third images including type D are sequentially input to the second learning device 572, the second learning device 572 detects type D three times, so the number of type D detection targets can be detected as three. With this, even when the quality of the imaging target is determined according to the number of minute detection targets existing in it, the number of detection targets can be detected, and thus the quality of the imaging target can be determined.
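  • The quality decision based on detected types and counts could be condensed as follows. The rule that any of types A to C makes the cell defective, and that four or more type D cracks make it defective, follows the examples in this description, but the thresholds and type names are illustrative.

```python
def judge_cell(detected_types, fatal_types=("A", "B", "C"), count_type="D", max_count=3):
    """Decide good/defective from the per-third-image type detections.

    Any fatal type makes the cell defective; otherwise the cell is defective
    only when count_type occurs more than max_count times (i.e. "four or
    more type D cracks" with the defaults).
    """
    if any(t in fatal_types for t in detected_types):
        return "defective"
    if detected_types.count(count_type) > max_count:
        return "defective"
    return "good"
```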
  • FIG. 10 is a schematic diagram showing another example of the configuration of the learning device 57.
  • the learning device 57 includes a plurality of second learning devices 572a, 572b, 572c,..., 572n.
  • The second learning devices 572a, 572b, 572c, ..., 572n are trained so that they can detect detection targets of types A, B, C, ..., N, respectively.
  • The detection unit 58 inputs, as fourth images to another second learning device, the third images excluding those in which one type was detected by the first second learning device; the other second learning device can then detect a type different from the one already detected.
  • For example, when a third image including type A and a third image including type B are sequentially input to one second learning device and that learning device detects type A, the remaining third image including type B (excluding the image in which type A was detected) is input to the next second learning device as a fourth image. The next second learning device can then detect type B.
  • The same applies when there are three or more detection types.
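  • A minimal sketch of this cascade, with each second learning device represented by a stand-in callable: images in which the current learner detects its type are counted and removed, and the remainder is passed on to the next learner as the fourth images. The callables are assumptions for illustration, not the patent's learners.

```python
def cascade_detect(third_images, learners_by_type):
    """Run type-specific learners in sequence over a shrinking image list.

    learners_by_type maps a type name to a callable returning True when its
    type is detected in an image. Returns {type: detection count}.
    """
    counts = {}
    remaining = list(third_images)
    for type_name, learner in learners_by_type.items():
        detected, rest = [], []
        for im in remaining:
            (detected if learner(im) else rest).append(im)
        counts[type_name] = len(detected)
        remaining = rest          # the "fourth images" for the next learner
    return counts
```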
  • FIG. 11 is a schematic diagram showing an example of the configuration of the detection system.
  • the server 100 including the detection device 50 is connected to the communication network 1. Although only one server 100 is shown in FIG. 11, the number of servers 100 is not limited to one.
  • a plurality of client devices 10 are connected to the communication network 1.
  • the server 100 and the client device 10 have a communication function capable of transmitting and receiving required data.
  • the server 100 receives the input image.
  • the server 100 performs each process by the detection device 50 on the received input image.
  • the server 100 transmits the detection result (for example, the quality of the imaging target, the presence or absence of the detection target, the type of the detection target, etc.) to the client device 10. Accordingly, the client device 10 side can obtain the detection result only by transmitting the input image, and the client device 10 side does not need to include the detection device 50.
  • FIG. 12 is a flowchart showing an example of a processing procedure by the detection device 50.
  • the control unit 51 acquires the input image (S11) and determines the presence/absence of a defect (S12).
  • the defect is, for example, the one illustrated in FIG.
  • the control unit 51 inputs the first image into the candidate area extraction unit 54 (S13).
  • the first image is an input image determined to have no defect target.
  • The control unit 51 extracts candidate areas (S14) and determines the presence or absence of a candidate area (S15). If there is no candidate area (NO in S15), neither a defect target nor a detection target exists in the input image, so the imaging target (for example, a cell) corresponding to the input image is determined to be a non-defective item (S16), and the process ends.
  • If there is a candidate area (YES in S15), the control unit 51 generates second images (S17). For example, when there are three candidate areas in the first image, three second images are generated, each with one candidate area arranged at the predetermined position.
  • the control unit 51 sequentially inputs the second image to the first learning device 571 (S18) and determines the presence/absence of a crack as a detection target (S19). When there is no crack (NO in S19), the control unit 51 performs the process of step S16.
  • When a crack is present (YES in S19), the control unit 51 inputs the third image into the second learning device 572 (S20).
  • the third image is an image in which a crack is detected by the first learning device 571 among the second images input to the first learning device 571.
  • the control unit 51 detects the type of crack (S21).
  • The control unit 51 determines whether there is another third image (S22), and when there is (YES in S22), repeats the processing from step S20. If there is no other third image (NO in S22), the control unit 51 determines whether there is a crack of types A to C (S23); if there is none (NO in S23), it determines whether the number of type D cracks is equal to or greater than a predetermined value (S24).
  • When the number of type D cracks is less than the predetermined value (NO in S24), the control unit 51 performs the process of step S16.
  • When there is a defect (YES in S12), when there is a crack of types A to C (YES in S23), or when the number of type D cracks is equal to or greater than the predetermined value (YES in S24), the control unit 51 determines the imaging target (for example, a cell) corresponding to the input image to be a defective product (S25), and the process ends. When there are a plurality of input images, the process shown in FIG. 12 may be repeated.
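  • The overall flow of FIG. 12 can be condensed into one function. The defect check, candidate extraction, second-image generation, and the two learning devices are passed in as stand-in callables; they are assumptions for illustration, not the patent's implementation.

```python
def inspect_cell(input_image, has_defect, extract_candidates, make_second_image,
                 first_learner, second_learner, type_d_limit=4):
    """Sketch of the FIG. 12 flow; step numbers refer to the flowchart."""
    if has_defect(input_image):                      # S12: predetermined defect
        return "defective"                           # S25
    boxes = extract_candidates(input_image)          # S14
    if not boxes:                                    # S15: no candidate area
        return "good"                                # S16
    second_images = [make_second_image(input_image, b) for b in boxes]   # S17
    third_images = [im for im in second_images if first_learner(im)]     # S18-S19
    if not third_images:
        return "good"
    types = [second_learner(im) for im in third_images]                  # S20-S21
    if any(t in ("A", "B", "C") for t in types):     # S23: fatal crack types
        return "defective"
    if types.count("D") >= type_d_limit:             # S24: too many type D cracks
        return "defective"
    return "good"
```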
  • The detection device 50 can also be realized using a computer including a CPU (processor), a GPU, a RAM (memory), and the like. That is, by loading a computer program that defines the procedure of each process shown in FIG. 12 into the RAM (memory) of the computer and executing it on the CPU (processor), the detection device 50 can be realized on the computer.
  • the computer program may be recorded in a recording medium and distributed.
  • According to the present embodiment, a detection target can be detected with high accuracy even if it is a minute detection target that is difficult to confirm visually, or a minute detection target of various shapes and sizes. Various types of detection targets can also be detected.
  • In the above embodiment, the cells constituting the solar cell module are used as the imaging target and the "target called a crack" is used as the detection target, but the imaging target and the detection target are not limited to these.
  • The detection device includes: a candidate region extraction unit that performs image processing on the first image to extract one or more candidate regions each including a detection target candidate; an image generation unit that generates one or more second images each including at least one of the candidate regions extracted by the candidate region extraction unit; a first learning device; and a detection unit that inputs the second images generated by the image generation unit to the first learning device and detects the presence or absence of a detection target.
  • The computer program causes a computer to execute a process of acquiring a first image and a process of performing image processing on the acquired first image to extract one or more candidate regions each including a detection target candidate.
  • The detection method acquires a first image, performs image processing on the acquired first image to extract one or more candidate regions each including a detection target candidate, generates one or more second images each including at least one of the extracted candidate regions, and inputs the generated second images to the first learning device to detect the presence or absence of a detection target.
  • the candidate area extraction unit performs image processing on the first image and extracts one or more candidate areas including the detection target candidate.
  • the first image is, for example, an image determined to have no visually observable defect in the input image in which the imaging target is captured.
  • the image processing includes normal image processing such as density correction, noise removal, edge detection, and feature amount extraction.
  • the detection target candidate is, for example, a region including a minute defect that is difficult to detect by visual observation, and is a region including a minute detection target candidate. It should be noted that the first image from which the detection target candidate cannot be extracted can be determined to be a non-defective item.
  • The image generation unit generates one or more second images each including at least one of the extracted candidate areas. Here, "at least one" means a small number of candidate regions rather than many, for example one, two, or three. For example, when three detection target candidates A, B, and C are extracted from one first image, the image generation unit generates three second images: one including detection target candidate A, one including detection target candidate B, and one including detection target candidate C.
  • the first learning device can be configured by, for example, a multi-layer neural network (deep learning), for example, a convolutional neural network (Convolutional Neural Network) can be used, but other machine learning may be used.
  • the first learning device is generated so as to be able to detect the presence/absence of a detection target using the teacher data.
  • The detection unit inputs the second images to the first learning device and detects the presence or absence of the detection target. Since each second image input to the first learning device includes only a small number of candidate regions, the detection accuracy of the first learning device can be improved. In addition, even when a minute defect is difficult to detect by normal image processing because of the influence of the luminance distribution or distortion of the image, the first learning device can be trained until the detection accuracy reaches the required level, so a minute detection target can be detected accurately.
  • the image generation unit arranges at least one of the candidate regions extracted by the candidate region extraction unit at a predetermined position of a predetermined image to generate a second image.
  • the image generation unit arranges at least one of the extracted candidate areas at a predetermined position of the predetermined image to generate the second image.
  • the predetermined image can be, for example, an image having a constant brightness value (an image having a high brightness value, a bright image).
  • the candidate area can be a rectangular area surrounding the detection target candidate according to the shape and size of the detection target candidate.
  • the predetermined position can be, for example, the center (midpoint) of the predetermined image.
  • the candidate areas can be arranged so that the center of the candidate area matches the center of the predetermined image.
	• since the position of the candidate area on the second image can be set to the same position for every second image, a minute detection target can be detected more accurately than when the position of the candidate area varies from image to image.
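As an illustration, the centering step described in the bullets above can be sketched in Python. The function and parameter names (`center_on_background`, `bg_size`, `bg_value`) are illustrative assumptions, not taken from the patent, and images are represented as plain lists of pixel rows:

```python
def center_on_background(crop, bg_size=32, bg_value=255):
    """Place a rectangular candidate-region crop at the center of a
    uniform bright background, producing a fixed-size second image.

    `crop` and the returned image are lists of rows of pixel values.
    """
    h, w = len(crop), len(crop[0])
    assert h <= bg_size and w <= bg_size, "crop must fit the background"
    # Start from a bright, constant-luminance predetermined image.
    image = [[bg_value] * bg_size for _ in range(bg_size)]
    # Offsets that align the crop's center with the image's center.
    top = (bg_size - h) // 2
    left = (bg_size - w) // 2
    for r in range(h):
        for c in range(w):
            image[top + r][left + c] = crop[r][c]
    return image
```

Because every crop is placed at the same position, the learner always sees the candidate in a consistent location, which is the point made in the bullet above.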
  • the detection unit sequentially inputs each of the second images to the first learning device to detect the presence/absence of a detection target.
	• the detection unit sequentially inputs each of the second images to the first learning device and detects the presence or absence of a detection target. For example, when five candidate regions including detection target candidates are extracted, the five second images are input to the first learning device one after another, and the first learning device detects the presence or absence of a detection target in each second image. This makes it possible to determine, for every detection target candidate, whether it is in fact a detection target.
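A minimal sketch of this sequential detection loop, with a stand-in callable in place of the trained first learning device (all names here are illustrative assumptions, not from the patent):

```python
def detect_presence(second_images, first_learner):
    """Feed each second image to the first learner in turn and record,
    per image, whether a detection target is present.

    `first_learner` stands in for the trained model: any callable that
    maps an image to True (target present) or False (absent).
    """
    results = []
    for image in second_images:
        results.append(bool(first_learner(image)))
    return results

# Toy stand-in learner that flags images containing a dark pixel.
toy_learner = lambda img: any(p < 128 for row in img for p in row)
images = [[[255, 255], [255, 0]],    # contains a dark pixel -> target
          [[255, 255], [255, 255]]]  # uniform bright -> no target
flags = detect_presence(images, toy_learner)  # [True, False]
```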
  • the first learning device is generated using the first label image including the detection target and the second label image including no detection target as the learning data.
	• for convenience, the detection targets are divided into types A, B, C, and D according to, for example, their shape and kind.
  • the first label image can be a set of images that include only type A, images that include only type B, images that include only type C, and images that include only type D.
	• the first learning device can be trained using the first label images together with a label indicating that a detection target is present.
  • the second label image can be a set of images that do not include any of types A, B, C, and D.
	• the first learning device can be trained using the second label images together with a label indicating that no detection target is present.
	• if a second image includes any one of types A, B, C, and D, it can be detected that a detection target exists. If the second image includes none of types A, B, C, and D, it can be detected that no detection target exists.
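The labelling scheme in the bullets above can be sketched as follows. The function name and data layout are illustrative assumptions, with label 1 standing for "detection target present" (any of types A, B, C, D) and 0 for "absent":

```python
def build_first_learner_data(images_by_type, no_target_images):
    """Assemble (image, label) pairs for training the first learner.

    `images_by_type` maps each type name ("A".."D") to its set of first
    label images; all of them share the single label 1 (target present).
    `no_target_images` are the second label images, labelled 0 (absent).
    """
    pairs = [(img, 1) for imgs in images_by_type.values() for img in imgs]
    pairs += [(img, 0) for img in no_target_images]
    return pairs
```

Note that the type information is deliberately collapsed here: the first learner only answers "present or absent"; the per-type labels are used later for the second learner.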
	• the detection device further includes a second learning device and a type detection unit; the type detection unit inputs to the second learning device the third images, that is, the second images input to the first learning device in which a detection target was detected, and detects the type of the detection target.
	• the second learning device can be configured by, for example, a multilayer neural network (deep learning); for example, a convolutional neural network can be used, but other machine learning methods may also be used.
	• the second learning device is trained with teacher data so that it can detect the type of a detection target.
  • the type detection unit inputs, to the second learning device, the third image in which the detection target is detected among the second images input to the first learning device, and detects the type of the detection target.
	• the third images exclude the second images for which the first learning device determined that no detection target exists; only the second images in which a detection target was determined to exist are included.
  • the type detection unit inputs each of the third images to the second learning device in order and detects the number of detection targets of the same type.
	• the type detection unit sequentially inputs each of the third images to the second learning device and counts the detection targets of the same type. For example, if three third images each including type D are sequentially input to the second learning device, the second learning device detects type D three times, so the count for type D is three. Thus, even when the quality of the imaging target is judged by the number of minute detection targets it contains, that number can be obtained and the quality of the imaging target can be determined.
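The counting step above amounts to a tally of per-image type predictions; a minimal sketch, with a stand-in callable in place of the trained second learning device (names are illustrative, not from the patent):

```python
from collections import Counter

def count_types(third_images, second_learner):
    """Run each third image through the second learner and tally how
    many times each type is detected.

    `second_learner` stands in for the trained model: any callable
    mapping an image to a type label such as "A".."D".
    """
    return Counter(second_learner(image) for image in third_images)

# Stand-in learner: here each toy "image" is simply its own type tag.
counts = count_types(["D", "D", "D", "A"], lambda img: img)
# counts["D"] is 3, matching the example of type D detected three times.
```

The same `Counter` also answers the "different types" case in the next bullets: its keys are the distinct types that were detected.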
  • the type detection unit inputs the third image to the second learning device and detects detection targets of different types.
	• the type detection unit inputs the third images to the second learning device and detects detection targets of different types. For example, when four third images including types A, B, C, and D are input to the second learning device, the type detection unit can detect the four different types A, B, C, and D. Thus, even when detection targets come in various different types, each type can be detected.
	• the detection device includes a plurality of the second learning devices; the type detection unit inputs, to another second learning device, the fourth images, that is, the third images input to one second learning device excluding those in which the one type was detected, and detects a type different from the one type.
	• the type detection unit inputs to another second learning device the fourth images, which are the third images input to one second learning device excluding those in which the one type was detected, and detects a type different from the one type. For example, when a third image including type A and a third image including type B are sequentially input to one second learning device and that device detects type A, the third image including type B (that is, the third images except the one containing type A) is input to the next second learning device as a fourth image. The next second learning device can then detect type B. Thus, even when detection targets come in various different types, each type can be detected.
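The cascade of single-type second learning devices described above can be sketched as a simple filtering chain. Each predicate stands in for one trained second learner; all names and the toy dict-based "images" are illustrative assumptions:

```python
def cascade_detect(images, learners):
    """Pass images through a chain of single-type second learners.

    Each learner detects one type; images in which that type was found
    are removed before the remainder (the fourth images) is handed to
    the next learner. `learners` is a list of (type_name, predicate)
    pairs, where the predicate stands in for one trained second learner.
    """
    detected = {}
    remaining = list(images)
    for type_name, predicate in learners:
        detected[type_name] = [img for img in remaining if predicate(img)]
        # The images left over become the fourth images for the next stage.
        remaining = [img for img in remaining if not predicate(img)]
    return detected, remaining

# Stand-in learners keyed on a tag carried by each toy "image".
learners = [("A", lambda img: img["tag"] == "A"),
            ("B", lambda img: img["tag"] == "B")]
imgs = [{"tag": "A"}, {"tag": "B"}]
found, rest = cascade_detect(imgs, learners)
```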
	• the second learning device is generated using, as learning data, a plurality of type label images obtained by sorting the first label images, which include detection targets, by detection target type.
	• the second learning device is generated by using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images. For example, when the set of first label images includes four types of detection targets (types A, B, C, and D), it is divided into four sets: type A label images containing type A, type B label images containing type B, type C label images containing type C, and type D label images containing type D.
	• the second learning device can be generated using the set of type A label images with a label indicating type A, the set of type B label images with a label indicating type B, the set of type C label images with a label indicating type C, and the set of type D label images with a label indicating type D.
	• the detection apparatus includes an acquisition unit that acquires an input image in which an imaging target is captured, and a determination unit that determines whether or not a predetermined defect target is included in the input image acquired by the acquisition unit; an input image determined by the determination unit to contain no defect target serves as the first image.
  • the acquisition unit acquires the input image in which the imaging target is captured.
  • the imaging target is an inspection target, and for example, in the case of a solar cell module, each cell that constitutes the module can be used, but the present invention is not limited to this.
	• the determination unit determines whether or not the input image includes a predetermined defect target.
  • the determination unit has an image processing function and can determine the presence or absence of a predetermined defect target based on the input image.
  • the predetermined defect target includes, but is not limited to, single cell abnormality, disconnection (also referred to as finger disconnection), cleaning mark (also referred to as water mark), breakage coexisting crack, and the like.
  • the input image determined by the determination unit to have no defect target is the first image. As a result, it is possible to exclude defects that are relatively easy to detect in advance, and narrow down and process only images that may include minute detection targets that are difficult to detect.
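This pre-filtering step can be sketched as a one-line selection; `has_known_defect` stands in for the determination unit's image processing (single-cell abnormality, finger disconnection, and so on), and all names here are illustrative assumptions:

```python
def select_first_images(input_images, has_known_defect):
    """Keep only input images containing none of the easily detected
    defect targets; these become the first images passed on to
    candidate-region extraction.
    """
    return [img for img in input_images if not has_known_defect(img)]

# Toy stand-in: each "image" carries a tag naming any known defect.
inputs = [{"id": 1, "defect": "finger disconnection"},
          {"id": 2, "defect": None},
          {"id": 3, "defect": "water mark"}]
first_images = select_first_images(inputs,
                                   lambda img: img["defect"] is not None)
# Only image 2 survives as a first image.
```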
	• the learning device includes a first learning device generated using, as learning data, a first label image including a detection target and a second label image including no detection target, and a second learning device generated using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images.
	• the learning device generation method acquires first label images including a detection target and second label images including no detection target, generates a first learning device using the acquired first and second label images as learning data, and generates a second learning device using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images.
  • the learning device includes a first learning device and a second learning device.
	• the first learning device and the second learning device can be configured by, for example, a multilayer neural network (deep learning); for example, a convolutional neural network can be used, but other machine learning methods may also be used.
  • the first learning device is generated using the first label image including the detection target and the second label image including no detection target as the learning data.
	• the first label images can be a set of images including only type A, images including only type B, images including only type C, and images including only type D.
	• the first learning device can be trained using the first label images together with a label indicating that a detection target is present.
  • the second label image can be a set of images that do not include any of types A, B, C, and D.
	• the first learning device can be trained using the second label images together with a label indicating that no detection target is present.
	• if a second image includes any one of types A, B, C, and D, it can be detected that a detection target exists. If the second image includes none of types A, B, C, and D, it can be detected that no detection target exists.
	• the second learning device is generated by using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images. For example, when the set of first label images includes four types of detection targets (types A, B, C, and D), it is divided into four sets: type A label images containing type A, type B label images containing type B, type C label images containing type C, and type D label images containing type D.
	• the second learning device can be generated using the set of type A label images with a label indicating type A, the set of type B label images with a label indicating type B, the set of type C label images with a label indicating type C, and the set of type D label images with a label indicating type D.
  • the detection target types can be detected.
  • the first label image includes at least one of candidate regions including detection target candidates.
	• the first label image includes at least one of the candidate areas including detection target candidates. "At least one" means that a first label image does not include a large number of candidate regions; it can include, for example, one, two, or three (a small number). For example, when one learning image includes three detection target candidates A, B, and C, there are three first label images: one including candidate A, one including candidate B, and one including candidate C. Since each includes only a small number of candidate areas, the detection accuracy of the first learning device can be improved.
  • the first label image has a candidate area including a detection target candidate arranged at a predetermined position of a predetermined image.
  • a candidate area including a detection target candidate is arranged at a predetermined position of a predetermined image.
  • the predetermined image can be, for example, an image having a constant brightness value (an image having a high brightness value, a bright image).
  • the candidate area can be a rectangular area surrounding the detection target candidate according to the shape and size of the detection target candidate.
  • the predetermined position can be, for example, the center (midpoint) of the predetermined image.
  • the candidate areas can be arranged so that the center of the candidate area matches the center of the predetermined image.
	• since the position of the candidate area on the first label image can be set to the same position for every first label image, a minute detection target can be detected more accurately than when the position of the candidate area varies from image to image.
  • the predetermined image has a white background color.
	• the background color of the predetermined image is white. As a result, the contrast between a minute detection target and the background can be increased.
	• the first label images are images, among the captured images of the imaging target, that were determined to contain no predetermined defect target.
  • the imaging target is an inspection target, and for example, in the case of a solar cell module, each cell that constitutes the module can be used, but the present invention is not limited to this.
  • the predetermined defect target includes, but is not limited to, single cell abnormality, disconnection (also referred to as finger disconnection), cleaning mark (also referred to as water mark), breakage coexisting crack, and the like. As a result, it is possible to exclude defects that are relatively easy to detect in advance, and narrow down and process only images that may include minute detection targets that are difficult to detect.

Abstract

Provided are a detecting device, a learner, a computer program, a detecting method, and a method for generating a learner, with which it is possible for a minute detection target to be detected accurately. This detecting device is provided with: a candidate region extracting unit which subjects a first image to image processing to extract one or a plurality of candidate regions each containing a detection target candidate; an image generating unit which generates one or a plurality of second images each containing at least one of the extracted candidate regions; a first learner; and a detecting unit which inputs the generated second image into the first learner to detect the presence or absence of the detection target.

Description

Detection device, learning device, computer program, detection method, and learning device generation method
 The present invention relates to a detection device, a learning device, a computer program, a detection method, and a learning device generation method.
 An EL inspection using electroluminescence (EL) is one of the methods used to inspect solar cell modules for defects. In the EL inspection, a current is supplied to each cell constituting the solar cell module so that the cell itself emits light, and cracks, disconnections, and the like are confirmed based on an image of the cell surface captured by a camera (see Patent Document 1).
Japanese Patent No. 6208843
 However, the luminance distribution and distortion of the image differ from cell to cell, and there are minute detection targets (for example, defects) that are difficult to confirm visually. Furthermore, even minute detection targets come in various shapes, and the pass/fail judgment of a cell differs depending on the type of detection target. It is therefore desirable to detect minute detection targets accurately.
 The present invention has been made in view of such circumstances, and an object thereof is to provide a detection device, a learning device, a computer program, a detection method, and a learning device generation method capable of accurately detecting a minute detection target.
 A detection device according to an embodiment of the present invention includes: a candidate region extraction unit that performs image processing on a first image to extract one or a plurality of candidate regions including a detection target candidate; an image generation unit that generates one or a plurality of second images including at least one of the extracted candidate regions; a first learning device; and a detection unit that inputs the second images generated by the image generation unit to the first learning device and detects the presence or absence of a detection target.
 A learning device according to an embodiment of the present invention includes a first learning device generated using, as learning data, a first label image including a detection target and a second label image including no detection target, and a second learning device generated using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images.
 A computer program according to an embodiment of the present invention causes a computer to execute: a process of acquiring a first image; a process of performing image processing on the acquired first image to extract one or a plurality of candidate regions including a detection target candidate; a process of generating one or a plurality of second images including at least one of the extracted candidate regions; and a process of inputting the generated second images to a first learning device to detect the presence or absence of a detection target.
 A detection method according to an embodiment of the present invention acquires a first image, performs image processing on the acquired first image to extract one or a plurality of candidate regions including a detection target candidate, generates one or a plurality of second images including at least one of the extracted candidate regions, and inputs the generated second images to a first learning device to detect the presence or absence of a detection target.
 A learning device generation method according to an embodiment of the present invention acquires a first label image including a detection target and a second label image including no detection target, generates a first learning device using the acquired first and second label images as learning data, and generates a second learning device using, as learning data, a plurality of type label images sorted by the type of detection target included in the first label images.
 According to the present invention, a minute detection target can be detected accurately.
FIG. 1 is a block diagram showing an example of the configuration of the detection device of the present embodiment.
FIG. 2 is a schematic diagram showing an example of a defect target.
FIG. 3 is a schematic diagram showing an example of a detection target detected by the detection device of the present embodiment.
FIG. 4 is a schematic diagram showing an example of generation of a second image.
FIG. 5 is a schematic diagram showing an example of the configuration of the first learning device and the second learning device.
FIG. 6 is a schematic diagram showing an example of a method of generating the first learning device using first label images.
FIG. 7 is a schematic diagram showing an example of a method of generating the first learning device using second label images.
FIG. 8 is a schematic diagram showing an example of a method of generating the second learning device using type label images.
FIG. 9 is a schematic diagram showing an example of a method of detecting a detection target by the detection device of the present embodiment.
FIG. 10 is a schematic diagram showing another example of the configuration of the learning device.
FIG. 11 is a schematic diagram showing an example of the configuration of a detection system.
FIG. 12 is a flowchart showing an example of a processing procedure performed by the detection device.
 Embodiments of the present invention will be described below with reference to the drawings. FIG. 1 is a block diagram showing an example of the configuration of the detection device 50 of the present embodiment. The detection device 50 includes a control unit 51 that controls the entire device, an input unit 52, a defect determination unit 53, a candidate region extraction unit 54, a storage unit 55, an image generation unit 56, a learning device 57, and a detection unit 58. The learning device 57 includes a first learning device 571 and a second learning device 572. The storage unit 55 can store required data such as the input image described later and the results of the detection processing.
 The input unit 52 functions as an acquisition unit and acquires an input image in which the imaging target is captured. The imaging target is the inspection target; in the case of a solar cell module, for example, it can be each cell constituting the module, but it is not limited to this.
 The defect determination unit 53 functions as a determination unit and determines whether the input image includes a predetermined defect target. The defect determination unit 53 has an image processing function and can determine the presence or absence of a predetermined defect target based on the input image. When the imaging target is a cell, the predetermined defect targets include, but are not limited to, single-cell abnormality, disconnection (also called finger disconnection), cleaning marks (also called water marks), and cracks accompanied by disconnection.
 FIG. 2 is a schematic diagram showing examples of defect targets. FIG. 2 shows images (also called cell images) obtained by imaging cells constituting a solar cell module. The horizontal parallel black lines in each cell image indicate metal lines in the cell. FIG. 2A schematically shows a case where a finger disconnection exists in a cell. A finger disconnection is a disconnection of relatively large width; if, for example, even one exists in a cell, the cell is determined to be defective. FIG. 2B schematically shows a case where a water mark exists in a cell. A water mark is a cleaning trace remaining after the cleaning process; if even one exists in a cell, the cell is determined to be defective. FIG. 2C schematically shows a case where a crack accompanied by disconnection exists in a cell; if even one exists, the cell is determined to be defective. FIG. 2D schematically shows a cell abnormality, in which the cell does not emit light or the amount of emitted light is extremely low; such a cell is determined to be defective.
 The defect determination unit 53 outputs an input image determined to contain no predetermined defect target to the candidate region extraction unit 54 as the first image. This excludes in advance defect targets that are relatively easy to detect, so that only images that may contain minute, hard-to-detect detection targets are passed on to the candidate region extraction unit 54 and subsequent processing.
 When the defect determination unit 53 determines that a predetermined defect target exists, the detection unit 58 detects the corresponding cell as a defective product.
 FIG. 3 is a schematic diagram showing examples of detection targets detected by the detection device 50 of the present embodiment. A detection target is a minute defect target that is difficult for the defect determination unit 53 to judge, and may come in various shapes and sizes. In this specification, as shown in FIG. 3, detection targets are divided for convenience into four types, A, B, C, and D, according to, for example, shape and kind. The number of types is not limited to four. In the case of cells constituting a solar cell module, the detection targets can be cracks of different shapes and sizes; in this specification the detection target is described as a crack, but it is not limited to cracks. The crack shapes and sizes shown in FIG. 3 are examples, and the invention is not limited to them.
 The candidate region extraction unit 54 performs image processing on the first image output by the defect determination unit 53 and extracts one or a plurality of candidate regions including detection target candidates. The image processing includes ordinary image processing such as density correction, noise removal, edge detection, and feature extraction. A detection target candidate is, for example, a minute defect that is difficult to detect visually or by the defect determination unit 53; a candidate region is a region containing such a minute detection target candidate.
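As a toy illustration of this extraction step (not the patent's actual image processing, which also involves density correction and noise removal), the following Python sketch binarizes an image and returns a bounding box for each connected cluster of dark pixels; all names are illustrative assumptions:

```python
def extract_candidate_regions(image, threshold=128):
    """Binarize the image and return one bounding box
    (top, left, bottom, right) per connected cluster of dark pixels,
    standing in for the candidate regions around detection target
    candidates. `image` is a list of rows of pixel values.
    """
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if image[r][c] < threshold and not seen[r][c]:
                # Flood-fill one dark cluster, tracking its extent.
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] < threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [y for y, _ in cells]
                xs = [x for _, x in cells]
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

Each returned box corresponds to the rectangular candidate region around one detection target candidate; a real implementation would typically use a library routine such as connected-component labelling instead of this hand-rolled flood fill.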
 The detection unit 58 can determine that a cell corresponding to a first image from which the candidate region extraction unit 54 could not extract any detection target candidate is a non-defective product.
 When the candidate region extraction unit 54 extracts a plurality of candidate regions from one first image, the image generation unit 56 generates one or a plurality of second images, each including at least one of the candidate regions. "At least one" means that a second image does not include a large number of candidate regions; it can include, for example, one, two, or three (a small number). In this specification, the case where each second image includes one candidate region is described, but the number of candidate regions included in one second image is not limited to one.
 FIG. 4 is a schematic diagram showing an example of generation of second images. As shown in FIG. 4, assume that the candidate region extraction unit 54 extracts three candidate regions, of types A, C, and D, from one first image. In this case, the image generation unit 56 generates three second images: one including the type A candidate region, one including type C, and one including type D.
 The image generation unit 56 can generate a second image by placing at least one of the candidate regions extracted by the candidate region extraction unit 54 at a predetermined position of a predetermined image. The predetermined image can be, for example, an image with a uniform luminance value (a high-luminance, bright image). A candidate region can be a rectangular region surrounding the detection target candidate, sized according to the candidate's shape and size. The predetermined position can be, for example, the center (midpoint) of the predetermined image; for example, the candidate region can be placed so that its center coincides with the center of the predetermined image.
 Because the candidate region can be placed at the same position in every second image, a minute detection target can be detected more accurately than when the position of the candidate region varies from image to image.
 By setting the background color of the predetermined image to white, the contrast between a minute detection target and the background can be increased.
 The first learning device 571 and the second learning device 572 can be configured as, for example, multilayer neural networks (deep learning); for example, a convolutional neural network can be used, although other machine learning methods may be used instead. In this specification, the first learning device 571 and the second learning device 572 are described as convolutional neural networks, but they are not limited to this.
 FIG. 5 is a schematic diagram showing an example of the configuration of the first learning device 571 and the second learning device 572. The convolutional neural network has hidden layers between the input layer and the output layer. The hidden layers comprise multiple stages of convolutional and pooling layers, with a fully connected layer at the final stage. The numbers of convolutional, pooling, and fully connected layers can be determined as appropriate.
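A minimal NumPy sketch of this layer structure follows (one convolutional stage, one pooling stage, and one fully connected output stage; the layer counts, sizes, and function names are illustrative assumptions, not the embodiment's actual network):

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with kernel k."""
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s-by-s max pooling."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).max(axis=(1, 3))

def forward(image, kernel, fc_weights):
    """Convolution + ReLU, pooling, then a fully connected softmax output."""
    feat = np.maximum(conv2d(image, kernel), 0.0)   # convolutional layer
    pooled = max_pool(feat)                          # pooling layer
    logits = fc_weights @ pooled.ravel()             # fully connected layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # class probabilities
```

A practical implementation would stack several such convolution/pooling stages and learn the kernels and weights, as the embodiment describes.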
 Next, a method of generating (training) the first learning device 571 and the second learning device 572 will be described. The control unit 51 can perform the training processing of the first learning device 571 and the second learning device 572. The control unit 51 and the learning device 57 can be configured by combining hardware such as CPUs (for example, multi-processors with a plurality of processor cores), GPUs (Graphics Processing Units), DSPs (Digital Signal Processors), and FPGAs (Field-Programmable Gate Arrays).
 The first learning device 571 can be generated using, as training data, first label images that include a detection target and second label images that do not. First, the case of using first label images will be described.
 FIG. 6 is a schematic diagram showing an example of a method of generating the first learning device 571 using first label images. The first label images can be a set of images containing only type A, a set containing only type B, a set containing only type C, and a set containing only type D. The first learning device 571 can be generated by giving the sets of first label images to the input layer and giving a label indicating that a detection target is present (for example, label 11) to the output layer.
 FIG. 7 is a schematic diagram showing an example of a method of generating the first learning device 571 using second label images. The second label images can be a set of images containing none of types A, B, C, and D. The first learning device 571 can be generated by giving the set of second label images to the input layer and giving a label indicating that no detection target is present (for example, label 12) to the output layer.
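The two training sets of FIG. 6 and FIG. 7 can be sketched with a simple stand-in learner. The sketch below substitutes logistic regression on raw pixels for the convolutional network, and the synthetic "crack" images are invented purely for illustration; only the labeling scheme (target present vs. absent) follows the embodiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_label_image(with_target, size=16):
    """Stand-in label image: a bright cell image, with a minute dark
    streak drawn when a detection target should be present."""
    img = np.full((size, size), 0.9) + rng.normal(0.0, 0.02, (size, size))
    if with_target:
        r = int(rng.integers(2, size - 2))
        c = int(rng.integers(2, size - 2))
        img[r, c - 2:c + 3] = 0.1          # minute dark streak
    return img.ravel()

# first label images (target present, label 1) and second label images (label 0)
X = np.stack([synthetic_label_image(i % 2 == 0) for i in range(200)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(200)])
X -= X.mean(axis=0)                        # center so the sigmoid does not saturate

w = np.zeros(X.shape[1])
b = 0.0
for _ in range(500):                       # plain gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.5 * X.T @ g / len(y)
    b -= 0.5 * g.mean()
```

After training, thresholding the predicted probability at 0.5 plays the role of the first learning device's "detection target present / absent" output.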
 With this, when a second image generated by the image generation unit 56 includes any one of types A, B, C, and D as a detection target, the first learning device 571 can detect that a detection target is present. When the second image includes none of types A, B, C, and D as a detection target, the first learning device 571 can detect that no detection target is present.
 Next, a method of generating (training) the second learning device 572 will be described. The second learning device 572 can be generated using, as training data, a plurality of type label images obtained by classifying the first label images according to the different types of detection targets they contain.
 FIG. 8 is a schematic diagram showing an example of a method of generating the second learning device 572 using type label images. For example, when the set of first label images includes the four types of detection targets A, B, C, and D, the first label images are divided into four sets of type label images: type A label images including type A, type B label images including type B, type C label images including type C, and type D label images including type D.
 The second learning device 572 can be generated by giving the set of type A label images to the input layer and a label indicating type A (for example, label 21) to the output layer, giving the set of type B label images to the input layer and a label indicating type B (for example, label 23) to the output layer, giving the set of type C label images to the input layer and a label indicating type C (for example, label 25) to the output layer, and giving the set of type D label images to the input layer and a label indicating type D (for example, label 27) to the output layer. When label 21 is given to the output layer, types B, C, and D are absent, so labels 24, 26, and 28 may be used together. The same applies when labels 23, 25, and 27 are given to the output layer.
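The four-way labeling of FIG. 8 can likewise be sketched with a simple stand-in. Here softmax regression on raw pixels substitutes for the convolutional network, and the per-type synthetic patterns are invented for illustration; only the scheme of one label per type follows the embodiment:

```python
import numpy as np

rng = np.random.default_rng(1)

def type_label_image(t, size=12):
    """Synthetic type label image: each type gets a distinct minute pattern."""
    img = np.full((size, size), 0.9) + rng.normal(0.0, 0.02, (size, size))
    m = size // 2
    if t == 0:                                       # type A: horizontal streak
        img[m, 2:size - 2] = 0.1
    elif t == 1:                                     # type B: vertical streak
        img[2:size - 2, m] = 0.1
    elif t == 2:                                     # type C: small blob
        img[m - 1:m + 2, m - 1:m + 2] = 0.1
    else:                                            # type D: diagonal streak
        idx = np.arange(2, size - 2)
        img[idx, idx] = 0.1
    return img.ravel()

labels = np.array([i % 4 for i in range(200)])       # 0..3 = types A..D
X = np.stack([type_label_image(int(t)) for t in labels])
X -= X.mean(axis=0)
Y = np.eye(4)[labels]                                # one output label per type

W = np.zeros((X.shape[1], 4))
for _ in range(300):                                 # softmax regression training
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    W -= 0.2 * X.T @ (P - Y) / len(X)
```

The argmax over the four output scores then plays the role of the second learning device's type decision.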
 This enables the second learning device 572 to detect the type of a detection target.
 Among types A to D as detection targets, suppose that for type D, for example, the quality of a cell is determined according to the number of type D targets present in the cell. The second learning device 572 can also be trained so that it can detect the number of a predetermined detection target (for example, type D). In this case, the second learning device 572 can be generated by giving type label images containing one or more type D detection targets to the input layer and giving the number of type D targets (for example, labels 0 to N) to the output layer.
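A count-labeled training set of this kind can be sketched the same way. The images below, containing 0 to 3 synthetic dark dots, and the softmax stand-in are illustrative assumptions; only the idea of labeling each image with the number of type D targets follows the embodiment:

```python
import numpy as np

rng = np.random.default_rng(2)

def image_with_count(k, size=12):
    """Synthetic stand-in image containing k minute type-D targets (dark dots)."""
    img = np.full((size, size), 0.9) + rng.normal(0.0, 0.01, (size, size))
    pos = rng.choice(size * size, size=k, replace=False)
    img.flat[pos] = 0.1
    return img.ravel()

counts = np.array([i % 4 for i in range(240)])   # output labels 0 to N (here N = 3)
X = np.stack([image_with_count(int(k)) for k in counts])
X -= X.mean(axis=0)
Y = np.eye(4)[counts]

W = np.zeros((X.shape[1], 4))
B = np.zeros(4)
for _ in range(1000):                            # softmax regression on the count
    Z = X @ W + B
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = P - Y
    W -= 0.5 * X.T @ G / len(X)
    B -= 0.5 * G.mean(axis=0)
```

The bias terms matter here: the count classes differ mainly in total darkness, so the four output scores must carve intervals along that single quantity.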
 Next, detection of the presence or absence of a detection target by the first learning device 571, and detection of the type and number of detection targets by the second learning device 572, will be described.
 FIG. 9 is a schematic diagram showing an example of a method of detecting a detection target by the detection device 50 of the present embodiment. The defect determination unit 53 determines whether or not the input image includes a predetermined defect target. When there is a defect target, the detection unit 58 detects that there is a defect, that is, a defective product. The defect determination unit 53 outputs an input image having no defect target to the candidate region extraction unit 54 as a first image.
 The candidate region extraction unit 54 performs image processing on the first image and extracts one or more candidate regions that include detection target candidates. When there is no candidate region, the detection unit 58 detects that the cell is a good product. When there are candidate regions, the candidate region extraction unit 54 outputs the first image including the candidate regions to the image generation unit 56.
 The image generation unit 56 generates as many second images, each with a candidate region placed at the predetermined position, as there are extracted candidate regions, and outputs the generated second images to the first learning device 571 in order.
 As described above, the first learning device 571 is generated so that it can detect the presence or absence of a detection target using teacher data. When the first learning device 571 determines that there is no detection target, the detection unit 58 detects that the cell is a good product; when the first learning device 571 determines that there is a detection target, the detection unit 58 detects that the cell is defective. In this way, the detection unit 58 can input the second images to the first learning device 571 and detect the presence or absence of a detection target. Since each second image input to the first learning device 571 contains only a small number of candidate regions, the detection accuracy of the first learning device 571 can be increased. Moreover, even when ordinary image processing has difficulty detecting minute defects because of the brightness distribution, distortion, and the like of the image, the first learning device 571 can be trained until its detection accuracy reaches the required level, so minute detection targets can be detected accurately.
 Further, the detection unit 58 can input the second images one by one to the first learning device 571 to detect the presence or absence of a detection target. For example, when five candidate regions including detection target candidates are extracted, the five second images are input to the first learning device 571 in order, and the first learning device 571 detects the presence or absence of a detection target for each second image. This makes it possible to determine, without omission, whether every detection target candidate is an actual detection target.
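This one-by-one screening step can be sketched as follows; the function and parameter names are placeholders, and the learner is passed in as any callable returning True when a detection target is present:

```python
def screen_candidates(second_images, first_learner):
    """Input each second image to the first learner in order; keep those in
    which a detection target is found (these become the third images)."""
    third_images = []
    for img in second_images:
        if first_learner(img):        # True means "detection target present"
            third_images.append(img)
    return third_images
```

Because every second image is examined, no detection target candidate is skipped, matching the "without omission" guarantee above.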
 As described above, the second learning device 572 is generated so that it can detect the type of a detection target using teacher data.
 The detection unit 58 also functions as a type detection unit: of the second images input to the first learning device 571, it inputs those in which a detection target was detected to the second learning device 572 as third images. The second learning device 572 detects the type of the detection target. When a plurality of second images including candidate regions are input to the first learning device 571, the third images exclude the second images for which the first learning device 571 determined that no detection target was present, and include only the second images for which it determined that a detection target was present. By using the second learning device 572, training can continue until the type detection accuracy reaches the required level, so the type of a minute detection target can be detected accurately.
 As shown in FIG. 9, when the second learning device 572 determines the presence or absence of types A to C, the detection unit 58 can output a detection result for each of types A to C. For example, when any one of types A to C is present, the detection unit 58 can detect the cell as a defective product.
 In this way, the detection unit 58 can input the third images to the second learning device 572 and detect detection targets of different types. For example, when four third images including types A, B, C, and D are input to the second learning device 572, the detection unit 58 can detect the four different types A, B, C, and D. Thus, even when detection targets come in a wide variety of shapes and different types, the type of each detection target can be detected.
 Further, when the second learning device 572 determines the number of type D targets, the detection unit 58 can output a good/defective determination result according to that number. For example, when four or more type D cracks exist in one cell, the cell can be determined to be defective.
 In this way, the detection unit 58 can input the third images one by one to the second learning device 572 and detect the number of detection targets of the same type. For example, when three third images including type D are input to the second learning device 572 in order, the second learning device 572 detects type D three times, so the number of type D targets can be detected as three. Thus, even when the quality of the imaging target depends on the number of minute detection targets present in it, the number of detection targets can also be detected, so the quality of the imaging target can be determined.
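The type tallying and the good/defective judgment described above can be sketched together; the names and the threshold default are placeholders, and the second learner is any callable returning a type name per third image:

```python
from collections import Counter

def judge_by_types(third_images, second_learner, type_d_limit=4):
    """Input each third image to the second learner in order, tally the
    detected types, and judge the cell as in FIG. 9: any of types A to C,
    or type_d_limit or more type D targets, makes the cell defective."""
    tally = Counter(second_learner(img) for img in third_images)
    if tally["A"] or tally["B"] or tally["C"] or tally["D"] >= type_d_limit:
        return "defective", dict(tally)
    return "good", dict(tally)
```

Counting detections per type is what lets the same classifier output both the type result and the count-based quality result.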
 FIG. 10 is a schematic diagram showing another example of the configuration of the learning device 57. The learning device 57 includes a plurality of second learning devices 572a, 572b, 572c, ..., 572n. The second learning devices 572a, 572b, 572c, ..., 572n are trained to detect detection targets of types A, B, C, ..., N, respectively.
 For example, when there are two different types, A and B, two second learning devices can be provided: one that detects type A and one that detects type B. Of the third images input to one second learning device, the detection unit 58 inputs the fourth images, which exclude the third images in which that device's type was detected, to another second learning device to detect a type different from the first. For example, when a third image including type A and a third image including type B are input in order to one second learning device and that device detects type A, the third image including type B (that is, excluding the type A image) is input to the next second learning device as a fourth image; that next second learning device can detect type B. The same applies when there are three or more types to detect. Thus, even when detection targets come in a wide variety of shapes and different types, the type of each detection target can be detected.
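This cascade of per-type learning devices can be sketched as follows; the names are placeholders, and each detector is any callable returning True when its own type is present:

```python
def cascade_detect(third_images, detectors):
    """Per-type second learning devices applied in sequence, as in FIG. 10:
    each detector receives the images left over by the previous one (the
    'fourth images') and removes those in which it finds its own type."""
    found = {}
    remaining = list(third_images)
    for type_name, detector in detectors:
        found[type_name] = [img for img in remaining if detector(img)]
        remaining = [img for img in remaining if not detector(img)]
    return found, remaining
```

Images that no detector claims remain at the end, which is one way such a system could flag candidates of an unknown type.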
 FIG. 11 is a schematic diagram showing an example of the configuration of a detection system. In the detection system of the present embodiment, a server 100 including the detection device 50 is connected to a communication network 1. Although only one server 100 is shown in FIG. 11, the number of servers 100 is not limited to one. A plurality of client devices 10 are connected to the communication network 1. The server 100 and the client devices 10 have communication functions capable of transmitting and receiving the required data.
 When a client device 10 transmits an input image of the imaging target to the server 100 via the communication network 1, the server 100 receives the input image, performs each process of the detection device 50 on it, and transmits the detection result (for example, the quality of the imaging target, the presence or absence of a detection target, the type of the detection target, and so on) to the client device 10. Accordingly, the client device 10 can obtain the detection result merely by transmitting the input image, and the client device 10 itself need not include the detection device 50.
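A minimal client/server round trip of this kind can be sketched with the Python standard library. The JSON message shapes, URL path, and the trivial brightness check standing in for the detection device 50 are all illustrative assumptions; the embodiment does not specify a protocol:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class DetectHandler(BaseHTTPRequestHandler):
    """Server 100: receives an input image and returns a detection result."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        pixels = body["image"]
        result = {"defect": min(pixels) < 0.5}   # stub for detection device 50
        data = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):                # keep the sketch quiet
        pass

def serve_once(port=0):
    """Bind the server and handle a single request on a background thread."""
    srv = HTTPServer(("127.0.0.1", port), DetectHandler)
    threading.Thread(target=srv.handle_request, daemon=True).start()
    return srv

def client_detect(port, pixels):
    """Client device 10: send the input image, receive the detection result."""
    req = Request(f"http://127.0.0.1:{port}/detect",
                  data=json.dumps({"image": pixels}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())
```

The client only serializes the image and reads back the result, matching the point above that the client needs no detection device of its own.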
 FIG. 12 is a flowchart showing an example of a processing procedure performed by the detection device 50. Hereinafter, for convenience, the processing is described as being performed by the control unit 51. The control unit 51 acquires an input image (S11) and determines the presence or absence of a defect (S12). Here, a defect is, for example, one of those illustrated in FIG. 2. When there is no defect (NO in S12), the control unit 51 inputs the first image to the candidate region extraction unit 54 (S13). The first image is an input image determined to have no defect target.
 The control unit 51 extracts candidate regions (S14) and determines whether any candidate regions exist (S15). When there is no candidate region (NO in S15), the input image has neither a defect target nor a detection target, so the imaging target (for example, a cell) corresponding to the input image is determined to be a good product (S16), and the processing ends.
 When there are candidate regions (YES in S15), the control unit 51 generates second images (S17). For example, when there are three candidate regions in the first image, three second images are generated, each with one candidate region placed at the predetermined position. The control unit 51 inputs the second images to the first learning device 571 in order (S18) and determines the presence or absence of a crack as a detection target (S19). When there is no crack (NO in S19), the control unit 51 performs the process of step S16.
 When there is a crack (YES in S19), the control unit 51 inputs a third image to the second learning device 572 (S20). The third image is a second image, among those input to the first learning device 571, in which the first learning device 571 detected a crack. The control unit 51 detects the type of the crack (S21).
 The control unit 51 determines whether there is another third image (S22); when there is (YES in S22), it repeats the processing from step S20. When there is no other third image (NO in S22), the control unit 51 determines whether there are cracks of types A to C (S23); when there are none (NO in S23), it determines whether the number of type D cracks is equal to or greater than a predetermined value (S24).
 When the number of type D cracks is less than the predetermined value (NO in S24), the control unit 51 performs the process of step S16. When there is a defect (YES in S12), when there are cracks of types A to C (YES in S23), or when the number of type D cracks is equal to or greater than the predetermined value (YES in S24), the control unit 51 determines that the imaging target (for example, a cell) corresponding to the input image is defective (S25), and ends the processing. When there are a plurality of input images, the processing shown in FIG. 12 is simply repeated.
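The whole flow of steps S11 to S25 can be sketched as one function with the embodiment's units passed in as callables; all names and the default threshold are placeholders for the patent's components:

```python
def inspect_cell(image, has_defect, extract_candidates, make_second_image,
                 first_learner, second_learner, type_d_limit=4):
    """One pass of the FIG. 12 flow (S11-S25), with the units as callables."""
    if has_defect(image):                                    # S12
        return "defective"                                   # S25
    candidates = extract_candidates(image)                   # S13-S14
    if not candidates:                                       # S15
        return "good"                                        # S16
    second_images = [make_second_image(image, c) for c in candidates]   # S17
    third_images = [im for im in second_images if first_learner(im)]    # S18-S19
    if not third_images:
        return "good"
    types = [second_learner(im) for im in third_images]      # S20-S22
    if any(t in ("A", "B", "C") for t in types):             # S23
        return "defective"
    if types.count("D") >= type_d_limit:                     # S24
        return "defective"
    return "good"
```

For multiple input images, the function is simply called once per image, mirroring the note above about repeating the flow.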
 The detection device 50 can also be realized using a computer including a CPU (processor), a GPU, a RAM (memory), and the like. That is, the detection device 50 can be realized on a computer by loading a computer program defining the procedure of each process shown in FIG. 12 into the RAM (memory) of the computer and executing the computer program on the CPU (processor). The computer program may be recorded on a recording medium and distributed.
 According to the present embodiment, detection targets can be detected accurately even when they are minute targets that are difficult to confirm visually, or minute targets of widely varying shapes and sizes. Various types of detection targets can also be detected.
 In the above-described embodiment, the description uses the cells constituting a solar cell module as the imaging target and objects referred to as "cracks" as the detection target, but the imaging target and the detection target are not limited to these.
 A detection device according to the present embodiment includes: a candidate region extraction unit that performs image processing on a first image to extract one or more candidate regions including detection target candidates; an image generation unit that generates one or more second images each including at least one of the candidate regions extracted by the candidate region extraction unit; a first learning device; and a detection unit that inputs the second images generated by the image generation unit to the first learning device to detect the presence or absence of a detection target.
 A computer program according to the present embodiment causes a computer to execute: a process of acquiring a first image; a process of performing image processing on the acquired first image to extract one or more candidate regions including detection target candidates; a process of generating one or more second images each including at least one of the extracted candidate regions; and a process of inputting the generated second images to a first learning device to detect the presence or absence of a detection target.
 A detection method according to the present embodiment acquires a first image, performs image processing on the acquired first image to extract one or more candidate regions including detection target candidates, generates one or more second images each including at least one of the extracted candidate regions, and inputs the generated second images to a first learning device to detect the presence or absence of a detection target.
 The candidate region extraction unit performs image processing on the first image to extract one or more candidate regions including detection target candidates. The first image is, for example, an image of the imaging target for which it has been determined that no visually confirmable defect exists. The image processing includes ordinary image processing such as density correction, noise removal, edge detection, and feature extraction. A detection target candidate is, for example, a minute defect that is difficult to detect visually, and a candidate region is a region containing such a minute detection target candidate. A first image from which no detection target candidate can be extracted can be determined to show a good product.
 The image generation unit generates one or more second images each including at least one of the extracted candidate regions. Here, "at least one" means that a second image does not contain a large number of candidate regions; it may contain, for example, about one, two, or three (a small number). For example, when three detection target candidates A, B, and C are extracted from one first image, the image generation unit generates three second images: a second image including candidate A, a second image including candidate B, and a second image including candidate C.
 The first learning device can be configured as, for example, a multilayer neural network (deep learning); for example, a convolutional neural network can be used, although other machine learning methods may be used instead. The first learning device is generated so that it can detect the presence or absence of a detection target using teacher data.
 The detection unit inputs the second images to the first learning device to detect the presence or absence of a detection target. Since each second image input to the first learning device contains only a small number of candidate regions, the detection accuracy of the first learning device can be increased. Moreover, even when ordinary image processing has difficulty detecting minute defects because of the brightness distribution, distortion, and the like of the image, the first learning device can be trained until its detection accuracy reaches the required level, so minute detection targets can be detected accurately.
 In the detection device according to the present embodiment, the image generation unit generates a second image by placing at least one of the candidate regions extracted by the candidate region extraction unit at a predetermined position in a predetermined image.
 The image generation unit places at least one of the extracted candidate regions at a predetermined position in a predetermined image to generate a second image. The predetermined image can be, for example, an image with a constant brightness value (an image with a high brightness value, that is, a bright image). The candidate region can be a rectangular region surrounding the detection target candidate, sized according to the shape and size of the candidate. The predetermined position can be, for example, the center (midpoint) of the predetermined image; for example, the candidate region can be placed so that its center coincides with the center of the predetermined image.
 Because the candidate region can occupy the same position in every second image, minute detection targets can be detected more accurately than when the position of the candidate region varies from image to image.
 In the detection device according to the present embodiment, the detection unit inputs each of the second images to the first learner in turn to detect the presence or absence of a detection target.
 The detection unit inputs each second image to the first learner in turn to detect the presence or absence of a detection target. For example, when five candidate regions containing detection target candidates have been extracted, the five second images are input to the first learner one after another, and the first learner detects the presence or absence of a detection target in each second image. In this way, every detection target candidate can be checked to determine whether it is an actual detection target.
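The per-image loop can be sketched as below. The callable interface for the first learner and the 0.5 decision threshold are assumed conventions for the example, not details from the patent.

```python
def detect_targets(second_images, first_learner, threshold=0.5):
    """Feed each second image to the first learner in turn and record a
    yes/no decision per image. `first_learner` is any callable that returns
    a 'target present' probability in [0, 1]; the threshold is an assumed
    convention, not taken from the patent."""
    results = []
    for image in second_images:
        prob = first_learner(image)
        results.append(prob >= threshold)
    return results
```

Each entry of the result list answers, for one candidate region, whether it is an actual detection target.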
 In the detection device according to the present embodiment, the first learner is generated using, as training data, first label images that contain a detection target and second label images that contain no detection target.
 The first learner is generated using, as training data, first label images containing a detection target and second label images containing no detection target. Suppose, for convenience, that detection targets are classified into types A, B, C, and D according to their shape or kind. The first label images can then be a set consisting of images containing only type A, images containing only type B, images containing only type C, and images containing only type D. The first learner can be generated from the first label images paired with a "detection target present" label. The second label images can be a set of images containing none of types A, B, C, or D, and the first learner can be generated from those images paired with a "no detection target" label.
 Accordingly, when a second image contains any one of types A, B, C, or D, the presence of a detection target can be detected; when a second image contains none of them, the absence of a detection target can be detected.
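Assembling the training data described above can be sketched as follows. The 1/0 label encoding and the dictionary layout keyed by type name are assumptions made for the example.

```python
def build_binary_dataset(type_images, negative_images):
    """Build (image, label) pairs for the first learner: every first label
    image containing one defect type (A-D) gets label 1 ('target present');
    every second label image containing none gets label 0. The 1/0 encoding
    is an assumed convention."""
    dataset = []
    for images in type_images.values():   # e.g. {"A": [...], "B": [...]}
        dataset.extend((img, 1) for img in images)
    dataset.extend((img, 0) for img in negative_images)
    return dataset
```

The first learner thus only learns "present vs. absent"; per-type discrimination is left to the second learner.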
 The detection device according to the present embodiment further includes a second learner and a type detection unit that inputs, to the second learner, third images, namely those second images input to the first learner in which a detection target was detected, to detect the type of the detection target.
 The second learner can be configured as, for example, a multilayer neural network (deep learning), such as a convolutional neural network, although other machine learning techniques may also be used. The second learner is generated using teacher data so that it can detect the type of a detection target.
 The type detection unit inputs to the second learner the third images, i.e., those second images input to the first learner in which a detection target was detected. When multiple second images containing candidate regions are input to the first learner, the third images consist only of the second images judged by the first learner to contain a detection target; second images judged to contain no detection target are excluded. By using the second learner, training can continue until the type detection accuracy reaches the required level, so the types of minute detection targets can be detected accurately.
 In the detection device according to the present embodiment, the type detection unit inputs each of the third images to the second learner in turn and detects the number of detection targets of the same type.
 The type detection unit inputs each third image to the second learner in turn and counts the detection targets of each type. For example, when three third images containing type D are input to the second learner one after another, the second learner detects type D three times, so the count for type D is three. Consequently, even when the quality of an imaging target depends on the number of minute detection targets it contains, the number of detection targets can also be obtained, allowing the quality of the imaging target to be judged.
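The per-type counting can be sketched as below. The quality rule (defective when any one type reaches a threshold count) and the threshold value are illustrative assumptions; the patent only states that quality may depend on the number of detection targets.

```python
from collections import Counter

def count_types(third_images, second_learner, threshold=3):
    """Classify each third image with the second learner and tally the
    detections per type; flag the imaging target as defective when any
    single type's count reaches `threshold`. The threshold and the
    pass/fail rule are assumptions made for this sketch."""
    counts = Counter(second_learner(img) for img in third_images)
    defective = any(n >= threshold for n in counts.values())
    return counts, defective
```

For the example in the text, three type-D images yield a count of 3 for type D.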
 In the detection device according to the present embodiment, the type detection unit inputs the third images to the second learner and detects detection targets of different types.
 The type detection unit inputs the third images to the second learner and detects detection targets of different types. For example, when four third images containing types A, B, C, and D are input to the second learner, the type detection unit can detect the four different types A, B, C, and D. Thus, even when detection targets take a wide variety of shapes and fall into different types, their types can be detected.
 The detection device according to the present embodiment includes a plurality of the second learners, and the type detection unit inputs, to another second learner, fourth images, namely the third images input to one second learner excluding those in which one type was detected, to detect a type different from that one type.
 A plurality of second learners are provided. For example, when there are two different types, A and B, two second learners can be provided: one that detects type A and one that detects type B. The type detection unit inputs, to another second learner, the fourth images, i.e., the third images input to one second learner excluding those in which one type was detected, and thereby detects a type different from that one. For example, when a third image containing type A and a third image containing type B are input in turn to one second learner and that learner detects type A, the third image containing type B (in which type A was not detected) is input as a fourth image to the next second learner, which can then detect type B. Thus, even when detection targets take a wide variety of shapes and fall into different types, their types can be detected.
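The cascade of per-type second learners can be sketched as follows. Representing each learner as a boolean detector keyed by type name, and the processing order of the learners, are assumptions for the example.

```python
def cascade_classify(third_images, learners):
    """Run a cascade of per-type second learners: each learner claims the
    images in which it detects its own type; the remainder (the 'fourth
    images') is passed on to the next learner. `learners` maps a type name
    to a boolean detector; names and ordering are illustrative."""
    remaining = list(third_images)
    detected = {}
    for type_name, detector in learners.items():
        detected[type_name] = [img for img in remaining if detector(img)]
        remaining = [img for img in remaining if not detector(img)]
    return detected, remaining
```

Each stage shrinks the set of images the later learners must examine, mirroring the third-image/fourth-image handoff in the text.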
 In the detection device according to the present embodiment, the second learner is generated using, as training data, a plurality of type label images obtained by dividing the first label images, which contain detection targets, by the different types of those detection targets.
 The second learner is generated using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain. For example, when the set of first label images contains detection targets of four types, A, B, C, and D, it is divided into four sets: type-A label images containing type A, type-B label images containing type B, type-C label images containing type C, and type-D label images containing type D. The second learner can then be generated from the set of type-A label images paired with a label indicating type A, from the set of type-B label images paired with a label indicating type B, from the set of type-C label images paired with a label indicating type C, and from the set of type-D label images paired with a label indicating type D.
 The detection device according to the present embodiment includes an acquisition unit that acquires an input image of an imaging target, and a determination unit that determines whether the input image acquired by the acquisition unit contains a predetermined defect target; an input image determined by the determination unit to contain no defect target is used as the first image.
 The acquisition unit acquires an input image of an imaging target. The imaging target is the object under inspection; in the case of a solar cell module, for example, it can be each cell constituting the module, although it is not limited to this.
 The determination unit determines whether the input image contains a predetermined defect target. The determination unit has an image processing function and can judge the presence or absence of a predetermined defect target from the input image. When the imaging target is a cell, the predetermined defect targets include, but are not limited to, single-cell abnormalities, disconnections (also called finger disconnections), cleaning marks (also called water marks), and cracks accompanied by disconnection. An input image determined by the determination unit to contain no defect target is used as the first image. In this way, defects that are comparatively easy to detect are excluded in advance, and processing can be narrowed to only those images that may contain minute detection targets that are difficult to detect.
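The pre-filtering performed by the determination unit can be sketched as a simple gate in front of the learners. The predicate `has_obvious_defect` stands in for the rule-based image-processing check (single-cell abnormality, finger disconnection, water mark, and so on) and is an assumed interface, not the patent's implementation.

```python
def select_first_images(input_images, has_obvious_defect):
    """Keep only the input images in which the rule-based determination
    unit found no obvious defect; those survivors become the 'first
    images' handed to candidate region extraction. `has_obvious_defect`
    is an assumed stand-in for the image-processing check."""
    return [img for img in input_images if not has_obvious_defect(img)]
```

Images rejected here already have a known, easily detected defect, so they never need the learner-based pipeline.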
 The learner according to the present embodiment includes a first learner generated using, as training data, first label images containing a detection target and second label images containing no detection target, and a second learner generated using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain.
 In the learner generation method according to the present embodiment, first label images containing a detection target and second label images containing no detection target are acquired; a first learner is generated using the acquired first and second label images as training data; and a second learner is generated using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain.
 The learner includes a first learner and a second learner. Each can be configured as, for example, a multilayer neural network (deep learning), such as a convolutional neural network, although other machine learning techniques may also be used.
 The first learner is generated using, as training data, first label images containing a detection target and second label images containing no detection target. If detection targets are classified, for convenience, into types A, B, C, and D according to their shape or kind, the first label images can be a set consisting of images containing only type A, images containing only type B, images containing only type C, and images containing only type D. The first learner can be generated from the first label images paired with a "detection target present" label. The second label images can be a set of images containing none of types A, B, C, or D, and the first learner can be generated from those images paired with a "no detection target" label.
 Accordingly, when a second image contains any one of types A, B, C, or D, the presence of a detection target can be detected; when a second image contains none of them, the absence of a detection target can be detected.
 The second learner is generated using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain. For example, when the set of first label images contains detection targets of four types, A, B, C, and D, it is divided into four sets: type-A label images containing type A, type-B label images containing type B, type-C label images containing type C, and type-D label images containing type D. The second learner can then be generated from the set of type-A label images paired with a label indicating type A, from the set of type-B label images paired with a label indicating type B, from the set of type-C label images paired with a label indicating type C, and from the set of type-D label images paired with a label indicating type D.
 Accordingly, when the set of third images contains detection targets of different types, the types of those detection targets can be detected.
 In the learner according to the present embodiment, each first label image contains at least one candidate region containing a detection target candidate.
 Each first label image contains at least one candidate region containing a detection target candidate. "At least one" means that it does not contain a large number of candidate regions; for example, about one, two, or three (a small number). For example, when one training image contains three detection target candidates A, B, and C, the first label images become three images: one containing candidate A, one containing candidate B, and one containing candidate C. Because only a small number of candidate regions are included, the detection accuracy of the first learner can be improved.
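Splitting one training image with several candidates into per-candidate label images can be sketched as below. The `compose` callable is assumed to be the same center-placement step used for second images; all names are illustrative.

```python
def split_into_label_images(training_image, candidate_bboxes, compose):
    """Turn one training image holding several detection target candidates
    into one first label image per candidate, so each label image contains
    only a small number of candidate regions. `compose` is assumed to be
    the center-placement step used when composing second images."""
    return [compose(training_image, bbox) for bbox in candidate_bboxes]
```

A training image with candidates A, B, and C thus yields three label images, one per candidate.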
 In the learner according to the present embodiment, each first label image has a candidate region containing a detection target candidate arranged at a predetermined position in a predetermined image.
 In each first label image, a candidate region containing a detection target candidate is arranged at a predetermined position in a predetermined image. The predetermined image can be, for example, an image with a uniform luminance value (a high-luminance, bright image). The candidate region can be a rectangular region enclosing the detection target candidate, sized according to the candidate's shape and size. The predetermined position can be, for example, the center (midpoint) of the predetermined image; for instance, the candidate region can be placed so that its center coincides with the center of the predetermined image.
 Because the candidate region can occupy the same position in every first label image, minute detection targets can be detected more accurately than when the position of the candidate region varies from image to image.
 In the learner according to the present embodiment, the predetermined image has a white background.
 The predetermined image has a white background. This increases the contrast between a minute detection target and the background.
 In the learner according to the present embodiment, the first label images are images of an imaging target that have been determined to contain no predetermined defect target.
 The first label images are images of an imaging target that have been determined to contain no predetermined defect target. The imaging target is the object under inspection; in the case of a solar cell module, for example, it can be each cell constituting the module, although it is not limited to this. When the imaging target is a cell, the predetermined defect targets include, but are not limited to, single-cell abnormalities, disconnections (also called finger disconnections), cleaning marks (also called water marks), and cracks accompanied by disconnection. In this way, defects that are comparatively easy to detect are excluded in advance, and processing can be narrowed to only those images that may contain minute detection targets that are difficult to detect.
 1 communication network
 10 client device
 50 detection device
 51 control unit
 52 input unit
 53 defect determination unit
 54 candidate region extraction unit
 55 storage unit
 56 image generation unit
 57 learner
 571 first learner
 572, 572a, 572b, 572c, 572n second learner
 58 detection unit
 100 server

Claims (18)

  1.  A detection device comprising: a candidate region extraction unit that performs image processing on a first image to extract one or more candidate regions containing detection target candidates; an image generation unit that generates one or more second images each containing at least one of the candidate regions extracted by the candidate region extraction unit; a first learner; and a detection unit that inputs the second images generated by the image generation unit to the first learner to detect the presence or absence of a detection target.
  2.  The detection device according to claim 1, wherein the image generation unit generates each second image by arranging at least one of the candidate regions extracted by the candidate region extraction unit at a predetermined position in a predetermined image.
  3.  The detection device according to claim 1 or 2, wherein the detection unit inputs each of the second images to the first learner in turn to detect the presence or absence of a detection target.
  4.  The detection device according to any one of claims 1 to 3, wherein the first learner is generated using, as training data, first label images containing a detection target and second label images containing no detection target.
  5.  The detection device according to any one of claims 1 to 4, further comprising: a second learner; and a type detection unit that inputs, to the second learner, third images, being the second images input to the first learner in which a detection target was detected, to detect the type of the detection target.
  6.  The detection device according to claim 5, wherein the type detection unit inputs each of the third images to the second learner in turn and detects the number of detection targets of the same type.
  7.  The detection device according to claim 5 or 6, wherein the type detection unit inputs the third images to the second learner and detects detection targets of different types.
  8.  The detection device according to claim 5 or 6, comprising a plurality of the second learners, wherein the type detection unit inputs, to another second learner, fourth images, being the third images input to one second learner excluding those in which one type was detected, to detect a type different from the one type.
  9.  The detection device according to any one of claims 5 to 8, wherein the second learner is generated using, as training data, a plurality of type label images obtained by dividing first label images containing detection targets by the different types of the detection targets.
  10.  The detection device according to any one of claims 1 to 9, further comprising: an acquisition unit that acquires an input image of an imaging target; and a determination unit that determines whether the input image acquired by the acquisition unit contains a predetermined defect target, wherein an input image determined by the determination unit to contain no defect target is used as the first image.
  11.  A learner comprising: a first learner generated using, as training data, first label images containing a detection target and second label images containing no detection target; and a second learner generated using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain.
  12.  The learner according to claim 11, wherein each first label image contains at least one candidate region containing a detection target candidate.
  13.  The learner according to claim 12, wherein each first label image has a candidate region containing a detection target candidate arranged at a predetermined position in a predetermined image.
  14.  The learner according to claim 13, wherein the predetermined image has a white background.
  15.  The learner according to any one of claims 12 to 14, wherein the first label images are images of an imaging target that have been determined to contain no predetermined defect target.
  16.  A computer program causing a computer to execute: a process of acquiring a first image; a process of performing image processing on the acquired first image to extract one or more candidate regions containing detection target candidates; a process of generating one or more second images each containing at least one of the extracted candidate regions; and a process of inputting the generated second images to a first learner to detect the presence or absence of a detection target.
  17.  A detection method comprising: acquiring a first image; performing image processing on the acquired first image to extract one or more candidate regions containing detection target candidates; generating one or more second images each containing at least one of the extracted candidate regions; and inputting the generated second images to a first learner to detect the presence or absence of a detection target.
  18.  A learner generation method comprising: acquiring first label images containing a detection target and second label images containing no detection target; generating a first learner using the acquired first and second label images as training data; and generating a second learner using, as training data, a plurality of type label images obtained by dividing the first label images by the different types of detection targets they contain.
PCT/JP2020/002630 2019-01-31 2020-01-24 Detecting device, learner, computer program, detecting method, and method for generating learner WO2020158630A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020569594A JPWO2020158630A1 (en) 2019-01-31 2020-01-24 Detector, learner, computer program, detection method and learner generation method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-016262 2019-01-31
JP2019016262 2019-01-31

Publications (1)

Publication Number Publication Date
WO2020158630A1 true WO2020158630A1 (en) 2020-08-06

Family

ID=71840140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/002630 WO2020158630A1 (en) 2019-01-31 2020-01-24 Detecting device, learner, computer program, detecting method, and method for generating learner

Country Status (2)

Country Link
JP (1) JPWO2020158630A1 (en)
WO (1) WO2020158630A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7453552B2 (en) 2020-12-07 2024-03-21 ダイトロン株式会社 Defect evaluation method and program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000057349A (en) * 1998-08-10 2000-02-25 Hitachi Ltd Method for sorting defect, device therefor and method for generating data for instruction
JP2012026982A (en) * 2010-07-27 2012-02-09 Panasonic Electric Works Sunx Co Ltd Inspection device
JP2012181209A (en) * 2012-06-14 2012-09-20 Hitachi Ltd Defect classification method and apparatus therefor
JP2015041164A (en) * 2013-08-20 2015-03-02 キヤノン株式会社 Image processor, image processing method and program
JP2017102906A (en) * 2015-11-25 2017-06-08 キヤノン株式会社 Information processing apparatus, information processing method, and program


Also Published As

Publication number Publication date
JPWO2020158630A1 (en) 2021-10-07

Similar Documents

Publication Publication Date Title
TWI787296B (en) Optical inspection method, optical inspection device and optical inspection system
JP2017049974A Discriminator generator, quality determination method, and program
JP6189127B2 (en) Soldering inspection apparatus, soldering inspection method, and electronic component
JP2006098151A (en) Pattern inspection device and pattern inspection method
JP2021515885A (en) Methods, devices, systems and programs for setting lighting conditions and storage media
US10726535B2 (en) Automatically generating image datasets for use in image recognition and detection
EP2212909B1 (en) Patterned wafer defect inspection system and method
JP2013134666A (en) Binary image generation device, classification device, binary image generation method, and classification method
JP2016181098A (en) Area detection device and area detection method
JP2007147442A (en) Method, device, and program for inspecting wood
JP2010135446A (en) Apparatus and method for inspecting solar battery cell, and recording medium having program of the method recorded thereon
JP2018004272A (en) Pattern inspection device and pattern inspection method
KR20210086303A (en) Pattern inspection apparatus based on deep learning and inspection method using the same
WO2020158630A1 (en) Detecting device, learner, computer program, detecting method, and method for generating learner
JP2020112483A (en) Exterior appearance inspection system, calculation model construction method and calculation model construction program
CN105572133B (en) Flaw detection method and device
JP2014126445A (en) Alignment device, defect inspection device, alignment method and control program
JP2023145412A (en) Defect detection method and system
JP6425468B2 (en) Teacher data creation support method, image classification method, teacher data creation support device and image classification device
JP4015436B2 (en) Gold plating defect inspection system
JP2710527B2 (en) Inspection equipment for periodic patterns
JP2010019646A (en) Imaging processing inspection method and imaging processing inspecting system
KR20210033900A (en) Learning apparatus, inspection apparatus, learning method and inspection method
KR101053779B1 (en) Metal mask inspection method of display means
JP2017156659A (en) Defect inspection device and defect inspection method for color filter

Legal Events

Date Code Title Description

121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20749308; Country of ref document: EP; Kind code of ref document: A1

ENP Entry into the national phase
    Ref document number: 2020569594; Country of ref document: JP; Kind code of ref document: A

NENP Non-entry into the national phase
    Ref country code: DE

122 Ep: pct application non-entry in european phase
    Ref document number: 20749308; Country of ref document: EP; Kind code of ref document: A1