WO2020158098A1 - Information processing device, information processing method, information processing program, learning method, and prelearned model - Google Patents

Information processing device, information processing method, information processing program, learning method, and prelearned model Download PDF

Info

Publication number
WO2020158098A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
information processing
restored
inspection
hidden
Prior art date
Application number
PCT/JP2019/043948
Other languages
French (fr)
Japanese (ja)
Inventor
健 猿渡
悟史 岡本
Original Assignee
Screen Holdings Co., Ltd. (株式会社Screenホールディングス)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Screen Holdings Co., Ltd. (株式会社Screenホールディングス)
Priority to CN201980077404.1A priority Critical patent/CN113168686A/en
Publication of WO2020158098A1 publication Critical patent/WO2020158098A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • The present invention relates to an information processing apparatus, an information processing method, and an information processing program capable of learning from image data of normal inspection objects having no defect and of detecting abnormal inspection objects having a defect, as well as to the learning method used in that learning and the learned model obtained by it.
  • A defect detection technique using machine learning is described in Patent Document 1, for example.
  • Patent Document 1 discloses a determination system 301 capable of determining the presence or absence of a flaw portion Df1 included in an image of an object by using machine learning.
  • The determination system 301 includes a determination device 101, a storage device 131, and a learning device 151. From among the plurality of images stored in the storage device 131, 500 defective product images Sng, which are images of the object including the scratched portion Df1, and 500 non-defective product images Sg, which are images of the object not including the scratched portion Df1, are selected, and each of these images is divided into 63 partial images.
  • In each defective product image, a locus Tr1 is drawn over the scratched portion Df1, and each partial image carries a label indicating whether or not the locus Tr1 is present.
  • The learning device 151 performs machine learning using the plurality of partial images and their labels. The model for which machine learning is completed is then installed in the determination device 101. When image data is input to the model, the model determines whether or not the image data includes the flaw portion Df1 and outputs the determination result.
  • The present invention has been made in view of such circumstances, and its object is to provide a technique capable of detecting abnormal inspection objects having a defect by performing machine learning using images of normal, defect-free inspection objects, which can easily be acquired in large numbers.
  • The first invention of the present application is an information processing apparatus for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects. The apparatus includes: an image restoration unit that generates, from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden, a restored image in which the hidden part is restored; a determination unit that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and an output unit that outputs the determination result of the determination unit. The image restoration unit has already been trained by deep learning so that it can generate, with high accuracy, a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden.
  • The second invention of the present application is the information processing apparatus according to the first invention, wherein the image restoration unit sequentially changes the hidden part of the image data to generate a plurality of restored images, and the determination unit determines whether the inspection object is normal or abnormal by comparing each of the plurality of restored images with the inspection image.
  • The third invention of the present application is the information processing apparatus according to the second invention, wherein, when the difference between a restored image and the inspection image is larger than a predetermined allowable value, the determination unit determines the corresponding hidden part to be the location of a defect, and the output unit further outputs information on the location of the defect.
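The detection flow of the first to third inventions can be sketched as follows. This is a minimal illustration in which `restore_fn` stands in for the learned deep restoration model, and the square mask, stride, and tolerance values are assumptions made for the sketch, not values from the patent:

```python
import numpy as np

def hide_patch(image, top, left, size):
    """Hide (zero out) a square part of the inspection image."""
    masked = image.copy()
    masked[top:top + size, left:left + size] = 0.0
    return masked

def find_defects(image, restore_fn, size, tolerance):
    """Slide the hidden part over the image, restore each hidden patch
    (restore_fn is a stand-in for the learned model), and flag positions
    whose restoration differs from the inspection image by more than the
    allowable value."""
    defects = []
    h, w = image.shape
    for top in range(0, h - size + 1, size):
        for left in range(0, w - size + 1, size):
            restored = restore_fn(hide_patch(image, top, left, size))
            diff = np.abs(restored[top:top + size, left:left + size]
                          - image[top:top + size, left:left + size]).mean()
            if diff > tolerance:
                defects.append((top, left))   # location of the defect
    return defects
```

An inspection object would be judged abnormal when `find_defects` returns a non-empty list; the returned positions correspond to the defect-location information output in the third invention.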
  • The fourth invention of the present application is the information processing apparatus according to any one of the first to third inventions, wherein the image restoration unit performs an encoding process of extracting features from the inspection image to generate a latent variable, and a decoding process of generating the restored image from the latent variable.
  • The fifth invention of the present application is the information processing apparatus according to the fourth invention, wherein the parameters of the encoding process and the decoding process of the image restoration unit have been adjusted by a convolutional neural network in the learning.
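The encode/decode structure of the fourth invention can be illustrated as below. This sketch uses untrained random dense layers purely to show the data flow from image to latent variable to restored image; the patent's image restoration unit is a convolutional network whose parameters are adjusted during learning, and the sizes here are assumptions:

```python
import numpy as np

class TinyAutoencoder:
    """Sketch of the encoding/decoding processes: features are
    compressed into a latent variable, and the restored image is
    decoded from that latent variable."""

    def __init__(self, n_pixels, n_latent, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(0.0, 0.1, (n_latent, n_pixels))
        self.W_dec = rng.normal(0.0, 0.1, (n_pixels, n_latent))

    def encode(self, image):
        # Encoding process: extract features into a latent variable.
        return np.tanh(self.W_enc @ image.ravel())

    def decode(self, latent, shape):
        # Decoding process: generate the restored image from the latent variable.
        return (self.W_dec @ latent).reshape(shape)
```

The key design point shown is the bottleneck: because the latent variable is much smaller than the image, a model trained only on normal tablets can reproduce normal appearance but not unseen defects, which is what makes the comparison-based determination possible.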
  • The sixth invention of the present application is an information processing method for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, comprising: a) a step of learning, by deep learning, a process of generating a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden; b) a step of determining whether an inspection object of unknown condition is normal or abnormal by comparing the inspection image capturing that inspection object with a restored image generated, by the process learned in step a), from image data in which a part of the inspection image is hidden; and c) a step of outputting the determination result of step b).
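Step a) can be sketched with a single linear layer trained by a manual mean-squared-error gradient step, standing in for the deep network; the fixed mask, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

def learn_restoration(normal_images, hidden, lr=0.5, epochs=300):
    """Step a): learn to generate the full image from image data whose
    hidden part is masked out, using normal-object images only.
    A single linear layer with a manual MSE gradient step stands in
    for the deep network of the patent."""
    n = normal_images[0].size
    W = np.zeros((n, n))
    visible = ~hidden.ravel()
    for _ in range(epochs):
        for img in normal_images:
            x = img.ravel() * visible        # input with hidden part removed
            y = img.ravel()                  # target: the full normal image
            W -= lr * np.outer(W @ x - y, x) / n
    return W

def restore(W, image, hidden):
    """Apply the learned process to a masked inspection image (used in step b))."""
    x = image.ravel() * ~hidden.ravel()
    return (W @ x).reshape(image.shape)
```

Because training sees only normal images, the learned process fills the hidden part with normal appearance; in step b) a defect hidden by the mask is therefore replaced by a normal-looking restoration, and the difference against the inspection image reveals it.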
  • The seventh invention of the present application is an information processing program for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects. The program causes a computer to execute: a) an image restoration process of generating a restored image in which the hidden part is restored from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden; b) a determination process of determining whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and c) an output process of outputting the result of the determination process. The image restoration process has already been trained by deep learning so that it can generate, with high accuracy, a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden.
  • The eighth invention of the present application is a learning method for learning, by deep learning, a process of generating a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden.
  • The ninth invention of the present application is a learned model for detecting an abnormal inspection object having a defect, in which a process of generating a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden has been learned by deep learning.
  • The tenth invention of the present application is the information processing apparatus according to any one of the first to fifth inventions, wherein the inspection object is a tablet.
  • According to the present invention, abnormal inspection objects having a defect can be detected by performing machine learning using images of normal inspection objects having no defect, which can easily be acquired in large numbers. This makes it possible to detect a wide variety of defects, including unknown ones, in the inspection object with high accuracy.
  • Further, based on the information on the location of the defect, an operator or the like can easily reconfirm the defect visually in the inspection image or on the inspection object itself.
  • the detection accuracy of the inspection target object having a defect can be further improved.
  • the inspection object having a defect can be detected with high accuracy.
  • FIG. 9 is a schematic diagram showing a state in which a restored image is generated from image data in which a part of a learning image in which a normal tablet is imaged is hidden.
  • FIG. 9 is a schematic diagram showing a state in which a restored image is generated from image data in which a part of the inspection image in which a tablet of which the normal state or abnormal state is unknown is captured is hidden.
  • Hereinafter, an embodiment will be described in which the inspection object is a tablet, which is a pharmaceutical product, and in which a tablet printing apparatus prints an image such as a product name on the tablet by an inkjet method; the associated program will also be described.
  • FIG. 1 is a diagram showing the configuration of the tablet printing apparatus 1.
  • The tablet printing apparatus 1 is a device that, while conveying a plurality of tablets 9, prints images for product identification, such as a product name, a product code, a company name, and a logo mark, on the surface of each tablet 9 by an inkjet method.
  • the tablet 9 of this embodiment has a disc shape (see FIG. 4 described later). However, the shape of the tablet 9 may be another shape such as an elliptical shape.
  • In the following description, the direction in which the plurality of tablets 9 are conveyed is referred to as the "conveyance direction", and the horizontal direction perpendicular to the conveyance direction is referred to as the "width direction".
  • The tablet 9 is formed with a groove-shaped score line 90 for dividing the tablet 9 in half.
  • The surface of the tablet 9 on which the score line 90 is formed will be referred to as the "score line surface".
  • the score line 90 passes through the center of the score line surface and extends straight to both ends of the score line surface.
  • The score line 90 is formed on only one of the upper and lower surfaces of the disk-shaped tablet 9. That is, in this embodiment, only one of the upper surface and the lower surface of the tablet 9 is the score line surface.
  • However, the score lines 90 may be formed on both the upper surface and the lower surface of the disk-shaped tablet 9, that is, on both the front and back surfaces of the tablet 9. Further, in the present embodiment, the product name and the like are printed only on the surface opposite to the score line surface, in a direction along the score line 90 on the back side. However, the printing location on the tablet 9 is not limited to this.
  • The tablet printing apparatus 1 of this embodiment includes a hopper 10, a feeder unit 20, a transport drum 30, a first printing unit 40, a second printing unit 50, a carry-out conveyor 60, and a control unit 70.
  • The hopper 10, the feeder unit 20, the transport drum 30, the first transport conveyor 41 of the first printing unit 40, the second transport conveyor 51 of the second printing unit 50, and the carry-out conveyor 60 form a transport mechanism that conveys the tablets 9 along a predetermined transport path.
  • The hopper 10 is a loading unit for receiving a large number of tablets 9 into the apparatus at once.
  • the hopper 10 is arranged at the top of the housing 100 of the tablet printing apparatus 1.
  • the hopper 10 has an opening 11 located on the upper surface of the housing 100, and a funnel-shaped inclined surface 12 that gradually converges downward.
  • The plurality of tablets 9 charged into the opening 11 flow into the straight feeder 21 along the inclined surface 12.
  • the feeder unit 20 is a mechanism that conveys the plurality of tablets 9 loaded into the hopper 10 to the conveyance drum 30.
  • the feeder unit 20 of the present embodiment has a straight feeder 21, a rotary feeder 22, and a supply feeder 23.
  • the straight feeder 21 has a flat plate-shaped vibrating trough 211.
  • the plurality of tablets 9 supplied from the hopper 10 to the vibrating trough 211 are conveyed to the rotary feeder 22 side by the vibration of the vibrating trough 211.
  • The rotary feeder 22 has a disk-shaped rotary table 221.
  • the plurality of tablets 9 dropped from the vibrating trough 211 onto the upper surface of the rotary table 221 are collected near the outer peripheral portion of the rotary table 221 by the centrifugal force generated by the rotation of the rotary table 221.
  • The supply feeder 23 has a plurality of tubular portions 231 extending vertically downward from the outer peripheral portion of the rotary table 221 to the transport drum 30.
  • FIG. 2 is a perspective view of the vicinity of the transport drum 30.
  • the plurality of tubular portions 231 are arranged in parallel with each other.
  • eight tubular portions 231 are arranged.
  • the plurality of tablets 9 transported to the outer peripheral portion of the rotary table 221 are respectively supplied to any one of the plurality of tubular portions 231, and fall inside the tubular portion 231. Then, a plurality of tablets 9 are stacked in each tubular portion 231.
  • The plurality of tablets 9 are distributed and supplied to the plurality of tubular portions 231, so that the plurality of tablets 9 are aligned in a plurality of transport rows. Then, the plurality of tablets 9 in each transport row are sequentially supplied to the transport drum 30 from the bottom.
  • the transport drum 30 is a mechanism that delivers a plurality of tablets 9 from the supply feeder 23 to the first transport conveyor 41.
  • the transport drum 30 has a substantially cylindrical outer peripheral surface.
  • the transport drum 30 rotates in the direction of the arrow in FIGS. 1 and 2 about a rotation shaft extending in the width direction by the power obtained from the motor.
  • a plurality of holding portions 31 are provided on the outer peripheral surface of the transport drum 30.
  • the holding portion 31 is a concave portion that is recessed inward from the outer peripheral surface of the transport drum 30.
  • the plurality of holding units 31 are arranged along the circumferential direction on the outer peripheral surface of the transport drum 30 at the widthwise positions corresponding to each of the plurality of transport rows described above. Further, a suction hole 32 is provided at the bottom of each holding portion 31.
  • a suction mechanism is provided inside the transport drum 30.
  • When the suction mechanism is operated, a negative pressure lower than the atmospheric pressure is generated in each of the plurality of suction holes 32.
  • the holding unit 31 sucks and holds the tablets 9 supplied from the supply feeder 23 one by one by the negative pressure.
  • A blow mechanism is also provided inside the transport drum 30. The blow mechanism locally blows pressurized gas from the inside of the transport drum 30 toward the first transport conveyor 41 described later.
  • As a result, while the other holding portions 31 maintain their suction of the tablets 9, the suction of the tablet 9 is released only in the holding portion 31 facing the first transport conveyor 41.
  • the transport drum 30 rotates while sucking and holding the plurality of tablets 9 supplied from the supply feeder 23, and can deliver the tablets 9 to the first transport conveyor 41.
  • a first state detection camera 33 is provided at a position facing the outer peripheral surface of the transport drum 30.
  • the first state detection camera 33 is an image pickup unit that takes an image of the state of the tablet 9 held on the transport drum 30.
  • the first state detection camera 33 captures an image of the tablet 9 transported by the transport drum 30, and transmits the obtained image to the control unit 70.
  • Based on the received image, the control unit 70 detects the presence or absence of a tablet 9 in each holding portion 31, as well as the front/back orientation of the tablet 9 held in the holding portion 31 and the direction of its score line 90.
  • the first printing unit 40 is a processing unit for printing an image on one surface of the tablet 9. As shown in FIG. 1, the first printing unit 40 includes a first transport conveyor 41, a second state detection camera 42, a first head unit 43, a first inspection camera 44, and a first fixing unit 45.
  • the first conveyor 41 has a pair of first pulleys 411 and an annular first conveyor belt 412 that is stretched between the pair of first pulleys 411.
  • The first conveyor belt 412 is arranged such that a part of it closely faces the outer peripheral surface of the transport drum 30.
  • One of the pair of first pulleys 411 rotates by the power obtained from the motor.
  • the first conveyor belt 412 rotates in the direction of the arrow in FIGS. 1 and 2.
  • the other of the pair of first pulleys 411 rotates following the rotation of the first conveyor belt 412.
  • the first conveyor belt 412 is provided with a plurality of holding portions 413.
  • the holding portion 413 is a concave portion that is recessed inward from the outer surface of the first conveyor belt 412.
  • the plurality of holding units 413 are arranged in the transport direction at the width direction positions corresponding to each of the plurality of transport rows. That is, the plurality of holding portions 413 are arranged at intervals in the width direction and the conveyance direction.
  • the widthwise spacing between the plurality of holding portions 413 of the first conveyor belt 412 is equal to the widthwise spacing between the plurality of holding portions 31 of the transport drum 30.
  • Suction holes 414 are provided at the bottom of each holding portion 413.
  • the first conveyor 41 has a suction mechanism inside the first conveyor belt 412. When the suction mechanism is operated, a negative pressure lower than the atmospheric pressure is generated in each of the plurality of suction holes 414.
  • the holding unit 413 sucks and holds the tablets 9 delivered from the transport drum 30 one by one by the negative pressure. As a result, the first conveyor 41 conveys the plurality of tablets 9 while holding them in a state of being aligned in a plurality of conveyor rows spaced in the width direction.
  • The first conveyor belt 412 is also provided with a blow mechanism. When the blow mechanism is operated, a positive pressure higher than the atmospheric pressure is generated in the suction holes 414 of the holding portion 413 facing the second transport conveyor 51 described later.
  • As a result, the suction of the tablet 9 in that holding portion 413 is released, and the tablet 9 is delivered from the first transport conveyor 41 to the second transport conveyor 51.
  • the second state detection camera 42 is an image pickup unit that picks up an image of the state of the tablet 9 held on the first transfer conveyor 41 on the upstream side of the first head unit 43 in the transfer direction.
  • the first state detection camera 33 and the second state detection camera 42 image the surfaces of the tablet 9 opposite to each other.
  • the image obtained by the second state detection camera 42 is transmitted from the second state detection camera 42 to the control unit 70.
  • the control unit 70 detects the presence or absence of the tablet 9 in each holding unit 413, the front and back of the tablet 9 held in the holding unit 413, and the orientation of the score line 90 based on the received image.
  • the first head unit 43 is an inkjet type head unit that ejects ink droplets toward the upper surface of the tablet 9 conveyed by the first conveyor 41.
  • the first head unit 43 has four first heads 431 arranged in the transport direction.
  • The four first heads 431 eject ink droplets of mutually different colors toward the upper surface of the tablet 9 held in the holding portion 413.
  • Specifically, the four first heads 431 eject ink droplets of cyan, magenta, yellow, and black, respectively.
  • a multicolor image is printed on the surface of the tablet 9 by superimposing the single color images formed by these colors.
  • As the ink ejected from each first head 431, an edible ink made from raw materials approved under the Japanese Pharmacopoeia, the Food Sanitation Act, and the like is used.
  • FIG. 3 is a bottom view of one first head 431.
  • In FIG. 3, the first conveyor belt 412 and the plurality of tablets 9 held by the first conveyor belt 412 are shown by two-dot chain lines.
  • a plurality of nozzles 430 capable of ejecting ink droplets are provided on the lower surface of the first head 431.
  • the plurality of nozzles 430 are two-dimensionally arranged in the transport direction and the width direction on the lower surface of the first head 431.
  • The nozzles 430 are arranged at positions displaced from one another in the width direction. This allows the widthwise positions of the nozzles 430 to be brought close to one another.
  • the plurality of nozzles 430 may be arranged in a line along the width direction.
  • The ink droplet ejection method may also be a so-called thermal method, in which a heater is energized to thermally expand the ink in the nozzle 430 and eject it.
  • FIG. 4 is a perspective view around the first inspection camera 44.
  • the first inspection camera 44 is an image pickup unit for confirming whether or not the printing by the first head unit 43 is good and whether or not there is a defect in the tablet 9.
  • the first inspection camera 44 images the upper surface of the tablet 9 conveyed to the first conveyor belt 412 on the downstream side of the first head unit 43 in the conveyance direction. Further, the first inspection camera 44 transmits the obtained image to the control unit 70. Based on the received image, the control unit 70 inspects the upper surface of each tablet 9 for defects such as scratches, stains, printing position shifts, and dot missing. The method of detecting these defects will be described in detail later.
  • The eight first inspection cameras 44 are arranged at positions corresponding to the eight tablets 9 arranged in the width direction on the first conveyor belt 412. Each first inspection camera 44 images the tablets 9 at one widthwise position, sequentially imaging the plurality of tablets 9 conveyed in the conveyance direction. However, considering the arrangement space of the eight first inspection cameras 44, they may be arranged so as to be displaced from each other in the transport direction.
  • the first fixing unit 45 is a mechanism that fixes the ink ejected from the first head unit 43 to the tablet 9.
  • the first fixing unit 45 is arranged on the downstream side of the first inspection camera 44 in the transport direction.
  • the first fixing unit 45 may be arranged between the first head unit 43 and the first inspection camera 44.
  • For the first fixing unit 45, for example, a hot-air drying type heater that blows hot air toward the tablets 9 conveyed by the first transport conveyor 41 is used.
  • the ink attached to the surface of the tablet 9 is dried by hot air and fixed on the surface of the tablet 9.
  • the second printing unit 50 is a processing unit for printing an image on the other surface of the tablet 9 after printing by the first printing unit 40.
  • The second printing unit 50 includes a second transport conveyor 51, a third state detection camera 52, a second head unit 53, a second inspection camera 54, a second fixing unit 55, and a defective product collecting unit 56.
  • the second transfer conveyor 51 transfers the plurality of tablets 9 transferred from the first transfer conveyor 41 while holding them.
  • the third state detection camera 52 images the plurality of tablets 9 transported by the second transport conveyor 51 on the upstream side of the second head unit 53 in the transport direction.
  • the second head unit 53 ejects ink droplets toward the upper surface of the tablet 9 conveyed by the second conveyor 51.
  • the second inspection camera 54 images the plurality of tablets 9 transported by the second transport conveyor 51 on the downstream side of the second head unit 53 in the transport direction.
  • the second fixing unit 55 fixes the ink ejected from each head 531 of the second head unit 53 to the tablet 9.
  • The second transport conveyor 51, the third state detection camera 52, the second head unit 53, the second inspection camera 54, and the second fixing unit 55 have substantially the same configurations as the above-described first transport conveyor 41, second state detection camera 42, first head unit 43, first inspection camera 44, and first fixing unit 45, respectively.
  • the defective product collecting unit 56 collects the tablets 9 determined to be defective based on the captured images Ip obtained from the first inspection camera 44 and the second inspection camera 54 described above.
  • The defective product collecting unit 56 includes a blow mechanism arranged inside the second transport conveyor 51 and a collection box 561. When a tablet 9 determined to be defective is transported to the defective product collecting unit 56, the blow mechanism blows pressurized gas toward that tablet 9 from the inside of the second transport conveyor 51. As a result, the tablet 9 falls off the second transport conveyor 51 and is collected in the collection box 561.
  • the carry-out conveyor 60 is a mechanism for carrying out the plurality of tablets 9 determined as non-defective products to the outside of the housing 100 of the tablet printing apparatus 1.
  • the upstream end of the carry-out conveyor 60 is located below the second pulley 511 of the second transfer conveyor 51.
  • the downstream end of the carry-out conveyor 60 is located outside the housing 100.
  • a belt transport mechanism is used for the carry-out conveyor 60, for example.
  • the plurality of tablets 9 that have passed through the defective product collecting unit 56 are dropped from the second conveyor 51 to the upper surface of the carry-out conveyor 60 by releasing the suction of the suction holes. Then, the plurality of tablets 9 are carried out of the housing 100 by the carry-out conveyor 60.
  • FIG. 5 is a block diagram showing the connection between the control unit 70 and each unit in the tablet printing apparatus 1.
  • the control unit 70 includes a computer having a processor 701 such as a CPU, a memory 702 such as a RAM, a storage device 703 such as a hard disk drive, a receiving unit 704, and a transmitting unit 705.
  • the storage device 703 stores a computer program P and data D for executing the printing process and inspection of the tablets 9.
  • the receiving unit 704 and the transmitting unit 705 may be provided separately from the control unit 70.
  • the computer program P is read from the storage medium M storing the program P and stored in the storage device 703 of the control unit 70.
  • Examples of the storage medium M include a CD-ROM, a DVD-ROM, a flash memory and the like.
  • the program P may be input to the control unit 70 via a network.
  • The control unit 70 is communicably connected, via the receiving unit 704 and the transmitting unit 705, to the units described above: the straight feeder 21, the rotary feeder 22, the transport drum 30 (including its motor, suction mechanism, and blow mechanism), the first state detection camera 33, the first transport conveyor 41 (including its motor, suction mechanism, and blow mechanism), the second state detection camera 42, the first head unit 43 (including the plurality of nozzles 430 of each first head 431), and so on.
  • The connection between the control unit 70 and each of these units may be a wired connection such as Ethernet (registered trademark), or wireless communication such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
  • When the control unit 70 receives information from each unit via the receiving unit 704, it temporarily reads the computer program P and the data D stored in the storage device 703 into the memory 702, and the processor 701 performs arithmetic processing based on them. Further, the control unit 70 controls the operation of each of the above units by sending instructions to them via the transmitting unit 705. Thereby, each process for the plurality of tablets 9 proceeds.
  • FIG. 6 is a block diagram conceptually showing a part of the function of the control unit 70 in the tablet printing apparatus 1.
  • the control unit 70 of the present embodiment has an angle recognition unit 71, a head control unit 72, and an inspection unit. These functions are realized by temporarily reading the computer program P and the data D stored in the storage device 703 into the memory 702 and causing the processor 701 to perform arithmetic processing based on the computer program P and the data D.
  • The function of the inspection unit is realized by the information processing device 200, which comprises some or all of the elements of the control unit 70. A learned model generated in advance by machine learning is installed in the information processing device 200.
  • The angle recognition unit 71 has a function of recognizing the rotation angle (the direction of the score line 90) of each tablet 9 being conveyed.
  • The angle recognition unit 71 acquires the images captured by the first state detection camera 33 and the second state detection camera 42, and recognizes, based on these images, the rotation angle of each tablet 9 conveyed by the first transport conveyor 41. Further, the angle recognition unit 71 acquires the image captured by the third state detection camera 52 and recognizes, based on it, the rotation angle of each tablet 9 conveyed by the second transport conveyor 51.
  • As described above, the product name and the like are printed in a direction along the score line 90. Therefore, based on the captured images obtained from the first state detection camera 33 and the second state detection camera 42, the angle recognition unit 71 recognizes the rotation angle (the direction of the score line 90) of each tablet 9 at the time it passes the first head unit 43. Similarly, based on the captured image obtained from the third state detection camera 52, the angle recognition unit 71 recognizes, for each tablet 9, the rotation angle (the direction of the score line 90) at the time it passes the second head unit 53.
  • The angle recognition unit 71 may recognize, for some of the tablets 9, the rotation angle when passing through the first head unit 43 based on the captured image obtained from the first state detection camera 33 and, for the other tablets 9, based on the captured image obtained from the second state detection camera 42.
  • Similarly, the rotation angle when passing through the second head unit 53 may be recognized, for some of the tablets 9, based on the captured image obtained from the third state detection camera 52 and, for the other tablets 9, based on the captured image obtained from the second state detection camera 42.
  • The head control unit 72 has a function of controlling the operations of the first head unit 43 and the second head unit 53. As shown in FIG. 6, the head control unit 72 has a first storage unit 721.
  • the function of the first storage unit 721 is realized by, for example, the storage device 703 described above.
  • the first storage unit 721 stores print image data D1 including information on an image printed on the tablet 9.
  • the image is a product name, a product code, a company name, a logo mark, or the like, and is formed of, for example, a character string including alphabets and numbers (see FIG. 4 and FIG. 7 described later). However, the image may be a mark or an illustration other than the character string.
  • the print image data D1 also includes such information that specifies the print position and the print direction of the image on the tablet 9.
  • When printing on the surface of a tablet 9 as a product, the head control unit 72 reads the print image data D1 from the first storage unit 721. In addition, the head control unit 72 rotates the read print image data D1 according to the rotation angle recognized by the angle recognition unit 71. Then, the head control unit 72 controls the first head 431 or the second head 531 based on the rotated print image data D1. As a result, the image represented by the print image data D1 is printed on the surface of the tablet 9 along the dividing line 90.
  • the configuration of the information processing device 200 will be described.
  • As described above, the function as the inspection unit in the control unit 70 is realized by the information processing device 200, which includes some or all of the constituent elements of the control unit 70.
  • a learned learning model generated by machine learning in advance is installed in the information processing device 200.
  • the information processing device 200 is a device capable of inspecting the tablet 9 for defects such as scratches and detecting an abnormal tablet 9 having a defect.
  • the information processing apparatus 200 has an image restoration unit 201, a determination unit 202, and an output unit 203 as functions.
  • a process of generating a learning model installed in the information processing device 200 by machine learning will be described.
  • the flow of the learning is conceptually illustrated by the broken line in FIG.
  • A plurality of learning images Io (see FIG. 7) in which a normal tablet 9 is imaged are prepared in advance. Specifically, on the downstream side of the first head unit 43 in the transport direction, a large number of tablets 9 without defects such as scratches, among the tablets 9 transported on the first transport belt 412, are imaged by the first inspection camera 44. Then, a plurality of captured images of the upper surface of the tablet 9 are prepared as learning images Io of the normal tablet 9. In this embodiment, 1000 learning images Io are prepared. Note that, in general, the machine learning itself is performed outside the tablet printing apparatus 1. The plurality of learning images Io are input to the image restoration unit 201.
  • The image restoration unit 201 divides each learning image Io into a plurality of sections (see FIG. 8). In this embodiment, each learning image Io is divided into a total of 16 sections (sections S1 to S16), four vertically and four horizontally. However, the number of divisions of the learning image Io is not limited to this. Further, in this embodiment, the divided sections S1 to S16 have the same size. However, the learning image Io may be divided into a plurality of sections having different sizes.
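As a concrete illustration of the division described above, the following sketch (not part of the patent; plain NumPy, assuming the 4 × 4 grid and equally sized sections of this embodiment) splits an image into the sections S1 to S16 in row-major order:

```python
import numpy as np

def split_into_sections(img: np.ndarray, rows: int = 4, cols: int = 4):
    """Split an H x W image into rows*cols equally sized sections.

    Returns a list of (top, left, section) tuples in row-major order,
    corresponding to sections S1 .. S16 of the embodiment.
    """
    h, w = img.shape[:2]
    sh, sw = h // rows, w // cols
    sections = []
    for r in range(rows):
        for c in range(cols):
            top, left = r * sh, c * sw
            sections.append((top, left, img[top:top + sh, left:left + sw]))
    return sections
```

Unequal section sizes, as the modification allows, would only change how the boundaries are computed.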
  • the image restoration unit 201 creates image data Ih in which one of the sections S1 to S16 of each learning image Io is hidden.
  • image data Ih in which the section S2 of the sections S1 to S16 of the learning image Io is hidden is illustrated.
  • the image restoration unit 201 of this embodiment creates 16 image data Ih for each learning image Io while hiding one of the sections S1 to S16 in order from the section S1.
  • the image restoration unit 201 creates 16 pieces of image data Ih for each of the 1000 learning images Io, that is, a total of 16000 pieces of image data Ih.
  • the image restoration unit 201 may create a predetermined number of image data Ih for each learning image Io while randomly hiding one of the sections S1 to S16 using a random generator.
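The creation of the image data Ih can be sketched as follows. This is an illustrative assumption in which "hiding" a section is modelled by overwriting it with a constant fill value; the patent does not specify how hidden pixels are represented:

```python
import numpy as np

def make_masked_copies(img: np.ndarray, rows: int = 4, cols: int = 4, fill: float = 0.0):
    """Create rows*cols copies of `img`, each with one section hidden.

    The i-th copy (row-major) has section i overwritten with `fill`;
    the copies correspond to the 16 image data Ih per learning image Io.
    """
    h, w = img.shape[:2]
    sh, sw = h // rows, w // cols
    masked = []
    for r in range(rows):
        for c in range(cols):
            copy = img.copy()
            copy[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw] = fill
            masked.append(copy)
    return masked
```

Applied to the 1000 learning images Io of the embodiment, this yields the 16000 image data Ih mentioned in the text.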
  • The image restoration unit 201 performs learning processing by deep learning so that a restored image Ir in which the hidden part is restored can be generated from each image data Ih with high accuracy. Specifically, using the original learning image Io, from which each image data Ih was generated, as the teacher data (that is, the correct-answer data), the image restoration unit 201 performs machine learning of the learning model X(a, b, c...) relating to the image restoration process for generating the restored image Ir with high accuracy. Note that FIG. 8 illustrates, as an example, how the restored image Ir in which the hidden section S2 is restored is generated from the image data Ih with high accuracy.
  • the image restoration unit 201 repeatedly executes, by the convolutional neural network, an encoding process for extracting a feature from the image data Ih to generate a latent variable and a decoding process for generating a restored image Ir from the latent variable.
  • Examples of the convolutional neural network include U-Net and FusionNet.
  • For the learning, an error backpropagation method or a gradient descent method is used.
  • Through the learning, the parameters of the encoding process and the decoding process are adjusted, updated, and saved.
  • The parameters of the encoding process and the decoding process correspond to the plurality of parameters a, b, c... of the learning model X(a, b, c...).
  • the image restoration unit 201 may perform learning once using each image data Ih or may perform learning a plurality of times.
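The patent specifies an encoder-decoder convolutional network such as U-Net or FusionNet trained by backpropagation. As a minimal stand-in, the following toy sketch uses a two-layer linear autoencoder in NumPy to show the shape of the learning loop: encode the masked image to a latent variable, decode a restored image, and adjust the parameters (the a, b, c, ... of the learning model X) by gradient descent on the reconstruction error against the teacher image. The layer sizes and learning rate are arbitrary assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, lr = 64, 16, 1e-3              # pixels per image, latent size, learning rate
W1 = rng.normal(0.0, 0.1, (k, d))    # encoder parameters
W2 = rng.normal(0.0, 0.1, (d, k))    # decoder parameters

def training_step(x_masked: np.ndarray, y_full: np.ndarray) -> float:
    """One update: encode, decode, then backpropagate the squared
    reconstruction error against the teacher image (the unmasked original)."""
    global W1, W2
    z = W1 @ x_masked                # encoding: extract features -> latent variable
    y_hat = W2 @ z                   # decoding: latent variable -> restored image
    err = y_hat - y_full             # reconstruction error vs. teacher data
    dW2 = np.outer(err, z)           # gradients of 0.5 * ||err||^2
    dW1 = np.outer(W2.T @ err, x_masked)
    W2 -= lr * dW2
    W1 -= lr * dW1
    return 0.5 * float(err @ err)

# In training, this step is repeated over (masked image, original image) pairs,
# e.g. the 16 masked copies of each of the 1000 learning images Io.
```

A real implementation would replace the two matrices with a convolutional encoder-decoder, but the parameter-adjustment loop has the same structure.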
  • the method of machine learning for the image restoration process for generating the restored image Ir with high accuracy is not limited to this.
  • In addition to the learning model X(a, b, c...) that generates the restored image Ir, the image restoration unit 201 may further have a learning model Y(p, q, r...) that compares the generated restored image Ir with the learning image Io and determines which is the real image.
  • The learning model X(a, b, c...) and the learning model Y(p, q, r...) may be machine-learned alternately while competing with each other, forming a generative adversarial network.
  • Examples of the generative adversarial network include GAN and pix2pix.
  • The tablet printing apparatus 1 can detect the defect De of the tablet 9 by using the learning model X(a, b, c...).
  • When the defect detection of the tablet 9 is performed, first, the information processing apparatus 200 in the tablet printing apparatus 1 acquires, from the first inspection camera 44 on the downstream side of the first head unit 43 in the transport direction, the captured image Ip of the tablet 9 transported on the first transport belt 412. Further, the information processing apparatus 200 acquires, from the second inspection camera 54 on the downstream side of the second head unit 53 in the transport direction, the captured image Ip of the tablet 9 transported on the second transport belt 512.
  • The inspection image Ii is an image in which the presence or absence of a defect is unknown, that is, an image in which a tablet 9 whose normality or abnormality is unknown is captured.
  • the inspection image Ii of the tablet 9 has a defect De at a position located in a section S15 described later. Further, in this embodiment, a scratch is assumed as the defect De.
  • the defect De may be dirt due to ink, a print position shift, a dot dropout, or the like.
  • the image restoration unit 201 divides each inspection image Ii into a total of 16 sections (sections S1 to S16) of the same four vertical sections and four horizontal sections as at the time of learning.
  • the image restoration unit 201 creates image data Ih in which one of the sections S1 to S16 of each inspection image Ii is hidden.
  • FIG. 9 and FIG. 10 each show a state in which a restored image Ir, in which one hidden section is restored, is generated with high accuracy from the image data Ih.
  • In particular, FIG. 9 shows that a restored image in which the section S1 is restored (hereinafter referred to as "restored image Ir1" for ease of description) is generated with high accuracy from the image data in which the section S1 is hidden (hereinafter referred to as "image data Ih1").
  • FIG. 10 shows that a restored image Ir15 in which the section S15 is restored is generated with high accuracy from the image data Ih15 in which the section S15 is hidden. Since the section S15 is hidden, the image restoration unit 201 cannot recognize the defect De; the defect De is displayed in white in the image data Ih of FIG. 10 merely for ease of description.
  • As in the learning, the image restoration unit 201 performs, by the convolutional neural network, the encoding process of extracting features from the image data Ih, in which a part of the inspection image Ii is hidden, to generate a latent variable, and the decoding process of generating a restored image Ir from the latent variable. Using the learning model X(a, b, c...) learned at the time of learning, the image restoration unit 201 generates a plurality of restored images Ir while sequentially changing the hidden part of the image data Ih.
  • Specifically, the image restoration unit 201 first generates, using the learning model X(a, b, c...), the restored image Ir1 in which the section S1 is restored from the image data Ih1 in which the section S1 of the inspection image Ii is hidden, and outputs it to the determination unit 202. Next, the image restoration unit 201 generates, using the learning model X(a, b, c...), the restored image Ir2 in which the section S2 is restored from the image data Ih2 in which the section S2 of the inspection image Ii is hidden, and outputs it to the determination unit 202. The image restoration unit 201 repeats such restoration processing while sequentially changing the hidden location.
  • Eventually, the image restoration unit 201 generates, using the learning model X(a, b, c...), the restored image Ir15 in which the section S15 is restored from the image data Ih15 in which the section S15 of the inspection image Ii is hidden, and outputs it to the determination unit 202. Finally, the image restoration unit 201 generates, using the learning model X(a, b, c...), the restored image Ir16 in which the section S16 is restored from the image data Ih16 in which the section S16 of the inspection image Ii is hidden, and outputs it to the determination unit 202.
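The inference loop just described can be sketched as follows, where `restore_fn` is a placeholder callable standing in for the learned model X(a, b, c...), since the actual network is outside the scope of this sketch:

```python
import numpy as np

def generate_restorations(inspection_img: np.ndarray, restore_fn,
                          rows: int = 4, cols: int = 4):
    """Generate one restored image Ir per hidden section.

    For each section (row-major, S1 .. S16), the section is hidden in a
    copy of the inspection image Ii and `restore_fn` (the learned model)
    produces the corresponding restored image.
    """
    h, w = inspection_img.shape[:2]
    sh, sw = h // rows, w // cols
    restored = []
    for r in range(rows):
        for c in range(cols):
            masked = inspection_img.copy()
            masked[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw] = 0.0
            restored.append(restore_fn(masked))
    return restored
```

The resulting list (Ir1 .. Ir16) is then handed to the determination unit.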
  • Here, the learned learning model X(a, b, c...) is a model whose parameters have been adjusted so as to generate, from image data Ih in which a part of an image of a normal tablet 9 without the defect De is hidden, a restored image Ir in which the hidden part is restored. Therefore, as shown in FIG. 9, when the image restoration unit 201 generates the restored image Ir1, using the learning model X(a, b, c...), from the image data Ih1 in which the section S1 of the inspection image Ii is hidden, the section S1, where no defect De exists, is accurately restored in the restored image Ir1.
  • On the other hand, as shown in FIG. 10, the image restoration unit 201 also generates the restored image Ir15, using the learning model X(a, b, c...), from the image data Ih15 in which the section S15 of the inspection image Ii, where the defect De is present, is hidden.
  • Since the section S15 is hidden, the image restoration unit 201 cannot recognize the defect De. Therefore, although the defect De exists in the section S15 of the inspection image Ii, the image restoration unit 201 generates the restored image Ir15 without the defect De, never having recognized its presence.
  • The determination unit 202 compares each of the plurality of restored images Ir with the inspection image Ii to determine whether the tablet 9 is a normal one without the defect De or an abnormal one having the defect De, and outputs the determination result Dr to the output unit 203. Specifically, the determination unit 202 first compares the restored image Ir1 generated by the image restoration unit 201 with the inspection image Ii, and determines whether the difference in pixel value between the restored image Ir1 and the inspection image Ii is larger than a predetermined allowable value.
  • Next, the determination unit 202 compares the restored image Ir2 generated by the image restoration unit 201 with the inspection image Ii, and determines whether the difference in pixel value between the restored image Ir2 and the inspection image Ii is larger than the predetermined allowable value. The determination unit 202 executes such determination processing on all the restored images Ir. Eventually, the determination unit 202 compares the restored image Ir15 generated by the image restoration unit 201 with the inspection image Ii, and determines whether the difference in pixel value between the restored image Ir15 and the inspection image Ii is larger than the predetermined allowable value.
  • Finally, the determination unit 202 compares the restored image Ir16 generated by the image restoration unit 201 with the inspection image Ii, and determines whether the difference in pixel value between the restored image Ir16 and the inspection image Ii is larger than the predetermined allowable value.
  • the restored image Ir15 generated by the image restoration unit 201 has no defect De.
  • a defect De is present at a position located in the section S15 in the inspection image Ii. Therefore, the difference in pixel value between the restored image Ir15 and the inspection image Ii becomes a significantly large value, unlike other comparison results.
  • When the difference in pixel value between a restored image Ir and the inspection image Ii is larger than the predetermined allowable value, the determination unit 202 determines the location that was hidden in the image data Ih from which that restored image Ir was generated as the location where the defect De exists. Then, the determination unit 202 outputs the determination result Dr concerning the presence or absence of the defect De and the location of the defect De to the output unit 203.
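A sketch of the determination process: each restored image Ir is compared with the inspection image Ii, and the hidden section of its source image data is reported as a defect location when the difference exceeds the allowable value. The mean absolute pixel difference is used here as the distance; the text only speaks of a "difference in pixel value", so this particular metric is an assumption:

```python
import numpy as np

def judge(inspection_img: np.ndarray, restored_imgs, tol: float):
    """Compare each restored image Ir with the inspection image Ii.

    restored_imgs[i] is the restoration generated from the image data
    whose i-th section (row-major) was hidden.  A section index is
    reported as a defect location when the mean absolute pixel
    difference exceeds the allowable value `tol` (metric assumed).
    """
    defect_sections = []
    for i, restored in enumerate(restored_imgs):
        diff = float(np.abs(restored - inspection_img).mean())
        if diff > tol:
            defect_sections.append(i)
    return (len(defect_sections) > 0, defect_sections)
```

In the FIG. 10 scenario, only the restoration of the hidden section S15 differs markedly from the inspection image, so only that section index would be reported.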
  • As a modification, the determination unit 202 may connect the restored images of the hidden sections of the image data Ih, from which the respective restored images Ir were generated, into a single composite image, compare the composite image with the entire inspection image Ii, and determine whether the difference in pixel value is larger than the predetermined allowable value.
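This modification of stitching the restored hidden sections into one composite image before a single comparison can be sketched as:

```python
import numpy as np

def stitch_restored(restored_imgs, rows: int = 4, cols: int = 4):
    """Assemble the restored hidden sections into one composite image.

    restored_imgs[i] is the restoration where section i (row-major) was
    hidden; only that section is taken from each restoration, so the
    composite consists entirely of restored (never directly observed)
    pixels and can be compared once against the whole inspection image.
    """
    h, w = restored_imgs[0].shape[:2]
    sh, sw = h // rows, w // cols
    composite = np.empty_like(restored_imgs[0])
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            block = restored_imgs[i][r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            composite[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw] = block
    return composite
```

The composite can then be compared with the inspection image Ii in a single pass instead of sixteen.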
  • The output unit 203 outputs information regarding the presence or absence of the defect De in the tablet 9 and the location of the defect De to a monitor or a speaker, and also transmits information about the tablet 9 having the defect De to the defective product recovery unit 56 so that the tablet is collected.
  • the output unit 203 may further display that effect.
  • As described above, the abnormal tablet 9 having the defect De can be detected by performing machine learning using images of the normal tablet 9 without the defect De, which can easily be acquired in large numbers. As a result, a wide variety of defects De in the tablet 9, including unknown ones, can be detected with high accuracy.
  • Further, the output unit 203 outputs not only the presence or absence of the defect De but also information regarding the location of the defect De. An operator or the like can therefore easily recheck a tablet 9 determined to have the defect De by using the information on the location of the defect De, which further improves the detection accuracy for the tablet 9 having the defect De.
  • Further, the image restoration unit 201 repeatedly executes, by the convolutional neural network, the encoding process of extracting features from the image data Ih to generate a latent variable and the decoding process of generating a restored image Ir from the latent variable. Therefore, even if the position of the tablet 9 in the inspection image Ii or the learning image Io is slightly displaced, or even if the inspection image Ii or the learning image Io contains some noise, the tablet 9 having the defect De can be detected with high accuracy.
  • learning and detection of the defect De in the tablet 9 are performed using the image of the upper surface of the tablet 9 after the tablet 9 is printed.
  • the learning and the detection of the defect De in the tablet 9 may be performed using the image of the tablet 9 before the printing process is performed on the tablet 9.
  • learning and detection of the defect De in the tablet 9 may be performed using an image obtained by imaging the tablet 9 from an oblique direction. As a result, it is possible to detect not only the front and back surfaces of the tablet 9 but also the defects De existing on the side surface of the tablet 9.
  • In the above embodiment, the learning model X(a, b, c...) whose machine learning has been completed outside the tablet printing apparatus 1 is installed in the information processing apparatus 200, and the defect De in the tablet 9 is detected. However, the learning model X(a, b, c...) may be machine-learned while already installed in the information processing apparatus 200 within the tablet printing apparatus 1, and the defect De in the tablet 9 may then be detected as it is.
  • the tablet 9 which is a pharmaceutical product is used as an example of the inspection target.
  • the information processing apparatus 200 of the above-described embodiment is for determining the presence or absence of a defect De such as a scratch, stain, printing position shift, or dot missing on the tablet 9 that is the inspection object and the location of the defect De.
  • the inspection object may be a base material such as a film or paper, a printed circuit board, or the like, which is subjected to a printing process in various printing devices, or a component or the like used in various devices. That is, the inspection object may be an object having a substantially constant appearance in the normal case.
  • the information processing device 200 may determine the presence or absence of a defect De on the appearance and the location of the defect De in the inspection object.
  • As described above, the information processing apparatus of the present invention is an information processing apparatus that detects an abnormal inspection object having a defect by using a set of image data of normal inspection objects. It includes an image restoration unit that generates, from image data in which a part of an inspection image of an inspection object whose normality or abnormality is unknown is hidden, a restored image in which the hidden part is restored; a determination unit that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and an output unit that outputs the determination result of the determination unit.
  • The image restoration unit has been trained by deep learning so that it can generate, with high accuracy, a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images of normal inspection objects is hidden.
  • For example, the parameters of the encoding process and the decoding process of the image restoration unit may have been adjusted by a convolutional neural network when the learning is completed.
  • The information processing method of the present invention is an information processing method for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, and includes: a) a step of learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images of normal inspection objects is hidden; b) a step of determining whether the inspection object is normal or abnormal by comparing, with the inspection image, a restored image restored using the process learned in step a) from image data in which a part of an inspection image of an inspection object whose normality or abnormality is unknown is hidden; and c) a step of outputting the determination result of step b).
  • The information processing program executed by the information processing apparatus of the present invention is an information processing program for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects. It causes a computer to perform: a) an image restoration process of generating a restored image in which a hidden part is restored from image data in which a part of an inspection image of an inspection object whose normality or abnormality is unknown is hidden; b) a determination process of determining whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and c) an output process of outputting the determination result of the determination process. The image restoration process has been trained by deep learning so that it can accurately generate a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images of normal inspection objects is hidden.
  • The learning method of the present invention learns, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images of normal inspection objects is hidden.
  • The learned model of the present invention is a model trained by deep learning on a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images of normal inspection objects is hidden.
  • the first printing unit 40 and the second printing unit 50 each have four heads.
  • the number of heads included in each of the printing units 40 and 50 may be 1 to 3, or 5 or more.
  • the detailed configuration of the tablet printing apparatus 1 may be different from the drawings of the present application. Further, the respective elements appearing in the above-described embodiments and modified examples may be appropriately combined within a range where no contradiction occurs.
  • 1 Tablet printing device; 9 Tablet; 10 Hopper; 20 Feeder section; 30 Conveying drum; 33 First state detection camera; 40 First printing section; 41 First transport conveyor; 42 Second state detection camera; 43 First head unit; 44 First inspection camera; 45 First fixing part; 50 Second printing section; 51 Second transport conveyor; 52 Third state detection camera; 53 Second head unit; 54 Second inspection camera; 55 Second fixing part; 56 Defective product recovery unit; 60 Carry-out conveyor; 70 Control unit; 71 Angle recognition unit; 90 Dividing line; 100 Housing; 200 Information processing device; 201 Image restoration unit; 202 Determination unit; 203 Output unit; 411 First pulley; 412 First conveyor belt; 431 First head; 511 Second pulley; 512 Second conveyor belt; 531 Second head; 561 Collection box; 701 Processor; 702 Memory; 703 Storage device; 704 Reception unit; 705 Transmission unit; D Data; D1 Print image data; De Defect; Dr Determination result; Ih Image data; Ii Inspection image; Io Learning image; Ip Captured image; Ir Restored image; P Computer program; X Learning model; Y Learning model

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

This information processing device detects an inspection object that has a defect, using a set of image data of defect-free inspection objects. The device comprises an image restoration unit, a determination unit, and an output unit. The image restoration unit generates a restored image (Ir), in which a hidden portion is restored, from image data (Ih) in which a portion of an inspection image of an inspection object for which the presence of a defect is unknown is hidden. The determination unit compares the restored image (Ir) with the inspection image and thereby determines whether or not there is a defect. The output unit outputs the result of the determination. The image restoration unit is pretrained by deep learning so as to be capable of generating the restored image (Ir), in which the hidden portion is restored, from image data in which a portion of each of a plurality of learning images of a defect-free inspection object is hidden. Thus, it is possible to use images of defect-free inspection objects, which are easily obtainable in large numbers, as learning data and to perform machine learning for detecting an inspection object having a defect.

Description

Information processing apparatus, information processing method, information processing program, learning method, and learned model
The present invention relates to an information processing apparatus, an information processing method, and an information processing program that perform learning using image data of normal, defect-free inspection objects and can thereby detect an abnormal inspection object having a defect, and to a learning method used at the time of that learning and a learned model.
Conventionally, techniques for detecting an abnormal inspection object having a defect by using image processing are known. Particularly in recent years, the introduction of techniques applying machine learning has been progressing. A defect detection technique using machine learning is described in Patent Document 1, for example.
Japanese Patent Laid-Open No. 2018-81629
Patent Document 1 discloses a determination system 301 capable of determining, by using machine learning, the presence or absence of a flaw portion Df1 included in an image of an object. The determination system 301 includes a determination device 101, a storage device 131, and a learning device 151. Among the plurality of images stored in the storage device 131, 500 defective-product images Sng, which are images of objects including the flaw portion Df1, and 500 non-defective-product images Sg, which are images of objects not including the flaw portion Df1, are selected, and each of these images is divided into 63 partial images. When a partial image includes the flaw portion Df1, a locus Tr1 is drawn over the flaw portion Df1, and a label indicating the presence or absence of the locus Tr1 is attached. Next, the learning device 151 performs machine learning using the plurality of partial images and the labels. The model for which machine learning is completed is then introduced into the determination device 101. When image data is input to the model, whether or not the image data includes the flaw portion Df1 is determined, and the determination result is output.
However, when performing machine learning for detecting an abnormal inspection object having a defect, images of at least several thousand to several million inspection objects must be used as learning data. On the other hand, in the manufacturing process of industrial products, defects do not occur frequently, and it is practically difficult to acquire several thousand to several million images of inspection objects having defects. Further, the types and states of defects are diverse, including unknown ones, and it is even more difficult to acquire images covering defects of all types and states.
The present invention has been made in view of such circumstances, and its object is to provide a technique capable of detecting an abnormal inspection object having a defect by performing machine learning using images of normal, defect-free inspection objects, which can easily be acquired in large numbers.
 上記課題を解決するため、本願の第1発明は、正常な検査対象物の画像データの集合を用いて、欠陥を有する異常な検査対象物を検出する情報処理装置であって、正常か異常か不明な検査対象物が撮像された検査画像の一部が隠された画像データから、前記隠された一部が復元された復元画像を生成する画像復元部と、前記復元画像を、前記検査画像と比較することによって、検査対象物が正常か異常かを判定する判定部と、前記判定部による判定結果を出力する出力部と、を備え、前記画像復元部は、正常な検査対象物が撮像された複数の学習画像のそれぞれの一部が隠された画像データから、前記隠された一部が復元された復元画像を高精度に生成できるように、ディープラーニングにより学習済みである。 In order to solve the above-mentioned problems, the first invention of the present application is an information processing apparatus for detecting an abnormal inspection object having a defect by using a set of image data of a normal inspection object. An image restoration unit that generates a restored image in which the hidden part is restored from image data in which a part of the inspection image in which an unknown inspection target is captured is hidden, and the restored image, And an output unit that outputs a determination result by the determination unit, and the image restoration unit captures a normal inspection target image. The learning has already been performed by deep learning so that a restored image in which the hidden part is restored can be generated with high accuracy from image data in which a part of each of the plurality of learned images that have been hidden is hidden.
 本願の第2発明は、第1発明の情報処理装置であって、前記画像復元部は、前記画像データのうち、前記隠された一部の場所を順次に変更しながら、複数の前記復元画像を生成し、前記判定部は、複数の前記復元画像のそれぞれと、前記検査画像とを比較することによって、検査対象物が正常か異常かを判定する。 A second invention of the present application is the information processing apparatus according to the first invention, wherein the image restoration unit sequentially changes a part of the hidden part of the image data, and a plurality of the restored images. And the determination unit determines whether the inspection target is normal or abnormal by comparing each of the plurality of restored images with the inspection image.
 A third invention of the present application is the information processing apparatus of the second invention, wherein, when the difference between the restored image and the inspection image is larger than a predetermined tolerance, the determination unit determines the location of the hidden part to be the location of a defect, and the output unit further outputs information on the location of the defect.
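 Because each restored image is tied to one known hidden-window position, defect localization reduces to reporting every position whose restored/actual difference exceeds the tolerance. A sketch under the assumption that restored images are keyed by window position (the mean-absolute-difference metric and names are illustrative):

```python
def window_difference(restored, inspection, top, left, win):
    """Mean absolute pixel difference inside one hidden window."""
    total = 0.0
    for r in range(top, top + win):
        for c in range(left, left + win):
            total += abs(restored[r][c] - inspection[r][c])
    return total / (win * win)

def defect_locations(restored_by_pos, inspection, win, tolerance):
    """Report every hidden-window position whose restored/actual
    difference exceeds the tolerance as a defect location."""
    return [
        pos
        for pos, restored in restored_by_pos.items()
        if window_difference(restored, inspection,
                             pos[0], pos[1], win) > tolerance
    ]
```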
 A fourth invention of the present application is the information processing apparatus of any one of the first through third inventions, wherein the image restoration unit executes an encoding process that extracts features from the inspection image to generate a latent variable, and a decoding process that generates the restored image from the latent variable.
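 The encode/decode structure compresses the image into a smaller latent representation and expands it back. As a loose, dependency-free illustration only, the toy pair below uses 2x2 mean pooling as "encode" and nearest-neighbour upsampling as "decode"; a real restorer learns both mappings by deep learning rather than using fixed operations.

```python
def encode(image):
    """Toy encode: 2x2 mean pooling compresses the image into a
    quarter-size 'latent variable' (stand-in for a learned extractor)."""
    h, w = len(image) // 2, len(image[0]) // 2
    return [
        [(image[2 * r][2 * c] + image[2 * r][2 * c + 1]
          + image[2 * r + 1][2 * c] + image[2 * r + 1][2 * c + 1]) / 4.0
         for c in range(w)]
        for r in range(h)
    ]

def decode(latent):
    """Toy decode: nearest-neighbour upsampling expands the latent back
    to the original size (stand-in for a learned generator)."""
    return [
        [latent[r // 2][c // 2] for c in range(2 * len(latent[0]))]
        for r in range(2 * len(latent))
    ]
```

 Because the latent discards fine detail, the decoder can only reproduce what is typical of the training data, which is exactly why a trained restorer repaints defects as normal appearance.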
 A fifth invention of the present application is the information processing apparatus of the fourth invention, wherein the parameters of the encoding process and the decoding process of the image restoration unit have been adjusted by a convolutional neural network during the learning.
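 The core operation of a convolutional neural network is 2-D convolution with learned kernels. A minimal single-channel "valid" convolution (cross-correlation, as in most deep-learning frameworks) can be written as follows; the kernel values here are fixed placeholders standing in for learned parameters.

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation) of a single-channel
    image with a small kernel; output shrinks by kernel size minus one."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [sum(image[r + i][c + j] * kernel[i][j]
             for i in range(kh) for j in range(kw))
         for c in range(out_w)]
        for r in range(out_h)
    ]
```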
 A sixth invention of the present application is an information processing method for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, the method including: a) a step of learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden; b) a step of determining whether an inspection object is normal or abnormal by comparing, with an inspection image capturing an inspection object of unknown condition (normal or abnormal), a restored image generated, using the process learned in step a), from image data in which a part of the inspection image is hidden; and c) a step of outputting the determination result of step b).
 A seventh invention of the present application is an information processing program for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, the program causing a computer to execute: a) an image restoration process of generating, from image data in which a part of an inspection image capturing an inspection object of unknown condition (normal or abnormal) is hidden, a restored image in which the hidden part has been restored; b) a determination process of determining whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and c) an output process of outputting the determination result of the determination process. The image restoration process has been trained by deep learning so that it can generate, with high accuracy, a restored image in which the hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden.
 An eighth invention of the present application is a learning method for learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, in order to detect an abnormal inspection object having a defect.
 A ninth invention of the present application is a learned model that has learned, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, in order to detect an abnormal inspection object having a defect.
 A tenth invention of the present application is the information processing apparatus of any one of the first through fifth inventions, wherein the inspection object is a tablet.
 According to the first through tenth inventions of the present application, an abnormal inspection object having a defect can be detected by performing machine learning using images of normal, defect-free inspection objects, which can easily be acquired in large numbers. This makes it possible to detect, with high accuracy, a wide variety of defects in inspection objects, including previously unknown ones.
 In particular, according to the third invention of the present application, an operator or the like can easily reconfirm a defect visually, in the inspection image or on the inspection object itself, based on the information on the location of the defect. This can further improve the accuracy of detecting inspection objects having defects.
 In particular, according to the fourth or fifth invention of the present application, an inspection object having a defect can be detected with high accuracy even when the position of the inspection object in the inspection image is slightly shifted.
FIG. 1 is a diagram showing the configuration of the tablet printing apparatus.
FIG. 2 is a perspective view of the vicinity of the transport drum.
FIG. 3 is a bottom view of a head.
FIG. 4 is a perspective view of the vicinity of an inspection camera.
FIG. 5 is a block diagram showing the connections between the control unit and the other units in the tablet printing apparatus.
FIG. 6 is a block diagram conceptually showing some of the functions of the control unit in the tablet printing apparatus.
FIG. 7 is a diagram showing an example of a learning image in which a normal tablet is captured.
FIG. 8 is a schematic diagram showing how a restored image is generated from image data in which a part of a learning image capturing a normal tablet is hidden.
FIG. 9 is a schematic diagram showing how a restored image is generated from image data in which a part of an inspection image capturing a tablet of unknown condition (normal or abnormal) is hidden.
FIG. 10 is a schematic diagram showing how a restored image is generated from image data in which a part of an inspection image capturing a tablet of unknown condition (normal or abnormal) is hidden.
 Embodiments of the present invention will now be described with reference to the drawings. In one embodiment of the present invention, a tablet, which is a pharmaceutical product, is taken as an example of the inspection object. An apparatus, method, and program will be described that, after recording an image such as a product name on the surface of the tablet by an inkjet method, inspect the tablet for defects such as stains and scratches and can detect an abnormal tablet having a defect.
 <1. Overall Configuration of the Tablet Printing Apparatus>
 The overall configuration of a tablet printing apparatus 1, which includes an information processing apparatus 200 (described later) that detects defects of tablets 9 according to one embodiment of the present invention, will be described with reference to FIG. 1. FIG. 1 is a diagram showing the configuration of the tablet printing apparatus 1. The tablet printing apparatus 1 is an apparatus that, while transporting a plurality of tablets 9, prints images such as a product name, a product code, a company name, and a logo mark on the surface of each tablet 9 by an inkjet method for the purpose of product identification. The tablet 9 of this embodiment has a disc shape (see FIG. 4, described later). However, the tablet 9 may have another shape, such as an elliptical shape. In the following description, the direction in which the plurality of tablets 9 are transported is referred to as the "transport direction", and the direction perpendicular to the transport direction and horizontal is referred to as the "width direction".
 A groove-shaped score line 90 for dividing the tablet 9 in half is formed in the tablet 9. Hereinafter, the surface of the tablet 9 on which the score line 90 is formed is referred to as the "score surface". The score line 90 passes through the center of the score surface and extends straight to both edges of the score surface. In this embodiment, it is assumed that the score line 90 is formed on only one of the surfaces forming the upper and lower surfaces of the disc-shaped tablet 9. That is, in this embodiment, only one of the upper and lower surfaces of the tablet 9 is the score surface. However, score lines 90 may be formed on both of the surfaces forming the upper and lower surfaces of the disc-shaped tablet 9. That is, score lines 90 may be formed on both the front and back surfaces of the tablet 9. Further, in this embodiment, the product name and the like are printed only on the surface of the tablet 9 opposite the score surface, oriented along the direction of the score line 90 on the back side. However, the printing location on the tablet 9 is not limited to this.
 As shown in FIG. 1, the tablet printing apparatus 1 of this embodiment includes a hopper 10, a feeder unit 20, a transport drum 30, a first printing unit 40, a second printing unit 50, a carry-out conveyor 60, and a control unit 70. The hopper 10, the feeder unit 20, the transport drum 30, the first transport conveyor 41 of the first printing unit 40, the second transport conveyor 51 of the second printing unit 50, and the carry-out conveyor 60 form a transport mechanism that transports the tablets 9 along a predetermined transport path.
 The hopper 10 is a loading section for receiving a large number of tablets 9 into the apparatus at once. The hopper 10 is arranged at the top of the housing 100 of the tablet printing apparatus 1. The hopper 10 has an opening 11 located in the upper surface of the housing 100 and a funnel-shaped inclined surface 12 that gradually converges downward. The plurality of tablets 9 loaded into the opening 11 flow along the inclined surface 12 into the linear feeder 21.
 The feeder unit 20 is a mechanism that transports the plurality of tablets 9 loaded into the hopper 10 to the transport drum 30. The feeder unit 20 of this embodiment has a linear feeder 21, a rotary feeder 22, and a supply feeder 23. The linear feeder 21 has a flat-plate-shaped vibrating trough 211. The plurality of tablets 9 supplied from the hopper 10 to the vibrating trough 211 are transported toward the rotary feeder 22 by the vibration of the vibrating trough 211. The rotary feeder 22 has a disc-shaped rotary table 221. The plurality of tablets 9 that drop from the vibrating trough 211 onto the upper surface of the rotary table 221 gather near the outer periphery of the rotary table 221 under the centrifugal force produced by the rotation of the rotary table 221.
 The supply feeder 23 has a plurality of cylindrical portions 231 extending vertically downward from the outer periphery of the rotary table 221 to the transport drum 30. FIG. 2 is a perspective view of the vicinity of the transport drum 30. As shown in FIG. 2, the plurality of cylindrical portions 231 are arranged parallel to one another. In the example of FIG. 2, eight cylindrical portions 231 are arranged. Each of the plurality of tablets 9 transported to the outer periphery of the rotary table 221 is supplied to one of the plurality of cylindrical portions 231 and falls through that cylindrical portion 231. A plurality of tablets 9 are then stacked inside each cylindrical portion 231. In this way, by being distributed among the plurality of cylindrical portions 231, the plurality of tablets 9 are aligned into a plurality of transport rows. The plurality of tablets 9 in each transport row are then supplied to the transport drum 30 in order, starting from the lowest one.
 The transport drum 30 is a mechanism that delivers the plurality of tablets 9 from the supply feeder 23 to the first transport conveyor 41. The transport drum 30 has a substantially cylindrical outer peripheral surface. Driven by power obtained from a motor, the transport drum 30 rotates in the direction of the arrows in FIGS. 1 and 2 about a rotation axis extending in the width direction. As shown in FIG. 2, a plurality of holding portions 31 are provided on the outer peripheral surface of the transport drum 30. Each holding portion 31 is a recess depressed inward from the outer peripheral surface of the transport drum 30. The plurality of holding portions 31 are arranged along the circumferential direction on the outer peripheral surface of the transport drum 30, at width-direction positions corresponding to each of the plurality of transport rows described above. A suction hole 32 is provided at the bottom of each holding portion 31.
 A suction mechanism is provided inside the transport drum 30. When the suction mechanism is operated, a negative pressure lower than atmospheric pressure is generated in each of the plurality of suction holes 32. By this negative pressure, the holding portions 31 suck and hold, one each, the tablets 9 supplied from the supply feeder 23. A blow mechanism is also provided inside the transport drum 30. The blow mechanism blows locally pressurized gas from the inside of the transport drum 30 toward the first transport conveyor 41, described later. As a result, the suction of a tablet 9 is released only in the holding portion 31 facing the first transport conveyor 41, while the holding portions 31 not facing the first transport conveyor 41 maintain their suction. In this way, the transport drum 30 can rotate while sucking and holding the plurality of tablets 9 supplied from the supply feeder 23, and deliver those tablets 9 to the first transport conveyor 41.
 A first state detection camera 33 is provided at a position facing the outer peripheral surface of the transport drum 30. The first state detection camera 33 is an imaging unit that captures the state of the tablets 9 held on the transport drum 30. The first state detection camera 33 images the tablets 9 transported by the transport drum 30 and transmits the obtained images to the control unit 70. Based on the received images, the control unit 70 detects the presence or absence of a tablet 9 in each holding portion 31, as well as the front/back orientation of each held tablet 9 and the direction of its score line 90.
 The first printing unit 40 is a processing unit for printing an image on one surface of the tablets 9. As shown in FIG. 1, the first printing unit 40 has a first transport conveyor 41, a second state detection camera 42, a first head unit 43, a first inspection camera 44, and a first fixing unit 45.
 The first transport conveyor 41 has a pair of first pulleys 411 and an annular first transport belt 412 stretched between the pair of first pulleys 411. The first transport belt 412 is arranged so that a part of it closely faces the outer peripheral surface of the transport drum 30. One of the pair of first pulleys 411 is rotated by power obtained from a motor. As a result, the first transport belt 412 revolves in the direction of the arrows in FIGS. 1 and 2. At this time, the other of the pair of first pulleys 411 follows and rotates with the revolution of the first transport belt 412.
 As shown in FIG. 2, the first transport belt 412 is provided with a plurality of holding portions 413. Each holding portion 413 is a recess depressed inward from the outer surface of the first transport belt 412. The plurality of holding portions 413 are arranged in the transport direction at width-direction positions corresponding to each of the plurality of transport rows. That is, the plurality of holding portions 413 are arranged at intervals in both the width direction and the transport direction. The width-direction spacing of the plurality of holding portions 413 on the first transport belt 412 is equal to the width-direction spacing of the plurality of holding portions 31 on the transport drum 30.
 A suction hole 414 is provided at the bottom of each holding portion 413. The first transport conveyor 41 has a suction mechanism inside the first transport belt 412. When the suction mechanism is operated, a negative pressure lower than atmospheric pressure is generated in each of the plurality of suction holes 414. By this negative pressure, the holding portions 413 suck and hold, one each, the tablets 9 delivered from the transport drum 30. The first transport conveyor 41 thus transports the plurality of tablets 9 while holding them aligned in a plurality of transport rows spaced apart in the width direction. The first transport belt 412 is further provided with a blow mechanism. When the blow mechanism is operated, the suction hole 414 in the holding portion 413 facing the second transport conveyor 51, described later, is brought to a positive pressure higher than atmospheric pressure. As a result, the suction of the tablet 9 in that holding portion 413 is released, and the tablet 9 is delivered from the first transport conveyor 41 to the second transport conveyor 51. Among the plurality of tablets 9 transported on the first transport belt 412, tablets 9 held in the holding portions 413 from the score-surface side and tablets 9 held in the holding portions 413 from the side opposite the score surface are mixed. Each tablet 9 is turned over (front and back inverted) when it is delivered from the first transport conveyor 41 to the second transport conveyor 51.
 The second state detection camera 42 is an imaging unit that captures the state of the tablets 9 held on the first transport conveyor 41, upstream of the first head unit 43 in the transport direction. The first state detection camera 33 and the second state detection camera 42 image opposite surfaces of the tablets 9. The images obtained by the second state detection camera 42 are transmitted from the second state detection camera 42 to the control unit 70. Based on the received images, the control unit 70 detects the presence or absence of a tablet 9 in each holding portion 413, as well as the front/back orientation of each held tablet 9 and the direction of its score line 90.
 The first head unit 43 is an inkjet head unit that ejects ink droplets toward the upper surfaces of the tablets 9 transported by the first transport conveyor 41. The first head unit 43 has four first heads 431 arranged along the transport direction. The four first heads 431 eject ink droplets of mutually different colors toward the upper surfaces of those tablets 9, among the plurality of tablets 9, that are held in the holding portions 413 from the score-surface side. For example, the four first heads 431 eject ink droplets of cyan, magenta, yellow, and black. A multicolor image is printed on the surface of a tablet 9 by superimposing the single-color images formed by these colors. The ink ejected from each first head 431 is an edible ink manufactured from raw materials approved under the Japanese Pharmacopoeia, the Food Sanitation Act, and the like.
 FIG. 3 is a bottom view of one first head 431. In FIG. 3, the first transport belt 412 and the plurality of tablets 9 held on the first transport belt 412 are indicated by two-dot chain lines. As shown enlarged in FIG. 3, a plurality of nozzles 430 capable of ejecting ink droplets are provided on the lower surface of the first head 431. In this embodiment, the plurality of nozzles 430 are arranged two-dimensionally, in the transport direction and the width direction, on the lower surface of the first head 431. The nozzles 430 are arranged with their positions shifted in the width direction. By arranging the plurality of nozzles 430 two-dimensionally in this way, the width-direction positions of the nozzles 430 can be brought closer to one another. However, the plurality of nozzles 430 may instead be arranged in a single row along the width direction.
 As the method of ejecting ink droplets from the nozzles 430, a so-called piezo method is used, for example, in which a voltage is applied to a piezoelectric element to deform it, thereby pressurizing and ejecting the ink in the nozzle 430. However, the ink droplet ejection method may be a so-called thermal method, in which a heater is energized to heat and expand the ink in the nozzle 430 for ejection.
 FIG. 4 is a perspective view of the vicinity of the first inspection camera 44. The first inspection camera 44 is an imaging unit for checking the quality of the printing by the first head unit 43 and the presence or absence of defects in the tablets 9. The first inspection camera 44 images the upper surfaces of the tablets 9 transported on the first transport belt 412, downstream of the first head unit 43 in the transport direction. The first inspection camera 44 also transmits the obtained images to the control unit 70. Based on the received images, the control unit 70 inspects the upper surface of each tablet 9 for defects such as scratches, stains, printing position deviations, and missing dots. The method of detecting these defects will be described in detail later.
 In this embodiment, eight first inspection cameras 44 are arranged at positions corresponding to the eight tablets 9 aligned in the width direction on the first transport belt 412. Each first inspection camera 44 images one tablet 9 in the width direction. Each first inspection camera 44 also sequentially images the plurality of tablets 9 transported in the transport direction. However, in consideration of the space for arranging the eight first inspection cameras 44, they may be arranged at positions shifted from one another in the transport direction.
 The first fixing unit 45 is a mechanism that fixes the ink ejected from the first head unit 43 onto the tablets 9. In this embodiment, the first fixing unit 45 is arranged downstream of the first inspection camera 44 in the transport direction. However, the first fixing unit 45 may be arranged between the first head unit 43 and the first inspection camera 44. For the first fixing unit 45, for example, a hot-air-drying heater that blows hot air toward the tablets 9 transported by the first transport conveyor 41 is used. The ink adhering to the surface of a tablet 9 is dried by the hot air and fixed onto the surface of the tablet 9.
 The second printing unit 50 is a processing unit for printing an image on the other surface of the tablets 9 after printing by the first printing unit 40. As shown in FIG. 1, the second printing unit 50 has a second transport conveyor 51, a third state detection camera 52, a second head unit 53, a second inspection camera 54, a second fixing unit 55, and a defective product collecting unit 56.
 The second transport conveyor 51 transports, while holding, the plurality of tablets 9 delivered from the first transport conveyor 41. The third state detection camera 52 images the plurality of tablets 9 transported by the second transport conveyor 51, upstream of the second head unit 53 in the transport direction. The second head unit 53 ejects ink droplets toward the upper surfaces of the tablets 9 transported by the second transport conveyor 51. The second inspection camera 54 images the plurality of tablets 9 transported by the second transport conveyor 51, downstream of the second head unit 53 in the transport direction. The second fixing unit 55 fixes the ink ejected from each head 531 of the second head unit 53 onto the tablets 9.
 The second transport conveyor 51, the third state detection camera 52, the second head unit 53, the second inspection camera 54, and the second fixing unit 55 have configurations equivalent to those of the first transport conveyor 41, the second state detection camera 42, the first head unit 43, the first inspection camera 44, and the first fixing unit 45 described above.
The defective product collection unit 56 collects the tablets 9 determined to be defective on the basis of the captured images Ip obtained from the first inspection camera 44 and the second inspection camera 54 described above. The defective product collection unit 56 includes a blow mechanism arranged inside the second transport conveyor 51 and a collection box 561. When a tablet 9 determined to be defective is transported to the defective product collection unit 56, the blow mechanism blows pressurized gas toward that tablet 9 from the inside of the second transport conveyor 51. As a result, the tablet 9 drops off the second transport conveyor 51 and is collected in the collection box 561.
The carry-out conveyor 60 is a mechanism that carries the tablets 9 determined to be non-defective out of the housing 100 of the tablet printing apparatus 1. The upstream end of the carry-out conveyor 60 is located below the second pulley 511 of the second transport conveyor 51. The downstream end of the carry-out conveyor 60 is located outside the housing 100. A belt transport mechanism, for example, is used for the carry-out conveyor 60. The tablets 9 that have passed the defective product collection unit 56 drop from the second transport conveyor 51 onto the upper surface of the carry-out conveyor 60 when the suction of the suction holes is released. The carry-out conveyor 60 then carries the tablets 9 out of the housing 100.
The control unit 70 controls the operation of each unit in the tablet printing apparatus 1. FIG. 5 is a block diagram showing the connections between the control unit 70 and the units in the tablet printing apparatus 1. As conceptually shown in FIG. 5, the control unit 70 is configured as a computer having a processor 701 such as a CPU, a memory 702 such as a RAM, a storage device 703 such as a hard disk drive, a receiving unit 704, and a transmitting unit 705. The storage device 703 stores a computer program P and data D for executing the printing and inspection of the tablets 9. The receiving unit 704 and the transmitting unit 705 may, however, be provided separately from the control unit 70.
The computer program P is read from a storage medium M on which the program P is stored, and is then stored in the storage device 703 of the control unit 70. Examples of the storage medium M include a CD-ROM, a DVD-ROM, and a flash memory. Alternatively, the program P may be input to the control unit 70 via a network.
As shown in FIG. 5, the control unit 70 is connected, via the receiving unit 704 and the transmitting unit 705, to the linear feeder 21, the rotary feeder 22, the transport drum 30 (including its motor, suction mechanism, and blow mechanism), the first state detection camera 33, the first transport conveyor 41 (including its motor, suction mechanism, and blow mechanism), the second state detection camera 42, the first head unit 43 (including the plurality of nozzles 430 of each first head 431), the first inspection camera 44, the first fixing unit 45, the second transport conveyor 51, the third state detection camera 52, the second head unit 53 (including the plurality of nozzles 430 of each second head 531), the second inspection camera 54, the second fixing unit 55, the defective product collection unit 56, and the carry-out conveyor 60 described above, so as to enable wired communication such as Ethernet (registered trademark), or wireless communication such as Bluetooth (registered trademark) or Wi-Fi (registered trademark), with each of them.
When the control unit 70 receives information from each unit via the receiving unit 704, it temporarily reads the computer program P and the data D stored in the storage device 703 into the memory 702, and the processor 701 performs arithmetic processing based on the computer program P and the data D. The control unit 70 further controls the operation of each of the above units by issuing commands to them via the transmitting unit 705. In this way, the respective processes on the tablets 9 proceed.
<2. Data processing in the control unit>
FIG. 6 is a block diagram conceptually showing some of the functions of the control unit 70 in the tablet printing apparatus 1. As shown in FIG. 6, the control unit 70 of the present embodiment has an angle recognition unit 71, a head control unit 72, and an inspection unit. These functions are realized by temporarily reading the computer program P and the data D stored in the storage device 703 into the memory 702 and having the processor 701 perform arithmetic processing based on the computer program P and the data D. The function of the inspection unit is realized by an information processing device 200 consisting of some or all of the hardware elements of the control unit 70. A learned model generated in advance by machine learning is installed in the information processing device 200.
The angle recognition unit 71 has the function of recognizing the rotation angle (the orientation of the score line 90) of each tablet 9 being transported. The angle recognition unit 71 acquires the images captured by the first state detection camera 33 and the second state detection camera 42 and, based on those images, recognizes the rotation angle of each tablet 9 transported by the first transport conveyor 41. The angle recognition unit 71 also acquires the image captured by the third state detection camera 52 and, based on that image, recognizes the rotation angle of each tablet 9 transported by the second transport conveyor 51.
As described above, in the present embodiment, the product name and the like are printed only on the surface of each tablet 9 opposite the scored surface, aligned with the orientation of the score line 90 on the back side. For this purpose, the angle recognition unit 71 recognizes, for each tablet 9, the rotation angle (the orientation of the score line 90) at the time the tablet passes the first head unit 43, based on the images obtained from the first state detection camera 33 and the second state detection camera 42. Similarly, the angle recognition unit 71 recognizes, for each tablet 9, the rotation angle (the orientation of the score line 90) at the time the tablet passes the second head unit 53, based on the image obtained from the third state detection camera 52.
Note that the tablets 9 being transported are not all oriented the same way up. Therefore, as shown in FIG. 4, tablets 9 held by the holding portions 413 on the scored-surface side and tablets 9 held by the holding portions 413 on the side opposite the scored surface may be transported mixed together. In such a case, for some tablets 9 the angle recognition unit 71 may recognize the rotation angle at the time of passing the first head unit 43 based on the image obtained from the first state detection camera 33, and for the other tablets 9 it may recognize the rotation angle at the time of passing the first head unit 43 based on the image obtained from the second state detection camera 42. Likewise, for some tablets 9 it may recognize the rotation angle at the time of passing the second head unit 53 based on the image obtained from the third state detection camera 52, and for the other tablets 9 it may recognize the rotation angle at the time of passing the second head unit 53 based on the image obtained from the second state detection camera 42.
The head control unit 72 has the function of controlling the operation of the first head unit 43 and the second head unit 53. As shown in FIG. 6, the head control unit 72 has a first storage unit 721. The function of the first storage unit 721 is realized, for example, by the storage device 703 described above. The first storage unit 721 stores print image data D1 including information on the image to be printed on the tablets 9. The image is a product name, a product code, a company name, a logo mark, or the like, and is formed, for example, of a character string including letters and numerals (see FIG. 4 and FIG. 7 described later). The image may, however, be a mark or an illustration other than a character string. The image is printed on the surface of the tablet 9 opposite the scored surface, aligned with the score line 90 on the back side. Alternatively, the image may be printed on the scored surface of the tablet 9 along the score line 90. The print image data D1 also includes information designating the print position and print orientation of the image on the tablet 9.
When printing on the surface of a tablet 9 as a product, the head control unit 72 reads the print image data D1 from the first storage unit 721. The head control unit 72 then rotates the read print image data D1 according to the rotation angle recognized by the angle recognition unit 71, and controls the first head 431 or the second head 531 based on the rotated print image data D1. As a result, the image represented by the print image data D1 is printed on the surface of the tablet 9 along the score line 90.
The function of the inspection unit will be described in detail later.
<3. Information processing device 200>
Next, the configuration of the information processing device 200 will be described. As described above, the function of the inspection unit in the control unit 70 is realized by the information processing device 200 consisting of some or all of the hardware elements of the control unit 70. A learned model generated in advance by machine learning is installed in the information processing device 200. The information processing device 200 is a device that inspects the tablets 9 for defects such as scratches and can detect an abnormal tablet 9 having a defect. As shown in FIG. 6, the information processing device 200 has, as its functions, an image restoration unit 201, a determination unit 202, and an output unit 203.
First, the process of generating, by machine learning, the learning model to be installed in the information processing device 200 will be described. The flow at the time of learning is conceptually illustrated by broken lines in FIG. 6. For learning, a plurality of learning images Io (see FIG. 7), each capturing a normal tablet 9, are prepared in advance. Specifically, at a position downstream of the first head unit 43 in the transport direction, a large number of tablets 9 free of defects such as scratches among the tablets 9 transported on the first transport belt 412 are imaged by the first inspection camera 44. The captured images of the upper surfaces of these tablets 9 are prepared as the learning images Io of normal tablets 9. In the present embodiment, 1000 learning images Io are prepared. In general, the machine learning itself is performed outside the tablet printing apparatus 1. The learning images Io are input to the image restoration unit 201.
When the learning images Io are input, the image restoration unit 201 divides each learning image Io into a plurality of sections (see FIG. 8). In the present embodiment, each image is divided into a total of 16 sections (sections S1 to S16), four vertically by four horizontally. The number of sections into which the learning image Io is divided is not limited to this. Also, in the present embodiment the divided sections S1 to S16 are equal in size to one another, but the learning image Io may instead be divided into a plurality of sections of mutually different sizes.
Next, the image restoration unit 201 creates image data Ih in which one of the sections S1 to S16 of each learning image Io is hidden. As an example, the upper part of FIG. 8 shows image data Ih in which section S2 of the sections S1 to S16 of a learning image Io is hidden. The image restoration unit 201 of the present embodiment creates 16 pieces of image data Ih per learning image Io, hiding one of the sections S1 to S16 at a time in order starting from section S1. The image restoration unit 201 thus creates 16 pieces of image data Ih for each of the 1000 learning images Io, that is, 16000 pieces of image data Ih in total. Alternatively, the image restoration unit 201 may create a predetermined number of pieces of image data Ih per learning image Io by using a random generator to hide one of the sections S1 to S16 at random.
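The sectioning and masking steps described above can be sketched in code. The following is an illustrative reconstruction, not the implementation of the embodiment: `make_hidden_copies` is a hypothetical helper that assumes a square grayscale image whose side length is divisible by the grid size, and hides one section at a time by overwriting its pixels with a fill value.

```python
import numpy as np

def make_hidden_copies(image, grid=4, fill=0.0):
    """Create grid*grid copies of `image`, each with one section hidden.

    Returns a list of (section_index, masked_image) pairs, where
    section_index 0 corresponds to section S1 and grid*grid - 1 to S16.
    """
    h, w = image.shape
    sh, sw = h // grid, w // grid  # section height and width
    copies = []
    for idx in range(grid * grid):
        row, col = divmod(idx, grid)
        masked = image.copy()
        masked[row * sh:(row + 1) * sh, col * sw:(col + 1) * sw] = fill
        copies.append((idx, masked))
    return copies

# A stand-in for one 64x64 learning image Io:
io = np.random.rand(64, 64)
hidden = make_hidden_copies(io)
print(len(hidden))  # 16 pieces of image data Ih per learning image
```

Applied to 1000 learning images, this loop would yield the 16000 pieces of image data Ih mentioned above.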
Subsequently, the image restoration unit 201 performs learning processing by deep learning so that a restored image Ir, in which the hidden part has been restored, can be generated with high accuracy from each piece of image data Ih. Specifically, using the original learning image Io from which each piece of image data Ih was generated as teacher data, the image restoration unit 201 machine-learns a learning model X(a, b, c, ...) for the image restoration processing that generates the restored image Ir with high accuracy. Here, the teacher data is the correct (ground-truth) data. As an example, FIG. 8 shows how a restored image Ir, in which the hidden section S2 has been restored, is generated with high accuracy from the image data Ih.
At this time, the image restoration unit 201 repeatedly executes, with a convolutional neural network, an encoding process that extracts features from the image data Ih to generate latent variables, and a decoding process that generates a restored image Ir from the latent variables. Examples of such convolutional neural networks include U-Net and FusionNet. The parameters of the encoding and decoding processes are then adjusted, updated, and saved using backpropagation, gradient descent, or the like so as to minimize the difference in pixel values between the restored image Ir after the decoding process and the original learning image Io from which the pre-encoding image data Ih was generated. Here, the parameters of the encoding and decoding processes are the plurality of parameters a, b, c, ... in the learning model X(a, b, c, ...). The image restoration unit 201 may perform learning once or multiple times with each piece of image data Ih.
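The training objective just described — adjust the parameters so that the pixel-value difference between the restored image and the original learning image is minimized — can be illustrated numerically. The sketch below is a deliberately simplified stand-in: a two-layer linear model, with matrices `W1` and `W2` playing the roles of the encoding and decoding parameters a, b, c, ..., is trained by plain gradient descent on a single flattened image, in place of the convolutional U-Net- or FusionNet-style network of the embodiment. All names and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 image (64 pixels), a 16-dim latent space.
n_pix, n_lat = 64, 16
W1 = rng.normal(scale=0.1, size=(n_lat, n_pix))  # "encoder" parameters
W2 = rng.normal(scale=0.1, size=(n_pix, n_lat))  # "decoder" parameters

target = rng.random(n_pix)   # original learning image Io (teacher data)
masked = target.copy()
masked[:16] = 0.0            # image data Ih with one section hidden

lr = 0.01
losses = []
for _ in range(1000):
    z = W1 @ masked          # encoding: masked image -> latent variables
    y = W2 @ z               # decoding: latent variables -> restored image Ir
    err = y - target         # pixel-value difference from the teacher data
    losses.append(float(err @ err))
    # Gradient-descent update of the model parameters
    gW2 = 2.0 * np.outer(err, z)
    gW1 = 2.0 * np.outer(W2.T @ err, masked)
    W2 -= lr * gW2
    W1 -= lr * gW1

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.6f}")
```

The restored image is driven toward the teacher data even though the model only ever sees the masked input, which is the mechanism the embodiment relies on.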
However, the method of machine-learning the image restoration processing that generates the restored image Ir with high accuracy is not limited to this. For example, in addition to the learning model X(a, b, c, ...) that generates the restored image Ir, the image restoration unit 201 may further have a learning model Y(p, q, r, ...) that compares a generated restored image Ir with a learning image Io and determines which is the real image. The image restoration unit 201 may then form a generative adversarial network that machine-learns the learning model X(a, b, c, ...) and the learning model Y(p, q, r, ...) alternately, pitting them against each other using backpropagation based on the generation results of the learning model X(a, b, c, ...) and the determination results of the learning model Y(p, q, r, ...). Examples of such generative adversarial networks include GANs and pix2pix.
When the machine learning is completed as described above, the learned model X(a, b, c, ...) is installed in the information processing device 200. The tablet printing apparatus 1 can then perform defect detection on the tablets 9 using that learning model X(a, b, c, ...). When performing defect detection on the tablets 9, the information processing device 200 in the tablet printing apparatus 1 first acquires, from the first inspection camera 44, captured images Ip of the tablets 9 transported on the first transport belt 412 at a position downstream of the first head unit 43 in the transport direction. It also acquires, from the second inspection camera 54, captured images Ip of the tablets 9 transported on the second transport belt 512 at a position downstream of the second head unit 53 in the transport direction. Each captured image Ip is then rotated according to the rotation angle recognized by the angle recognition unit 71 to generate an inspection image Ii. The inspection image Ii is an image of a tablet 9 for which the presence or absence of a defect is unknown, that is, for which it is unknown whether the tablet is normal or abnormal. In the following description, it is assumed that the inspection image Ii of the tablet 9 has a defect De at a location within a section S15 described later. In the present embodiment, a scratch is assumed as the defect De, but the defect De may instead be an ink stain, a print position shift, a missing dot, or the like.
Subsequently, the image restoration unit 201 divides each inspection image Ii into a total of 16 sections (sections S1 to S16), four vertically by four horizontally, the same as at the time of learning. Next, the image restoration unit 201 creates image data Ih in which one of the sections S1 to S16 of each inspection image Ii is hidden. FIG. 9 and FIG. 10 each show how a restored image Ir, in which one hidden section has been restored, is generated with high accuracy from image data Ih. Specifically, FIG. 9 shows how a restored image Ir in which section S1 has been restored (hereinafter referred to as "restored image Ir1" for ease of explanation) is generated with high accuracy from image data Ih in which section S1 is hidden (hereinafter "image data Ih1"). FIG. 10 shows how a restored image Ir in which section S15 has been restored (hereinafter "restored image Ir15") is generated with high accuracy from image data Ih in which section S15 is hidden (hereinafter "image data Ih15"). Since section S15 is hidden, the image restoration unit 201 cannot recognize the defect De; for ease of explanation, however, the defect De is shown in white in the image data Ih of FIG. 10.
Subsequently, as at the time of learning, the image restoration unit 201 executes, with the convolutional neural network, the encoding process that extracts features from the image data Ih in which part of the inspection image Ii is hidden to generate latent variables, and the decoding process that generates a restored image Ir from the latent variables. Using the learning model X(a, b, c, ...) trained at the time of learning, it generates a plurality of restored images Ir from the image data Ih while sequentially changing the location of the hidden part.
Specifically, the image restoration unit 201 first generates, from the image data Ih1 in which section S1 of the inspection image Ii is hidden, a restored image Ir1 in which section S1 has been restored, using the learning model X(a, b, c, ...), and outputs it to the determination unit 202. Next, the image restoration unit 201 generates, from the image data Ih2 in which section S2 of the inspection image Ii is hidden, a restored image Ir2 in which section S2 has been restored, using the learning model X(a, b, c, ...), and outputs it to the determination unit 202. The image restoration unit 201 repeats such restoration processing while sequentially changing the location of the hidden part. In due course, the image restoration unit 201 generates, from the image data Ih15 in which section S15 of the inspection image Ii is hidden, a restored image Ir15 in which section S15 has been restored, using the learning model X(a, b, c, ...), and outputs it to the determination unit 202. Finally, the image restoration unit 201 generates, from the image data Ih16 in which section S16 of the inspection image Ii is hidden, a restored image Ir16 in which section S16 has been restored, using the learning model X(a, b, c, ...), and outputs it to the determination unit 202.
Here, as described above, the learning model X(a, b, c, ...) trained at the time of learning is a model whose parameters have been adjusted so as to generate, from image data Ih in which part of an image of a normal, defect-free tablet 9 is hidden, a restored image Ir in which the hidden part has been restored. Therefore, as shown in FIG. 9, when the image restoration unit 201 generates the restored image Ir1 using the learning model X(a, b, c, ...) from the image data Ih1 in which section S1 of the inspection image Ii, where no defect De exists, is hidden, the restored image Ir1 including the defect-free section S1 is restored accurately at the location in the inspection image Ii where no defect De exists. On the other hand, as shown in FIG. 10, when the image restoration unit 201 generates the restored image Ir15 using the learning model X(a, b, c, ...) from the image data Ih15 in which section S15 of the inspection image Ii, where the defect De exists, is hidden, the image restoration unit 201 cannot recognize the defect De. Consequently, although the defect De exists in section S15 of the inspection image Ii, the image restoration unit 201 generates a restored image Ir15 free of the defect De, without recognizing that the defect De exists.
Subsequently, as the plurality of restored images Ir are input in order from the image restoration unit 201, the determination unit 202 compares each of the restored images Ir with the inspection image Ii, thereby determines whether the tablet 9 is a normal one without a defect De or an abnormal one having a defect De, and outputs the determination result Dr to the output unit 203. Specifically, the determination unit 202 first compares the restored image Ir1 generated by the image restoration unit 201 with the inspection image Ii and determines whether the difference in pixel values between the restored image Ir1 and the inspection image Ii is larger than a predetermined tolerance. Next, the determination unit 202 compares the restored image Ir2 generated by the image restoration unit 201 with the inspection image Ii and determines whether the difference in pixel values between the restored image Ir2 and the inspection image Ii is larger than the predetermined tolerance. The determination unit 202 executes such determination processing on all the restored images Ir. In due course, the determination unit 202 compares the restored image Ir15 generated by the image restoration unit 201 with the inspection image Ii and determines whether the difference in pixel values between the restored image Ir15 and the inspection image Ii is larger than the predetermined tolerance. Finally, the determination unit 202 compares the restored image Ir16 generated by the image restoration unit 201 with the inspection image Ii and determines whether the difference in pixel values between the restored image Ir16 and the inspection image Ii is larger than the predetermined tolerance.
As described above, the defect De does not exist in the restored image Ir15 generated by the image restoration unit 201, whereas the defect De does exist at the location within section S15 of the inspection image Ii. For this reason, the difference in pixel values between the restored image Ir15 and the inspection image Ii is a significantly large value, unlike the other comparison results. In this way, when the difference between a restored image Ir and the inspection image Ii is larger than the predetermined tolerance, the determination unit 202 determines the location that was hidden in the image data Ih from which that restored image Ir was generated to be the location where the defect De exists. The determination unit 202 then outputs the determination result Dr, concerning the presence or absence of the defect De and the location of the defect De, to the output unit 203.
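The hide-restore-compare determination performed by the determination unit 202 can be sketched as follows. This is an illustrative reconstruction under a strong simplification: the learned model X is replaced by a stand-in `restore` function that fills the hidden section from a defect-free reference image, which mimics the key property exploited above — the restorer reproduces the normal appearance and cannot reproduce a defect it never saw. The function names, grid size, and tolerance value are all assumptions for the sketch.

```python
import numpy as np

GRID = 4  # 4x4 sections, as in the embodiment

def restore(masked, section, reference):
    """Stand-in for learned model X: fill the hidden section with the
    normal appearance (taken here from a defect-free reference image)."""
    h, w = masked.shape
    sh, sw = h // GRID, w // GRID
    row, col = divmod(section, GRID)
    restored = masked.copy()
    restored[row*sh:(row+1)*sh, col*sw:(col+1)*sw] = \
        reference[row*sh:(row+1)*sh, col*sw:(col+1)*sw]
    return restored

def find_defect_sections(inspection, reference, tolerance=1.0):
    """Hide each section in turn, restore it, and flag the sections whose
    restored image differs from the inspection image beyond `tolerance`."""
    h, w = inspection.shape
    sh, sw = h // GRID, w // GRID
    defective = []
    for section in range(GRID * GRID):
        row, col = divmod(section, GRID)
        masked = inspection.copy()
        masked[row*sh:(row+1)*sh, col*sw:(col+1)*sw] = 0.0  # image data Ih
        restored = restore(masked, section, reference)      # restored image Ir
        diff = np.abs(restored - inspection).sum()          # pixel-value difference
        if diff > tolerance:
            defective.append(section)
    return defective

# A defect-free tablet image, and an inspection image Ii with a defect in S15:
normal = np.full((64, 64), 0.5)
inspection = normal.copy()
inspection[48:64, 32:48] += 0.4   # scratch-like anomaly inside section S15
print(find_defect_sections(inspection, normal))  # -> [14]  (S15, zero-based)
```

For every section except S15 the restored image matches the inspection image, so the difference stays at zero; only when S15 is hidden does the restoration erase the defect, producing the large difference that localizes it.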
 Alternatively, when the plurality of restored images Ir are input from the image restoration unit 201, the determination unit 202 may stitch together the restored portions corresponding to the sections that were hidden in the image data Ih from which each restored image Ir was generated, and then compare the stitched result with the entire inspection image Ii to determine whether the difference in pixel values is larger than the predetermined allowable value.
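This stitching variant can be sketched in the same toy setting; the grid size, image size, and tolerance are illustrative assumptions. Instead of sixteen separate comparisons, the restored patch of each hidden section is pasted into one composite image, which is then compared with the inspection image in a single pass.

```python
# Sketch of the alternative comparison: paste the restored patch of
# each hidden section into one composite image, then compare the
# composite with the inspection image as a whole.

SIZE, GRID, TOLERANCE = 8, 4, 20

def section_bounds(k, size=SIZE, grid=GRID):
    """Pixel bounds of section Sk (k = 1..16) in row-major order."""
    step = size // grid
    r, c = divmod(k - 1, grid)
    return r * step, (r + 1) * step, c * step, (c + 1) * step

def stitch(restored_images):
    """restored_images[k] is the full restored image for hidden
    section Sk; keep only its hidden-section pixels and join them."""
    composite = [[0] * SIZE for _ in range(SIZE)]
    for k, restored in restored_images.items():
        r0, r1, c0, c1 = section_bounds(k)
        for r in range(r0, r1):
            for c in range(c0, c1):
                composite[r][c] = restored[r][c]
    return composite

def judge_whole(composite, inspection):
    """True when the maximum pixel difference exceeds the allowable
    value, i.e. a defect exists somewhere in the inspection image."""
    diff = max(abs(composite[r][c] - inspection[r][c])
               for r in range(SIZE) for c in range(SIZE))
    return diff > TOLERANCE

# All sixteen restorations come back defect-free (uniform gray),
# while the inspection image carries one strongly deviating pixel.
restored_images = {k: [[128] * SIZE for _ in range(SIZE)]
                   for k in range(1, 17)}
inspection = [[128] * SIZE for _ in range(SIZE)]
inspection[7][5] = 30

print(judge_whole(stitch(restored_images), inspection))   # -> True
```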
 Through the above, the presence or absence of defects De in the tablets 9 conveyed on the first transport conveyor 41 and on the second transport conveyor 51 is determined, and the inspection of all tablets 9 is completed. When the determination result Dr is input from the determination unit 202, the output unit 203 outputs information on the presence or absence of the defect De in each tablet 9 and on its location to a monitor, speaker, or the like, and also transmits information on the tablets 9 having a defect De to the defective-product recovery unit 56 so that they are collected. When the determination unit 202 determines that a tablet 9 has no defect De, the output unit 203 may additionally display that fact.
 As described above, in the present embodiment, abnormal tablets 9 having a defect De can be detected by performing machine learning on images of normal, defect-free tablets 9, which can easily be acquired in large numbers. As a result, a wide variety of defects De in the tablets 9, including previously unknown ones, can be detected with high accuracy.
 The output unit 203 also outputs information on the location of the defect De together with its presence or absence. Using this location information, a worker can easily re-examine a tablet 9 determined to have a defect De, which further improves the detection accuracy for defective tablets 9.
 The image restoration unit 201 of the present embodiment repeatedly executes, by means of a convolutional neural network, an encoding process that extracts features from the image data Ih to generate latent variables and a decoding process that generates a restored image Ir from the latent variables. Consequently, even when the position of the tablet 9 in the inspection image Ii or the learning image Io is slightly shifted, or when the inspection image Ii or the learning image Io contains some noise, tablets 9 having a defect De can still be detected with high accuracy.
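The encode-then-decode structure can be illustrated with a deliberately simplified sketch. A real implementation would use a trained convolutional neural network; here, as an assumption for illustration only, encoding is a 2x2 average pooling that compresses the image into a small grid of latent values, and decoding is nearest-neighbor upsampling. Even this crude pair shows why small positional shifts and pixel noise tend to be smoothed out rather than reproduced in the restored image.

```python
# Schematic encode/decode pipeline: compress the image into a small
# latent representation, then reconstruct an image of the original
# size. The trained CNN of the embodiment is replaced by average
# pooling (encode) and nearest-neighbor upsampling (decode).

def encode(image, pool=2):
    """Average-pool the image into a coarse grid of latent values."""
    n = len(image)
    return [[sum(image[r * pool + i][c * pool + j]
                 for i in range(pool) for j in range(pool)) // (pool * pool)
             for c in range(n // pool)]
            for r in range(n // pool)]

def decode(latent, pool=2):
    """Upsample the latent grid back to the original image size."""
    return [[latent[r // pool][c // pool]
             for c in range(len(latent) * pool)]
            for r in range(len(latent) * pool)]

# A noisy 4x4 input: one pixel deviates slightly from the flat gray.
image = [[100, 100, 100, 100],
         [100, 104, 100, 100],
         [100, 100, 100, 100],
         [100, 100, 100, 100]]

latent = encode(image)      # 2x2 latent grid
restored = decode(latent)   # 4x4 restored image

print(latent)           # -> [[101, 100], [100, 100]]
print(restored[1][1])   # the noisy pixel is smoothed toward 100
```

The noisy pixel (104) comes back as 101: the latent bottleneck keeps the coarse appearance of the object but discards pixel-level disturbances, which is the property the embodiment relies on.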
 <4. Modifications>
 Although the main embodiment of the present invention has been described above, the present invention is not limited to the above-described embodiment.
 In the above-described embodiment, learning and detection of defects De in the tablet 9 were performed using images of the upper surface of the tablet 9 after the printing process. However, learning and defect detection may instead be performed using images of the tablet 9 taken before the printing process. Learning and defect detection may also be performed using images of the tablet 9 captured from an oblique direction, which makes it possible to detect defects De not only on the front and back surfaces of the tablet 9 but also on its side surface.
 In the above-described embodiment, a learning model X(a, b, c, ...) whose machine learning was completed outside the tablet printing apparatus 1 was installed in the information processing apparatus 200 to detect defects De in the tablets 9. However, machine learning may instead be performed with the learning model X(a, b, c, ...) already installed in the information processing apparatus 200 inside the tablet printing apparatus 1, and defect detection in the tablets 9 may then be carried out with that model as it is.
 In the above-described embodiment, the tablet 9, a pharmaceutical product, was used as an example of the inspection object, and the information processing apparatus 200 determined the presence or absence and the location of defects De such as scratches, stains, printing misregistration, or missing dots on the tablet 9. However, the inspection object may be a substrate such as film or paper that undergoes printing in various printing apparatuses, a printed circuit board, or a component used in various devices. That is, the inspection object may be any object that has a substantially constant appearance when normal, and the information processing apparatus 200 may determine the presence or absence and the location of appearance defects De on such an inspection object.
 That is, the information processing apparatus of the present invention is an information processing apparatus that detects an abnormal inspection object having a defect by using a set of image data of normal inspection objects. It includes an image restoration unit that generates, from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden, a restored image in which the hidden part is restored; a determination unit that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and an output unit that outputs the determination result of the determination unit. The image restoration unit need only have been trained by deep learning so that it can accurately generate, from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, a restored image in which the hidden part is restored. At the completion of learning, the parameters of the encoding process and the decoding process of the image restoration unit may have been adjusted, for example, by a convolutional neural network.
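The three-unit composition described in this paragraph can be outlined as a small class sketch. The class and method names, the 1-D toy images, and the threshold value are illustrative assumptions, not taken from the embodiment; the restoration unit is stubbed rather than trained.

```python
# Outline of the claimed apparatus: an image restoration unit, a
# determination unit, and an output unit composed into one device.

class ImageRestorationUnit:
    """Trained (by deep learning) to inpaint the hidden part of an
    image; stubbed here to return the masked input unchanged."""
    def restore(self, hidden_image):
        return hidden_image

class DeterminationUnit:
    def __init__(self, allowable_value):
        self.allowable_value = allowable_value

    def judge(self, restored, inspection):
        """Normal/abnormal decision from the maximum pixel difference."""
        diff = max(abs(a - b) for a, b in zip(restored, inspection))
        return "abnormal" if diff > self.allowable_value else "normal"

class OutputUnit:
    def emit(self, result):
        return f"inspection result: {result}"

class InformationProcessingApparatus:
    def __init__(self):
        self.restorer = ImageRestorationUnit()
        self.judge_unit = DeterminationUnit(allowable_value=20)
        self.output = OutputUnit()

    def inspect(self, hidden_image, inspection_image):
        restored = self.restorer.restore(hidden_image)
        result = self.judge_unit.judge(restored, inspection_image)
        return self.output.emit(result)

# 1-D toy "images": the stubbed restorer returns a defect-free row,
# while the inspection row carries one strongly deviating pixel.
apparatus = InformationProcessingApparatus()
print(apparatus.inspect([128] * 8,
                        [128, 128, 30, 128, 128, 128, 128, 128]))
# -> inspection result: abnormal
```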
 The information processing method of the present invention is an information processing method for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects. It need only include: a) a step of learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden; b) a step of determining whether an inspection object is normal or abnormal by comparing, with the inspection image, a restored image restored by the process learned in step a) from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden; and c) a step of outputting the determination result of step b).
 The information processing program executed by the information processing apparatus of the present invention is an information processing program for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects. It causes a computer to execute: a) an image restoration process that generates, from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden, a restored image in which the hidden part is restored; b) a determination process that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and c) an output process that outputs the determination result of the determination process. The image restoration process need only have been trained by deep learning so that it can accurately generate, from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, a restored image in which the hidden part is restored.
 The present invention also encompasses a learning method that, in order to detect an abnormal inspection object having a defect, learns by deep learning a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden.
 The present invention also encompasses a learned model that has learned, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, in order to detect an abnormal inspection object having a defect.
 With these, abnormal inspection objects having defects can be detected by performing machine learning on images of normal, defect-free inspection objects, which can easily be acquired in large numbers. A wide variety of defects in inspection objects, including previously unknown ones, can therefore be detected with high accuracy.
 In the above-described embodiment, the first printing unit 40 and the second printing unit 50 each have four heads. However, the number of heads included in each of the printing units 40 and 50 may be one to three, or five or more.
 The detailed configuration of the tablet printing apparatus 1 may also differ from that shown in the drawings of the present application. In addition, the elements appearing in the above-described embodiment and modifications may be combined as appropriate as long as no contradiction arises.
 1 Tablet printing apparatus
 9 Tablet
 10 Hopper
 20 Feeder unit
 30 Transport drum
 33 First state detection camera
 40 First printing unit
 41 First transport conveyor
 42 Second state detection camera
 43 First head unit
 44 First inspection camera
 45 First fixing unit
 50 Second printing unit
 51 Second transport conveyor
 52 Third state detection camera
 53 Second head unit
 54 Second inspection camera
 55 Second fixing unit
 56 Defective-product recovery unit
 60 Carry-out conveyor
 70 Control unit
 71 Angle recognition unit
 90 Score line
 100 Housing
 200 Information processing apparatus
 201 Image restoration unit
 202 Determination unit
 203 Output unit
 411 First pulley
 412 First transport belt
 431 First head
 511 Second pulley
 512 Second transport belt
 531 Second head
 561 Recovery box
 701 Processor
 702 Memory
 703 Storage device
 704 Receiver
 705 Transmitter
 D Data
 D1 Print image data
 De Defect
 Dr Determination result
 Ih Image data
 Ii Inspection image
 Io Learning image
 Ip Captured image
 Ir Restored image
 P Computer program
 X Learning model
 Y Learning model

Claims (20)

  1.  An information processing apparatus for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, comprising:
     an image restoration unit that generates, from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden, a restored image in which the hidden part is restored;
     a determination unit that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and
     an output unit that outputs a determination result of the determination unit,
     wherein the image restoration unit has been trained by deep learning so as to accurately generate, from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, a restored image in which the hidden part is restored.
  2.  The information processing apparatus according to claim 1, wherein
     the image restoration unit generates a plurality of the restored images while sequentially changing the hidden part of the image data, and
     the determination unit determines whether the inspection object is normal or abnormal by comparing each of the plurality of restored images with the inspection image.
  3.  The information processing apparatus according to claim 2, wherein
     the determination unit determines, when a difference between the restored image and the inspection image is larger than a predetermined allowable value, the hidden part as the location of the defect, and
     the output unit further outputs information on the location of the defect.
  4.  The information processing apparatus according to any one of claims 1 to 3, wherein
     the image restoration unit executes:
     an encoding process that extracts features from the inspection image to generate latent variables; and
     a decoding process that generates the restored image from the latent variables.
  5.  The information processing apparatus according to claim 4, wherein
     in the learning, the parameters of the encoding process and the decoding process of the image restoration unit have been adjusted by a convolutional neural network.
  6.  An information processing method for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, comprising:
     a) a step of learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden;
     b) a step of determining whether an inspection object is normal or abnormal by comparing, with the inspection image, a restored image restored by the process learned in step a) from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden; and
     c) a step of outputting a determination result of step b).
  7.  An information processing program for detecting an abnormal inspection object having a defect by using a set of image data of normal inspection objects, the program causing a computer to execute:
     a) an image restoration process that generates, from image data in which a part of an inspection image capturing an inspection object of unknown condition is hidden, a restored image in which the hidden part is restored;
     b) a determination process that determines whether the inspection object is normal or abnormal by comparing the restored image with the inspection image; and
     c) an output process that outputs a determination result of the determination process,
     wherein the image restoration process has been trained by deep learning so as to accurately generate, from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, a restored image in which the hidden part is restored.
  8.  A learning method for learning, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, in order to detect an abnormal inspection object having a defect.
  9.  A learned model that has learned, by deep learning, a process of generating a restored image in which a hidden part is restored from image data in which a part of each of a plurality of learning images capturing normal inspection objects is hidden, in order to detect an abnormal inspection object having a defect.
  10.  The information processing apparatus according to any one of claims 1 to 5, wherein the inspection object is a tablet.
  11.  The information processing method according to claim 6, wherein the inspection object is a tablet.
  12.  The information processing program according to claim 7, wherein the inspection object is a tablet.
  13.  The learning method according to claim 8, wherein the inspection object is a tablet.
  14.  The learned model according to claim 9, wherein the inspection object is a tablet.
  15.  The information processing method according to claim 6, wherein
     in step a), a plurality of the restored images are generated while sequentially changing the hidden part of the image data, and
     in step b), whether the inspection object is normal or abnormal is determined by comparing each of the plurality of restored images with the inspection image.
  16.  The information processing method according to claim 15, wherein
     in step b), when a difference between the restored image and the inspection image is larger than a predetermined allowable value, the hidden part is determined as the location of the defect, and
     in step c), information on the location of the defect is further output.
  17.  The information processing method according to any one of claims 6, 15, and 16, wherein
     step a) executes:
     an encoding process that extracts features from the inspection image to generate latent variables; and
     a decoding process that generates the restored image from the latent variables.
  18.  The information processing method according to claim 17, wherein
     in step a), in the learning, the parameters of the encoding process and the decoding process have been adjusted by a convolutional neural network.
  19.  The information processing program according to claim 7, wherein
     in the image restoration process, a plurality of the restored images are generated while sequentially changing the hidden part of the image data, and
     in the determination process, whether the inspection object is normal or abnormal is determined by comparing each of the plurality of restored images with the inspection image.
  20.  The information processing program according to claim 19, wherein
     in the determination process, when a difference between the restored image and the inspection image is larger than a predetermined allowable value, the hidden part is determined as the location of the defect, and
     in the output process, information on the location of the defect is further output.
PCT/JP2019/043948 (WO2020158098A1), priority date 2019-01-31, filed 2019-11-08: Information processing device, information processing method, information processing program, learning method, and prelearned model

Priority Applications (1)
CN201980077404.1A (CN113168686A), priority date 2019-01-31, filed 2019-11-08: Information processing apparatus, information processing method, information processing program, learning method, and learned model

Applications Claiming Priority (2)
JP2019015884A (JP7312560B2), filed 2019-01-31: Information processing device, information processing method, information processing program, learning method, and trained model
JP2019-015884, priority date 2019-01-31

Publications (1)
WO2020158098A1, published 2020-08-06

Family ID: 71840404

Country Status (4)
JP: JP7312560B2
CN: CN113168686A
TW: TWI724655B
WO: WO2020158098A1



Also Published As
CN113168686A, published 2021-07-23
TWI724655B, published 2021-04-11
JP2020123238A, published 2020-08-13
JP2023126337A, published 2023-09-07
JP7312560B2, published 2023-07-21
TW202032498A, published 2020-09-01


Legal Events
121: the EPO has been informed by WIPO that EP was designated in this application (ref document 19912655, country EP, kind code A1)
NENP: non-entry into the national phase (ref country code DE)
122: PCT application non-entry in European phase (ref document 19912655, country EP, kind code A1)