CN113168686A - Information processing apparatus, information processing method, information processing program, learning method, and learned model - Google Patents


Info

Publication number
CN113168686A
Authority
CN
China
Prior art keywords
image
learning
inspection
images
restored
Legal status
Pending
Application number
CN201980077404.1A
Other languages
Chinese (zh)
Inventor
猿渡健
冈本悟史
Current Assignee
Screen Holdings Co Ltd
Original Assignee
Screen Holdings Co Ltd
Application filed by Screen Holdings Co Ltd
Publication of CN113168686A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Abstract

The information processing apparatus detects a defective inspection object using a set of image data of defect-free inspection objects. The apparatus includes an image restoration unit, a determination unit, and an output unit. The image restoration unit generates a restored image (Ir), in which a concealed part is restored, from image data (Ih) in which a part of an inspection image is concealed, the inspection image being an image of an inspection object for which the presence or absence of a defect has not yet been determined. The determination unit determines the presence or absence of a defect by comparing the restored image (Ir) with the inspection image. The output unit outputs the determination result. The image restoration unit has learned, by deep learning, to generate with high accuracy a restored image (Ir) in which the concealed part is restored from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of defect-free inspection objects. Machine learning for detecting a defective inspection object can thus be performed using, as learning data, images of defect-free inspection objects, of which a large number can easily be obtained.

Description

Information processing apparatus, information processing method, information processing program, learning method, and learned model
Technical Field
The present invention relates to an information processing apparatus, an information processing method, and an information processing program that can learn using image data of normal, defect-free inspection objects and detect a defective, abnormal inspection object, and also to a learning method and a learned model used in that learning.
Background
Techniques for detecting a defective, abnormal inspection object by image processing are conventionally known, and in recent years the introduction of techniques applying machine learning has been advancing. A defect detection technique using machine learning is described in Patent Document 1, for example.
Documents of the prior art
Patent document
Patent Document 1: Japanese Patent Laid-Open Publication No. 2018-81629
Disclosure of Invention
Problems to be solved by the invention
Patent Document 1 discloses a determination system 301 that can determine, by machine learning, the presence or absence of a defective portion Df1 in an image of an object. The determination system 301 includes a determination device 101, an accumulation device 131, and a learning device 151. From the many images accumulated in the accumulation device 131, 500 defective images Sng, which are images of objects including the defective portion Df1, and 500 non-defective images Sg, which are images of objects not including the defective portion Df1, are selected, and each image is divided into 63 partial images. When a partial image includes the defective portion Df1, a trace Tr1 is drawn on the defective portion Df1, and the partial image is labeled according to the presence or absence of the trace Tr1. The learning device 151 then performs machine learning using the partial images and their labels, and the machine-learned model is installed in the determination device 101. When image data is input to this model, the model determines whether or not the image data includes the defective portion Df1 and outputs the determination result.
However, machine learning for detecting a defective, abnormal inspection object requires at least several thousand to several million images of inspection objects as learning data. In the production of industrial products, defects do not occur frequently, and it is in practice difficult to obtain thousands to millions of images of defective inspection objects. Moreover, defects vary widely in type and state, including unknown types and states, and it is even more difficult to acquire images covering defects of every type and state.
The present invention has been made in view of the above circumstances, and an object thereof is to provide a technique that can detect a defective, abnormal inspection object by performing machine learning using images of normal, defect-free inspection objects, of which a large number can easily be acquired.
Means for solving the problems
In order to solve the above problem, a first aspect of the present invention is an information processing apparatus for detecting a defective, abnormal inspection object using a set of image data of normal inspection objects, the information processing apparatus including: an image restoration unit that generates a restored image, in which a concealed part is restored, from image data in which that part of an inspection image is concealed, the inspection image being an image of an inspection object not yet determined to be normal or abnormal; a determination unit that compares the restored image with the inspection image to determine whether the inspection object is normal or abnormal; and an output unit that outputs the result of the determination by the determination unit, wherein the image restoration unit has completed learning by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of normal inspection objects.
A second aspect of the present invention is the information processing apparatus of the first aspect, wherein the image restoration unit generates a plurality of restored images while sequentially changing the position of the concealed part in the image data, and the determination unit determines whether the inspection object is normal or abnormal by comparing each of the plurality of restored images with the inspection image.
A third aspect of the present invention is the information processing apparatus of the second aspect, wherein, when the difference between a restored image and the inspection image is larger than a predetermined allowable value, the determination unit determines the position of the concealed part to be the position of a defect, and the output unit further outputs information on the position of the defect.
A fourth aspect of the present invention is the information processing apparatus of any one of the first to third aspects, wherein the image restoration unit executes: an encoding process of extracting features from the inspection image to generate latent variables; and a decoding process of generating the restored image from the latent variables.
A fifth aspect of the present invention is the information processing apparatus of the fourth aspect, wherein in the learning, the image restoration unit adjusts parameters of the encoding process and the decoding process by a convolutional neural network.
A sixth aspect of the present invention is an information processing method for detecting a defective, abnormal inspection object using a set of image data of normal inspection objects, the method including the steps of: a) learning, by deep learning, a process of generating a restored image, in which a concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of normal inspection objects; b) comparing a restored image, generated by the process learned in step a) from image data in which a part of an inspection image is concealed, with the inspection image, which is an image of an inspection object not yet determined to be normal or abnormal, to determine whether the inspection object is normal or abnormal; and c) outputting the determination result of step b).
A seventh aspect of the present invention is an information processing program for detecting a defective, abnormal inspection object using a set of image data of normal inspection objects, the information processing program causing a computer to execute: a) an image restoration process of generating a restored image, in which a concealed part is restored, from image data in which that part of an inspection image is concealed, the inspection image being an image of an inspection object not yet determined to be normal or abnormal; b) a determination process of comparing the restored image with the inspection image to determine whether the inspection object is normal or abnormal; and c) an output process of outputting the result of the determination process, wherein the image restoration process has completed learning by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of normal inspection objects.
An eighth aspect of the present invention is a learning method for learning, by deep learning and in order to detect a defective, abnormal inspection object, a process of generating a restored image, in which a concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of normal inspection objects.
A ninth aspect of the present invention is a learned model that has learned, by deep learning and in order to detect a defective, abnormal inspection object, a process of generating a restored image, in which a concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images of normal inspection objects.
A tenth aspect of the present invention is the information processing apparatus of any one of the first to fifth aspects, wherein the inspection object is a tablet.
Effects of the invention
According to the first to tenth aspects of the present invention, a defective, abnormal inspection object can be detected by performing machine learning using images of normal, defect-free inspection objects, of which a large number can easily be obtained. This makes it possible to accurately detect various defects in the inspection object, including unknown defects.
In particular, according to the third aspect of the present invention, an operator or the like can easily re-confirm the defect visually, in the inspection image or on the inspection object itself, based on the information on the position of the defect. This can further improve the detection accuracy for defective inspection objects.
In particular, according to the fourth or fifth aspect of the present invention, a defective inspection object can be detected with high accuracy even when the position of the inspection object in the inspection image is slightly shifted.
Drawings
Fig. 1 is a diagram showing the structure of a tablet printing apparatus.
Fig. 2 is a perspective view of the vicinity of the transport drum.
Fig. 3 is a bottom view of a first head.
Fig. 4 is a perspective view of the vicinity of the inspection camera.
Fig. 5 is a block diagram showing the connection between the control unit and each unit in the tablet printing apparatus.
Fig. 6 is a block diagram conceptually showing a part of the function of the control unit in the tablet printing apparatus.
Fig. 7 is a diagram showing an example of a learning image obtained by capturing a normal tablet.
Fig. 8 is a schematic diagram showing how a restored image is generated from image data in which a part of a learning image, i.e., an image of a normal tablet, is concealed.
Fig. 9 is a schematic diagram showing how a restored image is generated from image data in which a part of an inspection image, i.e., an image of a tablet not yet determined to be normal or abnormal, is concealed.
Fig. 10 is a schematic diagram showing how a restored image is generated from image data in which another part of the same inspection image is concealed.
Detailed Description
Hereinafter, an embodiment of the present invention will be described with reference to the drawings. In this embodiment, a pharmaceutical tablet is taken as an example of the inspection object. Described below are an apparatus, a method, and a program that, after an image such as a product name has been printed on the surface of a tablet by an inkjet method, can detect the presence or absence of a defect such as dirt or a flaw on the tablet and thereby detect an abnormal tablet having such a defect.
<1. Overall Structure of the Tablet Printing Apparatus>
Referring to fig. 1, the overall configuration of a tablet printing apparatus 1 according to an embodiment of the present invention will be described; the tablet printing apparatus 1 includes an information processing apparatus 200, described later, that detects defects in tablets 9. Fig. 1 is a diagram showing the configuration of the tablet printing apparatus 1. The tablet printing apparatus 1 is an apparatus that transports a plurality of tablets 9 and prints images such as product names, product codes, company names, and marks on the surfaces of the tablets 9 by an inkjet method for product identification. The tablet 9 of the present embodiment has a disk shape (see fig. 4 described later), but it may have another shape such as an elliptical shape. In the following description, the direction in which the tablets 9 are conveyed is referred to as the "conveying direction", and the direction perpendicular and horizontal to the conveying direction is referred to as the "width direction".
A dividing line 90 for dividing the tablet 9 into halves is formed in the tablet 9. Hereinafter, the surface of the tablet 9 on which the dividing line 90 is formed is referred to as the "dividing line surface". The dividing line 90 passes through the center of the dividing line surface and extends straight to both ends of that surface. In the present embodiment, it is assumed that the dividing line 90 is formed on only one of the upper and lower surfaces of the disc-shaped tablet 9; that is, only one of the upper surface and the lower surface of the tablet 9 is a dividing line surface. However, the dividing line 90 may be formed on both the upper and lower surfaces, that is, on both the front and back surfaces of the tablet 9. In the present embodiment, the product name and the like are printed only on the surface opposite to the dividing line surface, along the direction of the dividing line 90 on the back surface side. However, the printing position on the tablet 9 is not limited to this.
As shown in fig. 1, the tablet printing apparatus 1 of the present embodiment includes a hopper 10, a feeder unit 20, a transport drum 30, a first printing unit 40, a second printing unit 50, a carry-out conveyor 60, and a control unit 70. The hopper 10, the feeder unit 20, the transport drum 30, the first transport conveyor 41 of the first printing unit 40, the second transport conveyor 51 of the second printing unit 50, and the carry-out conveyor 60 form a transport mechanism that conveys the tablets 9 along a predetermined transport path.
The hopper 10 is a loading section for collectively receiving the plurality of tablets 9 into the apparatus. The hopper 10 is disposed at the uppermost portion of the housing 100 of the tablet printing apparatus 1 and has an opening 11 located on the upper surface of the housing 100 and a funnel-shaped inclined surface 12 converging downward. The tablets 9 put into the opening 11 flow along the inclined surface 12 into the linear feeder 21.
The feeder unit 20 is a mechanism for feeding the tablets 9 loaded into the hopper 10 to the transport drum 30. The feeder unit 20 of the present embodiment includes a linear feeder 21, a rotary feeder 22, and a supply feeder 23. The linear feeder 21 has a plate-shaped vibration trough 211; the tablets 9 supplied from the hopper 10 to the vibration trough 211 are conveyed toward the rotary feeder 22 by its vibration. The rotary feeder 22 has a disk-shaped rotary table 221; the tablets 9 falling from the vibration trough 211 onto the upper surface of the rotary table 221 are gathered near its outer peripheral portion by the centrifugal force generated by its rotation.
The supply feeder 23 includes a plurality of cylindrical portions 231 extending vertically downward from the outer peripheral portion of the rotary table 221 to the transport drum 30. Fig. 2 is a perspective view of the vicinity of the transport drum 30. As shown in fig. 2, the cylindrical portions 231 are arranged in parallel with each other; in the example of fig. 2, there are 8 cylindrical portions 231. The tablets 9 conveyed to the outer periphery of the rotary table 221 are each supplied to one of the cylindrical portions 231 and fall down inside it, so that a plurality of tablets 9 are stacked in each cylindrical portion 231. In this way, the tablets 9 are distributed among the cylindrical portions 231 and arranged in a plurality of conveying rows, and the tablets 9 in each conveying row are supplied to the transport drum 30 sequentially, starting from the lowest tablet.
The transport drum 30 is a mechanism for delivering the tablets 9 from the supply feeder 23 to the first transport conveyor 41. The transport drum 30 has a substantially cylindrical outer peripheral surface and is rotated in the direction of the arrow in figs. 1 and 2, about a rotation shaft extending in the width direction, by power obtained from a motor. As shown in fig. 2, a plurality of holding portions 31 are provided on the outer peripheral surface of the transport drum 30. Each holding portion 31 is a recess in the outer peripheral surface of the transport drum 30. The holding portions 31 are arranged along the circumferential direction at the width-direction positions corresponding to the plurality of conveying rows, and a suction hole 32 is provided in the bottom of each holding portion 31.
A suction mechanism is provided inside the transport drum 30. When the suction mechanism operates, a negative pressure lower than atmospheric pressure is generated in each of the suction holes 32, and the holding portions 31 suck and hold the tablets 9 supplied from the supply feeder 23 one by one. An air blowing mechanism is also provided inside the transport drum 30; it locally blows pressurized gas from the inside of the transport drum 30 toward the first transport conveyor 41 described later. Thereby, the suction of the tablets 9 is maintained in the holding portions 31 not facing the first transport conveyor 41 and released only in the holding portions 31 facing it. In this way, the transport drum 30 rotates while holding the tablets 9 supplied from the supply feeder 23 by suction and delivers them to the first transport conveyor 41.
A first state detection camera 33 is provided at a position facing the outer peripheral surface of the transport drum 30. The first state detection camera 33 is an imaging unit that images the state of the tablets 9 held by the transport drum 30; it images the tablets 9 conveyed by the transport drum 30 and sends the obtained images to the control unit 70. Based on the received images, the control unit 70 detects the presence or absence of a tablet 9 in each holding portion 31, the front/back orientation of the held tablet 9, and the direction of the dividing line 90.
The first printing unit 40 is a processing unit for printing an image on one surface of the tablets 9. As shown in fig. 1, the first printing unit 40 includes a first transport conveyor 41, a second state detection camera 42, a first head unit 43, a first inspection camera 44, and a first fixing unit 45.
The first transport conveyor 41 has a pair of first pulleys 411 and an endless first conveyor belt 412 stretched between them. A portion of the first conveyor belt 412 is disposed close to and facing the outer peripheral surface of the transport drum 30. One of the pair of first pulleys 411 is rotated by power obtained from a motor, whereby the first conveyor belt 412 rotates in the direction of the arrows in figs. 1 and 2; the other first pulley 411 is driven to rotate following the first conveyor belt 412.
As shown in fig. 2, the first conveyor belt 412 is provided with a plurality of holding portions 413. Each holding portion 413 is a recess in the outer surface of the first conveyor belt 412. The holding portions 413 are arranged along the conveying direction at the width-direction positions corresponding to the plurality of conveying rows; that is, they are arranged at intervals in both the width direction and the conveying direction. The width-direction intervals of the holding portions 413 on the first conveyor belt 412 are equal to the width-direction intervals of the holding portions 31 on the transport drum 30.
A suction hole 414 is provided in the bottom of each holding portion 413. The first transport conveyor 41 has a suction mechanism inside the first conveyor belt 412. When the suction mechanism operates, a negative pressure lower than atmospheric pressure is generated in each of the suction holes 414, and the holding portions 413 suck and hold the tablets 9 delivered from the transport drum 30 one by one. The first transport conveyor 41 thus conveys the tablets 9 while holding them arranged in a plurality of conveying rows spaced in the width direction. An air blowing mechanism is also provided for the first conveyor belt 412. When the air blowing mechanism operates, the suction holes 414 of the holding portions 413 facing the second transport conveyor 51, described later, are brought to a positive pressure higher than atmospheric pressure. The suction of the tablets 9 in those holding portions 413 is thereby released, and the tablets 9 are delivered from the first transport conveyor 41 to the second transport conveyor 51. Among the tablets 9 conveyed on the first conveyor belt 412, tablets 9 held in the holding portions 413 from the dividing-line-surface side and tablets 9 held from the opposite side are mixed. When the tablets 9 are transferred from the first transport conveyor 41 to the second transport conveyor 51, their front and back surfaces are reversed.
The second state detection camera 42 is an imaging unit that images the state of the tablets 9 held on the first transport conveyor 41, upstream of the first head unit 43 in the conveying direction. The first state detection camera 33 and the second state detection camera 42 image mutually opposite surfaces of the tablets 9. The image obtained by the second state detection camera 42 is sent to the control unit 70. Based on the received image, the control unit 70 detects the presence or absence of a tablet 9 in each holding portion 413, the front/back orientation of the held tablet 9, and the direction of the dividing line 90.
The first head unit 43 is an inkjet head unit that ejects ink droplets onto the upper surfaces of the tablets 9 conveyed by the first transport conveyor 41. The first head unit 43 has 4 first heads 431 aligned along the conveying direction. The 4 first heads 431 eject ink droplets of mutually different colors, for example cyan, magenta, yellow, and black, onto the upper surfaces of those tablets 9 that are held in the holding portions 413 from the dividing-line-surface side. By superimposing the single-color images formed by these colors, a multicolor image is printed on the surface of each tablet 9. The ink ejected from each first head 431 is an edible ink made of materials approved under the Japanese Pharmacopoeia, the Food Sanitation Act, and the like.
Fig. 3 is a bottom view of one first head 431. In fig. 3, the first conveyor belt 412 and the tablets 9 held by it are indicated by two-dot chain lines. As shown enlarged in fig. 3, a plurality of nozzles 430 capable of ejecting ink droplets are provided on the lower surface of the first head 431. In the present embodiment, the nozzles 430 are arranged two-dimensionally on the lower surface of the first head 431, along the conveying direction and the width direction, at positions shifted from one another in the width direction. Arranging the nozzles 430 two-dimensionally in this way allows their width-direction positions to be brought close to one another. However, the nozzles 430 may instead be arranged in a single row along the width direction.
The method of ejecting ink droplets from the nozzles 430 is, for example, a so-called piezoelectric method, in which a voltage applied to a piezoelectric element deforms the element and pressurizes the ink in the nozzle 430 so that it is ejected. However, a so-called thermal method may be used instead, in which a heater is energized to thermally expand the ink in the nozzle 430 and eject it.
Fig. 4 is a perspective view of the vicinity of the first inspection camera 44. The first inspection camera 44 is an imaging unit for checking whether printing by the first head unit 43 has been performed and whether the tablets 9 have defects. The first inspection camera 44 images the upper surfaces of the tablets 9 conveyed on the first conveyor belt 412, downstream of the first head unit 43 in the conveying direction, and sends the obtained images to the control unit 70. Based on the received images, the control unit 70 checks whether the upper surface of each tablet 9 has a defect such as a flaw, dirt, a shifted printing position, or a missing dot. The method of detecting these defects will be described in detail later.
In the present embodiment, 8 first inspection cameras 44 are disposed at positions corresponding to the 8 tablets 9 arranged in the width direction on the first conveyor belt 412. Each first inspection camera 44 images one tablet 9 in the width direction and sequentially images the tablets 9 conveyed along the conveying direction. However, in consideration of installation space, the 8 first inspection cameras 44 may be arranged shifted from one another in the conveying direction.
The first fixing unit 45 is a mechanism for fixing the ink ejected from the first head unit 43 onto the tablets 9. In the present embodiment, the first fixing unit 45 is disposed downstream of the first inspection camera 44 in the conveying direction, but it may instead be disposed between the first head unit 43 and the first inspection camera 44. The first fixing unit 45 is, for example, a hot-air dryer that blows hot air onto the tablets 9 conveyed by the first transport conveyor 41; the ink adhering to the surface of each tablet 9 is dried by the hot air and fixed to the surface.
The second printing unit 50 is a processing unit for printing an image on the other surface of the tablets 9 after the printing by the first printing unit 40. As shown in fig. 1, the second printing unit 50 includes a second transport conveyor 51, a third state detection camera 52, a second head unit 53, a second inspection camera 54, a second fixing unit 55, and a defective product collecting unit 56.
The second transport conveyor 51 holds and conveys the tablets 9 delivered from the first transport conveyor 41. The third state detection camera 52 images the tablets 9 conveyed by the second transport conveyor 51, upstream of the second head unit 53 in the conveying direction. The second head unit 53 ejects ink droplets onto the upper surfaces of the tablets 9 conveyed by the second transport conveyor 51. The second inspection camera 54 images the tablets 9 conveyed by the second transport conveyor 51, downstream of the second head unit 53 in the conveying direction. The second fixing unit 55 fixes the ink ejected from each second head 531 of the second head unit 53 onto the tablets 9.
The second transport conveyor 51, the third state detection camera 52, the second head unit 53, the second inspection camera 54, and the second fixing unit 55 have the same configurations as the first transport conveyor 41, the second state detection camera 42, the first head unit 43, the first inspection camera 44, and the first fixing unit 45 described above, respectively.
The defective product collecting unit 56 collects tablets 9 determined to be defective based on the captured images Ip obtained from the first inspection camera 44 and the second inspection camera 54. The defective product collecting unit 56 includes an air blowing mechanism disposed inside the second transport conveyor 51 and a collection box 561. When a tablet 9 determined to be defective is conveyed to the defective product collecting unit 56, the air blowing mechanism blows pressurized gas from inside the second transport conveyor 51 toward that tablet 9, which thereby falls off the second transport conveyor 51 and is collected in the collection box 561.
The carry-out conveyor 60 is a mechanism for carrying the tablets 9 determined to be good products out of the housing 100 of the tablet printing apparatus 1. The upstream end of the carry-out conveyor 60 is positioned below the second pulleys 511 of the second transport conveyor 51, and its downstream end is located outside the housing 100. The carry-out conveyor 60 uses, for example, a belt conveying mechanism. The tablets 9 that have passed the defective product collecting unit 56 are released from the suction of the suction holes and fall from the second transport conveyor 51 onto the upper surface of the carry-out conveyor 60, which carries them out of the housing 100.
The control unit 70 controls the operation of each unit in the tablet printing apparatus 1. Fig. 5 is a block diagram showing the connection between the control unit 70 and each unit in the tablet printing apparatus 1. As conceptually shown in fig. 5, the control unit 70 is configured by a computer having a processor 701 such as a CPU, a memory 702 such as a RAM, a storage device 703 such as a hard disk drive, a receiving unit 704, and a transmitting unit 705. The storage device 703 stores therein a computer program P and data D for executing printing processing and inspection of the tablet 9. However, the receiving unit 704 and the transmitting unit 705 may be provided separately from the control unit 70.
The computer program P is read from the storage medium M in which the program P is stored, and is stored in the storage device 703 of the control unit 70. Examples of the storage medium M include a CD-ROM, a DVD-ROM, and a flash memory. However, the program P may be input to the control unit 70 via a network.
As shown in fig. 5, the control unit 70 communicates, via the receiving unit 704 and the transmitting unit 705, with the linear feeder 21, the rotary feeder 22, the transport drum 30 (including its motor, suction mechanism, and air blowing mechanism), the first state detection camera 33, the first transport conveyor 41 (including its motor, suction mechanism, and air blowing mechanism), the second state detection camera 42, the first head unit 43 (including the nozzles 430 of each first head 431), the first inspection camera 44, the first fixing unit 45, the second transport conveyor 51, the third state detection camera 52, the second head unit 53 (including the nozzles 430 of each second head 531), the second inspection camera 54, the second fixing unit 55, the defective product collecting unit 56, and the carry-out conveyor 60, either by wire such as Ethernet (registered trademark) or wirelessly such as Bluetooth (registered trademark) or Wi-Fi (registered trademark).
When receiving information from each unit via the receiving unit 704, the control unit 70 temporarily reads out the computer program P and the data D stored in the storage device 703 to the memory 702, and the processor 701 performs arithmetic processing based on the computer program P and the data D. The control unit 70 controls the operation of each unit by giving an instruction to each unit via the transmission unit 705. Thereby, each process for the plurality of tablets 9 is performed.
<2. Data Processing in the Control Unit>
Fig. 6 is a block diagram conceptually showing a part of the functions of the control unit 70 in the tablet printing apparatus 1. As shown in fig. 6, the control unit 70 of the present embodiment includes an angle recognition unit 71, a head control unit 72, and an inspection unit. These functions are realized by the processor 701 temporarily reading the computer program P and the data D from the storage device 703 into the memory 702 and performing arithmetic processing based on them. The function of the inspection unit is realized by the information processing apparatus 200, which is constituted by some or all of the hardware elements of the control unit 70. The information processing apparatus 200 is equipped with a learned model generated in advance by machine learning.
The angle recognition unit 71 has the function of recognizing the rotation angle (the direction of the dividing line 90) of each tablet 9 being conveyed. The angle recognition unit 71 acquires the images captured by the first state detection camera 33 and the second state detection camera 42 and recognizes, based on them, the rotation angle of each tablet 9 conveyed by the first transport conveyor 41. It likewise acquires the image captured by the third state detection camera 52 and recognizes, based on it, the rotation angle of each tablet 9 conveyed by the second transport conveyor 51.
As described above, in the present embodiment, the product name and the like are printed only on the surface opposite to the dividing line surface, along the direction of the dividing line 90 on the back surface side. The angle recognition unit 71 therefore recognizes, for each tablet 9, the rotation angle (the direction of the dividing line 90) at the time of passing the first head unit 43, based on the captured images obtained from the first state detection camera 33 and the second state detection camera 42. Similarly, it recognizes, for each tablet 9, the rotation angle at the time of passing the second head unit 53, based on the captured image obtained from the third state detection camera 52.
The front/back orientation of the conveyed tablets 9 is not fixed. As shown in fig. 4, tablets 9 held in the holding portions 413 from the dividing-line-surface side and tablets 9 held from the opposite side may therefore be conveyed mixed together. In this case, the angle recognition unit 71 may recognize the rotation angle at the first head unit 43 for some tablets 9 based on the captured image obtained from the first state detection camera 33, and for other tablets 9 based on the captured image obtained from the second state detection camera 42. Likewise, it may recognize the rotation angle at the second head unit 53 for some tablets 9 based on the captured image obtained from the third state detection camera 52, and for other tablets 9 based on the captured image obtained from the second state detection camera 42.
The head control unit 72 has the function of controlling the operations of the first head unit 43 and the second head unit 53. As shown in fig. 6, the head control unit 72 has a first storage unit 721, whose function is realized, for example, by the storage device 703 described above. The first storage unit 721 stores print image data D1 containing the image to be printed on the tablets 9. The image is a product name, a product code, a company name, a mark, or the like, formed of, for example, a character string of letters and numerals (see fig. 4 and fig. 7 described later); however, it may also be a mark or an illustration other than a character string. The image is printed on the surface of the tablet 9 opposite to the dividing line surface, along the dividing line 90 located on the back surface, although it may instead be printed on the dividing line surface along the dividing line 90. The print image data D1 also includes information specifying the print position and print direction of the image on the tablet 9.
When printing on the surfaces of the tablets 9 as products, the head control unit 72 reads the print image data D1 from the first storage unit 721, rotates it in accordance with the rotation angle recognized by the angle recognition unit 71, and controls the first heads 431 and the second heads 531 based on the rotated print image data D1. The image indicated by the print image data D1 is thereby printed on the surface of each tablet 9 along its dividing line 90.
The function of the inspection unit will be described in detail later.
<3. Information Processing Apparatus 200>
Next, the configuration of the information processing apparatus 200 will be explained. As described above, the function of the inspection unit in the control unit 70 is realized by the information processing apparatus 200, which is constituted by some or all of the hardware elements of the control unit 70. The information processing apparatus 200 is equipped with a learned model generated in advance by machine learning, and it can detect a defect such as a flaw on a tablet 9 and thereby detect an abnormal tablet 9 having the defect. As shown in fig. 6, the information processing apparatus 200 includes, as functions, an image restoration unit 201, a determination unit 202, and an output unit 203.
First, the procedure for generating, by machine learning, the learned model to be installed in the information processing apparatus 200 will be described. The flow at the time of this learning is shown conceptually by broken lines in fig. 6. For learning, a plurality of learning images Io (see fig. 7), obtained by imaging normal tablets 9, are prepared in advance. Specifically, the first inspection camera 44 repeatedly images tablets 9 that have no defects such as flaws among the tablets 9 conveyed on the first conveyor belt 412 downstream of the first head unit 43 in the conveying direction. The captured images of the upper surfaces of these tablets 9 are prepared as learning images Io of normal tablets 9; in the present embodiment, 1000 learning images Io are prepared. The machine learning itself is usually carried out outside the tablet printing apparatus 1. The learning images Io are input to the image restoration unit 201.
When the learning images Io are input, the image restoration unit 201 divides each learning image Io into a plurality of regions (see fig. 8). In the present embodiment, each image is divided into a total of 16 regions (regions S1 to S16), 4 vertically by 4 horizontally, although the number of regions into which the learning image Io is divided is not limited to this. In the present embodiment, the regions S1 to S16 all have the same size; however, the learning image Io may instead be divided into regions of different sizes.
Next, the image restoration unit 201 generates image data Ih in which one of the regions S1 to S16 of each learning image Io is hidden. The upper part of fig. 8 shows, as an example, image data Ih in which the region S2 of the learning image Io is hidden. The image restoration unit 201 of the present embodiment hides the regions S1 to S16 one at a time, in order starting from the region S1, and thus generates 16 pieces of image data Ih for each learning image Io. For the 1000 learning images Io, it therefore generates 16000 pieces of image data Ih in total. However, the image restoration unit 201 may instead use a random number generator to hide one of the regions S1 to S16 at random and generate a predetermined number of pieces of image data Ih for each learning image Io.
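As a rough illustration of this masking step (not part of the patent disclosure), the following Python sketch hides one of the 16 regions of a grayscale image at a time; the image size, fill value, and function names are assumptions chosen for illustration.

```python
import numpy as np

GRID = 4  # 4 x 4 = 16 regions S1 to S16, as in the embodiment

def hide_region(image: np.ndarray, k: int, fill: float = 0.0) -> np.ndarray:
    """Return a copy of `image` with the k-th grid region (0-based) filled in."""
    h, w = image.shape
    rh, rw = h // GRID, w // GRID
    row, col = divmod(k, GRID)  # k = 0 corresponds to region S1
    masked = image.copy()
    masked[row * rh:(row + 1) * rh, col * rw:(col + 1) * rw] = fill
    return masked

def masked_variants(image: np.ndarray) -> list[np.ndarray]:
    """Generate the 16 pieces of image data Ih for one learning image Io."""
    return [hide_region(image, k) for k in range(GRID * GRID)]

# In the embodiment, 1000 learning images Io yield 16000 pieces of image data Ih;
# a few dummy images keep this illustration light.
learning_images = [np.random.rand(64, 64).astype(np.float32) for _ in range(10)]
training_pairs = [(ih, io) for io in learning_images for ih in masked_variants(io)]
```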
Next, the image restoration unit 201 performs learning based on deep learning so that a restored image Ir, in which the hidden part is restored, can be generated with high accuracy from each piece of image data Ih. Specifically, the image restoration unit 201 performs machine learning on a learning model X (a, b, c …) for the image restoration process that generates the restored image Ir, using as training data, i.e., as ground-truth data, the original learning image Io from which each piece of image data Ih was generated. As an example, fig. 8 shows the case where a restored image Ir, in which the hidden region S2 is restored, is generated with high accuracy from the image data Ih.
At this time, using a convolutional neural network such as U-Net or FusionNet, the image restoration unit 201 repeatedly executes an encoding process of extracting features from the image data Ih to generate latent variables and a decoding process of generating the restored image Ir from the latent variables. The parameters of the encoding and decoding processes, namely the parameters a, b, c … of the learning model X (a, b, c …), are adjusted, updated, and stored by error backpropagation, gradient descent, or the like, so as to minimize the difference in pixel values between the restored image Ir after decoding and the original learning image Io from which the image data Ih was generated before encoding. The image restoration unit 201 may perform learning once or several times with each piece of image data Ih.
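A minimal PyTorch sketch of this learning step is given below, with a small convolutional encoder and decoder standing in for the U-Net or FusionNet named above; the layer sizes, the Adam optimizer, and the learning rate are illustrative assumptions, not specified by the patent.

```python
import torch
import torch.nn as nn

class RestorationModel(nn.Module):
    """Learning model X (a, b, c ...): image data Ih -> latent variables -> restored image Ir."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # encoding process: features -> latent variables
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # decoding process: latent variables -> image
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, ih: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(ih))

model = RestorationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
pixel_loss = nn.MSELoss()  # difference in pixel values between Ir and Io

def train_step(ih: torch.Tensor, io: torch.Tensor) -> float:
    """One update of the parameters by error backpropagation and a gradient step.
    ih, io: tensors of shape (batch, 1, H, W) built from the masked data above."""
    optimizer.zero_grad()
    ir = model(ih)             # restored image Ir
    loss = pixel_loss(ir, io)  # compare with the ground-truth learning image Io
    loss.backward()            # error backpropagation
    optimizer.step()           # gradient-based parameter update
    return loss.item()
```

In practice a deeper network with skip connections, as in U-Net, would normally be used; the sketch keeps only the encode-decode structure and the pixel-difference loss that the text describes.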
However, the method of machine learning for the image restoration process that generates the restored image Ir with high accuracy is not limited to this. For example, in addition to the learning model X (a, b, c …) that generates the restored image Ir, the image restoration unit 201 may include a learning model Y (p, q, r …) that compares the generated restored image Ir with the learning image Io and determines which is the original image. A generative adversarial network may then be formed in which the learning model X (a, b, c …) and the learning model Y (p, q, r …) are trained alternately, in competition with each other, applying error backpropagation based on the generation results of the learning model X (a, b, c …) and the determination results of the learning model Y (p, q, r …). Examples of such generative adversarial networks include GAN and pix2pix.
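The adversarial variant can be sketched as follows, reusing RestorationModel, its optimizer, and pixel_loss from the previous sketch as learning model X and adding a small discriminator as learning model Y; the discriminator architecture and the loss weighting are assumptions following the common GAN/pix2pix pattern, not a formulation fixed by the patent.

```python
import torch
import torch.nn as nn

# Learning model Y (p, q, r ...): judges whether an image is an original
# learning image Io ("real") or a generated restored image Ir.
discriminator = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),
)
opt_y = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def adversarial_step(model, opt_x, pixel_loss, ih, io):
    """Alternate updates of model Y and model X, competing as in a GAN."""
    real = torch.ones(io.size(0), 1)
    fake = torch.zeros(io.size(0), 1)

    # 1) Update Y: learn to tell Io (original) from Ir (generated).
    opt_y.zero_grad()
    ir = model(ih).detach()
    loss_y = bce(discriminator(io), real) + bce(discriminator(ir), fake)
    loss_y.backward()
    opt_y.step()

    # 2) Update X: restore Ir so that Y mistakes it for an original,
    #    while still matching Io pixel-wise (pix2pix-style combined loss).
    opt_x.zero_grad()
    ir = model(ih)
    loss_x = pixel_loss(ir, io) + 0.01 * bce(discriminator(ir), real)
    loss_x.backward()
    opt_x.step()
    return loss_x.item(), loss_y.item()
```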
When the machine learning is completed as described above, the learned model X (a, b, c …) is installed in the information processing apparatus 200, and the tablet printing apparatus 1 can detect defects of the tablets 9 using it. To detect a defect of a tablet 9, the information processing apparatus 200 in the tablet printing apparatus 1 first acquires, from the first inspection camera 44, a captured image Ip of a tablet 9 conveyed on the first conveyor belt 412 downstream of the first head unit 43 in the conveying direction, and, from the second inspection camera 54, a captured image Ip of a tablet 9 conveyed on the second conveyor belt 512 downstream of the second head unit 53. The captured image Ip is then rotated by the rotation angle recognized by the angle recognition unit 71 to generate an inspection image Ii. The inspection image Ii is an image of a tablet 9 for which the presence or absence of a defect, that is, whether it is normal or abnormal, has not yet been determined. In the following description, it is assumed that the inspection image Ii contains a defect De at a position within the region S15 described later, and that the defect De is a flaw. However, the defect De may instead be ink contamination, a shifted printing position, or a missing dot.
Next, the image restoration unit 201 divides each inspection image Ii, as in learning, into a total of 16 regions (regions S1 to S16), 4 vertically by 4 horizontally, and creates image data Ih in which one of the regions S1 to S16 of each inspection image Ii is hidden. Figs. 9 and 10 each illustrate the generation, with high accuracy, of a restored image Ir in which one hidden region is restored from image data Ih. Specifically, fig. 9 shows the case where a restored image Ir with the region S1 restored (hereinafter "restored image Ir1") is generated with high accuracy from image data Ih with the region S1 hidden (hereinafter "image data Ih1"). Fig. 10 shows the case where a restored image Ir with the region S15 restored (hereinafter "restored image Ir15") is generated with high accuracy from image data Ih with the region S15 hidden (hereinafter "image data Ih15"). Since the region S15 is hidden, the image restoration unit 201 cannot recognize the defect De; in fig. 10, however, the defect De is shown in white in the image data Ih for convenience of explanation.
Next, as in learning, the image restoration unit 201 uses the convolutional neural network to execute the encoding process of extracting features from the image data Ih, in which a part of the inspection image Ii is hidden, to generate latent variables, and the decoding process of generating the restored image Ir from the latent variables. Using the learned model X (a, b, c …), it generates a plurality of restored images Ir from the image data Ih while sequentially changing the position of the hidden part.
Specifically, the image restoration unit 201 first generates, using the learning model X (a, b, c …), the restored image Ir1, in which the region S1 is restored, from the image data Ih1, in which the region S1 of the inspection image Ii is hidden, and outputs it to the determination unit 202. Next, it generates the restored image Ir2, in which the region S2 is restored, from the image data Ih2, in which the region S2 is hidden, and outputs it to the determination unit 202. The image restoration unit 201 repeats this restoration process while sequentially changing the position of the hidden part: it generates the restored image Ir15, in which the region S15 is restored, from the image data Ih15, in which the region S15 is hidden, and finally the restored image Ir16, in which the region S16 is restored, from the image data Ih16, in which the region S16 is hidden, outputting each restored image to the determination unit 202.
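The inspection-time loop can be sketched as follows, assuming the trained model from the earlier sketch and an inspection image tensor of shape (1, 1, H, W); hide_region_t is an illustrative tensor version of the earlier masking helper.

```python
import torch

GRID = 4  # same 4 x 4 division of the inspection image Ii

def hide_region_t(image: torch.Tensor, k: int, fill: float = 0.0) -> torch.Tensor:
    """Hide the k-th region of a (1, 1, H, W) image tensor."""
    _, _, h, w = image.shape
    rh, rw = h // GRID, w // GRID
    row, col = divmod(k, GRID)
    masked = image.clone()
    masked[..., row * rh:(row + 1) * rh, col * rw:(col + 1) * rw] = fill
    return masked

@torch.no_grad()
def restore_all_regions(model, inspection_ii: torch.Tensor) -> list[torch.Tensor]:
    """Generate Ir1 to Ir16, hiding regions S1 to S16 of the inspection image in turn."""
    model.eval()
    return [model(hide_region_t(inspection_ii, k)) for k in range(GRID * GRID)]
```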
Here, as described above, the learned model X (a, b, c …) is a model whose parameters have been adjusted to generate a restored image Ir, with the hidden part restored, from image data Ih in which a part of an image of a normal tablet 9 without a defect De is hidden. Therefore, as shown in fig. 9, when the image restoration unit 201 generates the restored image Ir1 from the image data Ih1, in which the defect-free region S1 of the inspection image Ii is hidden, the region S1 is restored with high accuracy and the restored image Ir1 contains no defect De. On the other hand, as shown in fig. 10, when the image restoration unit 201 generates the restored image Ir15 from the image data Ih15, in which the region S15 of the inspection image Ii containing the defect De is hidden, the image restoration unit 201 cannot recognize the defect De. Therefore, although the defect De exists in the region S15 of the inspection image Ii, the image restoration unit 201 generates a restored image Ir15 without the defect De, unaware of its presence.
Next, as the restored images Ir are sequentially input from the image restoration unit 201, the determination unit 202 compares each restored image Ir with the inspection image Ii to determine whether the tablet 9 is a normal tablet without a defect De or an abnormal tablet with a defect De, and outputs the determination result Dr to the output unit 203. Specifically, the determination unit 202 first compares the restored image Ir1 with the inspection image Ii and determines whether the difference in pixel values between them is greater than a predetermined allowable value. It then compares the restored image Ir2 with the inspection image Ii in the same way, and executes this determination process for all the restored images Ir, finally comparing the restored image Ir15 and then the restored image Ir16 with the inspection image Ii.
As described above, the restored image Ir15 generated by the image restoration unit 201 has no defect De, whereas the inspection image Ii has the defect De at a position within the region S15. Therefore, unlike the other comparison results, the difference in pixel values between the restored image Ir15 and the inspection image Ii is large. In this way, when the difference between a restored image Ir and the inspection image Ii is greater than the predetermined allowable value, the determination unit 202 determines the position hidden in the image data Ih from which that restored image Ir was generated to be the position of the defect De, and outputs to the output unit 203 a determination result Dr indicating the presence of the defect De and its position.
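A sketch of this determination is given below; the mean absolute pixel difference and the example threshold are assumptions, since the patent specifies only a difference in pixel values compared against a predetermined allowable value.

```python
def determine(restored_irs, inspection_ii, allowable=0.05):
    """Determination unit 202: flag the hidden region as the defect position when
    the pixel difference between a restored image Ir and the inspection image Ii
    exceeds the allowable value."""
    defect_regions = []
    for k, ir in enumerate(restored_irs):
        diff = (ir - inspection_ii).abs().mean().item()  # difference in pixel values
        if diff > allowable:
            defect_regions.append(k + 1)  # region S(k+1) is the position of a defect De
    return ("abnormal", defect_regions) if defect_regions else ("normal", [])
```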
Alternatively, when the restored images Ir are input from the image restoration unit 201, the determination unit 202 may join together the restored portions corresponding to the regions hidden in the respective pieces of image data Ih, compare the joined image with the entire inspection image Ii, and determine whether the difference in pixel values is greater than the predetermined allowable value.
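The joined-image variant can be sketched in the same setting: the restored patch of each restored image Ir is pasted into one composite, which is then compared with the entire inspection image at once.

```python
import torch

GRID = 4

def joined_restoration(restored_irs, h: int, w: int) -> torch.Tensor:
    """Join the 16 restored regions, one per Ir, into a single composite image."""
    composite = torch.zeros(1, 1, h, w)
    rh, rw = h // GRID, w // GRID
    for k, ir in enumerate(restored_irs):
        row, col = divmod(k, GRID)
        region = (..., slice(row * rh, (row + 1) * rh), slice(col * rw, (col + 1) * rw))
        composite[region] = ir[region]  # keep only the region that Ir restored
    return composite

def determine_joined(restored_irs, inspection_ii, allowable=0.05):
    """Compare the joined image with the entire inspection image at once."""
    _, _, h, w = inspection_ii.shape
    composite = joined_restoration(restored_irs, h, w)
    return (composite - inspection_ii).abs().mean().item() > allowable
```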
In this way, it is determined whether the tablets 9 conveyed on the first transport conveyor 41 and the tablets 9 conveyed on the second transport conveyor 51 have a defect De, and the inspection of all the tablets 9 is completed. When the determination result Dr is input from the determination unit 202, the output unit 203 outputs information on the presence or absence of a defect De and on the position of the defect De to a monitor, a speaker, or the like, and notifies the defective product collecting unit 56 of the tablets 9 having a defect De so that they are collected. When the determination unit 202 determines that a tablet 9 has no defect De, the output unit 203 may also display that determination result.
As described above, in the present embodiment, by performing machine learning using images of normal tablets 9 having no defect De, which can be obtained easily and in large numbers, an abnormal tablet 9 having a defect De can be detected. This makes it possible to accurately detect a wide variety of defects De, including unknown defects, in the tablet 9.
Further, the output unit 203 outputs information on the presence or absence of a defect De and on the position of the defect De. This allows an operator or the like to easily re-examine a tablet 9 determined to have a defect De using the information on the position of the defect De, which can further improve the detection accuracy for tablets 9 having a defect De.
The image restoration unit 201 of the present embodiment repeatedly executes, by a convolutional neural network, an encoding process of extracting features from the image data Ih to generate latent variables and a decoding process of generating the restored image Ir from the latent variables. Therefore, even when the position of the tablet 9 in the inspection image Ii or the learning image Io deviates slightly, or when the inspection image Ii or the learning image Io contains some noise, a tablet 9 having a defect De can be detected with high accuracy.
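As one concrete illustration of such an encoding and decoding structure, a convolutional encoder-decoder could be written as follows in PyTorch. The layer counts, channel widths, and single-channel input are assumptions made for illustration and are not the architecture of the embodiment.

    import torch
    import torch.nn as nn

    class RestorationNet(nn.Module):
        """Illustrative encoder-decoder for image restoration."""
        def __init__(self):
            super().__init__()
            # Encoding process: extract features into latent variables.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Decoding process: generate the restored image from the latents.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)      # latent variables
            return self.decoder(z)   # restored image Ir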
< 4. Modifications >
While the main embodiments of the present invention have been described above, the present invention is not limited to the above embodiments.
In the above embodiment, learning and detection of the defect De in the tablet 9 are performed using an image of the upper surface of the tablet 9 after the printing process has been performed on the tablet 9. However, learning and detection of the defect De may instead be performed using an image of the tablet 9 before the printing process is performed. Further, learning and detection of the defect De may be performed using an image obtained by imaging the tablet 9 from an oblique direction. This makes it possible to detect a defect De existing not only on the front and back surfaces of the tablet 9 but also on its side surface.
In the above-described embodiment, the learning model X(a, b, c, ...) for which machine learning has been completed in advance outside the tablet printing apparatus 1 is incorporated into the information processing apparatus 200, and the defect De in the tablet 9 is detected. However, machine learning may instead be performed with the learning model X(a, b, c, ...) already installed in the information processing apparatus 200 of the tablet printing apparatus 1, and the defect De in the tablet 9 may then be detected directly.
In the above-described embodiment, the tablet 9, which is a pharmaceutical product, is used as an example of the inspection object. The information processing apparatus 200 of the above-described embodiment determines the presence or absence of a defect De in the tablet 9, such as a flaw, a stain, a shift in the printing position, or a missing dot, and the position of the defect De. However, the inspection object may be a base material such as a film or paper that is subjected to printing in various printing apparatuses, a printed circuit board, or a component used in various apparatuses. That is, the inspection object may be any object whose appearance is substantially constant in the normal state. The information processing apparatus 200 may then determine the presence or absence of an appearance defect De in the inspection object and the position of the defect De.
That is, an information processing apparatus according to the present invention is an information processing apparatus for detecting a defective and abnormal inspection object using a set of image data of normal inspection objects, the information processing apparatus including: an image restoration unit that generates a restored image in which the concealed part of an inspection image is restored, from image data in which a part of the inspection image is concealed, the inspection image being an image obtained by imaging an inspection object that has not been determined to be normal or abnormal; a determination unit that compares the restored image with the inspection image to determine whether the inspection object is normal or abnormal; and an output unit that outputs the determination result of the determination unit. The image restoration unit has completed learning by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object. In this learning, the image restoration unit may adjust the parameters of the encoding process and the decoding process using, for example, a convolutional neural network.
An information processing method according to the present invention is an information processing method for detecting a defective and abnormal inspection object using a set of image data of normal inspection objects, the method including the steps of: a) learning, by deep learning, a process of generating a restored image in which the concealed part is restored from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object; b) comparing a restored image, obtained by using the process learned in step a) to restore image data in which a part of an inspection image is concealed, with the inspection image, which is an image obtained by imaging an inspection object that has not been determined to be normal or abnormal, to determine whether the inspection object is normal or abnormal; and c) outputting the determination result of step b).
In addition, the information processing program executed by the information processing apparatus according to the present invention is an information processing program for detecting a defective and abnormal inspection object using a set of image data of normal inspection objects, and the program may cause a computer to execute: a) an image restoration process of generating a restored image in which the concealed part of an inspection image is restored, from image data in which a part of the inspection image is concealed, the inspection image being an image obtained by imaging an inspection object that has not been determined to be normal or abnormal; b) a determination process of comparing the restored image with the inspection image to determine whether the inspection object is normal or abnormal; and c) an output process of outputting the determination result of the determination process. In the image restoration process, learning has been completed by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
In the present invention, in order to detect a defective and abnormal inspection object, the following process may be learned by deep learning: generating a restored image in which the concealed part is restored from image data in which a part of a learning image is concealed, the learning image being an image obtained by imaging a normal inspection object.
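A hypothetical training step for this learning process is sketched below in PyTorch. The optimizer handling, batch layout, fixed mask size, and random mask position policy are all assumptions, and model stands for an encoder-decoder such as the one sketched above.

    import torch
    import torch.nn.functional as F

    def train_step(model, optimizer, batch, rh=32, rw=32):
        """One step: conceal a region of each learning image Io and
        train the model to restore the concealed content."""
        b, _, h, w = batch.shape  # batch of normal (defect-free) images
        masked = batch.clone()
        for k in range(b):
            top = torch.randint(0, h - rh + 1, (1,)).item()
            left = torch.randint(0, w - rw + 1, (1,)).item()
            masked[k, :, top:top + rh, left:left + rw] = 0.0
        restored = model(masked)
        # The restored image should match the original learning image,
        # including the region that was concealed.
        loss = F.mse_loss(restored, batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()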
In the present invention, in order to detect a defective and abnormal inspection object, it is sufficient to have a learned model in which the following process has been learned by deep learning: generating a restored image in which the concealed part is restored from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
Thus, by performing machine learning using images of normal, defect-free inspection objects, which can be obtained easily and in large numbers, defective and abnormal inspection objects can be detected. This makes it possible to accurately detect a wide variety of defects, including unknown defects, in the inspection object.
In the above embodiment, the first printing unit 40 and the second printing unit 50 each include four heads. However, the number of heads included in each of the printing units 40 and 50 may be one to three, or five or more.
The detailed structure of the tablet printing apparatus 1 may differ from that shown in the drawings of the present application. In addition, the elements described in the above-described embodiment and modifications may be combined as appropriate as long as no contradiction arises.
Description of the symbols
1 tablet printing apparatus
9 tablet
10 hopper
20 feeding unit
30 conveying roller
33 first state detection camera
40 first printing unit
41 first conveyor
42 second state detection camera
43 first head unit
44 first inspection camera
45 first fixing unit
50 second printing unit
51 second conveyor
52 third state detection camera
53 second head unit
54 second inspection camera
55 second fixing unit
56 defective product collection unit
60 carry-out conveyor
70 control unit
71 angle recognition unit
90 score line
100 casing
200 information processing apparatus
201 image restoration unit
202 determination unit
203 output unit
411 first pulley
412 first conveyor belt
431 first nozzle
511 second pulley
512 second conveyor belt
531 second nozzle
561 collection box
701 processor
702 memory
703 storage device
704 receiving unit
705 transmitting unit
D data
D1 print image data
De defect
Dr determination result
Ih image data
Ii inspection image
Io learning image
Ip captured image
Ir restored image
P computer program
X learning model
Y learning model

Claims (20)

1. An information processing apparatus for detecting a defective and abnormal inspection object using a set of image data of a normal inspection object, the information processing apparatus comprising:
an image restoration unit that generates a restored image in which the concealed part of an inspection image is restored, from image data in which a part of the inspection image is concealed, the inspection image being an image obtained by imaging an inspection object that has not been determined to be normal or abnormal;
a determination unit that compares the restored image with the inspection image to determine whether the inspection target is normal or abnormal; and
an output unit that outputs a result of the determination by the determination unit,
the image restoration unit has completed learning by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
2. The information processing apparatus according to claim 1,
the image restoration unit generates a plurality of restored images while sequentially changing the position of the concealed part in the image data,
the determination unit determines whether the inspection object is normal or abnormal by comparing each of the plurality of restored images with the inspection image.
3. The information processing apparatus according to claim 2,
the determination unit determines the position of the concealed part as the position of the defect when the difference between the restored image and the inspection image is greater than a predetermined allowable value,
the output unit also outputs information relating to the position of the defect.
4. The information processing apparatus according to any one of claims 1 to 3,
the image restoration unit executes the following processing:
an encoding process of extracting features from the inspection image to generate latent variables; and
a decoding process of generating the restored image from the latent variables.
5. The information processing apparatus according to claim 4,
in the learning, the image restoration unit adjusts parameters of the encoding process and the decoding process by a convolutional neural network.
6. An information processing method for detecting a defective and abnormal inspection object using a set of image data of a normal inspection object, comprising:
a) learning the following processing by deep learning: generating a restored image in which the concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object;
b) comparing a restored image, which is an image in which the concealed part of an inspection image has been restored from image data in which a part of the inspection image is concealed by using the processing learned in the step a), with the inspection image, the inspection image being an image obtained by imaging an inspection object that has not been determined to be normal or abnormal, to determine whether the inspection object is normal or abnormal; and
c) outputting the determination result of the step b).
7. An information processing program for detecting a defective and abnormal inspection object using a set of image data of a normal inspection object, the information processing program causing a computer to perform:
a) an image restoration process of generating a restored image in which the concealed part of an inspection image is restored, from image data in which a part of the inspection image is concealed, the inspection image being an image obtained by imaging an inspection object that has not been determined to be normal or abnormal;
b) a determination process of comparing the restored image with the inspection image to determine whether the inspection object is normal or abnormal; and
c) an output process of outputting the determination result of the determination process,
in the image restoration process, learning has been completed by deep learning so that a restored image in which the concealed part is restored can be generated with high accuracy from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
8. A learning method for detecting a defective and abnormal inspection object, characterized in that the following processing is learned by deep learning:
generating a restored image in which the concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
9. A learned model,
wherein the following processing has been learned by deep learning: in order to detect a defective and abnormal inspection object, generating a restored image in which the concealed part is restored, from image data in which a part of each of a plurality of learning images is concealed, the learning images being images obtained by imaging a normal inspection object.
10. The information processing apparatus according to any one of claims 1 to 5,
the inspection object is a tablet.
11. The information processing method according to claim 6,
the inspection object is a tablet.
12. The information processing program according to claim 7,
the inspection object is a tablet.
13. The learning method according to claim 8,
the inspection object is a tablet.
14. The learned model of claim 9,
the inspection object is a tablet.
15. The information processing method according to claim 6,
in the step a), a plurality of restored images are generated while sequentially changing the position of the concealed part in the image data,
in the step b), the plurality of restored images are compared with the inspection image, respectively, to determine whether the inspection object is normal or abnormal.
16. The information processing method according to claim 15,
in the step b), when the difference between the restored image and the inspection image is greater than a predetermined allowable value, the position of the concealed part is determined as the position of the defect,
in the step c), information relating to the position of the defect is also output.
17. The information processing method according to any one of claims 6, 15, and 16,
in the step a), the following processing is performed:
an encoding process of extracting features from the inspection image to generate latent variables; and
a decoding process of generating the restored image from the latent variables.
18. The information processing method according to claim 17,
in the step a), in the learning, parameters of the encoding process and the decoding process are adjusted by a convolutional neural network.
19. The information processing program according to claim 7,
in the image restoration process, a plurality of restored images are generated while sequentially changing the position of the concealed part in the image data,
in the determination process, the restored images are compared with the inspection image, respectively, to determine whether the inspection object is normal or abnormal.
20. The information processing program according to claim 19,
in the determination process, when the difference between the restored image and the inspection image is greater than a predetermined allowable value, the position of the concealed part is determined as the position of the defect,
in the output process, information relating to the position of the defect is also output.
CN201980077404.1A 2019-01-31 2019-11-08 Information processing apparatus, information processing method, information processing program, learning method, and learned model Pending CN113168686A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019-015884 2019-01-31
JP2019015884A JP7312560B2 (en) 2019-01-31 2019-01-31 Information processing device, information processing method, information processing program, learning method, and trained model
PCT/JP2019/043948 WO2020158098A1 (en) 2019-01-31 2019-11-08 Information processing device, information processing method, information processing program, learning method, and prelearned model

Publications (1)

Publication Number Publication Date
CN113168686A (en)

Family

ID=71840404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980077404.1A Pending CN113168686A (en) 2019-01-31 2019-11-08 Information processing apparatus, information processing method, information processing program, learning method, and learned model

Country Status (4)

Country Link
JP (2) JP7312560B2 (en)
CN (1) CN113168686A (en)
TW (1) TWI724655B (en)
WO (1) WO2020158098A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021186294A (en) * 2020-05-29 2021-12-13 株式会社三洋物産 Game machine
WO2022044150A1 (en) * 2020-08-26 2022-03-03 三菱重工業株式会社 Image generation device, image generation method, and program
JP7330328B1 (en) 2022-05-17 2023-08-21 Ckd株式会社 Visual inspection auxiliary device

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08136466A (en) * 1994-11-10 1996-05-31 Dainippon Screen Mfg Co Ltd Image pattern inspection device
JP5025893B2 (en) * 2004-03-29 2012-09-12 ソニー株式会社 Information processing apparatus and method, recording medium, and program
JP5660361B2 (en) * 2010-03-26 2015-01-28 ソニー株式会社 Image processing apparatus and method, and program
JP2017217784A (en) 2016-06-06 2017-12-14 フロイント産業株式会社 Solid preparation printing machine and solid preparation printing method
JP2018079240A (en) 2016-11-18 2018-05-24 株式会社Screenホールディングス Printer and validation method
US10733722B2 (en) 2017-06-27 2020-08-04 Nec Corporation Reconstructor and contrastor for anomaly detection
WO2019087803A1 (en) 2017-10-31 2019-05-09 日本電気株式会社 Image processing device, image processing method, and recording medium
TWI647626B (en) * 2017-11-09 2019-01-11 慧穩科技股份有限公司 Intelligent image information and big data analysis system and method using deep learning technology
TWM558943U (en) * 2017-11-22 2018-04-21 Aiwin Technology Co Ltd Intelligent image information and big data analysis system using deep-learning technology

Also Published As

Publication number Publication date
JP7312560B2 (en) 2023-07-21
JP2020123238A (en) 2020-08-13
WO2020158098A1 (en) 2020-08-06
TWI724655B (en) 2021-04-11
TW202032498A (en) 2020-09-01
JP2023126337A (en) 2023-09-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination