WO2018216075A1 - Image processing device, multiplex communication system, and image processing method - Google Patents


Info

Publication number
WO2018216075A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
information
machine learning
classification information
image processing
Prior art date
Application number
PCT/JP2017/019049
Other languages
English (en)
Japanese (ja)
Inventor
伸夫 長坂
英和 金井
憲司 渡邉
紘佑 土田
Original Assignee
株式会社Fuji
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Fuji filed Critical 株式会社Fuji
Priority to PCT/JP2017/019049 priority Critical patent/WO2018216075A1/fr
Priority to JP2019519818A priority patent/JP6824398B2/ja
Publication of WO2018216075A1 publication Critical patent/WO2018216075A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • H: ELECTRICITY
    • H05: ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K: PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K13/00: Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
    • H05K13/08: Monitoring manufacture of assemblages

Definitions

  • the present invention relates to an image processing apparatus, a multiplex communication system, and an image processing method for processing image data obtained by imaging the working state of a work machine.
  • a component mounting machine that mounts electronic components on a circuit board has a camera in order to recognize a mark or code written on a circuit board transported on a production line, or to inspect the state of a circuit board or electronic component.
  • as a technique related to this type of camera, there is, for example, a technique for setting a region of interest (ROI: Region Of Interest) in an imaging region and outputting its image data (for example, Patent Document 1).
  • the imaging apparatus disclosed in Patent Document 1 can read image data from each region by setting a plurality of regions of interest in the imaging region and assigning a readout address to each region of interest.
  • the component mounting machine captures a plurality of images in each process during the mounting operation. Therefore, it is desired to improve the accuracy of image processing by analyzing a plurality of image data captured during the mounting operation.
  • the present invention has been made in view of the above-described problems, and an object thereof is to provide an image processing apparatus, a multiplex communication system, and an image processing method capable of improving the accuracy of image processing by adding classification information for machine learning to image data obtained by imaging the working state of a work machine.
  • the present specification discloses an image processing apparatus including: an imaging device that captures the operation state of a work machine that performs work on a workpiece; a machine learning classification information adding unit that adds, to image data captured by the imaging device, machine learning classification information indicating the state of the work machine at the time of imaging, and outputs the result; and a machine learning classification information processing unit that receives, from the machine learning classification information adding unit, the image data to which the machine learning classification information has been added, and performs processing on the image data based on the machine learning classification information.
  • the present specification also discloses a multiplex communication system including such an image processing apparatus, and an image processing method.
  • the machine learning classification information processing unit can perform classification, association, and the like based on machine learning classification information for a plurality of pieces of image data obtained by imaging the working state by the imaging device.
  • the accuracy of image processing after feedback can be improved, for example, by analyzing each classified or associated image data and feeding back (applying) the analysis result to the image processing conditions of the image data.
  • FIG. 1 shows a configuration of a substrate-based work system 10 of the present embodiment.
  • the substrate work system 10 of this embodiment includes a plurality of component mounting machines 11, an inspection device 12, a host computer 13, and a learning PC 14.
  • the plurality of component mounting machines 11 are, for example, arranged in parallel and connected to each other, and constitute a production line for mounting electronic components on a circuit board.
  • the inspection device 12 inspects the mounting state of the circuit board on which electronic components are mounted by the plurality of component mounting machines 11.
  • the host computer 13 is connected to a plurality of component mounting machines 11 and an inspection device 12 and is a device that controls the component mounting machine 11 and the like in an integrated manner.
  • the host computer 13 creates and manages a control program (so-called recipe) for controlling operations of the component mounting machine 11 and the inspection apparatus 12.
  • the learning PC 14 is mainly configured by a computer including a CPU, a RAM, and the like, and includes a storage device 14A.
  • the storage device 14A includes, for example, a hard disk and a nonvolatile memory.
  • the learning PC 14 is connected to a plurality of component mounting machines 11 and an inspection device 12.
  • the learning PC 14 receives image data with header information from each of the component mounting machines 11, accumulates the received image data in the storage device 14A, and performs machine learning.
  • the interface of the learning PC 14 for connecting to each of the component mounting machines 11 is not particularly limited.
  • an Ethernet (registered trademark) standard, a USB standard, a Thunderbolt (registered trademark) standard, or the like can be used.
  • the component mounting machine 11 and the inspection apparatus 12 transmit information related to an error (error history information ER) to the learning PC 14.
  • FIG. 2 is a block diagram showing the configuration of the component mounting machine 11.
  • the component mounting machine 11 includes an apparatus main body 21, a mounting head 22, an X-axis slider 23, a Y-axis slider 24, and the like.
  • the component mounting machine 11 mounts an electronic component held by the suction nozzle of the mounting head 22 on a circuit board conveyed on the production line, based on the control of the mounting control board 32 of the apparatus main body 21.
  • the apparatus body 21 is connected to the multiplex communication apparatus 25 and the Y-axis slider 24 on the body side.
  • the multiplex communication device 25 on the main body side is connected to the multiplex communication device 26 on the mounting head 22 side and the multiplex communication device 27 on the X-axis slider 23 side.
  • the multiplex communication device 25 is connected to each of the multiplex communication devices 26 and 27 via, for example, an optical fiber cable 28 and performs multiplexed communication.
  • the optical fiber cable 28 preferably has high bending resistance.
  • the wire connecting the multiplex communication devices 25, 26, 27 to each other is not limited to an optical fiber cable.
  • the multiplex communication device 25 is not limited to wired communication, and may perform multiplexed communication with each of the multiplex communication devices 26 and 27 by wireless communication.
  • the multiplex communication devices 25 to 27 transmit and receive image data GD1, I / O information IO, encoder information ED, trigger signal TRG, and the like via the optical fiber cable 28.
  • the multiplex communication devices 25 to 27 multiplex and transmit/receive the image data GD1 and the like by, for example, time-division multiplexing (TDM).
  • the communication method by the multiplex communication devices 25 to 27 is not limited to the time division multiplex method but may be another multiplex method (frequency division or the like).
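As a rough illustration of the time-division scheme, the following sketch interleaves fixed-size slots from several logical channels into one frame stream; the channel names, slot size, and framing are illustrative assumptions, not details from this disclosure.

```python
# Minimal sketch of time-division multiplexing (TDM): each channel gets a
# fixed slot in a repeating frame, so image data, I/O information, encoder
# information, and trigger signals can share one optical fiber link.

def tdm_multiplex(channels: dict[str, bytes], slot_size: int) -> bytes:
    """Interleave fixed-size slots from each channel into one frame stream."""
    # Pad every channel to a multiple of slot_size so slots align.
    padded = {name: data + b"\x00" * (-len(data) % slot_size)
              for name, data in channels.items()}
    n_slots = max(len(d) // slot_size for d in padded.values())
    frames = bytearray()
    for i in range(n_slots):
        for name in sorted(padded):  # fixed slot order in every frame
            chunk = padded[name][i * slot_size:(i + 1) * slot_size]
            frames += chunk or b"\x00" * slot_size
    return bytes(frames)

def tdm_demultiplex(stream: bytes, names: list[str], slot_size: int) -> dict[str, bytes]:
    """Reassemble each channel by picking its slot out of every frame."""
    out = {name: bytearray() for name in names}
    frame_size = slot_size * len(names)
    for off in range(0, len(stream), frame_size):
        for k, name in enumerate(sorted(names)):
            out[name] += stream[off + k * slot_size: off + (k + 1) * slot_size]
    return {name: bytes(data) for name, data in out.items()}
```

A frequency-division variant would differ only in how slots are allocated; the separation/reassembly pattern is the same.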
  • the apparatus main body 21 includes an image processing board 31 and an amplifier board 33 in addition to the mounting control board 32 described above.
  • the image processing board 31 transmits a trigger signal TRG to the IPS camera 41 of the mounting head 22 and the mark camera 51 of the X-axis slider 23 based on an imaging instruction from the mounting control board 32.
  • the trigger signal TRG is transmitted to the IPS camera 41 and the like via multiplexed communication.
  • the IPS camera 41 performs imaging in response to reception of the trigger signal TRG.
  • the amplifier board 33 receives the encoder information ED from the encoder attached to the servo motor 43 of the mounting head 22 and the servo motor 53 of the X-axis slider 23.
  • the amplifier board 33 receives the encoder information ED from the mounting head 22 or the like via multiplexed communication.
  • the amplifier board 33 transmits the received encoder information ED to the mounting control board 32.
  • the mounting control board 32 determines, for example, a position to move the mounting head 22 based on the encoder information ED.
  • the mounting control board 32 transmits information such as a position change to the amplifier board 33.
  • the amplifier board 33 controls the servo motor 53 and the like by changing the power supplied to them based on the information received from the mounting control board 32, and thereby controls the position and moving speed of the mounting head 22 and the like.
  • the mounting head 22 is connected to a multiplex communication device 26 on the head side.
  • the mounting head 22 includes an IPS camera 41, various I / O elements 42, and servo motors 43, 44, 45 for each axis.
  • the mounting head 22 is attached to the X-axis slider 23.
  • the mounting head 22 is moved in the X-axis direction in the upper part of the apparatus main body 21 by the X-axis slider 23, and is moved in the Y-axis direction by the Y-axis slider 24.
  • the IPS camera 41 is, for example, a machine vision camera compatible with the Camera Link (registered trademark) standard.
  • the IPS camera 41 is a camera that captures an image of the electronic component sucked by the suction nozzle of the mounting head 22 from the side.
  • the image processing board 31 receives the image data GD1 from the IPS camera 41 of the mounting head 22 via the multiplexed communication, and detects the sucked electronic component from the image data GD1.
  • the mounting control board 32 determines the quality of the suction posture, the presence or absence of a suction error, and the like based on the detection information from the image processing board 31.
  • the IPS camera 41 or the like is not limited to the camera link (registered trademark) standard, and may be a camera compatible with another standard, for example, a standard of GigE Vision (registered trademark) or CoaXpress (registered trademark).
  • the I / O element 42 is a sensor or a relay element that detects the position of the suction nozzle of the mounting head 22, the suction status of the parts, the separation status of the suction nozzle and the circuit board, and the like.
  • the I / O element 42 transmits I / O information IO such as a detection result to the mounting control board 32 via multiplexed communication.
  • the servo motor 43 functions as a drive source that rotates a rotary head holding the plurality of suction nozzles provided in the mounting head 22, thereby revolving the suction nozzles together around the vertical axis (R-axis direction).
  • the servo motor 44 functions as a drive source for moving each suction nozzle in the vertical direction (Z-axis direction), for example.
  • the servo motor 45 functions as a drive source for rotating the individual suction nozzles about the vertical axis (θ-axis direction).
  • Each of the servo motors 43 to 45 is attached with an encoder for detecting a rotational position and the like.
  • the amplifier board 33 of the apparatus main body 21 receives the encoder information ED of the encoders of the servo motors 43 to 45 via multiplexed communication.
  • the amplifier board 33 controls the power supplied to the servo motors 43 to 45 based on the received encoder information ED, and controls the position of the suction nozzle.
  • the X-axis slider 23 is a slider device that moves the position of the mounting head 22 in the X-axis direction.
  • the X-axis slider 23 includes a mark camera 51, various I / O elements 52, and a servo motor 53.
  • the mark camera 51 is a device that images a circuit board from above, and images identification information (such as a mark) written on the circuit board.
  • the image processing board 31 receives the image data GD1 from the mark camera 51 of the X-axis slider 23 via multiplexed communication, and detects identification information from the image data GD1.
  • the mounting control board 32 determines work contents and the like based on the detection result of the image processing board 31.
  • the I / O element 52 is a position sensor that outputs a signal corresponding to the position of the X-axis slider 23, for example.
  • the I / O element 52 transmits I / O information IO indicating a detection result or the like to the mounting control board 32 via multiplexed communication.
  • Servo motor 53 functions as a drive source for moving mounting head 22 in the X-axis direction.
  • the amplifier board 33 of the apparatus main body 21 receives the encoder information ED from the encoder attached to the servo motor 53 via multiplexed communication.
  • the amplifier board 33 controls the power supplied to the servo motor 53 based on the received encoder information ED, and controls the rotational position and rotational speed of the servo motor 53. Thereby, the amplifier board 33 can control the position of the mounting head 22 in the X-axis direction.
  • the Y-axis slider 24 is a slider device that moves the position of the X-axis slider 23 in the Y-axis direction.
  • the Y-axis slider 24 includes a servo motor 29 as a drive source.
  • the amplifier board 33 is connected to the Y-axis slider 24 without using multiplexed communication.
  • the amplifier board 33 controls the power supplied to the servo motor 29 based on the encoder information ED received from the encoder of the servo motor 29.
  • the mounting head 22 is attached to the X-axis slider 23. For this reason, the mounting head 22 can move in the X-axis direction and the Y-axis direction above the circuit board in accordance with the driving of the X-axis slider 23 and the Y-axis slider 24.
  • FIG. 3 schematically shows a state where the component mounting machine 11 is viewed from above.
  • the mounting control board 32 drives a transport device 74 having a conveyor for transporting the circuit board 72, and loads the circuit board 72 from the component mounting machine 11 on the upstream side.
  • the mounting control board 32 fixedly holds the carried-in circuit board 72 at the mounting position (the position shown in FIG. 3), and starts the mounting operation after holding the circuit board 72.
  • the component mounting machine 11 includes, for example, a plurality of tape feeders 77 as devices for supplying the electronic component 75.
  • Each of the tape feeders 77 sequentially sends out the electronic components 75 to the supply position 77A based on the control of the mounting control board 32.
  • the mounting control board 32 drives the X-axis slider 23 and the Y-axis slider 24 via multiplexed communication to move the mounting head 22 to the supply position 77A of the electronic component 75.
  • the mounting head 22 sucks and holds the electronic component 75 by driving one suction nozzle 78 among a plurality of suction nozzles 78 (only one is shown in FIG. 3).
  • the mounting head 22 sucks the electronic component 75 by the suction nozzle 78 and then moves to above the circuit board 72 in accordance with the driving of the X-axis slider 23 and the Y-axis slider 24 (see arrow 79).
  • the mounting control board 32 drives the servo motors 43 to 45 of the mounting head 22 to change the position of the suction nozzle 78 and mounts the electronic component 75 on the circuit board 72.
  • the mounting head 22 moves to the supply position 77A in order to acquire the next electronic component 75 (see arrow 81).
  • the mounting head 22 repeatedly performs such mounting work on the same circuit board 72.
  • the circuit board 72 is carried out to the component mounting machine 11 on the downstream side (see arrow 82).
  • the mounting control board 32 images a plurality of image data GD1 by the IPS camera 41 and the mark camera 51 during the series of operations described above.
  • the mounting control board 32 causes the IPS camera 41 to image, in the fixed imaging region 84, the suction nozzle 78 that sucks and holds the electronic component 75 while it moves from the supply position 77A to the circuit board 72.
  • the mounting control board 32 images the suction nozzle 78 after mounting, that is, the suction nozzle 78 not holding the electronic component 75 in the fixed imaging region 85 above the circuit board 72 by the IPS camera 41.
  • the multiplex communication device 25 of this embodiment transmits, to the learning PC 14, image data GD2 in which header information HD is added to the image data GD1, separately from the image data GD1 transmitted to the apparatus main body 21.
  • FIG. 4 shows a data configuration in which the header information HD is added to the image data GD1 of the IPS camera 41 as an example of the image data GD2 with the header information HD. As shown in FIG. 4, various information such as width information 61 is added to the image data GD2 as header information HD.
  • the header information HD of the present embodiment includes, for example, width information 61, height information 62, a camera number 64, trigger count information 65, I/O information IO, an error detection code (such as a checksum) 68, and encoder information ED1, ED2, ED3, ED4, ED5.
  • the multiplex communication device 25 generates, for example, image data GD2 every time the image data GD1 is captured and transmits the image data GD2 to the learning PC 14. Note that the multiplex communication device 25 may collectively transmit a plurality of image data GD2 to the learning PC 14. Also, the device that adds the header information HD to the image data GD1 is not limited to the multiplex communication device 25, but may be another device, for example, the multiplex communication device 26. In this case, the multiplex communication device 26 may add the header information HD to the image data GD1 before performing the multiplexing process.
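The header layout of FIG. 4 can be sketched as a packed structure prepended to the raw image data; the field names follow the description above, but the byte sizes, packing order, and the omission of the error detection code 68 are assumptions made for illustration.

```python
# Sketch of the header information HD attached to image data GD1 to form
# GD2. Field names follow FIG. 4; sizes and packing are assumed, and the
# error detection code 68 is omitted here for brevity.
import struct
from dataclasses import dataclass

@dataclass
class HeaderHD:
    width: int          # width information 61 (pixels of the ROI)
    height: int         # height information 62
    camera_no: int      # camera number 64 (IPS camera 41, mark camera 51, ...)
    trigger_count: int  # trigger count information 65
    io_info: int        # I/O information IO (bit flags from I/O elements)
    encoders: tuple     # ED1..ED5 (R, Z, theta, X, Y axis positions)

    _FMT = "<HHBIH5i"   # little-endian packing (assumed layout)

    def pack(self) -> bytes:
        return struct.pack(self._FMT, self.width, self.height, self.camera_no,
                           self.trigger_count, self.io_info, *self.encoders)

    @classmethod
    def unpack(cls, raw: bytes) -> "HeaderHD":
        f = struct.unpack(cls._FMT, raw)
        return cls(f[0], f[1], f[2], f[3], f[4], tuple(f[5:10]))

def make_gd2(header: HeaderHD, gd1: bytes) -> bytes:
    """Prepend the header HD to the raw image data GD1."""
    return header.pack() + gd1
```

The learning PC can then split GD2 back into HD and GD1 with `HeaderHD.unpack` and classify the image by the decoded fields.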
  • the image data GD1 has, for example, the number of pixels of 1024 (height) ⁇ 1024 (width) pixels.
  • the image data GD1 is, for example, monochrome image data, each color is represented by 8 bits (256 gradations), and has a data amount of 2 bytes per pixel.
  • the image data GD1 may be the image data itself (the entire image) captured by the IPS camera 41, or may be data of a region of interest (ROI: Region Of Interest) (a part of the image data) set in the imaging region.
  • the width information 61 is information indicating the width (number of pixels) of the region of interest, and the height information 62 is information indicating the height (number of pixels) of the region of interest.
  • the camera number 64 is a device-specific number for identifying a plurality of cameras (IPS camera 41, mark camera 51, etc.) from each other.
  • the trigger count information 65 is information for identifying the order in which imaging instructions are given to the IPS camera 41 and the like. More specifically, for example, the multiplex communication device 25 counts the number of times the trigger signal TRG (imaging instruction) is transmitted from the image processing board 31 to the IPS camera 41. For example, the multiplex communication device 25 detects the rising edge of the trigger signal TRG and performs count-up processing. When imaging by the IPS camera 41 is performed, the multiplex communication device 25 adds the count value at that time as the trigger count information 65 to the image data GD1.
  • the multiplex communication device 25 can reset the count value based on the control of the mounting control board 32, for example.
  • the mounting control board 32 resets the count value of the multiplex communication device 25 every hour.
  • based on the trigger count information 65, the learning PC 14 can collate the image data GD2 not only with its storage time (acquisition time) but also with the error history information ER of the apparatus main body 21 described later.
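The trigger counting described above might be sketched as follows; the rising-edge detection and reset behavior follow the description, while the level-sampling model is an assumption.

```python
# Sketch of trigger counting: the multiplex communication device counts
# rising edges of the trigger signal TRG and stamps the current count
# onto each captured image as trigger count information 65.
class TriggerCounter:
    def __init__(self) -> None:
        self.count = 0
        self._prev = 0

    def sample(self, trg: int) -> None:
        """Call on each sampled level of TRG; count on 0 -> 1 transitions."""
        if self._prev == 0 and trg == 1:
            self.count += 1
        self._prev = trg

    def reset(self) -> None:
        """Reset under control of the mounting control board (e.g. hourly)."""
        self.count = 0

def stamp(counter: TriggerCounter, gd1: bytes) -> tuple[int, bytes]:
    """Return (trigger count information 65, image data) for one capture."""
    return counter.count, gd1
```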
  • the I / O information IO is detection information of the I / O element 42, for example.
  • the error detection code 68 is data for detecting an error in the header information HD and the image data GD1, and is data such as a checksum and CRC (cyclic redundancy check), for example.
  • when the learning PC 14 detects an error in the header information HD of the received image data GD2 based on the error detection code 68, the learning PC 14 requests the multiplex communication device 25 to retransmit the image data GD2.
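As one instance of the error detection code 68, a simple 16-bit checksum can decide whether retransmission should be requested; the text also mentions CRC, and the specific sum used here is an illustrative assumption.

```python
# Sketch of error detection with a simple checksum standing in for the
# error detection code 68 (a CRC would be used the same way).
def checksum16(data: bytes) -> int:
    """Sum of all bytes, truncated to 16 bits."""
    return sum(data) & 0xFFFF

def needs_retransmission(payload: bytes, received_code: int) -> bool:
    """True when the recomputed code disagrees with the transmitted one,
    i.e. the learning PC should request the image data GD2 again."""
    return checksum16(payload) != received_code
```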
  • Encoder information ED1 is encoder information ED of the servo motor 43 (R axis direction).
  • the encoder information ED2 is encoder information ED of the servo motor 44 (Z-axis direction).
  • the encoder information ED3 is encoder information ED of the servo motor 45 (θ-axis direction).
  • the multiplex communication device 25 adds, as header information HD, the encoder information ED1 to ED3 of the servo motors 43 to 45, obtained by separating the multiplexed data received from the multiplex communication device 26, to the image data GD1.
  • the encoder information ED4 is the encoder information ED of the servo motor 53 (X-axis direction) of the X-axis slider 23.
  • the multiplex communication device 25 adds the encoder information ED4 of the servo motor 53 (X-axis slider 23) acquired by separating the multiplexed data received from the multiplex communication device 27 to the image data GD1 as header information HD.
  • the encoder information ED5 is encoder information ED of the servo motor 29 (Y-axis direction) of the Y-axis slider 24.
  • the multiplex communication device 25 inputs the encoder information ED5 of the servo motor 29 from the Y-axis slider 24 or the amplifier board 33 and adds it to the image data GD1 as header information HD.
  • the multiplex communication device 25 adds, for example, the encoder information ED1 to ED5 at the time the image data GD1 was imaged to the image data GD1 as header information HD. More specifically, the encoder information ED1 to ED5 is, for example, position information of the mounting head 22 and the suction nozzle when the image data GD1 was captured by the IPS camera 41, that is, machine learning classification information indicating the state of the work machine. As a result, the learning PC 14 can classify the image data GD2 and perform machine learning based on the encoder information ED1 to ED5.
  • the header information HD shown in FIG. 4 is an example, and other header information HD may be added.
  • the multiplex communication device 25 may transmit, to the apparatus main body 21, image data GD2 to which the same header information HD as, or header information HD different from, that of the image data GD2 sent to the learning PC 14 is added, instead of the image data GD1. Thereby, the apparatus main body 21 can perform processing (classification or the like) based on the header information HD. Also, the multiplex communication device 25 adds the header information HD to the image data GD1 of the mark camera 51 of the X-axis slider 23 and transmits it to the learning PC 14 in the same manner as described above.
  • in step (hereinafter simply referred to as "S") 11 in FIG. 5, the learning PC 14 accumulates, in the storage device 14A, the image data GD2 received from the plurality of multiplex communication devices 25 (component mounting machines 11) arranged on the production line.
  • the learning PC 14 reads the image data GD2 from the storage device 14A, and decodes the header information HD added to the read image data GD2.
  • the learning PC 14 performs automatic classification of the image data GD2 for use as machine learning data based on the decoded header information HD (S12).
  • the learning PC 14 detects, for example, the positions of the plurality of suction nozzles 78 provided in the mounting head 22 based on the encoder information ED1 (R-axis direction), the encoder information ED2 (Z-axis direction), and the encoder information ED3 (θ-axis direction). Based on the detected positions, the learning PC 14 determines which suction nozzle 78 is the one imaged in the image data GD2, that is, the suction nozzle 78 to be imaged. Note that the learning PC 14 may determine the imaged suction nozzle 78 based not only on the encoder information ED1 to ED3 but also on other information, for example, the I/O information IO.
  • in the storage device 14A, a storage area (such as a data folder) for storing (classifying) the image data GD2 before or after mounting is secured for each suction nozzle 78, in accordance with the number of suction nozzles 78.
  • the learning PC 14 determines the type and identification number of the suction nozzle 78 that is the imaging target of the image data GD2, and stores the image data GD2 in the data folder corresponding to the determination result (nozzles 1 to 23 in FIG. 5) (S13).
  • the types of data folders shown in FIG. 5 are examples, and the number of data folders may be increased according to the types of electronic components 75, for example.
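A minimal sketch of filing images into per-nozzle data folders based on the R-axis encoder value follows; the mapping from encoder counts to a nozzle index (evenly spaced positions, 24 nozzles, 2400 counts per revolution) is entirely an assumption for illustration.

```python
# Sketch of the automatic classification in S12-S13: infer which suction
# nozzle faces the IPS camera from the R-axis encoder value ED1, then
# file the image under that nozzle's data folder.
from pathlib import Path

def nozzle_from_encoders(ed1_r: int, counts_per_rev: int, n_nozzles: int) -> int:
    """Infer the imaged nozzle index (numbered from 1) from the R-axis position."""
    step = counts_per_rev // n_nozzles           # encoder counts per nozzle pitch
    return (ed1_r % counts_per_rev) // step + 1

def folder_for(ed1_r: int, counts_per_rev: int = 2400, n_nozzles: int = 24) -> Path:
    """Data folder (storage area) for the nozzle imaged at this R position."""
    idx = nozzle_from_encoders(ed1_r, counts_per_rev, n_nozzles)
    return Path("classified") / f"nozzle{idx}"
```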
  • the learning PC 14 detects the position of the mounting head 22 (suction nozzle 78) based on, for example, the encoder information ED4 (X-axis direction) and the encoder information ED5 (Y-axis direction). For example, based on the encoder information ED4 and ED5, the learning PC 14 determines whether the data is the image data GD2 before mounting the electronic component 75 (see FIG. 3) or the image data GD2 after mounting. As shown in FIG. 3, in the component mounting machine 11 of the present embodiment, an imaging region 84 for imaging the suction nozzle 78 that sucks the electronic component 75 and an imaging region 85 for imaging the suction nozzle 78 that does not suck the electronic component 75 are set.
  • the imaging regions 84 and 85 have different positions (coordinates) in the X-axis direction and the Y-axis direction. For this reason, the learning PC 14 determines the XY coordinate values based on the encoder information ED4 and ED5, and determines to which of the imaging regions 84 and 85 the image data GD2 corresponds. As a result, the learning PC 14 can determine whether the data is the image data GD2 before mounting (imaging region 84) or the image data GD2 after mounting (imaging region 85).
  • the learning PC 14 may also determine before and after mounting based on the I/O information IO. When the I/O element 42 detects the electronic component 75, the learning PC 14 determines that the image data GD2 is from before mounting; if the I/O element 42 cannot detect the electronic component 75, the learning PC 14 may determine that the image data GD2 is from after mounting. Further, the learning PC 14 may determine before and after mounting based on the encoder information ED2 (Z-axis direction). For example, the learning PC 14 may determine that the data is from before mounting when the encoder information ED2 indicates that the suction nozzle 78 is raised to the upper end in the Z-axis direction.
  • the learning PC 14 may determine that it is after the mounting if the encoder information ED2 indicates that the position of the suction nozzle 78 is lowered in the Z-axis direction.
  • the mounting control board 32 causes the IPS camera 41 to image the suction nozzle 78 immediately after the electronic component 75 is mounted on the circuit board 72, before the suction nozzle 78 moves to the upper end in the Z-axis direction.
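The before/after-mounting decision from the X/Y encoder values can be sketched as a point-in-region test; the coordinates given below for imaging regions 84 and 85 are placeholder assumptions.

```python
# Sketch of the before/after-mounting decision: compare the X/Y encoder
# values (ED4, ED5) against the known coordinates of imaging regions 84
# and 85. Region bounds here are illustrative, not from the disclosure.
REGION_84 = (0, 100, 0, 100)      # x_min, x_max, y_min, y_max: before mounting
REGION_85 = (200, 300, 200, 300)  # above the circuit board: after mounting

def in_region(x: int, y: int, region: tuple) -> bool:
    x_min, x_max, y_min, y_max = region
    return x_min <= x <= x_max and y_min <= y <= y_max

def classify_mount_phase(ed4_x: int, ed5_y: int) -> str:
    if in_region(ed4_x, ed5_y, REGION_84):
        return "before_mounting"   # nozzle still holds the component
    if in_region(ed4_x, ed5_y, REGION_85):
        return "after_mounting"    # nozzle imaged above the board, empty
    return "unclassified"          # left for manual classification
```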
  • the learning PC 14 classifies the image data GD2 classified by the above-described processing of S13 in more detail based on a criterion as to whether or not an error has occurred during imaging.
  • the mounting control board 32 of each of the component mounting machines 11 transmits error history information ER, which is information about errors that occur during work, to the learning PC 14.
  • the inspection apparatus 12 transmits information determined to be defective in the inspection of the circuit board 72 after mounting (position information of the electronic component 75 with defective mounting) to the learning PC 14 as error history information ER.
  • the learning PC 14 stores the received error history information ER in the storage device 14A (S15 in FIG. 6).
  • FIG. 7 shows an example of the error history information ER received from the component mounting machine 11.
  • the error history information ER includes information on time, trigger count information, camera number (No), and error contents.
  • the learning PC 14 reads out the image data GD2 stored for each data folder in the storage device 14A in S13. Further, the learning PC 14 reads the error history information ER stored in the storage device 14A.
  • the learning PC 14 collates the storage time of the image data GD2 read from the storage device 14A, the trigger count information 65 (see FIG. 4), and the camera number 64 with the error history information ER (S15 in FIG. 6). More specifically, the learning PC 14 searches the time information of the error history information ER (the time shown in the first column from the left in FIG. 7) for a time that matches the imaging (storage) time of the image data GD2. In addition, the learning PC 14 compares the trigger count information of the matching entry retrieved from the error history information ER (the number of times of imaging shown in the second column from the left in FIG. 7) with the trigger count information 65 of the image data GD2 (see FIG. 4).
  • the learning PC 14 determines whether the camera number 64 of the image data GD2 matches the camera number of the error history information ER (the No in the third column from the left in FIG. 7). In this way, the learning PC 14 searches the error history information ER for error information that matches the header information HD of the image data GD2 (S15). Thereby, the learning PC 14 can determine whether or not an error occurred during the imaging of the image data GD2. Further, even when the system times of the multiplex communication device 25 and the learning PC 14 differ from each other, the error information corresponding to the image data GD2 can be detected based on information other than the time (such as the trigger count information 65). Note that the learning PC 14 may also classify the image data GD2 based on the error history information ER of the inspection device 12 (position information of the electronic component 75 with defective mounting).
  • The storage device 14A has storage areas (such as subfolders) that further divide each of the before-mounting and after-mounting storage areas (such as data folders) according to the presence or absence of an error.
  • The learning PC 14 classifies the image data GD2 by the presence or absence of an error based on the header information HD and the error history information ER, and stores each image in the corresponding data folder (S16).
  • the learning PC 14 may notify the user of a classification error.
  • manual classification by the user can be performed on some image data GD2 that are difficult to classify.
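The two-axis folder routing described above (before/after mounting, normal/error) might be sketched as follows; the folder names are invented for the illustration.

```python
def classify_folder(before_mounting: bool, has_error: bool) -> str:
    """Route an image to one of four data folders based on its header
    information (imaging stage) and the error-history match result."""
    stage = "before_mounting" if before_mounting else "after_mounting"
    result = "error" if has_error else "normal"
    return f"{stage}/{result}"

print(classify_folder(True, False))   # → before_mounting/normal
print(classify_folder(False, True))   # → after_mounting/error
```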
  • the learning PC 14 performs machine learning based on the image data GD2 classified in S16.
  • Machine learning here is a technique for realizing, by a computer, a function similar to, for example, the human ability to learn. More specifically, by learning teacher data consisting of a plurality of events that occurred (image data), the characteristics of the teacher data are analyzed, weighted, patterned, and so on, and the learning results are used to improve the accuracy of the image processing (error detection accuracy, etc.).
  • The machine learning is, for example, deep learning or a support vector machine. For example, the learning PC 14 applies a deep learning algorithm using the classified image data GD2 as teacher data.
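As a rough illustration of this supervised learning step, the following sketch trains a nearest-centroid classifier, a deliberately simple stand-in for the deep learning algorithm mentioned above, on fabricated feature vectors standing in for classified images. All names and data are invented for the sketch.

```python
import math

def train(teacher):
    """teacher: list of (feature_vector, label) pairs built from the
    classified image data. Returns one centroid per label."""
    sums, counts = {}, {}
    for vec, label in teacher:
        s = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def predict(centroids, vec):
    # Assign the label whose centroid is nearest to the feature vector.
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))

teacher = [([0.1, 0.0], "before_normal"), ([0.9, 0.8], "before_error"),
           ([0.2, 0.1], "before_normal"), ([0.8, 0.9], "before_error")]
model = train(teacher)
print(predict(model, [0.85, 0.85]))  # → before_error
```

The point of the sketch is the workflow: classified images become labeled teacher data, a model is fit, and the fitted model is later used to judge new images.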
  • the image data GD2 of this embodiment is classified into four data folders for one suction nozzle 78.
  • The first is, for example, an image of the suction nozzle 78 before mounting: image data GD2 in the normal state in which the electronic component 75 is sucked.
  • The second is, for example, an image of the suction nozzle 78 before mounting: error image data GD2 obtained by imaging a suction nozzle 78 that failed to pick up the electronic component 75 from the supply position 77A of the tape feeder 77 (see FIG. 3).
  • Alternatively, it is image data GD2 in which the electronic component 75 is sucked but an error has occurred, for example because the suction nozzle 78 sucked the electronic component 75 in an inappropriate state (the electronic component 75 standing with its leads facing sideways, the electronic component 75 tilted, etc.), so that mounting on the circuit board 72 fails.
  • The third is, for example, an image of the suction nozzle 78 after mounting: image data GD2 in the normal state in which the electronic component 75 is no longer sucked (mounting is completed).
  • The fourth is, for example, an image of the suction nozzle 78 after mounting: image data GD2 in an error state in which the electronic component 75 is still sucked.
  • As an error in this case, for example, the electronic component 75 may have failed to detach from the suction nozzle 78 onto the circuit board 72 at the mounting position of the circuit board 72.
  • In this case, the mounting head 22 captures an image of the suction nozzle 78 in the imaging region 85 (see FIG. 3) when it moves toward the next supply position 77A while the unmounted electronic component 75 is still sucked by the suction nozzle 78.
  • The learning PC 14 reads the image data GD2 (teacher data) tagged as before mounting (normal), before mounting (error), after mounting (normal), and after mounting (error), and automatically extracts the feature points of the image data GD2. Thereby, for example, the learning PC 14 can determine by machine learning, from a large number of image data GD2 (big data), whether or not an error has occurred, without the user having to specify what to pay attention to.
  • The feature points of the image data GD2 (such as the degree of inclination of the electronic component 75) can be extracted automatically. More specifically, it is possible to learn automatically, for example, how far the electronic component 75 may be inclined with respect to the suction nozzle 78 before an error occurs before mounting.
  • The image data GD2 are machine-learned and the feature points are fed back, so that the accuracy of the image processing by the mounting control board 32 and the like can be improved.
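One feature of the kind described above, the inclination of the electronic component 75 relative to the suction nozzle 78, could be computed as in the following sketch. The corner points, threshold value, and function names are assumptions for illustration only.

```python
import math

def tilt_degrees(p1, p2):
    """Angle of the component's top edge relative to horizontal,
    from two detected corner points (x, y)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def likely_error(p1, p2, threshold_deg=10.0):
    """Flag the image as a probable pre-mounting error when the
    tilt exceeds a learned threshold (invented here)."""
    return abs(tilt_degrees(p1, p2)) > threshold_deg

print(likely_error((0, 0), (10, 1)))   # ~5.7 degrees of tilt → False
print(likely_error((0, 0), (10, 4)))   # ~21.8 degrees of tilt → True
```

In the patent's scheme the threshold would not be hand-set as here but learned from the classified teacher data and fed back into the image processing conditions.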
  • As described above, the learning PC 14 classifies the image data GD1 based on the header information HD, and applies a machine learning algorithm using the classified image data GD1 as teacher data for machine learning.
  • the image data GD2 obtained by imaging the working state of the component mounting machine 11 (work machine) is classified based on the header information HD (machine learning classification information) and used as teacher data for machine learning.
  • the accuracy of the image processing on the image data GD1 can be improved based on the learning result.
  • The header information HD (machine learning classification information) is at least one of: encoder information ED of a motor (servo motor 43, etc.) functioning as a drive source of the component mounting machine 11 (work machine); trigger count information 65 (trigger information) for identifying the order of imaging instructions given to the IPS camera 41 and the like; and I/O information IO of the detection devices (I/O elements 42, 52) provided in the component mounting machine 11 (work machine).
  • Encoder information ED is information indicating the operating state (position, etc.) of the movable part (mounting head 22) of the component mounting machine 11 (work machine), for example.
  • the information indicates the positions of the mounting head 22 and the suction nozzle 78 before and after mounting the electronic component 75 on the circuit board 72 and during the suction operation of the electronic component 75.
  • the plurality of image data GD2 can be classified for each operation state of the component mounting machine 11.
  • By comparing the trigger count information 65 added to the image data GD2 with the count of imaging instructions held in the mounting control board 32 and with the error information, it is possible to determine later when and in what state the image data GD1 was captured (for example, whether an error occurred). Thereby, based on the trigger count information 65, the plurality of image data GD2 can be classified, for example, by occurrence of errors.
  • the I / O information IO is, for example, output information of the I / O element 42 (sensor) indicating whether or not the electronic component 75 is sucked by the suction nozzle 78 of the mounting head 22. Thereby, based on the I / O information IO, the plurality of image data GD2 can be classified according to whether or not the electronic component 75 is sucked.
  • the header information HD (machine learning classification information) includes encoder information ED of a plurality of motors (such as the servo motor 53) that function as a drive source of the component mounting machine 11 (work machine).
  • The learning PC 14 (machine learning classification information processing unit) classifies the image data GD1 based on the encoder information ED of a plurality of motors (servo motor 53, servo motor 29, etc.) as information relating to the imaging position (before and after mounting) of the IPS camera 41 (imaging device). According to this, by classifying the image data GD1 based on the encoder information ED, the learning PC 14 can classify the image data GD1 for each imaging position.
  • The learning PC 14 is not limited to the encoder information ED, and may classify the image data GD1 based on, for example, whether or not the illumination of the IPS camera 41 is turned on. For example, when the learning PC 14 detects, based on the I/O information IO of the header information HD, that the illumination of the IPS camera 41 is turned on, it may classify the image data GD1 as having been captured with the IPS camera 41 (the mounting head 22) placed at a predetermined imaging position.
  • The component mounting machine 11 includes a mounting head 22 (movable unit) that performs work on the circuit board 72, an image processing board 31 (image processing unit) that receives the image data GD1 and performs image processing on it, and a mounting control board 32 (control unit) that controls the mounting head 22 based on the result of the image processing by the image processing board 31.
  • the image processing board 31 is provided separately from the learning PC 14 (machine learning classification information processing unit).
  • the learning PC 14 for processing the image data GD2 to which the header information HD is added and the image processing board 31 (image processing unit) for processing the image data GD1 to which the header information HD is not added are provided separately.
  • The component mounting machine 11 needs to execute predetermined operations sequentially and in real time based on the image data GD1 captured by the IPS camera 41 and the like. For this reason, by providing the image processing board 31 separately from the learning PC 14, processing delays of the learning PC 14, update work to repair malfunctions of the learning PC 14, and the like can be prevented from delaying the real-time image processing, that is, the processing of the component mounting machine 11 (stopping the production line, etc.).
  • the on-board working system 10 further includes a storage device 14A for storing the image data GD2 processed by the learning PC 14.
  • the image data GD2 processed (classified or the like) by the learning PC 14 is stored (accumulated) in the storage device 14A.
  • a plurality of image data GD2 accumulated in the storage device 14A can be collectively analyzed.
  • the image data GD2 can be analyzed in a state where communication with the component mounting machine 11 is disconnected (offline).
  • The learning PC 14 classifies the image data GD2 based on the error history information ER, which indicates the history of errors occurring in the component mounting machine 11, in addition to the header information HD (machine learning classification information), and applies a machine learning algorithm using the classified image data GD2 as teacher data for machine learning.
  • In this way, the learning PC 14 classifies the image data GD2 in more detail based on the error history information ER of the component mounting machine 11 in addition to the header information HD indicating the state of the component mounting machine 11, and uses them as teacher data for machine learning. Thereby, if the learning result is fed back, the accuracy of the image processing can be improved more reliably.
  • the component mounting machine 11 includes a multiplex communication system including multiplex communication devices 25, 26, 27, and the like.
  • the component mounting machine 11 multiplexes and transmits the image data GD1 with other work data (encoder information ED and I / O information IO). According to this, the component mounting machine 11 can multiplex a plurality of data and transmit them through the same transmission path, thereby reducing wiring.
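A minimal sketch of multiplexing header information with image data into a single frame for a shared transmission path follows, assuming an invented frame layout; real multiplex communication hardware uses its own framing.

```python
import struct

def pack_frame(image: bytes, encoder: int, io_bits: int, trigger: int) -> bytes:
    """Prepend a length prefix and the header information HD (encoder
    information, I/O information, trigger count) to the image payload."""
    header = struct.pack(">III", encoder, io_bits, trigger)
    return struct.pack(">I", len(image)) + header + image

def unpack_frame(frame: bytes):
    """Recover the image payload and the header fields from one frame."""
    (length,) = struct.unpack_from(">I", frame, 0)
    encoder, io_bits, trigger = struct.unpack_from(">III", frame, 4)
    image = frame[16:16 + length]
    return image, encoder, io_bits, trigger

frame = pack_frame(b"\x00\x01\x02", 1234, 0b101, 17)
print(unpack_frame(frame))
```

Sending one such frame instead of separate image and status channels is what reduces the wiring, as the text notes.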
  • the board-to-board work system 10 is an example of an image processing apparatus and a multiple communication system.
  • the component mounting machine 11 is an example of a work machine.
  • the learning PC 14 is an example of a machine learning classification information processing unit.
  • the mounting head 22 is an example of a drive unit and a movable unit.
  • the multiplex communication device 25 is an example of a machine learning classification information adding unit.
  • Servo motors 29, 43 to 45, 53 are examples of motors.
  • the image processing board 31 is an example of an image processing unit.
  • the mounting control board 32 is an example of a control unit.
  • the IPS camera 41 and the mark camera 51 are examples of an imaging device.
  • the trigger count information 65 is an example of trigger information.
  • the circuit board 72 is an example of a work.
  • the I / O elements 42 and 52 are an example of a detection device.
  • the header information HD is an example of machine learning classification information.
  • As described above in detail, the image processing apparatus (board-to-board work system 10) of the present embodiment includes: an imaging device (IPS camera 41, mark camera 51) that images the working state of the component mounting machine 11 (work machine) performing work on the circuit board 72 (work); a multiplex communication device 25 (machine learning classification information adding unit) that adds, to the image data GD1 captured by the IPS camera 41 or the like, header information HD (machine learning classification information) indicating the state of the component mounting machine 11 (work machine) at the time the image data GD1 was captured, and outputs the result; and a learning PC 14 (machine learning classification information processing unit) that receives from the multiplex communication device 25 the image data GD2 to which the header information HD has been added, and processes the image data GD2 based on the header information HD.
  • According to this, the learning PC 14 (machine learning classification information processing unit) can classify, associate, and otherwise process a plurality of image data GD1 obtained by imaging the working state with the IPS camera 41 or the like (imaging device), based on the header information HD (machine learning classification information).
  • Analysis is then performed on each group of classified or associated image data GD2, and the analysis result is fed back (applied) to the image processing conditions for the image data GD1 (image data GD2), so that the accuracy of image processing after the feedback can be improved.
  • The present application is not limited to the above embodiment, and can be implemented in forms incorporating various changes and improvements based on the knowledge of those skilled in the art.
  • In the above embodiment, machine learning is performed by the learning PC 14 using the classified image data GD2, and the accuracy of the image processing is thereby improved, but the present invention is not limited to this.
  • For example, the user may check the automatically classified image data GD2 and manually change the image processing settings (feature point extraction, etc.) of the apparatus main body 21. In this case, since the classification has already been performed automatically, the work burden on the user can be reduced.
  • the processing by the machine learning classification information processing unit (learning PC 14) of the present application is not limited to the classification processing of the image data GD2 based on the machine learning classification information (header information HD).
  • the learning PC 14 may correct the positional deviation of the suction nozzle 78 by detecting an error in the position of the suction nozzle 78 based on the encoder information ED. As a result, the position shift of the suction nozzle 78 is suppressed, so that the accuracy of image processing can be improved.
  • In the above embodiment, the learning PC 14 that performs machine learning is provided separately from the apparatus main body 21, but the configuration is not limited to this.
  • the mounting control board 32 may perform machine learning in addition to overall mounting work.
  • The position information of the drive unit in the present application is not limited to the encoder information ED of a motor (servo motor 43 or the like) functioning as a drive source of the component mounting machine 11 (work machine); for example, position information of the output shaft of another actuator (drive unit) to be driven may be used.
  • the classification information for machine learning in the present application is not limited to the header information HD, but may be information added to the footer of the image data GD1, for example.
  • The trigger information in the present application is not limited to the trigger count information 65 indicating the number of imagings. For example, other trigger information that can identify the image data GD1 may be used (the name of the work process at the start of imaging, information indicating the order in which the suction nozzle 78 and the electronic component 75 are imaged, etc.).
  • In the above embodiment, the component mounting machine 11 was employed as the work machine subject to image processing, but the present invention is not limited to this; for example, an inspection apparatus 12 that inspects the circuit board 72 after work may be employed as the work machine subject to image processing.
  • another work robot used in the FA field may be employed as a work machine to be subjected to image processing.
  • 10 board-to-board work system (image processing device, multiplex communication system)
  • 11 component mounting machine (work machine)
  • 14 learning PC (machine learning classification information processing unit)
  • 14A storage device
  • 22 mounting head (drive unit, movable unit)
  • 25 multiplex communication device (machine learning classification information adding unit)
  • 29, 43 to 45, 53 servo motors (motors)
  • 31 image processing board (image processing unit)
  • 32 mounting control board (control unit)
  • 41 IPS camera (imaging device)
  • 42, 52 I/O elements (detection devices)
  • 51 mark camera (imaging device)
  • 65 trigger count information (trigger information)
  • 72 circuit board (work)
  • ED, ED1 to ED5 encoder information
  • ER error history information
  • HD header information (machine learning classification information)
  • IO I/O information
  • GD1, GD2 image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Operations Research (AREA)
  • Manufacturing & Machinery (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Supply And Installment Of Electrical Components (AREA)
  • Image Analysis (AREA)

Abstract

The purpose of the present invention is to provide an image processing device, a multiplex communication system, and an image processing method capable of improving image processing accuracy by adding machine learning classification information to captured image data of the operating conditions of a work machine. To this end, the invention provides an image processing device comprising: an imaging device that captures an image of the operating conditions of a work machine performing an operation on a workpiece; a machine learning classification information adding unit that adds, to the image data captured by the imaging device, machine learning classification information indicating the conditions of the work machine when the image data was captured, and outputs the resulting image data; and a machine learning classification information processing unit that receives the image data output by the machine learning classification information adding unit, to which the machine learning classification information has been added, and processes the received image data on the basis of the machine learning classification information.
PCT/JP2017/019049 2017-05-22 2017-05-22 Dispositif de traitement d'image, système de communication multiplex et procédé de traitement d'image WO2018216075A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2017/019049 WO2018216075A1 (fr) 2017-05-22 2017-05-22 Dispositif de traitement d'image, système de communication multiplex et procédé de traitement d'image
JP2019519818A JP6824398B2 (ja) 2017-05-22 2017-05-22 画像処理装置、多重通信システム及び画像処理方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/019049 WO2018216075A1 (fr) 2017-05-22 2017-05-22 Dispositif de traitement d'image, système de communication multiplex et procédé de traitement d'image

Publications (1)

Publication Number Publication Date
WO2018216075A1 true WO2018216075A1 (fr) 2018-11-29

Family

ID=64395350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/019049 WO2018216075A1 (fr) 2017-05-22 2017-05-22 Dispositif de traitement d'image, système de communication multiplex et procédé de traitement d'image

Country Status (2)

Country Link
JP (1) JP6824398B2 (fr)
WO (1) WO2018216075A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021002249A1 (fr) * 2019-07-01 2021-01-07 株式会社小松製作所 Procédé de production d'un modèle d'estimation de classification de travail entraîné, données pour l'apprentissage, procédé exécuté par ordinateur et système comprenant un engin de chantier
JPWO2021001995A1 (fr) * 2019-07-04 2021-01-07
WO2023195173A1 (fr) * 2022-04-08 2023-10-12 株式会社Fuji Système de montage de composant et procédé de classification d'image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04139592A (ja) * 1990-10-01 1992-05-13 Fujitsu Ltd ニューロコンピュータへの自動教示方式
JP2008130865A (ja) * 2006-11-22 2008-06-05 Fuji Mach Mfg Co Ltd 部品吸着姿勢判別方法及び部品吸着姿勢判別システム
JP2011159699A (ja) * 2010-01-29 2011-08-18 Hitachi High-Tech Instruments Co Ltd 異常検出装置を備えた部品実装装置
JP2012212323A (ja) * 2011-03-31 2012-11-01 Sony Corp 情報処理装置、情報処理方法、及び、プログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04139592A (ja) * 1990-10-01 1992-05-13 Fujitsu Ltd ニューロコンピュータへの自動教示方式
JP2008130865A (ja) * 2006-11-22 2008-06-05 Fuji Mach Mfg Co Ltd 部品吸着姿勢判別方法及び部品吸着姿勢判別システム
JP2011159699A (ja) * 2010-01-29 2011-08-18 Hitachi High-Tech Instruments Co Ltd 異常検出装置を備えた部品実装装置
JP2012212323A (ja) * 2011-03-31 2012-11-01 Sony Corp 情報処理装置、情報処理方法、及び、プログラム

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021002249A1 (fr) * 2019-07-01 2021-01-07 株式会社小松製作所 Procédé de production d'un modèle d'estimation de classification de travail entraîné, données pour l'apprentissage, procédé exécuté par ordinateur et système comprenant un engin de chantier
JP7503370B2 (ja) 2019-07-01 2024-06-20 株式会社小松製作所 学習済みの作業分類推定モデルの製造方法、コンピュータによって実行される方法、および作業機械を含むシステム
JPWO2021001995A1 (fr) * 2019-07-04 2021-01-07
WO2021001995A1 (fr) * 2019-07-04 2021-01-07 株式会社Fuji Système de montage de composants
CN114073175A (zh) * 2019-07-04 2022-02-18 株式会社富士 元件安装系统
CN114073175B (zh) * 2019-07-04 2023-09-22 株式会社富士 元件安装系统
JP7394852B2 (ja) 2019-07-04 2023-12-08 株式会社Fuji 部品実装システム
WO2023195173A1 (fr) * 2022-04-08 2023-10-12 株式会社Fuji Système de montage de composant et procédé de classification d'image

Also Published As

Publication number Publication date
JP6824398B2 (ja) 2021-02-03
JPWO2018216075A1 (ja) 2019-12-12

Similar Documents

Publication Publication Date Title
US11972589B2 (en) Image processing device, work robot, substrate inspection device, and specimen inspection device
WO2018216075A1 (fr) Dispositif de traitement d'image, système de communication multiplex et procédé de traitement d'image
CN108142000B (zh) 基板作业系统及元件安装装置
JP2014060249A (ja) ダイボンダ、および、ダイの位置認識方法
WO2015136669A1 (fr) Dispositif de traitement d'image et système de production de substrat
JP2006339392A (ja) 部品実装装置
JP2014072409A (ja) 部品検査方法及び装置
KR101051106B1 (ko) 전자부품 실장장치 및 전자부품 실장방법
JP6411663B2 (ja) 部品実装装置
JP6956272B2 (ja) トレース支援装置
US20190254201A1 (en) Component mounter
US11330751B2 (en) Board work machine
CN109196971B (zh) 元件安装系统
US10771672B2 (en) Detachable-head-type camera and work machine
JP6131315B2 (ja) 通信システム及び電子部品装着装置
US12082345B2 (en) Component mounting machine
CN116114390B (zh) 错误原因推定装置以及错误原因推定方法
JP7153127B2 (ja) 良否判定装置および良否判定方法
JP7094366B2 (ja) 検査設定装置および検査設定方法
US11751371B2 (en) Component mounting device
JP7153126B2 (ja) 良否判定装置および良否判定方法
WO2023012981A1 (fr) Système de montage de composants
JP7473735B2 (ja) 異物検出装置および異物検出方法
WO2024033961A1 (fr) Dispositif de détection de corps étrangers et procédé de détection de corps étrangers
JPWO2017179156A1 (ja) 対基板作業機

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910543

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019519818

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17910543

Country of ref document: EP

Kind code of ref document: A1