WO2020144776A1 - Control device and control method - Google Patents
- Publication number
- WO2020144776A1 (PCT/JP2019/000394)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- relative position
- template image
- image
- target
Classifications
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K13/00—Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
- H05K13/08—Monitoring manufacture of assemblages
- H05K13/081—Integration of optical monitoring devices in assembly lines; Processes using optical monitoring devices specially adapted for controlling devices or machines in assembly lines
- H05K13/0815—Controlling of component placement on the substrate during or after manufacturing
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D3/00—Control of position or direction
- G05D3/12—Control of position or direction using feedback
-
- H—ELECTRICITY
- H05—ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
- H05K—PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
- H05K13/00—Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components
- H05K13/08—Monitoring manufacture of assemblages
- H05K13/081—Integration of optical monitoring devices in assembly lines; Processes using optical monitoring devices specially adapted for controlling devices or machines in assembly lines
- H05K13/0812—Integration of optical monitoring devices in assembly lines; Processes using optical monitoring devices specially adapted for controlling devices or machines in assembly lines the monitoring devices being integrated in the mounting machine, e.g. for monitoring components, leads, component placement
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0218—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
- G05B23/0243—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
- G05B23/0254—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks
-
- G06T3/14—
-
- G06T5/73—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/001—Industrial image inspection using an image reference approach
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/88—Image or video recognition using optical means, e.g. reference filters, holographic masks, frequency domain filters or spatial domain filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J15/00—Gripping heads and other end effectors
- B25J15/06—Gripping heads and other end effectors with vacuum or magnetic holding means
- B25J15/0616—Gripping heads and other end effectors with vacuum or magnetic holding means with vacuum
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0093—Programme-controlled manipulators co-operating with conveyor means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30141—Printed circuit board [PCB]
Definitions
- The present invention relates to a control device and a control method that use an imaging device to correct the target position to which a movable part, such as a mounting head driven by an actuator, is moved.
- In positioning control, actuators such as servo motors and linear motors are driven to move movable parts such as mounting heads to target positions.
- Patent Document 1 discloses a component mounting apparatus including a mounting head and an imaging device.
- the component mounting apparatus has a function of acquiring the contour of the component held by the mounting head from the brightness distribution of the captured image acquired by the imaging device, and correcting the mounting position of the component based on the acquired contour position of the component.
- the component mounting apparatus shortens the time required to complete positioning by performing imaging without temporarily stopping the mounting head.
- Because imaging is performed without stopping the mounting head, the relative speed between the subject and the imaging device is not zero, so the relative position between the subject and the imaging device changes during exposure and the captured image is blurred. Consequently, it cannot be uniquely determined which instant within the exposure time the contour position acquired from the captured image corresponds to, and the accuracy of correcting the position error decreases.
- The present invention has been made in view of the above, and an object thereof is to provide a control device capable of accurately correcting a position error.
- To solve the above problem, a control device according to the present invention controls a controlled device that includes a movable part and an imaging device whose relative position with respect to a target object changes as the movable part moves and which acquires a captured image of the target object.
- The control device includes: a drive unit that drives the movable part based on a drive command signal for moving the movable part to a target position; a relative position estimation unit that calculates an estimated value of the relative position between the target object and the imaging device; a template image correction unit that corrects a pre-registered template image based on a time-series signal of the estimated value of the relative position within the imaging time of the imaging device; and a target position correction unit that corrects the target position using the corrected template image.
- FIG. 6 is a block diagram showing a configuration example of the machine model calculation unit shown in FIG. 5.
- Flowchart showing the drive control processing of the control device.
- Flowchart showing the image processing of the control device.
- Diagram showing dedicated hardware for realizing the functions of the control device.
- FIG. 1 is a schematic diagram showing a configuration of a control system 10 according to the first exemplary embodiment of the present invention.
- the control system 10 includes a control device 1 and a control target device 2 which is a mechanical system controlled by the control device 1.
- the controlled device 2 is an electronic component mounter that mounts electronic components on a board.
- the control device 1 controls the operation of each part of the controlled device 2.
- the control target device 2 includes an X-axis motor 200, a Y-axis motor 201, an image pickup device 202, a movable unit 203, a suction nozzle 204, and a printed circuit board transport mechanism 205.
- the X-axis motor 200 and the Y-axis motor 201 are actuators that change the position of the movable portion 203.
- the direction in which the X-axis motor 200 moves the movable portion 203 is orthogonal to the direction in which the Y-axis motor 201 moves the movable portion 203.
- the movable unit 203 is a mounting head that mounts electronic components, and is moved in parallel with the surface of the printed board 206 by the X-axis motor 200 and the Y-axis motor 201.
- the movable unit 203 holds the electronic component by using the suction nozzle 204 and places the electronic component at a target position 207 on the printed board 206.
- Although the X-axis motor 200 and the Y-axis motor 201 are linear motors in FIG. 1, other linear-motion mechanisms, such as a combination of a rotary servo motor and a ball screw, may be used.
- the imaging device 202 is fixed to the movable portion 203, and moves with the movement of the movable portion 203. For this reason, the relative position between the imaging device 202 and the printed circuit board transport mechanism 205, which is the target, changes as the movable unit 203 moves.
- the printed circuit board transport mechanism 205 transports the printed circuit board 206 according to a command from the control device 1.
- The target position 207 is the position to which the movable portion 203 is to be moved, and is the position on the printed circuit board 206 at which the electronic component is to be placed.
- FIG. 2 is a diagram showing a first state of control system 10 shown in FIG.
- FIG. 3 is a diagram showing a second state of control system 10 shown in FIG.
- FIG. 4 is a diagram showing a third state of control system 10 shown in FIG.
- An operation in which the suction nozzle 204 of the movable unit 203 sucks the electronic component 208 and places it at the target position 207 will be described.
- In the first state, the control system 10 is performing position control of the movable portion 203.
- the target position 207 is not within the visual field region V of the image pickup device 202.
- At this stage, the control device 1 controls the positioning of the movable portion 203 using a default target position P0 that is preset based on the design data of the printed circuit board 206. If the printed circuit board 206 is distorted or the controlled device 2 has thermally expanded, an error occurs, and the target position 207 at which the electronic component 208 should actually be mounted may deviate from the default target position P0.
- If the control device 1 simply makes the central axis C of the suction nozzle 204 coincide with the default target position P0 and places the electronic component 208 on the printed board 206 in that state, the electronic component 208 is placed at a position deviated from where it should be.
- the control system 10 has a function of correcting the default target position P0 by using the captured image of the imaging device 202.
- The control device 1 calculates the relative position between the movable portion 203 and the target position 207 from the image captured by the imaging device 202, and performs positioning control while correcting the error between the target position 207 and the default target position P0 based on the calculated relative position.
- the central axis C of the suction nozzle 204 and the central position of the target position 207 coincide, and the positioning control is completed.
- FIG. 5 is a diagram showing a functional configuration of the control system 10 shown in FIG.
- the control system 10 includes a control device 1 and a control target device 2.
- The control device 1 includes a command generation unit 100, a machine model calculation unit 101, an X-axis drive unit 102 and a Y-axis drive unit 103 serving as drive units, an imaging command generation unit 104, a relative position storage unit 105, a template image correction unit 106, an image processing unit 107, and a target position correction unit 108.
- the control device 1 controls the X-axis motor 200 and the Y-axis motor 201 based on the captured image acquired by the imaging device 202 mounted on the control target device 2.
- the command generation unit 100 calculates a position command including the X-axis command position rx(t) and the Y-axis command position ry(t) based on the current target position P(t).
- As a technique for generating a position command by interpolating a target position that changes from moment to moment, for example, the method disclosed in Japanese Patent Laid-Open No. 2012-20895 can be used.
- the command generation unit 100 inputs the generated position command to the machine model calculation unit 101.
- Based on a transfer function representing the mechanical characteristics of the controlled device 2 and the position command (the X-axis command position rx(t) and the Y-axis command position ry(t)) calculated by the command generation unit 100, the machine model calculation unit 101 calculates, by the method described below, a drive command signal that comprises X-axis and Y-axis current feedforward signals, in which the gain of the frequency components that excite the mechanical vibration of the controlled device 2 is reduced in order to suppress that vibration, and position reference signals that indicate the position the movable portion 203 should follow to reach the target position P(t).
- In parallel with the above operation, the machine model calculation unit 101 calculates an estimated value of the relative position between the imaging device 202 and the target position 207 that results when the X-axis drive unit 102 and the Y-axis drive unit 103 perform control based on the drive command signal.
- the mechanical model calculation unit 101 inputs the calculated drive command signal to the X-axis drive unit 102 and the Y-axis drive unit 103, and inputs the estimated value of the relative position to the relative position storage unit 105.
- FIG. 6 is a block diagram showing a configuration example of the machine model calculation unit 101 shown in FIG.
- the machine model calculation unit 101 shown in FIG. 6 includes a drive command generation unit 301 and a relative position estimation unit 302.
- the drive command generation unit 301 includes a command distributor 303, an X-axis drive command generation unit 304, and a Y-axis drive command generation unit 305.
- the current value of the X-axis motor 200 at time t is ux(t), and the position feedback value of the X-axis motor 200 at time t is x1(t).
- the current value of the Y-axis motor 201 at time t is uy(t), and the position feedback value of the Y-axis motor 201 at time t is y1(t).
- the X component of the relative position between the imaging device 202 and the target position 207 at time t is x2(t), and the Y component is y2(t).
- It is assumed that the denominator polynomial Dx(s) and the numerator polynomial Nx1(s) of the transfer function from the current value of the X-axis motor 200 to its position feedback value, and the denominator polynomial Dy(s) and the numerator polynomial Ny1(s) of the transfer function from the current value of the Y-axis motor 201 to its position feedback value, are obtained in advance.
- These relationships are expressed by the following mathematical expression (1).
- the Laplace transform of the function f(t) is represented as L[f(t)].
- Expressions (1) and (2) can be obtained by operating the controlled device 2 in a test, acquiring time-series data of the current values and position feedback values of the X-axis motor 200 and the Y-axis motor 201 and of the measured relative position between the imaging device 202 and the target position 207, and applying signal processing to that time-series data. Further, by adjusting the scales of the position feedback and the relative position, the numerator polynomials of the transfer functions are normalized as shown in the following expression (3).
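The expression images themselves did not survive extraction. Based on the definitions above, a plausible reconstruction of expressions (1)–(3) is the following; the exact normalization used in (3) is an assumption (here, unit constant term for each numerator polynomial):

```latex
% (1) motor current -> position feedback, per axis
\mathcal{L}[x_1(t)] = \frac{N_{x1}(s)}{D_x(s)}\,\mathcal{L}[u_x(t)], \qquad
\mathcal{L}[y_1(t)] = \frac{N_{y1}(s)}{D_y(s)}\,\mathcal{L}[u_y(t)] \tag{1}

% (2) motor current -> relative position between imaging device and target
\mathcal{L}[x_2(t)] = \frac{N_{x2}(s)}{D_x(s)}\,\mathcal{L}[u_x(t)], \qquad
\mathcal{L}[y_2(t)] = \frac{N_{y2}(s)}{D_y(s)}\,\mathcal{L}[u_y(t)] \tag{2}

% (3) assumed scale normalization of the numerator polynomials
N_{x1}(0) = N_{x2}(0) = N_{y1}(0) = N_{y2}(0) = 1 \tag{3}
```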
- the X-axis command position rx(t) and the Y-axis command position ry(t) calculated by the command generation unit 100 are input to the command distributor 303 of the drive command generation unit 301 and the relative position estimation unit 302.
- The command distributor 303 inputs the X-axis command position rx(t) calculated by the command generation unit 100 to the X-axis drive command generation unit 304, and inputs the Y-axis command position ry(t) calculated by the command generation unit 100 to the Y-axis drive command generation unit 305.
- The X-axis drive command generation unit 304 calculates the current feedforward signal ux*(t) and the position reference signal x1*(t) of the X-axis motor 200 based on the transfer functions shown in the following expression (4), using the preset transfer function Fx(s) of the X-axis command filter and the transfer function represented by expression (1).
- The Y-axis drive command generation unit 305 calculates the current feedforward signal uy*(t) and the position reference signal y1*(t) of the Y-axis motor 201 based on the transfer functions shown in the following expression (5), using the preset transfer function Fy(s) of the Y-axis command filter and the transfer function represented by expression (1).
- The relative position estimation unit 302 calculates the X component and the Y component of the estimated value of the relative position based on the transfer functions shown in the following expression (6), using the command-filter transfer functions Fx(s) and Fy(s) described above and the transfer function of expression (2).
- In the following, the estimated value of a function is denoted by attaching a hat to the function; in plain text, a function with a hat is written as hat(function).
- For example, the estimated value of the function x2(t) is represented as hat(x2)(t).
- hat(x2)(t) represents the X component of the estimated value of the relative position
- hat(y2)(t) represents the Y component of the estimated value of the relative position.
- Expression (4) indicates that the X-axis drive command generation unit 304 calculates the current feedforward signal ux*(t) of the X-axis motor 200 based on the transfer function obtained by multiplying the transfer function Fx(s) of the X-axis command filter by the denominator polynomial Dx(s) of the transfer function from the X-axis current value to the position feedback, and calculates the X-axis position reference signal x1*(t) based on the transfer function obtained by multiplying Fx(s) by the numerator polynomial Nx1(s) of that transfer function.
- Expression (5) indicates that the Y-axis drive command generation unit 305 calculates the current feedforward signal uy*(t) of the Y-axis motor 201 based on the transfer function obtained by multiplying the transfer function Fy(s) of the Y-axis command filter by the denominator polynomial Dy(s) of the transfer function from the Y-axis current value to the position feedback, and calculates the Y-axis position reference signal y1*(t) based on the transfer function obtained by multiplying Fy(s) by the numerator polynomial Ny1(s) of that transfer function.
- Expression (6) indicates that the relative position estimation unit 302 calculates the X component hat(x2)(t) of the estimated relative position based on the transfer function obtained by multiplying the transfer function Fx(s) of the X-axis command filter by the numerator polynomial Nx2(s) of the transfer function from the X-axis current value to the relative position.
- It also indicates that the relative position estimation unit 302 calculates the Y component hat(y2)(t) of the estimated relative position based on the transfer function obtained by multiplying the transfer function Fy(s) of the Y-axis command filter by the numerator polynomial Ny2(s) of the transfer function from the Y-axis current value to the relative position.
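Expressions (4)–(6) likewise appear only in prose here; a reconstruction consistent with that description is:

```latex
% (4) X-axis drive command signals
\mathcal{L}[u_x^{*}(t)] = F_x(s)\,D_x(s)\,\mathcal{L}[r_x(t)], \qquad
\mathcal{L}[x_1^{*}(t)] = F_x(s)\,N_{x1}(s)\,\mathcal{L}[r_x(t)] \tag{4}

% (5) Y-axis drive command signals
\mathcal{L}[u_y^{*}(t)] = F_y(s)\,D_y(s)\,\mathcal{L}[r_y(t)], \qquad
\mathcal{L}[y_1^{*}(t)] = F_y(s)\,N_{y1}(s)\,\mathcal{L}[r_y(t)] \tag{5}

% (6) relative position estimates
\mathcal{L}[\hat{x}_2(t)] = F_x(s)\,N_{x2}(s)\,\mathcal{L}[r_x(t)], \qquad
\mathcal{L}[\hat{y}_2(t)] = F_y(s)\,N_{y2}(s)\,\mathcal{L}[r_y(t)] \tag{6}
```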
- The X-axis drive command generation unit 304 inputs the current feedforward signal ux*(t) of the X-axis motor 200 and the position reference signal x1*(t), which are the X-axis drive command signals, to the X-axis drive unit 102.
- The Y-axis drive command generation unit 305 inputs the current feedforward signal uy*(t) of the Y-axis motor 201 and the position reference signal y1*(t), which are the Y-axis drive command signals, to the Y-axis drive unit 103.
- The relative position estimation unit 302 also inputs the calculated X component hat(x2)(t) and Y component hat(y2)(t) of the estimated relative position to the relative position storage unit 105.
- The X-axis drive unit 102 performs position control of the X-axis motor 200 by two-degree-of-freedom control, a combination of feedforward control and feedback control, based on the current feedforward signal ux*(t) and the position reference signal x1*(t), which are the X-axis drive command signals calculated by the machine model calculation unit 101.
- The Y-axis drive unit 103 performs position control of the Y-axis motor 201 by two-degree-of-freedom control, a combination of feedforward control and feedback control, based on the current feedforward signal uy*(t) and the position reference signal y1*(t), which are the Y-axis drive command signals calculated by the machine model calculation unit 101.
- Because the gain of the frequency components that excite the vibration of the mechanical system can be reduced in the current feedforward signal and the position reference signal, the vibration of the mechanical system can be suppressed by controlling the X-axis motor 200 and the Y-axis motor 201 with two-degree-of-freedom control using these signals.
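As a minimal sketch of the two-degree-of-freedom structure described above (not the patent's implementation; the PD feedback law and all gains are hypothetical), one control cycle for a single axis could combine the precomputed current feedforward with feedback on the position reference:

```python
# Illustrative single-axis two-degree-of-freedom control step.
# u_ff  : current feedforward sample (e.g. ux*(t))
# x_ref : position reference sample (e.g. x1*(t))
# x_fb  : measured position feedback
# v_fb  : measured velocity feedback
# kp/kd : hypothetical PD feedback gains
def two_dof_step(u_ff, x_ref, x_fb, v_fb, kp=40.0, kd=2.0):
    """Return the current command for one control cycle."""
    error = x_ref - x_fb            # track the position reference signal
    u_fb = kp * error - kd * v_fb   # feedback part (PD on position)
    return u_ff + u_fb              # feedforward + feedback
```

When the plant matches the model, the feedforward term does most of the work and the feedback only rejects disturbances, which is why shaping the feedforward with the command filter suppresses vibration.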
- In the present embodiment, the X component hat(x2)(t) and the Y component hat(y2)(t) of the estimated value of the relative position between the imaging device 202 and the target position 207 are calculated using expression (6).
- As the drive command signal, the machine model calculation unit 101 may output a set of a current feedforward signal, a position reference signal, and a velocity reference signal.
- the mechanical model calculation unit 101 may output a torque feedforward signal as the feedforward signal instead of the current feedforward signal.
- the image capturing command generation unit 104 generates an image capturing command for controlling the image capturing timing of the image capturing device 202 based on a predetermined image processing cycle.
- the imaging command generation unit 104 inputs the generated imaging command to the imaging device 202 and the relative position storage unit 105.
- The relative position storage unit 105 receives the X component hat(x2)(t) and the Y component hat(y2)(t) of the estimated relative position calculated by the machine model calculation unit 101, and the imaging command generated by the imaging command generation unit 104.
- The relative position storage unit 105 stores, as the relative position memory, the time-series signals of the X component hat(x2)(t) and the Y component hat(y2)(t) of the estimated relative position from the exposure start time to the exposure end time of the imaging device 202.
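A hypothetical sketch of this storage (names and interface are illustrative, not from the patent): timestamped estimates are appended every control cycle, and the samples falling within an exposure window [t_start, t_end] are later handed to the template image correction:

```python
# Illustrative relative-position memory for the exposure interval.
class RelativePositionStore:
    def __init__(self):
        self.samples = []  # list of (time, x2_hat, y2_hat)

    def append(self, t, x2_hat, y2_hat):
        # called once per control cycle with the latest estimate
        self.samples.append((t, x2_hat, y2_hat))

    def exposure_window(self, t_start, t_end):
        # time-series signal of estimates during the exposure
        return [s for s in self.samples if t_start <= s[0] <= t_end]
```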
- The template image correction unit 106 corrects the template image used by the image processing unit 107 based on the relative position memory stored in the relative position storage unit 105, and calculates the reference position in the template image that the image processing unit 107 uses when calculating the observed value of the relative position.
- the template image is a pattern image used in an image processing method such as a template matching method.
- the template image refers to an image including the target position 207, which is obtained by previously photographing the printed circuit board 206 while the movable portion 203 is stationary.
- the template image correction unit 106 simulates the blurring of the captured image based on the relative position storage, and corrects the template image based on the simulation result.
- Using an image processing method such as template matching, the image processing unit 107 searches the image captured by the imaging device 202 during positioning for a region that matches the pre-registered template image, and calculates the observed value of the relative position between the imaging device 202 and the target position 207.
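The patent names template matching only generically; a minimal sketch of the idea, assuming an exhaustive search that minimizes the sum of squared differences (one common matching criterion among several), is:

```python
import numpy as np

# Illustrative template matching: slide the template over the image
# and return the top-left offset with the smallest sum of squared
# differences (SSD). Real systems typically use optimized routines.
def match_template(image, template):
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            score = float(np.sum((patch - template) ** 2))
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos  # (x, y) of the best-matching region
```

The offset of the matched region relative to the template's reference position yields the observed relative position used to correct the target position.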
- During imaging while the movable portion 203 is moving, the relative position between the imaging device 202 and the target position 207 changes, causing subject blur, so the template image captured in the still state does not exactly match the actual captured image, and image processing using the still-state template image produces an error in the observed value of the relative position. Therefore, the template image correction unit 106 predicts the subject blur based on the relative position memory stored in the relative position storage unit 105 and corrects the template image, which suppresses the error in the observed value of the relative position due to the subject blur.
- the template image correction unit 106 corrects the template image so as to include subject blurring.
- based on the relative-position history stored in the relative position storage unit 105, the template image correction unit 106 calculates a filter M corresponding to the subject blur caused by the change in the relative position between the imaging device 202 and the target position 207 while the imaging device 202 is capturing the image.
- Formula (7) converts the difference between the estimated relative position at each time t in the exposure interval [T1, T2] and the estimate at the exposure start time T1 into pixel units, places a delta function offset by that pixel distance, spreads each delta over the surrounding ±1/2 pixel, and averages the result over [T1, T2].
- since the pixel coordinates (X, Y) in formula (7) can take negative values, when formula (7) is implemented on a computer it is convenient to add an offset to the logical pixel coordinates (X, Y) so that the pixel coordinates actually used in the implementation are never negative.
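The discretization of formula (7) can be sketched as follows. This is a minimal NumPy reconstruction, not the patent's exact formula: the pixels-per-unit scale, the kernel size, the function and parameter names, and the bilinear ±1/2-pixel spreading are assumptions of this sketch.

```python
import numpy as np

def blur_filter(rel_pos, px_per_unit, size=15):
    """Accumulate a motion-blur kernel from the estimated relative-position
    trajectory sampled over the exposure interval [T1, T2].

    rel_pos : (N, 2) array of relative-position estimates; rel_pos[0]
              corresponds to the exposure start time T1.
    px_per_unit : scale converting the position unit into pixels.
    size : odd kernel size; the offset at T1 maps to the kernel centre,
           playing the role of the non-negativity offset described above.
    """
    m = np.zeros((size, size))
    c = size // 2
    for p in rel_pos:
        # displacement relative to the exposure start, in pixel units
        dx = (p[0] - rel_pos[0, 0]) * px_per_unit
        dy = (p[1] - rel_pos[0, 1]) * px_per_unit
        # spread each shifted delta over the surrounding +-1/2 pixel
        # using bilinear weights
        x0, y0 = int(np.floor(dx + c)), int(np.floor(dy + c))
        fx, fy = dx + c - x0, dy + c - y0
        for ix, wx in ((x0, 1.0 - fx), (x0 + 1, fx)):
            for iy, wy in ((y0, 1.0 - fy), (y0 + 1, fy)):
                if 0 <= ix < size and 0 <= iy < size:
                    m[iy, ix] += wx * wy
    return m / len(rel_pos)   # time average over [T1, T2]
```

For a stationary trajectory the kernel reduces to a single delta at the centre, i.e. the correction leaves the template unchanged.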
- based on formula (8), the template image correction unit 106 computes the convolution of the pre-registered template image K with the filter M described above to obtain the corrected template image K′ that takes the subject blur into account.
- the template image correction unit 106 inputs the corrected template image K′ to the image processing unit 107.
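The correction of formula (8) is a 2-D convolution of K with M. A minimal NumPy sketch follows; cropping the result back to the template size ("same"-size output) is an assumption of this sketch, since the patent text here does not specify the output size.

```python
import numpy as np

def correct_template(template, blur):
    """Corrected template K' = K * M (2-D convolution), cropped so that
    K' has the same size as K."""
    th, tw = template.shape
    bh, bw = blur.shape
    out = np.zeros((th + bh - 1, tw + bw - 1))
    # accumulate shifted, weighted copies of the template
    for i in range(bh):
        for j in range(bw):
            out[i:i + th, j:j + tw] += blur[i, j] * template
    # crop the central region back to the template size
    return out[bh // 2:bh // 2 + th, bw // 2:bw // 2 + tw]
```

With an identity kernel (a single centred delta) the corrected template equals the original, matching the stationary case.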
- FIG. 7 is a diagram showing an example of the template image K before correction by the template image correction unit 106 shown in FIG.
- the template image K is an image taken in a still state.
- FIG. 8 is a diagram showing a template image K′ after the template image K shown in FIG. 7 is corrected.
- the template image K′ is an image that takes subject blur into account, obtained by convolving the template image K shown in FIG. 7 with the filter M.
- FIG. 9 is a diagram showing an example of the filter coefficient of each pixel coordinate of the filter M used by the template image correction unit 106 shown in FIG.
- the filter coefficient of each pixel of the filter M can be obtained by numerically evaluating Expression (7) using the estimated relative position over the interval [T1, T2].
- pixels with a filter coefficient of zero are drawn black, and pixels with a non-zero filter coefficient are drawn brighter, toward white, according to the magnitude of the coefficient.
- the corrected template image K′ shown in FIG. 8 is obtained by calculating the convolutional sum of the template image K shown in FIG. 7 and the filter M shown in FIG.
- the image processing unit 107 receives the captured image captured by the image capturing apparatus 202 and the corrected template image K′ output by the template image correction unit 106.
- the image processing unit 107 uses an image processing method such as a template matching method to search for a region in the captured image that matches the corrected template image K′.
- since the filter coefficients are defined with reference to the estimated relative position at the exposure start time T1, the subject position at time T1 can be obtained by calculating the pixel coordinates in the captured image that correspond to the logical origin of the corrected template image K′.
- the captured image is scanned with the corrected template image K′, the similarity to K′ is computed for each region of the captured image, and the region whose similarity is maximal, or exceeds a preset threshold, is identified.
- the observed value of the relative position between the imaging device 202 and the target position 207 is calculated based on the pixel coordinates corresponding to the logical origin of the corrected template image K′ within the identified region.
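The scan-and-score search described above might look like the following sketch. Normalized cross-correlation is used here as the similarity measure; this is an assumption, as the text does not fix a particular similarity function.

```python
import numpy as np

def match_template(image, template):
    """Scan the captured image with the corrected template and return the
    pixel coordinates (x, y) of the best-matching region's origin and its
    similarity score (normalized cross-correlation)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_xy = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum() * (t * t).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

The returned origin corresponds to the logical origin of K′, from which the observed relative position is derived.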
- instead of computing the convolution of the template image K and the filter M in the spatial domain, when the image processing unit 107 uses a method based on the discrete Fourier transform, such as the phase-only correlation method, the template image correction unit 106 may compute the discrete Fourier transforms of K and M and multiply them in the spatial-frequency domain to obtain the discrete Fourier transform of the corrected template image K′.
- rather than computing the discrete Fourier transform of the template image K online, the template image correction unit 106 may store the result of computing it offline in advance.
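The frequency-domain variant can be sketched with NumPy's FFT routines. Precomputing the template's DFT offline corresponds to storing `K_fft` once; the names, array sizes, and padded transform size here are illustrative assumptions.

```python
import numpy as np

def correct_template_fft(template_fft, blur, shape):
    """Frequency-domain correction: multiply the (precomputed, offline)
    DFT of the template K by the DFT of the blur filter M; the product is
    the DFT of the corrected template K'."""
    m_fft = np.fft.rfft2(blur, s=shape)
    return template_fft * m_fft

# offline: compute and store the template's DFT once (zero-padded)
template = np.random.default_rng(0).random((32, 32))
K_fft = np.fft.rfft2(template, s=(64, 64))

# online: only the blur filter's DFT and a pointwise product are needed
delta = np.zeros((5, 5)); delta[0, 0] = 1.0   # identity blur, for checking
Kp_fft = correct_template_fft(K_fft, delta, (64, 64))
Kp = np.fft.irfft2(Kp_fft, s=(64, 64))[:32, :32]
```

With the identity blur, inverse-transforming `Kp_fft` recovers the original template, confirming the equivalence with spatial-domain convolution.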
- the image processing unit 107 inputs the observed value of the relative position between the imaging device 202 and the target position 207 to the target position correction unit 108.
- the relative-position history stored in the relative position storage unit 105 and the observed value of the relative position between the imaging device 202 and the target position 207 are input to the target position correction unit 108.
- the target position correction unit 108 generates a target position correction signal indicating the correction amount of the target position, and inputs the generated target position correction signal to the command generation unit 100.
- since the coordinate system of the filter is defined with reference to the estimated relative position at the exposure start time T1, the correction amount of the target position for the command generation unit 100 may be calculated from the error between that estimate and the observed relative position obtained by image processing. Alternatively, by changing the formula for the filter coefficients of the filter M to the following formula (9), the correction amount may be calculated from the error between the estimated relative position at the exposure end time T2 and the observed relative position.
- although the estimated relative position from the exposure start time T1 to the exposure end time T2 is used in the filter-coefficient calculation, estimates at times corrected for the dead time may be used instead. In that case, the relative position storage unit 105 may store, as the relative-position history, the time-series signals of the X component hat(x2)(t) and the Y component hat(y2)(t) of the estimated relative position over the time range obtained by applying the dead-time correction to the exposure start time T1 and the exposure end time T2.
- FIG. 10 is a flowchart showing a drive control process of the control device 1 shown in FIG.
- FIG. 11 is a flowchart showing image processing of the control device 1 shown in FIG.
- the control device 1 separates the task of drive control processing from the task of image processing, and executes each task at different calculation cycles.
- when the control device 1 includes a plurality of CPUs (Central Processing Units), each task may be assigned to a different CPU.
- the command generation unit 100 of the control device 1 calculates the position command based on the current value of the target position (step S101).
- the command generation unit 100 inputs the calculated position command to the machine model calculation unit 101.
- the mechanical model calculation unit 101 calculates a drive command signal for moving the movable unit 203 to the target position based on the position command (step S102).
- the mechanical model calculation unit 101 inputs the calculated drive command signal to the X-axis drive unit 102 and the Y-axis drive unit 103. Further, the mechanical model calculation unit 101 calculates an estimated value of the relative position between the image pickup device 202 and the target position based on the position command (step S103).
- the machine model calculation unit 101 inputs the calculated estimated value of the relative position to the relative position storage unit 105.
- the relative position storage unit 105 stores the estimated value of the relative position input from the machine model calculation unit 101 (step S104).
- the drive unit including the X-axis drive unit 102 and the Y-axis drive unit 103 executes control processing of the motor including the X-axis motor 200 and the Y-axis motor 201 based on the drive command signal (step S105).
- the imaging command generation unit 104 of the control device 1 outputs an imaging command to the imaging device 202 (step S201).
- the template image correction unit 106 calculates the blur filter based on the relative-position history (step S202).
- the template image correction unit 106 calculates the corrected template image based on the blur filter (step S203).
- the template image correction unit 106 inputs the corrected template image to the image processing unit 107.
- the image processing unit 107 acquires a captured image of the imaging device 202 (step S204).
- the image processing unit 107 calculates the observation value of the relative position between the imaging device 202 and the target position by image processing (step S205).
- the image processing unit 107 inputs the calculated observed value of the relative position to the target position correction unit 108.
- the target position correction unit 108 calculates a target position correction signal indicating the correction amount of the target position, based on the estimated relative position in the relative-position history and the observed relative position (step S206).
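Step S206 reduces to forming the error between the stored estimate and the observation. A trivial sketch follows; the gain parameter is an assumption of this sketch, since the text only speaks of calculating a correction amount from the error.

```python
def target_correction(estimate, observation, gain=1.0):
    """Correction amount of the target position: the error between the
    relative-position estimate stored for the exposure time and the
    observed relative position from image processing, optionally scaled
    by a gain (the gain is an assumption of this sketch)."""
    return (gain * (observation[0] - estimate[0]),
            gain * (observation[1] - estimate[1]))
```

The resulting pair would be fed back to the command generation unit 100 as the target position correction signal.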
- The command generation unit 100, machine model calculation unit 101, X-axis drive unit 102, Y-axis drive unit 103, imaging command generation unit 104, relative position storage unit 105, template image correction unit 106, image processing unit 107, and target position correction unit 108 are realized by processing circuitry. The processing circuitry may be dedicated hardware or a control circuit using a CPU.
- FIG. 12 is a diagram showing dedicated hardware for realizing the functions of the control device 1 shown in FIG.
- the processing circuit 90 is a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof.
- this control circuit is, for example, the control circuit 91 having the configuration shown in FIG.
- FIG. 13 is a diagram showing the configuration of the control circuit 91 for realizing the functions of the control device 1 shown in FIG.
- the control circuit 91 includes a processor 92 and a memory 93.
- the processor 92 is a CPU and is also called a central processing unit, a processing unit, an arithmetic unit, a microprocessor, a microcomputer, a DSP (Digital Signal Processor), or the like.
- the memory 93 is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable ROM), or an EEPROM (registered trademark) (Electrically EPROM), or a magnetic disk, flexible disk, optical disk, compact disc, mini disc, or DVD (Digital Versatile Disk).
- when the above processing circuit is realized by the control circuit 91, the functions are realized by the processor 92 reading and executing programs stored in the memory 93 that correspond to the processing of each component.
- the memory 93 is also used as a temporary memory in each process executed by the processor 92.
- as described above, in the present embodiment the subject blur is predicted and the template image K is corrected based on the estimated relative position between the imaging device 202 and the target position 207 before the position is estimated by image processing. The position error caused by subject blur in positioning control can therefore be corrected accurately, improving the accuracy of the positioning control. The method remains applicable even when the subject has a complicated shape or when the subject shape is heavily distorted by the blur, and is thus highly versatile.
- a mechanical system such as the controlled device 2 has finite mechanical rigidity, so increasing the acceleration or deceleration of the positioning command can excite mechanical vibration and lengthen the positioning time.
- in contrast, by controlling the X-axis drive unit 102 and the Y-axis drive unit 103 with the current feedforward signal and the position reference signal shown in Expressions (4) and (5), the control device 1 can suppress mechanical vibration after the completion of positioning and shorten the positioning time.
- furthermore, as shown in Expression (6), the control device 1 of the present embodiment calculates the correction amount of the target position based on an estimate of the relative position between the imaging device 202 and the target position 207 that takes the mechanical characteristics into account. This suppresses the degradation of the image-processing position estimate caused by displacement of the imaging device 202 due to vibration and deflection of the mechanical system, enabling fast, high-accuracy positioning.
- although an example in which the control device 1 is applied to positioning control has been described, the present embodiment is not limited to that example. The technique by which the control device 1 corrects the template image used for image processing based on a prediction of the subject blur can also be applied to motion control devices other than positioning control, such as trajectory control devices and roll-to-roll mechanical systems.
- the present embodiment is also not limited to the example described above: the drive command signal and the estimated relative position can be calculated without using the detailed mechanical characteristics of the controlled device 2.
- for example, the drive command generation unit 301 and the relative position estimation unit 302 may be configured on the assumption that the controlled device 2 is a rigid body, that is, that the transfer functions of the controlled device 2 on the X axis and the Y axis can be expressed by the following formula (10), where Jx is the rigid-body inertia of the X axis and Jy is the rigid-body inertia of the Y axis.
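Formula (10) itself is not reproduced in this text; under the rigid-body assumption, the per-axis transfer functions from drive torque to position would take the standard double-integrator form, a reconstruction from the stated inertias Jx and Jy rather than the patent's exact notation:

```latex
P_x(s) = \frac{1}{J_x \, s^2}, \qquad P_y(s) = \frac{1}{J_y \, s^2}
```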
- the technique of the present embodiment can also be applied to a machine in which the movable unit 203 is a table that moves the target object, so that the relative position between the imaging device 202 and the target changes with the movement of the target.
- although the mechanical model calculation unit 101 estimates the relative position between the imaging device 202 and the target position 207 only from the previously identified mechanical characteristics and the command position information output by the command generation unit 100, it is also possible, when the controlled device 2 is provided with an additional sensor such as an acceleration sensor, to estimate the relative position with an observer that uses the detected values.
- although the control device 1 has been described in the above embodiment, the technology disclosed in the present embodiment may also be implemented as a computer program realizing the control method of the control device 1, or as a storage medium storing such a computer program.
- Reference signs: 1 control device, 2 controlled device, 10 control system, 90 processing circuit, 91 control circuit, 92 processor, 93 memory, 100 command generation unit, 101 machine model calculation unit, 102 X-axis drive unit, 103 Y-axis drive unit, 104 imaging command generation unit, 105 relative position storage unit, 106 template image correction unit, 107 image processing unit, 108 target position correction unit, 200 X-axis motor, 201 Y-axis motor, 202 imaging device, 203 movable unit, 204 suction nozzle, 205 printed circuit board transport mechanism, 206 printed circuit board, 207 target position, 208 electronic component, 301 drive command generation unit, 302 relative position estimation unit, 303 command distributor, 304 X-axis drive command generation unit, 305 Y-axis drive command generation unit, C center axis, P0 default target position, V field-of-view area.
Abstract
Description
FIG. 1 is a schematic diagram showing the configuration of a control system 10 according to Embodiment 1 of the present invention. The control system 10 includes a control device 1 and a controlled device 2, the mechanical system controlled by the control device 1. In the present embodiment, the controlled device 2 is an electronic component mounter that mounts electronic components on a board. The control device 1 controls the operation of each part of the controlled device 2. The controlled device 2 includes an X-axis motor 200, a Y-axis motor 201, an imaging device 202, a movable unit 203, a suction nozzle 204, and a printed circuit board transport mechanism 205.
Claims (8)
- 1. A control device for controlling a controlled device having a movable unit and an imaging device whose relative position to a target changes with the movement of the movable unit and which acquires a captured image of the target, the control device comprising: a drive unit that drives the movable unit based on a drive command signal for moving the movable unit to a target position; a relative position estimation unit that calculates an estimated value of the relative position between the target and the imaging device based on the drive command signal; a template image correction unit that corrects a pre-registered template image based on a time-series signal of the estimated relative position within the exposure time of the imaging device; and a target position correction unit that corrects the target position using the corrected template image.
- 2. The control device according to claim 1, wherein the template image correction unit simulates blurring of the captured image based on the time-series signal and corrects the template image based on the simulation result.
- 3. The control device according to claim 1 or 2, wherein the template image correction unit calculates, based on the time-series signal, a filter corresponding to the blurring of the captured image caused by the change in the relative position while the imaging device is capturing the image, and corrects the template image by applying the filter to it.
- 4. The control device according to claim 3, further comprising an image processing unit that outputs an observed value of the relative position by image processing based on the captured image and the corrected template image in the spatial-frequency domain, wherein the template image correction unit calculates the corrected template image in the spatial-frequency domain by applying the filter to the template image in the spatial-frequency domain, and the target position correction unit corrects the target position based on the estimated value and the observed value.
- 5. The control device according to any one of claims 1 to 3, further comprising an image processing unit that processes the captured image using the corrected template image and outputs an observed value of the relative position, wherein the target position correction unit corrects the target position based on the estimated value and the observed value.
- 6. The control device according to claim 4 or 5, wherein the image processing unit outputs the observed value using template matching.
- 7. The control device according to any one of claims 1 to 6, further comprising a drive command generation unit that calculates, based on a transfer function representing the mechanical characteristics of the controlled device, a drive command signal including a current feedforward signal in which the gain of frequency components that excite mechanical vibration of the controlled device is reduced and a position reference signal that the movable unit should follow, and inputs the drive command signal to the drive unit, wherein the relative position estimation unit calculates the estimated value of the relative position, based on the transfer function, for the case where the drive unit performs two-degree-of-freedom control using the current feedforward signal and the position reference signal.
- 8. A control method in which a control device that controls a controlled device having a movable unit and an imaging device whose relative position to a target changes with the movement of the movable unit and which acquires a captured image of the target performs the steps of: calculating an estimated value of the relative position between the target and the imaging device based on a drive command signal input to a drive unit that drives the movable unit; correcting a pre-registered template image based on a time-series signal of the estimated relative position within the exposure time of the imaging device; and correcting the target position for the movement of the movable unit using the corrected template image.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217019767A KR102537029B1 (ko) | 2019-01-09 | 2019-01-09 | 제어 장치 및 제어 방법 |
DE112019006604.8T DE112019006604T5 (de) | 2019-01-09 | 2019-01-09 | Steuervorrichtung und steuerverfahren |
JP2019537854A JP6771674B1 (ja) | 2019-01-09 | 2019-01-09 | 制御装置および制御方法 |
US17/296,253 US11874679B2 (en) | 2019-01-09 | 2019-01-09 | Using an imaging device to correct positioning errors |
CN201980087703.3A CN113260941B (zh) | 2019-01-09 | 2019-01-09 | 控制装置及控制方法 |
PCT/JP2019/000394 WO2020144776A1 (ja) | 2019-01-09 | 2019-01-09 | 制御装置および制御方法 |
TW109100030A TWI772731B (zh) | 2019-01-09 | 2020-01-02 | 控制裝置及控制方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020144776A1 true WO2020144776A1 (ja) | 2020-07-16 |
Family
ID=71521565
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023199572A1 (ja) * | 2022-04-11 | 2023-10-19 | パナソニックIpマネジメント株式会社 | テンプレート登録装置、テンプレート登録方法およびテンプレート登録システム |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7293795B2 (ja) * | 2019-03-27 | 2023-06-20 | 株式会社タダノ | クレーンの制御方法およびクレーン |
CN117368000B (zh) * | 2023-10-13 | 2024-05-07 | 昆山美仑工业样机有限公司 | 一种配备自适应装夹机构的静扭试验台 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003198200A (ja) * | 2001-12-28 | 2003-07-11 | Yamagata Casio Co Ltd | 画像処理方法及びそれを用いた部品搭載装置 |
JP2006073959A (ja) * | 2004-09-06 | 2006-03-16 | Yamaha Motor Co Ltd | 部品認識装置及び表面実装機並びに部品試験装置 |
JP2007013021A (ja) * | 2005-07-04 | 2007-01-18 | Juki Corp | 部品認識方法及び装置 |
JP2010133718A (ja) * | 2008-12-02 | 2010-06-17 | Seiko Epson Corp | 作業対象物の位置検出方法および位置検出装置 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7385626B2 (en) * | 2002-10-21 | 2008-06-10 | Sarnoff Corporation | Method and system for performing surveillance |
JP4361904B2 (ja) | 2005-03-10 | 2009-11-11 | パナソニック株式会社 | 部品実装方法及び部品実装装置 |
US7581313B2 (en) | 2005-03-10 | 2009-09-01 | Panasonic Corporation | Component mounting method and mounter |
JP4664752B2 (ja) * | 2005-06-30 | 2011-04-06 | Juki株式会社 | 部品吸着方法及び装置 |
JP2007306106A (ja) * | 2006-05-09 | 2007-11-22 | Sony Corp | 撮像装置、撮像方法、およびプログラム |
JP4769684B2 (ja) * | 2006-10-12 | 2011-09-07 | 株式会社デンソーアイティーラボラトリ | 電子走査式レーダ装置 |
CN101601279B (zh) * | 2007-08-03 | 2011-11-16 | 松下电器产业株式会社 | 摄像装置以及摄像方法 |
JP4623111B2 (ja) * | 2008-03-13 | 2011-02-02 | ソニー株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN101877959B (zh) * | 2009-04-29 | 2012-10-10 | 三星Techwin株式会社 | 电子部件识别装置及具有其的贴片机 |
JP5694695B2 (ja) | 2010-07-13 | 2015-04-01 | 三井金属鉱業株式会社 | 断熱耐火物及びその製造方法 |
JP2012109656A (ja) * | 2010-11-15 | 2012-06-07 | Mitsubishi Electric Corp | 画像処理装置及び方法、並びに画像表示装置及び方法 |
JP5798371B2 (ja) * | 2011-05-09 | 2015-10-21 | 富士機械製造株式会社 | 基準マークモデルテンプレート作成方法 |
US9279661B2 (en) * | 2011-07-08 | 2016-03-08 | Canon Kabushiki Kaisha | Information processing apparatus and information processing method |
JP5547144B2 (ja) * | 2011-09-08 | 2014-07-09 | 株式会社東芝 | 監視装置、その方法、及び、そのプログラム |
JP2015033006A (ja) * | 2013-08-02 | 2015-02-16 | オリンパス株式会社 | 画像処理装置、画像処理方法、画像処理プログラム及び顕微鏡システム |
JP6318520B2 (ja) * | 2013-09-27 | 2018-05-09 | 株式会社リコー | 撮像装置、撮像システムおよび撮像方法 |
JP5968379B2 (ja) * | 2013-10-29 | 2016-08-10 | キヤノン株式会社 | 画像処理装置およびその制御方法 |
JP2015213139A (ja) * | 2014-05-07 | 2015-11-26 | 国立大学法人 東京大学 | 位置決め装置 |
CN104408742B (zh) * | 2014-10-29 | 2017-04-05 | 河海大学 | 一种基于空间‑时间频谱联合分析的运动目标检测方法 |
JP6646916B2 (ja) * | 2015-11-11 | 2020-02-14 | 株式会社Fuji | 基板用の画像処理装置および画像処理方法 |
EP3589105B1 (en) | 2017-02-23 | 2023-11-15 | Fuji Corporation | Substrate working device and image processing method |
- Application events: 2019-01-09 — CN CN201980087703.3A (CN113260941B, active); WO PCT/JP2019/000394 (application filing); JP JP2019537854A (JP6771674B1, active); DE DE112019006604.8T (pending); KR KR1020217019767A (KR102537029B1, granted); US US17/296,253 (US11874679B2, active). 2020-01-02 — TW TW109100030A (TWI772731B, active).
Also Published As
Publication number | Publication date |
---|---|
KR102537029B1 (ko) | 2023-05-26 |
US11874679B2 (en) | 2024-01-16 |
US20220004206A1 (en) | 2022-01-06 |
CN113260941B (zh) | 2023-10-24 |
DE112019006604T5 (de) | 2021-11-11 |
JPWO2020144776A1 (ja) | 2021-02-18 |
JP6771674B1 (ja) | 2020-10-21 |
TWI772731B (zh) | 2022-08-01 |
CN113260941A (zh) | 2021-08-13 |
KR20210095918A (ko) | 2021-08-03 |
TW202029879A (zh) | 2020-08-01 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2019537854 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19909303 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217019767 Country of ref document: KR Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19909303 Country of ref document: EP Kind code of ref document: A1 |