WO2019208109A1 - 制御システム、制御方法、および制御プログラム - Google Patents

制御システム、制御方法、および制御プログラム Download PDF

Info

Publication number
WO2019208109A1
WO2019208109A1 (PCT/JP2019/014129)
Authority
WO
WIPO (PCT)
Prior art keywords
control
control parameter
unit
imaging
movement command
Prior art date
Application number
PCT/JP2019/014129
Other languages
English (en)
French (fr)
Japanese (ja)
Inventor
正樹 浪江
健祐 垂水
Original Assignee
オムロン株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by オムロン株式会社 filed Critical オムロン株式会社
Priority to CN201980018736.2A priority Critical patent/CN111868658B/zh
Priority to KR1020207026085A priority patent/KR102612470B1/ko
Publication of WO2019208109A1 publication Critical patent/WO2019208109A1/ja

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D3/00Control of position or direction
    • G05D3/12Control of position or direction using feedback

Definitions

  • This disclosure relates to a technique for positioning a workpiece based on the position of the workpiece measured by a visual sensor.
  • Patent Document 1 discloses a workpiece positioning device including a movable table, a moving mechanism for moving the movable table, and a visual sensor that repeatedly images a workpiece placed on the movable table and repeatedly detects the position of the workpiece.
  • Each time the visual sensor detects the position, the workpiece positioning device calculates the difference between the detected position and the target position, and stops the movable table when it determines that the difference is within the allowable range.
  • Further, the workpiece positioning device calculates the difference between the position detected by the visual sensor after the movable table stops moving and the target position, and determines whether the calculated difference is within the allowable range. If the difference is determined to be outside the allowable range, the device determines the moving direction of the movable table that reduces the difference, and controls the moving mechanism to move the movable table in that direction.
  • In proportional control (that is, P control), the movement command is a value obtained by multiplying the required movement amount by a proportional gain. If the proportional gain is too small, the time until the workpiece reaches the target position (hereinafter also referred to as the “alignment time”) becomes long. Conversely, if the proportional gain is too large, the workpiece overshoots the target position, or vibrates with repeated overshoot and undershoot; as a result, the alignment time again becomes long.
  • The present disclosure has been made to solve the above-described problems, and an object in one aspect is to provide a control system capable of reducing the man-hours for adjusting control parameters related to feedback control.
  • An object in another aspect is to provide a control method capable of reducing the man-hours for adjusting control parameters related to feedback control.
  • An object in yet another aspect is to provide a control program capable of reducing the man-hours for adjusting control parameters related to feedback control.
  • In one example of the present disclosure, the control system includes: a moving mechanism for moving an object; a visual sensor that images the object in response to an imaging instruction and measures the actual position of the object from the image obtained by the imaging; a detection unit that detects position-related information on the position of the moving mechanism at each predetermined control cycle shorter than the interval at which the imaging instruction is output to the visual sensor; a position determination unit that determines, for each control cycle, the estimated current position of the object based on the actual position and the position-related information; a feedback control unit that, according to a set control parameter, generates for each control cycle a movement command for bringing the estimated position to the target position of the object and outputs the movement command to the moving mechanism for each control cycle; and an adjustment unit that adjusts the control parameter based on the transition of the position-related information obtained from the detection unit as a feedback value by sequentially outputting predetermined movement commands to the moving mechanism.
  • In this way, the control parameters related to feedback control are adjusted automatically.
  • The control system can therefore reduce the man-hours for adjusting the control parameters related to feedback control.
  • In another example of the present disclosure, the control system includes: a moving mechanism for moving an object; a visual sensor that images the object in response to an imaging instruction and measures the actual position of the object from the image obtained by the imaging; a feedback control unit that, according to a predetermined control parameter, generates at each predetermined control cycle shorter than the interval at which the imaging instruction is output to the visual sensor a movement command for bringing the actual position to the target position of the object and outputs the movement command to the moving mechanism; and an adjustment unit that adjusts the control parameter based on the transition of the actual position obtained from the visual sensor as a feedback value by sequentially outputting predetermined movement commands to the moving mechanism.
  • In this way, the control parameters related to feedback control are adjusted automatically.
  • The control system can therefore reduce the man-hours for adjusting the control parameters related to feedback control.
  • Preferably, the adjustment unit generates a plurality of control parameter candidates that can be set in the feedback control unit based on the control parameter value determined from the transition, sequentially sets each of the candidates in the feedback control unit, measures for each candidate the alignment time required to move the object from a predetermined position to the target position, and selects, as the optimization result, the candidate with the shortest alignment time among the plurality of candidates.
  • Therefore, the control system can further optimize the control parameters related to feedback control.
  • Preferably, the adjustment unit generates the plurality of control parameter candidates by multiplying the reference control parameter by each of a plurality of predetermined magnifications.
  • Therefore, the control system can easily generate the control parameter candidates from the reference control parameter.
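The candidate search just described can be sketched as follows. The magnification values and the `measure_alignment_time` callback below are assumptions for illustration only; the patent does not list concrete magnifications.

```python
# Sketch: derive candidates from a reference gain by fixed magnifications
# (assumed values), measure the alignment time for each candidate, and keep
# the candidate with the shortest alignment time.
def optimize_gain(reference_kp, measure_alignment_time,
                  magnifications=(0.5, 0.75, 1.0, 1.25, 1.5)):
    candidates = [reference_kp * m for m in magnifications]
    times = {kp: measure_alignment_time(kp) for kp in candidates}
    return min(times, key=times.get)   # candidate with the shortest alignment time
```

In practice `measure_alignment_time` would set the candidate in the feedback control unit and time a real move from a predetermined position to the target position.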
  • Preferably, the adjustment unit calculates the maximum rate of change per unit time in the transition, calculates the delay time of the object controlled by the feedback control unit based on the time at which the maximum rate of change appears in the transition and on the maximum rate of change, and determines the control parameter based on the delay time.
  • Therefore, by determining the control parameter based on the delay time, the control system can adjust the control parameters related to feedback control more appropriately.
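The adjustment flow described later stores exactly three values: the maximum rate of change R max, the feedback value PV r at which it appears, and the corresponding time T r. One common way to turn these three values into a delay (dead) time is the tangent method from step-response tuning, sketched below as a plausible reading rather than a confirmed formula: extend the steepest tangent of the response back to the time axis.

```python
# Hedged sketch (tangent / reaction-curve method, assumed interpretation):
# the tangent through (t_r, pv_r) with slope r_max crosses PV = 0 at
# t = t_r - pv_r / r_max, which is taken as the delay time L.
def delay_time(t_r, pv_r, r_max):
    return t_r - pv_r / r_max
```

Classical tuning rules (for example Ziegler–Nichols step-response rules) then derive a proportional gain from such a delay time, which is consistent with the patent's statement that the control parameter is determined based on the delay time.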
  • Preferably, the control parameter includes a proportional gain used for the proportional control of the feedback control unit.
  • the proportional gain is automatically adjusted.
  • the control system can reduce the man-hour for adjusting the proportional gain.
  • In another example of the present disclosure, a control method for a moving mechanism that moves an object includes imaging the object by outputting an imaging instruction to a visual sensor and measuring the actual position of the object from the image obtained by the imaging.
  • In this way, the control parameters related to feedback control are adjusted automatically.
  • The control system can therefore reduce the man-hours for adjusting the control parameters related to feedback control.
  • In another example of the present disclosure, a control program for a moving mechanism that moves an object causes a controller that controls the moving mechanism to execute steps including: imaging the object by outputting an imaging instruction to a visual sensor; generating, for each control cycle, a movement command for bringing the estimated position to the target position of the object and outputting the movement command for each control cycle; and adjusting the control parameter based on the transition of position-related information obtained as a feedback value by sequentially outputting predetermined movement commands to the moving mechanism.
  • In this way, the control parameters related to feedback control are adjusted automatically.
  • The control system can therefore reduce the man-hours for adjusting the control parameters related to feedback control.
  • According to the present disclosure, the man-hours for adjusting control parameters related to feedback control can be reduced.
  • FIG. 1 is a schematic diagram showing an outline of a control system 1 according to the present embodiment.
  • the control system 1 performs alignment using image processing.
  • The alignment typically means a process of placing an object (hereinafter also referred to as “workpiece W”) at its proper position on a production line in the manufacturing process of an industrial product.
  • the control system 1 positions the glass substrate with respect to the exposure mask before the circuit pattern printing process (exposure process) on the glass substrate in the production line of the liquid crystal panel.
  • the control system 1 includes, for example, a visual sensor 50, a controller 200, a moving mechanism 400, and an encoder 450.
  • the visual sensor 50 includes, for example, an imaging unit 52 and an image processing unit 54.
  • the moving mechanism 400 includes, for example, a servo driver 402, a servo motor 410, and a stage 420.
  • the imaging unit 52 performs an imaging process of imaging an object present in the imaging field of view and generating image data, and images the workpiece W placed on the stage 420.
  • the imaging unit 52 performs imaging according to the imaging trigger TR from the controller 200.
  • the image data generated by the imaging unit 52 is sequentially output to the image processing unit 54.
  • The image processing unit 54 performs image analysis on the image data obtained from the imaging unit 52 and measures the actual position PV v of the workpiece W.
  • the actual position PV v is output to the controller 200 every time it is measured.
  • the controller 200 is a PLC (programmable logic controller), for example, and performs various FA controls.
  • the controller 200 includes a position determination unit 252, a feedback control unit 254, and an adjustment unit 264 as an example of a functional configuration.
  • The position determination unit 252 estimates the position of the workpiece W (hereinafter also referred to as the “estimated position PV”) for each control cycle Ts, based on the actual position PV v measured by the visual sensor 50 and the encoder value PV m (position-related information) obtained at each control cycle Ts, which is shorter than the imaging interval Tb of the visual sensor 50.
  • the imaging cycle Tb varies depending on the imaging situation or the like, and is about 60 ms, for example.
  • the control cycle Ts is fixed, for example 1 ms.
  • the estimated position PV is output to the feedback control unit 254 every control cycle Ts.
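One plausible way to combine the slow visual measurements with the fast encoder values into an estimated position PV each control cycle Ts is sketched below. The patent does not give the formula at this point, so this is an assumption for illustration: start from the last position measured by the visual sensor and advance it by the stage displacement the encoder has reported since that image was captured.

```python
# Hedged sketch of a position estimator (assumed scheme, not the patent's
# formula): dead-reckon from the last visual measurement using the encoder.
def estimated_position(pv_v_last, pv_m_at_capture, pv_m_now):
    # pv_v_last:        actual position PV v from the most recent image
    # pv_m_at_capture:  encoder value PV m when that image was captured
    # pv_m_now:         encoder value PV m in the current control cycle
    return pv_v_last + (pv_m_now - pv_m_at_capture)
```

This lets the controller produce an estimate every 1 ms control cycle even though a new visual measurement arrives only every ~60 ms.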
  • The feedback control unit 254 generates, in accordance with the control parameter 262, a movement command MV for bringing the estimated position PV to the target position SP for each control cycle Ts, and outputs the movement command MV to the servo driver 402 every control cycle Ts.
  • the movement command MV is, for example, any one of a command position, a command speed, and a command torque for the servo driver 402.
  • the target position SP is predetermined for each production process, and is sequentially switched according to the current production process.
  • In another aspect, the target position SP is detected from the image by the visual sensor 50 performing predetermined image processing; in this case, the visual sensor 50 detects a predetermined mark in the image and recognizes the position of the mark as the target position SP.
  • the feedback control by the feedback control unit 254 is realized by, for example, PID (Proportional Integral Differential) control, PI control, PD control, or P control.
  • a feedback control unit 254 that performs P control is shown.
  • the feedback control unit 254 includes a subtraction unit 256 and a multiplication unit 258.
  • the subtraction unit 256 subtracts the estimated position PV determined by the position determination unit 252 from the target position SP, and outputs the subtraction result to the multiplication unit 258.
  • The multiplication unit 258 amplifies or attenuates the subtraction result obtained by the subtraction unit 256 by multiplying it by the proportional gain Kp defined by the control parameter 262.
  • the multiplication result by the multiplication unit 258 is integrated and output to the servo driver 402 as a position command. Alternatively, the multiplication result by the multiplication unit 258 is output to the servo driver 402 as it is as a speed command.
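The subtraction and multiplication path of FIG. 1 amounts to the following one-line computation per control cycle (a sketch; the optional integration of the result into a position command is omitted):

```python
# Sketch of the P-control path in FIG. 1: subtraction unit 256 followed by
# multiplication unit 258 with the proportional gain Kp.
def p_control_step(sp, pv, kp):
    error = sp - pv      # subtraction unit 256: target SP minus estimated PV
    return kp * error    # multiplication unit 258: movement command MV
```

The returned value can be used directly as a speed command, or accumulated over control cycles to form a position command, matching the two output options described above.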
  • The servo driver 402 drives the servo motor 410 according to the movement command MV received every control cycle Ts. More specifically, the servo driver 402 acquires the encoder value PV m from the encoder 450 (detection unit) for each control cycle Ts, and feedback-controls the servo motor 410 so that the speed/position indicated by the encoder value PV m matches the speed/position indicated by the movement command MV. As an example, the feedback control by the servo driver 402 is realized by PID control, PI control, PD control, or P control.
  • the adjustment unit 264 adjusts the control parameter 262 related to the feedback control unit 254. More specifically, the controller 200 has a normal control mode and an adjustment mode for the control parameter 262 as operation modes. When the operation mode is set to the normal control mode, the switch SW is switched so that the feedback control unit 254 and the servo driver 402 are connected. On the other hand, when the operation mode is set to the adjustment mode, the switch SW is switched so that the adjustment unit 264 and the servo driver 402 are connected.
  • In the adjustment mode, the adjustment unit 264 sequentially outputs predetermined movement commands MV n to the moving mechanism 400 and sequentially acquires the encoder value PV m from the encoder 450 (detection unit) as a feedback value. The adjustment unit 264 then adjusts the control parameter 262 based on the transition of the acquired encoder values PV m. Details of the adjustment method of the control parameter 262 will be described later. Since the control parameter 262 is adjusted automatically in this way, the man-hours for adjusting the control parameter 262 are reduced.
  • Although FIG. 1 shows only one component group consisting of the position determination unit 252, the feedback control unit 254, the adjustment unit 264, the servo driver 402, the servo motor 410, and the encoder 450, as many component groups as the number of axes for driving the stage 420 are actually provided. Each component group is responsible for controlling the stage 420 in one axial direction. In this case, the actual position PV v measured by the visual sensor 50 is decomposed into actual positions in the respective axis directions, and each decomposed actual position is output to the corresponding component group.
  • FIG. 2 is a schematic diagram showing an outline of the control system 1 according to the modification.
  • In the control system 1 of FIG. 1, the adjustment unit 264 adjusts the control parameter 262 based on the transition of the encoder value PV m obtained from the encoder 450 by inputting predetermined movement commands MV n to the moving mechanism 400.
  • In the modification of FIG. 2, by contrast, the adjustment unit 264 adjusts the control parameter 262 based on the transition of the actual position PV v obtained from the visual sensor 50 by inputting predetermined movement commands MV n to the moving mechanism 400. Details of the adjustment method of the control parameter 262 will be described later.
  • control system 1 shown in FIG. 2 is different from the control system 1 shown in FIG. 1 in that the position determination unit 252 is not provided and the encoder value PV m is not fed back to the controller 200. Since the other points of control system 1 shown in FIG. 2 are the same as those of control system 1 shown in FIG. 1, their description will not be repeated.
  • FIG. 3 is a diagram illustrating an example of a device configuration of the control system 1.
  • the control system 1 includes a visual sensor 50, a controller 200, and a moving mechanism 400.
  • the visual sensor 50 includes the image processing apparatus 100 and one or more cameras (cameras 102 and 104 in the example of FIG. 3).
  • The moving mechanism 400 includes base plates 4 and 7, ball screws 6 and 9, servo drivers 402 (servo drivers 402X and 402Y in the example of FIG. 3), a stage 420, and one or more servo motors 410 (servo motors 410X and 410Y in the example of FIG. 3).
  • the image processing apparatus 100 detects a feature portion 12 (for example, a screw hole) of the workpiece W based on image data obtained by the cameras 102 and 104 photographing the workpiece W.
  • The image processing apparatus 100 uses the detected position of the feature portion 12 as the actual position PV v of the workpiece W.
  • the controller 200 is connected to one or more servo drivers 402 (servo drivers 402X and 402Y in the example of FIG. 3).
  • the servo driver 402X drives the servo motor 410X to be controlled in accordance with the movement command in the X direction received from the controller 200.
  • the servo driver 402Y drives the servo motor 410Y to be controlled in accordance with the movement command in the Y direction received from the controller 200.
  • the controller 200 gives a target position in the X direction as a command value to the servo driver 402X in accordance with the target trajectory TGx generated in the X direction. Further, the controller 200 gives a target position in the Y direction as a command value to the servo driver 402Y according to the target trajectory TGy generated in the Y direction.
  • the workpiece W is moved to the target position SP by sequentially updating the respective target positions in the X and Y directions.
  • the controller 200 and the servo driver 402 are connected in a daisy chain via a field network.
  • a field network for example, EtherCAT (registered trademark) is adopted.
  • the field network is not limited to EtherCAT, and any communication means can be adopted.
  • the controller 200 and the servo driver 402 may be directly connected by a signal line. Further, the controller 200 and the servo driver 402 may be configured integrally.
  • the base plate 4 is provided with a ball screw 6 that moves the stage 420 along the X direction.
  • the ball screw 6 is engaged with a nut included in the stage 420.
  • When the servo motor 410X connected to one end of the ball screw 6 is rotationally driven, the nut included in the stage 420 and the ball screw 6 rotate relative to each other, and as a result, the stage 420 moves along the X direction.
  • the base plate 7 is provided with a ball screw 9 for moving the stage 420 and the base plate 4 along the Y direction.
  • the ball screw 9 is engaged with a nut included in the base plate 4.
  • When the servo motor 410Y connected to one end of the ball screw 9 is rotationally driven, the nut included in the base plate 4 and the ball screw 9 rotate relative to each other. As a result, the stage 420 and the base plate 4 move along the Y direction.
  • Although FIG. 3 shows a moving mechanism 400 driven in two axes by the servo motors 410X and 410Y, the moving mechanism 400 may further incorporate a servo motor that drives the stage 420 in the rotational direction (θ direction) on the XY plane.
  • FIG. 4 is a schematic diagram illustrating an example of a hardware configuration of the image processing apparatus 100 configuring the visual sensor 50.
  • The image processing apparatus 100 typically has a structure according to a general-purpose computer architecture, and realizes various kinds of image processing described later by having its processor execute preinstalled programs.
  • the image processing apparatus 100 includes a processor 110 such as a CPU (Central Processing Unit) or an MPU (Micro-Processing Unit), a RAM (Random Access Memory) 112, a display controller 114, and a system controller 116. , An I / O (Input Output) controller 118, a hard disk 120, a camera interface 122, an input interface 124, a controller interface 126, a communication interface 128, and a memory card interface 130. These units are connected to each other so that data communication is possible with the system controller 116 as a center.
  • the processor 110 exchanges programs (codes) and the like with the system controller 116 and executes them in a predetermined order, thereby realizing the target arithmetic processing.
  • The system controller 116 is connected to the processor 110, the RAM 112, the display controller 114, and the I/O controller 118 via buses, exchanges data with each of these units, and manages the processing of the entire image processing apparatus 100.
  • The RAM 112 is typically a volatile storage device such as a DRAM (Dynamic Random Access Memory), and stores programs read from the hard disk 120, camera images (image data) acquired by the cameras 102 and 104, processing results for the camera images, and work data.
  • the display controller 114 is connected to the display unit 132, and outputs signals for displaying various types of information to the display unit 132 in accordance with internal commands from the system controller 116.
  • the I / O controller 118 controls data exchange with a recording medium or an external device connected to the image processing apparatus 100. More specifically, the I / O controller 118 is connected to the hard disk 120, the camera interface 122, the input interface 124, the controller interface 126, the communication interface 128, and the memory card interface 130.
  • the hard disk 120 is typically a nonvolatile magnetic storage device, and stores various setting values in addition to the control program 150 executed by the processor 110.
  • the control program 150 installed in the hard disk 120 is distributed while being stored in the memory card 136 or the like.
  • In place of the hard disk 120, a semiconductor storage device such as a flash memory or an optical storage device such as a DVD-RAM (Digital Versatile Disk Random Access Memory) may be employed.
  • the camera interface 122 corresponds to an input unit that receives image data generated by photographing a workpiece, and mediates data transmission between the processor 110 and the cameras 102 and 104.
  • the camera interface 122 includes image buffers 122a and 122b for temporarily storing image data from the cameras 102 and 104, respectively.
  • Alternatively, a single image buffer shared among the cameras may be provided, although it is preferable to provide a plurality of image buffers independently, one in association with each camera.
  • the input interface 124 mediates data transmission between the processor 110 and input devices such as a keyboard 134, a mouse, a touch panel, and a dedicated console.
  • the controller interface 126 mediates data transmission between the processor 110 and the controller 200.
  • the communication interface 128 mediates data transmission between the processor 110 and other personal computers or server devices (not shown).
  • the communication interface 128 typically includes Ethernet (registered trademark), USB (Universal Serial Bus), or the like.
  • the memory card interface 130 mediates data transmission between the processor 110 and the memory card 136 as a recording medium.
  • the memory card 136 is distributed in a state where the control program 150 executed by the image processing apparatus 100 is stored, and the memory card interface 130 reads the control program from the memory card 136.
  • The memory card 136 is, for example, a general-purpose semiconductor storage device such as an SD (Secure Digital) card, a magnetic recording medium such as a flexible disk, or an optical recording medium such as a CD-ROM (Compact Disk Read Only Memory).
  • a program downloaded from a distribution server or the like may be installed in the image processing apparatus 100 via the communication interface 128.
  • An OS (Operating System) for providing the basic functions of the computer may also be installed.
  • The control program according to the present embodiment may execute processing by calling necessary modules, among the program modules provided as part of the OS, in a predetermined order and/or at predetermined timing.
  • The control program according to the present embodiment may also be provided incorporated as a part of another program. In that case, the control program itself does not include the modules contained in the other program with which it is combined, and the processing is executed in cooperation with that other program. That is, the control program according to the present embodiment may take a form incorporated in such another program.
  • The control program may also be implemented as a dedicated hardware circuit.
  • FIG. 5 is a schematic diagram illustrating a hardware configuration of the controller 200.
  • controller 200 includes a main control unit 210.
  • FIG. 5 shows servo motors 410X, 410Y, and 410θ for three axes, and servo drivers 402X, 402Y, and 402θ are provided in a number corresponding to the number of axes.
  • The main control unit 210 includes a chip set 212, a processor 214, a nonvolatile memory 216, a main memory 218, a system clock 220, a memory card interface 222, a communication interface 228, an internal bus controller 230, and a fieldbus controller 238.
  • the chip set 212 and other components are coupled via various buses.
  • the processor 214 and the chipset 212 typically have a configuration according to a general-purpose computer architecture. That is, the processor 214 interprets and executes the instruction codes sequentially supplied from the chip set 212 according to the internal clock.
  • the chip set 212 exchanges internal data with various connected components and generates instruction codes necessary for the processor 214.
  • the system clock 220 generates a system clock having a predetermined period and provides it to the processor 214.
  • the chip set 212 has a function of caching data obtained as a result of execution of arithmetic processing by the processor 214.
  • the main control unit 210 has a nonvolatile memory 216 and a main memory 218 as storage means.
  • the nonvolatile memory 216 holds the OS, system program, user program, data definition information, log information, and the like in a nonvolatile manner.
  • the main memory 218 is a volatile storage area, holds various programs to be executed by the processor 214, and is also used as a working memory when executing the various programs.
  • the main control unit 210 includes a communication interface 228, an internal bus controller 230, and a field bus controller 238 as communication means. These communication circuits transmit and receive data.
  • the communication interface 228 exchanges data with the image processing apparatus 100.
  • The internal bus controller 230 controls the exchange of data via the internal bus 226. More specifically, the internal bus controller 230 includes a buffer memory 236, a DMA (Direct Memory Access) control circuit 232, and an internal bus control circuit 234.
  • The memory card interface 222 connects the memory card 224, which is detachable from the main control unit 210, to the processor 214.
  • the fieldbus controller 238 is a communication interface for connecting to a field network.
  • The controller 200 is connected to servo drivers 402 (for example, servo drivers 402X, 402Y, and 402θ) via the fieldbus controller 238.
  • As the field network connected to the fieldbus controller 238, for example, EtherCAT (registered trademark), EtherNet/IP (registered trademark), CompoNet (registered trademark), or the like is adopted.
  • <Adjustment processing of control parameter 262> The flow of adjustment of the control parameter 262 by the adjustment unit 264 will be described with reference to FIGS. 6 and 7.
  • FIG. 6 is a flowchart showing the flow of the adjustment process of the control parameter 262.
  • the process illustrated in FIG. 6 is realized by the processor 214 of the controller 200 functioning as the adjustment unit 264. In other aspects, some or all of the processing shown in FIG. 6 may be performed by circuit elements or other hardware.
  • The process shown in FIG. 6 represents the control flow for one axial direction. In practice, the process shown in FIG. 6 is executed in parallel for each axial direction.
  • In step S110, the adjustment unit 264 executes an initialization process. More specifically, the processor 214 initializes the measurement time t to 0 and initializes the variable PV n-1 for storing the previous feedback value to 0.
  • the “feedback value” here corresponds to the encoder value PV m detected by the encoder 450 in the example of FIG. 1, and corresponds to the actual position PV v measured by the visual sensor 50 in the example of FIG. 2.
  • In step S112, the adjustment unit 264 generates a movement command MV n to be output to the servo driver 402 according to the following (Equation 1), and outputs the movement command MV n to the servo driver 402.
  • In step S114, the adjustment unit 264 obtains the feedback value PV n as a response to the movement command MV n.
  • step S120 the adjustment unit 264, according to the following (Equation 2), rate of change per time of the feedback value PV n determines whether it exceeds the maximum rate of change R max at the present time. More specifically, the adjustment section 264, by subtracting the previous feedback value PV n-1 from the current feedback value PV n, by dividing the difference results in a control cycle Ts, the maximum change of the division result is currently It is determined whether or not the rate R max is exceeded.
  • If the adjustment unit 264 determines that the rate of change per unit time of the feedback value PVn exceeds the current maximum change rate Rmax (YES in step S120), it switches control to step S122. Otherwise (NO in step S120), the adjustment unit 264 switches control to step S130.
  • In step S122, the adjustment unit 264 updates the currently recorded maximum change rate Rmax to the new maximum change rate according to the following (Formula 3).
  • Rmax = (PVn - PVn-1) / Ts (Formula 3). Further, the adjustment unit 264 stores the feedback value PVn at the time when the maximum change rate Rmax appeared as the feedback value PVr. In addition, the adjustment unit 264 stores the time when the maximum change rate Rmax appeared as the time Tr.
  • The various information stored in step S122 is written to, for example, the storage unit (for example, the nonvolatile memory 216 or the main memory 218 (see FIG. 5)) of the controller 200.
  • In step S124, the adjustment unit 264 updates the previous feedback value PVn-1 with the current feedback value PVn. Further, the adjustment unit 264 adds the control cycle Ts to the measurement time t to update the measurement time t.
  • In step S130, the adjustment unit 264 determines whether or not to end the measurement of the feedback value. As an example, the adjustment unit 264 determines to end the measurement of the feedback value when a predetermined measurement end condition is satisfied. In one aspect, the measurement end condition is satisfied when the number of executions of the process of step S130 reaches a predetermined number. In another aspect, the measurement end condition is satisfied when the maximum change rate Rmax converges to a constant value. If the adjustment unit 264 determines that the measurement of the feedback value is to be ended (YES in step S130), control is switched to step S140. Otherwise (NO in step S130), the adjustment unit 264 returns control to step S112.
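The measurement loop of steps S110 through S130 can be sketched as follows. This is an illustrative sketch only, not the patented implementation; `output_command` and `read_feedback` are hypothetical stand-ins for the I/O with the servo driver 402 and the encoder 450 (or visual sensor 50).

```python
def measure_max_change_rate(output_command, read_feedback, Ts, n_steps):
    """Steps S110-S130: drive the axis with a ramp command while recording
    the maximum rate of change of the feedback value (Equation 2, Formula 3)."""
    t = 0.0               # measurement time (initialized in step S110)
    pv_prev = 0.0         # previous feedback value PV_{n-1}
    r_max = 0.0           # maximum change rate R_max observed so far
    pv_r, t_r = 0.0, 0.0  # feedback value PV_r and time T_r at which R_max appeared
    for _ in range(n_steps):
        output_command(t)           # step S112: output movement command MV_n
        pv = read_feedback()        # step S114: obtain feedback value PV_n
        rate = (pv - pv_prev) / Ts  # (PV_n - PV_{n-1}) / Ts
        if rate > r_max:            # step S120: new maximum change rate?
            r_max, pv_r, t_r = rate, pv, t  # step S122: record R_max, PV_r, T_r
        pv_prev = pv                # step S124: update PV_{n-1}
        t += Ts                     # step S124: advance measurement time
    return r_max, pv_r, t_r
```

Running this against a simulated axis yields the maximum change rate Rmax together with the feedback value PVr and time Tr at which it appeared, which are the quantities stored in step S122.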
  • FIG. 7 is a diagram showing, on the time axis, the relationship between the movement command MV n input to the moving mechanism 400 and the feedback value PV n output from the moving mechanism 400 as a response.
  • the ramp-shaped movement command MV n shown in FIG. 7 is input to the servo driver 402.
  • When the movement command MVn is a position command, the movement command MVn input to the servo driver 402 is ramp-shaped as shown in FIG. 7. When the movement command MVn is a speed command, the movement command MVn input to the servo driver 402 has a constant value.
  • The adjustment unit 264 determines the control parameter 262 (for example, the proportional gain Kp) of the feedback control unit 254 based on the transition of the feedback value PVn obtained by sequentially outputting the predetermined movement command MVn to the moving mechanism 400.
  • In step S140, the adjustment unit 264 calculates the steady gain K according to the following (Formula 4).
  • K = Rmax / Rmv (Formula 4). “Rmax” shown in (Formula 4) corresponds to the maximum change rate stored in step S122. “Rmv” represents the slope (that is, the change rate) in the transition of the movement command MVn.
  • In step S142, the adjustment unit 264 calculates the delay time of the control target of the feedback control unit 254.
  • the “delay time” represents the time from when the movement command is given to the control target of the feedback control unit 254 until the output corresponding to the movement command appears.
  • In the example of the control system 1 illustrated in FIG. 1, the control target of the feedback control unit 254 is a control system including the moving mechanism 400 and the encoder 450.
  • In the example of the control system 1 illustrated in FIG. 2, the control target of the feedback control unit 254 is a control system including the visual sensor 50, the moving mechanism 400, and the encoder 450.
  • The adjustment unit 264 calculates the delay time L of the control target of the feedback control unit 254 based on the time Tr at which the maximum change rate Rmax appeared in the transition of the feedback value PVn and on the maximum change rate Rmax.
  • the delay time L is calculated based on the following (Formula 5).
  • In step S144, the adjustment unit 264 calculates the proportional gain Kp based on the steady gain K calculated in step S140 and the delay time L calculated in step S142.
  • the proportional gain Kp is calculated based on, for example, the following (Formula 6).
  • Kp = α / (K × L) (Formula 6). “α” shown in (Formula 6) is a predetermined coefficient. As shown in (Formula 6), the adjustment unit 264 decreases the proportional gain Kp as the delay time L becomes longer. In other words, the adjustment unit 264 increases the proportional gain Kp as the delay time L becomes shorter. In addition, the adjustment unit 264 decreases the proportional gain Kp as the steady gain K increases. In other words, the adjustment unit 264 increases the proportional gain Kp as the steady gain K decreases.
  • In step S146, the adjustment unit 264 sets the proportional gain Kp calculated in step S144 as the control parameter 262 of the feedback control unit 254.
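Steps S140 through S146 can be condensed into a short helper combining (Formula 4) and (Formula 6). This is a sketch under those formulas only; `alpha` stands for the predetermined coefficient α of (Formula 6), and the delay time L is assumed to have been obtained separately per (Formula 5).

```python
def tune_proportional_gain(r_max, r_mv, delay_L, alpha=0.5):
    """Steps S140-S146: steady gain K = R_max / R_mv (Formula 4),
    then proportional gain Kp = alpha / (K * L) (Formula 6).
    alpha=0.5 is an illustrative value, not one given in the text."""
    K = r_max / r_mv            # steady gain (step S140)
    Kp = alpha / (K * delay_L)  # proportional gain (step S144)
    return K, Kp
```

Consistent with the monotonic behavior described above, a longer delay time L or a larger steady gain K yields a smaller proportional gain Kp.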
  • In the above, the example in which the steady gain K is calculated in step S140 has been described. However, when it is known that the steady gain K between the movement command MV as the input value and the feedback value as the output value is 1, the process of step S140 may be omitted.
  • the proportional gain Kp is determined based on the delay time L as shown in the above (formula 6).
  • the delay time L is calculated based on the following (formula 7).
  • FIG. 8 is a flowchart showing the flow of optimization processing of the control parameter 262.
  • the processing shown in FIG. 8 is realized by the processor 214 of the controller 200 functioning as the adjustment unit 264. In other aspects, some or all of the processing shown in FIG. 8 may be performed by circuit elements or other hardware.
  • Hereinafter, the control parameter 262 determined in the above “E. Adjustment process of control parameter 262” is also referred to as the “reference proportional gain Kp”.
  • In step S150, the adjustment unit 264 acquires the magnifications α(i) by which the reference proportional gain Kp is to be multiplied.
  • The magnifications α(i) may be determined in advance or may be arbitrarily set by the user.
  • The magnifications α(i) are, for example, array data for managing the variables “α(1) to α(n)”.
  • In step S152, the adjustment unit 264 initializes the variable i to 1.
  • In step S154, the adjustment unit 264 multiplies the reference proportional gain Kp by the magnification α(i) to generate a setting-candidate proportional gain Kp(i).
  • In step S156, the adjustment unit 264 sets the setting-candidate proportional gain Kp(i) generated in step S154 in the feedback control unit 254, and causes the feedback control unit 254 to execute a predetermined alignment process.
  • the predetermined alignment process is a process of moving the workpiece W from a predetermined start position to a predetermined target position SP.
  • the adjustment unit 264 measures an alignment time Ta (i) required to move the workpiece W from a predetermined start position to a predetermined target position SP.
  • In step S158, the adjustment unit 264 stores the alignment time Ta(i) measured in step S156 in the storage unit (for example, the nonvolatile memory 216 or the main memory 218 (see FIG. 5)) of the controller 200.
  • In step S160, the adjustment unit 264 increments the variable i; that is, the adjustment unit 264 adds 1 to the variable i.
  • In step S170, the adjustment unit 264 determines whether or not the variable i is smaller than the predetermined value n.
  • the predetermined value n represents the number of executions of steps S154, S156, S158, and S160, and is defined in advance.
  • If the adjustment unit 264 determines that the variable i is smaller than the predetermined value n (YES in step S170), control is returned to step S154. Otherwise (NO in step S170), the adjustment unit 264 switches control to step S172.
  • In step S172, the adjustment unit 264 selects, from among the control parameter candidates Kp(i), the candidate having the shortest alignment time Ta(i) as the control parameter 262 as the optimization result.
  • As described above, the adjustment unit 264 generates the control parameter candidates Kp(i) by multiplying the reference proportional gain Kp by each of the predetermined magnifications α(i). Then, the adjustment unit 264 sequentially sets each of the control parameter candidates Kp(i) as the control parameter 262 of the feedback control unit 254, and measures, for each control parameter candidate, the alignment time Ta(i) required to move the workpiece W from the predetermined position to the target position SP. Thereafter, the adjustment unit 264 selects, from among the control parameter candidates Kp(i), the candidate having the shortest alignment time Ta(i) as the control parameter 262 as the optimization result. Thereby, the adjustment unit 264 can further optimize the control parameter 262 determined in the above “E. Adjustment process of the control parameter 262”.
  • the adjustment unit 264 may calculate a maximum overshoot distance for each control parameter candidate, and select a control parameter candidate that minimizes the maximum overshoot distance as an optimization result.
  • the adjustment unit 264 may calculate a movement distance for each control parameter candidate and select a control parameter candidate that minimizes the movement distance as an optimization result.
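The candidate search of FIG. 8 amounts to a grid search over gain multipliers. A minimal sketch, assuming a hypothetical callback `run_alignment(kp)` that sets the gain in the feedback control unit, performs the alignment process, and returns the measured alignment time Ta:

```python
def optimize_gain(reference_kp, magnifications, run_alignment):
    """Steps S150-S172: evaluate Kp(i) = reference_kp * alpha(i) for each
    magnification alpha(i) and keep the candidate with the shortest
    alignment time Ta(i)."""
    best_kp, best_ta = None, float("inf")
    for alpha_i in magnifications:     # loop over i (steps S152/S160/S170)
        kp_i = reference_kp * alpha_i  # step S154: setting-candidate gain
        ta_i = run_alignment(kp_i)     # steps S156/S158: run and time alignment
        if ta_i < best_ta:             # track the shortest alignment time
            best_kp, best_ta = kp_i, ta_i
    return best_kp, best_ta            # step S172: optimization result
```

As noted above, the same loop structure applies when the selection criterion is the maximum overshoot distance or the movement distance instead of the alignment time.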
  • FIG. 9 is a flowchart showing the process of determining the estimated position PV by the position determination unit 252 shown in FIG. Below, with reference to FIG. 9, the process of determining the estimated position PV by the position determination unit 252 will be described.
  • In step S421, the position determination unit 252 determines whether or not it is the time at which the actual position PVv is obtained from the visual sensor 50. If it is the time at which the actual position PVv is obtained (YES in step S421), the position determination unit 252 switches control to step S422. Otherwise (NO in step S421), the position determination unit 252 switches control to step S427.
  • In step S422, the position determination unit 252 determines whether or not the actual position PVv is a normal value. For example, the position determination unit 252 determines that the actual position PVv is a normal value if the value is within a predetermined range. When the position determination unit 252 determines that the actual position PVv is a normal value (YES in step S422), control is switched to step S423. Otherwise (NO in step S422), the position determination unit 252 switches control to step S427.
  • In step S423, the position determination unit 252 receives the input of the actual position PVv.
  • In step S424, the position determination unit 252 estimates the encoder value PVms at the imaging time on which the calculation of the actual position PVv is based.
  • The imaging time is set, for example, as the middle time between the exposure start time (the time when the shutter of the imaging unit 52 opens) and the exposure end time (the time when the shutter of the imaging unit 52 closes).
  • In step S425, the position determination unit 252 calculates the estimated position PV using the actual position PVv, the encoder value PVm at the same time, and the encoder value PVms at the imaging time from which the actual position PVv was calculated. More specifically, in step S425, the position determination unit 252 calculates the estimated position PV using the following (Equation 9).
  • In step S426, the position determination unit 252 outputs the calculated estimated position PV to the feedback control unit 254. Further, the position determination unit 252 stores the estimated position PV as the reference estimated position PVp, and stores the encoder value PVm at this time as the reference encoder value PVmp.
  • In step S427, the position determination unit 252 determines whether or not the actual position PVv has been output one or more times. If the position determination unit 252 determines that the actual position PVv has been output one or more times (YES in step S427), control is switched to step S428. Otherwise (NO in step S427), the processor 214 switches control to step S426.
  • In step S428, the position determination unit 252 calculates the estimated position PV using the encoder value PVm, the reference estimated position PVp, and the reference encoder value PVmp. More specifically, in step S428, the position determination unit 252 calculates the estimated position PV using the following (Equation 10).
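The two update paths of FIG. 9 can be sketched as follows. Because the bodies of (Equation 9) and (Equation 10) are not reproduced in this text, the expressions below are assumptions inferred from the described inputs: the visually measured position is propagated forward by the encoder displacement accumulated since the imaging time (Equation 9), or since the last reference update (Equation 10).

```python
def estimate_position(pv_m, actual=None, ref=None):
    """FIG. 9 sketch: combine the slow visual measurement with the fast
    encoder value to estimate the current position PV each control period.

    pv_m   : current encoder value PV_m.
    actual : (PV_v, PV_ms) - visual position and encoder value at imaging time.
    ref    : (PV_p, PV_mp) - reference estimate and encoder value at its update.
    """
    if actual is not None:
        pv_v, pv_ms = actual
        pv = pv_v + (pv_m - pv_ms)  # assumed form of (Equation 9), step S425
    else:
        pv_p, pv_mp = ref
        pv = pv_p + (pv_m - pv_mp)  # assumed form of (Equation 10), step S428
    return pv
```

Between visual measurements the estimate advances at the encoder rate, so the feedback control unit 254 receives a position value every control period even though the visual sensor 50 updates far less often.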
  • a feedback control unit (254) that, according to the set control parameter, generates for each control period a movement command for adjusting the estimated position to the target position of the object, and outputs the movement command to the movement mechanism (400) for each control period;
  • a control system (1).
  • The adjustment unit (264): generates a plurality of control parameter candidates that can be set in the feedback control unit (254), based on the value of the control parameter determined based on the transition; sequentially sets each of the plurality of control parameter candidates in the feedback control unit (254); and measures, for each control parameter candidate, an alignment time required for moving the object from a predetermined position to the target position.
  • The control system (1) according to Configuration 1 or 2, wherein a control parameter candidate having the shortest alignment time among the plurality of control parameter candidates is selected as the control parameter as an optimization result.
  • The control system (1) according to any one of Configurations 1 to 4, wherein the adjustment unit (264): calculates the maximum rate of change per unit time in the transition; calculates the delay time of the control target of the feedback control unit (254) based on the time at which the maximum rate of change appears in the transition and on the maximum rate of change; and determines the control parameter based on the delay time.
  • A control method of a moving mechanism (400) for moving an object, comprising: imaging the object by outputting an imaging instruction to a visual sensor (50), and causing the visual sensor (50) to measure the actual position of the object from an image obtained by the imaging; detecting position-related information related to the position of the moving mechanism (400) for each predetermined control period shorter than the interval at which the imaging instruction is output to the visual sensor (50); determining an estimated position of the object at the current time for each control period based on the actual position and the position-related information; generating, according to a set control parameter, a movement command for adjusting the estimated position to the target position of the object for each control period, and outputting the movement command to the moving mechanism (400) for each control period; and adjusting the control parameter based on a transition of the position-related information as a feedback value, the transition being obtained in the detecting step by sequentially outputting a predetermined movement command to the moving mechanism (400).
  • A control program for a moving mechanism (400) for moving an object, the control program causing a controller that controls the moving mechanism (400) to execute: imaging the object by outputting an imaging instruction to a visual sensor (50), and causing the visual sensor (50) to measure the actual position of the object from an image obtained by the imaging; detecting position-related information related to the position of the moving mechanism (400) for each predetermined control period shorter than the interval at which the imaging instruction is output to the visual sensor (50); determining an estimated position of the object at the current time for each control period based on the actual position and the position-related information; generating, according to a set control parameter, a movement command for adjusting the estimated position to the target position of the object for each control period, and outputting the movement command to the moving mechanism (400) for each control period; and adjusting the control parameter based on a transition of the position-related information as a feedback value, the transition being obtained in the detecting step by sequentially outputting a predetermined movement command to the moving mechanism (400).
  • 1 control system, 4, 7 base plate, 6, 9 ball screw, 12 features, 50 visual sensor, 52 imaging unit, 54 image processing unit, 100 image processing device, 102, 104 camera, 110, 214 processor, 112 RAM, 114 display controller, 116 system controller, 118 I/O controller, 120 hard disk, 122 camera interface, 122a image buffer, 124 input interface, 126 controller interface, 128, 228 communication interface, 130, 222 memory card interface, 132 display unit, 134 keyboard, 136, 224 memory card, 150 control program, 210 main control unit, 212 chipset, 216 nonvolatile memory, 218 main memory, 220 system clock, 226 internal bus, 230 internal bus controller, 232 control circuit, 234 internal bus control circuit, 236 buffer memory, 238 fieldbus controller, 252 position determination unit, 254 feedback control unit, 256 subtraction unit, 258 multiplication unit, 262 control parameter, 264 adjustment unit, 400 moving mechanism, 402, 402X, 402Y servo driver, 410, 410X,

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position Or Direction (AREA)
  • Manipulator (AREA)
  • Studio Devices (AREA)
  • Feedback Control In General (AREA)
PCT/JP2019/014129 2018-04-26 2019-03-29 制御システム、制御方法、および制御プログラム WO2019208109A1 (ja)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980018736.2A CN111868658B (zh) 2018-04-26 2019-03-29 控制系统、控制方法以及计算机可读存储介质
KR1020207026085A KR102612470B1 (ko) 2018-04-26 2019-03-29 제어 시스템, 제어 방법 및 컴퓨터 판독 가능한 기억 매체

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-085123 2018-04-26
JP2018085123A JP6922829B2 (ja) 2018-04-26 2018-04-26 制御システム、制御方法、および制御プログラム

Publications (1)

Publication Number Publication Date
WO2019208109A1 true WO2019208109A1 (ja) 2019-10-31

Family

ID=68295246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/014129 WO2019208109A1 (ja) 2018-04-26 2019-03-29 制御システム、制御方法、および制御プログラム

Country Status (4)

Country Link
JP (1) JP6922829B2 (zh)
KR (1) KR102612470B1 (zh)
CN (1) CN111868658B (zh)
WO (1) WO2019208109A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09247975A (ja) * 1996-03-12 1997-09-19 Matsushita Electric Ind Co Ltd モータドライブ装置
JP2003330510A (ja) * 2002-05-14 2003-11-21 Yaskawa Electric Corp 数値制御装置の同期制御方法
JP2009122779A (ja) * 2007-11-12 2009-06-04 Mitsubishi Electric Corp 制御システムおよび制御支援装置
JP2015213139A (ja) * 2014-05-07 2015-11-26 国立大学法人 東京大学 位置決め装置

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06242803A (ja) * 1993-02-16 1994-09-02 Matsushita Electric Ind Co Ltd 自動調整サーボ制御装置
JP3424849B2 (ja) * 1994-01-14 2003-07-07 株式会社安川電機 マニピュレータのコンプライアンス制御装置
JP4745798B2 (ja) * 2005-11-11 2011-08-10 株式会社日立産機システム 電動機制御装置の自動調整法および装置
JP2007219691A (ja) * 2006-02-15 2007-08-30 Seiko Epson Corp Pid制御装置および制御パラメータ更新方法
JP2011134169A (ja) * 2009-12-25 2011-07-07 Mitsubishi Heavy Ind Ltd 制御パラメータ調整方法及び調整装置
JP5834545B2 (ja) * 2011-07-01 2015-12-24 セイコーエプソン株式会社 ロボット、ロボット制御装置、ロボット制御方法、およびロボット制御プログラム
WO2013165841A1 (en) * 2012-04-30 2013-11-07 Johnson Controls Technology Company Control system
WO2014115263A1 (ja) * 2013-01-23 2014-07-31 株式会社日立製作所 位置決め制御システム
JP6167622B2 (ja) * 2013-04-08 2017-07-26 オムロン株式会社 制御システムおよび制御方法
CN104898568B (zh) * 2015-05-20 2018-01-19 西安交通大学 基于刚度辨识的数控机床进给系统控制参数优化方法
JP6174636B2 (ja) 2015-07-24 2017-08-02 ファナック株式会社 ワークを位置決めするためのワーク位置決め装置
JP6551184B2 (ja) * 2015-11-18 2019-07-31 オムロン株式会社 シミュレーション装置、シミュレーション方法、およびシミュレーションプログラム
JP6844158B2 (ja) * 2016-09-09 2021-03-17 オムロン株式会社 制御装置および制御プログラム

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09247975A (ja) * 1996-03-12 1997-09-19 Matsushita Electric Ind Co Ltd モータドライブ装置
JP2003330510A (ja) * 2002-05-14 2003-11-21 Yaskawa Electric Corp 数値制御装置の同期制御方法
JP2009122779A (ja) * 2007-11-12 2009-06-04 Mitsubishi Electric Corp 制御システムおよび制御支援装置
JP2015213139A (ja) * 2014-05-07 2015-11-26 国立大学法人 東京大学 位置決め装置

Also Published As

Publication number Publication date
CN111868658A (zh) 2020-10-30
JP6922829B2 (ja) 2021-08-18
KR20210004957A (ko) 2021-01-13
KR102612470B1 (ko) 2023-12-12
CN111868658B (zh) 2024-08-09
JP2019188551A (ja) 2019-10-31

Similar Documents

Publication Publication Date Title
JP6167622B2 (ja) 制御システムおよび制御方法
CN110581945B (zh) 控制系统、控制装置、图像处理装置以及存储介质
CN110581946B (zh) 控制系统、控制装置、图像处理装置以及存储介质
US10656616B2 (en) Control device, control system, and recording medium
TW202029879A (zh) 控制裝置及控制方法
JP2006146572A (ja) サーボ制御装置および方法
CN111886556B (zh) 控制系统、控制方法以及计算机可读存储介质
WO2019208109A1 (ja) 制御システム、制御方法、および制御プログラム
JP2006293624A (ja) 多軸制御装置
WO2020003945A1 (ja) 位置決めシステム、制御方法およびプログラム
WO2019208107A1 (ja) 制御システム、制御方法、および制御プログラム
JP7003454B2 (ja) 制御装置、位置制御システム、位置制御方法、および、位置制御プログラム
JP7020262B2 (ja) 制御システム、制御方法およびプログラム
CN110581944B (zh) 控制系统、控制装置以及存储介质
US6759670B2 (en) Method for dynamic manipulation of a position of a module in an optical system
JP2020140646A (ja) 制御装置および位置合わせ装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19792775

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19792775

Country of ref document: EP

Kind code of ref document: A1