WO2023189077A1 - Control device and imaging device - Google Patents
Control device and imaging device
- Publication number
- WO2023189077A1 (PCT/JP2023/006795)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- image sensor
- robot
- imaging
- imaging mode
- Prior art date
Links
- 238000003384 imaging method Methods 0.000 title claims abstract description 145
- 230000015654 memory Effects 0.000 claims abstract description 29
- 238000012545 processing Methods 0.000 claims description 93
- 230000001133 acceleration Effects 0.000 claims description 14
- 230000004044 response Effects 0.000 claims description 12
- 238000013473 artificial intelligence Methods 0.000 description 35
- 238000001514 detection method Methods 0.000 description 32
- 238000000034 method Methods 0.000 description 28
- 230000008569 process Effects 0.000 description 24
- 230000033001 locomotion Effects 0.000 description 20
- 238000010586 diagram Methods 0.000 description 16
- 238000013459 approach Methods 0.000 description 14
- 230000005540 biological transmission Effects 0.000 description 12
- 238000012546 transfer Methods 0.000 description 10
- 230000008859 change Effects 0.000 description 6
- 239000012636 effector Substances 0.000 description 5
- 238000004519 manufacturing process Methods 0.000 description 5
- 238000012544 monitoring process Methods 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 241000282412 Homo Species 0.000 description 2
- 230000000903 blocking effect Effects 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 239000011521 glass Substances 0.000 description 2
- 238000009434 installation Methods 0.000 description 2
- 238000010801 machine learning Methods 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 239000000758 substrate Substances 0.000 description 2
- 230000032258 transport Effects 0.000 description 2
- 238000003466 welding Methods 0.000 description 2
- 238000007792 addition Methods 0.000 description 1
- 238000005452 bending Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 239000000470 constituent Substances 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000012217 deletion Methods 0.000 description 1
- 230000037430 deletion Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 229910044991 metal oxide Inorganic materials 0.000 description 1
- 150000004706 metal oxides Chemical class 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000010422 painting Methods 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
Definitions
- the present disclosure relates to a control device and an imaging device.
- Patent Document 1 discloses a control system that performs alignment (for example, placing a target object in its original position on a production line) using image processing.
- the control system includes two cameras with different viewing angles, each of which takes an image of a positioning mark, which is a feature for positioning, provided at a predetermined position on a glass substrate, which is an object.
- the control system adjusts the position of the object using images captured by the two cameras, and moves the object to the final target point.
- In Patent Document 1, an object is imaged by switching between two cameras with different viewing angles, which poses the problem that complicated control such as camera switching is required. Furthermore, two cameras are required to photograph the same field of view, which increases the size of the device. In addition, in Patent Document 1 the two cameras each recognize predetermined positioning marks on the target object (for example, a glass substrate), and the control system receives the recognition results and executes control to move the target object to a predetermined final target position. In other words, since elements other than the positioning marks, such as the surroundings of the workpiece, are not observed, the system is presumed to be usable only in a closed environment with no people around, and it is therefore difficult to apply it to a human-collaborative robot that works together with humans in a human environment.
- the present disclosure has been devised in view of the above-mentioned conventional situation, and aims to control working machines with high precision.
- the present disclosure provides a control device including one or more processors, one or more memories, and a program stored in the memory, the program causing the processor to control operation of a work machine and to execute a first control mode in which an image sensor connected to the work machine is operated by switching between a first imaging mode and a second imaging mode different from the first imaging mode.
- a work machine can be controlled with high precision.
- FIG. 1 is a diagram showing an example use case of a robot control system 100 according to the present embodiment.
- FIG. 2 is a diagram showing an example of the system configuration of the robot control system 100 according to the present embodiment.
- FIG. 3 is a diagram showing an example of the imaging mode of the camera device 10 according to the present embodiment.
- FIG. 4 is a conceptual diagram of each process of the robot's work.
- FIG. 5 is a diagram showing the relationship between the robot control gain and the camera imaging mode in each step of the robot's work.
- FIG. 6 is a flowchart showing the robot control process.
- FIG. 7 is a conceptual diagram of an example of control when the robot hand approaches an object.
- FIG. 8 is a conceptual diagram of an example of control when the robot hand approaches an object.
- FIG. 1 is a diagram showing an example use case of the robot control system 100 according to the present embodiment.
- the robot control system 100 includes at least a camera device 10, a robot controller 30, and a robot 50.
- the camera device 10 and the robot controller 30 are connected to each other, and the robot controller 30 and the robot 50 are connected to each other so that data or signals can be input and output to each other.
- the robot control system 100 is placed, for example, in a production facility such as a factory.
- the robot control system 100 controls a robot 50 that mounts a part at a predetermined position on an object (for example, a workpiece) transferred along a transfer direction DR1 on a belt conveyor CB installed in the production facility.
- use cases of the robot control system 100 according to the present embodiment are not limited to the above-mentioned component mounting, and may also include attaching labels to workpieces, driving screws into workpieces, component assembly, component processing, welding, painting, bonding, and the like.
- the robot 50 according to the present embodiment may be used as a human-cooperative robot that performs work in collaboration with humans (see FIG. 4).
- the camera device 10 (specifically, camera devices 10A, 10B, ...) is fixed at the tip of a robot arm (specifically, robot arms AR1, AR2, ...) of a robot 50 (specifically, robots 50A, 50B, ...).
- the tip of the robot arm is, for example, near the robot hand or end effector.
- the camera device 10 images, as subjects within its field of view, the objects (for example, workpieces WK1, WK2, ...) on the belt conveyor CB that transports them in the transport direction DR1 (step T1).
- the camera device 10 determines whether or not the captured image includes a target object through pattern matching processing using the captured image from step T1 and AI (Artificial Intelligence) provided in advance in executable form.
- the camera device 10 sends the detection result of the object (for example, coordinates indicating the position of the object included in the captured image) to the robot controller 30. Details of the internal configuration of the camera device 10 will be described later with reference to FIG. 2.
- the robot controller 30 receives the object detection result from the camera device 10 and recognizes the object (step T2). Next, the robot controller 30 generates an instruction for the movement of the robot 50 (for example, an instruction for driving or controlling an actuator (not shown) included in the robot 50) according to the position where the target object was detected, and outputs it to the robot 50 (step T2).
- the robot 50 moves based on instructions from the robot controller 30 (step T3).
- the movement performed by the robot 50 based on instructions from the robot controller 30 is, for example, a series of movements for mounting a part picked up by the robot hand at a specified position on the object being transferred on the belt conveyor CB.
- the content of the movement performed by the robot 50 is adaptively determined according to the use case of the robot control system 100 and is, needless to say, not limited to the above-mentioned component mounting.
- FIG. 2 is a diagram showing an example of the system configuration of the robot control system 100 according to the present embodiment.
- the robot control system 100 includes at least a camera device 10, a robot controller 30, a robot vision controller 31, and robots 50A and 50B.
- although the robots 50A and 50B have the same configuration here, they may have different configurations. Further, for convenience of explanation, only two robots are illustrated, but the number may be one, or three or more.
- the camera device 10 includes a board on which a lens 11, an image sensor 12, a CPU (Central Processing Unit) 13, an ISP (Image Signal Processor) 14, a memory 15, an AI processing unit 16, and a data output I/F 17 are mounted.
- This board is placed in a casing (not shown) of the camera device 10.
- the lens 11 (an example of an imaging unit) includes, for example, a focus lens and a zoom lens. Incident light, which is light reflected by a subject (for example, a target object such as workpieces WK1, WK2, ...), enters the lens 11. If a visible light cut filter and an IR (Infrared Ray) cut filter are arranged between the lens 11 and the image sensor 12, the incident light entering the lens 11 passes through one of the filters, and an optical image of the subject is formed on the light receiving surface (imaging surface) of the image sensor 12. As the lens 11, lenses with various focal lengths or shooting ranges can be used depending on the installation location of the camera device 10, the shooting purpose, and so on.
- the camera device 10 may include a lens drive section (not shown) that controls the drive of the lens 11. In this case, the CPU 13 or the ISP 14 may adjust (change) internal parameters related to driving the lens 11 (for example, the position of the focus lens, or the position of the zoom lens corresponding to the zoom magnification) and drive the lens 11 via the lens drive section (not shown). Alternatively, the lens 11 may be fixedly arranged.
- the visible light cut filter (an example of an imaging unit) has the property of blocking visible light (for example, light with a wavelength of 400 to 760 [nm]) out of the incident light that has passed through the lens 11 (that is, the light reflected by the subject).
- the visible light cut filter blocks visible light among the incident light that has passed through the lens 11.
- the camera device 10 may include a filter drive unit (not shown) that controls the drive of the visible light cut filter. When the filter drive unit is provided, the visible light cut filter is moved by the filter drive unit (not shown) so as to be located between the lens 11 and the image sensor 12 during a predetermined period (for example, at night), based on a control signal from the CPU 13 or the ISP 14.
- the IR cut filter (an example of an imaging unit) has the property of passing visible light (for example, light with a wavelength of 400 to 760 [nm]) and blocking near-infrared light (for example, light with a wavelength of 780 [nm] or more).
- the IR cut filter blocks near-infrared light of the incident light that has passed through the lens 11 and allows visible light to pass through.
- the camera device 10 may include a filter drive section (not shown) that controls the drive of the IR cut filter.
- the IR cut filter is moved by a filter drive unit (not shown) so as to be located between the lens 11 and the image sensor 12 during a predetermined period (for example, during the day), based on a control signal from the CPU 13 or the ISP 14.
- the image sensor 12 (an example of an imaging unit) is a CCD (Charge Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor in which a plurality of pixels suitable for imaging visible light or near-infrared light are arranged, and includes an exposure control circuit (not shown) and a signal processing circuit (not shown).
- the image sensor 12 performs photoelectric conversion at predetermined intervals to convert light received by a light receiving surface (imaging surface) made up of a plurality of pixels into an electrical signal.
- the predetermined interval of photoelectric conversion is determined according to the so-called frame rate (fps: frame per second). For example, if the frame rate is 120 [fps], the predetermined interval is 1/120 [second].
- the image sensor 12 acquires a red component signal (R signal), a green component signal (G signal), and a blue component signal (B signal) as electrical signals, temporally continuously for each pixel, according to the light reflected by the subject (for example, objects such as workpieces WK1, WK2, etc.).
- a signal processing circuit (not shown) of the image sensor 12 converts an electrical signal (analog signal) into digital imaging data.
- a data transmission bus for direct transfer is provided between the image sensor 12 and the memory 15.
- the image sensor 12 transfers digital image data to the memory 15 at predetermined intervals (see above) depending on the frame rate via the data transmission bus.
- the memory 15 stores digital image data received from the image sensor 12. Note that the image sensor 12 may send digital image data to the CPU 13 at predetermined intervals (see above) depending on the frame rate.
- FIG. 3 is a diagram illustrating an example of the imaging mode of the camera device 10 according to the present embodiment.
- the imaging mode table TBL1 shown in FIG. 3 indicates the imaging modes of the camera device 10, each record combining the items of frame rate, output resolution, and processing delay time.
- the sensor frame rate of the imaging mode table TBL1 corresponds to the frame rate of the image sensor 12.
- the sensor output resolution of the imaging mode table TBL1 corresponds to the output resolution of the image sensor 12 (predetermined resolution described later).
- the processing delay time in the imaging mode table TBL1 corresponds to the processing delay time allowed for the entire processing of the camera device 10.
- the imaging mode of the camera device 10 can be set by the user selecting the frame rate and output resolution (the predetermined resolution described later) of the image sensor 12 according to the allowable processing delay time. Further, while the camera device 10 is operating, the CPU 13 or the ISP 14 may generate a signal for changing the imaging mode of the camera device 10 based on the detection of an event or the input of an external signal, and send it to the image sensor 12. In this case, the image sensor 12 changes the imaging mode (specifically, the frame rate and output resolution) of the camera device 10 based on the signal from the CPU 13 or the ISP 14. This allows the camera device 10 to dynamically change the imaging mode at any necessary timing.
- the processing delay time shown in FIG. 3 is an example, and may vary somewhat depending on the performance of the AI processing unit 16 used, the frame rate, the resolution, and so on.
- Imaging mode 1 is a combination of a frame rate of 480 [fps], an output resolution of VGA (that is, 640 x 480 dots), and a processing delay time of 10 [msec]; the frame rate and output resolution of the camera device 10 are set to the values corresponding to imaging mode 1.
- the camera device 10 can keep the total time of all processing executed within the camera device 10 in order to send the object detection result to the robot controller 30 (specifically, storage of the captured image from the image sensor 12 in the memory 15, resizing as necessary, object detection processing by the AI processing unit 16, and transmission of the detection result to the robot controller 30) within the processing delay time of imaging mode 1. Therefore, in imaging mode 1, the processing delay can be kept small enough that no processing congestion occurs within the camera device 10, and the robot controller 30 can be prompted to quickly issue instructions based on the detection result of the target object.
- Imaging mode 2 is a combination of a frame rate of 240 [fps], an output resolution of 1.3 MP (that is, 1280 x 960 dots), and a processing delay time of 20 [msec]; the frame rate and output resolution of the camera device 10 are set to the values corresponding to imaging mode 2.
- the camera device 10 can likewise keep the total time of all processing executed within the camera device 10 in order to send the object detection result to the robot controller 30 (storage of the captured image in the memory 15, resizing as necessary, object detection processing by the AI processing unit 16, and transmission of the detection result to the robot controller 30) within the processing delay time of imaging mode 2. Therefore, in imaging mode 2, the processing delay can be kept small enough that no processing congestion occurs within the camera device 10, and the robot controller 30 can be prompted to quickly issue instructions based on the detection result of the target object.
- In imaging mode 3, the frame rate is 120 [fps], the output resolution is Full HD (that is, 1920 x 1080 dots), and the processing delay time is 40 [msec]; the frame rate and output resolution of the camera device 10 are set to the values corresponding to imaging mode 3.
- the camera device 10 can likewise keep the total time of all processing executed within the camera device 10 in order to send the object detection result to the robot controller 30 (storage of the captured image in the memory 15, resizing as necessary, object detection processing by the AI processing unit 16, and transmission of the detection result to the robot controller 30) within the allowable processing delay time of imaging mode 3. Therefore, in imaging mode 3, the processing delay can be kept small enough that no processing congestion occurs within the camera device 10, and the robot controller 30 can be prompted to quickly issue instructions based on the detection result of the target object.
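- to make the three imaging modes concrete, the following is a minimal sketch in Python of the imaging mode table TBL1 and its per-mode delay budget; all identifiers (`ImagingMode`, `TBL1`, `fits_budget`) are illustrative assumptions and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImagingMode:
    fps: int                     # sensor frame rate [frames/s]
    resolution: tuple[int, int]  # sensor output resolution (width, height) [dots]
    delay_budget_ms: float       # allowable processing delay for the whole camera pipeline

# Imaging mode table TBL1 from FIG. 3
TBL1 = {
    1: ImagingMode(480, (640, 480), 10.0),    # VGA, lowest latency
    2: ImagingMode(240, (1280, 960), 20.0),   # 1.3 MP
    3: ImagingMode(120, (1920, 1080), 40.0),  # Full HD, wide-view monitoring
}

def frame_interval_ms(mode: ImagingMode) -> float:
    """Photoelectric-conversion interval implied by the frame rate (120 fps -> 1/120 s)."""
    return 1000.0 / mode.fps

def fits_budget(mode: ImagingMode, measured_pipeline_ms: float) -> bool:
    """True if storing, resizing, AI detection, and transmission fit the delay budget."""
    return measured_pipeline_ms <= mode.delay_budget_ms

for idx, mode in TBL1.items():
    print(f"imaging mode {idx}: {mode.fps} fps, {mode.resolution}, "
          f"interval {frame_interval_ms(mode):.2f} ms, budget {mode.delay_budget_ms} ms")
```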
- the image sensor 12 adjusts (changes) internal parameters related to the exposure conditions of the camera device 10 (for example, exposure time, gain, frame rate) using an exposure control circuit, based on an exposure control signal from the CPU 13 or the ISP 14.
- the image sensor 12 is capable of capturing image data of an object at a predetermined resolution (for example, 1920 x 1080 dots, corresponding to 2.0 MP (megapixels), that is, Full HD), and can perform various processing such as cropping or binning.
- Cropping is a process in which the image sensor 12 cuts out an image area of a specific range (for example, a bright central part) from the entire image area of the captured image data. The cropped image therefore has a reduced size (in other words, resolution) compared to the image data before cropping.
- Binning is a process in which the image sensor 12 aggregates the pixel components (e.g., pixel values) of multiple pixels (e.g., 2 x 2 or 4 x 4) constituting the image data into a single pixel. Therefore, like a cropped image, the binned image has a reduced number of pixels (in other words, resolution) compared to the captured image data before binning.
- the image sensor 12 can convert the image data into image data with a resolution smaller than the predetermined resolution (for example, 640 x 480 dots, which is VGA (Video Graphics Array), corresponding to imaging mode 1, or 1280 x 960 dots, which is 1.3 MP, corresponding to imaging mode 2) and output it.
- the predetermined resolution of the image sensor 12 is not limited to Full HD (1920 ⁇ 1080 dots) (see FIG. 3), but may be 1.3 MP (1280 ⁇ 960 dots) or VGA (640 ⁇ 480 dots).
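- as a minimal sketch of the two resolution-reduction processes described above, the following assumes NumPy; the helper names `crop_center` and `bin_pixels` are hypothetical.

```python
import numpy as np

def crop_center(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Cropping: cut out a central image area; the result has fewer pixels than the input."""
    h, w = img.shape[:2]
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]

def bin_pixels(img: np.ndarray, factor: int) -> np.ndarray:
    """Binning: aggregate factor x factor pixel blocks into single pixels (mean value)."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor  # drop edge rows/cols that do not fill a block
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

full_hd = np.random.randint(0, 256, (1080, 1920), dtype=np.uint16)  # stand-in sensor frame
print(crop_center(full_hd, 480, 640).shape)  # (480, 640) -> VGA-sized crop
print(bin_pixels(full_hd, 2).shape)          # (540, 960) -> 2x2 binning
```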
- the CPU 13 is a processor that functions as a controller that controls the overall operation of the camera device 10.
- the CPU 13 performs control processing for unifying the operations of each part of the camera device 10, data input/output processing with respect to each part of the camera device 10, data arithmetic processing, and data storage processing.
- the CPU 13 operates according to programs and control data stored in the memory 15.
- the CPU 13 uses the memory 15 during operation, and transfers data or information generated or acquired by the CPU 13 to the memory 15 for temporary storage. Further, the CPU 13 transfers the image data from the image sensor 12 to the memory 15 to temporarily store it and/or sends it to the ISP 14.
- the CPU 13 includes a timer (not shown) or an illuminance sensor (not shown).
- the CPU 13 generates a control signal instructing which of the visible light cut filter and the IR cut filter is to be placed between the lens 11 and the image sensor 12, based on the output of the timer or the illuminance sensor, and sends it to the filter drive unit (not shown).
- the CPU 13 controls internal parameters (for example, exposure time, gain, frame rate, resolution) of the image sensor 12 based on signals from the robot vision controller 31.
- the ISP 14 is a processor that manages various image processes performed within the camera device 10.
- the ISP 14 reads the image data from the image sensor 12 from the memory 15 and performs image processing using the read image data. Further, the ISP 14 performs a resizing process to convert the size of the image data read from the memory 15 into a size suitable for object detection processing (for example, pattern matching) by the AI processing unit 16.
- the ISP 14 uses the memory 15 during operation, and transfers data or information generated or acquired by the ISP 14 to the memory 15 for temporary storage.
- the memory 15 includes at least a RAM (Random Access Memory) and a ROM (Read Only Memory).
- the memory 15 holds programs necessary for executing the operations of the camera device 10, and further temporarily holds data or information generated during the operation of each part of the camera device 10.
- the RAM is, for example, a work memory used when each part of the camera device 10 operates.
- the ROM stores and retains programs and control data for controlling each part of the camera device 10 in advance, for example.
- the AI processing unit 16 (an example of an AI processing unit) is configured using, for example, a GPU (Graphics Processing Unit) and memory. Note that a DSP (Digital Signal Processor) may be used instead of or together with the GPU.
- the AI processing unit 16 uses AI (artificial intelligence) to perform detection processing (for example, pattern matching) that detects the position of the object from the image data of the object (for example, objects such as workpieces WK1, WK2, etc.) imaged by the image sensor 12. Note that the AI processing unit 16 may perform the above-described detection processing using pattern matching processing known as existing image processing, without using AI.
- a data transmission bus P1 for direct transfer is provided between the memory 15 and the AI processing section 16.
- the AI processing unit 16 acquires the imaging data used for object detection processing from the memory 15 via the data transmission bus P1, and executes detection processing of the object included in the imaging data. If the GPU constituting the AI processing unit 16 has a processing performance of, for example, 4 TOPS, the AI processing unit 16 can execute the object detection processing on the order of 1 to 4 milliseconds.
- the AI processing unit 16 is capable of executing a learned model (not shown) that has already been generated, for example, by machine learning or the like.
- the learned model corresponds to a data set in which a detection target to be detected by the AI processing unit 16 is determined by machine learning or the like based on a plurality of image data.
- the objects to be detected by the AI processing unit 16 are, for example, various parts used in the production factory where the robot control system 100 is installed.
- the AI processing unit 16 executes a process of detecting a detection target included in the image data from the image sensor 12 using the input image data having a specified size and the learned model described above.
- the detection process is, for example, image processing using AI (hereinafter referred to as AI image processing).
- the AI processing unit 16 calculates, for example, a distance (hereinafter referred to as an assumed distance) to the object (for example, the workpiece WK1) from the result of the detection processing of the detection object included in the imaging data.
- the assumed distance is, for example, the difference between the expected position of the target and the actual position of the target in the captured image, or the difference between the position of the tip of the robot arm and that of the target.
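- as one possible reading of this definition, the assumed distance can be computed as the offset between the expected and detected target positions in the captured image; the sketch below is a hypothetical illustration (the function name and the pixel-based Euclidean metric are assumptions).

```python
import math

def assumed_distance(expected_xy: tuple, detected_xy: tuple) -> float:
    """Difference between the expected position of the target and the actual
    position detected in the captured image (Euclidean distance in pixels)."""
    dx = detected_xy[0] - expected_xy[0]
    dy = detected_xy[1] - expected_xy[1]
    return math.hypot(dx, dy)

# e.g. the target was expected at the center of a VGA frame
print(assumed_distance((320, 240), (348, 255)))  # ~31.8 pixels
```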
- a data transmission bus for direct transfer is provided between the AI processing section 16 and the data output I/F 17.
- the AI processing unit 16 sends data of the detection results obtained through the object detection processing to the data output I/F 17 via the data transmission bus.
- the data output I/F 17 (an example of an output interface) is configured using a circuit that can input and output data or signals to and from the robot vision controller 31 connected to the rear stage of the camera device 10.
- the data output I/F 17 outputs the data (for example, coordinate information) of the object detection result from the AI processing unit 16 to the robot vision controller 31 according to a predetermined data transmission method (for example, GigE).
- GigE: Gigabit Ethernet (registered trademark)
- the data transmission method using the wired connection between the data output I/F 17 and the robot vision controller 31 is not limited to GigE.
- the robot vision controller 31 transmits, to the camera device 10, a signal for controlling at least one of the resolution and the frame rate of the image sensor 12 based on the object detection result output from the data output I/F 17 using a predetermined data transmission method (for example, GigE). The robot vision controller 31 also transmits the data acquired from the data output I/F 17 to the robot controller 30.
- the robot controller 30 receives the object detection result data output from the robot vision controller 31, generates an instruction for an appropriate movement according to the detected position of the object (for example, an instruction for driving or controlling an actuator (not shown) included in the robot 50), and outputs it to the robot 50 (specifically, the robots 50A, 50B, ...).
- the robot 50 (specifically, robots 50A, 50B, ...) includes a robot driver (specifically, robot drivers 51A, 51B, ...) and a robot motor (specifically, robot motors 52A, 52B, ...).
- the number of robots 50 included in the robot control system 100 may be one or three or more.
- the robot driver 51A controls the robot motor 52A to generate a driving force for causing the robot 50A to perform operations based on instructions from the robot controller 30. Further, the robot driver 51B controls the robot motor 52B to generate a driving force for causing the robot 50B to execute an operation based on an instruction from the robot controller 30.
- the robot motor 52A generates, under the control of the robot driver 51A, the driving force for causing the tip of the robot 50A (for example, the robot hand or end effector) to carry out a movement in response to an instruction from the robot controller 30 toward the object being transferred on the belt conveyor CB (for example, mounting a part picked up by the robot hand at a prescribed position on the object).
- likewise, the robot motor 52B generates, under the control of the robot driver 51B, the driving force for causing the tip of the robot 50B (for example, the robot hand or end effector) to carry out a movement in response to an instruction from the robot controller 30 toward the object being transferred on the belt conveyor CB (for example, mounting a part picked up by the robot hand at a prescribed position on the object).
- the robot 50 (specifically, robots 50A, 50B, ...) includes, in addition to the robot driver and the robot motor, a robot base (not shown) and the robot arms AR1 and AR2.
- Each of the robot arms AR1 and AR2 is sequentially connected by, for example, six joints (specifically, three bending joints and three torsion joints).
- a robot hand or an end effector is disposed at the tip of each of the robot arms AR1 and AR2 to pick up by suction and hold a component to be mounted at a predetermined position on each of the workpieces WK1 and WK2, and to mount the component.
- each of the camera devices 10A and 10B is fixed to the tip portion (for example, a robot hand or an end effector) of each of the robot arms AR1 and AR2.
- FIG. 4 is a conceptual diagram of each process of the robot's work.
- FIG. 5 is a diagram showing the relationship between the robot control gain and the camera imaging mode in each step of the robot's work.
- FIG. 4 is an example of a scene where the robot 50A and the human hm work together.
- a plurality of screws Pi are regularly arranged in the screw box CA.
- the robot 50A performs the work of attaching the screw Pi placed at a prescribed position in the screw box CA to the workpiece WK1.
- the robot control gain is, for example, the proportional gain in PID (Proportional-Integral-Differential) control.
- the robot control gain determines how quickly the speed of movement of the robot 50A is controlled.
- when the robot control gain is high (for example, when the P value is relatively large) and the distance from the target value is large, the output to the robot motor increases, so that movement that responds quickly to instructions from the robot controller can be realized.
- as the distance from the target value becomes smaller, the output to the robot motor becomes smaller and the response to instructions from the robot controller becomes slower.
- conversely, if the robot control gain is set so that the robot can move quickly in response to instructions from the robot controller when the distance from the target value is small, a problem arises in that the output to the robot motor becomes too large when the distance from the target value is large.
- in other words, when the robot control gain is high, the output to the robot motor in a predetermined time is larger than when the robot control gain is low, for the same distance to the target value. Therefore, the acceleration of the work machine (for example, the motion acceleration of the robot hand HD) increases.
- the above problem can be solved by, for example, changing the settings of the robot control gain according to the situation.
- “low” and “high” robot control gains shown in FIG. 5 have relative meanings, and are not limited to two types.
- the value of the robot control gain may be determined in advance and changed depending on the situation (for example, the distance from the target value, the process, etc.).
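- the effect of the robot control gain can be seen in a one-line proportional control law, in which the output to the robot motor is the gain multiplied by the distance from the target value; the following minimal sketch shows the P term only (the I and D terms of full PID control are omitted, and all names are illustrative).

```python
def proportional_output(gain: float, target: float, current: float) -> float:
    """P term of PID control: the output to the robot motor grows with the gain
    and with the distance from the target value."""
    return gain * (target - current)

# same distance to the target value, two gain settings
print(proportional_output(gain=0.5, target=100.0, current=40.0))  # low gain  -> 30.0
print(proportional_output(gain=2.0, target=100.0, current=40.0))  # high gain -> 120.0
# with a fixed gain, the output shrinks as the distance to the target value shrinks
print(proportional_output(gain=2.0, target=100.0, current=95.0))  # near goal -> 10.0
```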
- in step 1, the robot hand HD moves from above the belt conveyor CB to the position of the screw box CA in which the screw Pi is stored.
- in this step, the robot hand HD moves quickly because there are no people or obstacles around it.
- the camera device 10 may have a wide field of view and move while monitoring the surroundings of the robot hand HD.
- the robot control gain in step 1 is low because precise control up to the next target value is not required and the distance to the target value is long.
- in step 2, the robot hand HD moves to the position of the screw Pi to be picked up and approaches the screw.
- the robot hand HD moves at high speed in order to accurately and quickly approach the screw Pi.
- the robot control gain in step 2 becomes high because the robot hand is approaching the target value and high-speed response control is required for more precise control.
- the imaging mode of the camera device 10 in step 2 is imaging mode 1, in which the frame rate of the camera device 10 is high, because high-speed control of the robot hand HD is required. Note that in step 2, since the robot hand is close to the screw Pi, the screw Pi can be suitably imaged even with a narrow angle of view (that is, a low resolution).
- in step 3, the robot hand HD picks up the screw Pi.
- the robot hand HD moves at high speed in order to accurately and quickly pick up the screw Pi.
- the robot control gain in step 3 is high because the robot hand is close to the target value and high-speed response control is required for more precise control.
- the imaging mode of the camera device 10 in step 3 is imaging mode 1, in which the frame rate of the camera device 10 is high, because high-speed control of the robot hand HD is required. Note that in step 3 as well, since the robot hand is close to the screw Pi, the screw Pi can be sufficiently imaged even with a narrow angle of view (that is, a low resolution).
- in step 4, the robot hand HD moves to a position near the workpiece WK1.
- the robot hand HD moves quickly because there are no people or obstacles around the robot hand HD.
- the camera device 10 may have a wide field of view and move while monitoring the surroundings of the robot hand HD.
- the robot control gain in step 4 is low because precise control up to the target value is not required and the distance to the target value is long.
- the imaging mode of the camera device 10 in step 4 is imaging mode 3, in which the frame rate of the camera device 10 is low, because precise high-speed response control of the robot hand HD is not required to reach the target value, and the angle of view is wide (that is, the resolution is high) in order to monitor the surroundings.
- in step 5, after confirming whether the robot hand HD is approaching the person hm or the workpiece WK1, the robot hand HD moves onto the workpiece WK1.
- the robot hand HD may move at a low speed so as not to cause a large impact even if the person hm moves suddenly and collides with it; when there is no such risk, the robot hand HD may be moved at high speed.
- the camera device 10 may have a wide field of view and move while monitoring the surroundings of the robot hand HD.
- the robot control gain in step 5 may be low because precise control up to the target value is not required.
- the imaging mode of the camera device 10 in step 5 is imaging mode 3, in which the frame rate of the camera device 10 is low, because high-speed control of the robot hand HD is not required, and the angle of view is wide (that is, the resolution is high) in order to monitor the surroundings.
- in step 6, the robot hand HD approaches the target workpiece WK1 and moves to the position of the screw hole of the workpiece WK1 (in the plane direction with respect to the workpiece WK1).
- in step 6, the robot hand HD moves accurately and quickly to the installation position of the screw Pi.
- the robot control gain in step 6 is high because the robot hand is close to the target value and high-speed response control is required for more precise control.
- the imaging mode of the camera device 10 in step 6 is imaging mode 1, in which the frame rate of the camera device 10 is high, because high-speed control of the robot hand HD is required. Note that in step 6, since the robot hand is close to the screw hole, the screw hole can be sufficiently imaged even with a narrow angle of view (that is, a low resolution).
- in step 7, the robot hand HD moves up and down with respect to the workpiece WK1 and fastens the screw Pi to the workpiece WK1.
- the robot control gain in step 7 is high because the robot hand is close to the target value and high-speed response control is required for more precise control.
- the imaging mode of the camera device 10 in step 7 is imaging mode 1, in which the frame rate of the camera device 10 is high, because high-speed control of the robot hand HD is required. Note that in step 7 as well, since the robot hand is close to the screw hole, the screw hole can be sufficiently imaged even with a narrow angle of view (that is, a low resolution).
- in step 8, the robot hand HD finishes fastening the screw Pi to the workpiece WK1 and moves away from the workpiece WK1.
- the camera device 10 may have a wide field of view and move while monitoring the surroundings of the robot hand HD.
- the robot control gain in step 8 is low because precise control to the target value is not required.
- the imaging mode of the camera device 10 in step 8 is imaging mode 3, in which the frame rate of the camera device 10 is low, because high-speed control of the robot hand HD is not required, and the angle of view is wide (that is, the resolution is high) in order to monitor the surroundings.
- after step 8, the process of the robot 50A returns to step 1 again.
- here, a mode with a low robot control gain is defined as a first operation mode, and a mode with a high robot control gain is defined as a second operation mode. The acceleration of the robot 50A in the second operation mode is greater than the acceleration of the robot 50A in the first operation mode.
- the camera device 10 may operate by switching the imaging mode of the image sensor 12 in accordance with the switching of the robot control gain from the first operation mode to the second operation mode (hereinafter referred to as the first control mode).
- for example, when switching from the first operation mode to the second operation mode, the camera device 10 switches the imaging mode of the image sensor 12 from imaging mode 3 to imaging mode 1, which captures more images per unit time than imaging mode 3.
- in other words, the camera device 10 switches the imaging mode of the image sensor 12 from imaging mode 3 to imaging mode 1, which has a lower resolution than imaging mode 3.
- conversely, the camera device 10 may switch the mode of the robot control gain in accordance with the switching of the imaging mode of the image sensor 12 (hereinafter referred to as the third control mode).
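- a minimal sketch of how the first and third control modes could couple the two switches is shown below; the mapping tables and function names are illustrative assumptions, since the disclosure does not prescribe an implementation.

```python
# first operation mode: low robot control gain / second operation mode: high gain
GAIN_TO_IMAGING = {"first_op_mode": 3, "second_op_mode": 1}  # first control mode
IMAGING_TO_GAIN = {3: "first_op_mode", 1: "second_op_mode"}  # third control mode

def first_control_mode(operation_mode: str) -> int:
    """Switch the imaging mode in accordance with the robot-control-gain switch."""
    return GAIN_TO_IMAGING[operation_mode]

def third_control_mode(imaging_mode: int) -> str:
    """Conversely, switch the robot control gain in accordance with the imaging mode."""
    return IMAGING_TO_GAIN[imaging_mode]

print(first_control_mode("second_op_mode"))  # -> imaging mode 1 (480 fps, VGA)
print(third_control_mode(3))                 # -> first operation mode (low gain)
```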
- FIG. 6 is a flowchart showing the robot control process. Each process related to the flowchart in FIG. 6 is executed by the camera device 10.
- the camera device 10 transmits control information regarding the motion trajectory of the robot 50 to the robot controller 30.
- the robot controller 30 transmits the control information to the robot 50 after acquiring the control information regarding the motion trajectory of the robot 50 .
- the robot 50 starts operating based on the control information acquired from the robot controller 30 (step St99). This process is repeated during robot operation after the robot first starts operating.
- the camera device 10 images an object (for example, a screw Pi) (Step St100).
- the camera device 10 recognizes the object using the captured image captured in step St100 (step St101).
- the camera device 10 calculates whether the trajectory of the robot 50 deviates from the assumed trajectory based on the result of step St101, and determines whether the trajectory of the robot 50 is appropriate (step St102). For example, when the robot hand HD of the robot 50 approaches a target object, the camera device 10 determines whether there is any deviation in the trajectory of the robot hand HD, and determines whether the trajectory is appropriate according to the amount of deviation. When the camera device 10 determines that the trajectory of the robot 50 is appropriate (step St102, YES), the process returns to step St100; that is, the robot hand HD is judged to be moving properly, and the operation continues without any correction to the trajectory of the robot hand HD.
- on the other hand, when the camera device 10 determines that the trajectory of the robot 50 is not appropriate (step St102, NO), the camera device 10 outputs the determination result to the robot controller 30.
- the robot controller 30 controls the trajectory of the robot 50 based on the determination result of step St102 (step St103). For example, if the camera device 10 determines that the robot hand HD of the robot 50, approaching the target object on its current trajectory, cannot reach the target object, the camera device 10 sends an instruction to modify the trajectory of the robot hand HD to the robot controller 30.
- Next, the camera device 10 determines whether to change the robot control mode and the imaging mode of the image sensor 12 (step St104). When the camera device 10 determines that the robot control mode and the imaging mode are not to be changed (step St104, NO), the process returns to step St99.
- when the camera device 10 determines that the robot control mode or the imaging mode is to be changed (step St104, YES), the camera device 10 determines and sets the robot control mode or imaging mode to be changed (step St105). That is, the camera device 10 operates by switching the imaging mode of the image sensor 12 to an imaging mode different from the currently set one; the same applies to the robot control mode. For example, when the process of the robot 50A moves from step 3 to step 4 in FIG. 5, the camera device 10 changes the imaging mode of the image sensor 12 from imaging mode 1 to imaging mode 3.
- that is, the camera device 10 changes the imaging mode from imaging mode 1, in which the number of images output by the image sensor 12 per unit time is 480 (a frame rate of 480 fps), to imaging mode 3, in which it is 120 (a frame rate of 120 fps). At the same time, the camera device 10 changes the resolution of the image sensor 12 from VGA to the higher resolution of Full HD (see FIG. 3), and the gain of the robot control mode is changed from high to low.
- after step St105, the processing of the camera device 10 returns to step St99.
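- the flow of FIG. 6 (steps St99 to St105) reduces to a loop of imaging, recognition, trajectory check, correction, and mode re-evaluation; the skeleton below assumes hypothetical interfaces (`capture`, `recognize`, `trajectory_ok`, and so on stand in for the camera-device processing described above).

```python
def robot_control_loop(camera, robot_controller, max_iterations: int = 1000) -> None:
    """Skeleton of the flowchart in FIG. 6; every callable here is an assumed interface."""
    camera.send_motion_trajectory(robot_controller)        # control info before St99
    robot_controller.start_robot()                         # St99: robot starts operating
    for _ in range(max_iterations):
        frame = camera.capture()                           # St100: image the object
        detection = camera.recognize(frame)                # St101: recognize the object
        if not camera.trajectory_ok(detection):            # St102: trajectory appropriate?
            robot_controller.correct_trajectory(detection) # St103: correct the trajectory
        if camera.should_change_modes(detection):          # St104: change modes?
            camera.apply_new_modes(detection)              # St105: e.g. imaging mode 1 -> 3,
                                                           # robot control gain high -> low
        # then back to St99/St100 on the next iteration
```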
- FIG. 7 is a conceptual diagram of an example of control when the robot hand approaches an object.
- FIG. 7 shows an example of the operation of step 1 in FIG.
- Coordinates K1, K2, and K3 represent the position of the robot hand HD.
- the robot hand HD approaches the target object, the screw Pi, from the coordinate K1.
- the trajectory OB1 represents the trajectory of the motion of the robot hand HD when approaching the object.
- the field of view VW1 represents an image of the field of view of the image sensor 12 at a certain timing in step 1 according to FIG.
- the field of view VW1 corresponds to the image of the field of view of the image sensor 12 at the coordinate K2 (shown in FIG. 7).
- the field of view VW2 represents an image of the field of view of the image sensor 12 at a certain timing in step 2 according to FIG.
- the field of view VW2 corresponds to an image of the field of view of the image sensor 12 at the coordinate L3 (shown in FIG. 8).
- the camera device 10 determines the screw Pi as the target object.
- since the camera device 10 fixed to the robot hand HD moves to an approximate position near the screw Pi as the robot hand HD moves, control is performed in imaging mode 3, which is sufficient when high-precision control is not required.
- the robot hand HD operates with a low robot control gain because it does not require precise movements.
- the camera device 10 calculates the trajectory of the motion of the robot hand HD.
- the camera device 10 calculates a corrected trajectory based on the position of the screw Pi and the current trajectory, and outputs the calculation result to the robot controller 30.
- the robot controller 30 controls the operation of the robot hand HD based on instructions from the camera device 10.
- the robot hand HD moves from the coordinate K2 to the coordinate K3 under the control of the robot controller 30.
- the camera device 10 again calculates a corrected trajectory from the position of the screw Pi and the current trajectory, and outputs the calculation result to the robot controller 30.
- the robot controller 30 controls the operation of the robot hand HD based on instructions from the camera device 10.
- when the process proceeds from step 1 to step 2, the camera device 10 changes the imaging mode of the image sensor 12. In the example shown in FIG. 7, the camera device 10 changes the imaging mode from imaging mode 3 to imaging mode 1 upon the transition from step 1 to step 2.
- FIG. 8 is a conceptual diagram of an example of control when the robot hand approaches an object.
- FIG. 8 shows an example of the operation of step 2 in FIG.
- FIG. 8 shows an example when the visual field in FIG. 7 becomes visual field VW2.
- Coordinates L1, L2, L3, L4, L5, and L6 represent the position of the robot hand HD.
- the robot hand HD approaches the target object, the screw Pi, from the coordinate L1.
- the trajectory OB2 represents the trajectory of the motion of the robot hand HD when approaching the object.
- the camera device 10 controls the image sensor 12 in imaging mode 1 with a high frame rate in order to perform more precise control.
- the robot hand HD operates with a high robot control gain because it requires precise movements.
- the camera device 10 calculates the trajectory of the motion of the robot hand HD.
- the camera device 10 calculates a corrected trajectory based on the position of the screw Pi and the current trajectory, and outputs the calculation result to the robot controller 30.
- the robot controller 30 controls the operation of the robot hand HD based on instructions from the camera device 10.
- the robot hand HD moves to the coordinate L3 under the control of the robot controller 30; in this way, the camera device 10 controls the movement of the robot hand HD and corrects its trajectory.
- the camera device 10 performs similar control at coordinates L3, L4, L5, and L6.
- the camera device 10 switches the imaging mode of the image sensor 12 based on the result of image processing by the AI processing unit 16 (hereinafter referred to as the second control mode).
- for example, the camera device 10 sets imaging mode 1, with a frame rate of 480 fps, when the assumed distance calculated by the AI processing unit 16 is less than or equal to a first distance.
- the camera device 10 sets imaging mode 3, with a frame rate of 120 fps, when the assumed distance is equal to or greater than a second distance that is larger than the first distance.
- similarly, the camera device 10 sets imaging mode 1, in which the resolution of the image sensor 12 is VGA, when the assumed distance is less than or equal to the first distance.
- the camera device 10 sets imaging mode 3, in which the resolution is Full HD, higher than the VGA of imaging mode 1, when the assumed distance is equal to or greater than the second distance.
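- the second control mode thus maps the assumed distance onto the imaging mode through two thresholds; in the sketch below the concrete threshold values are assumptions, and the behavior between the two distances (on which the disclosure is silent) is an assumed hysteresis.

```python
FIRST_DISTANCE = 50.0    # assumed value; the disclosure gives no concrete number
SECOND_DISTANCE = 200.0  # assumed value; must be larger than FIRST_DISTANCE

def second_control_mode(assumed_distance: float, current_mode: int) -> int:
    """Near the target (<= first distance): imaging mode 1 (480 fps, VGA).
    Far from it (>= second distance): imaging mode 3 (120 fps, Full HD).
    In between, keep the current mode (hysteresis; an assumption)."""
    if assumed_distance <= FIRST_DISTANCE:
        return 1
    if assumed_distance >= SECOND_DISTANCE:
        return 3
    return current_mode

print(second_control_mode(30.0, current_mode=3))   # close  -> imaging mode 1
print(second_control_mode(400.0, current_mode=1))  # far    -> imaging mode 3
print(second_control_mode(120.0, current_mode=1))  # middle -> unchanged (mode 1)
```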
- the camera device 10 may switch and execute the first control mode, the second control mode, and the third control mode.
- the camera device 10 may switch the control mode using a state in which the image sensor 12 does not acquire or cannot acquire imaging data as a trigger.
- as described above, the control device (for example, the camera device 10) can control the work machine with high precision according to the work content of the work machine and the surrounding environment.
- the control device can operate the work machine safely and with high precision by setting the frame rate of the image sensor to an optimal value depending on the situation.
- the control device can perform image processing with low delay and operate the work machine safely and with high precision.
- the control device can operate the work machine safely and with high precision by optimizing the combination of the frame rate and resolution of the image sensor.
- the control device can detect a wide variety of objects using AI processing.
- the control device can operate the work machine safely and with high precision by switching the imaging mode based on the result of image processing using AI processing.
- the control device can increase the frame rate when the assumed distance to the target object becomes shorter, acquire captured images in a shorter cycle, and detect the target object with high precision.
- the control device can reduce the resolution when the assumed distance to the target object becomes shorter, perform image processing with low delay, and operate the work machine safely and with high precision.
- the control device can control the settings of the image sensor in conjunction with the operation mode of the work machine, and can operate the work machine safely and with high precision.
- the control device can acquire captured images and detect objects at appropriate intervals according to the operation mode of the work machine, and can control the work machine safely and with high precision.
- the control device can change the resolution according to the operation mode of the work machine and perform image processing with low delay.
- the control device can control the work machine with high precision according to the surrounding situation by selecting an operation mode of the work machine suitable for each imaging mode.
- the control device can flexibly select a control mode suitable for the situation and environment in which the work machine is used.
- the technology of the present disclosure is useful as a control device and an imaging device that control working machines with high precision.
- 30 Robot controller 31 Robot vision controller 50, 50A, 50B Robot 51A, 51B Robot driver 52A, 52B Robot motor 100 Robot control system AR1, AR2 Robot arm HD Robot hand hm Person CA Screw box Pi Screw CB Belt conveyor DR1 Transfer direction WK1, WK2 Work OB1, OB2 Trajectory K1, K2, K3, L1, L2, L3, L4, L5, L6 Coordinates VW1, VW2 Field of view
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
This control device comprises at least one processor, at least one memory, and a program stored in the memory, the program causing the processor to: control an operation of a work machine; and execute a first control mode for causing an image sensor connected to the work machine to operate by switching between a first imaging mode and a second imaging mode different from the first imaging mode.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022056283 | 2022-03-30 | ||
JP2022-056283 | 2022-03-30 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023189077A1 (fr) | 2023-10-05 |
Family
ID=88200557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/006795 WO2023189077A1 (fr) | 2022-03-30 | 2023-02-24 | Dispositif de commande et dispositif d'imagerie |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2023189077A1 (fr) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018020638A1 (fr) * | 2016-07-28 | 2018-02-01 | Fuji Machine Mfg. Co., Ltd. | Imaging device, imaging system, and image processing method |
- 2023-02-24: WO PCT/JP2023/006795 patent/WO2023189077A1/fr unknown
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018020638A1 (fr) * | 2016-07-28 | 2018-02-01 | Fuji Machine Mfg. Co., Ltd. | Imaging device, imaging system, and image processing method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11267142B2 (en) | Imaging device including vision sensor capturing image of workpiece | |
CN110948491B (zh) | Industrial robot grasping method based on visual following | |
TWI615691B (zh) | Anti-collision system and anti-collision method | |
CN1218806C (zh) | Arc welding robot control platform with automatic visual seam tracking | |
US10447930B2 (en) | Monitoring camera and swing correction method | |
JP2009241247A (ja) | Stereo-image-type detection and moving device | |
JP5448069B2 (ja) | Robot control device and method | |
TW201927496A (zh) | Robot system and robot control method | |
JP2014124765A (ja) | Automatic screw tightening device | |
JP2014188617A (ja) | Robot control system, robot, robot control method, and program | |
JP6605611B2 (ja) | Robot system | |
CN102466958A (zh) | Computing device with photographic imaging function and projection autofocus method thereof | |
JP2022183308A (ja) | Robot system, robot system control method, article manufacturing method using robot system, control device, operation device, operation device control method, imaging device, imaging device control method, control program, and recording medium | |
JP2006224291A (ja) | Robot system | |
WO2023189077A1 (fr) | Control device and imaging device | |
CA2231095A1 (fr) | Video camera with improved zoom capability | |
US11192254B2 (en) | Robot system and adjustment method therefor | |
WO2023145698A1 (fr) | Camera device and image processing method | |
JP2005205519A (ja) | Robot hand device | |
JP5223683B2 (ja) | Workpiece holding position/orientation measurement system and workpiece transfer system | |
JP2960733B2 (ja) | Image display method, image display device, and remote operation device | |
JP2009267681A (ja) | Shake correction device and optical device | |
TW202235234A (zh) | Control device and robot system | |
CN114728420A (zh) | Robot, robot system, and control method | |
JP5539157B2 (ja) | Imaging device and control method thereof | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23779107; Country of ref document: EP; Kind code of ref document: A1 |