WO2024024090A1 - Parts counting device and robot system - Google Patents

Parts counting device and robot system

Info

Publication number
WO2024024090A1
Authority
WO
WIPO (PCT)
Prior art keywords
parts
feature points
likelihood
component
counting device
Prior art date
Application number
PCT/JP2022/029305
Other languages
English (en)
Japanese (ja)
Inventor
航 有田
Original Assignee
ヤマハ発動機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ヤマハ発動機株式会社 filed Critical ヤマハ発動機株式会社
Priority to PCT/JP2022/029305
Publication of WO2024024090A1

Links

Images

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 - Sensing devices
    • B25J 19/04 - Viewing devices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06M - COUNTING MECHANISMS; COUNTING OF OBJECTS NOT OTHERWISE PROVIDED FOR
    • G06M 11/00 - Counting of objects distributed at random, e.g. on a surface
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/215 - Motion-based segmentation

Definitions

  • The present invention relates to a parts counting device and a robot system, and more particularly to a parts counting device and a robot system that count parts.
  • Patent Document 1 discloses a parts counting device that counts parts.
  • This parts counting device is configured to count parts based on a photographed image, taken by a photographing device, of parts arranged on a sheet. When parts on the sheet overlap, the device displays an image processed so that the portion where overlapping parts were detected can be identified. If parts overlap, they cannot be counted accurately, so the user visually checks the overlap on the displayed image, manually eliminates the overlap of the parts on the sheet, and has the parts counting device perform the counting process again.
  • The present invention was made to solve the above problem, and one object of the invention is to provide a parts counting device and a robot system that can count parts with high accuracy while saving the user's effort.
  • A parts counting device according to one aspect includes: a vibrating section that vibrates a housing section accommodating a plurality of parts; a photographing section that continuously photographs the parts in the housing section vibrated by the vibrating section; and an information processing section that, based on the photographed images sequentially captured by the photographing section, tracks the parts moved by the vibration and counts the parts in the housing section. Note that in this specification, vibration is a broad concept that includes shaking.
  • As described above, the parts counting device is provided with a vibrating section that vibrates the housing section accommodating the plurality of parts. The parts can thereby be moved by vibrating the housing section, so the user does not need to manually eliminate overlapping parts, and the user's effort is saved. In addition, the parts can be counted while a situation favorable to counting accuracy (a situation in which the parts are not placed close to each other) is created, so the parts can be counted with high accuracy. As a result, the parts can be counted accurately while saving the user's effort.
  • Preferably, the information processing section is configured to create a likelihood map of the parts included in the photographed images based on the photographed images sequentially captured by the photographing section, and to track the parts based on the created likelihood map. With this configuration, the parts can be recognized with high accuracy based on the likelihood map, and therefore can be tracked with high accuracy.
  • Preferably, the information processing section is configured to estimate the position and orientation of a part based on the created likelihood map, to extract feature points of the part based on the estimated position and orientation, and to track the part by tracking the extracted feature points.
  • Preferably, the information processing section is configured to set an extraction region for extracting the feature points based on the estimated position and orientation of the part, and to extract the feature points from within the extraction region. With this configuration, the feature points can be extracted from within a region where they are highly likely to exist, so the feature points of the part can be extracted with higher accuracy. As a result, the part can be tracked more accurately by tracking the more accurately extracted feature points.
  • Preferably, the information processing section is configured to obtain the degree of coincidence between the feature points extracted from within the extraction region and the feature points in a reference image, and to determine, among the feature points extracted from within the extraction region, those whose degree of coincidence is equal to or greater than a threshold value as the feature points to be tracked. With this configuration, by tracking feature points with a high degree of coincidence, the parts can be tracked with even higher accuracy.
  • Preferably, the information processing section is configured to count the parts in the housing section by counting the parts whose feature points are tracked.
  • Preferably, the information processing section is configured to count the parts for each type of part by extracting the feature points of the parts for each type and counting, for each type, the parts whose feature points are tracked. With this configuration, even when a plurality of types of parts are present in the housing section, the parts can be counted accurately for each type.
  • Preferably, the information processing section is configured to compare, for each part region, the likelihood of the currently created likelihood map with the likelihoods of the likelihood maps created a predetermined number of times before; for a part region in which the currently created likelihood map has the maximum likelihood, the currently created likelihood map is used to extract the feature points, and for a part region in which the currently created likelihood map does not have the maximum likelihood, a likelihood map having a higher likelihood than the currently created one, among those created a predetermined number of times before, is used to extract the feature points. With this configuration, the extracted feature points can always be tracked in a state of high likelihood.
  • Preferably, the information processing section is configured to track the parts using a machine learning model generated by machine learning. With this configuration, the parts can be tracked easily and accurately using the machine learning model.
  • The information processing section is preferably configured to notify the user when the counted number of parts does not satisfy a predetermined number.
  • Preferably, the vibrating section includes a vibrator that vibrates the housing section in the horizontal direction. With this configuration, the parts can be easily moved by vibrating them horizontally, so a situation favorable to counting accuracy (a situation in which the parts are not placed close to each other) can easily be created.
  • A robot system according to another aspect includes a robot that transfers parts out of a housing section or transfers parts into the housing section, and a parts counting device that counts the parts in the housing section. The parts counting device includes a vibrating section that vibrates the housing section, a photographing section that continuously photographs the parts in the housing section vibrated by the vibrating section, and an information processing section that, based on the photographed images sequentially captured by the photographing section, tracks the parts moved by the vibration and counts the parts in the housing section.
  • As described above, the robot system is provided with a vibrating section that vibrates the housing section accommodating the plurality of parts. The parts can thereby be moved by vibrating the housing section, so the user does not need to manually eliminate overlapping parts, and the user's effort is saved. In addition, the parts can be counted while a situation favorable to counting accuracy (a situation in which the parts are not placed close to each other) is created, so the parts can be counted with high accuracy. As a result, the parts can be counted with high accuracy while saving the user's effort.
  • FIG. 1 is a schematic diagram showing a parts counting device according to one embodiment.
  • FIG. 2 is a diagram for explaining the parts counting process of the parts counting device according to the embodiment.
  • FIG. 3 is a diagram for explaining the feature point extraction process in the parts counting process according to the embodiment.
  • FIG. 4 is a diagram for explaining a machine learning model used in the feature point extraction process according to the embodiment.
  • FIG. 5 is a flowchart for explaining the control process related to parts counting according to the embodiment.
  • FIG. 6 is a flowchart for explaining the image processing of captured images according to the embodiment.
  • FIG. 7 is a schematic diagram showing a configuration example of a robot system including the parts counting device according to the embodiment.
  • FIG. 8 is a flowchart for explaining the control process on the photographing section side related to parts counting according to a modification of the embodiment.
  • FIG. 9 is a flowchart for explaining the control process on the information processing section side related to parts counting according to the modification of the embodiment.
  • As shown in FIG. 1, the parts counting device 100 is a device that counts a plurality of parts 1 accommodated in a housing section 2.
  • The housing section 2 is, for example, a tray.
  • The part 1 is, for example, a screw or a washer.
  • Housing sections 2 containing parts 1 are supplied, for example, for the assembly of industrial products.
  • The parts counting device 100 is used to count the parts 1 in the housing section 2 before the housing section 2 is supplied to an industrial-product assembly line, thereby inspecting whether the correct number of parts 1 is stored in the housing section 2. The parts counting device 100 can therefore also be regarded as an inspection device.
  • The parts counting device 100 includes a mounting section 10, a vibrating section 20, a photographing section 30, and a control section 40.
  • The mounting section 10 is configured so that the housing section 2 can be placed on it; specifically, the mounting section 10 is a table on which the housing section 2 can be placed. The mounting section 10 is provided with the vibrating section 20.
  • The vibrating section 20 is configured to vibrate the housing section 2.
  • The vibrating section 20 includes a first vibrator 21, a second vibrator 22, and a diaphragm 23.
  • The first vibrator 21 is configured to vibrate the housing section 2 placed on the mounting section 10 in the horizontal direction.
  • Specifically, the first vibrator 21 is configured to vibrate the housing section 2 placed on the mounting section 10 in the front-rear direction and the left-right direction (two directions substantially orthogonal to each other in a horizontal plane).
  • The second vibrator 22 is configured to vibrate the housing section 2 placed on the mounting section 10 in the vertical direction.
  • The diaphragm 23 is vibrated when either the first vibrator 21 or the second vibrator 22 vibrates. When the diaphragm 23 is vibrated, the parts 1 in the housing section 2 are vibrated.
  • The first vibrator 21 is an example of the "vibrator" in the claims.
  • The photographing section 30 is a camera that continuously photographs the parts 1 in the housing section 2 vibrated by the vibrating section 20.
  • The photographing section 30 is configured to capture a moving image, or continuous still images equivalent to a moving image, by photographing at a predetermined frame rate.
  • The photographing section 30 is arranged above the mounting section 10 (that is, above the parts 1 that are the subject). The photographing section 30 is provided with an illumination section 31 that irradiates the parts 1, which are the subject, with illumination light during photographing.
  • The control section 40 is configured to control the operation of the parts counting device 100.
  • The control section 40 includes an information processing section 41, a device control section 42, and a storage section 43.
  • The information processing section 41 includes a processor such as a CPU (central processing unit) and processes various kinds of information.
  • The information processing section 41 is configured to acquire the photographed images 50 (see FIG. 2) of the parts 1 captured by the photographing section 30, and to perform the processing for counting the parts 1 in the housing section 2 based on the acquired photographed images 50.
  • The device control section 42 includes a processor such as a CPU and controls the operations of the vibrating section 20 and the photographing section 30.
  • The storage section 43 includes a rewritable nonvolatile memory such as a flash memory and is configured to be able to store various kinds of information.
  • The device control section 42 is configured to move the parts 1 in the housing section 2 by vibrating the housing section 2 with the vibrating section 20, and to continuously photograph the parts 1 in the housing section 2 with the photographing section 30.
  • The information processing section 41 is configured to acquire the photographed images 50 from the photographing section 30 and to count the parts 1 in the housing section 2 based on the acquired photographed images 50.
  • Specifically, the information processing section 41 is configured to track the parts 1 that move due to the vibration, based on the photographed images 50 sequentially captured by the photographing section 30, and to count the parts 1 in the housing section 2.
  • The information processing section 41 is configured to create a likelihood map 60 of the parts 1 included in the photographed image 50, based on the photographed images 50 sequentially captured by the photographing section 30.
  • The likelihood map 60 is a map representing the likelihood distribution of the parts 1 in the photographed image 50.
  • The information processing section 41 is configured to create the likelihood map 60 by calculating, using template matching or feature extraction, a likelihood expressing the degree to which the region containing each pixel of the photographed image 50 corresponds to a part 1. Usually, the likelihood is calculated to be low when parts 1 are adjacent to each other, and high when they are not. The information processing section 41 is further configured to track the parts 1 based on the created likelihood map 60.
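  • The publication does not give an implementation, but as a minimal illustration, a likelihood map of the kind described above could be built by normalized template matching. The following Python sketch assumes OpenCV; the function name create_likelihood_map and the padding convention are illustrative only.

```python
import cv2
import numpy as np

def create_likelihood_map(frame_gray, template_gray):
    """Build a likelihood map for one part type by normalized template matching.
    Values in [0, 1] indicate how strongly each position resembles the template;
    scores[y, x] corresponds to the template placed with its top-left at (y, x)."""
    scores = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    scores = np.clip(scores, 0.0, 1.0)           # drop negative correlations
    likelihood = np.zeros(frame_gray.shape, dtype=np.float32)
    h, w = scores.shape
    likelihood[:h, :w] = scores                  # pad to the frame size for convenience
    return likelihood
```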
  • The information processing section 41 is configured to estimate the position and orientation of each part 1 based on the created likelihood map 60, to extract feature points 1a of the part 1 based on the estimated position and orientation, and to track the part 1 by tracking the extracted feature points 1a.
  • Specifically, the information processing section 41 acquires, based on the created likelihood map 60, a high-likelihood region 61 whose likelihood is equal to or higher than a threshold value (for example, 90% or higher), and estimates the center position 62 of the acquired high-likelihood region 61 as the center position of the part 1.
  • Further, as shown in FIG. 2, the information processing section 41 is configured to estimate the orientation of the part 1 based on the shape of the acquired high-likelihood region 61. For example, when the part 1 has an elongated shape such as a screw, the information processing section 41 estimates the orientation of the part 1 on the assumption that the longitudinal direction of the high-likelihood region 61 is the longitudinal direction of the part 1.
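  • As one possible reading of this step (an assumed realization, not code from the publication), the sketch below thresholds the likelihood map at 90%, takes each connected high-likelihood region, and estimates the center as the region centroid and the orientation as the region's principal axis.

```python
import cv2
import numpy as np

def estimate_poses(likelihood_map, threshold=0.9):
    """Return (center, angle_deg) for every connected region whose likelihood is
    at or above the threshold (e.g. 90%), treating the region's principal axis
    as the part's longitudinal direction."""
    mask = (likelihood_map >= threshold).astype(np.uint8)
    num_labels, labels = cv2.connectedComponents(mask)
    poses = []
    for label in range(1, num_labels):            # label 0 is the background
        ys, xs = np.nonzero(labels == label)
        pts = np.column_stack([xs, ys]).astype(np.float32)
        center = pts.mean(axis=0)                 # estimated center position of the part
        _, _, vt = np.linalg.svd(pts - center, full_matrices=False)
        angle = float(np.degrees(np.arctan2(vt[0, 1], vt[0, 0])))
        poses.append(((float(center[0]), float(center[1])), angle))
    return poses
```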
  • The information processing section 41 is configured to set an extraction region 70 for extracting the feature points 1a based on the estimated position and orientation of the part 1, and to extract the feature points 1a from within the extraction region 70.
  • The extraction region 70 is a region in which the feature points 1a of the part 1 are highly likely to exist.
  • Specifically, the extraction region 70 is a region representing the outer shape (contour) of the part 1.
  • Information representing the outer shape of the part 1 is stored in the storage section 43 in advance.
  • The information processing section 41 sets the extraction region 70 on the assumption that the part 1 is placed at the estimated position in the estimated orientation, based on the information representing the outer shape of the part 1 stored in the storage section 43 and on the estimated position and orientation of the part 1.
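  • A minimal sketch of setting the extraction region 70, assuming the stored outer-shape information is a set of contour points in part-local coordinates with the origin at the part center; the function name and data layout are illustrative only.

```python
import numpy as np

def set_extraction_region(outline_pts, center, angle_deg):
    """Place the stored outer-shape (contour) points of the part at the estimated
    position and orientation; the resulting polygon is the extraction region."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(outline_pts, dtype=np.float64) @ rot.T + np.asarray(center)
```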
  • The information processing section 41 is configured to obtain the degree of coincidence between the feature points 1a extracted from within the extraction region 70 and the feature points 1a in a reference image 80, and to determine, among the feature points 1a extracted from within the extraction region 70, those whose degree of coincidence is equal to or higher than a threshold value as the feature points 1a to be tracked (the final feature points 1a).
  • The reference image 80 is an image representing the part 1, and the feature points 1a are set in the reference image 80 in advance.
  • As the feature points 1a of the reference image 80, feature points near the outer shape, such as corners and edges, are set, for example. This makes it possible to set feature points 1a that hold information, such as thickness and length, that is highly related to the type of the part 1.
  • One or more reference images 80 are stored in the storage section 43 in advance.
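  • The publication does not specify how the degree of coincidence is computed. As one plausible sketch, ORB descriptors could be compared between the extracted feature points and the reference-image feature points, keeping only points whose similarity reaches the threshold; the function name, the 0.7 threshold, and the similarity formula are assumptions.

```python
import cv2

def select_feature_points(region_img, region_kps, ref_img, ref_kps, min_match=0.7):
    """Keep only the feature points extracted from the extraction region whose ORB
    descriptor is sufficiently similar to some feature point of the reference image."""
    orb = cv2.ORB_create()
    region_kps, desc_region = orb.compute(region_img, region_kps)
    ref_kps, desc_ref = orb.compute(ref_img, ref_kps)
    if desc_region is None or desc_ref is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    kept = []
    for kp, desc in zip(region_kps, desc_region):
        matches = matcher.match(desc.reshape(1, -1), desc_ref)
        if not matches:
            continue
        # ORB descriptors are 256 bits; turn the Hamming distance into a 0..1 similarity.
        similarity = 1.0 - matches[0].distance / 256.0
        if similarity >= min_match:
            kept.append(kp)
    return kept
```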
  • The information processing section 41 is configured to count the parts 1 in the housing section 2 by counting the parts 1 whose feature points 1a are tracked.
  • In the illustrated example, the number of parts 1 whose feature points 1a are tracked (that is, of detected parts 1) is one, so the counted number of parts 1 is one.
  • The photographing section 30 continuously photographs the parts 1 in the housing section 2, and the photographed images 50 are sequentially acquired.
  • The information processing section 41 is configured to continue the parts counting process described above until the counted number of parts 1 reaches a predetermined number (the correct number to be accommodated in the housing section 2).
  • The information processing section 41 is configured to notify the user when the counted number of parts 1 does not satisfy the predetermined number. Specifically, the user is notified if the counted number of parts 1 is still less than the predetermined number after a predetermined time has elapsed, or if the counted number of parts 1 exceeds the predetermined number.
  • The information processing section 41 is configured to notify the user that the counted number of parts 1 does not satisfy the predetermined number by displaying the notification on a display section. For example, the information processing section 41 displays the notification on the display section of the parts counting device 100; when the parts counting device 100 does not include a display section, the notification is displayed on an external display section.
  • The information processing section 41 is configured to compare, for each part region (a region with a high likelihood, such as the high-likelihood region 61), the likelihood (accuracy) of the likelihood map 60 created this time (the latest likelihood map 60) with the likelihoods of the likelihood maps 60 created a predetermined number of times before. Among the compared part regions, for a part region in which the likelihood of the likelihood map 60 created this time is the maximum, the likelihood map 60 created this time is used for extracting the feature points 1a; for a part region in which the likelihood of the likelihood map 60 created this time is not the maximum, a likelihood map 60 having a higher likelihood than the one created this time, among the likelihood maps 60 created a predetermined number of times before, is used for extracting the feature points 1a.
  • Specifically, for such a part region, the information processing section 41 is configured to use, for extracting the feature points 1a, the likelihood map 60 having the highest likelihood among the likelihood maps 60 created a predetermined number of times before.
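  • A small sketch of the per-region choice of likelihood map described above; the helper name and the use of the region maximum as the "likelihood of the region" are assumptions.

```python
import numpy as np

def pick_map_for_region(region_mask, current_map, previous_maps):
    """Choose the likelihood map used for feature-point extraction in one part
    region: the current map if its likelihood there is at least as high as in
    the maps of the last few frames, otherwise the best of those earlier maps."""
    if not previous_maps:
        return current_map
    score = lambda m: float(np.max(m[region_mask]))   # likelihood of the region
    best_prev = max(previous_maps, key=score)
    return current_map if score(current_map) >= score(best_prev) else best_prev
```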
  • FIGS. 2 and 3 illustrate an example in which one type of part 1 is accommodated in the housing section 2. When a plurality of types of parts 1 are accommodated in the housing section 2, the information processing section 41 is configured to count the parts 1 for each type of part 1 by extracting the feature points 1a of the parts 1 for each type and counting, for each type, the parts 1 whose feature points 1a are tracked.
  • In this case, the information processing section 41 is configured either to create a separate likelihood map 60 for each type of part 1 or to create one likelihood map 60 that includes likelihood information for each type of part 1. Although a detailed explanation is omitted, the information processing section 41 extracts the feature points 1a of the parts 1 for each type, based on the created likelihood map(s) 60, in the same manner as described above.
  • The information processing section 41 is configured to track the parts 1 using a machine learning model 90 generated by machine learning. Specifically, as shown in FIG. 4a, the information processing section 41 is configured to extract the feature points 1a using the trained machine learning model 90. More specifically, the machine learning model 90 is configured to take as input an image within the extraction region 70 of the photographed image 50 and to output feature point data 91, which is two-dimensional data representing the feature points 1a within the extraction region 70. The machine learning model 90 is configured by, for example, a convolutional network. After setting the extraction region 70, the information processing section 41 cuts out the image within the extraction region 70 of the photographed image 50 and inputs it to the machine learning model 90. The information processing section 41 then extracts the feature points 1a within the extraction region 70 by acquiring the feature point data 91 from the machine learning model 90.
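  • The following PyTorch-style sketch illustrates the inference path described above: the extraction region 70 is cropped from the photographed image 50, fed to a trained model, and the returned two-dimensional feature-point map is converted into pixel coordinates. The model interface, grayscale input, and peak threshold are assumptions, not details from the publication.

```python
import numpy as np
import torch

def extract_feature_points(model, photo_gray, region_bbox, peak_thresh=0.5):
    """Crop the extraction region from the photographed image, run the trained
    model, and convert the returned 2-D feature-point map into pixel coordinates
    in the full image."""
    x0, y0, x1, y1 = region_bbox
    crop = photo_gray[y0:y1, x0:x1]
    inp = torch.from_numpy(crop).float().unsqueeze(0).unsqueeze(0) / 255.0   # 1x1xHxW
    with torch.no_grad():
        heatmap = model(inp)[0, 0].numpy()        # 2-D feature-point data for the crop
    ys, xs = np.nonzero(heatmap >= peak_thresh)
    return [(x0 + int(x), y0 + int(y)) for y, x in zip(ys, xs)]
```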
  • FIG. 4b shows the learning procedure of the machine learning model 90.
  • First, a learning image 92 is input to the machine learning model 90, and the machine learning model 90 is caused to output feature point data 91.
  • Next, feature points are extracted based on the feature point data 91, and tracking is performed using the extracted feature points.
  • A loss value representing the tracking accuracy is then calculated based on the result of the tracking.
  • The parameters of the machine learning model 90 are updated by back-propagating the loss value to the parameters of the machine learning model 90.
  • The trained machine learning model 90 is stored in the storage section 43 in advance and used.
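  • The publication describes the loss only as a value representing tracking accuracy that is assigned back to the parameters. A common way to realize such an update, shown below purely as an assumption, is to compute a loss between the predicted feature-point map and a target map derived from the tracking result and back-propagate it.

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, learning_image, target_map):
    """One update of the model: predict the feature-point data for the learning
    image, compute a loss against a target map derived from the tracking result,
    and back-propagate the loss to the model parameters."""
    model.train()
    pred = model(learning_image)                      # predicted 2-D feature-point data
    loss = nn.functional.mse_loss(pred, target_map)   # loss representing tracking accuracy
    optimizer.zero_grad()
    loss.backward()                                   # assign the loss back to the parameters
    optimizer.step()
    return float(loss)
```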
  • In step S1, the first vibrator 21 and the second vibrator 22 of the vibrating section 20 are activated.
  • In step S2, the parts 1 in the housing section 2 are photographed by the photographing section 30 while the housing section 2 is vibrated by the vibrating section 20. As a result, a photographed image 50 in which the parts 1 in the housing section 2 are captured is obtained.
  • In step S3, image processing of the photographed image 50 is performed. Details of the image processing of the photographed image 50 are described later.
  • In step S4, it is determined whether a predetermined number of parts have been detected (counted). If it is determined that the predetermined number of parts have been detected, the process advances to step S5.
  • In step S5, the first vibrator 21 and the second vibrator 22 of the vibrating section 20 are stopped, and the control process ends.
  • If it is determined in step S4 that the predetermined number of parts have not been detected, the process proceeds to step S6.
  • In step S6, it is determined whether an error condition is satisfied.
  • The error conditions are that the counted number of parts 1 is still less than the predetermined number after a predetermined time has elapsed, and that the counted number of parts 1 is greater than the predetermined number. If it is determined that an error condition is satisfied, the process advances to step S7.
  • In step S7, the user is notified of an error indicating that the counted number of parts 1 does not satisfy the predetermined number. After that, the first vibrator 21 and the second vibrator 22 of the vibrating section 20 are stopped, and the control process ends.
  • If it is determined in step S6 that no error condition is satisfied, the process returns to step S2. The processes of steps S2 to S6 are then repeated until the predetermined number of parts are detected or an error condition is satisfied.
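  • Purely as an illustration of the control flow of steps S1 to S7 (the vibrator, camera, and processor objects and the timeout value are hypothetical), the loop could look like this:

```python
import time

def count_parts(vibrator, camera, processor, required_count, timeout_s=60.0):
    """Control flow corresponding to steps S1-S7: vibrate the housing section,
    photograph, process, and stop once the required number of parts is counted
    or an error condition is met."""
    vibrator.start()                                    # S1: activate the vibrators
    start = time.monotonic()
    try:
        while True:
            image = camera.capture()                    # S2: photograph the vibrated parts
            count = processor.process(image)            # S3: image processing / counting
            if count == required_count:                 # S4: predetermined number detected?
                return count                            # S5 (vibrators stopped in finally)
            timed_out = time.monotonic() - start > timeout_s
            if count > required_count or timed_out:     # S6: error conditions
                raise RuntimeError(                     # S7: notify the user of the error
                    f"counted {count} parts, expected {required_count}")
    finally:
        vibrator.stop()
```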
  • In the image processing of the photographed image 50, processing for emphasizing the parts 1, such as binarization and edge enhancement, may be performed as appropriate.
  • In step S11, a likelihood map 60 of the parts 1 is created based on the photographed image 50.
  • In step S12, the positions and orientations of the parts 1 are estimated based on the likelihood map 60. When a plurality of types of parts 1 are accommodated in the housing section 2, the type of each part 1 is also estimated by creating a separate likelihood map 60 for each type of part 1, or by creating one likelihood map 60 that includes likelihood information for each type of part 1.
  • In step S13, the likelihood of the likelihood map 60 created this time is compared, for each part region, with the likelihoods of the likelihood maps 60 created a predetermined number of times before, and it is determined for each part region whether the likelihood of the likelihood map 60 created this time is the maximum.
  • For a part region in which the likelihood of the likelihood map 60 created this time is determined to be the maximum, the process of step S14 is performed.
  • In step S14, the feature points 1a of the part 1 are extracted using the likelihood map 60 created this time. The stored feature points 1a are updated to the extracted feature points 1a of the part 1 and stored in the storage section 43.
  • In step S15, the extracted feature points 1a of the part 1 are tracked, whereby the part 1 is tracked. After that, the process advances to step S4.
  • For a part region in which it is determined in step S13 that the likelihood of the likelihood map 60 created this time is not the maximum, the process of step S16 is performed.
  • In step S16, the feature points 1a are extracted using a likelihood map 60 that has a higher likelihood than the one created this time, among the likelihood maps 60 created a predetermined number of times before.
  • Then, the extracted feature points 1a of the part 1 are tracked, whereby the part 1 is tracked, and the process advances to step S4.
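  • Reusing the sketch functions defined earlier, one pass of the image-processing flow of steps S11 to S16 could be organized as below; the fixed-window part-region mask and the caller-supplied callables extract_points and track_points are simplifications introduced for illustration, not details from the publication.

```python
import numpy as np

def process_image(photo, template, recent_maps, extract_points, track_points):
    """One pass of the image-processing flow (steps S11-S16), reusing the sketch
    functions above. `extract_points(map, pose)` and `track_points(points)` stand
    in for the extraction and tracking steps; `recent_maps` is a list of likelihood
    maps from the last few frames."""
    current_map = create_likelihood_map(photo, template)       # S11: likelihood map
    poses = estimate_poses(current_map)                        # S12: position / orientation
    for pose in poses:
        (cx, cy), _ = pose
        # Simplified part region: a fixed window around the estimated center.
        region_mask = np.zeros(current_map.shape, dtype=bool)
        region_mask[max(int(cy) - 20, 0):int(cy) + 20,
                    max(int(cx) - 20, 0):int(cx) + 20] = True
        best_map = pick_map_for_region(region_mask, current_map, recent_maps)  # S13
        points = extract_points(best_map, pose)                # S14 or S16
        track_points(points)                                   # S15: track the part
    recent_maps.append(current_map)
    return len(poses)
```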
  • The robot system 200 includes the parts counting device 100, a robot 110, and a robot control section 120.
  • The robot 110 is a picking robot that transfers the parts 1 out of the housing section 2 or transfers the parts 1 into the housing section 2.
  • The robot 110 is, for example, a vertically articulated robot.
  • The robot 110 includes an articulated arm 111 and a hand 112 that is attached to the tip of the arm 111 and holds the part 1.
  • The robot control section 120 is configured to control the operation of the robot 110.
  • The parts counting device 100 counts the parts 1 in the housing section 2, for example, in order to check whether the number of parts 1 transferred into the housing section 2 by the robot 110 is correct.
  • Alternatively, the parts counting device 100 counts the parts 1 in the housing section 2 in order to check whether the number of parts 1 in the housing section 2 before being transferred out by the robot 110 is correct. The parts counting process performed by the parts counting device 100 is as described above, so a detailed explanation is not repeated.
  • After the parts 1 in the housing section 2 have been counted by the parts counting device 100 and the number of parts 1 in the housing section 2 has been confirmed to be correct, the photographing section 30 and the information processing section 41 of the parts counting device 100 operate as follows. That is, the photographing section 30 is configured to continue photographing the parts 1 even after the vibrating section 20 stops vibrating the housing section 2.
  • The information processing section 41 is configured to recognize the posture of each part 1 that is stationary in the housing section 2, based on the images taken by the photographing section 30 after the vibrating section 20 stops vibrating the housing section 2.
  • The robot control section 120 is configured to determine, based on the posture of the part 1 recognized by the information processing section 41, the portion of the part 1 to be held by the hand 112 of the robot 110. The photographing section 30 and the information processing section 41 of the parts counting device 100 can thereby be effectively utilized to determine the portion of the part 1 to be held by the hand 112 of the robot 110.
  • Further, the information processing section 41 recognizes the posture of the stationary parts 1 in the housing section 2 for each type of part 1, based on the images taken by the photographing section 30 after the vibrating section 20 stops vibrating the housing section 2. The robot control section 120 is then configured to determine the portion of the part 1 to be held by the hand 112 of the robot 110 according to the type of the part 1, based on the posture of each type of part 1 recognized by the information processing section 41. As a result, the photographing section 30 and the information processing section 41 of the parts counting device 100 can be effectively utilized to appropriately determine, according to the type of part 1, the portion of the part 1 to be held by the hand 112 of the robot 110.
  • As described above, in this embodiment, the vibrating section 20 that vibrates the housing section 2 accommodating the plurality of parts 1 is provided. The parts 1 can thereby be moved by causing the vibrating section 20 to vibrate them, so the user does not need to manually eliminate the overlap of the parts 1, and the user's effort is saved.
  • In this embodiment, the information processing section 41 is configured to create the likelihood map 60 of the parts 1 included in the photographed images 50 based on the photographed images 50, and to track the parts 1 based on the created likelihood map 60. The parts 1 can thereby be recognized with high accuracy based on the likelihood map 60, and therefore can be tracked with high accuracy.
  • In this embodiment, the information processing section 41 is configured to estimate the position and orientation of the part 1 based on the created likelihood map 60, to extract the feature points 1a of the part 1 based on the estimated position and orientation, and to track the part 1 by tracking the extracted feature points 1a. The position and orientation of the part 1 can thereby be estimated with high accuracy based on the likelihood map 60, so the feature points 1a of the part 1 can be extracted with high accuracy, and by tracking the accurately extracted feature points 1a, the part 1 can be tracked accurately.
  • In this embodiment, the information processing section 41 is configured to set the extraction region 70 for extracting the feature points 1a based on the estimated position and orientation of the part 1, and to extract the feature points 1a from within the extraction region 70. The feature points 1a can thereby be extracted from within the extraction region 70, where they are highly likely to exist, so the feature points 1a of the part 1 can be extracted with higher accuracy, and by tracking the more accurately extracted feature points 1a, the part 1 can be tracked more accurately.
  • In this embodiment, the information processing section 41 is configured to obtain the degree of coincidence between the feature points 1a extracted from within the extraction region 70 and the feature points 1a in the reference image 80, and to determine, among the feature points 1a extracted from within the extraction region 70, those whose degree of coincidence is equal to or higher than the threshold value as the feature points 1a to be tracked. By tracking feature points 1a with a high degree of coincidence, the part 1 can thereby be tracked with even higher accuracy.
  • In this embodiment, the information processing section 41 is configured to count the parts 1 in the housing section 2 by counting the parts 1 whose feature points 1a are tracked. By counting the parts 1 whose feature points 1a are tracked, the same part 1 is not counted twice, so the parts 1 can be counted with higher accuracy.
  • In this embodiment, the information processing section 41 is configured to count the parts 1 for each type of part 1 by extracting the feature points 1a of the parts 1 for each type and counting, for each type, the parts 1 whose feature points 1a are tracked. Even when a plurality of types of parts 1 exist in the housing section 2, the parts 1 can thereby be counted accurately for each type.
  • In this embodiment, the information processing section 41 is configured to compare, for each part region, the likelihood of the likelihood map 60 created this time with the likelihoods of the likelihood maps 60 created a predetermined number of times before; for a part region in which the likelihood of the likelihood map 60 created this time is the maximum, the likelihood map 60 created this time is used to extract the feature points 1a, and for a part region in which it is not the maximum, a likelihood map 60 having a higher likelihood than the one created this time, among the likelihood maps 60 created a predetermined number of times before, is used to extract the feature points 1a. The extracted feature points 1a can thereby always be tracked in a state of high likelihood.
  • In this embodiment, the information processing section 41 is configured to track the parts 1 using the machine learning model 90 generated by machine learning. The parts 1 can thereby be tracked easily and accurately using the machine learning model 90.
  • In this embodiment, the information processing section 41 is configured to notify the user when the counted number of parts 1 does not satisfy the predetermined number. If the counted number of parts 1 does not satisfy the predetermined number (if it is greater or less than the predetermined number), the user can thereby easily learn of this and can easily take measures so that the number of parts 1 satisfies the predetermined number.
  • In this embodiment, the vibrating section 20 includes the first vibrator 21 that vibrates the housing section 2 in the horizontal direction. The parts 1 can thereby be easily moved by vibrating them in the horizontal direction, so a situation favorable to counting accuracy (a situation in which the parts 1 are not placed close to each other) can easily be created.
  • In the above embodiment, an example was shown in which the parts counting device is applied to a robot system, but the present invention is not limited to this.
  • The parts counting device may be applied to systems other than robot systems.
  • In the above embodiment, an example was shown in which the parts counting device is an inspection device that inspects the number of parts by counting them, but the present invention is not limited to this.
  • The parts counting device may be an inspection device further provided with inspection functions other than the function of inspecting the number of parts.
  • In the above embodiment, an example was shown in which feature points extracted based on a likelihood map are tracked, but the present invention is not limited to this.
  • Feature points may be extracted by image processing that does not use the likelihood map, and the extracted feature points may be tracked.
  • In the above embodiment, an example was shown in which an extraction region for extracting feature points is set, but the present invention is not limited to this. In the present invention, it is not necessary to set an extraction region for extracting feature points.
  • In the above embodiment, an example was shown in which the likelihood of the likelihood map created this time and the likelihoods of the likelihood maps created a predetermined number of times before are compared for each part region, but the present invention is not limited to this.
  • In the above embodiment, an example was shown in which feature points are extracted using a machine learning model, but the present invention is not limited to this.
  • Feature points may be extracted using rule-based image processing instead of a machine learning model.
  • In the above embodiment, an example was shown in which a machine learning model trained to extract feature points is used to extract feature points and track parts, but the present invention is not limited to this.
  • A machine learning model trained to create a likelihood map may be used to create the likelihood map and track parts.
  • In the above embodiment, an example was shown in which the vibrating section includes both the first vibrator that vibrates the housing section in the horizontal direction and the second vibrator that vibrates the housing section in the vertical direction, but the present invention is not limited to this.
  • The vibrating section may include only one of the first vibrator that vibrates the housing section in the horizontal direction and the second vibrator that vibrates the housing section in the vertical direction.
  • In the above embodiment, for convenience of explanation, the processing operation of the control section was explained using a flow-driven flowchart in which processing is performed in order along the processing flow, but the present invention is not limited to this.
  • The processing operation of the control section may be performed by event-driven processing in which processing is executed on an event-by-event basis. In this case, the processing may be completely event-driven, or a combination of event-driven and flow-driven.
  • For example, the photographing process and the image processing may be executed in parallel in separate flows, as shown in the modified examples of FIGS. 8 and 9.
  • In FIGS. 8 and 9, the same processes as those in the flowchart of FIG. 5 are denoted by the same reference numerals, and detailed explanation thereof is omitted.
  • In step S101, it is determined whether all of the captured images 50 stored in the storage section 43 have been subjected to image processing. If not all of them have been processed, the process of step S101 is repeated. If all of the captured images 50 stored in the storage section 43 have been processed, the process advances to step S102. In step S102, the photographing section 30 photographs the parts 1 in the housing section 2, and the captured image 50 is stored in the storage section 43. Then, in step S103, it is determined whether photographing has been performed a predetermined number of times. If it has not, the process of step S102 is repeated; if it has, the process advances to step S4.
  • In step S111, it is determined whether there is any captured image 50 stored in the storage section 43 that has not yet been subjected to image processing. If there is no unprocessed image among the captured images 50 stored in the storage section 43, the process of step S111 is repeated. If there is an unprocessed image, the process proceeds to step S3, and image processing is performed on the captured image 50 that has not yet been processed. The process then returns to step S111.
  • With this configuration, the photographing by the photographing section 30 and the image processing by the information processing section 41 can proceed in parallel.
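  • As an illustration of this modification (not code from the publication), the two flows can be modeled as a capture thread and a processing thread sharing a store of captured images; the camera and process_image objects, the burst size, and the use of an empty queue as the "all images processed" check are assumptions.

```python
import queue
import threading
import time

def run_parallel(camera, process_image, shots_per_burst=5, bursts=10):
    """Photographing and image processing run as separate flows sharing a store
    of captured images, so new images can be captured while earlier ones are
    still being processed."""
    store = queue.Queue()                 # stands in for captured images in the storage section

    def photographing_side():             # S101-S103
        for _ in range(bursts):
            while not store.empty():      # S101: wait until all stored images were processed
                time.sleep(0.01)
            for _ in range(shots_per_burst):          # S102-S103: photograph N times
                store.put(camera.capture())
        store.put(None)                   # sentinel: photographing finished

    def information_processing_side():    # S111 and S3
        while True:
            photo = store.get()           # S111: take an image not yet processed
            if photo is None:
                break
            process_image(photo)          # S3: image processing of the captured image

    threads = [threading.Thread(target=photographing_side),
               threading.Thread(target=information_processing_side)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```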

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

A parts counting device (100) includes: a vibrating section (20) that vibrates a housing section (2) accommodating a plurality of parts (1); a photographing section (30) that continuously photographs the parts in the housing section vibrated by the vibrating section; and an information processing section (41) that, on the basis of photographed images (50) sequentially captured by the photographing section, tracks the parts moved by the vibration and counts the number of parts in the housing section.
PCT/JP2022/029305 2022-07-29 2022-07-29 Parts counting device and robot system WO2024024090A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/029305 WO2024024090A1 (fr) 2022-07-29 2022-07-29 Parts counting device and robot system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/029305 WO2024024090A1 (fr) 2022-07-29 2022-07-29 Parts counting device and robot system

Publications (1)

Publication Number Publication Date
WO2024024090A1 true WO2024024090A1 (fr) 2024-02-01

Family

ID=89705906

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/029305 WO2024024090A1 (fr) 2022-07-29 2022-07-29 Parts counting device and robot system

Country Status (1)

Country Link
WO (1) WO2024024090A1 (fr)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07200770A (ja) * Tablet inspection system
JP2002173103A (ja) * Method and apparatus for counting and filling solid food items
JP2013015924A (ja) * Medicine counting device and method
JP2020016919A (ja) * Counting device, counting method, and program
WO2020183837A1 (fr) * Counting system, counting device, machine learning device, counting method, component arrangement method, and program
WO2021186494A1 (fr) * Object tracking device and method, and recording medium
JP2022096438A (ja) * Object detection device, object detection method, and object detection program


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22953178

Country of ref document: EP

Kind code of ref document: A1