JP2020192677A - Identification transfer device and identification transfer method of target workpiece - Google Patents
- Publication number
- JP2020192677A (application JP2020088802A)
- Authority
- JP
- Japan
- Legal status
- Granted
Abstract
Description
The present invention relates to an identification transfer device and an identification transfer method that identify a target work (a bag body in which food such as a seasoning or an ingredient is enclosed) and transport it to a predetermined position.
The present invention relates to an identification transfer device that captures a target work (bag body) enclosing food such as a seasoning or a cooking ingredient and places it into a transfer destination such as a container being conveyed on a conveyor. The bag enclosing the seasoning or cooking ingredient is generally made of a resin film or the like that is partially or wholly transparent, so that the contents can be seen, or of an opaque resin film or the like, through which the contents cannot be seen. Known seasonings enclosed in such bags include liquid seasonings such as soy sauce, sauce, dressing, dipping sauce (tare), and noodle soup base (tsuyu); solid or powdered seasonings such as chopped nori, chopped green onion, shichimi pepper, ground sesame, and pepper; and paste-like seasonings such as wasabi paste, grated daikon radish, grated ginger, mayonnaise, ketchup, and gochujang. Known cooking ingredients enclosed in such bags include boiled noodles such as udon and pasta, processed meat products such as chashu (braised pork), processed egg products such as seasoned eggs, and dried foods such as dried cut vegetables and dried shrimp.
Bags enclosing such seasonings and cooking ingredients are supplied, by type, loosely piled in container boxes or in deep- or shallow-bottomed trays on production lines for container-packed foods such as bento boxes, and are placed into containers on the line one after another, either by hand or by machine. There is little prior literature on identification transfer devices that transfer such loosely piled bags of seasonings or cooking ingredients, but the following documents relate to such devices, and the present applicant has also filed a related application.
Patent Document 1 discloses a picking device in which "(Problem) To improve the picking function for objects of indefinite shape and indefinite weight. (Solution) The device comprises a camera 2 and a control procedure (steps S105 to S120) for acquiring shape information of an object, a weight estimation unit 32 that estimates the weight of the object based on its shape information, a picking robot 5 that performs a picking operation on the object, and a work planning unit 33 that controls the picking operation of the picking robot 5 based on the weight estimated by the weight estimation unit 32, wherein the weight estimation unit 32 estimates the weight of the object by machine learning (for example, an artificial neural network) that has learned the correspondence between the input shape information of the object and the weight to be output."
Patent Document 2 discloses an automatic sorting and transfer device in which "(Problem) To provide an automatic sorting and transfer device capable of automatically sprinkling powdered or granular foodstuffs with high reproducibility. (Solution) The device comprises a rotating shaft hanging downward, a rotation drive mechanism that rotates the shaft about its axis, a bottom plate 14 provided at the lower end of the shaft, a lower cylinder 22 provided so as to surround the bottom plate and having an inner surface larger than the bottom plate in plan view, and an upper cylinder 21 provided above the bottom plate and having an inner surface smaller than the bottom plate in plan view, with a gap G provided between the lower end of the upper cylinder and the upper surface of the bottom plate."
Patent Document 3 discloses an image processing device in which "(Problem) To provide an image processing device and an image processing method capable of accurately obtaining the position and tilt information of a measurement object such as a plate material, regardless of imaging conditions such as the illumination conditions of illumination light or external light, or the state of cutting oil applied to the plate. (Solution) The image processing device 100 of the present invention comprises an illumination unit 6 that irradiates a measurement object 5 with illumination light, an imaging unit 12 that acquires a captured image of the entire measurement object 5 from a position facing the illumination unit 6 across the measurement object 5, a position measurement unit 2 that measures position measurement data within a range of a specific shape of the measurement object 5, and an image processing unit 4, wherein the image processing unit 4 includes a measurement position calculation unit 43 that calculates, from the captured image, the measurement range to be measured by the position measurement unit 2, and a tilt information extraction unit 44 that obtains tilt information of the measurement object from the position measurement data and three-dimensional reference data."
However, when placing target works (bags) enclosing food such as seasonings and ingredients into containers such as lunch boxes, it has been difficult to accurately recognize and capture loosely piled bags one by one and place them into the containers efficiently. For example, in a tray holding many loosely piled bags, the bags overlap to varying degrees and lie in disordered shapes, orientations, and inclinations; moreover, with transparent bags, a machine may be unable to identify the bag's shape at all.
An object of the present invention is therefore to provide a highly accurate identification transfer device and identification transfer method for a target work that, regardless of the target work's shape, color, surface pattern, and the like, accurately identify the target work among randomly arranged target works and transport it efficiently.
The present invention is an identification transfer device that identifies and transfers a target work in which food such as a seasoning or an ingredient is enclosed, comprising an imaging unit, a control unit, and an arm robot. The imaging unit images a plurality of randomly piled target works. The control unit has a capture-target-work detection function that detects the work to be captured from the information imaged by the imaging unit, and an arm robot control function that controls the arm robot so as to transfer the capture-target work. The capture-target-work detection function determines the work with the largest total value to be the capture-target work, through a step of determining a similarity from the information of the target works imaged by the imaging unit and the target-work information held by the control unit, and a step of adding to that similarity value a pre-scored distance score based on the distance from the arm robot to obtain the total value; the capture-target work is then transferred by the arm robot.
According to the present invention, target works are accurately detected among a plurality of randomly piled target works, and among the detected works the one close to the arm robot is determined to be the capture-target work, enabling efficient transfer.
The present invention is further characterized in that the target-work information the control unit uses to determine similarity is information obtained by artificial intelligence (AI) such as deep learning or machine learning, and in that at least one of the size, shape, color, and surface pattern of the target work, taken from the information imaged by the imaging unit, is used in the control unit's similarity determination.
According to the present invention, because the control unit acquires target-work information through artificial intelligence (AI) such as deep learning or machine learning, there is no restriction on the target work's size, shape, color, surface pattern, and so on; depending on what information is supplied to the control unit, target works of various desired forms can be identified, and the identification accuracy can be progressively improved.
Further, the control unit of the present invention has a center position detection function that detects the center position of the target work, calculates the distance between the center position of the target work and the imaging unit, and has the arm robot capture the target work at its center position.
According to the present invention, by applying learning based on artificial intelligence (AI) such as deep learning or machine learning to provide a function for detecting the center position of the target work, even relatively large target works or works of diverse shapes such as circular ones can be transferred without errors such as dropping after capture.
The control unit of the present invention also has an orientation and inclination determination function for the target work: it controls the arm robot to the angle best suited for capture according to the target work's orientation, and places the captured target work at the transfer destination in a fixed orientation.
According to the present invention, by applying learning based on artificial intelligence (AI) such as deep learning or machine learning to provide a function for determining the target work's orientation, captured target works can be placed in the transfer destination (here, a food container such as a lunch box) aligned in a desired orientation, which makes missed placements easy to spot and also contributes to the appearance of the product.
In addition, the inclination determination function controls the arm robot to the angle best suited for capture according to the target work's inclination, reducing transfer errors.
According to the present invention, a target work is identified among a plurality of randomly piled target works based on information obtained by artificial intelligence (AI), and is detected as the capture-target work with the distance from the transfer arm robot taken into account, so the operation can proceed efficiently; moreover, by applying deep learning or machine learning to the information needed for identification and transfer, such as the similarity used to identify the target work, the accuracy of the operation can be improved over time.
This efficiency and accuracy also contribute, in an environment of labor shortage, to improving working conditions and securing the workforce.
Specific embodiments to which the present invention is applied will be described in detail below with reference to the drawings.
(First Embodiment of the Present Invention)
FIG. 1 is a schematic view showing a transport line Hr in a food processing factory, showing in detail the identification transfer device 100 for target works on the line.
The identification transfer device 100 takes as its transfer destination the containers flowing from upstream to downstream on the belt conveyor of the transport line; the arm robot Z7 and a tray Z10, in which a plurality of target works are randomly piled, face each other across the belt conveyor, and an imaging unit 12 is arranged above the tray so that it can image the tray vertically downward (FIGS. 1 and 2).
The imaging unit 12 is networked with a control unit 46 that processes the image data it captures; on instructions from the control unit 46, the arm robot Z7 captures a target work and places it in a container, which is then sent on to downstream processes such as lid closing.
In this embodiment, the target work 5 is an ordinary bag body made of polyester, multilayer film, or the like, enclosing condiments such as noodle soup base or wasabi, or seasonings such as soy sauce or sauce; the transfer destination accommodating the target work 5 is a bento container or a transparent noodle container.
(Imaging Unit 12)
The imaging unit 12 in this embodiment is arranged above the tray Z10, in which a plurality of target works (bags) 5 are randomly piled, so that it can image the inside of the tray vertically downward; the captured image data is processed in the control unit and used for determination and for acquiring position information. The imaging unit 12 in this embodiment uses an industrial camera such as a 2D or 3D camera, or a depth camera capable of recognizing depth.
(Capture-Target-Work Detection Function in the Control Unit)
The target work detection function 47 in the control unit 46 of this embodiment identifies, from the image data captured by the imaging unit 12, the target work 5 to be detected as the transfer target, based on target-work information obtained through artificial intelligence (AI) such as machine learning or deep learning.
Here, "identification of the target work" means that the RGB image data and depth image data obtained by the imaging unit 12 are matched against target-work information obtained from a model trained by SSD, a deep learning method, for detecting the target work; the degree of matching is quantified as a "similarity" R, and the higher the similarity value, the more likely the candidate is the target work 5 to be detected. The similarity determination unit 48 determines the work to be captured (hereinafter, the capture-target work) 5a by passing the image obtained by the imaging unit 12 through the image processing unit 4 and matching it against the trained model (FIG. 3). As one example, FIG. 4 shows the target works 5 being identified and the five with the highest similarity being detected in descending order. In calculating the similarity R (the control unit's similarity determination), at least one of the size, shape, color, and surface pattern of the target work 5 in the imaging information captured by the imaging unit 12 is used.
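The ranking of candidates by similarity R, as in the five detections of FIG. 4, can be sketched as follows. This is a hypothetical illustration only, not the patented implementation: the `detections` list stands in for the output of the trained SSD model, each entry pairing a candidate region with its similarity score.

```python
def top_candidates(detections, k=5):
    """Return up to k detected candidate works, ordered by
    descending similarity R (as in the five detections of Fig. 4)."""
    return sorted(detections, key=lambda d: d["similarity"], reverse=True)[:k]

# Example: three candidate regions with their similarity scores.
detections = [
    {"box": (10, 20, 60, 80), "similarity": 0.92},
    {"box": (100, 40, 150, 95), "similarity": 0.81},
    {"box": (200, 10, 260, 70), "similarity": 0.88},
]
ranked = top_candidates(detections)
# ranked[0] is the candidate with similarity 0.92
```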
FIG. 5 is a diagram illustrating the operation of packing a wasabi bag 5W and a noodle soup bag 5M into a noodle container Z8 with the arm robot Z7. FIG. 6 is a diagram illustrating an example of the RGB image Z21 and depth image Z22 obtained by the imaging unit 12.
Next, distance scores corresponding to the distance from the arm robot Z7, described later, are set in advance over the angle of view (imaging area) of the image data obtained by the imaging unit 12; each detected target work 5 is given a distance score according to its position, which is added to its similarity, and the target work 5 with the largest resulting value is determined to be the capture-target work 5a to be transferred.
FIG. 7 shows the preset distance scores. In this embodiment, N concentric semicircles of gradually increasing radius r, 2r, 3r, ..., Nr are drawn around the position of the arm robot Z7, with r chosen based on the size of the target work 5. If a target work 5 lies inside a semicircle, 150 points are added; when a detected target work 5 lies inside several of the N semicircles, with M the number of semicircles it overlaps, its value is computed as similarity + 150 points × M, and the target work 5 with the largest value is determined to be the capture-target work 5a.
This shortens the travel distance of the arm robot Z7 and enables efficient transfer (placement into containers); however, the method of assigning distance scores is not limited to the above. For example, scores may instead be set per semicircle, larger for semicircles closer to the arm robot Z7.
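The scoring rule above can be sketched as follows. This is a minimal illustration under the stated assumptions (works lie in the imaged half-plane in front of the robot, so containment in a semicircle of radius kr reduces to a distance test); the function names and the numeric example are hypothetical.

```python
import math

def distance_score(work_xy, robot_xy, r, n_rings, points=150):
    """Distance score for one work: 150 points times M, where M is the
    number of the N concentric semicircles (radii r, 2r, ..., Nr around
    the robot) containing the work. A closer work lies inside more rings."""
    d = math.dist(work_xy, robot_xy)
    m = sum(1 for k in range(1, n_rings + 1) if d <= k * r)
    return points * m

def total_score(similarity, work_xy, robot_xy, r, n_rings):
    """Similarity + 150 x M; the work with the largest total is captured."""
    return similarity + distance_score(work_xy, robot_xy, r, n_rings)

# With r = 100 mm and N = 3 rings: a work 50 mm from the robot lies
# inside all three rings (M = 3), while one 250 mm away lies only
# inside the outermost ring (M = 1).
```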
(Orientation Determination and Center Detection Functions for the Capture-Target Work 5a in the Control Unit)
Next, a predetermined region of the target work 5a that showed the largest value in the image data and was determined to be the capture-target work is extracted from that image data, and the orientation of the work 5a is determined.
In this embodiment, the orientation of the capture-target work 5a has a detection range of −180° < θ ≤ +180°; since the image data is captured vertically downward from above the tray, the determination is of how far the work 5a is rotated relative to the 0° reference in the vertical view (FIG. 8).
The center 5c of the capture-target work 5a is then obtained; information modeled by deep learning is used both for determining the orientation of the work 5a and for detecting its center.
The orientation determination function makes it possible to align the capture-target work 5a in a desired orientation when placing it in the destination container, and the center position detection function also has the effect of preventing capture failures and drops during transport when the arm robot Z7 captures relatively large or heavy target works 5, or works of diverse shapes such as circular ones.
These functions can also prevent mistakes such as missed placements, make checking for such mistakes easier, and are meaningful from the standpoint of the finished product's appearance.
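Folding an arbitrary rotation into the detection range −180° < θ ≤ +180° used above is a small normalization step; a sketch (the function name is a hypothetical helper, not part of the disclosed device):

```python
def normalize_angle(theta_deg):
    """Map an arbitrary rotation angle in degrees onto the detection
    range -180 < theta <= +180 used for the work's orientation."""
    theta = theta_deg % 360.0      # fold into [0, 360)
    if theta > 180.0:
        theta -= 360.0             # shift (180, 360) down to (-180, 0)
    return theta
```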
(Height Position Measurement Function in the Control Unit)
The steps above establish where on the plane (the X–Y plane) the capture-target work 5a lies and in which orientation. Next, by measuring the distance between the center of the detected capture-target work 5a and the imaging unit 12, the height position (Z axis) of the work 5a is measured, and its position is grasped three-dimensionally (X–Y–Z). In this embodiment, the distance between the imaging unit (depth camera) 12 and the center of the capture-target work 5a is obtained from a depth image, which can recognize depth (here, height).
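Reading the camera-to-work distance at the detected center can be sketched as follows. This is an assumed illustration, not the disclosed implementation: the depth image is taken to be a 2-D array of millimetre readings, and the median window and the fixed camera mounting height are hypothetical details.

```python
import numpy as np

def centre_depth(depth_image, centre_px, window=3):
    """Median depth (mm) in a small window around the detected centre
    pixel of the capture-target work; the median rejects isolated
    depth-sensor noise."""
    cx, cy = centre_px
    h = window // 2
    patch = depth_image[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return float(np.median(patch))

def work_height(camera_height_mm, depth_mm):
    """Height (Z) of the work centre above the tray base, given the
    camera's mounting height and the measured camera-to-work distance."""
    return camera_height_mm - depth_mm
```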
After this, the position information of the capture-target work 5a is converted from the pixel coordinate system (px) based on the image data into the world coordinate system (mm); at the same time, the work's inclination is obtained from its orientation and height position information, the optimum way for the arm robot Z7 to approach and capture the work 5a is conveyed, and a command to capture it is issued.
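The patent does not specify the pixel-to-world conversion; a standard pinhole-camera back-projection, given the depth at the pixel, would look like the sketch below. The intrinsics (fx, fy, cx, cy) and the example numbers are assumptions for illustration, and a further hand-eye calibration transform would still be needed to reach the robot's world frame.

```python
def pixel_to_world(u_px, v_px, depth_mm, fx, fy, cx, cy):
    """Back-project pixel (u, v) with its measured depth into
    camera-frame millimetre coordinates using a pinhole model with
    intrinsics (fx, fy) in pixels and principal point (cx, cy)."""
    x = (u_px - cx) * depth_mm / fx
    y = (v_px - cy) * depth_mm / fy
    return (x, y, depth_mm)

# A pixel at the principal point maps onto the optical axis:
# pixel_to_world(320, 240, 500.0, 600.0, 600.0, 320.0, 240.0)
# -> (0.0, 0.0, 500.0)
```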
(Arm Robot)
The arm robot Z7 is connected to the control unit 46 and, on commands from the control unit, captures the capture-target work 5a from among the plurality of target works 5 randomly piled in the tray and places it in the destination container. The arm robot Z7 of this embodiment has two arms, each of which identifies target works in a separate tray Z10 and transfers them to containers.
FIG. 9 shows this series of steps as a flow diagram; the steps are carried out repeatedly.
In this embodiment, the arm robot Z7 captures the bag by vacuum suction according to the bag's orientation and inclination, but the capture method is not limited to this.
100 Target work identification transfer device,
12 Imaging unit (imaging camera),
4 Image processing unit,
46 Control unit,
47 Target work detection function,
48 Similarity judgment unit,
5 Target work (bag body),
5a Work to be captured,
5c Center of target work,
5W wasabi (synthetic resin bag containing wasabi),
5M noodle soup (synthetic resin bag containing noodle soup),
Hr transport line,
R similarity,
r, 2r, 3r distance score,
Z7 arm robot,
Z8 transfer destination (plastic food container),
Z10 tray (container in which target workpieces are randomly stacked),
Z21 RGB image,
Z22 Depth image
Claims (9)
An identification transfer device for a target work in which foods such as seasonings and ingredients are enclosed, the device comprising an imaging unit, a control unit, and an arm robot, wherein
the imaging unit images a plurality of target workpieces stacked at random,
the control unit has a capture target work detection function that detects the capture target work from the information imaged by the imaging unit, and an arm robot control function that controls the arm robot so as to transfer the capture target work,
the capture target work detection function determines a similarity between the information of the target work imaged by the imaging unit and the information of the target work held by the control unit, and adds to the similarity value a score assigned in advance according to the distance from the arm robot to obtain a total value, and
the target work having the largest total value is determined to be the capture target work and is transferred by the arm robot under the arm robot control function.
An identification transfer method for a target work in which foods such as seasonings and cooking ingredients are enclosed, the method using an imaging unit, a control unit, and an arm robot, wherein
the imaging unit images a plurality of target workpieces stacked at random,
the control unit has a capture target work detection function that detects the capture target work from the information imaged by the imaging unit, and an arm robot control function that controls the arm robot so as to transfer the capture target work,
the capture target work detection function performs a determination step of determining a similarity between the information of the target work imaged by the imaging unit and the information of the target work held by the control unit, and a total value calculation step of adding to the similarity value a score assigned in advance according to the distance from the arm robot to obtain a total value, and
the target work having the largest total value is determined to be the capture target work and is transferred by the arm robot under the arm robot control function.
The method for identifying and transferring a target work according to any one of claims 6 to 8, wherein the control unit has a center position detection function for detecting the center position of the work to be captured, calculates the distance between the center position of the work to be captured and the imaging unit, and causes the arm robot to capture the work to be captured at its center position.
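The selection rule recited in the claims above (similarity plus a pre-assigned distance score, take the maximum total) can be sketched as follows. The banding function, the candidate data layout, and all names are illustrative assumptions; the claims fix only the scoring rule, and the r / 2r / 3r bands echo the distance scores listed among the reference signs.

```python
def band_score(distance_mm):
    """Illustrative pre-assigned distance bands: workpieces closer to
    the arm robot score higher (cf. the r / 2r / 3r distance scores
    in the reference-sign list). The thresholds are assumptions."""
    if distance_mm < 100:
        return 3
    if distance_mm < 200:
        return 2
    return 1

def select_capture_target(candidates, distance_score=band_score):
    """Pick the capture target work as claimed: add the pre-assigned
    distance score to each workpiece's similarity R and take the
    workpiece with the largest total value.

    candidates -- list of dicts with 'similarity' (R) and
                  'distance_mm' (distance from the arm robot);
                  this layout is illustrative.
    """
    return max(
        candidates,
        key=lambda w: w["similarity"] + distance_score(w["distance_mm"]),
    )
```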
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019095564 | 2019-05-21 | ||
JP2019095564 | 2019-05-21 |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2020192677A true JP2020192677A (en) | 2020-12-03 |
JP7418743B2 JP7418743B2 (en) | 2024-01-22 |
Family
ID=73545547
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2020088802A Active JP7418743B2 (en) | 2019-05-21 | 2020-05-21 | Identification transfer device and identification transfer method for target workpieces |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP7418743B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112643681A (en) * | 2020-12-16 | 2021-04-13 | 湖南涉外经济学院 | Intelligent path planning device and method for industrial mechanical arm |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014058004A (en) * | 2012-09-14 | 2014-04-03 | Yaskawa Electric Corp | Robot device |
JP2014108868A (en) * | 2012-12-03 | 2014-06-12 | Ishida Co Ltd | Transfer device and box packing system including transfer device |
JP2017173142A (en) * | 2016-03-24 | 2017-09-28 | 株式会社別川製作所 | Image processing device, image processing method and micro joint cutting system |
JP2018027581A (en) * | 2016-08-17 | 2018-02-22 | 株式会社安川電機 | Picking system |
JP2018201447A (en) * | 2017-06-07 | 2018-12-27 | 日本製粉株式会社 | Automatic topping device |
Also Published As
Publication number | Publication date |
---|---|
JP7418743B2 (en) | 2024-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6793428B1 (en) | Robot multi-gripper assembly and method for gripping and holding objects | |
US20220306407A1 (en) | Vision-assisted robotized depalletizer | |
US20190087976A1 (en) | Information processing device, image recognition method and non-transitory computer readable medium | |
US10124489B2 (en) | Locating, separating, and picking boxes with a sensor-guided robot | |
JP2021030439A (en) | Robotic multi-gripper assemblies and methods for gripping and holding objects | |
JP2018027581A (en) | Picking system | |
US10239210B2 (en) | Vision-assisted system and method for picking of rubber bales in a bin | |
KR20220165262A (en) | Pick and Place Robot System | |
Tsarouchi et al. | A method for detection of randomly placed objects for robotic handling | |
CN106573381A (en) | Truck unloader visualization | |
CN108700869A (en) | For identification with the sensory perceptual system and method that handle various objects | |
CN113351522A (en) | Article sorting method, device and system | |
CN112509043B (en) | Robot intelligent unstacking method suitable for random mixed paper box stack | |
JP7418743B2 (en) | Identification transfer device and identification transfer method for target workpieces | |
Panda et al. | Robotics for general material handling machines in food plants | |
JP2019058967A (en) | Food transfer system and food holding device | |
JP2021139775A (en) | Mass estimation device and food packaging machine | |
CA2984284C (en) | Apparatus and method for tracing with vision | |
JP6860432B2 (en) | Work recognition device and work recognition method | |
JP2023016800A (en) | Robotic system with depth-based processing mechanism, and methods for operating robotic system | |
JP7424800B2 (en) | Control device, control method, and control system | |
JP5332873B2 (en) | Bag-like workpiece recognition device and method | |
JP2011509820A (en) | Automated method and system for identifying and classifying foods | |
CN112802106A (en) | Object grabbing method and device | |
JP2023121062A (en) | Position detector, picking system, method for detecting position, and position detector program |
Legal Events
Date | Code | Title | Description
---|---|---|---
20230418 | A621 | Written request for application examination | JAPANESE INTERMEDIATE CODE: A621
20230427 | A871 | Explanation of circumstances concerning accelerated examination | JAPANESE INTERMEDIATE CODE: A871
20230915 | A977 | Report on retrieval | JAPANESE INTERMEDIATE CODE: A971007
20231010 | A131 | Notification of reasons for refusal | JAPANESE INTERMEDIATE CODE: A131
20231127 | A521 | Request for written amendment filed | JAPANESE INTERMEDIATE CODE: A523
| TRDD | Decision of grant or rejection written | |
20231208 | A01 | Written decision to grant a patent or to grant a registration (utility model) | JAPANESE INTERMEDIATE CODE: A01
20231226 | A61 | First payment of annual fees (during grant procedure) | JAPANESE INTERMEDIATE CODE: A61
| R150 | Certificate of patent or registration of utility model | Ref document number: 7418743; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150