TWI675000B - Object delivery method and system - Google Patents

Object delivery method and system

Info

Publication number
TWI675000B
TWI675000B
Authority
TW
Taiwan
Prior art keywords
processing unit
data
dimensional
point cloud
holding module
Prior art date
Application number
TW108110079A
Other languages
Chinese (zh)
Other versions
TW202035255A (en)
Inventor
陳政隆
春祿 阮
賴宗誠
Original Assignee
所羅門股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 所羅門股份有限公司
Priority to TW108110079A
Application granted
Publication of TWI675000B
Publication of TW202035255A

Landscapes

  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

An object transport method includes: (A) while an object is held by a holding module of a transport unit, a processing unit controls a depth camera to capture an image and obtain a piece of reference data indicating an initial pose presented by the object; (B) the processing unit compares the reference data against pre-stored template data to generate a piece of correction data, where the template data indicates a target pose presented by another object, and the correction data relates to the offset in three-dimensional space of the object between the initial pose and the target pose; (C) based on the correction data and a pre-stored predetermined path, the processing unit drives the holding module to a corrected position and then releases the object.

Description

Object transport method and system

The present invention relates to an object transport method and system, and more particularly to an object transport method and system that involves image recognition.

In modern society, many industries have gradually moved toward full automation to save labor costs. For example, on automated production, processing, or packaging lines, robotic arms are often used to transport objects one by one to an area awaiting processing (for example, onto a conveyor belt or a processing platform) in preparation for the next processing step.

However, each time an object is transported to the processing area, the robotic arm contacts the object at a different position when picking it up, so each object presents a different pose while held by the robotic arm. As a result, even if the path along which the robotic arm transports the objects is fixed, the objects placed in the processing area by the robotic arm may still show noticeable deviations in angle or position.

For automated processing procedures that demand high precision, an excessive pose deviation of an object within the processing area easily leads to processing results that fall short of expectations, or even to processing failures that increase time and material costs. The prior art therefore still leaves considerable room for improvement in object transport.

One object of the present invention is to provide an object transport method that remedies the shortcomings of the prior art.

Accordingly, the object transport method of the present invention is carried out on an object by an object transport system that includes a depth camera, a transport unit adapted to move the object, and a processing unit. The transport unit includes a holding module that can be driven and is adapted to hold the object. The object transport method includes: (A) while the holding module holds the object, the processing unit controls the depth camera to capture an image and obtain a piece of reference data indicating an initial pose presented by the object within a shooting range of the depth camera; (B) the processing unit compares the reference data against pre-stored template data to generate a piece of correction data, where the template data indicates a target pose presented by another object within the shooting range, and the correction data relates to the offset in three-dimensional space of the object between the initial pose and the target pose; (C) based on the correction data and a pre-stored predetermined path, the processing unit drives the holding module to a corrected position and then releases the object.

In some implementations of the object transport method of the present invention, the reference data and the template data each include multiple pieces of coordinate-point data, and each piece of coordinate-point data includes a two-dimensional image part and a three-dimensional point cloud part. Step (B) includes: (b1) using instance segmentation, the processing unit identifies, from the two-dimensional image parts of the reference data, the initial pose presented by the object; (b2) the processing unit produces an estimated offset result based on the difference in the two-dimensional plane between the initial pose and the target pose; (b3) taking the estimated offset result as a baseline, the processing unit performs three-dimensional matching between the three-dimensional point cloud parts of the reference data and those of the template data, and generates the correction data from the result of the three-dimensional matching.

In some implementations of the object transport method of the present invention, in step (b2) the estimated offset result indicates an estimated degree of offset in three-dimensional space of the object between the initial pose and the target pose. In step (b3), the processing unit first generates preliminary matching data corresponding to the estimated offset result, applies a preliminary matching process to the three-dimensional point cloud parts of the reference data according to that preliminary matching data, and then performs three-dimensional matching between the processed three-dimensional point cloud parts of the reference data and the three-dimensional point cloud parts of the template data. The preliminary matching process displaces and rotates the three-dimensional point cloud parts of the reference data in three-dimensional space according to the preliminary matching data, which includes a preliminary matching displacement direction, a preliminary matching displacement distance, a preliminary matching rotation direction, and a preliminary matching rotation angle.

In some implementations of the object transport method of the present invention, in step (A) the reference data has multiple reference feature parts corresponding to the object, and in step (B) the template data has multiple target feature parts respectively corresponding to the reference feature parts; the processing unit generates the correction data from the relative position in three-dimensional space between each reference feature part and its corresponding target feature part.

In some implementations of the object transport method of the present invention, in step (B) the correction data includes displacement correction data and rotation correction data; the displacement correction data corresponds to the positional offset in three-dimensional space of the object between the initial pose and the target pose, and the rotation correction data corresponds to the angular offset in three-dimensional space of the object between the initial pose and the target pose.

In some implementations of the object transport method of the present invention, the object transport system further includes a storage unit, and the method further includes, before step (A): (D) while the holding module holds the other object, the processing unit drives the holding module to a shooting position corresponding to the depth camera and controls the depth camera to capture an image to obtain the template data; (E) the processing unit stores the template data in the storage unit; (F) while the holding module holds the object, the processing unit drives the holding module to the shooting position and executes step (A).

In some implementations of the object transport method of the present invention, the storage unit stores an object recognition neural network model trained by deep learning, and the transport unit further includes a robotic arm for driving the holding module and a camera module that can be driven by the robotic arm and corresponds in position to the holding module. The method further includes, between steps (E) and (F): (G) using the object recognition neural network model and the machine vision of the camera module, the processing unit identifies the object among multiple other objects, controls the robotic arm to drive the holding module to the position of the object, and controls the holding module to hold the object.

In some implementations of the object transport method of the present invention, the object transport system includes multiple depth cameras whose shooting lenses are aimed at a shooting position from multiple different directions;

In these implementations, the method further includes, before step (A): (H) while the holding module holds the other object at the shooting position, the processing unit controls the depth cameras to capture images separately to obtain multiple three-dimensional point cloud templates, merges them by three-dimensional stitching into the template data, and stores the template data. In step (A), with the holding module at the shooting position, the processing unit controls the depth cameras to capture images separately to obtain multiple three-dimensional point cloud models and merges them by three-dimensional stitching into the reference data.

Another object of the present invention is to provide an object transport system capable of carrying out the object transport method.

The object transport system of the present invention is adapted to transport an object and includes a depth camera, a transport unit, and a processing unit electrically connected to the depth camera and the transport unit. The transport unit is adapted to move the object and includes a holding module that can be driven and is adapted to hold the object. While the holding module holds the object, the processing unit controls the depth camera to capture an image and obtain a piece of reference data indicating an initial pose presented by the object within a shooting range of the depth camera; the processing unit compares the reference data against pre-stored template data, which indicates a target pose presented by another object within the shooting range, to generate correction data related to the offset in three-dimensional space of the object between the initial pose and the target pose; and, based on the correction data and a pre-stored predetermined path, the processing unit drives the holding module to a corrected position and then releases the object.

In some implementations of the object transport system of the present invention, the system includes multiple depth cameras whose shooting lenses are aimed at a shooting position from multiple different directions. While the holding module holds the other object at the shooting position, the processing unit controls the depth cameras to capture images separately to obtain multiple three-dimensional point cloud templates, merges them by three-dimensional stitching into the template data, and stores the template data; with the holding module at the shooting position, the processing unit controls the depth cameras to capture images separately to obtain multiple three-dimensional point cloud models and merges them by three-dimensional stitching into the reference data.

The effect of the present invention lies in that the object transport system can generate the correction data from the offset in three-dimensional space of the object between the initial pose and the target pose, and can drive the holding module to the corrected position according to the correction data and the predetermined path, thereby correcting that offset. In this way, the object transport system can place multiple objects at the processing position in a more uniform pose, which benefits subsequent automated processing procedures of all kinds and indeed improves on the inconvenience of the prior art.

Before the present invention is described in detail, it should be noted that in the following description, similar elements are denoted by the same reference numerals. In addition, "electrical connection" in this specification broadly refers both to wired electrical connections in which multiple electronic devices/apparatuses/elements are connected through conductive material, and to wireless connections that transmit signals through wireless communication technology. "Electrical connection" also broadly covers the "direct electrical connection" formed when two electronic devices/apparatuses/elements are connected directly, as well as the "indirect electrical connection" formed when they are connected through further electronic devices/apparatuses/elements.

Referring to Figures 1 and 2, a first embodiment of the object transport system 1 of the present invention is adapted to transport a pile of interleaved, stacked objects 5 (shown in Figure 2) one by one from a position awaiting transport to a position awaiting processing. The objects 5 are identical to one another and may, for example, be components of some electronic product, and the processing position may, for example, be a specific position on a conveyor belt or a processing platform, but neither is limited thereto.

The object transport system 1 includes a depth camera 11, a transport unit 12 adapted to transport the objects 5 one by one, a storage unit 13, and a processing unit 14 electrically connected to the depth camera 11, the transport unit 12, and the storage unit 13.

As shown in Figure 2, the depth camera 11 has a shooting lens 111 and is fixedly mounted so that the shooting lens 111 is aimed at a shooting position. It should be added that in this embodiment the depth camera 11 is implemented, for example, as a depth camera using time-of-flight (ToF) ranging. In other embodiments, however, the depth camera 11 may also be implemented as a depth camera using structured light or stereo vision, and is not limited to this embodiment.

The transport unit 12 includes a movable robotic arm 121, a holding module 122 capable of holding any one of the objects 5, and a camera module 123.

The robotic arm 121 has a fixed end 124, a free end 125 opposite the fixed end 124, and multiple movable joints 126 located between the fixed end 124 and the free end 125 (Figure 2 shows only two movable joints 126 by way of example; in practical implementations there may be more).

The holding module 122 is mounted on the free end 125 of the robotic arm 121, so the robotic arm 121 can drive the holding module 122 to translate and rotate in all directions within a working range. In this embodiment the holding module 122 is implemented, for example, as a gripper that holds any one of the objects 5 by clamping. In other implementations, however, the holding module 122 may, according to different needs, be implemented as other types of end-of-arm tooling (EOAT), such as a suction cup, a clasping fixture, or a pin gripper; that is, any concrete implementation of the holding module 122 that can hold one of the objects 5 will do, and it is not limited to this embodiment.

The camera module 123 is mounted on the robotic arm 121 and corresponds in position to the holding module 122; more specifically, the camera module 123 is, for example, farther from the fixed end 124 of the robotic arm 121 and closer to its free end 125. The camera module 123 can thus be driven by the robotic arm 121 together with the holding module 122. It is further noted that in this embodiment the camera module 123 is implemented, for example, as another depth camera independent of the depth camera 11, and preferably as a depth camera using stereo vision, which helps reduce the overall cost of the transport unit 12, but it is not limited thereto.

As shown in Figure 1, the storage unit 13 stores a predetermined path R, an object recognition neural network model M1 corresponding to the appearance of the objects 5, and an instance segmentation neural network model M2 corresponding to the appearance of the objects 5.

In this embodiment the predetermined path R is used by the processing unit 14 to control the robotic arm 121 to drive the holding module 122 from the shooting position to a predetermined position adjacent to the processing position, and the predetermined path R, for example, consists of multiple three-dimensional coordinates ordered with respect to one another, but is not limited thereto.
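As a concrete illustration, a path of this kind can be stored simply as an ordered array of waypoints; the following Python sketch uses invented coordinate values (in millimeters) purely for the example:

    import numpy as np

    # Hypothetical predetermined path R: an ordered sequence of 3-D coordinates
    # leading from the shooting position to the predetermined position near the
    # processing location. All values are invented for illustration.
    predetermined_path_R = np.array([
        [350.0, 120.0, 400.0],   # shooting position
        [500.0,  80.0, 350.0],   # intermediate waypoint
        [620.0,  40.0, 210.0],   # predetermined position
    ])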

The object recognition neural network model M1 is trained by deep learning, and with it, even when the objects 5 are stacked and interleaved with one another, the processing unit 14 can use the machine vision of the camera module 123 to identify one or more of the objects 5 located on the topmost layer, together with the position and angle of each topmost object 5.

The instance segmentation neural network model M2 is likewise trained by deep learning, and with it the processing unit 14 can use instance segmentation to identify the outline of each independent object in a two-dimensional image. More specifically, in this embodiment the instance segmentation neural network model M2 is built, for example, on the "Mask R-CNN" technique combined with deep learning, but is not limited thereto. As a note, "R-CNN" is short for "Region-based Convolutional Neural Network". In other embodiments, the instance segmentation neural network model M2 may also be built on other kinds of convolutional-neural-network techniques combined with deep learning, so it is not limited to this embodiment.
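As an illustration of this kind of instance segmentation, the following is a minimal sketch using the off-the-shelf Mask R-CNN in torchvision (assuming torchvision 0.13 or later); the COCO-pretrained weights, the input file name, and the 0.5 thresholds are assumptions for the example, whereas the patent's model M2 would instead be trained by deep learning on photographs of the objects 5:

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # Off-the-shelf Mask R-CNN pretrained on COCO; model M2 would instead be
    # trained on images of the objects 5.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("reference_image.png").convert("RGB")  # hypothetical input
    with torch.no_grad():
        out = model([to_tensor(image)])[0]  # dict: boxes, labels, scores, masks

    # Keep confident detections and binarize their soft masks into per-instance
    # outlines, one boolean (H, W) mask per independent object.
    keep = out["scores"] > 0.5
    instance_masks = out["masks"][keep, 0] > 0.5
    print(f"segmented {instance_masks.shape[0]} object instances")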

In this embodiment the processing unit 14 is implemented, for example, as the central processing module of a computer, and preferably as the central processing module of a workstation-class computer, but is not limited thereto.

The following describes in detail, by way of example, how the object transport system 1 of this embodiment carries out an object transport method. In this embodiment the object transport method includes a modeling procedure and a transport procedure that follows it. Specifically, the modeling procedure is performed by the object transport system 1 on a modeling object (not shown) that is substantially identical to the objects 5, while the transport procedure is the process by which the object transport system 1 actually transports each object 5 to the processing position.

Referring to Figure 3 together with Figure 1, the modeling procedure of the object transport method is described first.

First, in step S11, the processing unit 14 controls the robotic arm 121 to drive the holding module 122 to the shooting position and controls the holding module 122 to hold the modeling object so that it is exposed within a shooting range of the depth camera 11. It should be added that in this step the position and angle at which the modeling object is held by the holding module 122 can optionally be adjusted manually, so that the modeling object presents a squared-up angle relative to the depth camera 11 and sits at the center of the shooting range, but this is not a limitation.

Referring also to Figure 4, next, in step S12, the processing unit 14 controls the depth camera 11 to capture an image and thereby obtains a piece of template data D1 (shown in Figure 4) from the depth camera 11. In this embodiment the template data D1 indicates a target pose presented by the modeling object within the shooting range of the depth camera 11; more precisely, in this embodiment the target pose represents the position and angle presented by the modeling object within the shooting range.

In this embodiment the template data D1 includes multiple pieces of coordinate-point data, and each piece of coordinate-point data includes a two-dimensional image part and a three-dimensional point cloud part. Each two-dimensional image part contains a red color value, a green color value, and a blue color value, while each three-dimensional point cloud part contains an X-axis coordinate value, a Y-axis coordinate value, and a Z-axis coordinate value.
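Each coordinate point thus pairs an RGB sample with an XYZ sample, which amounts to an organized RGB-D point cloud. A minimal sketch of one way to hold such data in memory (the 640x480 resolution and the array names are assumptions for the example):

    import numpy as np

    H, W = 480, 640                              # assumed sensor resolution
    rgb = np.zeros((H, W, 3), dtype=np.uint8)    # 2-D image parts: R, G, B values
    xyz = np.zeros((H, W, 3), dtype=np.float32)  # 3-D point cloud parts: X, Y, Z values

    # One piece of coordinate-point data combines both parts at the same index.
    r, g, b = rgb[240, 320]
    x, y, z = xyz[240, 320]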

More specifically, the two-dimensional image parts of the template data D1 together constitute a color two-dimensional target image D11 contained in the template data D1 (that is, a color photograph), and the two-dimensional target image D11 has a two-dimensional target portion D111 presenting the appearance of the modeling object. The three-dimensional point cloud parts of the template data D1, in turn, together constitute a three-dimensional point cloud template D12 contained in the template data D1 (that is, a 3D model), and the three-dimensional point cloud template D12 has a three-dimensional target portion D121 presenting the appearance of the modeling object. In other words, the two-dimensional image parts of the template data D1 jointly indicate the target pose presented by the modeling object in the two-dimensional plane, while its three-dimensional point cloud parts jointly indicate the target pose presented by the modeling object in three-dimensional space.

Next, in step S13, the processing unit 14 applies a background filtering process to the template data D1 and stores the filtered template data D1 in the storage unit 13. Specifically, the background filtering process is applied to the three-dimensional point cloud template D12 of the template data D1 and, for example, filters out as noise every three-dimensional point cloud part in the template D12 whose depth exceeds a predetermined threshold; part of the background in the three-dimensional point cloud template D12 can thereby be removed while the three-dimensional target portion D121 is retained. The position of the three-dimensional target portion D121 within the three-dimensional point cloud template D12 is taken as a target position, and its angle within the template D12 as a target angle. In practical terms, the template data D1 defines the ideal position and angle that an object 5 held by the holding module 122 should present within the shooting range whenever the holding module 122 holds an object 5 at the shooting position.
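A depth-threshold filter of this kind reduces to a boolean mask over the cloud; the following is a minimal sketch, where the 0.8 m threshold and the convention that Z holds depth in the camera frame are assumptions:

    import numpy as np

    def filter_background(xyz, depth_threshold=0.8):
        """Discard every point whose depth exceeds the threshold (background noise).

        xyz: (N, 3) array of points in the camera frame, Z holding depth in meters.
        """
        return xyz[xyz[:, 2] <= depth_threshold]

    # Points beyond 0.8 m are treated as background and filtered out.
    cloud = np.array([[0.01, 0.02, 0.45],    # on the held object -> kept
                      [0.30, -0.10, 1.60]])  # background        -> removed
    print(filter_background(cloud))          # [[0.01 0.02 0.45]]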

Steps S11 through S13 above constitute the modeling procedure of the object transport method. Next, referring to Figure 5 together with Figures 1 and 4, the transport procedure of the object transport method is described; for ease of description and understanding, only the process of the object transport system 1 transporting one of the objects 5 (hereinafter "the object 5") is explained.

First, in step S21, the processing unit 14 controls the robotic arm 121 to drive the camera module 123 to a photographing position corresponding to the position awaiting transport, so that the camera module 123 is aimed at the pile of objects 5 and one or more of the topmost objects in the pile lie within the shooting range of the camera module 123.

Next, in step S22, using the object recognition neural network model M1 and the machine vision of the camera module 123, the processing unit 14 identifies the object 5 among the one or more topmost objects in the pile.

Next, in step S23, the processing unit 14 controls the robotic arm 121 to drive the holding module 122 to the position of the object 5 and controls the holding module 122 to hold the object 5.

Next, in step S24, with the holding module 122 holding the object 5, the processing unit 14 controls the robotic arm 121 to drive the holding module 122 to the shooting position, so that the object 5 is carried along with the holding module 122 and exposed within the shooting range of the depth camera 11 (that is, the situation shown in Figure 2).

Next, in step S25, the processing unit 14 controls the depth camera 11 to capture an image and thereby obtains a piece of reference data D2 (shown in Figure 4) from the depth camera 11. In this embodiment the reference data D2 indicates an initial pose presented by the object 5 within the shooting range of the depth camera 11; more precisely, in this embodiment the initial pose represents an initial position and an initial angle presented by the object 5 within the shooting range.

Like the template data D1, the reference data D2 also includes multiple pieces of coordinate-point data, each of which likewise includes a two-dimensional image part and a three-dimensional point cloud part; each two-dimensional image part contains a red color value, a green color value, and a blue color value, while each three-dimensional point cloud part contains an X-axis coordinate value, a Y-axis coordinate value, and a Z-axis coordinate value. The two-dimensional image parts of the reference data D2 together constitute a color two-dimensional reference image D21 contained in the reference data D2 (that is, another color photograph), and the two-dimensional reference image D21 has a two-dimensional key portion D211 presenting the appearance of the object 5. The three-dimensional point cloud parts of the reference data D2, in turn, together constitute a three-dimensional point cloud model D22 contained in the reference data D2 (that is, another 3D model), and the model D22 has a three-dimensional key portion D221 presenting the appearance of the object 5. In other words, the two-dimensional image parts of the reference data D2 jointly indicate the initial pose presented by the object 5 in the two-dimensional plane, while its three-dimensional point cloud parts jointly indicate the initial pose presented by the object 5 in three-dimensional space. Note in particular that the reference data D2 and the template data D1 are two pieces of data of the same nature but independent of each other.

Next, in step S26, the processing unit 14 applies the background filtering process to the reference data D2. As before, the background filtering process is applied to the three-dimensional point cloud model D22 of the reference data D2 and, for example, filters out as noise every three-dimensional point cloud part in the model D22 whose depth exceeds the predetermined threshold, thereby removing part of the background in the model D22 while retaining the three-dimensional key portion D221.

Next, in step S27, the processing unit 14 compares the background-filtered reference data D2 against the template data D1 and generates a piece of correction data from the comparison result. In this embodiment the correction data includes displacement correction data and rotation correction data: the displacement correction data corresponds to the positional offset in three-dimensional space of the object 5 between the initial pose and the target pose (that is, the offset in distance), while the rotation correction data corresponds to the angular offset in three-dimensional space of the object 5 between the initial pose and the target pose (that is, the offset in direction). In more detail, the correction data, for example, contains a three-dimensional correction displacement direction, a three-dimensional correction displacement distance, a three-dimensional correction rotation direction, and a three-dimensional correction rotation angle, but is not limited thereto.
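A displacement given as direction plus distance and a rotation given as direction (axis) plus angle can be packed into one 4x4 homogeneous transform; a minimal sketch using SciPy, with invented numeric values:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def correction_transform(disp_dir, disp_dist, rot_axis, rot_angle_deg):
        """Build a 4x4 rigid transform from displacement and rotation correction data.

        disp_dir and rot_axis are assumed to be unit vectors.
        """
        T = np.eye(4)
        T[:3, :3] = Rotation.from_rotvec(
            np.radians(rot_angle_deg) * np.asarray(rot_axis, float)
        ).as_matrix()
        T[:3, 3] = disp_dist * np.asarray(disp_dir, float)
        return T

    # Example: displace 25 mm along +X while rotating 10 degrees about the Z axis.
    T_correction = correction_transform([1, 0, 0], 25.0, [0, 0, 1], 10.0)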

Referring further to Figure 6, the following explains in detail how the processing unit 14 of this embodiment compares the reference data D2 against the template data D1.

First, in sub-step S271 shown in Figure 6, using the instance segmentation neural network model M2 stored in the storage unit 13, the processing unit 14 applies instance segmentation to the two-dimensional reference image D21 of the reference data D2 (that is, to the two-dimensional image parts of the reference data D2) and thereby identifies the two-dimensional key portion D211 within the two-dimensional reference image D21 (that is, identifies the appearance of the object 5 in that image).

Next, in sub-step S272, using the instance segmentation neural network model M2, the processing unit 14 further estimates, from the two-dimensional key portion D211 of the reference data D2, the position and angle of the three-dimensional key portion D221 within the three-dimensional point cloud model D22, and produces an estimation result corresponding to the initial pose. In other words, the processing unit 14 estimates, from the color photograph in the reference data D2, the initial pose presented by the object 5 in three-dimensional space. As a note, during its training the instance segmentation neural network model M2 performs deep learning on many photographs of the object 5 that present its appearance from many different angles; with the model M2, the processing unit 14 can therefore estimate the approximate position and angle of the object 5 in three-dimensional space from a photograph of it. In this embodiment the estimation result contains, for example, an estimated position and an estimated angle, but is not limited thereto.

Next, in sub-step S273, the processing unit 14 compares the estimated position and estimated angle of the estimation result against the target position and the target angle, respectively, producing an estimated offset result, and then generates preliminary matching data corresponding to that estimated offset result. Specifically, the estimated offset result indicates an estimated degree of offset in three-dimensional space of the object 5 between the initial pose and the target pose, while the preliminary matching data amounts to estimated correction parameters, referenced to that estimated degree of offset, for correcting it in three-dimensional space. In more detail, in this embodiment the preliminary matching data contains, for example, a preliminary matching displacement direction, a preliminary matching displacement distance, a preliminary matching rotation direction, and a preliminary matching rotation angle.

Next, in sub-step S274, the processing unit 14 applies a preliminary matching process to the three-dimensional point cloud model D22 of the reference data D2 according to the preliminary matching data. More specifically, in the preliminary matching process the processing unit 14 displaces the model D22 in three-dimensional space according to the preliminary matching displacement direction and distance, and also rotates the model D22 in three-dimensional space according to the preliminary matching rotation direction and angle. The preliminary matching displacement direction and distance serve to correct the difference between the estimated position and the target position of the three-dimensional key portion D221, while the preliminary matching rotation direction and angle serve to correct the difference between its estimated angle and the target angle; the three-dimensional key portion D221 can thereby be adjusted to a pose much closer to that of the three-dimensional target portion D121.
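Applying such a preliminary match is a single rigid-body transform of the cloud; a minimal, self-contained sketch with invented preliminary matching values and a random stand-in for model D22:

    import numpy as np
    from scipy.spatial.transform import Rotation

    def apply_rigid_transform(points, T):
        """Displace and rotate an (N, 3) point cloud by a 4x4 homogeneous transform."""
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        return (homogeneous @ T.T)[:, :3]

    # Invented preliminary matching data: displace 12 mm along +Y and rotate
    # -5 degrees about the X axis.
    T_prelim = np.eye(4)
    T_prelim[:3, :3] = Rotation.from_rotvec(
        np.radians(-5.0) * np.array([1.0, 0.0, 0.0])
    ).as_matrix()
    T_prelim[:3, 3] = [0.0, 12.0, 0.0]

    d22_points = np.random.default_rng(0).normal(size=(100, 3))  # stand-in for D22
    d22_pre_matched = apply_rigid_transform(d22_points, T_prelim)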

Next, in sub-step S275, the processing unit 14 performs 3D matching between the three-dimensional point cloud model D22 that has undergone the preliminary matching process and the three-dimensional point cloud template D12 of the template data D1, and generates the correction data from the result of the three-dimensional matching. More precisely, the processing unit 14 generates the correction data from the three-dimensional matching of the three-dimensional key portion D221 against the three-dimensional target portion D121. It is worth mentioning that, because the processing unit 14 already applied the preliminary matching process to the model D22 in sub-step S274, in sub-step S275 it effectively performs the three-dimensional matching of the key portion D221 against the target portion D121 with the estimated offset result as a baseline. Applying the preliminary matching process to the model D22 narrows the difference in position and angle between the key portion D221 and the target portion D121 in advance, so the three-dimensional matching between them can be completed with less computation and at a higher speed. The technical details of three-dimensional matching itself are not the focus of this specification and are not elaborated here.
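The specification leaves the 3D-matching algorithm open; point-to-point ICP (iterative closest point) is one common choice, and the following self-contained sketch uses an SVD-based (Kabsch) alignment step purely as an illustrative stand-in, not as the matching actually used by the system:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=30):
        """Point-to-point ICP: return a 4x4 transform that aligns source onto target."""
        T = np.eye(4)
        src = source.copy()
        tree = cKDTree(target)
        for _ in range(iterations):
            # Pair each source point with its nearest neighbor in the target cloud.
            _, idx = tree.query(src)
            matched = target[idx]
            # Kabsch/SVD: best rigid transform between the paired point sets.
            src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
            Hm = (src - src_c).T @ (matched - tgt_c)
            U, _, Vt = np.linalg.svd(Hm)
            if np.linalg.det(Vt.T @ U.T) < 0:   # guard against reflections
                Vt[-1] *= -1
            R = Vt.T @ U.T
            t = tgt_c - R @ src_c
            src = src @ R.T + t
            step = np.eye(4)
            step[:3, :3], step[:3, 3] = R, t
            T = step @ T
        return T

    # Align the pre-matched model D22 (source) onto template D12 (target); the
    # recovered transform is the basis of the correction data.
    d12 = np.random.default_rng(1).normal(size=(200, 3))   # stand-in for D12
    d22 = d12 + np.array([0.05, 0.0, 0.0])                 # offset copy as D22
    T_match = icp(d22, d12)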

Next, in step S28 shown in Figure 5, the processing unit 14 controls the motion of the robotic arm 121 according to the correction data and the predetermined path R, so that the robotic arm 121 drives the holding module 122, still holding the object 5, to a corrected position corresponding to the correction data and the predetermined path R; specifically, the corrected position is the position of the holding module 122 once the processing unit 14 has finished moving the robotic arm 121 according to the correction data and the predetermined path R. With the holding module 122 driven to the corrected position by the robotic arm 121, the processing unit 14 controls the holding module 122 to release the object 5, completing its transport.

Specifically, by controlling the motion of the robotic arm 121 according to the correction data, the processing unit 14 can correct the offset in three-dimensional space of the object 5 between the initial pose and the target pose; by controlling the motion of the robotic arm 121 according to the predetermined path R, it can have the robotic arm 121 drive the holding module 122 to the predetermined position. In this embodiment the processing unit 14 preferably controls the motion of the robotic arm 121 according to the correction data and the predetermined path R simultaneously; in other embodiments, however, it may first move the robotic arm 121 according to the correction data and then according to the predetermined path R, and it is not limited to this embodiment.

Steps S21 through S28 above constitute the transport procedure of the object transport method.

It is further explained that if the object transport system 1 transports N objects 5 in total (N being an integer greater than 1), that amounts to executing the transport procedure N times, and across those N runs the contact position at which the holding module 122 holds each object 5 in step S23 may differ every time, so the initial pose each object 5 presents while held by the holding module 122 also differs. In other words, if the processing unit 14 merely controlled the robotic arm 121 according to the predetermined path R to drive the holding module 122 to the predetermined position and release the object 5, the actual position where the object 5 is placed could deviate noticeably from the processing position.

In this embodiment, by contrast, the processing unit 14 generates, from the reference data obtained in step S25, correction data corresponding to the initial pose presented by each object 5, so the correction data can be used to control the robotic arm 121 to correct each object 5 from its various initial poses to the one uniform target pose. And in step S28, because the processing unit 14 controls the robotic arm 121 to drive the holding module 122 to the corrected position according to both the correction data and the predetermined path R, this amounts to correcting the predetermined path R (that is, correcting the predetermined position) according to each object 5's initial pose; each object 5 can thus be placed at the processing position more accurately. Concretely, this embodiment can at minimum reduce the pose tolerance of the objects 5, as placed at the processing position, to within 1.5 millimeters, which benefits subsequent automated processing procedures of all kinds.

The above concludes the description of the first embodiment of the object transport system 1 of the present invention.

The object transport system 1 of the present invention also has a second embodiment; the differences between this second embodiment (hereinafter "this embodiment") and the first embodiment are described below.

In this embodiment the storage unit 13 stores the predetermined path R, the object recognition neural network model M1, and a feature recognition neural network model (not shown) corresponding to the appearance of the objects 5. The feature recognition neural network model is trained by deep learning, and with it the processing unit 14 can identify, in a two-dimensional image and/or a three-dimensional point cloud, one or more pre-defined feature parts in the appearance of the object 5.

In step S12 of the modeling procedure of this embodiment, the three-dimensional point cloud template D12 of the template data D1 has multiple target feature parts, all located in the three-dimensional target portion D121. Correspondingly, in step S25 of the transport procedure, the three-dimensional point cloud model D22 of the reference data D2 has multiple reference feature parts respectively corresponding to the target feature parts, all located in the three-dimensional key portion D221.

In step S27 of the transport procedure, moreover, the processing unit 14 compares the reference data D2 against the template data D1 in a way that differs from the first embodiment.

Specifically, in this embodiment, using the feature recognition neural network model, the processing unit 14 generates the correction data from the relative position in three-dimensional space between each reference feature part and its corresponding target feature part. In other words, in this embodiment the processing unit 14 derives, from the relative positions of the target feature parts and the reference feature parts, the offset of the object 5 between the initial pose and the target pose in three-dimensional space, and generates the correction data from it; this embodiment therefore does not perform three-dimensional matching of the three-dimensional key portion D221 against the three-dimensional target portion D121 as the first embodiment does. Nevertheless, the object transport system 1 of this embodiment likewise corrects each object 5's offset in three-dimensional space between the initial pose and the target pose by comparing the template data D1 with the reference data D2, and so achieves the same technical effect as the first embodiment.

In addition, in another embodiment the object transport system 1 may include multiple depth cameras 11 whose shooting lenses 111 are aimed at the shooting position from different directions. For example, there may be three depth cameras 11 whose three shooting lenses 111 are aimed at the shooting position from the front, the left, and the right, respectively. In the modeling procedure, with the holding module 122 holding the modeling object and driven to the shooting position by the robotic arm 121, the processing unit 14 controls the three depth cameras 11 to photograph the modeling object separately, thereby obtaining three three-dimensional point cloud templates D12 from the three depth cameras 11. The processing unit 14 then merges the three templates D12 by 3D stitching into a single stitched three-dimensional point cloud template and stores it in the storage unit 13 as the template data. In the transport procedure, the processing unit 14 likewise controls the three depth cameras 11 to photograph the object 5 separately, obtaining three three-dimensional point cloud models D22 from the three depth cameras 11, and merges them by three-dimensional stitching into a single stitched three-dimensional point cloud model. The processing unit 14 then compares that stitched model, as the reference data, against the template data (that is, the stitched template); the comparison may, for example, use the 3D matching described in the first embodiment or the feature-part comparison described in the second embodiment. Merging the three-dimensional point cloud templates D12 and merging the three-dimensional point cloud models D22 makes the point clouds used for comparison more complete, further improving the precision of the comparison and correction.
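With each camera's pose in a common frame known from calibration, 3D stitching of this kind reduces to transforming each cloud into the common frame and concatenating the results; a minimal sketch in which the identity extrinsics stand in for real calibrated values:

    import numpy as np

    def stitch_clouds(clouds, extrinsics):
        """Merge per-camera (N_i, 3) clouds into one cloud in a common frame.

        extrinsics[i] is the 4x4 camera-to-common-frame transform of camera i,
        assumed to come from a prior calibration step.
        """
        merged = []
        for points, T in zip(clouds, extrinsics):
            homogeneous = np.hstack([points, np.ones((len(points), 1))])
            merged.append((homogeneous @ T.T)[:, :3])
        return np.vstack(merged)

    # Front, left, and right cameras; identity transforms as placeholder extrinsics.
    clouds = [np.random.default_rng(i).normal(size=(50, 3)) for i in range(3)]
    stitched = stitch_clouds(clouds, [np.eye(4)] * 3)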

In summary, in each embodiment of the object transport system 1 of the present invention, the object transport system 1 generates the correction data according to the offset of the object in three-dimensional space between the initial posture and the target posture, and controls the movement of the robot arm 121 according to the correction data and the predetermined path R, thereby correcting the offset of the object 5 in three-dimensional space between the initial posture and the target posture. The object transport system 1 thus places the objects 5 at the to-be-processed position in a more uniform posture, which benefits the various subsequent automated processing procedures (especially those requiring high precision). The object transport system 1 therefore indeed remedies the inconvenience of the prior art and achieves the object of the present invention.
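
For concreteness, one plausible way to fold the correction data into the predetermined path is to adjust the taught end-of-path pose by the inverse of the estimated offset before the holding module releases the object. The sketch below assumes 4x4 homogeneous transforms and a correction expressed in the tool frame; it is an illustrative reading, not the patent's prescribed implementation.

```python
import numpy as np

def corrected_drop_pose(taught_pose, R_corr, t_corr):
    """Adjust the taught end-of-path pose by the inverse of the
    estimated offset, so the object is released in the target posture.

    taught_pose -- 4x4 pose of the holding module at the end of the
                   predetermined path R
    R_corr      -- 3x3 rotation correction (angular offset)
    t_corr      -- length-3 displacement correction (positional offset)
    """
    correction = np.eye(4)
    correction[:3, :3] = R_corr
    correction[:3, 3] = t_corr
    # Composing with the inverse offset cancels the object's deviation
    # from the target posture before release.
    return taught_pose @ np.linalg.inv(correction)
```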

The foregoing is merely illustrative of embodiments of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the content of the specification of the present invention remain within the scope covered by this patent.

1‧‧‧Object transport system
11‧‧‧Depth camera
111‧‧‧Shooting lens
12‧‧‧Transport unit
121‧‧‧Robot arm
122‧‧‧Holding module
123‧‧‧Photographic module
124‧‧‧Fixed end
125‧‧‧Free end
126‧‧‧Movable joint
13‧‧‧Storage unit
14‧‧‧Processing unit
R‧‧‧Predetermined path
M1‧‧‧Object recognition neural network model
M2‧‧‧Instance segmentation neural network model
5‧‧‧Object
D1‧‧‧Template data
D11‧‧‧Two-dimensional target image
D111‧‧‧Two-dimensional target portion
D12‧‧‧Three-dimensional point cloud template
D121‧‧‧Three-dimensional target portion
D2‧‧‧Reference data
D21‧‧‧Two-dimensional reference image
D211‧‧‧Two-dimensional key portion
D22‧‧‧Three-dimensional point cloud model
D221‧‧‧Three-dimensional key portion
S11~S13‧‧‧Steps
S21~S28‧‧‧Steps
S271~S275‧‧‧Sub-steps

Other features and effects of the present invention will be presented clearly in the embodiments described with reference to the drawings, in which:
FIG. 1 is a block diagram exemplarily illustrating a first embodiment of the object transport system of the present invention;
FIG. 2 is a schematic diagram exemplarily illustrating a robot arm of the first embodiment holding an object to be photographed by a depth camera;
FIG. 3 is a flowchart exemplarily illustrating how the first embodiment implements a modeling process of an object transport method;
FIG. 4 is a schematic diagram exemplarily illustrating a piece of template data and a piece of reference data generated in the object transport method;
FIG. 5 is a flowchart exemplarily illustrating how the first embodiment implements a transport process of the object transport method; and
FIG. 6 is a flowchart exemplarily illustrating how a processing unit compares the reference data with the template data.

Claims (10)

1. An object transport method, implemented on an object by an object transport system, the object transport system comprising a depth camera, a transport unit adapted to move the object, and a processing unit, the transport unit including a holding module that can be driven and is adapted to hold the object; the object transport method comprising:
(A) with the holding module holding the object, the processing unit controlling the depth camera to shoot so as to obtain a piece of reference data, the reference data indicating an initial posture presented by the object within a shooting range of the depth camera;
(B) the processing unit comparing the reference data with pre-stored template data to generate a piece of correction data, the template data indicating a target posture presented by another object within the shooting range, and the correction data being related to the offset of the object in three-dimensional space between the initial posture and the target posture; and
(C) the processing unit, according to the correction data and a pre-stored predetermined path, controlling the holding module to be driven to a correction position and then to release the object.
2. The object transport method according to claim 1, wherein the reference data and the template data each include a plurality of pieces of coordinate point data, each piece of coordinate point data including a two-dimensional image portion and a three-dimensional point cloud portion, and step (B) includes:
(b1) the processing unit using an instance segmentation technique to identify, from the two-dimensional image portions of the reference data, the initial posture presented by the object;
(b2) the processing unit generating an estimated offset result according to the difference on the two-dimensional plane between the initial posture and the target posture; and
(b3) the processing unit, taking the estimated offset result as a basis, performing three-dimensional matching between the three-dimensional point cloud portions of the reference data and the three-dimensional point cloud portions of the template data, and generating the correction data according to the result of the three-dimensional matching.
3. The object transport method according to claim 2, wherein, in step (b2), the estimated offset result indicates an estimated degree of offset of the object in three-dimensional space between the initial posture and the target posture, and, in step (b3), the processing unit first generates a piece of preliminary matching data corresponding to the estimated offset result, then performs a preliminary matching process on the three-dimensional point cloud portions of the reference data according to the preliminary matching data, and then performs the three-dimensional matching between the three-dimensional point cloud portions of the reference data that have undergone the preliminary matching process and the three-dimensional point cloud portions of the template data, wherein the preliminary matching process displaces and rotates the three-dimensional point cloud portions of the reference data in three-dimensional space according to the preliminary matching data, and the preliminary matching data includes a preliminary matching displacement direction, a preliminary matching displacement distance, a preliminary matching rotation direction, and a preliminary matching rotation angle.
4. The object transport method according to claim 1, wherein, in step (A), the reference data has a plurality of reference feature parts corresponding to the object, and, in step (B), the template data has a plurality of target feature parts respectively corresponding to the reference feature parts, and the processing unit generates the correction data according to the relative position in three-dimensional space between each reference feature part and the target feature part corresponding to that reference feature part.
5. The object transport method according to claim 1, wherein, in step (B), the correction data includes a piece of displacement correction data and a piece of rotation correction data, the displacement correction data corresponding to the positional offset of the object in three-dimensional space between the initial posture and the target posture, and the rotation correction data corresponding to the angular offset of the object in three-dimensional space between the initial posture and the target posture.
6. The object transport method according to claim 1, wherein the object transport system further comprises a storage unit, and the object transport method further comprises, before step (A):
(D) with the holding module holding the other object, the processing unit controlling the holding module to be driven to a shooting position corresponding to the depth camera, and controlling the depth camera to shoot so as to obtain the template data;
(E) the processing unit storing the template data in the storage unit; and
(F) with the holding module holding the object, the processing unit controlling the holding module to be driven to the shooting position, and executing step (A).
7. The object transport method according to claim 6, wherein the storage unit stores an object recognition neural network model trained by deep learning, and the transport unit further includes a robot arm for driving the holding module and a photographic module that can be driven by the robot arm and corresponds in position to the holding module; the object transport method further comprising, between steps (E) and (F): (G) the processing unit identifying the object from among a plurality of other objects by means of the object recognition neural network model and the machine vision of the photographic module, controlling the robot arm to drive the holding module to the position of the object, and controlling the holding module to hold the object.
8. The object transport method according to claim 1, wherein the object transport system comprises a plurality of depth cameras, the shooting lenses of the depth cameras being aimed at a shooting position from a plurality of different directions;
the object transport method further comprising, before step (A): (H) with the holding module holding the other object at the shooting position, the processing unit controlling the depth cameras to shoot separately so as to obtain a plurality of three-dimensional point cloud templates, merging the three-dimensional point cloud templates into the template data by 3D stitching, and storing the template data;
wherein, in step (A), with the holding module at the shooting position, the processing unit controls the depth cameras to shoot separately so as to obtain a plurality of three-dimensional point cloud models, and merges the three-dimensional point cloud models into the reference data by 3D stitching.
9. An object transport system adapted to transport an object, the object transport system comprising:
a depth camera;
a transport unit adapted to move the object and including a holding module that can be driven and is adapted to hold the object; and
a processing unit electrically connected to the depth camera and the transport unit;
wherein, with the holding module holding the object, the processing unit controls the depth camera to shoot so as to obtain a piece of reference data, the reference data indicating an initial posture presented by the object within a shooting range of the depth camera; the processing unit compares the reference data with pre-stored template data to generate a piece of correction data, the template data indicating a target posture presented by another object within the shooting range, and the correction data being related to the offset of the object in three-dimensional space between the initial posture and the target posture; and the processing unit, according to the correction data and a pre-stored predetermined path, controls the holding module to be driven to a correction position and then to release the object.
10. The object transport system according to claim 9, comprising a plurality of depth cameras, the shooting lenses of the depth cameras being aimed at a shooting position from a plurality of different directions, wherein, with the holding module holding the other object at the shooting position, the processing unit controls the depth cameras to shoot separately so as to obtain a plurality of three-dimensional point cloud templates, merges the three-dimensional point cloud templates into the template data by 3D stitching, and stores the template data; and, with the holding module at the shooting position, the processing unit controls the depth cameras to shoot separately so as to obtain a plurality of three-dimensional point cloud models and merges the three-dimensional point cloud models into the reference data by 3D stitching.
TW108110079A 2019-03-22 2019-03-22 Object delivery method and system TWI675000B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW108110079A TWI675000B (en) 2019-03-22 2019-03-22 Object delivery method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW108110079A TWI675000B (en) 2019-03-22 2019-03-22 Object delivery method and system

Publications (2)

Publication Number Publication Date
TWI675000B true TWI675000B (en) 2019-10-21
TW202035255A TW202035255A (en) 2020-10-01

Family

ID=69023982

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108110079A TWI675000B (en) 2019-03-22 2019-03-22 Object delivery method and system

Country Status (1)

Country Link
TW (1) TWI675000B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI746333B (en) * 2020-12-30 2021-11-11 所羅門股份有限公司 Destacking method and destacking system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1162681C (en) * 1999-03-19 2004-08-18 松下电工株式会社 Three-D object recognition method and pin picking system using the method
US20110205338A1 (en) * 2010-02-24 2011-08-25 Samsung Electronics Co., Ltd. Apparatus for estimating position of mobile robot and method thereof
TW201415414A (en) * 2012-06-29 2014-04-16 Mitsubishi Electric Corp Method for registering data
TW201641071A (en) * 2015-05-20 2016-12-01 國立交通大學 Method and system for recognizing multiple instruments during minimally invasive surgery
CN106228563A (en) * 2016-07-29 2016-12-14 杭州鹰睿科技有限公司 Automatic setup system based on 3D vision
TW201721588A (en) * 2015-12-14 2017-06-16 財團法人工業技術研究院 Method for suturing 3D coordinate information and the device using the same
CN109459984A (en) * 2018-11-02 2019-03-12 宁夏巨能机器人股份有限公司 A kind of positioning grasping system and its application method based on three-dimensional point cloud

Also Published As

Publication number Publication date
TW202035255A (en) 2020-10-01

Similar Documents

Publication Publication Date Title
EP3705239B1 (en) Calibration system and method for robotic cells
CN111452040B (en) System and method for associating machine vision coordinate space in a pilot assembly environment
TWI650626B (en) Robot processing method and system based on 3d image
CN111015665B (en) Method and system for performing automatic camera calibration for robotic control
CN108827154B (en) Robot non-teaching grabbing method and device and computer readable storage medium
JP6444027B2 (en) Information processing apparatus, information processing apparatus control method, information processing system, and program
CN107192331A (en) A kind of workpiece grabbing method based on binocular vision
CN113146172B (en) Multi-vision-based detection and assembly system and method
JP5539138B2 (en) System and method for determining the posture of an object in a scene
JP2008087074A (en) Workpiece picking apparatus
JP2009115783A (en) Method and system for determining 3d posture of object in scene
JP2015090298A (en) Information processing apparatus, and information processing method
CN112109072B (en) Accurate 6D pose measurement and grabbing method for large sparse feature tray
JP2010122777A (en) Workpiece identifying method and workpiece identifying device
US20190255706A1 (en) Simulation device that simulates operation of robot
US20190287258A1 (en) Control Apparatus, Robot System, And Method Of Detecting Object
TWI675000B (en) Object delivery method and system
JP7427370B2 (en) Imaging device, image processing device, image processing method, calibration method for imaging device, robot device, method for manufacturing articles using robot device, control program, and recording medium
TWI660255B (en) Workpiece processing method and processing system
CN111993420A (en) Fixed binocular vision 3D guide piece feeding system
Motai et al. SmartView: hand-eye robotic calibration for active viewpoint generation and object grasping
CN113500593A (en) Method for grabbing designated part of shaft workpiece for loading
JP2017220036A (en) Workpiece detection system and clothing detection system
TW202404768A (en) Automatic operation methods and systems
KR102661635B1 (en) System and method for tying together machine vision coordinate spaces in a guided assembly environment