TWI656421B - Control method of self-propelled equipment - Google Patents

Control method of self-propelled equipment

Info

Publication number
TWI656421B
TWI656421B TW106140165A
Authority
TW
Taiwan
Prior art keywords
self
propelled vehicle
image
virtual plane
plane area
Prior art date
Application number
TW106140165A
Other languages
Chinese (zh)
Other versions
TW201923498A (en)
Inventor
陳政隆
春祿 阮
劉祐延
蔡讚耀
Original Assignee
所羅門股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 所羅門股份有限公司 filed Critical 所羅門股份有限公司
Priority to TW106140165A priority Critical patent/TWI656421B/en
Application granted granted Critical
Publication of TWI656421B publication Critical patent/TWI656421B/en
Publication of TW201923498A publication Critical patent/TW201923498A/en


Abstract

A control method for self-propelled equipment is applicable to a self-propelled vehicle, a robotic arm mounted on the vehicle, an image capture unit mounted on the arm, and a workpiece, and comprises the following steps: the vehicle moves to an actual position, which deviates from a predetermined position by an error amount; the image capture unit captures an image in an actual direction; the vehicle performs image recognition on the image to obtain the relative positional relationship between the workpiece and the image capture unit; and, from that relative positional relationship, the vehicle obtains the error amount and controls the robotic arm to act on the workpiece. An optical-tracking technique is thereby realized that improves the positioning accuracy of the self-propelled equipment.

Description

Control method of self-propelled equipment

The present invention relates to a control method for self-propelled equipment, and in particular to a control method that uses optical tracking to improve the positioning accuracy of self-propelled equipment.

An automated guided vehicle (AGV) is a conventional device widely used in warehousing systems, factory manufacturing, and the like. A conventional AGV can move along a track, or can move between preset positions without the aid of a track, so that a robotic arm mounted on it can grip or move a workpiece at the correct position.

For example, when the AGV is to move from a first position to a second position, one of the following two methods is usually used to position it. In the first method, at least one triangular reflector is placed near the second position and at least one laser source is mounted on the AGV; when laser light from a source is reflected by the corresponding reflector and detected by a sensor on the AGV, the vehicle can confirm that it has reached the second position. In the second method, the coordinates of the second position are set directly in the AGV's control unit, so that the vehicle moves directly to the second position.

Each time the AGV moves, it produces a non-fixed error: for example, it may overshoot or undershoot by a few centimeters in some horizontal direction, or it may stop at the correct position but at a deviated angle, so that the robotic arm cannot grip correctly. Of the two methods above, the second does not compensate for the AGV's positioning at all, while the first does compensate by means of laser reflection but is comparatively restrictive and inconvenient in practice.

Therefore, an object of the present invention is to provide a control method that uses optical tracking to improve positioning accuracy.

Accordingly, the control method for self-propelled equipment of the present invention is applicable to a self-propelled vehicle, a robotic arm mounted on the vehicle, an image capture unit mounted on the arm, and a workpiece at a target position. The control method comprises steps (a) to (d).

In step (a), the self-propelled vehicle receives a driving signal and moves to an actual position, which deviates from a predetermined position by an error amount.

In step (b), the image capture unit captures an image in an actual direction; when the vehicle is at the predetermined position, this actual direction points from the image capture unit toward the workpiece.

In step (c), the vehicle performs image recognition on the image to obtain the relative positional relationship between the workpiece and the image capture unit.

In step (d), the vehicle obtains the error amount from the relative positional relationship and controls the robotic arm accordingly.

In some embodiments, the control method further involves a mark adjacent to the workpiece. In step (b), the actual direction points from the image capture unit toward the mark; in step (c), the vehicle performs image recognition of the mark in the image to obtain the relative positional relationship.

In some embodiments, in step (c), when the vehicle cannot recognize the mark in the image, it moves the robotic arm until the mark is recognized in the analysis of the images captured by the image capture unit.

In some embodiments, in step (c), the vehicle moves the robotic arm so that the image capture unit captures the images over a plurality of predetermined areas in turn. A virtual plane area is defined whose normal direction is parallel to the actual direction; it contains the predetermined areas, and one of those predetermined areas passes through the location of the mark.

In some embodiments, in step (c), the size of the virtual plane area is related to a maximum translation error and a maximum rotation error of the vehicle, and to the distance between the target position and the predetermined position.

In some embodiments, in step (c), the number of predetermined areas contained in the virtual plane area is related to the ratio of the size of the virtual plane area to the size that each image covers on the virtual plane area.

In other embodiments, in step (c), the mark may be any one of a chessboard pattern, a QR code, a bar code, or an ArUco code.

The effect of the present invention is as follows: by performing image recognition on the image captured by the image capture unit, the relative positional relationship between the workpiece and the image capture unit is obtained, from which the error between the predetermined position and the actual position can be derived. Moreover, when a single image cannot be recognized successfully, the robotic arm is moved to obtain an updated image, and recognition is attempted again until it succeeds. An optical-tracking technique is thereby realized that improves the positioning accuracy of the self-propelled equipment.

Before the present invention is described in detail, it should be noted that in the following description, similar elements are denoted by the same reference numerals.

Referring to FIG. 1 and FIG. 2, FIG. 2 is a schematic top view illustrating, by way of example, the positional relationship between a self-propelled apparatus and a workpiece 7. The control method of the present invention is applicable to a self-propelled apparatus comprising an automated guided vehicle (AGV) 1, a robotic arm 2 mounted on the AGV 1, and an image capture unit 3 mounted on the robotic arm 2. In this embodiment, the image capture unit 3 is a camera module for capturing a still image; moreover, the AGV 1 includes a processing unit (not shown) for sending and receiving signals and for computation, in order to control the robotic arm 2 and its own movement, although this is not a limitation. The control method comprises steps S1 to S4.

In step S1, the AGV 1 receives a driving signal and moves to an actual position, which deviates from a predetermined position by an error amount. Referring also to FIG. 3, a top-view coordinate diagram illustrating, by way of example, the relationship between the predetermined position O1 and the actual position O2. For example, the AGV 1 is to move from an initial position (elements 1 to 3 on the left of FIG. 2) to the predetermined position (elements 1 to 3 on the right), so that the robotic arm 2 can grip or otherwise act on a workpiece 7 placed on a platform 8. In actual movement, however, the AGV 1 does not arrive exactly at the predetermined position O1 every time, but instead at the actual position O2 of FIG. 3.

It should be added that the driving signal comes from known techniques, such as preset position coordinates or a start signal from another device, by which the AGV 1 learns that it should move to the predetermined position and is driven accordingly; this does not affect or limit the technical means of the present invention.

Referring to FIG. 1, FIG. 2, and FIG. 3, the X1 and Y1 directions in FIG. 3 form the coordinate system referenced to the AGV 1 when it is at the predetermined position O1, while the X2 and Y2 directions form the coordinate system referenced to the AGV 1 at the actual position O2. In step S2, the image capture unit 3 captures an image in an actual direction. In more detail, when the AGV 1 is at the initial position, the image capture unit 3 faces the actual direction; assuming the AGV 1 moves in the X1 direction as in FIG. 2 and FIG. 3, then when the error amount equals zero, i.e. when the AGV 1 arrives exactly at the predetermined position O1, the direction from the image capture unit 3 toward the workpiece 7 is the actual direction. In other words, while the AGV 1 moves from the initial position to the actual position O2, the image capture unit 3 does not change its position relative to the robotic arm 2 or to the AGV 1.
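The frame relationship above can be sketched numerically. This is a minimal planar illustration, not from the patent: a fixed world point is expressed in the vehicle frame for both the predetermined pose O1 and a hypothetical actual pose O2 offset by errors (dx, dy, dθ).

```python
import math

def to_frame(px, py, ox, oy, theta):
    """Express world point (px, py) in a vehicle frame whose origin is at
    (ox, oy) with heading theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    dx, dy = px - ox, py - oy
    return (c * dx + s * dy, -s * dx + c * dy)

# Mark T at world (0.0, 2.0). Predetermined pose O1 = (0, 0, 0 rad); the
# hypothetical actual pose O2 is off by (dx, dy, dtheta) = (0.05, -0.03, 2 deg).
expected = to_frame(0.0, 2.0, 0.0, 0.0, 0.0)                   # as seen from O1
observed = to_frame(0.0, 2.0, 0.05, -0.03, math.radians(2.0))  # as seen from O2
# The gap between `expected` and `observed` is exactly what steps S2-S4 recover.
```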

In step S3, the AGV 1 performs image recognition on the image to obtain the relative positional relationship between the workpiece 7 and the image capture unit 3. In this embodiment, a mark 6 adjacent to the workpiece 7 is also placed on the platform 8, and the AGV 1 recognizes the mark 6 in the image to obtain the relative positional relationship. The mark 6 may be a chessboard pattern, a QR code, a bar code, or an ArUco code, to facilitate image recognition by the AGV 1, although this is not a limitation.

For convenience of description with regard to step S3, since the relative positions among the image capture unit 3, the robotic arm 2, and the AGV 1 can all be obtained in advance, the predetermined position O1 and the actual position O2 of all three are represented simply by O1 and O2 in the coordinate diagram of FIG. 3; this does not affect the feasibility of actually carrying out the invention.

The AGV 1 produces a translation error and a rotation error after each movement; for example, dx and dy in FIG. 3 are the translation errors in the X1 and Y1 directions respectively, and dθ is the rotation error. Therefore, when the AGV 1 captures the image in the actual direction at the actual position O2, the image may not contain the mark 6. In that case, i.e. when the AGV 1 cannot recognize the mark 6 in the image, the AGV 1 moves the robotic arm 2 so that the image capture unit 3 faces a different direction or position and captures another image. The AGV 1 then performs image recognition on that image; if the mark 6 is still not found, it moves the robotic arm 2 again to obtain the next image. This cycle of moving the arm, capturing a new image, and performing recognition repeats until the mark 6 is recognized in the analysis of the images captured by the image capture unit 3.
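The capture-recognize-move cycle described above can be sketched as follows. `find_marker`, `capture_image`, `detect_marker`, and `move_arm_to` are hypothetical names standing in for the camera driver, the recognizer, and the arm controller; none of them come from the patent.

```python
def find_marker(regions, capture_image, detect_marker, move_arm_to):
    """Scan the predetermined regions in turn until the marker is recognized.

    `regions` is an ordered list of arm poses, one per predetermined region.
    Returns whatever the detector yields on success, or None if the marker
    was not seen in any region.
    """
    for pose in regions:
        move_arm_to(pose)               # re-aim the camera
        image = capture_image()
        result = detect_marker(image)
        if result is not None:          # marker found: relative pose recovered
            return result
    return None

# Tiny demonstration with stubbed hardware: the marker is visible from "Z1".
visited = []
result = find_marker(
    regions=["Z5", "Z4", "Z1", "Z2"],
    capture_image=lambda: visited[-1],               # "image" = current region name
    detect_marker=lambda img: img if img == "Z1" else None,
    move_arm_to=visited.append,
)
```

The loop stops as soon as detection succeeds, so regions after the hit are never visited.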

When the AGV 1 moves the robotic arm 2 so that the image capture unit 3 captures new images, it does so by having the image capture unit 3 capture the images over the multiple predetermined areas of a virtual plane area in turn. The virtual plane passes through the location of the mark 6. In the example of FIG. 3, let T denote the position of the mark 6; when the AGV 1 has moved to the actual position O2, the actual direction faced by the image capture unit 3 (located at O2) is the Y2 direction, so the virtual plane area passes through position T and its normal direction is parallel to the Y2 direction.

The size of the virtual plane area can be obtained from a maximum translation error and a maximum rotation error in the specifications of the AGV 1, together with the distance between the position T of the mark 6 and the predetermined position O1. For example, the larger the maximum translation error, the maximum rotation error, or the distance between the two positions, the larger the virtual plane area.
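One plausible way to turn these three quantities into a plane extent (an assumption for illustration, not the patent's formula) is to add the translation error to the lateral drift that the rotation error produces over the distance to the mark:

```python
import math

def plane_half_extent(max_trans_err, max_rot_err_deg, distance):
    """Upper bound on how far the mark can drift sideways in the camera view:
    the translation error plus the lever-arm effect of the heading error."""
    return max_trans_err + distance * math.tan(math.radians(max_rot_err_deg))

# e.g. +/- 5 cm translation, +/- 2 deg rotation, mark 2 m away
half = plane_half_extent(0.05, 2.0, 2.0)   # about 0.12 m of sideways drift
```

As the text notes, increasing any of the three inputs enlarges the area to be scanned.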

More specifically, the coordinate positions P Marker, P Cam, P Tool, P Base, and P AGV are defined. Each is referenced to the same reference point and has six dimensions: positions along three mutually perpendicular directions X, Y, Z, and rotations about those three directions. P Marker is the coordinate position of the mark 6, and P Cam that of the image capture unit 3; when the image capture unit 3 is mounted on a gripper at the last joint of the robotic arm 2, P Tool is the coordinate position of that gripper, P Base is that of the base pivot of the robotic arm 2, and P AGV is that of the AGV 1 (for example, its center of mass).

Four transformation matrices, T M2C, T C2T, T T2B, and T B2A, are defined, and the coordinate positions are related to one another through them. Since the base pivot has a fixed position relative to the AGV 1, and the image capture unit 3 likewise has a fixed position relative to the gripper, T B2A and T C2T are constants that can be obtained in advance. Furthermore, since the movement of the robotic arm 2 is controlled by the AGV 1, the position of the arm relative to its base pivot is known, i.e. T T2B is known. Finally, T M2C can be obtained by a Perspective-n-Point (PnP) algorithm when the AGV 1 performs image recognition, so it too is known.

From the above transformation matrices, a composite relation is obtained: the coordinate position of the AGV 1 can be converted, through the transformation matrices, into the coordinate position of the mark 6. That is, once the maximum translation error and the maximum rotation error of the AGV 1 are known, the entire region in which the actual position of the AGV 1 may lie can be converted, via the composite transformation matrix M, into the maximum range in which the mark 6 may appear, namely the three-dimensional space in front of and behind the virtual plane area (see FIG. 4 for an example).
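The composite relation referenced above can be illustrated with a toy computation. The frame names follow the patent, but the composition order and the use of planar (3x3 homogeneous) matrices with pure translations are simplifying assumptions for illustration only:

```python
def matmul(a, b):
    # 3x3 matrix product, pure Python
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    # planar homogeneous translation; rotation parts omitted for brevity
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

# Hypothetical offsets, one per link of the chain implied by the matrix names:
T_M2C = translation(0.0, 0.1)    # marker frame  -> camera frame
T_C2T = translation(0.0, 0.05)   # camera frame  -> tool (gripper) frame
T_T2B = translation(0.3, 0.0)    # tool frame    -> arm-base frame
T_B2A = translation(0.2, 0.0)    # arm-base frame -> AGV frame

# Composite M maps marker-frame coordinates all the way into the AGV frame.
M = matmul(matmul(matmul(T_B2A, T_T2B), T_C2T), T_M2C)
marker_in_agv = [M[0][2], M[1][2]]   # roughly [0.5, 0.15] for these offsets
```

With pure translations the composite offset is simply the sum of the link offsets, which makes the chain easy to sanity-check.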

Because the robotic arm 2 of the self-propelled apparatus has different precision requirements when operating on the workpiece 7 (for example, the higher the required precision, the smaller the area on the virtual plane that the image capture unit 3 covers in a single image), the number of predetermined areas contained in the virtual plane area is, depending on the required precision, equal or related to the ratio of the size of the virtual plane area to the size that each image covers on the virtual plane area.
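Under that reading, the region count is just the plane area tiled by the footprint of one image, rounded up per axis. A sketch, assuming rectangular regions:

```python
import math

def region_count(plane_w, plane_h, img_w, img_h):
    """Number of predetermined areas needed to tile the virtual plane area
    with images that each cover img_w x img_h on the plane."""
    return math.ceil(plane_w / img_w) * math.ceil(plane_h / img_h)

# e.g. a 0.6 m x 0.6 m plane, each image covering 0.2 m x 0.2 m -> 9 regions,
# matching the Z1..Z9 grid of FIG. 4
n = region_count(0.6, 0.6, 0.2, 0.2)
```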

Referring again to FIG. 4, a schematic diagram illustrating the virtual plane area and the predetermined areas by way of example. The virtual plane area P1 contains nine predetermined areas Z1 to Z9. When the AGV 1 has moved to the actual position O2, the field of view of the image capture unit 3 facing the actual direction yields a first image over the predetermined area Z5. When the mark 6 cannot be recognized in that first image, the AGV 1 moves the robotic arm 2 so that the image capture unit 3 captures further images over the other predetermined areas Z1 to Z4 and Z6 to Z9 of the virtual plane area P1. The order in which these further images are captured, i.e. the order of the arm's movements (for example spiralling outward through the areas Z4, Z1, Z2, Z6, or sweeping row by row through Z1 to Z3 and onward), is not restricted.
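A center-out spiral over the 3x3 grid of regions can be generated as below. This is one common spiral (starting leftward, matching the Z5 to Z4 example); the patent leaves the exact visiting order open:

```python
def spiral_order(n=3):
    """Visit the cells of an n x n grid starting from the center and spiralling
    outward; returns 1-based region numbers in row-major labelling (Z1..Z9)."""
    r = c = n // 2
    order = [r * n + c + 1]               # start at the center cell
    steps, dr, dc = 1, 0, -1              # first leg: one step to the left
    while len(order) < n * n:
        for _ in range(2):                # two legs per ring, then legs lengthen
            for _ in range(steps):
                r, c = r + dr, c + dc
                if 0 <= r < n and 0 <= c < n:   # skip cells outside the grid
                    order.append(r * n + c + 1)
            dr, dc = dc, -dr              # rotate direction 90 degrees
        steps += 1
    return order

visit = spiral_order(3)   # e.g. [5, 4, 1, 2, 3, 6, 9, 8, 7]
```

Pairing this order with the search loop of step S3 gives a full scan that starts where the mark is most likely to be (the center region Z5).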

It should also be added that in FIG. 4, because the lens of the image capture unit 3 has a limited depth of field, the relative positional relationship between the mark 6 and the image capture unit 3 can only be recognized from the image when the mark 6 lies within a solid region of a certain depth in front of or behind the virtual plane area P1. For ease of drawing, FIG. 4 shows only the solid region on the side of the virtual plane area P1 away from the AGV 1, and omits the solid region on the side adjacent to the AGV 1.

In step S4, the AGV 1 obtains the error amount from the relative positional relationship, in order to control the robotic arm 2 to grip or move the workpiece 7. For example, when the error amount equals zero, the relative position and distance between the AGV 1 and the workpiece 7 are given by T and O1 and by D1 in FIG. 3; when the error amount is greater than zero, the AGV 1 obtains through image recognition the relative position and distance between the workpiece 7 and itself (T and O2, and D2, in FIG. 3) and compensates its positioning accordingly, so that the robotic arm 2 can be controlled to act correctly on the workpiece 7.
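The compensation step can be sketched in the plane: given where the mark should appear relative to the vehicle (expected, from the predetermined pose) and where recognition actually places it (observed), the pose error (dx, dy, dθ) follows. This is a minimal planar sketch, not the patent's six-dimensional computation:

```python
import math

def pose_error(expected, observed):
    """Planar pose error (dx, dy, dtheta) of the vehicle, from the mark's
    expected pose vs. its observed pose, both (x, y, heading_rad) in the
    vehicle frame."""
    ex, ey, eth = expected
    ox, oy, oth = observed
    dth = eth - oth                      # heading error of the vehicle
    c, s = math.cos(dth), math.sin(dth)
    # Rotate the observed offset back into the predetermined frame, then diff.
    return (ex - (c * ox - s * oy), ey - (s * ox + c * oy), dth)

# Pure-translation example: the mark shows up 5 cm right / 3 cm far of where
# it should be, so the vehicle stopped 5 cm short / 3 cm over.
err = pose_error((0.0, 2.0, 0.0), (-0.05, 2.03, 0.0))
```

The returned triple is exactly the correction the arm controller would apply before gripping.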

In summary, by performing image recognition on the image captured by the image capture unit 3, the AGV 1 obtains the relative positional relationship between the workpiece 7 (or the mark 6) and the image capture unit 3, and from it the error amount between the predetermined position O1 and the actual position O2, which compensates the control of the robotic arm 2 so that it acts correctly on the workpiece 7. Moreover, when a single image cannot be recognized successfully, the AGV 1 moves the robotic arm 2 to obtain an updated image and attempts recognition again until it succeeds. An optical-tracking technique is thereby realized that improves the positioning accuracy of the self-propelled apparatus, so the object of the invention is indeed achieved.

The foregoing is merely an embodiment of the present invention and shall not limit the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the specification of the present invention remain within the scope covered by this patent.

S1~S4‧‧‧steps
1‧‧‧self-propelled vehicle (AGV)
2‧‧‧robotic arm
3‧‧‧image capture unit
6‧‧‧mark
7‧‧‧workpiece
8‧‧‧platform
D1‧‧‧distance
D2‧‧‧distance
O1‧‧‧predetermined position
O2‧‧‧actual position
T‧‧‧position
X1‧‧‧direction
X2‧‧‧direction
Y1‧‧‧direction
Y2‧‧‧direction
dx‧‧‧translation error
dy‧‧‧translation error
dθ‧‧‧rotation error
P1‧‧‧virtual plane area
Z1~Z9‧‧‧predetermined areas

Other features and effects of the present invention will become apparent in the embodiments described with reference to the drawings, in which: FIG. 1 is a flowchart illustrating an embodiment of the control method for self-propelled equipment of the present invention; FIG. 2 is a schematic diagram illustrating the positional relationship between a self-propelled apparatus and a workpiece in this embodiment; FIG. 3 is a coordinate diagram illustrating, by way of example, the relationship between a predetermined position and an actual position; and FIG. 4 is a schematic diagram illustrating, by way of example, a virtual plane area and a plurality of predetermined areas.

Claims (3)

1. A control method for self-propelled equipment, applicable to a self-propelled vehicle, a robotic arm mounted on the vehicle, an image capture unit mounted on the arm, a workpiece at a target position, and a mark adjacent to the workpiece, the control method comprising the following steps: (a) the vehicle receives a driving signal and moves to an actual position, which deviates from a predetermined position by an error amount; (b) the image capture unit captures an image in an actual direction, which, when the vehicle is at the predetermined position, points from the image capture unit toward the workpiece and toward the mark; (c) the vehicle performs image recognition on the image to obtain the relative positional relationship between the workpiece and the image capture unit, recognizing the mark in the image to obtain that relationship; when the vehicle cannot recognize the mark in the image, it moves the robotic arm until the mark is recognized in the analysis of the images captured by the image capture unit, the movement causing the image capture unit to capture the images over a plurality of predetermined areas in turn, wherein a virtual plane area is defined whose normal direction is parallel to the actual direction, which contains the predetermined areas, one of which passes through the location of the mark, and whose size is related to a maximum translation error and a maximum rotation error of the vehicle and to the distance between the target position and the predetermined position; and (d) the vehicle obtains the error amount from the relative positional relationship, in order to control the robotic arm.
2. The control method for self-propelled equipment according to claim 1, wherein, in step (c), the number of predetermined areas contained in the virtual plane area is related to the ratio of the size of the virtual plane area to the size that each image covers on the virtual plane area.
3. The control method for self-propelled equipment according to claim 2, wherein, in step (c), the mark may be any one of a chessboard pattern, a QR code, a bar code, or an ArUco code.
TW106140165A 2017-11-20 2017-11-20 Control method of self-propelled equipment TWI656421B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW106140165A TWI656421B (en) 2017-11-20 2017-11-20 Control method of self-propelled equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW106140165A TWI656421B (en) 2017-11-20 2017-11-20 Control method of self-propelled equipment

Publications (2)

Publication Number Publication Date
TWI656421B true TWI656421B (en) 2019-04-11
TW201923498A TW201923498A (en) 2019-06-16

Family

ID=66996318

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106140165A TWI656421B (en) 2017-11-20 2017-11-20 Control method of self-propelled equipment

Country Status (1)

Country Link
TW (1) TWI656421B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4933864A (en) * 1988-10-04 1990-06-12 Transitions Research Corporation Mobile robot navigation employing ceiling light fixtures
CN101182990A (en) * 2007-11-30 2008-05-21 华南理工大学 Large-sized workpieces in process geometric measuring systems based on machine vision
CN101264384A (en) * 2007-03-13 2008-09-17 林其禹 Disc type game system and robot device
TWM517671U (en) * 2015-12-03 2016-02-21 Chao-Shen Chou Machining apparatus
TW201622908A (en) * 2014-12-26 2016-07-01 Kawasaki Heavy Ind Ltd Self-propelled articulated robot
CN107003662A (en) * 2014-11-11 2017-08-01 X开发有限责任公司 Position control robot cluster with visual information exchange

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167850A (en) * 2020-08-21 2022-03-11 鸿富锦精密电子(天津)有限公司 Self-propelled triangular warning frame and advancing control method thereof
CN114167850B (en) * 2020-08-21 2024-02-20 富联精密电子(天津)有限公司 Self-propelled triangular warning frame and travelling control method thereof

Also Published As

Publication number Publication date
TW201923498A (en) 2019-06-16

Similar Documents

Publication Publication Date Title
US11772267B2 (en) Robotic system control method and controller
CN110116406B (en) Robotic system with enhanced scanning mechanism
KR102367438B1 (en) Simultaneous positioning and mapping navigation method, apparatus and system combined with markers
CN111452040B (en) System and method for associating machine vision coordinate space in a pilot assembly environment
US9844882B2 (en) Conveyor robot system provided with three-dimensional sensor
KR101988083B1 (en) Systems and methods for tracking location of movable target object
US7283661B2 (en) Image processing apparatus
JP2018169403A5 (en)
CN108827154B (en) Robot non-teaching grabbing method and device and computer readable storage medium
JP6855492B2 (en) Robot system, robot system control device, and robot system control method
CN108098762A (en) A kind of robotic positioning device and method based on novel visual guiding
CN109900251A (en) A kind of robotic positioning device and method of view-based access control model technology
TWI656421B (en) Control method of self-propelled equipment
CN112428248B (en) Robot system and control method
US20230123629A1 (en) 3d computer-vision system with variable spatial resolution
KR20180017503A (en) Method and System for Calibration of Mobile Robot and Camera
US20240003675A1 (en) Measurement system, measurement device, measurement method, and measurement program
CN110969661A (en) Image processing device and method, position calibration system and method
JP2016203282A (en) Robot with mechanism for changing end effector attitude
US20200376678A1 (en) Visual servo system
CN113302027B (en) Job coordinate generating device
Wang et al. Robotic assembly system guided by multiple vision and laser sensors for large scale components
KR100991194B1 (en) System and method for transporting object of mobing robot
TWI788253B (en) Adaptive mobile manipulation apparatus and method
JP7278637B2 (en) Self-propelled moving device