TW202004671A - Method and device for determining spatial position shape of object, storage medium and robot - Google Patents


Info

Publication number
TW202004671A
Authority
TW
Taiwan
Prior art keywords
cloud data
point cloud
image
data
tested
Prior art date
Application number
TW108119050A
Other languages
Chinese (zh)
Inventor
吳飛
彭建林
楊宇
Original Assignee
大陸商上海微電子裝備(集團)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商上海微電子裝備(集團)股份有限公司 filed Critical 大陸商上海微電子裝備(集團)股份有限公司
Publication of TW202004671A publication Critical patent/TW202004671A/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Disclosed are a method and device for determining the spatial position and shape of an object, a storage medium, and a robot. The method comprises: acquiring binocular vision images of an object to be tested and of a standard mark by means of a binocular vision device; correcting and fitting the binocular vision image of the object to be tested to obtain a depth image of the object in a world coordinate system, and determining a point cloud data image of the object from the depth image; determining the point cloud data of the upper surface and the center position of the upper surface as the position data of the object; and determining the shape data of the object from a fitted plane of the upper-surface point cloud data and the distances between the center position of the upper surface and the boundary positions of the upper-surface point cloud data.

Description

Method and device for determining the spatial position and shape of an object, storage medium, and robot

Embodiments of the present invention relate to the technical field of image recognition and image processing, and in particular to a method and device for determining the spatial position and shape of an object, a storage medium, and a robot.

Locating objects in an image and determining their shape by means of image recognition and image processing has become one of the important factors influencing the development of electronic technology.

The spatial position of an object is its specific position in a spatial coordinate system; the spatial shape of an object describes the orientation in which the object occupies that position. In industrial production, for example, when an industrial robot or robot arm grasps standard or non-standard parts for installation or assembly, if the spatial position and shape of the part have not been determined and a purely mechanical procedure is applied directly, the part can easily be dropped, reducing industrial production efficiency and sometimes even damaging the assembly line or the robot.

In daily life, devices such as drones and intelligent robots that cannot automatically determine the spatial position and shape of an object to be tested require human assistance to carry and transport items. Without such assistance they cannot operate normally, business expansion becomes difficult, and the development of electronic technology is hindered. How to determine the position and shape of objects in space has therefore become an urgent technical problem in the field.

Embodiments of the present invention provide a method and device for determining the spatial position and shape of an object, a storage medium, and a robot, so that after an image of the object to be tested is acquired by a binocular vision device, the spatial position and shape of the object can be determined through processing and analysis.

In a first aspect, an embodiment of the present invention provides a method for determining the spatial position and shape of an object. The method includes: acquiring binocular vision images of an object to be tested and of a standard mark by means of a binocular vision device, the binocular vision device being arranged above the object to be tested; correcting and fitting the binocular vision image of the object according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain a depth image of the object in the world coordinate system, and determining a point cloud data image of the object from that depth image; determining the upper-surface point cloud data of the object by a vertical spatial statistics method; determining the center position of the upper surface from the upper-surface point cloud data, as the position data of the object; and determining the shape data of the object from a fitted plane of the upper-surface point cloud data and the distances between the center position of the upper surface and the boundary positions of the upper-surface point cloud data.

In some embodiments, after the position data and the shape data of the object to be tested are determined, the method further includes: determining the grasping position and grasping posture of a robot operating arm according to the position data and the shape data of the object, so as to control the robot operating arm to grasp the object.

In some embodiments, before the binocular vision images of the object to be tested and the standard mark are acquired by the binocular vision device, the method further includes: selecting a fixed structure in the space bearing the object to be tested as the standard mark, or installing a mark in that space as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device from the positional relationship between the binocular vision device and the standard mark.

In some embodiments, correcting and fitting the binocular vision image of the object to be tested according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain the depth image of the object in the world coordinate system, includes: determining the position of the binocular vision device using the standard mark; determining world-coordinate-system correction parameters for the binocular vision image of the object according to the positional relationship between the position of the binocular vision device and the standard mark; and converting the binocular vision image of the object into the world coordinate system according to those correction parameters, then performing depth image fitting to obtain the depth image of the object in the world coordinate system.

In some embodiments, correcting and fitting the binocular vision image of the object to be tested according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain the depth image of the object in the world coordinate system, includes: performing depth image fitting on the binocular vision image of the standard mark and on that of the object to obtain their respective primary depth images; determining world-coordinate-system correction parameters for the primary depth image of the object according to the positional relationship between the position of the binocular vision device and the standard mark; and correcting the primary depth image of the object with those parameters to obtain the depth image of the object in the world coordinate system.

In some embodiments, after the point cloud data image of the object to be tested is determined from its depth image in the world coordinate system and before the upper-surface point cloud data is determined by the vertical spatial statistics method, the method further includes: filtering out background point cloud data according to the three-color difference of the point cloud data in the point cloud data image of the object, to obtain a foreground point cloud data image. Correspondingly, determining the upper-surface point cloud data of the object by the vertical spatial statistics method includes: determining the upper-surface point cloud data of the object from the foreground point cloud data image by the vertical spatial statistics method.

In some embodiments, filtering out the background point cloud data according to the three-color difference of the point cloud data in the point cloud data image of the object, to obtain the foreground point cloud data image, includes determining the three-color difference value of each point with the following formula: T = |R_point - G_point| - |G_point - B_point|, where R_point, G_point, and B_point are the red, green, and blue values of the RGB color of the point, and T is the three-color difference value. When the three-color difference value of a point is smaller than a background filtering threshold, the corresponding point cloud data is determined to be background point cloud data and is filtered out.
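The three-color difference test above is straightforward to vectorize. A minimal Python sketch (NumPy assumed; the threshold of 50 and the sample RGB values are illustrative, not taken from the patent):

```python
import numpy as np

def filter_background(points_rgb, threshold):
    """Keep points whose three-color difference
    T = |R - G| - |G - B| reaches the background filtering
    threshold; points with smaller T are treated as background."""
    r = points_rgb[:, 0].astype(float)
    g = points_rgb[:, 1].astype(float)
    b = points_rgb[:, 2].astype(float)
    t = np.abs(r - g) - np.abs(g - b)
    return t >= threshold  # True = foreground, False = background

# Demo: a strongly colored point versus a near-gray (background) point
pts = np.array([[200, 40, 30],     # T = 160 - 10 = 150 -> foreground
                [120, 118, 121]])  # T = 2 - 3 = -1     -> background
mask = filter_background(pts, threshold=50.0)
```

Applying `mask` to the point cloud yields the foreground point cloud data image described above.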

In some embodiments, determining the upper-surface point cloud data of the object from the foreground point cloud data image by the vertical spatial statistics method includes: computing the statistical distribution of the vertical data of all foreground point cloud data; determining the number of foreground points in each vertical data interval of the distribution; determining the median value of the vertical data interval containing the most foreground points; and taking the point cloud data whose vertical data fall within a set range as the upper-surface point cloud data of the object, the set range being formed by a first value, obtained by subtracting a preset controllable value from the median of that vertical data interval, and a second value, obtained by adding the preset controllable value to that median.
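The vertical statistics above amount to a histogram over the Z values. A minimal sketch, assuming a hypothetical bin width and taking the preset controllable value as a multiple `k_sigma` of the standard deviation of the vertical data:

```python
import numpy as np

def top_surface_points(points_xyz, bin_width=1.0, k_sigma=1.0):
    """Select upper-surface points: histogram the vertical (Z) data,
    take the center of the most populated interval, and keep points
    within +/- delta of it, delta being k_sigma standard deviations."""
    z = points_xyz[:, 2]
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, edges = np.histogram(z, bins=edges)
    i = np.argmax(counts)                     # most populated interval
    center = 0.5 * (edges[i] + edges[i + 1])  # median of that interval
    delta = k_sigma * z.std()                 # preset controllable value
    return points_xyz[np.abs(z - center) <= delta]

# Demo: five points on a flat top near z = 10, two stray points near z = 0
pts = np.array([[0, 0, 10.0], [1, 0, 10.1], [0, 1, 9.9],
                [1, 1, 10.05], [2, 0, 9.95], [5, 5, 0.0], [6, 6, 0.5]])
top = top_surface_points(pts)
```

With these values the most populated bin is the one near z = 10, so the two low-lying points are excluded.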

In some embodiments, the preset controllable value is determined as follows: the vertical data of all foreground point cloud data are analyzed statistically to determine the standard deviation, and a set multiple of the standard deviation is taken as the preset controllable value.

In some embodiments, determining the center position of the upper surface of the object from its upper-surface point cloud data, as the position data of the object, includes: determining the center position of the upper surface from the average of the spatial coordinates of all upper-surface point cloud data, as the position data of the object. Determining the shape data from the fitted plane of the upper-surface point cloud data and the distances between the center position of the upper surface and the boundary positions of the upper-surface point cloud data includes: performing plane fitting on the spatial coordinates of the upper-surface point cloud data to determine the upper surface of the object; determining the normal vector of the upper surface, and from it the twist angle Rx of the object about the X axis and the twist angle Ry about the Y axis in the world coordinate system; projecting the upper-surface point cloud data onto the XOY plane; finding the minimum among the distances between the projected center position and the projected boundary positions of the upper-surface point cloud data; determining the twist angle Rz of the object about the Z axis from the direction of that minimum; and taking Rx, Ry, and Rz as the shape data of the object.
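A compact sketch of this position and shape computation. The SVD-based plane fit and the arctan2 angle conventions are assumptions (the patent fixes neither), and boundary extraction of the projected cloud is left outside the sketch, with the boundary points passed in directly:

```python
import numpy as np

def pose_from_top_surface(points_xyz, boundary_xy):
    """points_xyz: upper-surface point cloud, shape (N, 3).
    boundary_xy: XOY projections of its boundary points, shape (M, 2).
    Returns the upper-surface center (position data) and (Rx, Ry, Rz)."""
    center = points_xyz.mean(axis=0)          # average of spatial coordinates

    # Plane fit: the singular vector for the smallest singular value
    # of the centered cloud is the normal of the fitted upper surface.
    _, _, vt = np.linalg.svd(points_xyz - center)
    normal = vt[-1]
    if normal[2] < 0:
        normal = -normal                      # orient the normal upward
    nx, ny, nz = normal
    rx = np.arctan2(ny, nz)                   # twist angle about the X axis
    ry = np.arctan2(nx, nz)                   # twist angle about the Y axis

    # Rz: direction of the boundary point nearest to the projected center
    diff = boundary_xy - center[:2]
    nearest = diff[np.argmin(np.linalg.norm(diff, axis=1))]
    rz = np.arctan2(nearest[1], nearest[0])   # twist angle about the Z axis
    return center, (rx, ry, rz)

# Demo: a level 4 x 2 rectangle at z = 5; the nearest boundary point is
# the midpoint of a long side, so Rz points along +Y (pi/2).
pts = np.array([[-2.0, -1, 5], [-2, 1, 5], [2, -1, 5], [2, 1, 5], [0, 0, 5]])
edge = np.array([[2.0, 0], [-2, 0], [0, 1], [0, -1]])
center, (rx, ry, rz) = pose_from_top_surface(pts, edge)
```

For a level surface the normal is (0, 0, 1), so Rx and Ry vanish, matching the intuition that an untilted object needs no tilt correction before grasping.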

In a second aspect, an embodiment of the present invention further provides a device for determining the spatial position and shape of an object. The device includes: a binocular vision image acquisition module, configured to acquire binocular vision images of an object to be tested and of a standard mark by means of a binocular vision device, the binocular vision device being arranged above the object; a point cloud data image determination module, configured to correct and fit the binocular vision image of the object according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain a depth image of the object in the world coordinate system, and to determine a point cloud data image of the object from that depth image; an upper-surface point cloud data filtering module, configured to determine the upper-surface point cloud data of the object by a vertical spatial statistics method; and a position and shape data determination module, configured to determine the center position of the upper surface from the upper-surface point cloud data, as the position data of the object, and to determine the shape data of the object from the fitted plane of the upper-surface point cloud data and the distances between the center position of the upper surface and the boundary positions of the upper-surface point cloud data.

In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method for determining the spatial position and shape of an object described in the embodiments of the present invention.

In a fourth aspect, an embodiment of the present invention provides a binocular vision robot, including a binocular vision device, a standard mark, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object described in the embodiments of the present invention.

In the technical solution provided by the embodiments of the present invention, binocular vision images of the object to be tested and of a standard mark are acquired by a binocular vision device arranged above the object; the binocular vision image of the object is corrected and fitted according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain a depth image of the object in the world coordinate system; a point cloud data image of the object is determined from that depth image; the upper-surface point cloud data is determined by a vertical spatial statistics method; the center position of the upper surface is determined from the upper-surface point cloud data, as the position data of the object; and the shape data of the object is determined from the fitted plane of the upper-surface point cloud data and the distances between the center position of the upper surface and the boundary positions of the upper-surface point cloud data. Thus, after an image of the object to be tested is acquired by the binocular vision device, its spatial position and shape can be determined through processing and analysis.

10‧‧‧Binocular vision device

20‧‧‧Console

30‧‧‧Standard mark on the console

50‧‧‧Robot operating arm

[Figure 1] Flowchart of the method for determining the spatial position and shape of an object provided in Embodiment 1 of the present invention.

[Figure 2] Flowchart of the method for determining the spatial position and shape of an object provided in Embodiment 2 of the present invention.

[Figure 3] Schematic diagram of the statistical distribution of point cloud data provided in Embodiment 2 of the present invention.

[Figure 4] Schematic structural diagram of the device for determining the spatial position and shape of an object provided in Embodiment 3 of the present invention.

[Figure 5a] Schematic diagram of the binocular vision robot provided in Embodiment 5 of the present invention.

[Figure 5b] Schematic diagram of the binocular vision robot provided in Embodiment 5 of the present invention.

[Figure 5c] Schematic diagram of the binocular vision robot provided in Embodiment 5 of the present invention.

[Figure 6] Schematic diagram of the method for determining Rz in the spatial shape data of an object provided in Embodiment 2 of the present invention.

The present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and do not limit it. It should further be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.

Before the exemplary embodiments are discussed in more detail, it should be mentioned that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes the steps as sequential processing, many of the steps can be performed in parallel, concurrently, or simultaneously. In addition, the order of the steps can be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the drawing. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.

Embodiment 1

Figure 1 is a flowchart of the method for determining the spatial position and shape of an object provided in Embodiment 1 of the present invention. This embodiment is applicable to locating an object to be tested and determining its shape. The method may be executed by the device for determining the spatial position and shape of an object provided in the embodiments of the present invention; the device may be implemented in software, hardware, or both, and may be integrated into a binocular vision robot.

As shown in Figure 1, the method for determining the spatial position and shape of an object includes:

S110: Acquire binocular vision images of the object to be tested and of a standard mark by means of a binocular vision device, wherein the binocular vision device is arranged above the object to be tested.

The binocular vision device can be used to obtain the spatial position and shape of objects to be tested within a fixed range. On a production or assembly line, for example, the binocular vision device can be mounted at a fixed position directly above the console so that its center corresponds to the center of the console; the images it captures are then known to face the center of the console directly. The device can also be mounted at a non-fixed position, such as the head of a mobile robot or the operating arm of a production-line robot, which makes its placement more flexible, although image correction then becomes somewhat more complicated. If the position of the binocular vision device is fixed, the position of the object in the world coordinate system can be obtained by position-correcting the captured image. For a movable binocular vision device, however, the captured binocular vision image must contain the standard mark in order to determine the position of the object in the world coordinate system, or its position relative to the robot itself or to the robot's mechanical arm.

In this embodiment, optionally, before the binocular vision images of the object to be tested and the standard mark are acquired by the binocular vision device, the method further includes: selecting a fixed structure in the space bearing the object as the standard mark, or installing a mark in that space as the standard mark, and establishing the relationship between the coordinate system of the standard mark and that of the binocular vision device from the positional relationship between the two. The benefit of this arrangement is that the image can be corrected and fitted against a fixed or preset standard mark.

Here, the standard mark is a mark set at a fixed position that is used, within the binocular vision image, to calibrate that image; it may be, for example, a cross of arrows pointing due north and due east.

The binocular vision device is arranged above the object to be tested so that an image of the object's upper surface can be obtained, because when a robot or robot operating arm grasps an object, the grasping angle is usually determined from above according to the shape of the object. If the robot can grasp objects laterally, the position and shape of the front surface of the object can be determined instead.

S120: Correct and fit the binocular vision image of the object to be tested according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain a depth image of the object in the world coordinate system, and determine a point cloud data image of the object from that depth image.

As noted above, the standard mark is a mark set at a fixed position that calibrates the binocular vision image, for example a cross of arrows pointing due north and due east. A depth image is an image in which each pixel carries depth information. In this embodiment, the corrected depth image carries top-down depth information, taking the Z-axis position of the binocular vision device as the starting point, so that the depth information of each pixel is its vertical (Z-axis) distance from the plane containing the center of the binocular vision device. A point cloud data image displays each pixel in the form of a point cloud and can be converted from the depth image by a suitable algorithm.
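The patent leaves the depth-to-point-cloud algorithm unspecified. A common stand-in, shown here purely as an assumption, is pinhole-camera back-projection with intrinsics fx, fy, cx, cy:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image of shape (H, W) into an (H*W, 3)
    point cloud: x = (u - cx) * z / fx, y = (v - cy) * z / fy, z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Demo with a tiny 2 x 2 depth map (intrinsics are illustrative only)
cloud = depth_to_point_cloud(np.array([[1.0, 1.0], [2.0, 2.0]]),
                             fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Each pixel thus becomes one point whose Z coordinate is the pixel's depth, matching the vertical-distance interpretation in this embodiment.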

In this embodiment, optionally, correcting and fitting the binocular vision image of the object according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, to obtain the depth image of the object in the world coordinate system, includes: determining the position of the binocular vision device using the standard mark; determining world-coordinate-system correction parameters for the binocular vision image of the object from the positional relationship between the binocular vision device and the standard mark; and converting the binocular vision image of the object into the world coordinate system according to those correction parameters, then performing depth-image fitting to obtain the depth image of the object in the world coordinate system.

Alternatively in this embodiment, the correction and fitting includes: performing depth-image fitting on the binocular vision image of the standard mark and on that of the object to obtain their respective primary depth images; determining world-coordinate-system correction parameters for the primary depth image of the object from the positional relationship between the binocular vision device and the standard mark; and applying those correction parameters to the primary depth image of the object to obtain its depth image in the world coordinate system.

The two approaches above describe, respectively, correcting the pictures captured by the two cameras of the binocular vision device into the world coordinate system first and then fitting them, and fitting first and then performing the world-coordinate-system correction. The world-coordinate correction and image fitting proceed as follows. Because the original binocular images are captured independently by the left-eye and right-eye cameras, and the two camera lenses sit at different positions, each camera exhibits a certain distortion. All pixels within the field of view must therefore be fitted, and the fitted compensation values are supplied to the camera program based on measured data.

This further includes determining the correspondence between an object point in the spatial coordinate system and its image point on the image plane. Optionally, the internal parameters of the left-eye and right-eye cameras are adjusted to be consistent. Internal parameters comprise the cameras' internal geometric and optical parameters; external parameters comprise the transformations between the left-eye and right-eye camera coordinate systems and the world coordinate system.

The fitting here is used to correct the distortion produced by the lenses, which is visible in the original images: a straight line in the scene becomes a curve in the original left- and right-eye images, an effect that is especially noticeable toward the corners of the images. Fitting corrects exactly this type of distortion.

During image processing, boundary extraction is performed on the object image; usable algorithms include Laplacian-of-Gaussian filtering. The boundary is an obvious and primary feature for identifying an object and lays the foundation for subsequent algorithms. Processing further includes image preprocessing and feature extraction. Preprocessing mainly covers contrast enhancement, random-noise removal, low-pass filtering, image enhancement, and pseudo-color processing. Feature extraction picks out the commonly used matching features, chiefly point features, line features, and region features. Regarding low-pass filtering: smoothing an image beforehand is very important for fitting it, so applying the low-pass filter to the left- and right-eye images in advance is good practice. The images can also be corrected without the low-pass filter, but the corrected images may then show aliasing; the filter can be turned off when processing speed matters more.

Edge detection is an optional feature that matches on changes in brightness rather than absolute brightness. It is particularly useful when the cameras in the system have automatic gain: if the gain varies differently between cameras, the absolute brightness of the images will disagree, yet the brightness changes remain constant, so edge detection suits environments with large lighting variation. Although edge detection can improve recognition of object edges, it introduces an additional processing step, so the improvement in results must be weighed against the loss of speed when using this feature.

During image processing, the system follows the principle of binocular stereo imaging; binocular stereo three-dimensional measurement is based on the parallax principle.

Let the baseline distance B be the distance between the projection centers of the two cameras, and let f be the camera focal length. Suppose the two cameras observe the same feature point P(x_c, y_c, z_c) of a spatial object at the same moment, acquiring images of P on the "left eye" and "right eye" with image coordinates P_left = (X_left, Y_left) and P_right = (X_right, Y_right). Since the images of the two cameras lie in the same plane, the Y coordinates of the feature point P are equal, i.e. Y_left = Y_right = Y, and triangular geometry gives:

    X_left = f · x_c / z_c
    X_right = f · (x_c − B) / z_c
    Y = f · y_c / z_c

The parallax is then D = X_left − X_right = f · B / z_c, from which the three-dimensional coordinates of the feature point P in the camera coordinate system follow:

    x_c = B · X_left / D
    y_c = B · Y / D
    z_c = B · f / D

Therefore, as long as any point on the left-eye image plane can be matched to a corresponding point on the right-eye image plane, its three-dimensional coordinates can be determined. The method is a purely point-to-point operation: every point on the image plane that has a corresponding match can take part in the computation above, yielding its three-dimensional coordinates.
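The back-projection above can be sketched in a few lines; the function and parameter names are illustrative, not taken from the patent.

```python
def triangulate(x_left, x_right, y, B, f):
    """Recover (x_c, y_c, z_c) from a matched pair of image coordinates.

    B: baseline between the two projection centers; f: focal length.
    Uses D = x_left - x_right, then x_c = B*x_left/D, y_c = B*y/D, z_c = B*f/D.
    """
    d = x_left - x_right  # disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return (B * x_left / d, B * y / d, B * f / d)
```

Running this per matched pixel pair reproduces the point-to-point reconstruction described in the text.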

Furthermore, during stereo matching the correspondence between features is established by computing on the selected features, associating the projections of the same physical point in different images. Stereo matching consists of three basic steps: 1) select, in one image of the stereo pair (e.g. the left image), image features corresponding to an actual physical structure; 2) determine the corresponding image features of the same physical structure in the other image (e.g. the right image); 3) determine the relative position between these two features to obtain the parallax.

Step 2) is the key to matching.

Depth determination: once the parallax image has been obtained through stereo matching, the depth image can be determined and the 3-D information of the scene recovered. Stereo matching establishes the correlation between images using the sum of absolute differences. The principle is as follows: for each pixel in the reference image, select a neighborhood of a given square size and compare it, along the same row, with a series of neighborhoods in the other image to find the best match. The absolute-difference correlation is computed as:

    min over d_min ≤ d ≤ d_max of  Σ_{i=−m/2}^{m/2} Σ_{j=−m/2}^{m/2} |I_left(x+i, y+j) − I_right(x+i−d, y+j)|

where d_min and d_max are the minimum and maximum disparity, m is the mask size, and I_left and I_right are the left and right images.
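The absolute-difference correlation search for a single pixel might be sketched as follows; window handling at image borders is omitted for brevity, and names are illustrative assumptions.

```python
import numpy as np

def sad_disparity(left, right, x, y, m, d_min, d_max):
    """Find the disparity at (x, y) minimizing the sum of absolute differences.

    left, right: 2-D grayscale images; m: odd window (mask) size;
    d_min, d_max: disparity search range. Assumes the matching window
    stays inside both images.
    """
    r = m // 2
    template = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    best_d, best_cost = d_min, float("inf")
    for d in range(d_min, d_max + 1):
        window = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
        cost = np.abs(template - window).sum()  # sum of absolute differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Repeating the search over every pixel yields the parallax image from which depth is computed.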

During image processing, the fitted binocular images of the object are used for computation: depth is calculated by establishing the correlation between images with the binocular parallax formula and the sum of absolute differences, forming a depth map or spatial point cloud data.

S130: Determine the upper-surface point cloud data of the object to be measured using a vertical spatial statistical method.

After the point cloud data image is obtained, the vertical (Z-axis) value of every point is known, and collecting statistics over these values yields the number of cloud points within each height interval of the current point cloud image. If the background is a plane, such as a work table, the background's height interval will contain the most points, and among all the point cloud data the background's Z values will also be the largest or the smallest. Such statistics allow the background point cloud data to be filtered out; within the foreground cloud, counting the points falling in a given interval then identifies the point cloud data of the object's upper surface. If the upper surface is level, its points occupy a relatively narrow range of Z values; if it is inclined, the range is relatively wide.
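The background-filtering idea in this step can be sketched as follows. This is a simplified illustration that assumes the background plane dominates a single height bin; the bin width is an arbitrary choice, not a value fixed by the patent.

```python
import numpy as np

def remove_background_by_height(points, bin_width=0.002):
    """Filter a planar background (e.g. a work table) out of a point cloud.

    Assumes the background holds the most points of any height interval.
    points: N x 3 array of (x, y, z); returns the foreground points.
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, edges = np.histogram(z, bins=edges)
    k = np.argmax(counts)                  # most populated height interval
    lo, hi = edges[k], edges[k + 1]
    background = (z >= lo) & (z <= hi)
    return points[~background]
```

The same histogram can afterwards be reused on the remaining foreground points to locate the upper surface.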

S140: Determine the upper-surface center position of the object from its upper-surface point cloud data, as the position data of the object; and determine the shape data of the object from the fitted surface of the upper-surface point cloud data together with the distance between the upper-surface center position and the boundary of the upper-surface point cloud data.

The upper-surface center position can be expressed in the world coordinate system established by the standard mark, e.g. as (X, Y, Z). For instance, it can be taken as the center of the geometric shape formed by projecting the upper-surface point cloud data onto the XOY plane.

The shape data are determined from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the point cloud boundary. They can be expressed by the three rotation angles Rx, Ry and Rz of the object about the X, Y and Z axes. The fitted upper surface may be a plane or a curved surface. Once the normal vector of the upper surface is determined, Ry follows from the angle between that normal and the XOZ plane, and Rx from its angle with the YOZ plane. Rz is then determined from the angle between the XOY plane and the vector formed by the upper-surface center and the nearest boundary point.
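Deriving Rx and Ry from a fitted plane normal might look as follows. The least-squares plane model and the sign conventions for the angles are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np

def surface_angles(points):
    """Fit a plane z = a*x + b*y + c to top-surface points and derive tilt angles.

    Returns (rx, ry) in radians: the surface's rotations about the X and Y
    axes, taken from the fitted normal (-a, -b, 1). A level surface gives (0, 0).
    """
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    rx = np.arctan2(-normal[1], normal[2])  # tilt about X (assumed sign convention)
    ry = np.arctan2(normal[0], normal[2])   # tilt about Y (assumed sign convention)
    return rx, ry
```

A curved upper surface would need a higher-order fit, but the plane case covers the typical grasping scenario described here.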

In the technical solution provided by this embodiment of the invention, binocular vision images of the object to be measured and of a standard mark are acquired by a binocular vision device arranged above the object; the binocular vision image of the object is corrected and fitted according to the positional relationship between the standard mark and the object and the binocular vision image of the standard mark, yielding the depth image of the object in the world coordinate system, from which its point cloud data image is determined; the upper-surface point cloud data of the object are determined by a vertical spatial statistical method; the upper-surface center position is determined from those data as the object's position data; and the object's shape data are determined from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the point cloud boundary. In this way, after the image of the object is acquired by the binocular vision device, processing and analysis determine the spatial position and shape of the object.

On the basis of the above technical solution, optionally, after the position data and the shape data are determined, the method further includes: determining the grasp position and grasp posture of a robot operating arm from the position data and the shape data, so as to control the robot operating arm to grasp the object to be measured.

The position of the robot operating arm can be corrected into the same world coordinate system as the object, so that the arm's travel distance, direction of motion, and even trajectory can be determined. When the arm reaches the object, its gripper can be controlled to grasp the object in a form adapted to it. The benefit of this arrangement is that after the object's position is recognized it can be grasped smoothly, with a tighter grip that avoids accidents such as the object slipping out.

Embodiment 2

FIG. 2 is a flowchart of a method for determining the spatial position and shape of an object according to Embodiment 2 of the present invention. This embodiment modifies the previous one as follows: after the point cloud data image of the object is determined from its depth image in the world coordinate system, and before the vertical spatial statistical method is used to determine the upper-surface point cloud data, the method further includes filtering out background point cloud data according to the three-color difference of the points in the object's point cloud data image, obtaining a foreground point cloud data image; correspondingly, determining the upper-surface point cloud data by the vertical spatial statistical method comprises determining them from the foreground point cloud data image.

As shown in FIG. 2, the method for determining the spatial position and shape of the object includes:

S210: Acquire binocular vision images of the object to be measured and of the standard mark with a binocular vision device arranged above the object.

S220: According to the positional relationship between the standard mark and the object, and the binocular vision image of the standard mark, correct and fit the binocular vision image of the object to obtain its depth image in the world coordinate system, and determine the object's point cloud data image from that depth image.

S230: Filter out the background point cloud data according to the three-color difference of the points in the object's point cloud data image, obtaining a foreground point cloud data image.

Here the three-color difference measures the mutual differences among the values of the red, green and blue primary colors of each point's pixel color in the point cloud data image. This arrangement mainly allows background points whose primary-color values are identical or similar to be filtered out, leaving a point cloud data image containing only the foreground cloud.

In this embodiment, optionally, filtering out the background point cloud data according to the three-color difference of the points in the point cloud data image, to obtain the foreground point cloud data image, includes determining for each point the three-color difference value T = |R_point − G_point| − |G_point − B_point|, where R_point, G_point and B_point are the red, green and blue values of the point's RGB color. When T is smaller than a background filtering threshold, the corresponding point is judged to be background point cloud data and is filtered out.

This arrangement helps filter out the point cloud data of the background as well as of reflective or isolated jumping noise points, improving the accuracy with which the upper-surface point cloud data are determined.
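The three-color difference filter can be sketched as follows, assuming each point carries [x, y, z, R, G, B]; the storage layout and the threshold value are illustrative assumptions.

```python
import numpy as np

def filter_background_by_color(points_rgb, threshold):
    """Drop points judged as background by the three-color difference T.

    points_rgb: N x 6 array of [x, y, z, R, G, B]. Uses the measure
    T = |R - G| - |G - B| from the text; points with T below the
    background filtering threshold are discarded.
    """
    r, g, b = points_rgb[:, 3], points_rgb[:, 4], points_rgb[:, 5]
    t = np.abs(r - g) - np.abs(g - b)
    return points_rgb[t >= threshold]
```

A neutral (gray) background gives T near zero and is removed, while strongly colored object points survive the threshold.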

S240: Using the vertical spatial statistical method, determine the upper-surface point cloud data of the object from the foreground point cloud data image.

In this embodiment, optionally, determining the upper-surface point cloud data of the object by the vertical spatial statistical method includes: building the statistical distribution of the vertical values of all foreground points; determining the number of foreground points in each vertical interval; determining the median of the vertical interval containing the most foreground points; and taking as the upper-surface point cloud data the points whose vertical values fall within a set range, namely between a first value obtained by subtracting a preset controllable value from that median and a second value obtained by adding the preset controllable value to it.

FIG. 3 is a schematic diagram of the statistical distribution of point cloud data according to Embodiment 2 of the present invention. In FIG. 3 the horizontal axis is the vertical value of the point cloud data, which can be understood as height in meters, and the vertical axis is the number of points in each interval, i.e. the count of points falling in the current vertical interval; here the interval width is 0.002. In the figure the interval 0.414–0.416 holds the most points, so the upper-surface point cloud data can be determined as the points within a certain range centered on 0.415.

In this embodiment, optionally, the preset controllable value is determined as follows: compute statistics over the vertical values of the whole foreground point cloud data image to obtain the standard deviation, and take a set multiple of that standard deviation as the preset controllable value.

To identify the upper plane of the target object, the colored points of the target are taken and their statistical distribution along the vertical Z direction is computed, using the mean μ of the point cloud data and the value at the statistical high-frequency peak. As shown in FIG. 3, σ (the standard deviation) also controls the selected range; in practice, taking the points within a range between 1σ and 6σ as the target's upper plane reflects a typical data distribution. In FIG. 3 the thick line marks the median μ and the dashed interval marks ±σ. These points are considered to constitute the main imaging surface, i.e. the upper surface, of the target object. At the same time, this method removes the bias caused by the point cloud data of reflective points, outliers, and shadow points.
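The selection described above — histogram the vertical values, take the median of the densest interval, and keep points within a multiple of the standard deviation of that value — might be sketched as follows; the bin width and the multiple k are illustrative choices.

```python
import numpy as np

def top_surface_points(points, bin_width=0.002, k=1.0):
    """Select the top-surface points of a foreground cloud.

    Histograms the Z values, takes the median of the most populated
    interval, and keeps points within +/- k standard deviations of it
    (k plays the role of the 'preset controllable' multiple in the text).
    points: N x 3 array of (x, y, z).
    """
    z = points[:, 2]
    edges = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, edges = np.histogram(z, bins=edges)
    i = np.argmax(counts)
    center = 0.5 * (edges[i] + edges[i + 1])  # median of the densest interval
    sigma = z.std()
    return points[np.abs(z - center) <= k * sigma]
```

Because outliers, reflections and shadow points sit far from the densest interval, they fall outside the ±kσ band and are discarded.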

S250: Determine the upper-surface center position of the object from its upper-surface point cloud data, as the position data of the object; and determine the shape data of the object from the fitted surface of the upper-surface point cloud data together with the distance between the upper-surface center position and the boundary of the upper-surface point cloud data.

On the basis of the above embodiment, this embodiment provides a method for determining the foreground point cloud data. With this method, interference from the point cloud data of reflective points, outliers and shadow points is removed, improving the accuracy of determining the upper-surface point cloud data of the object to be measured.

On the basis of the above technical solutions, optionally, determining the upper-surface center position of the object from its upper-surface point cloud data as the object's position data includes: determining the upper-surface center position as the average of the spatial coordinates of all upper-surface points. Determining the object's shape data from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the point cloud boundary includes: performing a plane fit on the spatial coordinates of the upper-surface points to determine the upper surface of the object; determining the normal vector of the upper surface and, from it, the object's rotation angle Rx about the X axis and Ry about the Y axis in the world coordinate system; projecting the upper-surface point cloud data onto the XOY plane; finding the minimum among the distances between the projected center position and the projected boundary positions; determining the object's rotation angle Rz about the Z axis from the direction of that minimum; and taking Rx, Ry and Rz as the object's shape data. The benefit of this arrangement is that the six spatial parameters of the object are determined more accurately and simply, improving the accuracy of the technical solution provided by the embodiments of the present invention.

FIG. 6 is a schematic diagram of the determination of Rz in the object's spatial shape data according to Embodiment 2 of the present invention. As shown in FIG. 6, after the upper-surface point cloud data are determined they can be projected onto the XOY plane (the Z axis coincides with point O and is not shown), converting the three-dimensional points into a two-dimensional plane. The previously determined center point serves as the projected center. After the center is fixed, all outer points of the set are extracted to form a convex polygon (2-D convex hull), whose vertices are marked as boundary points (only some are labeled in the figure). The center point forms a triangle with each pair of adjacent polygon vertices, any two adjacent vertices forming a segment. In FIG. 6, H4 is the height from the center point to the corresponding part of the boundary in the triangle formed by the center and two boundary points; five height values H1, H2, H3, H4 and H5 are shown, of which H4 is the smallest and H3 the largest.

Among all segments formed by pairs of adjacent vertices, the shortest distance from the center to a segment is found (H4); in the polygon enclosed by the upper-surface point cloud, the foot of the perpendicular lies on this shortest side. Knowing the shortest distance and its direction from the upper-surface center to the boundary polygon, H4 is vectorized: the angle the vector H4 forms with the X axis or the Y axis is the angle Rz by which the object is rotated about the Z axis, so the vector H4 represents the Rz direction of the upper-surface center. From the upper-surface center of the target object, the shortest vector from the center to the boundary polygon is taken (the point of shortest distance from the center to a boundary segment), and the Rz angle of the target object is determined from the angle between the direction of the shortest side and the XOZ plane (or the YOZ plane).
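The Rz computation — project onto XOY, build the 2-D convex hull, find the shortest center-to-edge vector, and take its angle with the X axis — can be sketched as follows. The hull routine is a standard monotone-chain implementation, not taken from the patent, and the angle convention (measured against the X axis) is an illustrative assumption.

```python
import numpy as np

def _cross(o, a, b):
    """Z component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _convex_hull(pts):
    """Andrew's monotone-chain convex hull; returns hull vertices in CCW order."""
    pts = sorted(map(tuple, pts))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return np.array(lower[:-1] + upper[:-1])

def rz_from_projection(points_xy):
    """Estimate Rz from top-surface points projected on the XOY plane (N x 2).

    Finds the shortest perpendicular from the centroid to a hull edge and
    returns the angle of that vector with respect to the X axis, in radians.
    """
    center = points_xy.mean(axis=0)
    hull = _convex_hull(points_xy)
    best = None
    for p, q in zip(hull, np.roll(hull, -1, axis=0)):
        e = q - p
        t = np.clip(np.dot(center - p, e) / np.dot(e, e), 0.0, 1.0)
        foot = p + t * e                      # closest point on this segment
        d = np.linalg.norm(center - foot)
        if best is None or d < best[0]:
            best = (d, foot)
    v = best[1] - center                      # shortest center-to-edge vector (H4)
    return np.arctan2(v[1], v[0])
```

The returned angle corresponds to the direction of the H4 vector in FIG. 6.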

Embodiment 3

FIG. 4 is a schematic structural diagram of a device for determining the spatial position and shape of an object according to Embodiment 3 of the present invention. As shown in FIG. 4, the device includes: a binocular vision image acquisition module 410, configured to acquire binocular vision images of the object under test and of a standard mark by means of a binocular vision device, the binocular vision device being arranged above the object under test; a point cloud data image determination module 420, configured to correct and fit the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain a depth image of the object under test in the world coordinate system, and to determine a point cloud data image of the object under test from that depth image; an upper-surface point cloud data filtering module 430, configured to determine the upper-surface point cloud data of the object under test by a vertical spatial statistics method; and a position and shape data determination module 440, configured to determine the center position of the upper surface of the object under test from its upper-surface point cloud data, as the position data of the object under test, and to determine the shape data of the object under test from the fitted plane of the upper-surface point cloud data together with the distance from the upper-surface center position to the boundary positions of the upper-surface point cloud data.

In the technical solution provided by the embodiments of the present invention, binocular vision images of the object under test and of a standard mark are acquired by a binocular vision device arranged above the object under test; the binocular vision image of the object under test is corrected and fitted according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain a depth image of the object under test in the world coordinate system, from which a point cloud data image of the object under test is determined; the upper-surface point cloud data of the object under test is determined by a vertical spatial statistics method; the center position of the upper surface is determined from the upper-surface point cloud data as the position data of the object under test; and the shape data of the object under test is determined from the fitted plane of the upper-surface point cloud data together with the distance from the upper-surface center position to the boundary positions of the upper-surface point cloud data. In this way, after an image of the object under test is acquired by the binocular vision device, the spatial position and shape of the object can be determined through processing and analysis.

The above device can execute the method provided by any embodiment of the present invention, and has the functional modules and effects corresponding to the executed method.

Embodiment 4

An embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a method for determining the spatial position and shape of an object, the method including: acquiring binocular vision images of the object under test and of a standard mark by a binocular vision device arranged above the object under test; correcting and fitting the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain a depth image of the object under test in the world coordinate system, and determining a point cloud data image of the object under test from that depth image; determining the upper-surface point cloud data of the object under test by a vertical spatial statistics method; determining the center position of the upper surface of the object under test from its upper-surface point cloud data, as the position data of the object under test; and determining the shape data of the object under test from the fitted plane of the upper-surface point cloud data together with the distance from the upper-surface center position to the boundary positions of the upper-surface point cloud data.

Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media, such as CD-ROMs, floppy disks, or tape devices; computer system memory or random access memory, such as DRAM, DDR RAM, SRAM, EDO RAM, or Rambus RAM; non-volatile memory, such as flash memory or magnetic media (e.g., hard disks or optical storage); and registers or other similar types of memory elements. The storage medium may further include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network such as the Internet; the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (for example, in different computer systems connected by a network). The storage medium may store program instructions executable by one or more processors (for example, embodied as a computer program).

Of course, in the storage medium containing computer-executable instructions provided by an embodiment of the present invention, the computer-executable instructions are not limited to the operations for determining the spatial position and shape of an object described above, and may also perform related operations in the method for determining the spatial position and shape of an object provided by any embodiment of the present invention.

Embodiment 5

An embodiment of the present invention provides a binocular vision robot, including a binocular vision device, an operating table, a standard mark on the operating table, a robot operating arm, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object according to any embodiment of the present invention.

FIG. 5a is a schematic diagram of the binocular vision robot according to Embodiment 5 of the present invention. As shown in FIG. 5a, the robot includes a binocular vision device 10, an operating table 20, a standard mark 30 on the operating table, a robot operating arm 50, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object according to any embodiment of the present invention.

FIG. 5b is a schematic diagram of the binocular vision robot according to Embodiment 5 of the present invention. As shown in FIG. 5b, compared with the arrangement described above, mounting the binocular vision device 10 on the gripper jaw makes binocular image acquisition more flexible: when there are many objects to be tested, or when the calculation from one side does not meet the accuracy standard or the noise rate is too high, the jaw can be moved so that the six spatial parameters of the object under test are located from another angle. The six-parameter results obtained from multiple positions can also be compared and cross-checked against one another, improving the accuracy with which the technical solution provided by the embodiments of the present invention determines the spatial position and shape of the object under test.

FIG. 5c is a schematic diagram of the binocular vision robot according to Embodiment 5 of the present invention. As shown in FIG. 5c, compared with the arrangements above, mounting the binocular vision device on the body of the robot operating arm avoids the dedicated mounting bracket required for the binocular vision device in the first arrangement. At the same time, when the robot operating arm moves to another operating table, binocular vision images can be acquired by the same binocular vision device, so that no separate binocular vision device needs to be installed for each operating table, which saves system cost.

The binocular vision device may be mounted on the jaw of the robot operating arm, or at a fixed position on the robot operating arm, as long as images of the upper surface of the object under test and of the front of the operating table can be acquired.

Claims (13)

1. A method for determining the spatial position and shape of an object, comprising: acquiring binocular vision images of an object under test and of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the object under test; correcting and fitting the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain a depth image of the object under test in the world coordinate system, and determining a point cloud data image of the object under test from the depth image of the object under test in the world coordinate system; determining upper-surface point cloud data of the object under test by a vertical spatial statistics method; determining the center position of the upper surface of the object under test from the upper-surface point cloud data of the object under test, as position data of the object under test; and determining shape data of the object under test from the fitted plane of the upper-surface point cloud data of the object under test and from the distance between the center position of the upper surface of the object under test and the boundary positions of the upper-surface point cloud data of the object under test.
2. The method for determining the spatial position and shape of an object according to claim 1, wherein, after the position data of the object under test and the shape data of the object under test are determined, the method further comprises: determining a grasping position and a grasping posture of a robot operating arm according to the position data of the object under test and the shape data of the object under test, so as to control the robot operating arm to grasp the object under test.

3. The method for determining the spatial position and shape of an object according to claim 1, wherein, before the binocular vision images of the object under test and the standard mark are acquired by the binocular vision device, the method further comprises: selecting a fixed structure in the carrying space of the object under test as the standard mark, or installing a mark in the carrying space of the object under test as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device from the positional relationship between the binocular vision device and the standard mark.
4. The method for determining the spatial position and shape of an object according to claim 1, wherein correcting and fitting the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain the depth image of the object under test in the world coordinate system, comprises: determining the position of the binocular vision device using the standard mark; determining world-coordinate-system correction parameters of the binocular vision image of the object under test according to the positional relationship between the position of the binocular vision device and the standard mark; and converting the binocular vision image of the object under test into the world coordinate system according to the world-coordinate-system correction parameters of the binocular vision image of the object under test, and then performing depth image fitting to obtain the depth image of the object under test in the world coordinate system.
5. The method for determining the spatial position and shape of an object according to claim 1, wherein correcting and fitting the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain the depth image of the object under test in the world coordinate system, comprises: performing depth image fitting on the binocular vision image of the standard mark and on the binocular vision image of the object under test to obtain respective primary depth images; determining world-coordinate-system correction parameters of the primary depth image of the object under test according to the positional relationship between the position of the binocular vision device and the standard mark; and performing world-coordinate-system correction on the primary depth image of the object under test according to the world-coordinate-system correction parameters of the primary depth image of the object under test, to obtain the depth image of the object under test in the world coordinate system.
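At the point level, the world-coordinate-system correction described above amounts to applying a rigid transform recovered from the standard mark. A minimal sketch under that assumption (the rotation `R` and translation `t` here are hypothetical inputs, not values defined in the patent):

```python
import numpy as np

def camera_to_world(points_cam, R, t):
    """Map camera-frame points (N x 3) into the world frame via x_w = R @ x_c + t."""
    points = np.asarray(points_cam, dtype=float)
    return points @ np.asarray(R, dtype=float).T + np.asarray(t, dtype=float)
```

In practice `R` and `t` would come from the calibration between the binocular vision device and the standard mark; here they are supplied directly for illustration.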
6. The method for determining the spatial position and shape of an object according to claim 1, wherein, after the point cloud data image of the object under test is determined from the depth image of the object under test in the world coordinate system and before the upper-surface point cloud data of the object under test is determined by the vertical spatial statistics method, the method further comprises: filtering out background point cloud data according to the tricolor difference degree of the point cloud data in the point cloud data image of the object under test, to obtain a foreground point cloud data image; and correspondingly, determining the upper-surface point cloud data of the object under test by the vertical spatial statistics method comprises: determining the upper-surface point cloud data of the object under test from the foreground point cloud data image by the vertical spatial statistics method.
7. The method for determining the spatial position and shape of an object according to claim 6, wherein filtering out the background point cloud data according to the tricolor difference degree of the point cloud data in the point cloud data image of the object under test, to obtain the foreground point cloud data image, comprises: determining the tricolor difference degree value of the point cloud data in the point cloud data image using the formula T = |R_point − G_point| − |G_point − B_point|, where R_point, G_point, and B_point are respectively the red, green, and blue values of the RGB color of a point in the point cloud data, and T is the tricolor difference degree value; and, when the tricolor difference degree value is less than a background filtering threshold, determining that the corresponding point cloud data is background point cloud data and filtering out the background point cloud data.
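The tricolor-difference filtering rule above (T = |R − G| − |G − B|, dropping points below the threshold) can be sketched directly; the point layout `(x, y, z, r, g, b)` and the helper names are assumptions for illustration:

```python
def tricolor_difference(r, g, b):
    """The tricolor difference degree T = |R - G| - |G - B|."""
    return abs(r - g) - abs(g - b)

def filter_background(points, threshold):
    """Keep points whose T reaches the threshold; points below it are treated
    as background and filtered out. Each point is a tuple (x, y, z, r, g, b)."""
    return [p for p in points
            if tricolor_difference(p[3], p[4], p[5]) >= threshold]
```

Note that for a grey background (R = G = B) the value T is exactly 0, so any positive threshold removes it while strongly colored foreground points survive.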
8. The method for determining the spatial position and shape of an object according to claim 6, wherein determining the upper-surface point cloud data of the object under test from the foreground point cloud data image by the vertical spatial statistics method comprises: computing a statistical distribution of the vertical data of all foreground point cloud data; determining the number of foreground point cloud data points in each vertical data interval of the statistical distribution; determining the median of the vertical data interval containing the largest number of foreground point cloud data points; and taking the point cloud data whose vertical data fall within a set range as the upper-surface point cloud data of the object under test, wherein the set range is the range between a first value, obtained by subtracting a preset controllable value from the median of the vertical data interval, and a second value, obtained by adding the preset controllable value to the median of the vertical data interval.

9. The method for determining the spatial position and shape of an object according to claim 8, wherein the preset controllable value is determined as follows: computing the standard deviation of the vertical data of all foreground point cloud data images; and taking a set multiple of the standard deviation as the preset controllable value.
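The vertical statistical selection described above (histogram the z values, take the fullest interval, keep points within its midpoint plus or minus a controllable multiple of the standard deviation) can be sketched as follows; the bin width and function names are assumptions, not values fixed by the patent:

```python
import statistics

def top_surface_points(points, bin_width, k=1.0):
    """Histogram the vertical (z) data, pick the fullest bin, and keep the
    points whose z lies within (bin midpoint +/- k * std-dev of all z)."""
    zs = [p[2] for p in points]
    bins = {}
    for z in zs:
        bins.setdefault(int(z // bin_width), []).append(z)
    fullest = max(bins, key=lambda b: len(bins[b]))
    centre = (fullest + 0.5) * bin_width   # midpoint of the fullest interval
    delta = k * statistics.pstdev(zs)      # the "preset controllable value"
    return [p for p in points if abs(p[2] - centre) <= delta]
```

Because the top surface faces the overhead camera, most foreground points share nearly the same z, so the fullest bin identifies the upper surface and the tolerance band rejects stray points at other heights.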
10. The method for determining the spatial position and shape of an object according to claim 1, wherein determining the center position of the upper surface of the object under test from the upper-surface point cloud data of the object under test, as the position data of the object under test, comprises: determining the center position of the upper surface of the object under test from the average of the spatial coordinates of all the upper-surface point cloud data of the object under test, as the position data of the object under test; and wherein determining the shape data from the fitted plane of the upper-surface point cloud data of the object under test and from the distance between the center position of the upper surface of the object under test and the boundary positions of the upper-surface point cloud data of the object under test comprises: performing plane fitting on the spatial coordinates of the upper-surface point cloud data of the object under test to determine the upper surface of the object under test; determining the normal vector of the upper surface of the object under test, and determining, from the normal vector, the rotation angle Rx of the object under test about the X axis and the rotation angle Ry about the Y axis in the world coordinate system; projecting the upper-surface point cloud data of the object under test onto the XOY plane; determining the minimum among the distances between the projected position of the center of the upper surface and the projected boundary positions of the upper-surface point cloud data; determining the rotation angle Rz of the object under test about the Z axis from the direction of the minimum distance; and determining Rx, Ry, and Rz as the shape data of the object under test.

11. A device for determining the spatial position and shape of an object, comprising: a binocular vision image acquisition module, configured to acquire binocular vision images of an object under test and of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the object under test; a point cloud data image determination module, configured to correct and fit the binocular vision image of the object under test according to the positional relationship between the standard mark and the object under test and the binocular vision image of the standard mark, to obtain a depth image of the object under test in the world coordinate system, and to determine a point cloud data image of the object under test from the depth image of the object under test in the world coordinate system; an upper-surface point cloud data filtering module, configured to determine upper-surface point cloud data of the object under test by a vertical spatial statistics method; and a position and shape data determination module, configured to determine the center position of the upper surface of the object under test from the upper-surface point cloud data of the object under test, as position data of the object under test, and to determine shape data of the object under test from the fitted plane of the upper-surface point cloud data of the object under test and from the distance between the center position of the upper surface of the object under test and the boundary positions of the upper-surface point cloud data of the object under test.

12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for determining the spatial position and shape of an object according to any one of claims 1 to 10.

13. A binocular vision robot, comprising a binocular vision device, a standard mark, a robot operating arm, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object according to any one of claims 1 to 10.
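The Rx/Ry computation described above (fit a plane to the upper-surface points and read the tilt angles off its normal vector) can be sketched with a least-squares fit of z = ax + by + c. The sign conventions for Rx and Ry are assumptions for illustration; the patent does not fix them here:

```python
import math
import numpy as np

def rx_ry_from_points(points):
    """Least-squares plane fit z = a*x + b*y + c over (x, y, z) points;
    returns the tilt angles (degrees) about the X and Y axes taken from
    the unit normal of the fitted plane."""
    P = np.asarray(points, dtype=float)
    A = np.c_[P[:, 0], P[:, 1], np.ones(len(P))]
    (a, b, _), *_ = np.linalg.lstsq(A, P[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    n /= np.linalg.norm(n)                       # unit normal of the fitted plane
    rx = math.degrees(math.atan2(n[1], n[2]))    # rotation about the X axis
    ry = math.degrees(math.atan2(-n[0], n[2]))   # rotation about the Y axis
    return rx, ry
```

For a perfectly level surface the normal is (0, 0, 1) and both angles are zero; a surface sloping one unit of z per unit of x yields a 45-degree tilt about the Y axis.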
TW108119050A 2018-05-31 2019-05-31 Method and device for determining spatial position shape of object, storage medium and robot TW202004671A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810549518.9 2018-05-31
CN201810549518.9A CN110555878B (en) 2018-05-31 2018-05-31 Method and device for determining object space position form, storage medium and robot

Publications (1)

Publication Number Publication Date
TW202004671A true TW202004671A (en) 2020-01-16

Family

ID=68697857

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108119050A TW202004671A (en) 2018-05-31 2019-05-31 Method and device for determining spatial position shape of object, storage medium and robot

Country Status (3)

Country Link
CN (1) CN110555878B (en)
TW (1) TW202004671A (en)
WO (1) WO2019228523A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874854B (en) * 2020-01-19 2020-06-23 立得空间信息技术股份有限公司 Camera binocular photogrammetry method based on small baseline condition
CN113496503B (en) * 2020-03-18 2022-11-08 广州极飞科技股份有限公司 Point cloud data generation and real-time display method, device, equipment and medium
US11232315B2 (en) 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
CN111696162B (en) * 2020-06-11 2022-02-22 中国科学院地理科学与资源研究所 Binocular stereo vision fine terrain measurement system and method
CN111993420A (en) * 2020-08-10 2020-11-27 广州瑞松北斗汽车装备有限公司 Fixed binocular vision 3D guide piece feeding system
CN112819770B (en) * 2021-01-26 2022-11-22 中国人民解放军陆军军医大学第一附属医院 Iodine contrast agent allergy monitoring method and system
CN113146625A (en) * 2021-03-28 2021-07-23 苏州氢旺芯智能科技有限公司 Binocular vision material three-dimensional space detection method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2602588A1 (en) * 2011-12-06 2013-06-12 Hexagon Technology Center GmbH Position and Orientation Determination in 6-DOF
CN104317391B (en) * 2014-09-24 2017-10-03 华中科技大学 A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
US9895131B2 (en) * 2015-10-13 2018-02-20 Siemens Healthcare Gmbh Method and system of scanner automation for X-ray tube with 3D camera
CN107590832A (en) * 2017-09-29 2018-01-16 西北工业大学 Physical object tracking positioning method based on physical feature
CN108010085B (en) * 2017-11-30 2019-12-31 西南科技大学 Target identification method based on binocular visible light camera and thermal infrared camera

Also Published As

Publication number Publication date
CN110555878A (en) 2019-12-10
CN110555878B (en) 2021-04-13
WO2019228523A1 (en) 2019-12-05
