WO2019228523A1 - Method, device, storage medium and robot for determining the spatial position and form of an object - Google Patents

Method, device, storage medium and robot for determining the spatial position and form of an object

Info

Publication number
WO2019228523A1
WO2019228523A1 (PCT/CN2019/089635)
Authority
WO
WIPO (PCT)
Prior art keywords
item
cloud data
point cloud
measured
image
Prior art date
Application number
PCT/CN2019/089635
Other languages
English (en)
French (fr)
Inventor
吴飞
彭建林
杨宇
Original Assignee
上海微电子装备(集团)股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海微电子装备(集团)股份有限公司
Publication of WO2019228523A1 publication Critical patent/WO2019228523A1/zh

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10012: Stereo images

Definitions

  • The embodiments of the present application relate to the technical field of image recognition and image processing, for example, to a method, a device, a storage medium, and a robot for determining the spatial position and form of an object.
  • The spatial position of an object is its specific position in spatial coordinates.
  • The spatial form of an object is the form in which the object occupies that spatial coordinate position.
  • The embodiments of the present application provide a method, a device, a storage medium, and a robot for determining the spatial position and form of an object, which can achieve the effect of determining the spatial position and form of an item to be measured by processing and analyzing images acquired through a binocular vision device.
  • An embodiment of the present application provides a method for determining the spatial position and form of an object.
  • The method includes: acquiring binocular vision images of the item to be measured and of a standard mark through a binocular vision device arranged above the item; correcting and fitting the binocular vision image of the item to obtain a depth image of the item in the world coordinate system; and determining a point cloud data image of the item to be measured according to the depth image of the item to be measured in the world coordinate system.
  • The method further includes: determining the grasping position and grasping posture of a robot arm according to the position data and the form data of the item to be measured, so as to control the robot arm to grasp the item to be measured.
  • Before the binocular vision images of the item to be measured and the standard mark are acquired through the binocular vision device, the method further includes: selecting a fixed structure in the carrying space of the item to be measured as the standard mark, or installing a mark in that carrying space as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device through the positional relationship between the binocular vision device and the standard mark.
  • Obtaining the depth image of the item to be measured in the world coordinate system includes: determining the position of the binocular vision device using the binocular vision image of the standard mark; determining world-coordinate-system correction parameters for the binocular vision image of the item according to the positional relationship between the binocular vision device and the standard mark; converting the binocular vision image of the item into the world coordinate system according to those parameters; and then performing depth image fitting to obtain the depth image of the item to be measured in the world coordinate system.
  • Alternatively, correcting and fitting the binocular vision image of the item to be measured includes: performing depth image fitting on the binocular vision image of the standard mark and the binocular vision image of the item to obtain respective primary depth images; determining world-coordinate-system correction parameters for the primary depth image of the item according to the positional relationship between the binocular vision device and the standard mark; and applying the world-coordinate-system correction to the primary depth image of the item to obtain the depth image of the item to be measured in the world coordinate system.
  • Before the vertical space statistical method is used to determine the upper-surface point cloud data of the item to be measured, the method further includes: filtering out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image.
  • Using the vertical space statistical method to determine the upper-surface point cloud data of the item to be measured then includes: determining the upper-surface point cloud data of the item from the foreground point cloud data image using the vertical space statistical method.
  • Filtering out the background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image, includes determining the three-color difference value of each point using the formula T = |R_point - G_point| - |G_point - B_point|,
  • where R_point is the red value of the RGB color of the point cloud data,
  • G_point is the green value of the RGB color of the point cloud data,
  • and B_point is the blue value of the RGB color of the point cloud data,
  • and T is the three-color difference value.
  • When the three-color difference value is smaller than the background-filtering threshold, the corresponding point cloud data is determined to be background point cloud data and is filtered out.
  • Using the vertical space statistical method to determine the upper-surface point cloud data of the item to be measured from the foreground point cloud data image includes: computing the statistical distribution of the vertical data of all foreground point cloud data; determining the number of foreground points in each vertical data interval of the distribution; determining the median of the vertical data interval containing the most foreground points; and
  • taking the point cloud data whose vertical data fall within a set range as the upper-surface point cloud data of the item to be measured, where the set range runs from a first value, the median of that vertical data interval minus a preset controllable value, to a second value, the median of that vertical data interval plus the preset controllable value.
  • The preset controllable value is determined as follows: the vertical data of all foreground point cloud data are counted to determine the standard deviation, and a set multiple of the standard deviation is taken as the preset controllable value.
  • Determining the upper-surface center position of the item to be measured from the upper-surface point cloud data, as the position data of the item, includes: determining the center position from the average of the spatial coordinates of all upper-surface point cloud data of the item to be measured.
  • Determining the form data from the fitted surface of the upper-surface point cloud data of the item to be measured, and the distance between the upper-surface center position and the boundary positions of the upper-surface point cloud data, includes: performing plane fitting on the spatial coordinates of the upper-surface point cloud data to determine the upper surface of the item to be measured; determining the normal vector of the upper surface; and deriving from the normal vector the twist angle Rx of the item about the X axis and the twist angle Ry about the Y axis in the world coordinate system.
  • In addition, the upper-surface point cloud data of the item is projected onto the XOY plane; the minimum of the distances between the projected position of the upper-surface center and the projected positions of the boundary points of the upper-surface point cloud data is determined; the twist angle Rz of the item about the Z axis is determined from the direction of that minimum; and Rx, Ry, and Rz are taken as the form data.
  • An embodiment of the present application further provides a device for determining the spatial position and form of an object.
  • The device includes: a binocular vision image acquisition module configured to acquire binocular vision images of the item to be measured and of a standard mark through the binocular vision device, where the binocular vision device is arranged above the item to be measured;
  • a point cloud data image determination module configured to correct and fit the binocular vision image of the item according to the positional relationship between the standard mark and the item and the binocular vision image of the standard mark, to obtain a depth image of the item in the world coordinate system, and to determine the point cloud data image of the item from that depth image; an upper-surface point cloud data screening module configured to determine the upper-surface point cloud data of the item using a vertical space statistical method; and a position data and form data determination module configured to determine the upper-surface center position of the item from the upper-surface point cloud data, as the position data of the item, and to determine the form data of the item from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the boundary positions of the upper-surface point cloud data.
  • An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored.
  • When the program is executed by a processor, the method for determining the spatial position and form of an object according to an embodiment of the present invention is implemented.
  • An embodiment of the present application provides a binocular vision robot, including a binocular vision device, a standard mark, a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, the method for determining the spatial position and form of an object according to any embodiment of the present application is implemented.
  • In the technical solution of the embodiments, binocular vision images of the item to be measured and a standard mark are acquired through a binocular vision device arranged above the item to be measured; according to the positional relationship between the standard mark and the item, and the binocular vision image of the standard mark, the binocular vision image of the item is corrected and fitted to obtain a depth image of the item in the world coordinate system, and the point cloud data image of the item is determined from that depth image.
  • The upper-surface center position of the item is determined from the upper-surface point cloud data, as the position data of the item; and the form data of the item is determined from the fitted surface of the upper-surface point cloud data of the item and
  • the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item. In this way, after the binocular vision device acquires images of the item, they are processed and analyzed to achieve the effect of determining the spatial position and form of the item to be measured.
  • FIG. 1 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 1 of the present application.
  • FIG. 2 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 2 of the present application.
  • FIG. 3 is a schematic diagram of the statistical distribution of point cloud data according to Embodiment 2 of the present application.
  • FIG. 4 is a schematic structural diagram of a device for determining the spatial position and form of an object according to Embodiment 3 of the present application.
  • FIG. 5a is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • FIG. 5b is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • FIG. 5c is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • FIG. 6 is a schematic diagram of a method for determining Rz in the object spatial form data according to Embodiment 2 of the present application.
  • FIG. 1 is a flowchart of the method for determining the spatial position and form of an object provided in Embodiment 1 of the present application. This embodiment is applicable to locating an item to be measured and determining its form, and the method may be performed by the device for determining the spatial position and form of an object provided by the embodiments of the present application.
  • The device can be implemented in software and/or hardware, and can be integrated into a binocular vision robot.
  • As shown in FIG. 1, the method for determining the spatial position and form of an object includes the following steps.
  • S110: Acquire binocular vision images of the item to be measured and a standard mark through the binocular vision device, where the binocular vision device is arranged above the item to be measured.
  • In this embodiment, the binocular vision device can be used to obtain the spatial position and form of an item to be measured within a fixed range.
  • For example, on a production or assembly line, the binocular vision device can be installed at a fixed position directly above the operation platform, with the center of the binocular vision device corresponding to the center of the operation platform, so that the image collected by the binocular vision device is known to face the center of the platform.
  • The binocular vision device can also be installed at a non-fixed position, for example on the head of a mobile robot or on the robot arm of a production or assembly line, which makes the placement of the binocular vision device more flexible; compared with the former arrangement, however, this approach makes the image correction process somewhat more complicated.
  • If the position of the binocular vision device is fixed, the image obtained by the binocular vision device can be position-corrected to obtain the position of the item to be measured in the world coordinate system.
  • For a binocular vision device in a movable position, the collected binocular vision images must contain the standard mark in order to determine the position of the object in the world coordinate system, or its relative position with respect to the robot itself or the robot arm.
  • Before the binocular vision images of the item to be measured and the standard mark are acquired through the binocular vision device, the method further includes: selecting a fixed structure in the carrying space of the item to be measured as the standard mark, or installing a mark in that carrying space as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device through the positional relationship between the binocular vision device and the standard mark.
  • The advantage of this arrangement is that the image can be corrected and fitted against a fixed or preset standard mark.
  • The standard mark is a mark set at a fixed position that is used to calibrate the binocular vision image within the binocular vision image.
  • For example, it can be a pair of crossed arrows pointing due north and due east.
  • The binocular vision device is arranged above the item to be measured. This arrangement makes it possible to obtain an image of the upper surface of the item, because when a robot or robot arm grasps an item, the grasp is usually performed from above, with the grasping angle determined by the form of the item to be measured. If the robot can grasp items laterally, the position and form of the front surface of the item can be obtained and used instead.
  • S120: According to the positional relationship between the standard mark and the item to be measured, and the binocular vision image of the standard mark, correct and fit the binocular vision image of the item to be measured to obtain a depth image of the item in the world coordinate system, and determine the point cloud data image of the item from that depth image.
  • In this step, the standard mark is again the mark set at a fixed position that calibrates the binocular vision image within the binocular vision image.
  • The depth image may be an image in which each pixel carries depth information.
  • In this embodiment, the corrected depth image may carry top-down depth information, with the Z-axis position of the binocular vision device used as the starting point.
  • The depth information may be the vertical distance (Z-axis distance) between each pixel constituting the image and the plane in which the center of the binocular vision device lies.
  • The point cloud data image displays each pixel in the form of a point cloud; it can be converted from the depth image according to a specific algorithm (one such conversion is sketched below).
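The patent leaves the depth-to-point-cloud conversion algorithm unspecified. A minimal sketch of one common approach, assuming a pinhole camera model with known intrinsics; the function name and the parameters fx, fy, cx, cy are illustrative, not taken from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (Z distance per pixel) into an N x 3 point
    cloud using a pinhole camera model; pixels with depth 0 are treated as
    invalid and dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth.astype(np.float64)
    valid = z > 0
    x = (u - cx) * z / fx                           # lateral offset from the optical axis
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```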
  • Obtaining the depth image of the item to be measured in the world coordinate system includes: determining the position of the binocular vision device using the binocular vision image of the standard mark; determining world-coordinate-system correction parameters for the binocular vision image of the item according to the positional relationship between the binocular vision device and the standard mark; converting the binocular vision image of the item into the world coordinate system according to those parameters; and then performing depth image fitting to obtain the depth image of the item to be measured in the world coordinate system.
  • Alternatively, correcting and fitting the binocular vision image of the item to be measured includes: performing depth image fitting on the binocular vision image of the standard mark and the binocular vision image of the item to obtain respective primary depth images; determining world-coordinate-system correction parameters for the primary depth image of the item according to the positional relationship between the binocular vision device and the standard mark; and applying the world-coordinate-system correction to the primary depth image of the item to obtain the depth image of the item to be measured in the world coordinate system.
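The patent leaves the form of the world-coordinate-system correction parameters open. In practice such a correction is often expressed as a rigid transform estimated from the standard mark's known pose; a minimal sketch under that assumption, with hypothetical parameter names R and t:

```python
import numpy as np

def apply_world_correction(points, R, t):
    """Apply a world-coordinate-system correction to a primary point cloud:
    a rigid transform with rotation R (3 x 3) and translation t (3,)."""
    return points @ R.T + t
```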
  • The above two approaches describe, respectively, first correcting the pictures obtained by the two cameras of the binocular vision device into the world coordinate system and then performing the fitting, and first performing the fitting and then applying the world coordinate system correction.
  • The following introduces the world coordinate system correction and image fitting processes:
  • Since the original binocular images are taken independently by the left-eye camera and the right-eye camera, and the camera lenses sit at different positions, the two cameras exhibit a certain amount of distortion. All pixels within the field of view need to be fitted, and the fitted compensation amounts are supplied to the camera program according to the measured data.
  • In addition, the correspondence between an object point in the spatial coordinate system and its image point on the image plane is determined. In one embodiment, the internal parameters of the left-eye camera and the right-eye camera are adjusted to be identical.
  • The internal parameters include the camera's internal geometric and optical parameters, and the external parameters include the transformation between the left-eye and right-eye camera coordinate systems and the world coordinate system.
  • The fitting here is used to correct the distortion produced by the lens.
  • This lens distortion can be seen in the original images: for example, a straight line in the scene becomes a curve in the original left- and right-eye images, and the effect is especially noticeable at the corners of those images.
  • The purpose of the fitting is to correct this type of distortion.
  • During image processing, boundary extraction is performed on the object image.
  • Algorithms that can be used include Laplacian-of-Gaussian filtering.
  • The boundary is an obvious and primary feature for identifying the object, and it lays the foundation for the subsequent algorithms. The processing also includes image pre-processing and feature extraction.
  • Pre-processing mainly includes image contrast enhancement, random noise removal, low-pass filtering and image enhancement, pseudo-color processing, and so on.
  • Feature extraction extracts the features commonly used for matching, mainly point features, line features, and region features.
  • Edge detection is an optional feature; it uses changes in brightness to match features. This function is very useful when the cameras in the system have an automatic gain function: if the automatic gain of each camera varies inconsistently, the absolute brightness between the images is inconsistent, but although the absolute brightness differs, the change in brightness is a constant. Edge detection is therefore suitable for environments where the lighting varies greatly. Although edge detection can improve the recognition of an item's edges, it introduces an additional processing step, so the trade-off between the improvement in the result and the processing speed must be weighed when using this function.
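As a hedged illustration of the Laplacian-of-Gaussian boundary-extraction step named above (smooth first, then take the Laplacian), assuming OpenCV is available; the input file name and the threshold are illustrative only:

```python
import cv2
import numpy as np

img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)       # hypothetical input image
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)             # low-pass step: remove random noise
log = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)        # Laplacian of the smoothed image
edges = (np.abs(log) > 0.05 * np.abs(log).max()).astype(np.uint8) * 255  # crude boundary mask
```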
  • During image processing, the stereoscopic imaging principle of binocular vision is used; binocular stereo vision three-dimensional measurement is based on the parallax principle.
  • The baseline distance B is the distance between the projection centers of the two cameras, and the camera focal length is f.
  • Suppose the two cameras observe the same feature point P(x_c, y_c, z_c) of a spatial object at the same moment, and images of point P are obtained on the "left eye" and "right eye", with image coordinates P_left = (X_left, Y_left) and P_right = (X_right, Y_right). If the images of the two cameras lie in the same plane, the Y image coordinates of the feature point P are identical, i.e. Y_left = Y_right = Y, and the triangle geometry gives X_left = f * x_c / z_c, X_right = f * (x_c - B) / z_c, and Y = f * y_c / z_c. The disparity is then D = X_left - X_right, from which the three-dimensional coordinates of the feature point P in the camera coordinate system can be computed as x_c = B * X_left / D, y_c = B * Y / D, z_c = B * f / D.
  • Hence, as long as a matching point on the right-eye image plane can be found for any point on the left-eye image plane, the three-dimensional coordinates of the point can be determined.
  • This is a complete point-to-point operation: every point on the image plane that has a corresponding matching point can take part in the above computation, thereby yielding the three-dimensional coordinates corresponding to the matched points.
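A direct transcription of the parallax equations above into code (a sketch; the function name is illustrative, and image coordinates are assumed to be measured from the principal point):

```python
import numpy as np

def triangulate(x_left, x_right, y, B, f):
    """Recover camera-frame 3D coordinates from one matched point pair:
    D = X_left - X_right, then (x_c, y_c, z_c) = (B*X_left/D, B*Y/D, B*f/D)."""
    d = x_left - x_right                 # disparity
    if d <= 0:
        raise ValueError("non-positive disparity")
    return np.array([B * x_left / d, B * y / d, B * f / d])
```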
  • In addition, during stereo matching, the correspondence between features is established from computations on the selected features, so that the image points of the same physical point in space are matched across the different images. Stereo matching includes three basic steps:
  • 1) selecting, from one image of the stereo pair (for example the left image), an image feature corresponding to an actual physical structure; 2) determining, in the other image (for example the right image), the corresponding image feature of the same physical structure; 3) determining the relative position between these two features to obtain the disparity.
  • Step 2) is the key to achieving matching.
  • After the disparity image is obtained by stereo matching, the depth image can be determined and the 3D information of the scene recovered. Stereo matching establishes the correlation between images using the sum-of-absolute-differences (absolute correlation deviation sum) method.
  • The principle of this method is as follows:
  • For each pixel of the reference image, a neighborhood of a given square size is selected, and this neighborhood is compared, along the same row, with a series of neighborhoods in the other image to find the best match. The absolute-difference correlation is computed as
  • min over d in [d_min, d_max] of the sum over the m x m mask of |I_left(x+i, y+j) - I_right(x+i-d, y+j)|,
  • where d_min and d_max are the minimum and maximum disparity, m is the mask (template) size, and I_left and I_right are the left and right images.
  • During image processing, the correlation between the images is calculated from the binocularly fitted images of the object.
  • The correlation and depth between the images are calculated according to the binocular disparity principle formula and the sum-of-absolute-differences method, forming a depth map or spatial point cloud data.
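A brute-force sketch of the sum-of-absolute-differences matching described above, for a rectified (row-aligned) image pair; it is didactic rather than fast, and the function name is illustrative:

```python
import numpy as np

def sad_disparity(I_left, I_right, d_min, d_max, m):
    """For every reference pixel, slide an m x m mask along the same row of
    the other image over disparities [d_min, d_max] and keep the disparity
    with the smallest absolute-difference sum."""
    h, w = I_left.shape
    r = m // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L, R = I_left.astype(np.int32), I_right.astype(np.int32)
    for y in range(r, h - r):
        for x in range(r + d_max, w - r):
            block = L[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(block - R[y - r:y + r + 1, x - d - r:x - d + r + 1]).sum()
                     for d in range(d_min, d_max + 1)]
            disp[y, x] = d_min + int(np.argmin(costs))
    return disp
```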
  • S130: Determine the upper-surface point cloud data of the item to be measured using the vertical space statistical method.
  • After the point cloud data image is obtained, the longitudinal (Z-axis) datum of each point can be determined.
  • By computing statistics over the longitudinal data, the number of points within each height range of the current point cloud image can be obtained.
  • If the background is a plane, such as a console, the number of points may be largest within the longitudinal data range of the background; moreover, among all the point cloud data, the Z-axis data of the background points is also the largest or smallest.
  • Through these statistics, the background point cloud data can be filtered out.
  • By then counting the foreground point cloud data within a given range, the point cloud data of the upper surface of the item to be measured can be determined. If the upper surface is horizontal, the point cloud data range of the upper surface is relatively narrow; if the upper surface is inclined, it is relatively broad.
  • S140: Determine the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item, as the position data of the item; and determine the form data of the item from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item.
  • In one embodiment, the upper-surface center position may be expressed in the world coordinate system determined by the standard mark, for example as (X, Y, Z).
  • For example, the center position of the upper surface may be determined from the center of the geometric shape formed by projecting the upper-surface point cloud data onto the XOY plane.
  • The form data is determined from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the boundary positions of the upper-surface point cloud data.
  • The form data can be expressed through the three rotation angles Rx, Ry, and Rz of the item to be measured about the X, Y, and Z axes.
  • The fitted surface may be a flat surface or a curved surface.
  • After the normal vector of the upper surface is determined, Ry can be determined from the angle between the normal vector and the XOZ plane,
  • and Rx from the angle between the normal vector and the YOZ plane.
  • Rz is then determined from the angle between the XOY plane and the vector formed by the upper-surface center position and the nearest upper-surface boundary point (a sketch of the plane-fitting step follows).
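A sketch of the plane-fitting step and the derivation of the tilt angles from the normal vector; the SVD-based fit and the exact angle convention are one plausible reading of the text, not taken verbatim from the patent:

```python
import numpy as np

def upper_surface_pose(points):
    """Fit a plane to the upper-surface point cloud (N x 3, world frame) by
    SVD and derive the center position plus the tilt angles Rx and Ry."""
    center = points.mean(axis=0)                          # upper-surface center position
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    normal = vt[2]                                        # direction of least variance
    if normal[2] < 0:                                     # make the normal point upward
        normal = -normal
    rx = np.arctan2(normal[1], normal[2])                 # twist about the X axis
    ry = np.arctan2(normal[0], normal[2])                 # twist about the Y axis
    return center, normal, rx, ry
```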
  • In the technical solution of this embodiment, binocular vision images of the item to be measured and a standard mark are acquired through a binocular vision device arranged above the item to be measured; according to the positional relationship between the standard mark and the item, and the binocular vision image of the standard mark, the binocular vision image of the item is corrected and fitted to obtain a depth image of the item in the world coordinate system, and the point cloud data image of the item is determined from that depth image.
  • The upper-surface center position of the item is determined from the upper-surface point cloud data, as the position data of the item; and the form data of the item is determined from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item.
  • In this way, after the binocular vision device acquires the image of the item to be measured, it is processed and analyzed to achieve the effect of determining the spatial position and form of the item to be measured.
  • In one embodiment, after the position data and the form data are determined, the method further includes: determining the grasping position and grasping posture of a robot arm according to the position data and the form data, so as to control the robot arm to grasp the item to be measured.
  • In one embodiment, the position of the robot arm can be corrected into the same world coordinate system as the item to be measured, so that the movement distance, direction, and even trajectory of the arm can be determined.
  • When the arm moves to the position of the item to be measured, its gripper can be controlled to grasp the item in a form adapted to the item.
  • The advantage of this arrangement is that, once the position of the item has been identified, the item can be grasped smoothly, and the grasp is tighter, so that accidents such as the item slipping out of the grasp can be avoided.
  • FIG. 2 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 2 of the present application. On the basis of the above embodiment, after the point cloud data image of the item to be measured is determined from the depth image of the item in the world coordinate system and before the vertical space statistical method is used to determine the upper-surface point cloud data of the item,
  • the method further includes: filtering out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image.
  • Correspondingly, determining the upper-surface point cloud data of the item using the vertical space statistical method includes: determining the upper-surface point cloud data of the item from the foreground point cloud data image using the vertical space statistical method.
  • As shown in FIG. 2, the method for determining the spatial position and form of an object includes the following steps.
  • S210: Acquire binocular vision images of the item to be measured and a standard mark through the binocular vision device, where the binocular vision device is arranged above the item to be measured.
  • S220: According to the positional relationship between the standard mark and the item to be measured, and the binocular vision image of the standard mark, correct and fit the binocular vision image of the item to obtain a depth image of the item in the world coordinate system, and determine the point cloud data image of the item from that depth image.
  • S230: Filter out the background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image.
  • In this embodiment, the three-color difference degree may be the mutual difference between the values of the three primary colors (red, green, and blue) in the pixel color of each datum in the point cloud data image.
  • This arrangement mainly allows background point cloud data of similar color to be filtered out; after filtering, a point cloud data image containing only foreground point cloud data is obtained.
  • Filtering out the background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image, to obtain a foreground point cloud data image, includes determining the three-color difference value of each point using the formula:
  • T = |R_point - G_point| - |G_point - B_point|
  • where R_point is the red value of the RGB color of the point cloud data, G_point is the green value, and B_point is the blue value,
  • and T is the three-color difference value; when T is smaller than the background-filtering threshold, the corresponding point cloud data is determined to be background point cloud data and is filtered out.
  • This arrangement helps filter out the point cloud data of the background as well as other reflective or isolated jumping noise points, and improves the accuracy of the determination of the upper-surface point cloud data (a minimal sketch of the filter follows).
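A minimal sketch of the three-color difference filter, assuming the point cloud is stored as an N x 6 array (x, y, z, R, G, B); the threshold is scene-dependent and must be chosen empirically:

```python
import numpy as np

def filter_background(points_rgb, threshold):
    """Apply T = |R - G| - |G - B| per point and drop points whose T falls
    below the background-filtering threshold (treated as background)."""
    r = points_rgb[:, 3].astype(np.int32)
    g = points_rgb[:, 4].astype(np.int32)
    b = points_rgb[:, 5].astype(np.int32)
    t = np.abs(r - g) - np.abs(g - b)    # three-color difference value
    return points_rgb[t >= threshold]    # keep foreground points only
```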
  • S240: Use the vertical space statistical method to determine the upper-surface point cloud data of the item to be measured from the foreground point cloud data image.
  • In one embodiment, this includes: computing the statistical distribution of the vertical data of all foreground point cloud data; determining the number of foreground points in each vertical data interval; determining the median of the vertical data interval containing the most foreground points; and taking the point cloud data whose vertical data fall within a set range as the upper-surface point cloud data of the item, where the set range runs from a first value, the median of that interval minus a preset controllable value, to a second value, the median of that interval plus the preset controllable value.
  • FIG. 3 is a schematic diagram of the statistical distribution of point cloud data according to Embodiment 2 of the present application.
  • As shown in FIG. 3, the horizontal axis is the vertical datum of the point cloud data, which can be understood as height, in meters; the vertical axis is the number of point cloud data in each data interval, i.e. the number of points whose vertical data fall in the current vertical data interval.
  • For example, the data interval is 0.02.
  • In FIG. 3, the number of points in the interval 0.414-0.416 is the largest, so the upper-surface point cloud data can be determined to be the point cloud data within a certain range centered on 0.415.
  • In one embodiment, the preset controllable value is determined as follows: the vertical data of all foreground point cloud data are counted to determine the standard deviation, and a set multiple of the standard deviation is taken as the preset controllable value.
  • To identify the upper plane of the target object, the colored points of the target object are taken and statistically distributed along the vertical Z direction, using the value μ of the point cloud data (μ is the statistical mean) and the value at the statistical high-frequency peak (Peak).
  • To recognize the height of the upper plane of the target object and the position of the target object, the Z coordinates of all colored points are extracted from the three-dimensional coordinate values (X, Y, and Z) of the colored points of the upper plane of the target object obtained by the binocular camera,
  • and these coordinate values are statistically distributed along the vertical Z direction.
  • The value at the statistical high-frequency peak (Peak) of the point cloud data is regarded as the nominal height of the upward-facing (Z) surface of the target object,
  • and the mean μ gives the error range of that surface height.
  • The standard deviation σ is used at the same time to control the selected range;
  • in practice, taking the points within a range between 1σ and 6σ as the upper plane of the target object is a typical data distribution effect, as shown in FIG. 3.
  • In FIG. 3, the thick line represents the median μ,
  • and the dashed interval represents the +/- σ standard deviation.
  • These points are considered to constitute the main imaging surface, or upper surface, of the target object; at the same time, this method removes the deviations caused by the point cloud data of reflective points, outliers, and shadow points (a sketch of these statistics follows).
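A sketch of the vertical space statistics just described; the bin width and the standard-deviation multiple are illustrative defaults, not values fixed by the patent:

```python
import numpy as np

def upper_surface_points(foreground, bin_width=0.02, sigma_multiple=1.0):
    """Histogram the Z values of the foreground cloud, take the center of
    the most populated bin as the nominal upper-surface height, and keep
    points within +/- (sigma_multiple * std) of it."""
    z = foreground[:, 2]
    counts, edges = np.histogram(z, bins=np.arange(z.min(), z.max() + bin_width, bin_width))
    k = np.argmax(counts)                   # most populated vertical interval
    mid = 0.5 * (edges[k] + edges[k + 1])   # median of that interval
    ctrl = sigma_multiple * z.std()         # preset controllable value
    return foreground[(z >= mid - ctrl) & (z <= mid + ctrl)]
```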
  • S250: Determine the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item, as the position data of the item; and determine the form data of the item from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item.
  • On the basis of the foregoing embodiment, this embodiment provides a method for determining the foreground point cloud data.
  • With this method, the interference caused by the point cloud data of reflective points, outliers, and shadow points can be removed, improving the accuracy of determining the upper-surface point cloud data of the item to be measured.
  • In one embodiment, determining the upper-surface center position of the item from the upper-surface point cloud data, as the position data, includes: determining the upper-surface center position of the item from the average of the spatial coordinates of all upper-surface point cloud data, as the position data.
  • Determining the form data from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the boundary positions of the upper-surface point cloud data includes: performing plane fitting on the spatial coordinates of the upper-surface point cloud data to determine the upper surface of the item; determining the normal vector of the upper surface of the item;
  • determining from the normal vector the twist angle Rx of the item about the X axis and the twist angle Ry about the Y axis in the world coordinate system; and projecting the upper-surface point cloud data of the item onto the XOY plane;
  • determining the minimum of the distances between the projected position of the upper-surface center and the projected boundary positions; determining the twist angle Rz of the item about the Z axis from the direction of that minimum; and taking Rx, Ry, and Rz as the form data of the item. The advantage of this arrangement is that it improves the accuracy and simplicity of determining the six spatial parameters of the item to be measured.
  • FIG. 6 is a schematic diagram of a method for determining Rz in the object spatial form data according to Embodiment 2 of the present application.
  • As shown in FIG. 6, after the upper-surface point cloud data of the item has been determined, it can be projected onto the XOY plane (the Z axis coincides with the point O and is not shown in FIG. 6), converting the three-dimensional points to a two-dimensional plane.
  • The originally determined center point can be used as the center point after projection.
  • After the center point is determined, all peripheral points of the set are extracted to form a convex polygon (convex hull in 2D), and the vertices of the convex polygon are marked as boundary points (in FIG. 6, only some are marked).
  • The center point forms a triangle with any two adjacent polygon vertices, and any two adjacent vertices form a line segment.
  • In FIG. 6, H4 is the height from the center point to the boundary within the triangle formed by the center point and two boundary points.
  • FIG. 6 shows the five height values H1, H2, H3, H4, and H5, of which H4 is the minimum and H3 the maximum.
  • The shortest distance from the center point to any segment formed by two adjacent vertices (here H4) is found: among the edges of the polygon enclosing the upper-surface point cloud, the foot of the perpendicular of the shortest edge distance is taken. Once the shortest distance and its direction from the upper-surface center to the boundary polygon are known, H4 is vectorized, and the angle the vector H4 forms with the X axis or the Y axis is the rotation angle Rz of the item about the Z axis; the vector H4 can therefore represent the Rz direction of the upper-surface center point.
  • From the upper-surface center point of the target object, the vector with the shortest distance from the center to the boundary polygon (the point of shortest distance from the center to a boundary segment) is taken, and the Rz angle of the target object is determined by locating the direction of the shortest edge against the XOZ plane (or YOZ plane); a sketch of this convex-hull step follows.
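A sketch of the convex-hull step for Rz, assuming SciPy is available; reading Rz off the angle against the X axis is one plausible convention (the patent measures the shortest-edge direction against the XOZ or YOZ plane):

```python
import numpy as np
from scipy.spatial import ConvexHull

def rz_from_boundary(points_xy, center_xy):
    """Build the 2D convex hull of the projected upper-surface points, find
    the foot of the shortest perpendicular from the projected center to any
    hull segment, and return the angle of that shortest vector."""
    verts = points_xy[ConvexHull(points_xy).vertices]
    best_d, best_vec = np.inf, None
    for a, b in zip(verts, np.roll(verts, -1, axis=0)):   # consecutive hull segments
        ab = b - a
        t = np.clip(np.dot(center_xy - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        vec = a + t * ab - center_xy                       # center -> closest point on segment
        d = np.linalg.norm(vec)
        if d < best_d:
            best_d, best_vec = d, vec
    return np.arctan2(best_vec[1], best_vec[0])            # Rz as an angle to the X axis
```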
  • FIG. 4 is a schematic structural diagram of a device for determining the spatial position and form of an object according to Embodiment 3 of the present application.
  • As shown in FIG. 4, the device for determining the spatial position and form of an object includes: a binocular vision image acquisition module 410 configured to acquire binocular vision images of the item to be measured and of a standard mark through the binocular vision device,
  • where the binocular vision device is arranged above the item to be measured;
  • a point cloud data image determination module 420 configured to correct and fit the binocular vision image of the item to be measured according to the positional relationship between the standard mark and the item, and the binocular vision image of the standard mark,
  • to obtain a depth image of the item in the world coordinate system, and to determine the point cloud data image of the item from the depth image of the item in the world coordinate system;
  • an upper-surface point cloud data screening module 430 configured to determine the upper-surface point cloud data of the item using a vertical space statistical method; and
  • a position data and form data determination module 440 configured to determine,
  • from the upper-surface point cloud data of the item, the upper-surface center position of the item as the position data of the item; and to determine, from
  • the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item, the form data of the item.
  • In this technical solution, binocular vision images of the item to be measured and a standard mark are acquired through a binocular vision device arranged above the item; according to the positional relationship between the standard mark and the item, and the binocular vision image of the standard mark, the binocular vision image of the item is corrected and fitted to obtain a depth image of the item in the world coordinate system, and the point cloud data image of the item is determined from that depth image.
  • The upper-surface center position of the item is determined from the upper-surface point cloud data, as the position data of the item; and the form data of the item is determined from the fitted surface of the upper-surface point cloud data of the item and
  • the distance between the upper-surface center position of the item and the boundary positions of the upper-surface point cloud data of the item. In this way, after the device acquires images of the item to be measured, they are processed and analyzed to achieve the effect of determining the spatial position and form of the item.
  • The above product can execute the method provided by any embodiment of the present application, and has the corresponding functional modules and beneficial effects for executing the method.
  • An embodiment of the present application further provides a storage medium containing computer-executable instructions.
  • When executed by a computer processor, the instructions perform a method for determining the spatial position and form of an object, the method including: acquiring binocular vision images of the item to be measured and of a standard mark through a binocular vision device, where the binocular vision device is arranged above the item to be measured; and, according to the positional relationship between the standard mark and the item, and the binocular vision image of the standard mark,
  • correcting and fitting the binocular vision image of the item to obtain a depth image of the item in the world coordinate system, and determining the point cloud data image of the item from the depth image of the item in the world coordinate system.
  • The storage medium may be any type of memory device or storage device.
  • The term "storage medium" is intended to include: installation media, such as Compact Disc Read-Only Memory (CD-ROM), floppy disks, or magnetic tape devices; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), and Rambus Random Access Memory (RAM); non-volatile memory, such as flash memory and magnetic media (for example, hard disks or optical storage); and registers or other similar types of memory elements.
  • The storage medium may further include other types of memory or combinations thereof.
  • The storage medium may be located in the computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network such as the Internet.
  • The second computer system may provide program instructions to the computer for execution.
  • The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected through a network.
  • The storage medium may store program instructions (for example, embodied as a computer program) executable by one or more processors.
  • A storage medium containing computer-executable instructions provided in the embodiments of the present application is not limited to the operations of determining the spatial position and form of an object as described above, and may also perform relevant operations in the method for determining the spatial position and form of an object provided by any embodiment of the present application.
  • An embodiment of the present application provides a binocular vision robot, which includes a binocular vision device, an operation platform, a standard mark on the operation platform, a robot arm, a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, a method for determining the spatial position and form of an object according to any embodiment of the application is implemented.
  • FIG. 5a is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • As shown in FIG. 5a, the robot includes the binocular vision device 10, the operation platform 20, the standard mark 30 on the operation platform, the robot arm 50, a memory, a processor, and a computer program stored in the memory and executable on the processor.
  • When the processor executes the computer program, a method for determining the spatial position and form of an object according to any embodiment of the present application is implemented.
  • FIG. 5b is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • As shown in FIG. 5b, setting the binocular vision device 10 on the gripper can make binocular vision image acquisition more flexible. It can be used when there are many items to be measured, or when items pass along one side.
  • When the calculated accuracy does not meet the standard, or the noise rate after calculation is too high, the six spatial parameters of the object can be located from another angle by controlling the movement of the gripper. The results for the six spatial parameters obtained from multiple positions can also be compared and confirmed against each other, thereby improving the accuracy of the determination of the spatial position and form of the item to be measured by the technical solution provided in the embodiments of the present application.
  • FIG. 5c is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application.
  • As shown in FIG. 5c, the binocular vision device is set on the body of the robot arm, which avoids having to provide a mounting bracket for the binocular vision device as in the first solution; moreover, when the robot arm is moved to another operation platform, the binocular vision image can still be acquired through the binocular vision device, so there is no need to install a binocular vision device for each operation platform, achieving the effect of saving system cost.
  • In one embodiment, the binocular vision device can be set on the gripper of the robot arm, or at a fixed position on the robot arm, as long as an image of the upper surface of the item to be measured and the front of the operation platform can be obtained.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present application discloses a method, a device, a storage medium and a robot for determining the spatial position and form of an object. The method includes: acquiring binocular vision images of an item to be measured and of a standard mark through a binocular vision device; correcting and fitting the binocular vision image of the item to be measured to obtain a depth image of the item to be measured in the world coordinate system, and determining a point cloud data image of the item to be measured from the depth image; determining the upper-surface point cloud data and the upper-surface center position, as the position data of the item to be measured; and determining the form data of the item to be measured from the fitted surface of the upper-surface point cloud data of the item to be measured and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured.

Description

Method, device, storage medium and robot for determining the spatial position and form of an object
This application claims priority to Chinese patent application No. 201810549518.9, filed with the Chinese Patent Office on May 31, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the technical field of image recognition and image processing, for example, to a method, a device, a storage medium and a robot for determining the spatial position and form of an object.
Background
Locating objects in images and determining their form by means of image recognition and image processing has become one of the important factors affecting the development of electronic technology.
The spatial position of an object is its specific position in spatial coordinates; the spatial form of an object is the form in which the object occupies that spatial coordinate position. In industrial production, when an industrial robot or a robot arm grasps standard or non-standard parts to perform installation or assembly, if the spatial position and form of the item have not been determined and a purely mechanical operating procedure is applied instead, the standard or non-standard parts can easily fall off, which reduces industrial production efficiency and may even damage the assembly line or the industrial robot. In daily life, for example with drones and intelligent robots, if the spatial position and form of the object to be measured cannot be determined automatically, human assistance is needed to carry and transport items; without such assistance, normal operation is impossible and business expansion becomes difficult, hindering the development of the electronics industry. Therefore, how to determine the position and form of objects in space has become a technical problem that urgently needs to be solved in this field.
Summary
Embodiments of the present application provide a method, a device, a storage medium and a robot for determining the spatial position and form of an object, which can achieve the effect of determining the spatial position and form of an item to be measured by processing and analyzing images of the item acquired through a binocular vision device.
In a first aspect, an embodiment of the present application provides a method for determining the spatial position and form of an object. The method includes:
acquiring a binocular vision image of an item to be measured and a binocular vision image of a standard mark through a binocular vision device, where the binocular vision device is arranged above the item to be measured;
correcting and fitting the binocular vision image of the item to be measured according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain a depth image of the item to be measured in the world coordinate system, and determining a point cloud data image of the item to be measured from the depth image of the item to be measured in the world coordinate system;
determining the upper-surface point cloud data of the item to be measured using a vertical space statistical method; and
determining the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item to be measured, as the position data of the item to be measured; and determining the form data of the item to be measured from the fitted surface of the upper-surface point cloud data of the item to be measured and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured.
In an embodiment, after the position data and the form data of the item to be measured are determined, the method further includes: determining a grasping position and a grasping posture of a robot arm according to the position data and the form data of the item to be measured, so as to control the robot arm to grasp the item to be measured.
In an embodiment, before the binocular vision image of the item to be measured and the binocular vision image of the standard mark are acquired through the binocular vision device, the method further includes: selecting a fixed structure in the carrying space of the item to be measured as the standard mark, or installing a mark in the carrying space of the item to be measured as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device through the positional relationship between the binocular vision device and the standard mark.
In an embodiment, correcting and fitting the binocular vision image of the item to be measured according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain the depth image of the item to be measured in the world coordinate system, includes: determining the position of the binocular vision device using the binocular vision image of the standard mark; determining world-coordinate-system correction parameters of the binocular vision image of the item to be measured according to the positional relationship between the position of the binocular vision device and the standard mark; and converting the binocular vision image of the item to be measured into the world coordinate system according to those correction parameters, and then performing depth image fitting to obtain the depth image of the item to be measured in the world coordinate system.
In an embodiment, correcting and fitting the binocular vision image of the item to be measured according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain the depth image of the item to be measured in the world coordinate system, includes: performing depth image fitting on the binocular vision image of the standard mark and the binocular vision image of the item to be measured to obtain respective primary depth images; determining world-coordinate-system correction parameters of the primary depth image of the item to be measured according to the positional relationship between the position of the binocular vision device and the standard mark; and performing world-coordinate-system correction on the primary depth image of the item to be measured according to those correction parameters, to obtain the depth image of the item to be measured in the world coordinate system.
In an embodiment, after the point cloud data image of the item to be measured is determined from the depth image of the item to be measured in the world coordinate system, and before the upper-surface point cloud data of the item to be measured is determined using the vertical space statistical method, the method further includes: filtering out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image;
determining the upper-surface point cloud data of the item to be measured using the vertical space statistical method then includes: determining the upper-surface point cloud data of the item to be measured from the foreground point cloud data image using the vertical space statistical method.
In an embodiment, filtering out the background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain the foreground point cloud data image, includes determining the three-color difference value of the point cloud data in the point cloud data image using the following formula:
T = |R_point - G_point| - |G_point - B_point|;
where R_point is the red value of the RGB color of the point cloud data, G_point is the green value of the RGB color of the point cloud data, and B_point is the blue value of the RGB color of the point cloud data;
where T is the three-color difference value; when the three-color difference value is smaller than the background filtering threshold, the corresponding point cloud data is determined to be background point cloud data, and the background point cloud data is filtered out.
In an embodiment, determining the upper-surface point cloud data of the item to be measured from the foreground point cloud data image using the vertical space statistical method includes: computing the statistical distribution of the vertical data of all foreground point cloud data; determining the number of foreground point cloud data in each vertical data interval of the statistical distribution; determining the median of the vertical data interval containing the largest number of foreground point cloud data; and taking the point cloud data whose vertical data fall within a set range as the upper-surface point cloud data of the item to be measured, where the set range is the numerical range formed by a first value, obtained by subtracting a preset controllable value from the median of that vertical data interval, and a second value, obtained by adding the preset controllable value to the median of that vertical data interval.
In an embodiment, the preset controllable value is determined as follows: the vertical data of all foreground point cloud data images are counted to determine the standard deviation, and a set multiple of the standard deviation is taken as the preset controllable value.
In an embodiment, determining the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item to be measured, as the position data of the item to be measured, includes: determining the upper-surface center position of the item to be measured from the average of the spatial coordinates of all the upper-surface point cloud data of the item to be measured, as the position data of the item to be measured.
Determining the form data from the fitted surface of the upper-surface point cloud data of the item to be measured, and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured, includes: performing plane fitting from the spatial coordinates of the upper-surface point cloud data of the item to be measured to determine the upper surface of the item to be measured; determining the normal vector of the upper surface of the item to be measured, and determining from the normal vector the twist angle Rx of the item to be measured about the X axis and the twist angle Ry about the Y axis in the world coordinate system; projecting the upper-surface point cloud data of the item to be measured onto the XOY plane; determining the minimum of the distances between the projected position of the upper-surface center and the projected positions of the boundary points of the upper-surface point cloud data; determining the twist angle Rz of the item to be measured about the Z axis from the direction of that minimum; and taking Rx, Ry and Rz as the form data of the item to be measured.
In a second aspect, an embodiment of the present application further provides a device for determining the spatial position and form of an object. The device includes: a binocular vision image acquisition module configured to acquire a binocular vision image of an item to be measured and a binocular vision image of a standard mark through a binocular vision device, where the binocular vision device is arranged above the item to be measured; a point cloud data image determination module configured to correct and fit the binocular vision image of the item to be measured according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain a depth image of the item to be measured in the world coordinate system, and to determine a point cloud data image of the item to be measured from the depth image of the item to be measured in the world coordinate system; an upper-surface point cloud data screening module configured to determine the upper-surface point cloud data of the item to be measured using a vertical space statistical method; and a position data and form data determination module configured to determine the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item to be measured, as the position data of the item to be measured, and to determine the form data of the item to be measured from the fitted surface of the upper-surface point cloud data of the item to be measured and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured.
In a third aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method for determining the spatial position and form of an object according to the embodiments of the present invention is implemented.
In a fourth aspect, an embodiment of the present application provides a binocular vision robot, including a binocular vision device, a standard mark, a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the method for determining the spatial position and form of an object according to any embodiment of the present application is implemented.
In the technical solution provided by the embodiments of the present application, binocular vision images of the item to be measured and a standard mark are acquired through a binocular vision device arranged above the item to be measured; according to the positional relationship between the standard mark and the item to be measured, and the binocular vision image of the standard mark, the binocular vision image of the item to be measured is corrected and fitted to obtain a depth image of the item to be measured in the world coordinate system, and the point cloud data image of the item to be measured is determined from that depth image; the upper-surface point cloud data of the item to be measured is determined using a vertical space statistical method; the upper-surface center position of the item to be measured is determined from the upper-surface point cloud data, as the position data of the item to be measured; and the form data of the item to be measured is determined from the fitted surface of the upper-surface point cloud data of the item to be measured and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured. In this way, after images of the item to be measured are acquired through the binocular vision device, they are processed and analyzed to achieve the effect of determining the spatial position and form of the item to be measured.
Brief Description of the Drawings
FIG. 1 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 1 of the present application;
FIG. 2 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 2 of the present application;
FIG. 3 is a schematic diagram of the statistical distribution of point cloud data according to Embodiment 2 of the present application;
FIG. 4 is a schematic structural diagram of a device for determining the spatial position and form of an object according to Embodiment 3 of the present application;
FIG. 5a is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application;
FIG. 5b is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application;
FIG. 5c is a schematic diagram of a binocular vision robot according to Embodiment 5 of the present application;
FIG. 6 is a schematic diagram of a method for determining Rz in the object spatial form data according to Embodiment 2 of the present application.
Detailed Description
The present application is described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present application and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the entire structure.
Embodiment 1
FIG. 1 is a flowchart of a method for determining the spatial position and form of an object according to Embodiment 1 of the present application. This embodiment is applicable to locating an item to be measured and determining its form. The method may be performed by the device for determining the spatial position and form of an object provided by the embodiments of the present application; the device may be implemented in software and/or hardware, and may be integrated into a binocular vision robot.
As shown in FIG. 1, the method for determining the spatial position and form of an object includes the following steps.
S110: Acquire binocular vision images of the item to be measured and a standard mark through the binocular vision device, where the binocular vision device is arranged above the item to be measured.
In this embodiment, the binocular vision device can be used to obtain the spatial position and form of an item to be measured within a fixed range. For example, on a production or assembly line, the binocular vision device can be installed at a fixed position directly above the operation platform, with the center of the binocular vision device corresponding to the center of the operation platform, so that the image collected by the binocular vision device is known to face the center of the platform. The binocular vision device can also be installed at a non-fixed position, for example on the head of a mobile robot or on the robot arm of a production or assembly line, which makes its placement more flexible; compared with the former arrangement, however, this approach makes the image correction process somewhat more complicated. If the position of the binocular vision device is fixed, the image obtained by it can be position-corrected to obtain the position of the item to be measured in the world coordinate system; for a binocular vision device in a movable position, the collected binocular vision images must contain the standard mark in order to determine the position of the object in the world coordinate system, or its relative position with respect to the robot itself or the robot arm.
In an embodiment, before the binocular vision images of the item to be measured and the standard mark are acquired through the binocular vision device, the method further includes: selecting a fixed structure in the carrying space of the item to be measured as the standard mark, or installing a mark in the carrying space of the item to be measured as the standard mark, and establishing the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device through the positional relationship between the binocular vision device and the standard mark. The advantage of this arrangement is that the image can be corrected and fitted against a fixed or preset standard mark.
In this embodiment, the standard mark is a mark set at a fixed position that is used to calibrate the binocular vision image within the binocular vision image, for example a pair of crossed arrows pointing due north and due east.
The binocular vision device is arranged above the item to be measured so that an image of the upper surface of the item can be obtained, because when a robot or a robot arm grasps items, the grasp is usually performed from above, with the grasping angle determined by the form of the item to be measured. If the robot can grasp items laterally, the position and form of the front surface of the item to be measured can be obtained and used instead.
S120: According to the positional relationship between the standard mark and the item to be measured, and the binocular vision image of the standard mark, correct and fit the binocular vision image of the item to be measured to obtain a depth image of the item to be measured in the world coordinate system, and determine the point cloud data image of the item to be measured from the depth image of the item to be measured in the world coordinate system.
In this embodiment, the standard mark is a mark set at a fixed position that calibrates the binocular vision image within the binocular vision image, for example a pair of crossed arrows pointing due north and due east. The depth image may be an image in which each pixel carries depth information; in this embodiment, the corrected depth image may carry top-down depth information, with the Z-axis position of the binocular vision device as the starting point, and the depth information may be the vertical distance (Z-axis distance) between each pixel constituting the image and the plane in which the center of the binocular vision device lies. The point cloud data image displays each pixel in the form of a point cloud and can be converted from the depth image according to a specific algorithm.
In an embodiment, correcting and fitting the binocular vision image of the item to be measured, according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain the depth image of the item to be measured in the world coordinate system, includes: determining the position of the binocular vision device using the binocular vision image of the standard mark; determining world-coordinate-system correction parameters of the binocular vision image of the item to be measured according to the positional relationship between the position of the binocular vision device and the standard mark; and converting the binocular vision image of the item to be measured into the world coordinate system according to those parameters, and then performing depth image fitting to obtain the depth image of the item to be measured in the world coordinate system.
In an embodiment, correcting and fitting the binocular vision image of the item to be measured, according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, to obtain the depth image of the item to be measured in the world coordinate system, includes: performing depth image fitting on the binocular vision image of the standard mark and the binocular vision image of the item to be measured to obtain respective primary depth images; determining world-coordinate-system correction parameters of the primary depth image of the item to be measured according to the positional relationship between the position of the binocular vision device and the standard mark; and performing world-coordinate-system correction on the primary depth image of the item to be measured according to those parameters, to obtain the depth image of the item to be measured in the world coordinate system.
The above two approaches describe, respectively, first correcting the pictures obtained by the two cameras of the binocular vision device into the world coordinate system and then performing the fitting, and first performing the fitting and then applying the world coordinate system correction. The world coordinate system correction and image fitting processes are introduced below.
Since the original binocular images are taken independently by the left-eye camera and the right-eye camera, and the camera lenses sit at different positions, the two cameras exhibit a certain amount of distortion. All pixels within the field of view need to be fitted, and the fitted compensation amounts are supplied to the camera program according to the measured data.
In addition, the correspondence between an object point in the spatial coordinate system and its image point on the image plane is determined. In an embodiment, the internal parameters of the left-eye camera and the right-eye camera are adjusted to be identical; the internal parameters include the camera's internal geometric and optical parameters, and the external parameters include the transformation between the left-eye and right-eye camera coordinate systems and the world coordinate system.
The fitting here is used to correct the distortion produced by the lens. This lens distortion can be seen in the original images: for example, a straight line in the scene becomes a curve in the original left- and right-eye images, and the effect is especially noticeable at the corners of those images. The purpose of the fitting is to correct this type of distortion.
During image processing, boundary extraction is performed on the object image. Algorithms that can be used include Laplacian-of-Gaussian filtering; the boundary is an obvious and primary feature for identifying the object and lays the foundation for the subsequent algorithms. The processing also includes image pre-processing and feature extraction. Pre-processing mainly includes image contrast enhancement, random noise removal, low-pass filtering and image enhancement, pseudo-color processing, and so on; feature extraction extracts the features commonly used for matching, mainly point features, line features, and region features. In an embodiment, regarding low-pass filtering: in order to fit an image, it is very important to smooth the image beforehand, so if an image is to be fitted, processing the left- and right-eye images with a low-pass filter in advance is a good approach. The images can be corrected without a low-pass filter, but the corrected images may then exhibit aliasing. If processing speed needs to be increased, the low-pass filter can be switched off.
Edge detection is an optional feature; it uses changes in brightness to match features. This function is very useful when the cameras in the system have an automatic gain function: if the automatic gain of each camera varies inconsistently, the absolute brightness between the images is inconsistent, but although the absolute brightness differs, the change in brightness is a constant. Edge detection is therefore suitable for environments where the lighting varies greatly. Although edge detection can improve the recognition of an item's edges, it introduces an additional processing step, so the trade-off between the improvement in the result and the processing speed must be weighed when using this function.
During image processing, the stereoscopic imaging principle of binocular vision is used; binocular stereo vision three-dimensional measurement is based on the parallax principle.
The baseline distance B is the distance between the projection centers of the two cameras, and the camera focal length is f. Suppose the two cameras observe the same feature point of a spatial object at the same moment, P(x_c, y_c, z_c) in the spatial coordinate system; images of point P are acquired on the "left eye" and the "right eye", with image coordinates P_left = (X_left, Y_left) and P_right = (X_right, Y_right). If the images of the two cameras lie in the same plane, the Y image coordinate of the feature point P is identical in both, i.e. Y_left = Y_right = Y, and the triangle geometry gives:
X_left = f * x_c / z_c,  X_right = f * (x_c - B) / z_c,  Y = f * y_c / z_c.
The disparity is then D = X_left - X_right. From this, the three-dimensional coordinates of the feature point P in the camera coordinate system can be computed as:
x_c = B * X_left / D,  y_c = B * Y / D,  z_c = B * f / D.
Therefore, as long as a corresponding matching point on the right-eye image plane can be found for any point on the left-eye image plane, the three-dimensional coordinates of the point can be determined. This is a complete point-to-point operation: all points on the image plane that have corresponding matching points can take part in the above computation, yielding the three-dimensional coordinates corresponding to those matching points.
In addition, during stereo matching of the images, the correspondence between features is established from computations on the selected features, matching the image points of the same physical point in space across the different images. Stereo matching includes three basic steps:
1) selecting, from one image of the stereo pair (for example the left image), an image feature corresponding to an actual physical structure;
2) determining, in the other image (for example the right image), the corresponding image feature of the same physical structure;
3) determining the relative position between these two features to obtain the disparity.
Step 2) is the key to achieving matching.
After the disparity image is obtained by stereo matching, the depth image can be determined and the 3D information of the scene recovered. Stereo matching builds the correlation between images using the sum-of-absolute-differences (absolute correlation deviation sum) method. The principle of this method is as follows:
for each pixel of the reference image, a neighborhood of a given square size is selected, and this neighborhood is compared, along the same row, with a series of neighborhoods in the other image to find the best match. The absolute-difference correlation is computed as:
min over d in [d_min, d_max] of the sum over the m x m mask of |I_left(x+i, y+j) - I_right(x+i-d, y+j)|,
where d_min and d_max are the minimum and maximum disparity, m is the mask (template) size, and I_left and I_right are the left and right images.
During image processing, the correlation between the images is calculated from the binocularly fitted images of the object; the correlation and depth between the images are calculated according to the binocular disparity principle formula and the sum-of-absolute-differences method, forming a depth map or spatial point cloud data.
S130: Determine the upper-surface point cloud data of the item to be measured using the vertical space statistical method.
In an embodiment, after the point cloud data image is obtained, the longitudinal (Z-axis) datum of each point can be determined, and statistics over the longitudinal data yield the number of points within each of several height ranges of the current point cloud image. In an embodiment, if the background is a plane, such as an operation platform, the number of points may be largest within the longitudinal data range of the background, and among all the point cloud data the Z-axis data of the background points is also the largest or smallest. Through these statistics the background point cloud data can be filtered out, and by counting the foreground point cloud data within a given range, the point cloud data of the upper surface of the item to be measured can be determined. If the upper surface is horizontal, the point cloud data range of the upper surface is relatively narrow; if the upper surface is inclined, it is relatively broad.
S140: Determine the upper-surface center position of the item to be measured from the upper-surface point cloud data of the item to be measured, as the position data of the item to be measured; and determine the form data of the item to be measured from the fitted surface of the upper-surface point cloud data of the item to be measured and the distance between the upper-surface center position of the item to be measured and the boundary positions of the upper-surface point cloud data of the item to be measured.
In an embodiment, the upper-surface center position can be expressed in the world coordinate system determined by the standard mark, for example as (X, Y, Z); for instance, it can be determined from the center of the geometric shape formed by projecting the upper-surface point cloud data onto the XOY plane.
The form data is determined from the fitted surface of the upper-surface point cloud data and the distance between the upper-surface center position and the boundary positions of the upper-surface point cloud data. The form data can be expressed through the three rotation angles Rx, Ry and Rz of the item to be measured about the X, Y and Z axes. After the fitted surface of the upper surface is determined (the fitted surface may be a plane or a curved surface) and its normal vector obtained, Ry can be determined from the angle between the normal vector and the XOZ plane, and Rx from the angle between the normal vector and the YOZ plane. Rz is then determined from the angle between the XOY plane and the vector formed by the upper-surface center position and the nearest upper-surface boundary point.
In the technical solution provided by the embodiments of the present application, binocular vision images of the item to be measured and of the standard mark are acquired by a binocular vision device arranged above the item to be measured; according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, the binocular vision image of the item is corrected and fitted to obtain a depth image of the item in the world coordinate system, from which the point cloud data image of the item is determined; the upper surface point cloud data of the item are determined by a vertical spatial statistics method; the upper surface center position of the item is determined from the upper surface point cloud data as the position data of the item; and the shape data of the item are determined according to the fitted surface of the upper surface point cloud data and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data. In this way, after an image of the item to be measured is acquired by the binocular vision device, the spatial position and shape of the item can be determined through processing and analysis.
In one embodiment, on the basis of the above technical solution, after the position data and the shape data are determined, the method further includes: determining a grasping position and a grasping posture of a robot operating arm according to the position data and the shape data, so as to control the robot operating arm to grasp the item to be measured.
In one embodiment, the position of the robot operating arm can be corrected into the same world coordinate system as the item to be measured, so that the movement distance, the movement direction and even the movement trajectory of the operating arm can be determined. When the operating arm moves to the position of the item, the gripper of the operating arm can be controlled to grasp the item in a posture adapted to it. The benefit of this arrangement is that the item can be grasped smoothly once its position has been recognized, the grip is firmer, and accidents such as the item slipping out of the grip are avoided.
Embodiment 2
FIG. 2 is a flowchart of the method for determining the spatial position and shape of an object provided by Embodiment 2 of the present application. On the basis of the above embodiments, after the point cloud data image of the item to be measured is determined according to the depth image of the item in the world coordinate system, and before the upper surface point cloud data of the item are determined by the vertical spatial statistics method, the method further includes: filtering out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image. Correspondingly, determining the upper surface point cloud data of the item by the vertical spatial statistics method includes: determining the upper surface point cloud data of the item from the foreground point cloud data image by the vertical spatial statistics method.
As shown in FIG. 2, the method for determining the spatial position and shape of an object includes the following steps.
S210. Acquire binocular vision images of an item to be measured and of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the item to be measured.
S220. According to the positional relationship between the standard mark and the item to be measured, and the binocular vision image of the standard mark, perform correction and fitting on the binocular vision image of the item to be measured to obtain a depth image of the item in the world coordinate system, and determine the point cloud data image of the item according to that depth image.
S230. Filter out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image.
In this embodiment, the three-color difference degree may be the mutual difference between the numerical values of the three primary colors red, green and blue in the pixel color of each datum in the point cloud data image. This arrangement mainly allows background point cloud data of similar colors to be filtered out, leaving a point cloud data image containing only foreground point cloud data.
In one embodiment, filtering out the background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image to obtain the foreground point cloud data image includes:
determining the three-color difference value of the point cloud data in the point cloud data image using the following formula:
T = |R_point − G_point| − |G_point − B_point|
where R_point denotes the red value of the RGB color of a point cloud datum; G_point denotes the green value of the RGB color of the point cloud datum; and B_point denotes the blue value of the RGB color of the point cloud datum. T is the three-color difference value; when the three-color difference value is smaller than a background filtering threshold, the corresponding point cloud datum is determined to be background point cloud data and is filtered out.
This arrangement facilitates filtering out the point cloud data of the background as well as of other reflective points or occasional jumping noise points, improving the accuracy with which the upper surface point cloud data are determined.
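A sketch of this filtering step in Python follows; the threshold value would be chosen empirically for the background in question, and the names are illustrative:

```python
import numpy as np

def foreground_mask(points_rgb, threshold):
    """Boolean mask of foreground points from per-point R, G, B values.

    points_rgb : (N, 3) integer array of R, G, B per point cloud datum
    """
    r = points_rgb[:, 0].astype(np.int32)
    g = points_rgb[:, 1].astype(np.int32)
    b = points_rgb[:, 2].astype(np.int32)
    t = np.abs(r - g) - np.abs(g - b)  # T = |R - G| - |G - B|
    return t >= threshold              # below the threshold = background
```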
S240. Determine the upper surface point cloud data of the item to be measured from the foreground point cloud data image by using the vertical spatial statistics method.
In one embodiment, determining the upper surface point cloud data of the item to be measured by the vertical spatial statistics method includes: compiling a statistical distribution of the vertical values of all foreground point cloud data; determining the number of foreground points in each vertical data interval; determining the median of the vertical data interval containing the most foreground points; and taking the point cloud data whose vertical values fall within a set range as the upper surface point cloud data of the item to be measured, the set range being the numerical range formed by a first value obtained by subtracting a preset controllable value from the median of that vertical data interval and a second value obtained by adding the preset controllable value to the median of that vertical data interval.
FIG. 3 is a schematic diagram of the statistical distribution of point cloud data provided by Embodiment 2 of the present application. As shown in FIG. 3, the horizontal axis is the vertical value of the point cloud data, which can be understood as height in meters, and the vertical axis is the number of point cloud data in each data interval, i.e. the number of points in the current vertical data interval. For example, a data interval of 0.02 is used. In FIG. 3, the interval 0.414-0.416 contains the most points, so the upper surface point cloud data can be determined to be the point cloud data within a certain range centered on 0.415.
In one embodiment, the preset controllable value is determined as follows: compile statistics on the vertical values of the entire foreground point cloud data image to determine the standard deviation, and take a set multiple of the standard deviation as the preset controllable value.
To identify the upper plane of the target object, the colored points of the target object are taken and statistically distributed along the vertical Z direction, using μ (the statistical mean) of the point cloud data and the value at the statistical high-frequency peak (Peak). To identify the height of the upper plane and the position of the target object, the three-dimensional coordinate values (X, Y and Z) of the colored points of the upper plane acquired by the binocular camera are taken, the Z coordinates of all colored points are extracted and statistically distributed along the vertical Z direction, and the mean of the point cloud data and the Z value at the high-frequency peak are used: the high-frequency peak can be regarded as the nominal Z height of the upper surface of the target object, and the mean μ gives the error range of the upper surface height. As shown in FIG. 3, σ (the standard deviation) is used at the same time to control the selected range; in practice, taking the points within a range of 1σ-6σ as the upper plane of the target object is a typical data distribution effect. In FIG. 3, the thick line represents the mean μ and the dashed interval represents the +/-σ standard deviation. These points are considered to constitute the main imaging surface, or upper surface, of the target object. At the same time, this method removes the deviations caused by the point cloud data of reflective points, outliers and shadow points.
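The vertical statistics described above can be sketched as follows; the bin width and the sigma multiple play the role of the preset controllable values, with illustrative defaults:

```python
import numpy as np

def upper_surface_mask(z, bin_width=0.02, k_sigma=3.0):
    """Select upper-surface points by statistics on the Z coordinates
    of all foreground points (illustrative sketch)."""
    bins = np.arange(z.min(), z.max() + bin_width, bin_width)
    counts, edges = np.histogram(z, bins=bins)
    peak = np.argmax(counts)                      # high-frequency interval
    mid = 0.5 * (edges[peak] + edges[peak + 1])   # nominal surface height
    band = k_sigma * z.std()                      # sigma-controlled range
    return (z >= mid - band) & (z <= mid + band)  # mask of surface points
```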
S250. Determine the upper surface center position of the item to be measured according to the upper surface point cloud data of the item, as the position data of the item; and determine the shape data of the item according to the fitted surface of the upper surface point cloud data, and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data.
On the basis of the above embodiments, this embodiment provides a method for determining the foreground point cloud data. By this method, the interference caused by the point cloud data of reflective points, outliers and shadow points can be removed, improving the accuracy of determining the upper surface point cloud data of the item to be measured.
In one embodiment, on the basis of the above technical solutions, determining the upper surface center position of the item to be measured from the upper surface point cloud data as the position data includes: determining the upper surface center position of the item as the position data according to the average of the spatial coordinates of all the upper surface point cloud data. Determining the shape data according to the fitted surface of the upper surface point cloud data and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data includes: performing plane fitting according to the spatial coordinates of the upper surface point cloud data to determine the upper surface of the item; determining the normal vector of the upper surface, and determining from it the rotation angle Rx of the item about the X axis and the rotation angle Ry about the Y axis in the world coordinate system; projecting the upper surface point cloud data of the item onto the XOY plane; determining the minimum among the distances between the projected upper surface center position and the projected boundary positions of the upper surface point cloud data; determining the rotation angle Rz of the item about the Z axis according to the direction of that minimum; and taking Rx, Ry and Rz as the shape data of the item. The benefit of this arrangement is that it makes the determination of the six spatial parameters of the item more accurate and simpler, improving the accuracy of the technical solutions provided by the embodiments of the present application.
FIG. 6 is a schematic diagram of the determination of Rz in the spatial shape data of an object, provided by Embodiment 2 of the present application. As shown in FIG. 6, after the upper surface point cloud data of the item are determined, they can be projected onto the XOY plane, with the Z axis coinciding with point O (not shown in FIG. 6), converting the three-dimensional points into a two-dimensional plane. The previously determined center point can be used as the projected center point. After the center point is determined, all the outer points of the set are extracted to form a convex polygon (Convex hull 2D), and the vertices of the convex polygon are marked as boundary points (only some are marked in FIG. 6). The center point forms a triangle with each pair of adjacent polygon vertices, and any two adjacent vertices form a segment. In FIG. 6, H4 is the height from the center point to that part of the boundary in the triangle formed by the center point and two boundary points; FIG. 6 shows five height values H1, H2, H3, H4 and H5, of which H4 is the minimum and H3 the maximum.
Among the segments formed by pairs of adjacent vertices, the shortest distance (H4) from the center point to a segment is found; i.e., within the polygon enclosed by the upper surface point cloud, the shortest edge distance determines the foot of the perpendicular. Once the shortest distance and its direction from the upper surface center to the boundary polygon are known, H4 is vectorized, and the angle the vector H4 forms with the X axis (or the Y axis) is the rotation angle Rz of the item about the Z axis; the vector H4 can thus represent the Rz direction of the upper surface center point. From the upper surface center point of the target object, the shortest vector from the center to the boundary polygon is taken (the point of shortest distance from the center to a boundary segment), and the Rz angle of the target object is determined from the angle between the direction of that shortest edge and the XOZ plane (or the YOZ plane).
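A sketch of this Rz computation follows, assuming SciPy is available for the 2D convex hull; the H-style distances are computed as perpendicular distances from the projected center to each boundary segment, and all names are illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull

def rz_from_boundary(points_xy, center_xy):
    """Rz from the shortest center-to-boundary segment (cf. H4 in FIG. 6)."""
    hull = ConvexHull(points_xy)            # the "Convex hull 2D"
    verts = points_xy[hull.vertices]        # boundary points, in order
    best_dist, best_foot = np.inf, None
    for p, q in zip(verts, np.roll(verts, -1, axis=0)):
        seg = q - p
        # Foot of the perpendicular from the center, clamped to the segment.
        t = np.clip(np.dot(center_xy - p, seg) / np.dot(seg, seg), 0.0, 1.0)
        foot = p + t * seg
        dist = np.linalg.norm(center_xy - foot)
        if dist < best_dist:
            best_dist, best_foot = dist, foot
    h = best_foot - center_xy               # the shortest (H4-like) vector
    return np.degrees(np.arctan2(h[1], h[0]))  # its angle to the X axis
```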
Embodiment 3
FIG. 4 is a schematic structural diagram of the device for determining the spatial position and shape of an object provided by Embodiment 3 of the present application. As shown in FIG. 4, the device includes: a binocular vision image acquisition module 410, configured to acquire binocular vision images of an item to be measured and of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the item to be measured; a point cloud data image determination module 420, configured to perform correction and fitting on the binocular vision image of the item according to the positional relationship between the standard mark and the item and the binocular vision image of the standard mark, to obtain a depth image of the item in the world coordinate system, and to determine the point cloud data image of the item according to that depth image; an upper surface point cloud data screening module 430, configured to determine the upper surface point cloud data of the item by a vertical spatial statistics method; and a position data and shape data determination module 440, configured to determine the upper surface center position of the item from the upper surface point cloud data as the position data of the item, and to determine the shape data of the item according to the fitted surface of the upper surface point cloud data and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data.
In the technical solution provided by the embodiments of the present application, binocular vision images of the item to be measured and of the standard mark are acquired by a binocular vision device arranged above the item to be measured; the binocular vision image of the item is corrected and fitted according to the positional relationship between the standard mark and the item and the binocular vision image of the standard mark, to obtain a depth image of the item in the world coordinate system, from which the point cloud data image of the item is determined; the upper surface point cloud data of the item are determined by a vertical spatial statistics method; the upper surface center position of the item is determined from the upper surface point cloud data as the position data of the item; and the shape data of the item are determined according to the fitted surface of the upper surface point cloud data and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data. In this way, after an image of the item to be measured is acquired by the binocular vision device, the spatial position and shape of the item can be determined through processing and analysis.
The above product can execute the method provided by any embodiment of the present application, and has the functional modules and beneficial effects corresponding to executing that method.
Embodiment 4
An embodiment of the present application further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a method for determining the spatial position and shape of an object, the method including: acquiring binocular vision images of an item to be measured and of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the item to be measured; performing correction and fitting on the binocular vision image of the item according to the positional relationship between the standard mark and the item and the binocular vision image of the standard mark, to obtain a depth image of the item in the world coordinate system, and determining the point cloud data image of the item according to that depth image; determining the upper surface point cloud data of the item by a vertical spatial statistics method; determining the upper surface center position of the item from the upper surface point cloud data as the position data of the item; and determining the shape data of the item according to the fitted surface of the upper surface point cloud data and the distance between the upper surface center position and the boundary positions of the upper surface point cloud data.
Storage medium: any type of memory device or storage device. The term "storage medium" is intended to include: installation media, such as a Compact Disc Read-Only Memory (CD-ROM), a floppy disk or a tape device; computer system memory or random access memory, such as Dynamic Random Access Memory (DRAM), Double Data Rate Random Access Memory (DDR RAM), Static Random Access Memory (SRAM), Extended Data Output Random Access Memory (EDO RAM), Rambus Random Access Memory (RAM), and the like; non-volatile memory, such as flash memory or magnetic media (e.g. a hard disk or optical storage); registers or other similar types of memory elements, and the like. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different, second computer system connected to the first computer system through a network such as the Internet; the second computer system may provide the program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, e.g. in different computer systems connected through a network. The storage medium may store program instructions (e.g. embodied as a computer program) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the operations for determining the spatial position and shape of an object described above, and may also perform related operations in the method for determining the spatial position and shape of an object provided by any embodiment of the present application.
Embodiment 5
An embodiment of the present application provides a binocular vision robot, including a binocular vision device, an operating table, a standard mark on the operating table, a robot operating arm, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object according to any embodiment of the present application.
FIG. 5a is a schematic diagram of the binocular vision robot provided by Embodiment 5 of the present application. As shown in FIG. 5a, the robot includes a binocular vision device 10, an operating table 20, a standard mark 30 on the operating table, a robot operating arm 50, a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor, when executing the computer program, implements the method for determining the spatial position and shape of an object according to any embodiment of the present application.
FIG. 5b is a schematic diagram of the binocular vision robot provided by Embodiment 5 of the present application. As shown in FIG. 5b, compared with the solution described above, arranging the binocular vision device 10 on the gripper makes the acquisition of binocular vision images more flexible: when there are many items to be measured, or when after one round of computation the accuracy does not meet the standard or the noise rate is too high, the movement of the gripper can be controlled so that the six spatial parameters of the item are located from another angle. The six-parameter results obtained from multiple positions can also be compared and cross-checked against each other, improving the accuracy with which the technical solution provided by the embodiments of the present application determines the spatial position and shape of the item to be measured.
FIG. 5c is a schematic diagram of the binocular vision robot provided by Embodiment 5 of the present application. As shown in FIG. 5c, compared with the above solutions, the binocular vision device is arranged on the body of the robot operating arm. This avoids the need, as in the first solution, to provide a dedicated mounting bracket for the binocular vision device, and at the same time allows binocular vision images to be acquired when the robot operating arm moves to another operating table, without installing a binocular vision device for every operating table, thereby saving system cost.
In one embodiment, the binocular vision device may be arranged on the gripper of the robot operating arm, or at a fixed position on the robot operating arm, as long as an image of the upper surface of the item to be measured and an image of the front of the operating table can be acquired.

Claims (13)

  1. A method for determining a spatial position and shape of an object, comprising:
    acquiring a binocular vision image of an item to be measured and a binocular vision image of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the item to be measured;
    performing, according to a positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, correction and fitting on the binocular vision image of the item to be measured to obtain a depth image of the item to be measured in a world coordinate system, and determining a point cloud data image of the item to be measured according to the depth image of the item to be measured in the world coordinate system;
    determining upper surface point cloud data of the item to be measured by using a vertical spatial statistics method; and
    determining an upper surface center position of the item to be measured according to the upper surface point cloud data of the item to be measured, as position data of the item to be measured; and determining shape data of the item to be measured according to a fitted surface of the upper surface point cloud data of the item to be measured, and a distance between the upper surface center position of the item to be measured and boundary positions of the upper surface point cloud data of the item to be measured.
  2. The method for determining a spatial position and shape of an object according to claim 1, further comprising, after determining the position data of the item to be measured and the shape data of the item to be measured:
    determining a grasping position and a grasping posture of a robot operating arm according to the position data of the item to be measured and the shape data of the item to be measured, so as to control the robot operating arm to grasp the item to be measured.
  3. The method for determining a spatial position and shape of an object according to claim 1, further comprising, before the acquiring the binocular vision image of the item to be measured and the binocular vision image of the standard mark by the binocular vision device:
    selecting a fixed structure in a load-bearing space of the item to be measured as the standard mark, or installing a mark in the load-bearing space of the item to be measured as the standard mark, and establishing, through the positional relationship between the binocular vision device and the standard mark, the relationship between the coordinate system of the standard mark and the coordinate system of the binocular vision device.
  4. The method for determining a spatial position and shape of an object according to claim 1, wherein the performing, according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, correction and fitting on the binocular vision image of the item to be measured to obtain the depth image of the item to be measured in the world coordinate system comprises:
    determining a position of the binocular vision device by using the binocular vision image of the standard mark;
    determining world coordinate system correction parameters of the binocular vision image of the item to be measured according to a positional relationship between the position of the binocular vision device and the standard mark; and
    converting the binocular vision image of the item to be measured into the world coordinate system according to the world coordinate system correction parameters of the binocular vision image of the item to be measured, and then performing depth image fitting to obtain the depth image of the item to be measured in the world coordinate system.
  5. The method for determining a spatial position and shape of an object according to claim 1, wherein the performing, according to the positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, correction and fitting on the binocular vision image of the item to be measured to obtain the depth image of the item to be measured in the world coordinate system comprises:
    performing depth image fitting on the binocular vision image of the standard mark and the binocular vision image of the item to be measured to obtain respective primary depth images;
    determining world coordinate system correction parameters of the primary depth image of the item to be measured according to a positional relationship between the position of the binocular vision device and the standard mark; and
    performing world coordinate system correction on the primary depth image of the item to be measured according to the world coordinate system correction parameters of the primary depth image of the item to be measured, to obtain the depth image of the item to be measured in the world coordinate system.
  6. The method for determining a spatial position and shape of an object according to claim 1, further comprising, after determining the point cloud data image of the item to be measured according to the depth image of the item to be measured in the world coordinate system and before determining the upper surface point cloud data of the item to be measured by using the vertical spatial statistics method:
    filtering out background point cloud data according to a three-color difference degree of point cloud data in the point cloud data image of the item to be measured, to obtain a foreground point cloud data image;
    wherein the determining the upper surface point cloud data of the item to be measured by using the vertical spatial statistics method comprises:
    determining the upper surface point cloud data of the item to be measured from the foreground point cloud data image by using the vertical spatial statistics method.
  7. The method for determining a spatial position and shape of an object according to claim 6, wherein the filtering out background point cloud data according to the three-color difference degree of the point cloud data in the point cloud data image of the item to be measured, to obtain the foreground point cloud data image, comprises:
    determining a three-color difference value of the point cloud data in the point cloud data image using the following formula:
    T = |R_point − G_point| − |G_point − B_point|;
    where R_point denotes a red value of the red-green-blue (RGB) color of a point cloud datum; G_point denotes a green value of the RGB color of the point cloud datum; and B_point denotes a blue value of the RGB color of the point cloud datum;
    where T is the three-color difference value, and in a case where the three-color difference value is smaller than a background filtering threshold, the corresponding point cloud datum is determined to be background point cloud data and the background point cloud data is filtered out.
  8. The method for determining a spatial position and shape of an object according to claim 6, wherein the determining the upper surface point cloud data of the item to be measured from the foreground point cloud data image by using the vertical spatial statistics method comprises:
    compiling a statistical distribution of vertical data of all foreground point cloud data, and determining a number of foreground point cloud data in each vertical data interval of the statistical distribution;
    determining a median of the vertical data interval containing the largest number of foreground point cloud data; and
    taking point cloud data whose vertical data fall within a set range as the upper surface point cloud data of the item to be measured; the set range being a numerical range formed by a first value obtained by subtracting a preset controllable value from the median of the vertical data interval and a second value obtained by adding the preset controllable value to the median of the vertical data interval.
  9. The method for determining a spatial position and shape of an object according to claim 8, wherein the preset controllable value is determined as follows:
    compiling statistics on the vertical data of the entire foreground point cloud data image to determine a standard deviation; and
    taking a set multiple of the standard deviation as the preset controllable value.
  10. The method for determining a spatial position and shape of an object according to claim 1, wherein the determining the upper surface center position of the item to be measured according to the upper surface point cloud data of the item to be measured, as the position data of the item to be measured, comprises:
    determining the upper surface center position of the item to be measured according to an average of spatial coordinates of all the upper surface point cloud data of the item to be measured, as the position data of the item to be measured;
    and the determining the shape data according to the fitted surface of the upper surface point cloud data of the item to be measured, and the distance between the upper surface center position of the item to be measured and the boundary positions of the upper surface point cloud data of the item to be measured, comprises:
    performing plane fitting according to the spatial coordinates of the upper surface point cloud data of the item to be measured, to determine the upper surface of the item to be measured;
    determining a normal vector of the upper surface of the item to be measured, and determining, according to the normal vector, a rotation angle Rx of the item to be measured about the X axis and a rotation angle Ry about the Y axis in the world coordinate system;
    and,
    projecting the upper surface point cloud data of the item to be measured onto the XOY plane;
    determining a minimum among distances between the projected position of the upper surface center position and the projected positions of the boundary positions of the upper surface point cloud data;
    determining a rotation angle Rz of the item to be measured about the Z axis according to a direction of the minimum; and
    determining Rx, Ry and Rz as the shape data of the item to be measured.
  11. A device for determining a spatial position and shape of an object, comprising:
    a binocular vision image acquisition module, configured to acquire a binocular vision image of an item to be measured and a binocular vision image of a standard mark by a binocular vision device, wherein the binocular vision device is arranged above the item to be measured;
    a point cloud data image determination module, configured to perform, according to a positional relationship between the standard mark and the item to be measured and the binocular vision image of the standard mark, correction and fitting on the binocular vision image of the item to be measured to obtain a depth image of the item to be measured in a world coordinate system, and to determine a point cloud data image of the item to be measured according to the depth image of the item to be measured in the world coordinate system;
    an upper surface point cloud data screening module, configured to determine upper surface point cloud data of the item to be measured by using a vertical spatial statistics method; and
    a position data and shape data determination module, configured to determine an upper surface center position of the item to be measured according to the upper surface point cloud data of the item to be measured, as position data of the item to be measured, and to determine shape data of the item to be measured according to a fitted surface of the upper surface point cloud data of the item to be measured, and a distance between the upper surface center position of the item to be measured and boundary positions of the upper surface point cloud data of the item to be measured.
  12. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for determining a spatial position and shape of an object according to any one of claims 1-10.
  13. A binocular vision robot, comprising a binocular vision device, a standard mark, a robot operating arm, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method for determining a spatial position and shape of an object according to any one of claims 1-10.
PCT/CN2019/089635 2018-05-31 2019-05-31 Method, device, storage medium and robot for determining spatial position and shape of object WO2019228523A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810549518.9 2018-05-31
CN201810549518.9A CN110555878B (zh) 2018-05-31 Method, device, storage medium and robot for determining spatial position and shape of object

Publications (1)

Publication Number Publication Date
WO2019228523A1 true WO2019228523A1 (zh) 2019-12-05

Family

ID=68697857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089635 WO2019228523A1 (zh) 2018-05-31 2019-05-31 Method, device, storage medium and robot for determining spatial position and shape of object

Country Status (3)

Country Link
CN (1) CN110555878B (zh)
TW (1) TW202004671A (zh)
WO (1) WO2019228523A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496503B (zh) * 2020-03-18 2022-11-08 广州极飞科技股份有限公司 Point cloud data generation and real-time display method, apparatus, device and medium
US11232315B2 2020-04-28 2022-01-25 NextVPU (Shanghai) Co., Ltd. Image depth determining method and living body identification method, circuit, device, and medium
CN113146625A (zh) * 2021-03-28 2021-07-23 苏州氢旺芯智能科技有限公司 Binocular vision three-dimensional spatial detection method for materials
CN115239811A (zh) * 2022-07-15 2022-10-25 苏州汉特士视觉科技有限公司 Positioning method and system based on binocular vision detection, computer, and storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103959012A (zh) * 2011-12-06 2014-07-30 赫克斯冈技术中心 Determination of 6-DoF position and orientation
CN104317391A (zh) * 2014-09-24 2015-01-28 华中科技大学 Stereo-vision-based three-dimensional palm posture recognition and interaction method and system
US9895131B2 * 2015-10-13 2018-02-20 Siemens Healthcare Gmbh Method and system of scanner automation for X-ray tube with 3D camera
CN107590832A (zh) * 2017-09-29 2018-01-16 西北工业大学 Natural-feature-based physical object tracking and positioning method
CN108010085A (zh) * 2017-11-30 2018-05-08 西南科技大学 Target recognition method based on a binocular visible-light camera and a thermal infrared camera

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110874854A (zh) * 2020-01-19 2020-03-10 立得空间信息技术股份有限公司 Binocular photogrammetry method for wide-angle cameras with large distortion under small-baseline conditions
CN111696162A (zh) * 2020-06-11 2020-09-22 中国科学院地理科学与资源研究所 Binocular stereo vision fine terrain measurement system and method
CN111696162B (zh) 2020-06-11 2022-02-22 中国科学院地理科学与资源研究所 Binocular stereo vision fine terrain measurement system and method
CN111993420A (zh) * 2020-08-10 2020-11-27 广州瑞松北斗汽车装备有限公司 Fixed binocular vision 3D-guided part loading system
CN112819770A (zh) * 2021-01-26 2021-05-18 中国人民解放军陆军军医大学第一附属医院 Iodine contrast agent allergy monitoring method and system

Also Published As

Publication number Publication date
CN110555878A (zh) 2019-12-10
CN110555878B (zh) 2021-04-13
TW202004671A (zh) 2020-01-16

Similar Documents

Publication Publication Date Title
WO2019228523A1 (zh) Method, device, storage medium and robot for determining spatial position and shape of object
WO2019100647A1 (zh) RGB-D camera-based object symmetry axis detection method
US11667036B2 Workpiece picking device and workpiece picking method
JP6271953B2 (ja) Image processing apparatus and image processing method
CN112070818A (zh) Machine-vision-based robot random bin-picking method, system and storage medium
JP6260891B2 (ja) Image processing apparatus and image processing method
US9639942B2 Information processing apparatus, information processing method, and storage medium
CN111151463A (zh) 3D-vision-based mechanical arm sorting and grasping system and method
CN109297433A (zh) 3D-vision-guided depalletizing measurement system and control method thereof
CN108827154A (zh) Teaching-free robot grasping method, device, and computer-readable storage medium
CN110349249B (zh) Real-time dense reconstruction method and system based on RGB-D data
JPWO2013133129A1 (ja) Moving object position and attitude estimation apparatus and method
CN111612794A (zh) High-precision three-dimensional pose estimation method and system for parts based on multiple 2D vision
US11488354B2 Information processing apparatus and information processing method
CN110136211A (zh) Workpiece positioning method and system based on active binocular vision
JP2022514429A (ja) Calibration method, apparatus, system, device and storage medium for image acquisition equipment
JP2017142613A (ja) Information processing apparatus, information processing system, information processing method, and information processing program
CN113313116A (zh) Vision-based accurate detection and positioning method for underwater artificial targets
CN116749198A (zh) Binocular-stereo-vision-guided mechanical arm grasping method
JP2008309595A (ja) Object recognition apparatus and program used therefor
JP2004062757A (ja) Information processing method and imaging unit position and attitude estimation apparatus
CN111105467A (зh) Image calibration method, device, and electronic equipment
JP2018146347A (ja) Image processing apparatus, image processing method, and computer program
JP2020021212A (ja) Information processing apparatus, information processing method and program
CN110533717B (зh) Binocular-vision-based target grasping method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19812422; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19812422; Country of ref document: EP; Kind code of ref document: A1)