WO2020133407A1 - Structured-light-based locating method and apparatus for industrial robot, and controller and medium - Google Patents

Structured-light-based locating method and apparatus for industrial robot, and controller and medium

Info

Publication number
WO2020133407A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
image
dimensional
structured light
camera
Prior art date
Application number
PCT/CN2018/125590
Other languages
French (fr)
Chinese (zh)
Inventor
苗庆伟
王志飞
张卓辉
陈鹏
关肖州
Original Assignee
河南埃尔森智能科技有限公司
Application filed by 河南埃尔森智能科技有限公司
Priority to CN201880093140.4A (published as CN112074868A)
Priority to PCT/CN2018/125590
Publication of WO2020133407A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/50: Depth or shape recovery
    • G06T 7/514: Depth or shape recovery from specularities

Definitions

  • the invention relates to the technical field of robot vision, in particular to a method, device, controller and medium for positioning an industrial robot based on structured light.
  • Existing industrial robot positioning methods fall into two categories according to the type of data used: (1) recognizing the target in a two-dimensional image by comparison with a template, then extracting three-dimensional data from the recognized image region, or obtaining a local plane of the target through a distance sensor, to compute the target's pose. This approach depends heavily on the quality of the captured image, and the complex lighting changes of an industrial production environment make it difficult to adapt to actual production. (2) Comparing three-dimensional data directly with a CAD model. This approach no longer depends on the quality of the acquired two-dimensional image, but when multiple workpieces are stacked together it easily causes registration ambiguity, which degrades the stability of the template comparison.
  • existing industrial robot positioning methods are therefore difficult to apply in complex and changeable industrial environments and cannot meet the needs of flexible production.
  • the technical problem to be solved by the present invention is to provide an industrial robot positioning method and device, controller, and medium based on structured light.
  • the visually guided recognition and positioning of the industrial robot can accurately and rapidly scan the three-dimensional contour of a target object in a complex environment; through the one-to-one mapping and joint analysis of three-dimensional data and two-dimensional image data, the pose of the target object is located efficiently and accurately, guiding the industrial robot to grasp it.
  • a method for positioning an industrial robot based on structured light includes the following steps:
  • acquiring two-dimensional image information of a preset area, and segmenting from it the pixel area where a target object is located;
  • converting the pixel area into the three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
  • locating the pose of the target object according to its three-dimensional data so as to grasp the target object.
  • segmenting the pixel area where the target object is located from the two-dimensional image information of the preset area includes the following steps:
  • the two-dimensional image information is processed by AI segmentation or image recognition to segment out the pixel area where the target object is located.
  • the method further includes: constructing a mapping relationship between the preset two-dimensional image information and three-dimensional data, specifically including the following steps:
  • a structured light vision sensor is used to obtain the two-dimensional image information and the three-dimensional data of the target object, and a one-to-one mapping is formed between the two-dimensional image information and the three-dimensional data, wherein the structured light vision sensor includes a light source and at least one camera.
  • the acquiring the two-dimensional image information of the target object using the structured light vision sensor includes the following steps:
  • the light source is a grating projection device
  • the use of a structured light vision sensor to acquire the three-dimensional data of the target object includes the following steps:
  • calibrating the internal parameters of each camera and the external parameters between each camera and the grating projection device;
  • projecting an image with the grating projection device and generating a grating sinusoidal image;
  • capturing the projected image with the camera to form a camera image;
  • acquiring the principal phase value of the grating sinusoidal image;
  • acquiring the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
  • correcting the camera image coordinates and the grating sinusoidal image coordinates;
  • acquiring the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
  • locating the pose of the target object according to its three-dimensional data so as to grasp it includes the following steps:
  • locating the pose of the target object according to its three-dimensional data and a preset CAD model of the target object;
  • converting the pose of the target object into the manipulator coordinate system to grasp the target object.
  • the method further includes: during grasping of the target object, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
  • determining whether the gripping jig interferes with the target object or with the material frame where the target object is located includes the following steps:
  • collecting a stereoscopic image of the gripping jig;
  • dividing the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the gripping jig;
  • judging whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
  • judging, by analyzing the pose of the target object, whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
  • an industrial robot positioning device based on structured light including:
  • an acquisition module, configured to acquire the two-dimensional image information of the preset area and to segment from it the pixel area where the target object is located;
  • a conversion module configured to convert the pixel area into three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
  • the positioning module is configured to locate the posture of the target object according to the three-dimensional data of the target object to grab the target object.
  • the acquisition module is also used to:
  • the two-dimensional image information is processed by AI segmentation or image recognition to segment out the pixel area where the target object is located.
  • the device further includes a construction module for constructing a mapping relationship between the preset two-dimensional image information and three-dimensional data;
  • the construction module includes:
  • the first acquiring unit is used to acquire the two-dimensional image information of the target object using a structured light vision sensor
  • the second acquisition unit is used to acquire three-dimensional data using a structured light vision sensor
  • a mapping unit for forming a one-to-one mapping between the two-dimensional image information and the three-dimensional data
  • the structured light vision sensor includes a light source and at least one camera.
  • the first obtaining unit is specifically configured to project an image with a brightness exceeding a preset brightness, and obtain two-dimensional image information after lighting.
  • the light source is a grating projection device
  • the second acquisition unit is specifically configured to acquire the three-dimensional data of the target object using the structured light vision sensor;
  • the second acquisition unit includes:
  • a calibration subunit, used to calibrate the internal parameters of each camera and the external parameters between each camera and the grating projection device;
  • a first image acquisition subunit, configured to project an image through the grating projection device and generate a grating sinusoidal image;
  • a second image acquisition subunit, configured to capture the projected image with the camera to form a camera image;
  • a principal phase value acquisition subunit, used to obtain the principal phase value of the grating sinusoidal image;
  • a first coordinate acquisition subunit, configured to obtain the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
  • a correction subunit, configured to correct the camera image coordinates and the grating sinusoidal image coordinates;
  • a second coordinate acquisition subunit, configured to acquire the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
  • the positioning module includes:
  • a pose acquisition unit for positioning the pose of the target object according to the three-dimensional data of the target object and the preset CAD model of the target object;
  • the coordinate conversion unit is used to convert the posture of the target object to the manipulator coordinate system to grab the target object.
  • the device further includes a detection module for determining, during grasping of the target object, whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
  • the detection module includes:
  • an image acquisition unit, used to collect a stereoscopic image of the gripping jig;
  • a dividing unit, configured to divide the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the gripping jig;
  • a first judgment unit, used to judge whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
  • a second judgment unit, configured to judge, by analyzing the pose of the target object, whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
  • a controller which includes a memory and a processor, the memory stores a computer program, and the program can implement the steps of the method when executed by the processor.
  • a computer-readable storage medium for storing a computer program, which when executed by a computer or processor implements the steps of the method.
  • the present invention provides a structured-light-based method, device, controller, and medium for positioning industrial robots, which achieve considerable technical progress and practicality and have broad industrial value; they offer at least the following advantages:
  • the invention is based on visually guided recognition and positioning for industrial robots. It can accurately and rapidly scan the three-dimensional contour of a target object in a complex environment and, through the one-to-one mapping and joint analysis of three-dimensional data and two-dimensional image data, efficiently and accurately locate the pose of the target object, guiding the industrial robot to grasp it. It adapts to complex and changing industrial application environments, thereby improving the production efficiency of industrial assembly and loading/unloading in such environments. In addition, during grasping of the target object, the method analyzes whether the gripping jig interferes with the target object or with the material frame where it is located and makes corresponding adjustments, making industrial production with industrial robots more flexible and intelligent.
  • FIG. 1 is a schematic diagram of an industrial robot positioning method based on structured light provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of two camera space coordinate systems according to an embodiment of the invention.
  • FIG. 3 is a schematic diagram of an industrial robot positioning device based on structured light provided by an embodiment of the present invention.
  • An embodiment of the present invention provides an industrial robot positioning method based on structured light, as shown in FIG. 1, including the following steps:
  • Step S1 Obtain the two-dimensional image information of the preset area, and segment the pixel area where the target object is located from the two-dimensional image information of the preset area;
  • step S1 further includes the following steps:
  • the two-dimensional image information is segmented by AI (Artificial Intelligence) segmentation or image recognition to extract the pixel area where the target object is located; AI segmentation and image recognition are existing image processing techniques and are not described further here.
  • the two-dimensional image information may include one or more target objects, and the pixel region corresponding to each target object may be separated by AI segmentation or image recognition technology.
  • in industrial applications, as an example, the preset area is the area inside a material frame, and the target object is an object to be grasped in the frame; objects in the material frame may be called workpieces, the target object being the target workpiece.
  • Step S2: according to the preset mapping relationship between two-dimensional image information and three-dimensional data, the pixel area is converted into the three-dimensional data corresponding to the target object; since AI segmentation or image recognition separates each target object, each one can be mapped to its own three-dimensional data, as in the sketch below.
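  • where the one-to-one 2D/3D mapping is realized as a per-pixel coordinate map, step S2 reduces to an index lookup; the following minimal sketch assumes exactly that representation, and every name in it is illustrative rather than taken from the patent:

```python
import numpy as np

def pixels_to_points(mask, xyz_map):
    """Look up the 3-D data of one segmented target object.

    `mask` is an H x W boolean array marking the target's pixel area;
    `xyz_map` is an H x W x 3 array holding the structured-light 3-D
    coordinate of every pixel (the one-to-one 2-D/3-D mapping), with
    NaN where no depth was measured.
    """
    points = xyz_map[mask]                        # N x 3 candidate points
    return points[~np.isnan(points).any(axis=1)]  # drop unmeasured pixels
```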
  • the method further includes step S20: constructing the preset mapping relationship between two-dimensional image information and three-dimensional data, which specifically includes acquiring the two-dimensional image information and the three-dimensional data of the target object with a structured light vision sensor and forming a one-to-one mapping between them, wherein the structured light vision sensor includes a light source and at least one camera.
  • as an example, the light source is a grating projection device. The following embodiment is described using a grating projection device as the example, but it will be understood that other light sources applicable to the embodiments of the present invention for obtaining two-dimensional image information and three-dimensional data are also suitable.
  • combining multiple cameras with the grating projection device reduces the blind areas caused by the mounting-angle limits of each camera and the projection device and improves positioning accuracy; in the embodiments of the present invention, a structured light vision sensor with two cameras is taken as the example.
  • Step S20: using a structured light vision sensor, the two-dimensional image information and the three-dimensional data of the target can be acquired at the same time and mapped one to one, so that both can be used when positioning the target object; this adapts to complex environments, improves the robustness of target positioning, and makes industrial robot positioning more flexible and intelligent.
  • in step S20, acquiring the two-dimensional image information of the target object with the structured light vision sensor includes the following steps:
  • Step S200: project an image whose brightness exceeds a preset brightness and acquire the two-dimensional image information after this lighting; with binocular vision combined with grating structured light, the sensor's own illumination can be used to capture a high-quality two-dimensional image.
  • the preset brightness is set according to projection requirements.
  • the image exceeding the preset brightness may be a white image, a blue image, or the like.
  • in step S20, acquiring the three-dimensional data of the target object with the structured light vision sensor includes the following steps:
  • Step S201 calibrate the internal parameters of each camera and the external parameters between each camera and the raster projection device
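  • the patent does not prescribe a calibration procedure for step S201; one common approach for camera-projector systems is OpenCV checkerboard calibration that treats the grating projection device as an inverse camera whose corner "detections" are recovered by phase decoding. The sketch below rests on that assumption, and all names are illustrative:

```python
import cv2

def calibrate_system(obj_pts, cam_pts, proj_pts, cam_size, proj_size):
    """Calibrate camera intrinsics and camera-projector extrinsics.

    `obj_pts`, `cam_pts`, `proj_pts` are lists of per-view checkerboard
    corner arrays (3-D positions, camera detections, and projector
    coordinates recovered by phase decoding, respectively).
    """
    _, A_c, dist_c, _, _ = cv2.calibrateCamera(obj_pts, cam_pts, cam_size, None, None)
    _, A_p, dist_p, _, _ = cv2.calibrateCamera(obj_pts, proj_pts, proj_size, None, None)
    # Extrinsics (R, T) between the camera and the grating projection device.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, cam_pts, proj_pts, A_c, dist_c, A_p, dist_p, cam_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return A_c, dist_c, A_p, dist_p, R, T
```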
  • Step S202: the grating projection device projects an image and generates a grating sinusoidal image;
  • the gray distribution of the grating sinusoidal image is:
  • I(u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + δ]
  • where (u, v) are the coordinates of a projected pixel on the projection plane; I(u, v) is the gray value at point (u, v); a and b are the DC component (background light intensity) and the amplitude (modulated light intensity) of the sinusoidal grating; φ(u, v) is the grating phase corresponding to I(u, v), whose principal value is to be solved for; and δ is the phase shift.
  • Step S203 the camera takes the projected image to form a camera image
  • Step S204: obtain the principal phase value of the grating sinusoidal image;
  • the principal phase value is the relative (wrapped) phase value.
  • the standard four-step phase-shift method can be used to calculate the principal phase value of the fringe images, though other methods of computing the principal value are equally applicable here.
  • a principal phase value image is calculated from four fringe images of the same frequency, shifted by π/2 with respect to each other.
  • the light intensity of the four fringe images is:
  • Iₙ(u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + (n − 1)·π/2],  n = 1, 2, 3, 4
  • the principal phase value of the fringe images is then:
  • φ̂(u, v) = arctan[(I₄ − I₂)/(I₁ − I₃)]
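  • in code, the wrapped phase is one arctangent per pixel; a minimal NumPy sketch of the standard four-step formula above (array names are illustrative):

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Principal (wrapped) phase from four pi/2-shifted fringe images.

    The inputs are the four camera images as float arrays of equal
    shape; the result is wrapped to (-pi, pi].
    """
    return np.arctan2(i4 - i2, i1 - i3)
```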
  • Step S205: acquire the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
  • from the absolute phase of the vertical fringes, a vertical line in the projected image can be determined, and from the horizontal fringes a horizontal line; the absolute phase values in the two directions together therefore determine a single point in the projected image.
  • let the point in the camera image be (uc, vc); the corresponding point in the projected image, that is, the grating sinusoidal image coordinate point, is denoted (up, vp) and can be calculated with the following formulas:
  • up = Φv·W/(2π·Nv),  vp = Φh·H/(2π·Nh)
  • where Φv and Φh are the absolute phase values in the vertical and horizontal directions at the point (uc, vc); Nv and Nh are the numbers of fringes in the vertical and horizontal fringe images; and W and H are the horizontal and vertical resolutions of the projected fringe image.
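  • a small sketch of this mapping, assuming the absolute phases have already been unwrapped (parameter names are illustrative):

```python
import numpy as np

def projector_coords(phi_v, phi_h, n_v, n_h, width, height):
    """Map absolute phases at a camera pixel to projector coordinates.

    `phi_v`, `phi_h` are the unwrapped phases of the vertical and
    horizontal fringe patterns at (uc, vc); `n_v`, `n_h` the fringe
    counts; `width`, `height` the projected pattern resolution.
    """
    up = phi_v / (2.0 * np.pi * n_v) * width
    vp = phi_h / (2.0 * np.pi * n_h) * height
    return up, vp
```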
  • Step S206: correct the camera image coordinates and the grating sinusoidal image coordinates;
  • (uc, vc) and (up, vp) are corrected using the pre-calibrated distortion parameters of the system.
  • the second-order radial distortion model is:
  • x′ = x·(1 + k₁·r² + k₂·r⁴),  y′ = y·(1 + k₁·r² + k₂·r⁴),  r² = x² + y²
  • where (x, y) are normalized image coordinates and k₁, k₂ are the calibrated radial distortion coefficients.
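  • undoing the distortion means inverting this model; a common one-iteration approximation divides by the distortion factor evaluated at the distorted coordinates, as in the sketch below (an assumption of this example, not a step stated in the patent):

```python
def undistort_radial(x, y, k1, k2):
    """Approximate second-order radial distortion correction.

    (x, y) are distorted normalized image coordinates; k1, k2 are the
    pre-calibrated radial distortion parameters.
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x / factor, y / factor  # one-step approximate inverse
```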
  • Step S207: acquire the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
  • a line through the camera's optical center and the image point is computed for each device; the corresponding three-dimensional coordinates are then calculated from this information by the principle of triangulation, using the projection equations:
  • Sc·[uc, vc, 1]ᵀ = Ac·[Rc tc]·[Xw, Yw, Zw, 1]ᵀ,  Sp·[up, vp, 1]ᵀ = Ap·[Rp tp]·[Xw, Yw, Zw, 1]ᵀ
  • where Sc and Sp are the scale factors of the camera and the projector; Ac and Ap are the internal parameter matrices; [Rc tc] and [Rp tp] are the external parameter matrices; and [Xw, Yw, Zw, 1]ᵀ is the homogeneous world coordinate vector of the point.
  • as shown in FIG. 2, O1-xyz and O2-xyz are the two camera space coordinate systems; P1 and P2 are a pair of corresponding (same-name) image points; S1 and S2 are the camera lens centers; W is a point in real space. P1 and S1 determine one straight line in space, P2 and S2 determine another, and the two lines intersect at W.
  • each line can be written with the collinearity equations
  • x = −f·[a₁(X − Xs) + b₁(Y − Ys) + c₁(Z − Zs)] / [a₃(X − Xs) + b₃(Y − Ys) + c₃(Z − Zs)]
  • y = −f·[a₂(X − Xs) + b₂(Y − Ys) + c₂(Z − Zs)] / [a₃(X − Xs) + b₃(Y − Ys) + c₃(Z − Zs)]
  • where X, Y, Z are the three-dimensional coordinates of the target point (the unknowns); x, y, f are the image point coordinates and focal length, known quantities obtained by analyzing the image; Xs, Ys, Zs are the lens center coordinates, known quantities obtained during camera calibration; and aᵢ, bᵢ, cᵢ are the coordinate-system transformation (rotation) parameters, also obtained during camera calibration.
  • each image thus yields two line equations; two images yield four equations in only three unknowns (the three-dimensional point coordinates X, Y, Z), so the three unknowns can be solved, for example in the least-squares sense.
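  • equivalently, W can be computed as the least-squares intersection of the two rays; a minimal sketch, assuming both rays are already expressed in one world frame (for instance with directions obtained as Rᵀ·A⁻¹·[u, v, 1]ᵀ from the calibrated parameters):

```python
import numpy as np

def triangulate(s1, d1, s2, d2):
    """Least-squares intersection of the two spatial lines of FIG. 2.

    `s1`, `s2` are the lens centers S1, S2 and `d1`, `d2` the ray
    directions through the image points P1, P2. Returns the point W
    closest to both rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve t0*d1 - t1*d2 = s2 - s1 for the two ray parameters.
    a = np.stack([d1, -d2], axis=1)                  # 3 x 2 system
    t, *_ = np.linalg.lstsq(a, s2 - s1, rcond=None)
    p1, p2 = s1 + t[0] * d1, s2 + t[1] * d2
    return 0.5 * (p1 + p2)                           # midpoint of closest approach
```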
  • Step S3 Position the posture of the target object according to the three-dimensional data of the target object to grab the target object.
  • the step S3 includes the following steps:
  • Step S31 Position the posture of the target object according to the three-dimensional data of the target object and the preset CAD model of the target object;
  • Each target object has its corresponding CAD model, which is set in advance. By comparing the three-dimensional data of the target object with its corresponding CAD model, the posture of the target object can be located.
  • Step S32 Convert the position and posture of the target object to the robot coordinate system to grab the target object.
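  • once the pose has been located in the sensor frame by CAD-model registration, step S32 is a single rigid-body transform; the sketch below assumes 4x4 homogeneous matrices and a sensor-to-robot hand-eye calibration, which the patent does not detail:

```python
import numpy as np

def to_robot_frame(T_sensor_obj, T_robot_sensor):
    """Convert the located pose into the manipulator coordinate system.

    `T_sensor_obj` is the target pose found by CAD-model registration in
    the sensor frame; `T_robot_sensor` is the sensor-to-robot transform
    from a prior hand-eye calibration (an assumed prerequisite).
    """
    return T_robot_sensor @ T_sensor_obj
```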
  • the system corresponding to the method may include an industrial robot, a host computer, and a three-dimensional vision sensor, where the three-dimensional vision sensor is a structured light vision sensor.
  • the method described in the embodiments of the present invention not only solves the difficulty of computing the three-dimensional coordinates of the target object from two-dimensional image analysis alone, but also avoids the registration ambiguity of relying solely on three-dimensional data, and effectively handles the segmentation problem when target objects are closely arranged.
  • by performing steps S1 to S3, the method achieves accurate and rapid scanning of the three-dimensional contour of the target object in a complex environment and, through the mapping between the three-dimensional data and the two-dimensional image, locates the pose of the target object efficiently and accurately, thereby guiding the industrial robot to grasp the workpiece.
  • the method further includes step S4: during grasping of the target object, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
  • step S4 determining whether the gripping jig interferes with the target object and the material frame where the target object is located includes the following steps:
  • Step S41: collect a stereoscopic image of the gripping jig;
  • Step S42: divide the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the jig;
  • Step S43: judge whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
  • Step S44: by analyzing the pose of the target object, judge whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
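  • a minimal sketch of the two intersection tests, under the simplifying assumptions that the cuboids are axis-aligned and the target is represented by its measured 3-D points (posed cuboids would first require transforming the points into each cuboid's local frame):

```python
import numpy as np
from itertools import product

def intersects_plane(box_min, box_max, normal, offset):
    """True if the cuboid crosses the plane n.x + offset = 0 (a frame wall)."""
    corners = np.array(list(product(*zip(box_min, box_max))))  # 8 corners
    d = corners @ np.asarray(normal) + offset
    return d.min() <= 0.0 <= d.max()  # corners straddle the plane

def intersects_object(box_min, box_max, points):
    """True if any measured 3-D point of the posed target lies inside the cuboid."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return bool(inside.any())
```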
  • through step S4, the industrial robot not only avoids interference between the jig and the material frame or the workpiece during grasping, but also ensures that workpieces are grasped to the greatest possible extent.
  • An embodiment of the present invention also provides an industrial robot positioning device based on structured light. As shown in FIG. 3, it includes an acquisition module 1, a conversion module 2, and a positioning module 3.
  • the acquisition module 1 is used to acquire the two-dimensional image information of the preset area and to segment from it the pixel area where the target object is located;
  • the conversion module 2 is used to convert the pixel area into the three-dimensional data corresponding to the target object according to the preset mapping relationship between two-dimensional image information and three-dimensional data;
  • the positioning module 3 is used to locate the posture of the target object according to the three-dimensional data of the target object to grab the target object.
  • the device described in the embodiments of the present invention not only solves the difficulty of computing the three-dimensional coordinates of the target object from two-dimensional image analysis alone, but also avoids the registration ambiguity of relying solely on three-dimensional data, and effectively handles the segmentation problem when target objects are closely arranged.
  • the device can accurately and rapidly scan the three-dimensional contour of the target object in a complex environment and, through the mapping between the three-dimensional data and the two-dimensional image, locate the pose of the target object efficiently and accurately, thereby guiding the industrial robot to grasp the workpiece.
  • the acquisition module 1 is also used to segment the pixel area where the target object is located by AI segmentation or image recognition of the two-dimensional image information.
  • AI segmentation or image recognition technology is an existing image processing technology, which will not be repeated here.
  • the two-dimensional image information may include one or more target objects, and the pixel area corresponding to each target object may be separated by AI segmentation or image recognition technology.
  • in industrial applications, as an example, the preset area is the area inside a material frame, and the target object is an object to be grasped in the frame; objects in the material frame may be called workpieces, the target object being the target workpiece.
  • the device further includes a construction module for constructing a mapping relationship between the preset two-dimensional image information and three-dimensional data.
  • the construction module includes a first acquisition unit, a second acquisition unit, and a mapping unit, wherein the first acquisition unit is used to acquire the two-dimensional image information of the target object with the structured light vision sensor, and the second acquisition unit is used to acquire the three-dimensional data with the structured light vision sensor;
  • the mapping unit is used to form a one-to-one mapping between the two-dimensional image information and the three-dimensional data, so that both can be used when locating the target object, thereby adapting to complex environments, improving the robustness of target positioning, and making industrial robot positioning more flexible and intelligent.
  • the structured light vision sensor includes a light source and at least one camera.
  • the light source is a grating projection device.
  • the following embodiment is described using a grating projection device as the example, but it will be understood that other light sources applicable to the embodiments of the present invention for obtaining two-dimensional image information and three-dimensional data are also suitable.
  • combining multiple cameras with the grating projection device reduces the blind areas caused by the mounting-angle limits of each camera and the projection device and improves positioning accuracy; in the embodiments of the present invention, a structured light vision sensor with two cameras is taken as the example.
  • the first acquisition unit is specifically used to project an image whose brightness exceeds a preset brightness and to acquire the two-dimensional image information after this lighting; with binocular vision combined with grating structured light, the sensor's own illumination can be used to capture a high-quality two-dimensional image, the preset brightness being set according to projection requirements.
  • the image exceeding the preset brightness may be a white image, a blue image, or the like.
  • the second acquisition unit is specifically used to acquire the three-dimensional data of the target object with the structured light vision sensor; it includes a calibration subunit, a first image acquisition subunit, a second image acquisition subunit, a principal phase value acquisition subunit, a first coordinate acquisition subunit, a correction subunit, and a second coordinate acquisition subunit, where:
  • the calibration subunit is used to calibrate the internal parameters of each camera and the external parameters between each camera and the grating projection device;
  • the first image acquisition subunit is configured to project an image through the grating projection device and generate a grating sinusoidal image;
  • the gray distribution of the grating sinusoidal image is, as above:
  • I(u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + δ]
  • where (u, v) are the coordinates of a projected pixel on the projection plane; I(u, v) is the gray value at point (u, v); a and b are the DC component (background light intensity) and the amplitude (modulated light intensity) of the sinusoidal grating; φ(u, v) is the grating phase corresponding to I(u, v), whose principal value is to be solved for; and δ is the phase shift.
  • the second image acquisition subunit is configured to capture the projected image with the camera to form a camera image;
  • the principal phase value acquisition subunit is used to obtain the principal phase value of the grating sinusoidal image;
  • a principal phase value image is calculated from four fringe images of the same frequency, shifted by π/2 with respect to each other;
  • the light intensity of the four fringe images is Iₙ(u, v) = a(u, v) + b(u, v)·cos[φ(u, v) + (n − 1)·π/2], n = 1, 2, 3, 4, and the principal phase value is φ̂(u, v) = arctan[(I₄ − I₂)/(I₁ − I₃)].
  • the first coordinate acquisition subunit is configured to obtain the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
  • from the absolute phase of the vertical fringes, a vertical line in the projected image can be determined, and from the horizontal fringes a horizontal line; the absolute phase values in the two directions together determine a single point in the projected image.
  • let the point in the camera image be (uc, vc); the corresponding point in the projected image, that is, the grating sinusoidal image coordinate point (up, vp), is calculated as
  • up = Φv·W/(2π·Nv),  vp = Φh·H/(2π·Nh)
  • where Φv and Φh are the absolute phase values in the vertical and horizontal directions at the point (uc, vc); Nv and Nh are the numbers of fringes in the vertical and horizontal fringe images; and W and H are the horizontal and vertical resolutions of the projected fringe image.
  • the correction subunit corrects (uc, vc) and (up, vp) using the pre-calibrated distortion parameters of the system;
  • the second-order radial distortion model is x′ = x·(1 + k₁·r² + k₂·r⁴), y′ = y·(1 + k₁·r² + k₂·r⁴), with r² = x² + y².
  • the second coordinate acquisition subunit is configured to acquire the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
  • a line through the camera's optical center and the image point is computed for each device, and the corresponding three-dimensional coordinates are then calculated by the principle of triangulation from the projection equations
  • Sc·[uc, vc, 1]ᵀ = Ac·[Rc tc]·[Xw, Yw, Zw, 1]ᵀ,  Sp·[up, vp, 1]ᵀ = Ap·[Rp tp]·[Xw, Yw, Zw, 1]ᵀ
  • where Sc and Sp are the scale factors of the camera and the projector; Ac and Ap are the internal parameter matrices; [Rc tc] and [Rp tp] are the external parameter matrices; and [Xw, Yw, Zw, 1]ᵀ is the homogeneous world coordinate vector.
  • as shown in FIG. 2, O1-xyz and O2-xyz are the two camera space coordinate systems; P1 and P2 are a pair of corresponding (same-name) image points; S1 and S2 are the camera lens centers; W is a point in real space. P1 and S1 determine one straight line in space, P2 and S2 determine another, and the two lines intersect at W.
  • each line is described by the collinearity equations given above, in which X, Y, Z are the three-dimensional coordinates of the target point (the unknowns); x, y, f are the image point coordinates and focal length, known quantities obtained by analyzing the image; Xs, Ys, Zs are the lens center coordinates, known from camera calibration; and aᵢ, bᵢ, cᵢ are the coordinate-system transformation parameters, also known from camera calibration.
  • each image thus yields two line equations; two images yield four equations in only three unknowns (X, Y, Z), so the three unknowns can be solved.
  • the positioning module 3 includes a pose acquisition unit and a coordinate conversion unit, wherein the pose acquisition unit is used to locate the pose of the target object according to its three-dimensional data and the preset CAD model of the target object, and the coordinate conversion unit is used to convert the pose of the target object into the manipulator coordinate system to grasp it.
  • Each target object has its corresponding CAD model, which is set in advance. By comparing the three-dimensional data of the target object with its corresponding CAD model, the posture of the target object can be located.
  • the device further includes a detection module for judging, during grasping of the target object, whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
  • the detection module includes an image acquisition unit, a dividing unit, a first judgment unit, and a second judgment unit, where:
  • the image acquisition unit is used to collect a stereoscopic image of the gripping jig;
  • the dividing unit is used to divide the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the jig;
  • the first judgment unit is used to judge whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
  • the second judgment unit is used to judge, by analyzing the pose of the target object, whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
  • in this way the industrial robot avoids interference between the jig and the material frame or the workpiece during grasping while still ensuring that workpieces are grasped to the greatest possible extent.
  • the method and device of the present invention are based on visually guided recognition and positioning for industrial robots. They can accurately and rapidly scan the three-dimensional contour of the target object in a complex environment and, through the one-to-one mapping and joint analysis of three-dimensional data and two-dimensional image data, efficiently and accurately locate the pose of the target object and guide the industrial robot to grasp it, adapting to complex and changing industrial application environments and thereby improving the production efficiency of industrial assembly and loading/unloading in such environments. In addition, during grasping, they analyze whether the gripping jig interferes with the target object or with the material frame where it is located and make corresponding adjustments, making industrial production with industrial robots more flexible and intelligent.
  • An embodiment of the present invention also provides a controller, which includes a memory and a processor, where the memory stores a computer program which, when executed by the processor, implements the steps of the structured-light-based industrial robot positioning method.
  • An embodiment of the present invention also provides a computer-readable storage medium for storing a computer program, which when executed by a computer or processor implements the steps of the structured light-based industrial robot positioning method.

Abstract

The present invention relates to a structured-light-based locating method and apparatus for an industrial robot, and a controller and a medium. The method comprises: acquiring two-dimensional image information of a pre-set area, and segmenting, from the two-dimensional image information of the pre-set area, a pixel area where a target object is located; according to a pre-set mapping relationship between two-dimensional image information and three-dimensional data, converting the pixel area into three-dimensional data corresponding to the target object; and according to the three-dimensional data of the target object, locating a pose of the target object, so as to capture the target object. According to the present invention, based on recognition and locating of a vision-guided industrial robot, a three-dimensional profile of a target object can be accurately and rapidly scanned in a complex environment, and by means of one-to-one mapping and joint analysis of three-dimensional data and data of a two-dimensional image, a pose of the target object is efficiently and accurately located so as to guide the industrial robot to capture the target object.

Description

Industrial robot positioning method and device, controller, and medium based on structured light
Technical field
The invention relates to the technical field of robot vision, and in particular to a structured-light-based method, device, controller, and medium for positioning an industrial robot.
Background
With the rapid development of industrial automation, industrial robots are becoming more and more common. Most industrial robot application scenarios in the prior art require manual teaching or offline programming to plan the robot's working path in advance; this highly structured working mode strictly limits the flexibility and intelligence of industrial robots.
Existing industrial robot positioning methods fall into two categories according to the type of data used: (1) recognizing the target in a two-dimensional image by comparison with a template, then extracting three-dimensional data from the recognized image region, or obtaining a local plane of the target through a distance sensor, to compute the target's pose. This approach depends heavily on the quality of the captured image, and the complex lighting changes of an industrial production environment make it difficult to adapt to actual production. (2) Comparing three-dimensional data directly with a CAD model. This approach no longer depends on the quality of the acquired two-dimensional image, but when multiple workpieces are stacked together it easily causes registration ambiguity, which degrades the stability of the template comparison.
In summary, existing industrial robot positioning methods are difficult to apply in complex and changeable industrial environments and cannot meet the needs of flexible production.
Summary of the invention
The technical problem to be solved by the present invention is to provide a structured-light-based industrial robot positioning method and device, controller, and medium. Based on visually guided recognition and positioning of the industrial robot, they can accurately and rapidly scan the three-dimensional contour of a target object in a complex environment and, through the one-to-one mapping and joint analysis of three-dimensional data and two-dimensional image data, efficiently and accurately locate the pose of the target object and guide the industrial robot to grasp it.
In order to solve the above technical problem, according to a first embodiment of the present invention, there is provided a method for positioning an industrial robot based on structured light, including the following steps:
acquiring two-dimensional image information of a preset area, and segmenting, from the two-dimensional image information of the preset area, the pixel area where a target object is located;
converting the pixel area into the three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
locating the pose of the target object according to its three-dimensional data so as to grasp the target object.
Further, segmenting the pixel area where the target object is located from the two-dimensional image information of the preset area includes: processing the two-dimensional image information by AI segmentation or image recognition to segment out the pixel area where the target object is located.
Further, the method also includes constructing the preset mapping relationship between two-dimensional image information and three-dimensional data, which specifically includes: acquiring the two-dimensional image information and the three-dimensional data of the target object with a structured light vision sensor and forming a one-to-one mapping between them, wherein the structured light vision sensor includes a light source and at least one camera.
Further, acquiring the two-dimensional image information of the target object with the structured light vision sensor includes: projecting an image whose brightness exceeds a preset brightness and acquiring the two-dimensional image information after this lighting.
Further, the light source is a grating projection device, and acquiring the three-dimensional data of the target object with the structured light vision sensor includes the following steps:
calibrating the internal parameters of each camera and the external parameters between each camera and the grating projection device;
projecting an image with the grating projection device and generating a grating sinusoidal image;
capturing the projected image with the camera to form a camera image;
acquiring the principal phase value of the grating sinusoidal image;
acquiring the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
correcting the camera image coordinates and the grating sinusoidal image coordinates;
acquiring the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
Further, locating the pose of the target object according to its three-dimensional data so as to grasp it includes the following steps:
locating the pose of the target object according to its three-dimensional data and a preset CAD model of the target object;
converting the pose of the target object into the manipulator coordinate system to grasp the target object.
Further, the method also includes: during grasping of the target object, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
Further, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located includes the following steps:
collecting a stereoscopic image of the gripping jig;
dividing the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the gripping jig;
judging whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
judging, by analyzing the pose of the target object, whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
According to a second embodiment of the present invention, there is provided an industrial robot positioning device based on structured light, including:
an acquisition module, configured to acquire two-dimensional image information of a preset area and to segment, from the two-dimensional image information of the preset area, the pixel area where a target object is located;
a conversion module, configured to convert the pixel area into the three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
a positioning module, configured to locate the pose of the target object according to its three-dimensional data so as to grasp the target object.
Further, the acquisition module is also configured to process the two-dimensional image information by AI segmentation or image recognition to segment out the pixel area where the target object is located.
Further, the device also includes a construction module for constructing the preset mapping relationship between two-dimensional image information and three-dimensional data. The construction module includes:
a first acquisition unit, used to acquire the two-dimensional image information of the target object with a structured light vision sensor;
a second acquisition unit, used to acquire the three-dimensional data with the structured light vision sensor;
a mapping unit, used to form a one-to-one mapping between the two-dimensional image information and the three-dimensional data;
wherein the structured light vision sensor includes a light source and at least one camera.
Further, the first acquisition unit is specifically configured to project an image whose brightness exceeds a preset brightness and to acquire the two-dimensional image information after this lighting.
Further, the light source is a grating projection device, and the second acquisition unit is specifically configured to acquire the three-dimensional data of the target object with the structured light vision sensor. The second acquisition unit includes:
a calibration subunit, used to calibrate the internal parameters of each camera and the external parameters between each camera and the grating projection device;
a first image acquisition subunit, configured to project an image through the grating projection device and generate a grating sinusoidal image;
a second image acquisition subunit, configured to capture the projected image with the camera to form a camera image;
a principal phase value acquisition subunit, used to acquire the principal phase value of the grating sinusoidal image;
a first coordinate acquisition subunit, configured to acquire the grating sinusoidal image coordinates from the principal phase value and the camera image coordinates;
a correction subunit, used to correct the camera image coordinates and the grating sinusoidal image coordinates;
a second coordinate acquisition subunit, configured to acquire the three-dimensional coordinates of the target object from the corrected camera image coordinates, the grating sinusoidal image coordinates, and the internal and external parameters.
Further, the positioning module includes:
a pose acquisition unit, used to locate the pose of the target object according to its three-dimensional data and a preset CAD model of the target object;
a coordinate conversion unit, used to convert the pose of the target object into the manipulator coordinate system to grasp the target object.
Further, the device also includes a detection module for determining, during grasping of the target object, whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
Further, the detection module includes:
an image acquisition unit, used to collect a stereoscopic image of the gripping jig;
a dividing unit, used to divide the stereoscopic image of the gripping jig into a plurality of cuboids, each of which describes the structural features of one part of the gripping jig;
a first judgment unit, used to judge whether each cuboid intersects any plane of the material frame; if so, that cuboid interferes with the material frame;
a second judgment unit, used to judge, by analyzing the pose of the target object, whether each cuboid intersects the target object; if so, that cuboid interferes with the target object.
According to a third embodiment of the present invention, there is provided a controller including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the method.
According to a fourth embodiment of the present invention, there is provided a computer-readable storage medium for storing a computer program which, when executed by a computer or processor, implements the steps of the method.
Compared with the prior art, the invention has clear advantages and beneficial effects. Through the above technical solutions, the structured-light-based industrial robot positioning method and device, controller, and medium of the present invention achieve considerable technical progress and practicality and have broad industrial value; they offer at least the following advantages.
The invention is based on visually guided recognition and positioning for industrial robots. It can accurately and rapidly scan the three-dimensional contour of a target object in a complex environment and, through the one-to-one mapping and joint analysis of three-dimensional data and two-dimensional image data, efficiently and accurately locate the pose of the target object, guide the industrial robot to grasp it, and adapt to complex and changing industrial application environments, thereby improving the production efficiency of industrial assembly and loading/unloading in such environments. In addition, during grasping of the target object, the method analyzes whether the gripping jig interferes with the target object or with the material frame where it is located and makes corresponding adjustments, making industrial production with industrial robots more flexible and intelligent.
The above description is only an overview of the technical solutions of the present invention. In order to understand the technical means of the present invention more clearly, it can be implemented in accordance with the content of the specification; and to make the above and other objects, features, and advantages of the present invention more apparent, preferred embodiments are described in detail below in conjunction with the drawings.
Brief description of the drawings
FIG. 1 is a schematic diagram of a structured-light-based industrial robot positioning method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of the two camera space coordinate systems according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a structured-light-based industrial robot positioning device provided by an embodiment of the present invention.
Description of symbols: 1: acquisition module; 2: conversion module; 3: positioning module.
Detailed description
To further explain the technical means and effects adopted by the present invention to achieve its intended purpose, specific embodiments of a structured-light-based industrial robot positioning method and device, controller, and medium according to the present invention, and their effects, are described in detail below in conjunction with the drawings and preferred embodiments.
本发明实施例提供了一种基于结构光的工业机器人定位方法,如图1所示,包括以下步骤:An embodiment of the present invention provides an industrial robot positioning method based on structured light, as shown in FIG. 1, including the following steps:
步骤S1、获取预设区域的二维图像信息,并从所述预设区域的二维图像信息中分割出目标物体所在的像素区域;Step S1: Obtain the two-dimensional image information of the preset area, and segment the pixel area where the target object is located from the two-dimensional image information of the preset area;
作为示例,所述步骤S1还包括以下步骤:As an example, the step S1 further includes the following steps:
通过AI(Artificial Intelligence人工智能)分割或图像识别所述二维图像信息,分割出所述目标物体所在的像素区域,需要说明的是AI分割或图像识别技术为现有的图像处理技术手段,在此不再赘述。二维图像信息可包括一个或多个目标物体,通过AI分割或图像识别技术可将每一目标物体对应的像素区域分离出来。在工业应用中,作为一种示例,所述预设区域为料框内区域,所述目标物体为料框内待抓取的物体,料框内物体可称为工件,目标物体为目标工件。By AI (Artificial Intelligence) segmentation or image recognition of the two-dimensional image information, the pixel area where the target object is located is divided. It should be noted that AI segmentation or image recognition technology is an existing image processing technology. This will not be repeated here. The two-dimensional image information may include one or more target objects, and the pixel region corresponding to each target object may be separated by AI segmentation or image recognition technology. In industrial applications, as an example, the preset area is an area in the material frame, the target object is an object to be grasped in the material frame, the object in the material frame may be referred to as a workpiece, and the target object is the target workpiece.
Step S2: According to a preset mapping relationship between two-dimensional image information and three-dimensional data, convert the pixel region into the three-dimensional data corresponding to the target object.
The method further includes step S20: constructing the preset mapping relationship between two-dimensional image information and three-dimensional data. Specifically, a structured light vision sensor is used to acquire the two-dimensional image information and three-dimensional data of the target object, and a one-to-one mapping is formed between them. The structured light vision sensor includes a light source and at least one camera. As an example, the light source is a grating projection device; the following embodiments take a grating projection device as an example, but it is to be understood that other light sources applicable to embodiments of the present invention for acquiring two-dimensional image information and three-dimensional data are also suitable. Combining multiple cameras with the grating projection device reduces the blind zones caused by the installation-angle constraints between each camera and the grating projection device and improves positioning accuracy; in the embodiments of the present invention, a structured light vision sensor including two cameras is taken as an example.
In step S20, the structured light vision sensor acquires the two-dimensional image information and three-dimensional data of the target simultaneously and forms a one-to-one mapping between them, so that both the two-dimensional image information and the three-dimensional data of the target object can be used when locating it. This adapts to complex environments and improves the robustness of target object positioning, making industrial robot positioning more flexible and intelligent.
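A minimal sketch of how such a one-to-one mapping can be held in memory, assuming (for illustration only; the disclosure fixes no data layout) that the sensor returns an organized point cloud on the same H x W pixel grid as the image:

import numpy as np

def pixels_to_points(pixel_mask, organized_points):
    # pixel_mask: boolean H x W mask of one segmented object;
    # organized_points: H x W x 3 array sharing the image's pixel grid,
    # with NaN marking pixels that produced no valid 3D return.
    selected = organized_points[pixel_mask]      # (N, 3) candidate points
    valid = ~np.isnan(selected).any(axis=1)      # drop invalid pixels
    return selected[valid]                       # 3D data of the region

Because image and point cloud share indexing, the pixel region from step S1 selects its three-dimensional data directly, which is what the mapping of step S2 relies on.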
In step S20, acquiring the two-dimensional image information of the target object with the structured light vision sensor includes the following step:
Step S200: Project an image whose brightness exceeds a preset brightness, and acquire the two-dimensional image information after lighting. Using binocular vision together with grating structured light, a high-quality two-dimensional image can be captured under the sensor's own illumination. The preset brightness is set according to projection requirements; as an example, the image exceeding the preset brightness may be a white image, a blue image, or the like.
In step S20, acquiring the three-dimensional data of the target object with the structured light vision sensor includes the following steps:
Step S201: Calibrate the intrinsic parameters of each camera, and the extrinsic parameters between each camera and the grating projection device;
Step S202: The grating projection device projects an image and generates a sinusoidal grating image;
The gray distribution of the sinusoidal grating image is:
I(u,v) = a + b*cos(θ(u,v)),  θ(u,v) = φ(u,v) + α
where (u,v) are the coordinates of a projected pixel on the projection plane, I(u,v) is the gray value at point (u,v), a and b are respectively the DC component (background intensity) and the amplitude (modulation intensity) of the sinusoidal grating, θ(u,v) is the grating phase corresponding to I(u,v), φ is the phase principal value to be solved, and α is the phase shift.
Step S203: The camera photographs the projected image to form a camera image;
Step S204: Obtain the phase principal value of the sinusoidal grating image;
The phase principal value is the relative phase value. As an example, the standard four-step phase-shift method can be used to calculate the phase principal value of the grating image, but it is to be understood that other methods of calculating the phase principal value are also applicable here.
One phase principal value image is calculated from four grating images of the same frequency. The intensity expressions of the four grating images are:
I_i(u,v) = a + b*cos(θ_i(u,v)),  θ_i(u,v) = φ(u,v) + (π/2)*i,  i ∈ {0,1,2,3}
The phase principal value of the grating image is then:
φ(u,v) = atan2(I_3 - I_1, I_0 - I_2)
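A direct numpy transcription of this four-step computation (a sketch; the function name is chosen here), with I0..I3 the camera images of the fringe pattern at shifts 0, π/2, π, and 3π/2:

import numpy as np

def phase_principal_value(I0, I1, I2, I3):
    # Wrapped (relative) phase phi = atan2(I3 - I1, I0 - I2), in (-pi, pi].
    I0, I1, I2, I3 = (np.asarray(I, dtype=np.float64) for I in (I0, I1, I2, I3))
    return np.arctan2(I3 - I1, I0 - I2)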
Step S205: Obtain the sinusoidal grating image coordinates from the phase principal value and the camera image coordinates;
From the absolute phase value in one direction, a vertical or horizontal line in the projected image can be determined; from the absolute phase values in both the horizontal and vertical directions, a single point in the projected image can be determined.
From a point in the camera images of the vertical and horizontal grating fringes, denoted (uc, vc), the corresponding point in the projected image, i.e., the sinusoidal grating image coordinate point, denoted (up, vp), can be determined with the following formulas:
up = Φv(uc,vc)*W / (2π*Nv),  vp = Φh(uc,vc)*H / (2π*Nh)
where Φv and Φh are respectively the absolute phase values in the vertical and horizontal directions at point (uc, vc) in the grating images, Nv and Nh are respectively the numbers of grating fringes in the vertical and horizontal camera grating images, and H and W are respectively the vertical and horizontal resolutions of the camera grating image.
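As a sketch of this step, assuming the standard linear scaling between absolute phase and projector coordinate shown above:

import numpy as np

def projector_coords(phi_v, phi_h, Nv, Nh, W, H):
    # Nv vertical fringes span the width W and Nh horizontal fringes span
    # the height H, so each absolute phase maps linearly to a coordinate.
    up = phi_v * W / (2.0 * np.pi * Nv)
    vp = phi_h * H / (2.0 * np.pi * Nh)
    return up, vp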
Step S206: Correct the camera image coordinates and the sinusoidal grating image coordinates;
As an example, (uc, vc) and (up, vp) are corrected using pre-calibrated system distortion parameters. The second-order radial distortion correction formulas are:
u' = u + (u - u0)[k1(x^2 + y^2) + k2(x^2 + y^2)^2]
v' = v + (v - v0)[k1(x^2 + y^2) + k2(x^2 + y^2)^2]
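A direct transcription of these correction formulas as a Python sketch, with (x, y) the normalized coordinates of the point and k1, k2 the pre-calibrated radial distortion coefficients:

def correct_radial(u, v, u0, v0, k1, k2, x, y):
    # Second-order radial model: shift each coordinate away from the
    # principal point (u0, v0) by a polynomial in r^2 = x^2 + y^2.
    r2 = x * x + y * y
    scale = k1 * r2 + k2 * r2 * r2
    return u + (u - u0) * scale, v + (v - v0) * scale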
Step S207: Obtain the three-dimensional coordinates of the target object from the corrected camera image coordinates, the sinusoidal grating image coordinates, and the intrinsic and extrinsic parameters.
From the corrected camera image coordinates (uc, vc) and sinusoidal grating image coordinates (up, vp), together with the pre-calibrated intrinsic and extrinsic parameters, the line through the camera optical center and the image point is solved for each view, and the corresponding three-dimensional coordinates are then calculated by the principle of triangulation:
Sc*[uc, vc, 1]^T = Ac*[Rc tc]*[Xw, Yw, Zw, 1]^T
Sp*[up, vp, 1]^T = Ap*[Rp tp]*[Xw, Yw, Zw, 1]^T
where Sc and Sp are respectively the scale factors of the camera and the projector, Ac and Ap are the intrinsic parameter matrices, [Rc tc] and [Rp tp] are the extrinsic parameter matrices, and (Xw, Yw, Zw, 1) is the homogeneous space vector.
The calculation of the corresponding three-dimensional coordinates by the principle of triangulation is explained in detail below:
In FIG. 2, O1-xyz and O2-xyz are the spatial coordinate systems of the two cameras; P1 and P2 are a pair of corresponding points; S1 and S2 are the centers of the camera lenses; W is a point in real space. P1 and S1 determine one straight line in space, P2 and S2 determine another, and the two lines intersect at W in space.
Spatial straight line: after the camera captures an image, an image point on the camera CCD and the center of the camera lens determine a straight line, as shown in FIG. 2. The coordinates of both the image point and the lens center are in the camera coordinate system, and the spatial line through these two points satisfies:
(X - Xs)/(a1*x + a2*y - a3*f) = (Y - Ys)/(b1*x + b2*y - b3*f)
(Y - Ys)/(b1*x + b2*y - b3*f) = (Z - Zs)/(c1*x + c2*y - c3*f)
where X, Y, Z are the three-dimensional coordinates of the target point and are unknowns; x, y, f are the image point coordinates and are known quantities (obtained by analyzing the image); Xs, Ys, Zs are the lens center coordinates and are known quantities (obtained during camera calibration); and a_i, b_i, c_i (i = 1, 2, 3) are coordinate system transformation parameters and are known quantities (obtained during camera calibration). Each image yields one line equation (two scalar equations), so two images yield four equations in only three unknowns (the three-dimensional point coordinates X, Y, Z); the unknowns can therefore be solved.
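A numerical sketch of the intersection: measured rays rarely meet exactly, so a common implementation choice (an assumption here, not a quotation of the disclosure) is the midpoint of closest approach between the two lines:

import numpy as np

def triangulate_rays(s1, d1, s2, d2):
    # Each ray runs from a lens centre s_i along direction d_i (through the
    # image point). Solve for parameters t1, t2 minimising
    # |(s1 + t1*d1) - (s2 + t2*d2)|, then return the midpoint.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)                  # 3x2 linear system
    t, *_ = np.linalg.lstsq(A, s2 - s1, rcond=None)
    p1 = s1 + t[0] * d1
    p2 = s2 + t[1] * d2
    return (p1 + p2) / 2.0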
Step S3: Locate the pose of the target object from its three-dimensional data and thereby grasp the target object.
As an example, step S3 includes the following steps:
Step S31: Locate the pose of the target object from the three-dimensional data of the target object and a preset CAD model of the target object;
Each target object has its own corresponding CAD model, set in advance. By comparing the three-dimensional data of the target object with its corresponding CAD model, the pose of the target object can be located.
Step S32: Transform the pose of the target object into the manipulator coordinate system and grasp the target object.
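A minimal sketch of step S32 with homogeneous transforms, assuming a hand-eye calibration matrix T_base_cam is already known; the numeric values below are placeholders for illustration, not calibration data:

import numpy as np

def pose_to_robot_frame(T_cam_obj, T_base_cam):
    # Chain the object pose in the camera frame with the camera pose in
    # the robot base frame; both are 4x4 homogeneous matrices.
    return T_base_cam @ T_cam_obj

T_cam_obj = np.eye(4)
T_cam_obj[:3, 3] = [0.10, -0.05, 0.80]   # object 0.8 m in front of the camera
T_base_cam = np.eye(4)
T_base_cam[:3, 3] = [0.50, 0.00, 1.20]   # camera mounted above the work cell
print(pose_to_robot_frame(T_cam_obj, T_base_cam))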
The system corresponding to the method may include an industrial robot, a host computer, and a three-dimensional vision sensor, the three-dimensional vision sensor being a structured light vision sensor. Compared with existing methods that rely solely on two-dimensional images or solely on three-dimensional data, the method of the embodiments of the present invention not only solves the difficulty of computing the three-dimensional coordinates of a target object from two-dimensional image analysis alone, but also avoids the difficulty, when relying on three-dimensional data alone, of effectively handling segmentation when target objects are closely packed. By performing steps S1 to S3, the method achieves accurate and rapid scanning of the three-dimensional contour of the target object in a complex environment and, through the mapping relationship between the three-dimensional data and the two-dimensional image, locates the pose of the target object efficiently and accurately, thereby guiding the industrial robot to grasp the workpiece.
In applications where structured light guides industrial robot recognition and positioning, analyzing the interference between the jig and the material frame, and between the jig and the workpiece, during grasping is an important research topic and an open problem, because the variety of workpiece poses within the material frame leads to irregularity in jig design. On this basis, the method further includes step S4: during grasping of the target object, determine whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, readjust the position of the jig before grasping.
In step S4, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located includes the following steps:
Step S41: Capture a stereoscopic image of the gripping jig;
Step S42: Divide the stereoscopic image of the gripping jig into a plurality of cuboids, each cuboid describing the structural features of one part of the gripping jig;
Step S43: Determine whether each cuboid has an intersection with each plane of the material frame; if so, that cuboid interferes with the material frame;
Step S44: By analyzing the pose of the target object, determine whether each cuboid has an intersection with the target object; if so, that cuboid interferes with the target object.
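An illustrative sketch of the tests in steps S43 and S44, under the simplifying assumption that each part of the jig is given as the eight world-frame corners of a cuboid and each wall of the material frame as a point-and-normal plane:

import numpy as np

def box_crosses_plane(corners, plane_point, plane_normal):
    # A cuboid intersects a plane iff its eight corners do not all lie on
    # the same side; corners is an (8, 3) array of world coordinates.
    signed = (corners - plane_point) @ plane_normal
    return signed.min() < 0.0 < signed.max()

def jig_interferes_with_frame(jig_boxes, frame_planes):
    # Check every cuboid of the jig model against every frame plane; any
    # crossing means interference, so the jig position must be readjusted.
    return any(
        box_crosses_plane(corners, point, normal)
        for corners in jig_boxes
        for (point, normal) in frame_planes
    )

The intersection test against the target object in step S44 follows the same pattern, with the object's pose-transformed geometry in place of the frame planes.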
Through step S4, the industrial robot can avoid interference between the jig, the material frame, and the workpiece while grasping, and at the same time ensure that the workpiece is grasped to the greatest possible extent.
An embodiment of the present invention also provides a structured-light-based industrial robot positioning device which, as shown in FIG. 3, includes an acquisition module 1, a conversion module 2, and a positioning module 3. The acquisition module 1 is configured to acquire two-dimensional image information of a preset area and segment the pixel region where the target object is located from that two-dimensional image information; the conversion module 2 is configured to convert the pixel region into the three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data; the positioning module 3 is configured to locate the pose of the target object from its three-dimensional data and thereby grasp the target object. Compared with existing methods that rely solely on two-dimensional images or solely on three-dimensional data, the device of the embodiments of the present invention not only solves the difficulty of computing the three-dimensional coordinates of a target object from two-dimensional image analysis alone, but also avoids the difficulty, when relying on three-dimensional data alone, of effectively handling segmentation when target objects are closely packed. The device achieves accurate and rapid scanning of the three-dimensional contour of the target object in a complex environment and, through the mapping relationship between the three-dimensional data and the two-dimensional image, locates the pose of the target object efficiently and accurately, thereby guiding the industrial robot to grasp the workpiece.
As an example, the acquisition module 1 is further configured to segment the pixel region where the target object is located from the two-dimensional image information by AI segmentation or image recognition. It should be noted that AI segmentation and image recognition are existing image processing techniques and are not described further here. The two-dimensional image information may include one or more target objects, and the pixel region corresponding to each target object can be separated by AI segmentation or image recognition. In industrial applications, as an example, the preset area is the area inside a material frame, and the target object is an object to be grasped in the material frame; objects in the material frame may be called workpieces, and the target object is the target workpiece.
The device further includes a construction module configured to construct the preset mapping relationship between two-dimensional image information and three-dimensional data.
The construction module includes a first acquisition unit, a second acquisition unit, and a mapping unit. The first acquisition unit is configured to acquire the two-dimensional image information of the target object using a structured light vision sensor; the second acquisition unit is configured to acquire the three-dimensional data using the structured light vision sensor; the mapping unit is configured to form a one-to-one mapping between the two-dimensional image information and the three-dimensional data, so that both can be used when locating the target object, adapting to complex environments, improving the robustness of target object positioning, and making industrial robot positioning more flexible and intelligent. The structured light vision sensor includes a light source and at least one camera. As an example, the light source is a grating projection device; the following embodiments take a grating projection device as an example, but it is to be understood that other light sources applicable to embodiments of the present invention for acquiring two-dimensional image information and three-dimensional data are also suitable. Combining multiple cameras with the grating projection device reduces the blind zones caused by the installation-angle constraints between each camera and the grating projection device and improves positioning accuracy; in the embodiments of the present invention, a structured light vision sensor including two cameras is taken as an example.
As an example, the first acquisition unit is specifically configured to project an image whose brightness exceeds a preset brightness and acquire the two-dimensional image information after lighting; using binocular vision together with grating structured light, a high-quality two-dimensional image can be captured under the sensor's own illumination. The preset brightness is set according to projection requirements; as an example, the image exceeding the preset brightness may be a white image, a blue image, or the like. The second acquisition unit is specifically configured to acquire the three-dimensional data of the target object using the structured light vision sensor, and includes a calibration subunit, a first image acquisition subunit, a second image acquisition subunit, a phase principal value acquisition subunit, a first coordinate acquisition subunit, a correction subunit, and a second coordinate acquisition subunit, wherein:
the calibration subunit is configured to calibrate the intrinsic parameters of each camera, and the extrinsic parameters between each camera and the grating projection device;
the first image acquisition subunit is configured to project an image through the grating projection device and generate a sinusoidal grating image.
The gray distribution of the sinusoidal grating image is:
I(u,v) = a + b*cos(θ(u,v)),  θ(u,v) = φ(u,v) + α
where (u,v) are the coordinates of a projected pixel on the projection plane, I(u,v) is the gray value at point (u,v), a and b are respectively the DC component (background intensity) and the amplitude (modulation intensity) of the sinusoidal grating, θ(u,v) is the grating phase corresponding to I(u,v), φ is the phase principal value to be solved, and α is the phase shift.
The second image acquisition subunit is configured to photograph the projected image with the camera to form a camera image;
the phase principal value acquisition subunit is configured to obtain the phase principal value of the sinusoidal grating image.
One phase principal value image is calculated from four grating images of the same frequency. The intensity expressions of the four grating images are:
I_i(u,v) = a + b*cos(θ_i(u,v)),  θ_i(u,v) = φ(u,v) + (π/2)*i,  i ∈ {0,1,2,3}
The phase principal value of the grating image is then:
φ(u,v) = atan2(I_3 - I_1, I_0 - I_2)
The first coordinate acquisition subunit is configured to obtain the sinusoidal grating image coordinates from the phase principal value and the camera image coordinates.
From the absolute phase value in one direction, a vertical or horizontal line in the projected image can be determined; from the absolute phase values in both the horizontal and vertical directions, a single point in the projected image can be determined.
From a point in the camera images of the vertical and horizontal grating fringes, denoted (uc, vc), the corresponding point in the projected image, i.e., the sinusoidal grating image coordinate point, denoted (up, vp), can be determined with the following formulas:
up = Φv(uc,vc)*W / (2π*Nv),  vp = Φh(uc,vc)*H / (2π*Nh)
where Φv and Φh are respectively the absolute phase values in the vertical and horizontal directions at point (uc, vc) in the grating images, Nv and Nh are respectively the numbers of grating fringes in the vertical and horizontal camera grating images, and H and W are respectively the vertical and horizontal resolutions of the camera grating image.
The correction subunit is configured to correct the camera image coordinates and the sinusoidal grating image coordinates.
As an example, (uc, vc) and (up, vp) are corrected using pre-calibrated system distortion parameters. The second-order radial distortion correction formulas are:
u' = u + (u - u0)[k1(x^2 + y^2) + k2(x^2 + y^2)^2]
v' = v + (v - v0)[k1(x^2 + y^2) + k2(x^2 + y^2)^2]
The second coordinate acquisition subunit is configured to obtain the three-dimensional coordinates of the target object from the corrected camera image coordinates, the sinusoidal grating image coordinates, and the intrinsic and extrinsic parameters.
From the corrected camera image coordinates (uc, vc) and sinusoidal grating image coordinates (up, vp), together with the pre-calibrated intrinsic and extrinsic parameters, the line through the camera optical center and the image point is solved for each view, and the corresponding three-dimensional coordinates are then calculated by the principle of triangulation:
Sc*[uc, vc, 1]^T = Ac*[Rc tc]*[Xw, Yw, Zw, 1]^T
Sp*[up, vp, 1]^T = Ap*[Rp tp]*[Xw, Yw, Zw, 1]^T
where Sc and Sp are respectively the scale factors of the camera and the projector, Ac and Ap are the intrinsic parameter matrices, [Rc tc] and [Rp tp] are the extrinsic parameter matrices, and (Xw, Yw, Zw, 1) is the homogeneous space vector.
The calculation of the corresponding three-dimensional coordinates by the principle of triangulation is explained in detail below:
In FIG. 2, O1-xyz and O2-xyz are the spatial coordinate systems of the two cameras; P1 and P2 are a pair of corresponding points; S1 and S2 are the centers of the camera lenses; W is a point in real space. P1 and S1 determine one straight line in space, P2 and S2 determine another, and the two lines intersect at W in space.
Spatial straight line: after the camera captures an image, an image point on the camera CCD and the center of the camera lens determine a straight line, as shown in FIG. 2. The coordinates of both the image point and the lens center are in the camera coordinate system, and the spatial line through these two points satisfies:
(X - Xs)/(a1*x + a2*y - a3*f) = (Y - Ys)/(b1*x + b2*y - b3*f)
(Y - Ys)/(b1*x + b2*y - b3*f) = (Z - Zs)/(c1*x + c2*y - c3*f)
where X, Y, Z are the three-dimensional coordinates of the target point and are unknowns; x, y, f are the image point coordinates and are known quantities (obtained by analyzing the image); Xs, Ys, Zs are the lens center coordinates and are known quantities (obtained during camera calibration); and a_i, b_i, c_i (i = 1, 2, 3) are coordinate system transformation parameters and are known quantities (obtained during camera calibration). Each image yields one line equation (two scalar equations), so two images yield four equations in only three unknowns (the three-dimensional point coordinates X, Y, Z); the unknowns can therefore be solved.
As an example, the positioning module 3 includes a pose acquisition unit and a coordinate conversion unit. The pose acquisition unit is configured to locate the pose of the target object from the three-dimensional data of the target object and a preset CAD model of the target object; the coordinate conversion unit is configured to transform the pose of the target object into the manipulator coordinate system for grasping. Each target object has its own corresponding CAD model, set in advance; by comparing the three-dimensional data of the target object with its corresponding CAD model, the pose of the target object can be located.
In applications where structured light guides industrial robot recognition and positioning, analyzing the interference between the jig and the material frame, and between the jig and the workpiece, during grasping is an important research topic and an open problem, because the variety of workpiece poses within the material frame leads to irregularity in jig design. On this basis, the device further includes a detection module configured to determine, during grasping of the target object, whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
The detection module includes an image acquisition unit, a dividing unit, a first judgment unit, and a second judgment unit. The image acquisition unit is configured to capture a stereoscopic image of the gripping jig; the dividing unit is configured to divide the stereoscopic image of the gripping jig into a plurality of cuboids, each cuboid describing the structural features of one part of the gripping jig; the first judgment unit is configured to determine whether each cuboid has an intersection with each plane of the material frame, and if so, that cuboid interferes with the material frame; the second judgment unit is configured to determine, by analyzing the pose of the target object, whether each cuboid has an intersection with the target object, and if so, that cuboid interferes with the target object. Through the detection module, the industrial robot can avoid interference between the jig, the material frame, and the workpiece while grasping, and at the same time ensure that the workpiece is grasped to the greatest possible extent.
The method and device of the present invention are based on visually guided industrial robot recognition and positioning; they achieve accurate and rapid scanning of the three-dimensional contour of a target object in a complex environment and, by jointly analyzing the one-to-one mapping between the three-dimensional data and the two-dimensional image data, locate the pose of the target object efficiently and accurately, guide the industrial robot to grasp it, adapt to complex and changing industrial application environments, and thereby improve the production efficiency of industrial assembly and loading/unloading in such environments. In addition, during grasping, whether the gripping jig interferes with the target object or with the material frame where it sits is analyzed and the corresponding adjustments are made, making industrial production with industrial robots more flexible and intelligent.
An embodiment of the present invention also provides a controller including a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the steps of the structured-light-based industrial robot positioning method.
An embodiment of the present invention also provides a computer-readable storage medium for storing a computer program which, when executed by a computer or processor, implements the steps of the structured-light-based industrial robot positioning method.
The above are merely preferred embodiments of the present invention and do not limit the present invention in any form. Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit it. Any person skilled in the art may, without departing from the scope of the technical solution of the present invention, use the technical contents disclosed above to make minor changes or modifications amounting to equivalent embodiments; any simple modification, equivalent change, or refinement made to the above embodiments in accordance with the technical essence of the present invention, without departing from the content of its technical solution, still falls within the scope of the technical solution of the present invention.

Claims (18)

  1. A structured-light-based industrial robot positioning method, characterized by comprising the following steps:
    acquiring two-dimensional image information of a preset area, and segmenting the pixel region where a target object is located from the two-dimensional image information of the preset area;
    converting the pixel region into three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
    locating the pose of the target object from the three-dimensional data of the target object and thereby grasping the target object.
  2. The structured-light-based industrial robot positioning method according to claim 1, characterized in that
    segmenting the pixel region where the target object is located from the two-dimensional image information of the preset area comprises the following step:
    segmenting the pixel region where the target object is located from the two-dimensional image information by AI segmentation or image recognition.
  3. The structured-light-based industrial robot positioning method according to claim 1, characterized in that
    the method further comprises constructing the preset mapping relationship between two-dimensional image information and three-dimensional data, specifically comprising the following step:
    acquiring the two-dimensional image information and three-dimensional data of the target object with a structured light vision sensor, and forming a one-to-one mapping between the two-dimensional image information and the three-dimensional data, wherein the structured light vision sensor comprises a light source and at least one camera.
  4. The structured-light-based industrial robot positioning method according to claim 3, characterized in that
    acquiring the two-dimensional image information of the target object with the structured light vision sensor comprises the following step:
    projecting an image whose brightness exceeds a preset brightness, and acquiring the two-dimensional image information after lighting.
  5. The structured-light-based industrial robot positioning method according to claim 3, characterized in that
    the light source is a grating projection device, and acquiring the three-dimensional data of the target object with the structured light vision sensor comprises the following steps:
    calibrating the intrinsic parameters of each said camera, and the extrinsic parameters between each said camera and the grating projection device;
    the grating projection device projecting an image and generating a sinusoidal grating image;
    the camera photographing the projected image to form a camera image;
    obtaining the phase principal value of the sinusoidal grating image;
    obtaining the sinusoidal grating image coordinates from the phase principal value and the camera image coordinates;
    correcting the camera image coordinates and the sinusoidal grating image coordinates;
    obtaining the three-dimensional coordinates of the target object from the corrected camera image coordinates, the sinusoidal grating image coordinates, and the intrinsic and extrinsic parameters.
  6. The structured-light-based industrial robot positioning method according to claim 1, characterized in that
    locating the pose of the target object from the three-dimensional data of the target object and thereby grasping the target object comprises the following steps:
    locating the pose of the target object from the three-dimensional data of the target object and a preset CAD model of the target object;
    transforming the pose of the target object into the manipulator coordinate system and grasping the target object.
  7. The structured-light-based industrial robot positioning method according to claim 1, characterized in that
    the method further comprises: during grasping of the target object, determining whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, readjusting the position of the jig before grasping.
  8. The structured-light-based industrial robot positioning method according to claim 7, characterized in that
    determining whether the gripping jig interferes with the target object or with the material frame where the target object is located comprises the following steps:
    capturing a stereoscopic image of the gripping jig;
    dividing the stereoscopic image of the gripping jig into a plurality of cuboids, each said cuboid describing the structural features of one part of the gripping jig;
    determining whether each said cuboid has an intersection with each plane of the material frame, and if so, that cuboid interferes with the material frame;
    determining, by analyzing the pose of the target object, whether each said cuboid has an intersection with the target object, and if so, that cuboid interferes with the target object.
  9. A structured-light-based industrial robot positioning device, characterized by comprising:
    an acquisition module configured to acquire two-dimensional image information of a preset area and segment the pixel region where a target object is located from the two-dimensional image information of the preset area;
    a conversion module configured to convert the pixel region into three-dimensional data corresponding to the target object according to a preset mapping relationship between two-dimensional image information and three-dimensional data;
    a positioning module configured to locate the pose of the target object from the three-dimensional data of the target object and thereby grasp the target object.
  10. The structured-light-based industrial robot positioning device according to claim 9, characterized in that
    the acquisition module is further configured to:
    segment the pixel region where the target object is located from the two-dimensional image information by AI segmentation or image recognition.
  11. The structured-light-based industrial robot positioning device according to claim 9, characterized in that
    the device further comprises a construction module configured to construct the preset mapping relationship between two-dimensional image information and three-dimensional data;
    the construction module comprises:
    a first acquisition unit configured to acquire the two-dimensional image information of the target object using a structured light vision sensor;
    a second acquisition unit configured to acquire the three-dimensional data using the structured light vision sensor;
    a mapping unit configured to form a one-to-one mapping between the two-dimensional image information and the three-dimensional data;
    wherein the structured light vision sensor comprises a light source and at least one camera.
  12. The structured-light-based industrial robot positioning device according to claim 11, characterized in that
    the first acquisition unit is specifically configured to project an image whose brightness exceeds a preset brightness and acquire the two-dimensional image information after lighting.
  13. The structured-light-based industrial robot positioning device according to claim 11, characterized in that
    the light source is a grating projection device, and the second acquisition unit is specifically configured to acquire the three-dimensional data of the target object using the structured light vision sensor;
    the second acquisition unit comprises:
    a calibration subunit configured to calibrate the intrinsic parameters of each said camera, and the extrinsic parameters between each said camera and the grating projection device;
    a first image acquisition subunit configured to project an image through the grating projection device and generate a sinusoidal grating image;
    a second image acquisition subunit configured to photograph the projected image with the camera to form a camera image;
    a phase principal value acquisition subunit configured to obtain the phase principal value of the sinusoidal grating image;
    a first coordinate acquisition subunit configured to obtain the sinusoidal grating image coordinates from the phase principal value and the camera image coordinates;
    a correction subunit configured to correct the camera image coordinates and the sinusoidal grating image coordinates;
    a second coordinate acquisition subunit configured to obtain the three-dimensional coordinates of the target object from the corrected camera image coordinates, the sinusoidal grating image coordinates, and the intrinsic and extrinsic parameters.
  14. The structured-light-based industrial robot positioning device according to claim 9, characterized in that
    the positioning module comprises:
    a pose acquisition unit configured to locate the pose of the target object from the three-dimensional data of the target object and a preset CAD model of the target object;
    a coordinate conversion unit configured to transform the pose of the target object into the manipulator coordinate system for grasping the target object.
  15. The structured-light-based industrial robot positioning device according to claim 9, characterized in that
    the device further comprises a detection module configured to determine, during grasping of the target object, whether the gripping jig interferes with the target object or with the material frame where the target object is located; if interference occurs, the position of the jig is readjusted before grasping.
  16. The structured-light-based industrial robot positioning device according to claim 15, characterized in that
    the detection module comprises:
    an image acquisition unit configured to capture a stereoscopic image of the gripping jig;
    a dividing unit configured to divide the stereoscopic image of the gripping jig into a plurality of cuboids, each said cuboid describing the structural features of one part of the gripping jig;
    a first judgment unit configured to determine whether each said cuboid has an intersection with each plane of the material frame, and if so, that cuboid interferes with the material frame;
    a second judgment unit configured to determine, by analyzing the pose of the target object, whether each said cuboid has an intersection with the target object, and if so, that cuboid interferes with the target object.
  17. A controller comprising a memory and a processor, characterized in that the memory stores a computer program which, when executed by the processor, implements the steps of the method according to any one of claims 1 to 8.
  18. A computer-readable storage medium for storing a computer program, characterized in that the program, when executed by a computer or processor, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2018/125590 2018-12-29 2018-12-29 Structured-light-based locating method and apparatus for industrial robot, and controller and medium WO2020133407A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880093140.4A CN112074868A (en) 2018-12-29 2018-12-29 Industrial robot positioning method and device based on structured light, controller and medium
PCT/CN2018/125590 WO2020133407A1 (en) 2018-12-29 2018-12-29 Structured-light-based locating method and apparatus for industrial robot, and controller and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/125590 WO2020133407A1 (en) 2018-12-29 2018-12-29 Structured-light-based locating method and apparatus for industrial robot, and controller and medium

Publications (1)

Publication Number Publication Date
WO2020133407A1 true WO2020133407A1 (en) 2020-07-02

Family

ID=71128511

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/125590 WO2020133407A1 (en) 2018-12-29 2018-12-29 Structured-light-based locating method and apparatus for industrial robot, and controller and medium

Country Status (2)

Country Link
CN (1) CN112074868A (en)
WO (1) WO2020133407A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114264243A (en) * 2021-12-31 2022-04-01 深圳明锐理想科技有限公司 Method for detecting crimping welding spots and measuring line arc height between crimping welding spots

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104249371A (en) * 2013-06-28 2014-12-31 佳能株式会社 Information processing apparatus and information processing method
EP3009886A1 (en) * 2014-10-17 2016-04-20 Ricoh Company, Ltd. Illumination apparatus, pattern irradiation device, and 3d measurement system
CN107362987A (en) * 2017-06-07 2017-11-21 武汉科技大学 The robot method for sorting and system of a kind of view-based access control model
CN108942921A (en) * 2018-06-11 2018-12-07 江苏楚门机器人科技有限公司 A kind of grabbing device at random based on deep learning object identification


Also Published As

Publication number Publication date
CN112074868A (en) 2020-12-11


Legal Events

Code | Title | Description
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18944908; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 18944908; Country of ref document: EP; Kind code of ref document: A1