WO2022252562A1 - Robot-based non-planar structure determination method, apparatus, electronic device and storage medium - Google Patents

Robot-based non-planar structure determination method, apparatus, electronic device and storage medium

Info

Publication number
WO2022252562A1
WO2022252562A1 (application no. PCT/CN2021/138577)
Authority
WO
WIPO (PCT)
Prior art keywords
planar structure
obstacle
depth value
depth
group
Prior art date
Application number
PCT/CN2021/138577
Other languages
English (en)
French (fr)
Inventor
李辉
魏海永
丁有爽
邵天兰
Original Assignee
梅卡曼德(北京)机器人科技有限公司
Priority date
Filing date
Publication date
Application filed by 梅卡曼德(北京)机器人科技有限公司
Publication of WO2022252562A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/04Indexing scheme for image data processing or generation, in general involving 3D image data

Definitions

  • the present application relates to the field of image processing, and more specifically, to a robot-based non-planar structure determination method, device, electronic equipment and storage medium.
  • At present, with the widespread popularization of intelligent program-controlled robots, more and more items can be grasped and transported by such robots. For example, logistics packages can be grasped by intelligent program-controlled robots, which greatly improves grasping efficiency.
  • In order to improve grasping efficiency and adapt flexibly to a variety of objects, an intelligent program-controlled robot is usually fitted with a gripper group consisting of multiple grippers, so that different grippers in the group can be called flexibly for different objects. Different grippers can grasp different objects; for example, a suction cup array can pick up glass-like objects, but once it encounters rubber or a protrusion the grasp fails. Objects or structures that a given type of gripper cannot grasp are called obstacles.
  • Conventional intelligent robots can only perform obstacle avoidance grasping for a fixed type of grasping object and a single and simple obstacle.
  • the size, shape and position of the grasping object and obstacle are fixed.
  • the gripper is set at the center of the grasping object and at a position without obstacles, so as to avoid obstacles for grasping.
  • this obstacle avoidance grasping method has the following defects. First, it can only be used for items of a fixed model with obstacles at fixed positions; when the model of the item to be grasped is unknown, or the model is known but the position of the obstacle is not fixed, the obstacle cannot be accurately avoided during grasping. Secondly, this method does not judge whether multiple grippers can grasp the object stably; if too few grippers are used, or the grippers are arranged inappropriately (for example, several grippers lie on a straight line), the center of gravity is unstable after the object is grasped, and the object may swing during grasping, fall, or be damaged by collision with other items.
  • the present invention has been proposed in order to overcome the above problems or at least partly solve the above problems.
  • the present invention first obtains the possible grasping modes of the gripper and selects the best grasping mode that does not encounter obstacles to perform the grasping, so that even if the item carries an obstacle that cannot be grasped, the obstacle can be accurately avoided and the item grasped;
  • the present invention proposes a method that groups the depth values of the pixels in a specific area and uses the difference within each group and the number of different depth values to determine whether a non-planar structure exists in the area.
  • because this method uses numerical statistics to determine whether a non-planar structure exists, rather than locating the specific position of the non-planar structure, it is efficient and practical, and it can also be applied to industrial scenes other than grasping in which it may be necessary to determine whether the surface of an object has a non-planar structure; again, the present invention can pre-verify the grippers to be used before the gripper group performs gripping.
  • based on the general obstacle avoidance grasping method and fixture verification method, the present invention further develops an obstacle-avoiding grasping method and a suction cup verification method dedicated to the industrial scene of grasping glass with a suction cup array, which can improve the accuracy and stability of glass grasping with a suction cup array. It can be seen that the present invention addresses all aspects of the problems in the industrial scene of using a clamp to grasp objects.
  • the present application provides a robot-based non-planar structure determination method, device, electronic equipment and storage medium.
  • the grouping includes grouping two different depth values into one group.
  • the grouping includes grouping all depth values in the following manner: sorting all obtained depth values from high to low, and grouping the first depth value with the last depth value, the second depth value with the second-to-last depth value, ..., and the Nth depth value with the Nth-from-last depth value, where N is a natural number greater than or equal to 1.
  • the preset difference threshold and/or group number threshold is set based on the capability of the gripper.
  • condition of non-planar structures includes presence or absence of non-planar structures.
  • condition of the non-planar structure includes a degree of relief of the non-planar structure.
  • a depth map acquisition module configured to obtain a two-dimensional plane depth map of the area to be determined on the surface of the item
  • a depth value acquisition module configured to acquire the depth value of each pixel in the area to be determined according to the two-dimensional plane depth map
  • a grouping module configured to group obtained depth values
  • the comparison module is used to calculate the difference of the depth value in each group and compare it with the preset difference threshold; and/or calculate the number of groups and compare it with the preset group number threshold;
  • the judging module is used for judging the non-planar structure of the region according to the comparison result.
  • when the depth value acquisition module obtains the depth value of each pixel, only one of multiple identical depth values is retained.
  • the grouping module groups two different depth values into a group.
  • the grouping module groups all depth values in the following manner: sort all obtained depth values from high to low, and group the first depth value with the last depth value, the second depth value with the second-to-last depth value, ..., and the Nth depth value with the Nth-from-last depth value, where N is a natural number greater than or equal to 1.
  • the preset difference threshold and/or group number threshold is set based on the capability of the gripper.
  • the determining module determines the presence or absence of non-planar structures.
  • the determination module determines the degree of relief of the non-planar structure.
  • the electronic device of the embodiment of the present application includes a memory, a processor, and a computer program stored on the memory and operable on the processor, and when the processor executes the computer program, it implements the robot-based non-planar structure determination method of any of the above embodiments.
  • the computer-readable storage medium of the embodiments of the present application stores a computer program thereon, and when the computer program is executed by a processor, the robot-based non-planar structure determination method of any of the above-mentioned embodiments is implemented.
  • FIG. 1 is a schematic flow diagram of a gripping method for obstacle avoidance in some embodiments of the present application
  • Fig. 2 is a schematic flow chart of a method for determining a non-planar structure on the surface of an article in some embodiments of the present application;
  • Fig. 3 is a schematic flow chart of a fixture verification method in some embodiments of the present application.
  • FIG. 4 is a schematic flow diagram of a glass grabbing method using a suction cup according to some embodiments of the present application.
  • Fig. 5 is a schematic diagram of glass and obstacle point cloud and suction cup arrangement in some embodiments of the present application.
  • Fig. 6 is a schematic flow chart of obstacle judgment when grabbing glass in some embodiments of the present application.
  • Fig. 7 is the schematic diagram of the gluing process of the strip to be pasted in some embodiments of the present application.
  • Fig. 8 is a schematic flow chart of a chuck array verification method in some embodiments of the present application.
  • Fig. 9 is a schematic structural view of the gripper obstacle avoidance grabbing device in some embodiments of the present application.
  • Fig. 10 is a schematic structural diagram of a device for determining non-planar structure on the surface of an article in some embodiments of the present application.
  • Fig. 11 is a schematic structural view of a fixture verification device in some embodiments of the present application.
  • FIG. 12 is a schematic structural view of a glass grabbing device using a suction cup according to some embodiments of the present application.
  • Fig. 13 is a schematic structural view of a sucker array verification device in some embodiments of the present application.
  • Fig. 14 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • Fig. 1 shows a schematic flow chart of a robot-based clamp obstacle avoidance grabbing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
  • Step S100 acquiring point cloud information of the object group to be grasped.
  • the group of items to be grabbed may include one or more items to be grabbed, and may also include one or more obstacles. Obstacles may exist on the item to be grabbed, such as bumps on the item or adhesive strips stuck to it; obstacles may also exist outside the item to be grabbed, for example when items are stacked in layers with rubber pads between the layers.
  • the point cloud information of the group of objects to be grasped may include point cloud information of one or more objects to be grasped and/or obstacles.
  • point cloud information can be obtained through a 3D industrial camera.
  • a 3D industrial camera is generally equipped with two lenses that capture the group of objects from different angles; after processing, a three-dimensional image of the objects can be obtained. The group of items to be grabbed is placed under the vision sensor and the two lenses shoot simultaneously.
  • using the relative attitude parameters of the two captured images, a general binocular stereo vision algorithm calculates the X, Y, Z coordinate values and the coordinate orientation of each point of the item group to be grasped, which are then converted into the point cloud data of the item group.
  • components such as laser detectors, visible light detectors such as LEDs, infrared detectors, and radar detectors can also be used to generate point clouds, and the present invention does not limit specific implementation methods.
  • the point cloud data obtained through the above method is three-dimensional data.
  • the acquired three-dimensional point cloud data of the item group to be grasped can be orthographically projected and mapped onto a two-dimensional plane.
  • a depth map corresponding to the orthographic projection may also be generated.
  • a two-dimensional color image corresponding to the three-dimensional object area and a depth image corresponding to the two-dimensional color image may be acquired along a depth direction perpendicular to the object.
  • the two-dimensional color map corresponds to the image of the plane area perpendicular to the preset depth direction; each pixel in the depth map corresponding to the two-dimensional color map corresponds to a pixel in the two-dimensional color map, and the value of each pixel is the depth value of that pixel.
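  • As an illustration of the projection described above, the following minimal sketch maps a point cloud onto a depth map along the Z (depth) axis; the function name, the millimetre units and the NaN-filled grid are assumptions for illustration, not the implementation of the present application.

```python
import numpy as np

def point_cloud_to_depth_map(points, resolution=1.0):
    """Project an N x 3 point cloud (X, Y, Z in mm) onto a 2-D grid
    perpendicular to the Z axis, keeping per pixel the depth of the
    point closest to the camera."""
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    cols, rows = (np.ceil((xy.max(axis=0) - origin) / resolution).astype(int) + 1)
    depth = np.full((rows, cols), np.nan)        # NaN marks pixels with no point
    u = ((xy[:, 0] - origin[0]) / resolution).astype(int)
    v = ((xy[:, 1] - origin[1]) / resolution).astype(int)
    for ui, vi, zi in zip(u, v, z):
        if np.isnan(depth[vi, ui]) or zi < depth[vi, ui]:
            depth[vi, ui] = zi                   # nearest surface wins
    return depth
```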
  • Step S110 Generate a search state based on the point cloud information of the object group to be grasped and the search boundary factor parameters of the fixture.
  • the search boundary factor parameters of the fixture include the control boundary of the controllable parameters of the fixture, the control granularity and other boundary factor parameters. According to the actual situation, for example, the type of fixture used, the type and size of the object to be grasped, etc., the parameters and values of the search boundary factors may be different.
  • the controllable parameters can include the moving distance in the X direction, the moving distance in the Y direction, and the angle of rotation.
  • the moving distance boundary in the X direction (that is, the controllable boundary) can be set from -500mm to 500mm with a control granularity of 100mm, so the fixture can move from -500mm to 500mm in the X direction in steps of 100mm and there are 11 search states in the X direction;
  • the moving distance boundary in the Y direction can be set from -100mm to 100mm with a control granularity of 50mm, so the fixture can move from -100mm to 100mm in the Y direction in steps of 50mm and there are 5 search states in the Y direction;
  • the rotation angle can be set from -50 degrees to +50 degrees with a control granularity of 10 degrees, so the fixture can rotate from -50 degrees to +50 degrees in steps of 10 degrees and there are 11 search states for the rotation angle.
  • other boundary factor parameters can also be set. For example, it can be set that the moving distance of the fixture cannot exceed 100mm from a certain side, and when the fixture moves to this side, it can only move to a position 100mm away from this side.
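  • The following sketch enumerates the search states implied by the example boundary factors above (X: -500 to 500 mm in 100 mm steps, Y: -100 to 100 mm in 50 mm steps, rotation: -50 to +50 degrees in 10 degree steps); the dictionary keys are illustrative assumptions.

```python
import numpy as np
from itertools import product

x_states = np.arange(-500, 501, 100)    # 11 states
y_states = np.arange(-100, 101, 50)     # 5 states
rot_states = np.arange(-50, 51, 10)     # 11 states

# Every combination of controllable parameters is one search state
# (11 * 5 * 11 = 605 states in this example).
search_states = [
    {"dx_mm": int(dx), "dy_mm": int(dy), "rot_deg": int(rot)}
    for dx, dy, rot in product(x_states, y_states, rot_states)
]
```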
  • the shape, size and other information of the fixture can be configured, and the configuration information can be saved in a JSON configuration file.
  • the configuration information may be different.
  • the configuration information may include the position of a single suction cup relative to the center of the entire suction cup array, the number and radius of a single suction cup, and the like.
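  • A possible shape of such a JSON configuration is sketched below for a suction cup array; the field names and values are hypothetical, since the application only states that cup positions relative to the array center, the number of cups and the cup radius can be stored.

```python
import json

gripper_config = {
    "type": "suction_cup_array",
    "cup_count": 10,
    "cup_radius_mm": 15,
    # positions of each cup relative to the center of the whole array
    "cups": [
        {"id": i + 1, "x_mm": x, "y_mm": y}
        for i, (x, y) in enumerate(
            (x, y) for y in (-40, 40) for x in (-160, -80, 0, 80, 160)
        )
    ],
}

with open("gripper_config.json", "w") as f:
    json.dump(gripper_config, f, indent=2)
```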
  • the solution of the present invention can be used for various types of fixtures, for example, including various general fixtures.
  • the general fixture refers to a fixture whose structure has been standardized and which has a wide range of application, for example, three-jaw and four-jaw chucks for lathes, flat pliers and indexing heads for milling machines, etc.
  • the fixture can be divided into manual clamping fixture, pneumatic clamping fixture, hydraulic clamping fixture, gas-hydraulic linkage clamping fixture, electromagnetic fixture, vacuum fixture and so on.
  • the present invention does not limit the specific type of the clamp, as long as it can realize the grabbing operation of the item.
  • Step S120 For each search state, perform obstacle judgment, and set the search state that passes the obstacle judgment as an alternative search state.
  • Obstacle judgment is used to judge the obstacles faced by the gripper in a certain search state, such as whether the obstacle exists or not, whether it affects the grasping, etc. If there is no obstacle, or the obstacle does not affect the grasping, the obstacle judgment is passed, otherwise it is not passed .
  • grasping is performed using multiple grippers or a gripper array consisting of multiple grippers, obstacle determination can be performed for each gripper individually.
  • One of the key points of the present invention is to traverse all search states to perform obstacle judgment, so there is no limitation on the obstacle judgment method, and any obstacle judgment method can be used in the present invention.
  • the obstacle situation of the object to be grasped can be judged by a single obstacle judgment method, or by combining multiple obstacle judgment methods. If a combination of multiple judgment methods is adopted, the judgment methods can be performed in a certain order or in parallel; when any one of the multiple judgment methods fails, the obstacle judgment of that search state is considered not passed.
  • the present invention exemplarily provides four obstacle determination methods, namely boundary obstacle avoidance, fixed obstacle avoidance, convex/concave obstacle avoidance and custom obstacle avoidance.
  • the point cloud data of the item group to be grasped can be obtained, specifically using the method in step S100, and the edge part of the point cloud data can be extracted to obtain the contour point cloud of the item group to be grasped. The obtained contour point cloud of the object group is 3D point cloud data, and various external or internal factors in the 3D data can affect the determination of the edge contour of the object.
  • therefore, the contour point cloud of the object edge can be projected and mapped onto a two-dimensional plane to obtain the contour points of the edge of the object group to be grasped. Since the contour points are then two-dimensional data, the edge contour of the object group to be grasped can be defined more clearly.
  • the contour points at the four corners of the item to be grabbed can be obtained from the resulting two-dimensional pattern of the item group to be grabbed.
  • for example, the minimum circumscribed rectangle of the item group to be grasped is obtained, and this minimum circumscribed rectangle is regarded as the contour quadrilateral of the item group.
  • for each search state, the relationship between the center of the gripper in that search state and the edge contour points of the object is determined. If the center of the gripper is outside the obtained contour, the gripper is located at the boundary of the object to be grasped; in this case a boundary obstacle is determined and the obstacle judgment is not passed. If the center of the gripper is within the contour, the obstacle judgment is passed.
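  • A minimal sketch of this boundary check using OpenCV is given below; the decision of treating an outside center as a failure follows the description above, while the helper name and data layout are assumptions.

```python
import cv2
import numpy as np

def boundary_obstacle_check(edge_points_2d, gripper_center):
    """edge_points_2d: N x 2 projected contour points of the item group;
    gripper_center: (x, y) of the gripper center in one search state.
    Returns True when a boundary obstacle is found."""
    pts = np.asarray(edge_points_2d, dtype=np.float32)
    rect = cv2.minAreaRect(pts)                      # minimum circumscribed rectangle
    quad = cv2.boxPoints(rect).reshape(-1, 1, 2)     # contour quadrilateral
    inside = cv2.pointPolygonTest(
        quad, (float(gripper_center[0]), float(gripper_center[1])), False)
    return inside < 0   # center outside the contour -> obstacle judgment fails
```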
  • the items to be grabbed may be placed together with some fixed obstacles to form a group of items to be grabbed.
  • for example, when the items to be grabbed are steel plates or glass, each layer of steel plates or glass may be separated by rubber pads or foam plastics.
  • after the upper layer is removed, the rubber pad or foam plastic remains on the lower steel plate or glass and thus becomes an obstacle when grasping. In such cases, it is necessary to perform obstacle determination on these obstacles, which do exist but whose positions are not fixed and which affect the gripper's grasping of the object.
  • the three-dimensional point cloud data of the object group to be grasped can be acquired and mapped onto a two-dimensional plane, wherein the method of step 100 can be used to acquire the point cloud data and perform mapping.
  • fixed obstacles usually have uniform specifications, for example, uniform size or shape
  • the specification parameters of fixed obstacles can be preset. For each search state, based on the two-dimensional point cloud data, it can be judged whether there is an item under the fixture in that search state that meets the preset specification parameters; if there is, it is determined that a fixed obstacle exists and the obstacle judgment is not passed, otherwise the obstacle judgment is passed.
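  • The following sketch illustrates one way such a specification match could be performed on the projected binary obstacle image; the connected-component approach and the pixel-area range are assumptions for illustration, not the application's prescribed implementation.

```python
import cv2
import numpy as np

def fixed_obstacle_check(obstacle_mask, footprint_mask,
                         pad_area_range_px=(400, 2000)):
    """obstacle_mask: binary image of non-transparent point-cloud pixels;
    footprint_mask: binary image of the gripper footprint in the current
    search state.  Returns True if a blob matching the preset pad
    specification lies under the gripper."""
    under = cv2.bitwise_and(obstacle_mask.astype(np.uint8),
                            footprint_mask.astype(np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(under)
    lo, hi = pad_area_range_px
    # label 0 is the background; any blob whose area matches the spec counts
    return any(lo <= stats[i, cv2.CC_STAT_AREA] <= hi for i in range(1, n))
```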
  • there may also be protrusions or depressions on the item itself; these protrusions/depressions may be inherent to the item or may be caused by collisions and other reasons.
  • certain grippers may not be able to grasp objects correctly at raised or recessed places. For example, if a suction cup gripper is used to suck a raised edge, the suction cup cannot attach completely to the object, causing air leakage, so the object cannot be grasped at that position. Therefore, in some cases, the protrusion/depression must be treated as an obstacle for obstacle determination.
  • protrusions/depressions are difficult to distinguish from contour point cloud data, and, unlike fixed obstacles, they usually do not have fixed specifications, so they cannot be recognized in the same way as conventional fixed obstacles.
  • therefore, the present invention proposes a method for judging the non-planar structure of the surface of an object, which is also one of the key points of the present invention. This method can be used in some embodiments of the present invention for judging convex/concave obstacles, and can also be used in other situations where it is necessary to judge whether the surface of an object has a non-planar structure; it is not limited to use in the obstacle avoidance and grasping solution.
  • Fig. 2 shows a schematic flow chart of a method for determining a non-planar structure on an object surface according to the present invention, and the non-planar structure includes structures such as protrusions and depressions. As shown in Figure 2, the method includes:
  • Step 200 acquire the two-dimensional plane depth map information of the area to be determined on the surface of the object.
  • the depth map corresponding to the orthographic projection is generated as the two-dimensional plane depth map information of the area to be determined.
  • the depth map can be acquired in a manner similar to step 100 . It is also possible not to acquire the depth map of the whole item, but to acquire the depth map of a partial area of the item or an area to be determined.
  • Step 210 acquire the depth value of each pixel in the area to be determined according to the two-dimensional plane depth map information.
  • Each pixel in the depth map is in one-to-one correspondence with each pixel in the orthographic two-dimensional image, and the value of each pixel is the depth value of the pixel.
  • the depth value of each pixel in the area to be determined can be obtained in this step.
  • Step 220 group the obtained depth values.
  • the depth values of each pixel of a planar structure are the same or have a small difference, while the depth values of each pixel of a non-planar structure such as a protrusion or a depression may have a large difference.
  • the obtained depth values of all pixels are grouped. If there are multiple identical depth values, they can be combined into one depth value and then grouped.
  • all obtained depth values may be grouped according to a group of two or more depth values.
  • the way of grouping may be random grouping or any other grouping way, which is not limited in the present invention.
  • all the depth values can be sorted first, then the highest value and the lowest value are grouped together, the second highest value and the second lowest value are grouped, and so on, all the depth values are grouped .
  • Step 230 calculating the difference of the depth values in each group, and comparing with the preset difference threshold; and/or, calculating the number of groups, and comparing with the preset group number threshold.
  • the difference in depth values can reflect the height difference between pixels.
  • the number of groups can reflect the number of pixels with different heights. The larger the number of groups in the area to be determined, the more pixels with ups and downs. Therefore, the number of groups can reflect the degree of unevenness in the area to be judged from another angle.
  • the difference of depth values or the number of groups can be used alone to determine whether an object has a non-planar structure. As a preferred embodiment, it is also possible to jointly determine whether an object has a non-planar structure using the difference of the depth value and the number of groups.
  • the difference threshold and/or group number threshold can be set according to the needs of the actual situation. For example, when the method is applied to grasping with obstacle avoidance, smaller protrusions do not affect grasping for some grippers, so the threshold can be set higher so that smaller bumps are not judged as ungraspable obstacles.
  • Step 240 judging the non-planar structure of the region according to the comparison result.
  • the determination of the non-planar structure can include whether a non-planar structure exists: when the difference of the depth values and/or the number of groups exceeds the threshold, it is determined that a non-planar structure exists. It can also include the degree of undulation of the non-planar structure. In other embodiments, multi-level thresholds can be set, so that the existence of non-planar structures or the level of undulation is determined through a combination of different thresholds. As a preferred embodiment, when the difference of the depth values and the number of groups are used jointly, it can be set that a non-planar structure is determined to exist when the difference of the depth values exceeds the difference threshold and the number of groups exceeds the group number threshold.
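  • A minimal sketch of steps 200-240 is given below: identical depth values are merged, the remaining values are sorted high to low and paired head-to-tail, and the per-group difference and the number of groups are compared against thresholds. The joint (AND) decision rule follows the preferred embodiment above; the default threshold values, the function name and the NaN handling are assumptions.

```python
import numpy as np

def judge_non_planar(depth_map, diff_threshold_mm=0.015, group_count_threshold=20):
    """Return True when the region covered by depth_map is judged to
    contain a non-planar structure."""
    values = np.unique(depth_map[np.isfinite(depth_map)])  # keep one of each identical value
    values = values[::-1]                                   # sort high to low
    n_groups = len(values) // 2
    if n_groups == 0:
        return False
    # group 1: highest vs lowest, group 2: 2nd highest vs 2nd lowest, ...
    diffs = values[:n_groups] - values[-n_groups:][::-1]
    return bool(diffs.max() > diff_threshold_mm and n_groups > group_count_threshold)
```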
  • the object to be grasped is glass
  • the glass may be coated with glue.
  • the glue is usually relatively transparent and the features are not obvious, so it cannot be correctly identified.
  • the glass cannot be gripped in the glued position.
  • the edge contour of the obstacle can be generated first according to the user-defined obstacle information, and then, for each search state, the relationship between the center of the fixture in that search state and the user-defined obstacle edge contour is determined. If the center of the fixture is outside the obtained contour, it is determined that there is an obstacle and the obstacle judgment fails; if the center of the fixture is inside the contour, the obstacle judgment is passed.
  • the retraction distance of the edge contour can also be set, that is, the original edge contour of the obstacle shrinks inward for a certain distance, and then the determination of the relative position between the clamp center and the edge contour is performed.
  • in step S120, the scheme can also be extended or improved as follows:
  • obstacle determination may be performed for each of the plurality of grippers
  • the search state can be set as an alternative search state
  • Step S130 selecting the best search state from the candidate search states.
  • the way to select the best search state is also different.
  • the present invention does not limit this, and any selection method can be used in the solution of the present invention.
  • the optimal search state can be selected according to the distance between the gripper and the center of the object to be grasped in each search state. Generally speaking, the closer the gripping position of the gripper is to the center of the object, the more stable the grip will be. Therefore, the search state in which the gripper is closest to the center of the object can be set as the best search state.
  • the optimal search state can also be selected according to the number of grippers that have passed obstacle judgment, or the distance between the gripper and the center of the object and the grippers that have passed obstacle judgment can be considered comprehensively.
  • in a preferred embodiment, the selection can be made first according to the number of grippers that passed. If several search states share the largest number of passing grippers, the search state whose grippers are closest to the center of the item is further selected from them as the best search state. In other embodiments, a quantity threshold can also be preset: first select all search states in which the number of opened clamps exceeds the threshold, and then select from these the search state closest to the center as the best search state.
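  • The preferred selection rule just described can be sketched as follows; the candidate dictionary keys are illustrative assumptions.

```python
def select_best_search_state(candidates, item_center):
    """Prefer the candidate with the most grippers passing obstacle
    judgment, then break ties by distance between the gripper-group
    center and the item center."""
    def distance_to_center(state):
        cx, cy = state["gripper_center"]
        return ((cx - item_center[0]) ** 2 + (cy - item_center[1]) ** 2) ** 0.5

    return min(candidates,
               key=lambda s: (-s["num_open_grippers"], distance_to_center(s)))
```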
  • Step S140 grabbing the item to be grabbed based on the optimal search state.
  • the robot sets the position, angle and quantity of the fixture according to the configuration parameters of the fixture in the optimal search state, and then executes the grabbing of the item and places the item in the specified position.
  • the specified location includes the ground, the item rack, etc.
  • the present invention also proposes a method for clamp grasping verification, which can determine whether the grasping method to be used can stably grasp the object before the object is grasped, thereby avoiding unnecessary losses caused by errors during grasping.
  • This method can be used in the obstacle avoidance grasping method of the present invention, and can also be used in other grasping scenes.
  • the present invention does not limit the specific usage situation; as long as multiple clamps, or a clamp array composed of multiple clamps, are used in a certain grasping scene, this method can be applied for verification.
  • multiple fixtures, or a fixture array composed of multiple fixtures are collectively referred to as a fixture set in the present invention.
  • Fig. 3 shows a schematic flowchart of a verification method for a fixture set according to an embodiment of the present invention. As shown in Figure 3, the method includes:
  • Step S300 Obtain status information of the fixture set.
  • the state of the fixture group refers to the combination of specific parameters of each fixture in the fixture group.
  • grasping performed by the fixture group in a certain state means that each fixture configures and executes the grabbing action based on the parameters in that state. For example, suppose a certain fixture group has 3 fixtures, numbered 1-3. Fixture No. 1 is on the left edge of the object to be grasped with a rotation angle of 30 degrees, fixture No. 2 is on the right edge of the object with a rotation angle of 0 degrees, fixtures 1 and 2 are both set to open, and fixture 3 is outside the object and set to off. The combination of such specific parameters is one state of the fixture group.
  • the status information may include the position information of the clamp group in this state, the position information and angle information of each clamp in the clamp group, and information such as whether these clamps can be opened in the current state and which clamps can be opened.
  • Step S310 Determine the quantity information of the openable clamps and/or the position information of the openable clamps based on the state information of the clamp group.
  • the position of the clamp relative to the item to be grasped can be determined according to the outline information of the item combined with the boundary parameters of the clamp; the position of the clamp relative to other clamps can also be determined, or the position of the clamp relative to the fixture array can serve as the position of the clamp.
  • multiple clamps or clamp arrays may be divided into multiple areas, and the location information of the clamps may be represented by the area where the clamps are located.
  • Step S320 judging whether the number of openable fixtures meets a preset condition; and/or judging whether the position information of the openable fixture satisfies a preset condition.
  • for example, when grabbing heavier items, the required number of clamps can be set to 5, so that the preset condition is met only when the number of openable clamps exceeds 5; for lighter items, it can be set to 3.
  • the number of clamps to be used can also be set according to the size of the items; for example, the preset condition is 5 clamps for items with a larger area and 3 clamps for items with a smaller area, where the area of the item can be determined from its outline information. As for the conditions the position information of the openable fixtures needs to meet, it is usually at least required that multiple fixtures are not located on, or approximately on, a straight line.
  • for example, it can be required that the lines connecting the opening positions of the clamps form a stable triangle; the position condition can also be set according to the center of gravity of the item to be grasped to ensure a stable center of gravity during grasping. For example, when the central mass of the item to be grasped is large, the preset condition can be that there must be an openable clamp, or a certain number of openable clamps, near the center of the item.
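  • A sketch of such a verification is shown below; the count threshold, the minimum-triangle-area criterion for "not collinear", and all names are assumptions used only to illustrate steps S300-S330.

```python
import numpy as np

def verify_gripper_state(open_positions, required_count=3, min_area_mm2=1e3):
    """open_positions: (x, y) coordinates of the clamps that can be opened.
    The state passes only when enough clamps open and they are not
    (approximately) collinear, i.e. some triple spans a large enough triangle."""
    pts = np.asarray(open_positions, dtype=float)
    if len(pts) < required_count:
        return False
    best = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            for k in range(j + 1, len(pts)):
                ab, ac = pts[j] - pts[i], pts[k] - pts[i]
                best = max(best, 0.5 * abs(ab[0] * ac[1] - ab[1] * ac[0]))
    return best >= min_area_mm2
```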
  • Step S330 Determine whether the gripper set can perform grasping in this state according to the judgment result.
  • the grabbing may be performed when the number of openable clamps meets the condition or when the position information of the openable clamps satisfies the condition. It is also possible to perform grasping only when both conditions are met at the same time; when the position information involves multiple conditions, grasping may be performed only when all of them are met.
  • the jig verification may be performed after it is determined that the object will be grabbed in a certain state and before the grabbing is performed. If it is used in conjunction with the obstacle avoidance grasping scheme, steps S300-S330 can be performed between steps S130 and S140 of the foregoing embodiment; that is, after the best search state is selected, the best search state is used as the state of the gripper array to execute the gripper verification and determine whether grasping can be performed using this optimal search state. In another implementation, the fixture verification method may also be combined with determining the state in which the item is grasped.
  • for example, steps S300-S330 can be performed in step S120 or step S130 of the foregoing embodiment: an alternative search state can be selected through the fixture verification method, or, after the alternative search states are selected, each candidate search state is first used as the state of the fixture array, the search states in which grasping cannot be performed are eliminated by the fixture verification method, and the best search state is then selected according to the number of fixtures and the distance from the center.
  • the obstacle avoidance grasping method and fixture grasping verification method of the present invention are not limited to specific fixtures and application scenarios.
  • the inventor has worked hard to further refine the method to adapt to this scene, which is also one of the key points of the present invention.
  • Figure 4 shows a flow chart of a glass obstacle-avoiding grasping method using a suction cup array according to a preferred embodiment of the present invention, the method comprising:
  • Step S400 Obtain point cloud information of glass to be grasped and obstacles.
  • the glass can be stacked in layers, and a rubber pad is provided between each layer of glass to separate the glass.
  • a method similar to step S100 can be used to obtain the point cloud of the rubber pad and the point cloud of the object itself, and project the point cloud onto the 2D image and generate a corresponding depth map after the orthographic projection.
  • Figure 5 shows the point cloud image obtained in this way, the black part in the figure is transparent glass, and the white part is the point cloud of non-transparent part.
  • Step S410 Generate a search state based on the point cloud information and the search boundary factor parameters of the suction cup array.
  • the search state of the suction cup includes different position states of the suction cup on the object and the rotation state of the suction cup itself.
  • the specified search boundary factor parameters can include X direction, Y direction and rotation, and can also include the distance from the suction cup boundary to the specified side.
  • Several different search states are generated according to different search boundary parameters.
  • the suction cup array used includes 10 suction cups, numbered 1-10 respectively.
  • Figure 5 shows the positions of these 10 suction cups in a certain search state. In other search states, the suction cups may be in other position or have different rotation angles.
  • the movement distance boundary in the X direction (that is, the control boundary) can be set from -500mm to 500mm, and the control granularity is 100mm, then the fixture can move from -500mm in the X direction to 500mm in units of 100mm.
  • the moving distance boundary in the Y direction can be set from -100mm to 100mm, and the control granularity is 50mm, then the fixture can move from -100mm in the Y direction to 100mm in units of 50mm , so there are 5 search states in the Y direction;
  • the rotation angle can be set from -50 degrees to +50 degrees, and the control granularity is 10 degrees, then the fixture can rotate from -50 degrees to + 50 degrees, there are 11 search states in this rotation angle.
  • Step S420 For each search state, determine whether there is an obstacle under each suction cup, and if so, close the suction cup in the search state.
  • judging whether there is an obstacle under the suction cup may include the following steps:
  • Step S421 Determine whether there is a glass boundary obstacle under the suction cup.
  • the point cloud ratio refers to the ratio of the point cloud area to the overall area in the area under the sucker.
  • the suction cups 4-6 in Figure 5 are located at the glass boundary.
  • the white point cloud area under these suction cups accounts for a significantly larger proportion, so a point cloud ratio threshold can be preset; when the proportion of the point cloud in the area under a suction cup exceeds the threshold, it is judged that there is a glass boundary obstacle under that suction cup.
  • the ratio threshold can be set according to the needs of the actual situation, such as the suction force of the suction cup and the obstacle situation, and the specific value is not limited in the present invention.
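  • The ratio test of step S421 can be sketched as follows; the circular-region assumption, the threshold value and the names are illustrative only.

```python
import numpy as np

def glass_boundary_obstacle(point_cloud_mask, cup_center, cup_radius_px,
                            ratio_threshold=0.2):
    """point_cloud_mask: binary image in which non-zero marks non-transparent
    point-cloud pixels.  Returns True when the point cloud ratio inside the
    circular region under one suction cup exceeds the preset threshold."""
    h, w = point_cloud_mask.shape
    yy, xx = np.ogrid[:h, :w]
    cx, cy = cup_center
    region = (xx - cx) ** 2 + (yy - cy) ** 2 <= cup_radius_px ** 2
    ratio = (point_cloud_mask[region] > 0).mean()   # fraction of point-cloud pixels
    return ratio > ratio_threshold
```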
  • Step S422 Determine whether there is a rubber pad obstacle under the suction cup.
  • the glass is stacked layer by layer, and the layers are separated by rubber pads. Except for the uppermost glass, there is a rubber pad for separating the glass layers on each glass in the lower layer. In this way, after the upper glass is taken away, there will be rubber pads left on the lower glass. Since the size of the rubber block is fixed and known, the rubber block can be filtered out according to the size.
  • the area of the point cloud of the obstacle can be preset, and when the area of the point cloud in the area under the suction cup exceeds the area of the point cloud of the obstacle, it is determined that there is a rubber pad obstacle under the suction cup. The area of the obstacle point cloud can be set arbitrarily according to the size of the rubber pad used.
  • Step S423 Determine whether there is a rubber strip obstacle under the suction cup.
  • glue may be applied near the border of the glass.
  • the glue strip formed after glue application is usually transparent.
  • the point cloud features of such glue strips are not obvious and difficult to identify, but the suction cup cannot be caught on the glue strip.
  • the position of the glue strip can be set by the user, and the robot can judge whether there is a glue strip obstacle under the suction cup array according to the glue strip obstacle information set by the user.
  • a circle of glue strips can be generated near the edge contour of the glass according to user settings.
  • the user can set the position of the glue strips, the indentation distance and the width of the glue strips, or two glue strips can be generated on the same side.
  • Figure 7 shows four different gluing processes.
  • the glue strip obstacle information is set in advance at the positions where gluing is required. In this way, after the glue strips are applied, the robot can judge whether there is a glue strip obstacle in the area under the suction cup according to the glue strip information set by the user.
  • Step S424 Determine whether there is a raised obstacle under the suction cup.
  • the depth difference threshold and the pair-count (group number) threshold can be set according to the needs of the actual situation, such as the suction force of the suction cup and the obstacle situation, and the present invention does not limit the specific values.
  • for example, the pair-count threshold can be chosen within 10 to 50 pairs, with 20 pairs being preferred;
  • the depth difference threshold can be chosen within 0.005mm to 0.500mm, with 0.015mm being preferred.
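  • Assuming the raised-obstacle check of step S424 follows the non-planar determination of Fig. 2, its use with the preferred values above can be sketched by reusing the judge_non_planar function from that section; cup_depth_patch and close_suction_cup are hypothetical placeholders.

```python
# depth-map crop of the region under one suction cup (hypothetical input)
has_bump = judge_non_planar(cup_depth_patch,
                            diff_threshold_mm=0.015,     # preferred depth difference threshold
                            group_count_threshold=20)    # preferred pair-count threshold
if has_bump:
    close_suction_cup()   # hypothetical helper: disable this cup in the current search state
```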
  • the obstacle judgment can be performed strictly in the order of steps S421-S424; once a step judges that there is an obstacle and closes the suction cup, the subsequent steps are not executed. For example, if step S421 judges that there is a glass boundary obstacle under the suction cup, the suction cup is closed and steps S422-S424 are not executed. Since the above four steps are relatively independent, it is also possible to judge the obstacles of the glass through the obstacle judgment method of any single step in S421-S424, or to combine any number of steps, for example using a combination of steps S421 and S422 without steps S423 and S424.
  • when multiple steps are combined, they can be performed in a certain order or in parallel. It should be noted that although a single step or various combinations can realize the obstacle judgment, executing the obstacle judgment methods of steps S421-S424 in sequence, as in the preferred embodiment of the present invention, not only improves the accuracy of the obstacle judgment compared with other judgment methods, but also improves the efficiency of obstacle judgment and of robot grasping, because the obstacles that are more likely to appear are judged first and the simpler algorithms are used first; it therefore has particular advantages in industrial scenarios.
  • Step S430 selecting an alternative search state according to the opening of the suction cup in each search state.
  • Step S440 selecting the best search state from the alternative search states according to the number of suction cups opened and/or the distance between the suction cups and the center of the glass.
  • the optimal search state can be selected according to the distance between each search state and the center of the glass to be grabbed. Generally speaking, the closer to the center of the object, the more stable the grasp, so the search state closest to the center of the object can be set as the best Search status. For each search state, determine the center position P1 of the fixture in this search state, and then calculate the center position P2 of the object through the circumscribed rectangle of the glass. The distance between P1 and P2 is the distance between the fixture and the center of the glass. It is also possible to select the search state with the largest number of passing through obstacles as the best search state according to the number of suction cups that have passed the obstacle determination.
  • the optimal search state can also be selected by combining the number of suction cups opened and the distance between the suction cups and the center of the glass. For example, the selection can be made first based on the number; if several search states share the largest number of opened suction cups, the search state closest to the center position of the item is further selected from them as the best search state. In another embodiment, a quantity threshold can also be preset: first select all search states in which the number of opened suction cups exceeds the threshold, and then select from these the search state closest to the center as the best search state.
  • Step S450 using the best search state to grab the glass.
  • FIG. 8 shows a flow chart of a glass gripping verification method using a suction cup array according to a preferred embodiment of the present invention. As shown in Figure 8, the method includes:
  • Step S500 Group all the suckers in the array according to the shape of the sucker array.
  • the suction cup array includes 12 suction cups, respectively numbered 1-12, and the suction cups can be divided into 4 groups of upper left, upper right, lower left, and lower right according to their relative positions with the array of suckers. Specifically, suction cups 3 and 9 are the first group, suction cups 4 and 8 are the second group, suction cups 1, 2 and 10 are the third group, and suction cups 5, 6 and 7 are the fourth group.
  • Step S510 Obtain state information of the suction cup array.
  • the state of the suction cup array refers to the combination of specific parameters of each suction cup in the suction cup array.
  • grasping performed by the suction cup array in a certain state means that the suction cup array configures and executes the grabbing action based on the parameters in that state. For example, suppose a suction cup array has 3 suction cups, numbered 1-3. Suction cup No. 1 is on the left edge of the object to be grasped with a rotation angle of 30 degrees, suction cup No. 2 is on the right edge of the object with a rotation angle of 0 degrees, suction cups No. 1 and No. 2 are set to open, and suction cup No. 3 is outside the item and set to off. The combination of such specific parameters is one state of the suction cup array.
  • the status information of the suction cup array includes the position of the suction cup array, the position and angle of each suction cup in the suction cup array, and information such as whether these suction cups can be opened in the current state, which suction cups can be opened, and the like.
  • Step S520 Determine the quantity information of the openable suction cups and the group information of the openable suction cups based on the state information of the suction cup array.
  • the number of the suction cups that can be opened in this state and the group of the suction cups that can be opened are determined.
  • Step S530 Determine whether the number of openable suction cups meets a preset condition and/or whether the grouping information of the openable suction cups meets a preset condition.
  • a single threshold can be set, for example requiring more than 5 suction cups to allow grasping. It is also possible to preset the threshold for the number of suction cups according to the size of the glass to be grabbed. For example, a minimum threshold of 3 suction cups can be set, that is, no matter what the glass area is, at least 3 suction cups must be able to open; it can also be set that 4 suction cups are required for a glass area of 1 to 2 square meters, and 5 suction cups for a glass area of 2 square meters or more.
  • the area of the glass and the number of suction cups opened are judged to determine whether the grip can be performed.
  • the area of the glass must also be judged. For example, it can be judged that the glass area is 2 square meters, and the number of openable suction cups is 4. When the corresponding relationship between area and quantity is satisfied, the quantity condition is considered to be satisfied.
  • the suction cup distribution conditions that need to be met can be preset, for example, it can be set that the suction cup distribution conditions are satisfied when there are suction cups that can be opened in at least three groups (in this case, the suction cups can be arranged in a stable triangle or quadrilateral).
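  • A sketch combining the quantity condition and the grouping condition described above is given below; the area-to-count table and the three-group rule follow the examples in this section, while the function and parameter names are assumptions.

```python
def verify_suction_state(open_cups, cup_groups, glass_area_m2):
    """open_cups: ids of suction cups that can be opened;
    cup_groups: mapping of group name -> cup ids (e.g. the upper-left group).
    Returns True when both the quantity and the distribution conditions hold."""
    required = 3 if glass_area_m2 < 1 else 4 if glass_area_m2 <= 2 else 5
    count_ok = len(open_cups) >= required
    groups_with_open = sum(
        1 for members in cup_groups.values() if set(members) & set(open_cups)
    )
    distribution_ok = groups_with_open >= 3   # cups span a stable triangle/quadrilateral
    return count_ok and distribution_ok
```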
  • Step S540 Determine whether the suction cup array can perform grasping in this state according to the judgment result.
  • the grabbing may be performed when the number of openable suction cups meets the condition or when the distribution of the openable suction cups meets the condition. It is also possible to perform grasping only when both conditions are met at the same time; when there are multiple preset conditions for the suction cup grouping, grasping can be performed only when all of them are met.
  • Step S500 can be executed at any time before S510, as long as it is ensured that there is already determined group information when steps S510-S540 are executed in sequence.
  • the suction cup verification may be performed after it is determined that the glass will be grabbed in a certain state and before the grabbing is performed. If it is used in conjunction with the obstacle avoidance grasping scheme, steps S510-S540 can be performed between steps S430 and S440 of the aforementioned embodiment; that is, after the best search state is selected, the best search state is used as the state of the suction cup array, and the suction cup verification is performed to determine whether this optimal search state can be used for grasping.
  • for example, steps S510-S540 can be performed in step S420 or step S430 of the foregoing embodiment: an alternative search state can be selected through the suction cup verification method, or, after the alternative search states are selected, each candidate search state is first used as the state of the suction cup array, the search states in which grasping cannot be performed are eliminated by the suction cup verification method, and the best search state is then selected according to the number of suction cups and the distance from the center.
  • the present invention first obtains the possible gripping methods of the clamp and selects the best gripping method that does not encounter obstacles to execute the grabbing of the item, so that even if there is an obstacle on the item that cannot be grabbed, the obstacle can be accurately avoided and the item grabbed;
  • the present invention proposes a solution that can identify non-planar areas on the surface of the object;
  • the present invention can, before the gripper group performs grabbing, pre-verify that the grabbing method to be used can grab the item correctly, which improves the stability of grabbing and avoids problems such as an unstable center of gravity during the grabbing process; fourth, based on the general-purpose obstacle avoidance grasping method and fixture verification method, the present invention develops an obstacle avoidance grasping method and a suction cup verification method dedicated to the industrial scene of grasping glass with a suction cup array, which can improve the accuracy and stability of glass grasping with a suction cup array.
  • Fig. 9 shows a clamp control device according to yet another embodiment of the present invention, the device includes:
  • the point cloud acquisition module 600 is used to acquire the point cloud information of the item group to be captured, that is, to realize step S100;
  • the search state generation module 610 is used to generate the search state based on the point cloud information of the item group to be grasped and the search boundary factor parameters of the fixture, that is, to realize step S110;
  • the obstacle judgment module 620 is used for each search state, executes the obstacle judgment, and sets the search state passed through the obstacle judgment as an alternative search state, that is, for realizing step S120;
  • the best search state determination module 630 is used to select the best search state from the candidate search states, that is, to implement step S130;
  • the grabbing module 640 is configured to grab the item to be grabbed based on the optimal search state, that is, to implement step S140.
  • Fig. 10 shows a robot-based non-planar structure judging device according to yet another embodiment of the present invention, the device comprising:
  • a depth map acquisition module 700 configured to obtain a two-dimensional plane depth map of the area to be determined on the surface of the item, that is, to implement step S200;
  • a depth value acquisition module 710 configured to acquire the depth value of each pixel in the area to be determined according to the two-dimensional plane depth map, that is, to implement step S210;
  • a grouping module 720 configured to group the obtained depth values, that is, to implement step S220;
  • the comparison module 730 is used to calculate the difference between the depth values in each group, and compare it with a preset difference threshold; and/or, calculate the number of groups, and compare it with a preset group number threshold, that is, use To realize step S230;
  • the determination module 740 is configured to determine the non-planar structure of the region according to the comparison result, that is, to implement step S240.
  • Fig. 11 shows a schematic structural view of a fixture set verification device according to yet another embodiment of the present invention, the device comprising:
  • a status information acquisition module 800 configured to acquire status information of the fixture array, that is, to execute step S300;
  • An information determination module 810 configured to determine the quantity information of the openable clamps and/or the position information of the openable clamps based on the state information, that is, to execute step S310;
  • a condition determination module 820 configured to determine whether the number of openable clamps meets a preset condition and/or determine whether the position information of an openable clamp meets a preset condition, that is, to execute step S320;
  • the grasping determination module 830 is configured to determine whether the gripper group can perform grasping in this state according to the judgment result, that is, to execute step S330.
  • Fig. 12 shows a schematic structural diagram of a suction cup array control device according to yet another embodiment of the present invention, the device includes:
  • the point cloud acquisition module 900 is used to acquire the point cloud information of glass to be grasped and obstacles, that is, to execute step S400;
  • a search state generation module 910 configured to generate a search state based on point cloud information and search boundary factor parameters of the sucker array, that is, to execute step S410;
  • Obstacle judging module 920 for each search state, to judge whether there is an obstacle under each suction cup, if there is, close the suction cup in the search state, that is, to execute step S420;
  • the alternative search state selection module 930 is used to select an alternative search state according to the opening of the suction cup in each search state. That is, it is used to execute step S430;
  • the best search state selection module 940 is used to select the best search state from the alternative search states according to the number of suction cups opened and/or the distance between the suction cups and the center of the glass, that is, to perform step S440;
  • the grasping module 950 is configured to grasp the glass based on the optimal search state, that is, to execute step S450.
  • Fig. 13 shows a schematic structural view of a sucker array calibration device according to yet another embodiment of the present invention, the device includes:
  • the grouping module 1000 is used to group all the suckers in the array according to the shape of the sucker array, that is, to execute step S500;
  • a status information acquisition module 1010 configured to acquire the status information of the sucker array, that is, to execute step S510;
  • the information determination module 1020 is used to determine the number information of the openable suction cups and the group information of the openable suction cups based on the state information of the suction cup array, that is, to execute step S520;
  • a condition determination module 1030 configured to determine whether the number of openable suction cups meets a preset condition and/or whether the grouping information of the openable suction cups meets a preset condition, that is, to execute step S530;
  • the grasping determination module 1040 is configured to determine whether the suction cup array can perform grasping in this state according to the judgment result, that is, to execute step S540.
  • the above-mentioned embodiment describes the grasping determination module 1040 as implementing step S540; however, according to actual needs, the grasping determination module 1040 can also be used to implement all or part of the methods of steps S500, S510, S520 or S530.
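The grouping and checking performed by the modules of Fig. 13 (steps S500-S540) can be pictured with the following sketch, in which the array is split into the four quadrants mentioned later in this application (upper left, lower left, upper right, lower right) and grasping is allowed only when the openable cups fall into at least three quadrants and their number reaches a preset minimum. The data layout, the function names and the minimum count are assumptions made for illustration.

```python
def quadrant_of(cup_pos, array_center):
    """Step S500: assign a cup to one of the four quadrants of the array."""
    x, y = cup_pos
    cx, cy = array_center
    horizontal = 'left' if x < cx else 'right'
    vertical = 'upper' if y >= cy else 'lower'
    return vertical + '-' + horizontal

def verify_cup_array(openable_cups, array_center, min_count=3):
    """Sketch of steps S510-S540.

    openable_cups : (x, y) positions of the cups that can be opened in
                    the current state (steps S510-S520)
    """
    # Step S530a: number condition (assumed to be a simple minimum here;
    # the description also allows it to depend on the glass area).
    if len(openable_cups) < min_count:
        return False

    # Step S530b: grouping condition; the openable cups must be spread
    # over at least three of the four quadrants so that they can form a
    # stable triangle or quadrilateral.
    occupied = {quadrant_of(p, array_center) for p in openable_cups}

    # Step S540: both conditions must hold for grasping to proceed.
    return len(occupied) >= 3
```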
  • the present application also provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method in any one of the above-mentioned implementation modes is implemented.
  • the computer program stored in the computer-readable storage medium in the embodiment of the present application can be executed by the processor of the electronic device.
  • the computer-readable storage medium can be a storage medium built into the electronic device, or a storage medium that can be pluggably connected to the electronic device. Therefore, the computer-readable storage medium in the embodiments of the present application has high flexibility and reliability.
  • FIG. 14 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention.
  • the specific embodiment of the present invention does not limit the specific implementation of the electronic device.
  • the electronic device may include: a processor (processor) 1102, a communication interface (Communications Interface) 1104, a memory (memory) 1106, and a communication bus 1108.
  • the processor 1102 , the communication interface 1104 , and the memory 1106 communicate with each other through the communication bus 1108 .
  • the communication interface 1104 is used to communicate with network elements of other devices such as clients or other servers.
  • the processor 1102 is configured to execute the program 1110, and may specifically execute relevant steps in the foregoing method embodiments.
  • the program 1110 may include program codes including computer operation instructions.
  • the processor 1102 may be a central processing unit CPU, or an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • the one or more processors included in the electronic device may be of the same type, such as one or more CPUs, or may be different types of processors, such as one or more CPUs and one or more ASICs.
  • the memory 1106 is used to store the program 1110 .
  • the memory 1106 may include a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
  • the program 1110 may be specifically configured to enable the processor 1102 to perform various operations in the foregoing method embodiments.
  • In summary, the inventive content of the present invention includes:
  • a fixture control method comprising:
  • acquiring point cloud information of a group of items to be grabbed;
  • generating search states based on the point cloud information of the group of items to be grabbed and the search boundary factor parameters of the fixture;
  • performing obstacle judgment for each search state, and setting the search states that pass the obstacle judgment as alternative search states;
  • selecting the optimal search state from the alternative search states;
  • grabbing an item to be grabbed based on the optimal search state.
  • the group of items to be grabbed includes one or more items to be grabbed and/or obstacles.
  • the obstacle determination includes at least one of the following: boundary obstacle determination, fixed obstacle determination, protrusion/depression obstacle determination and user-defined obstacle determination.
  • the determination of the boundary obstacle includes determining whether there is a boundary obstacle according to the relationship between the center of the clamp and the edge contour points of the object.
  • the determination of the fixed obstacle includes determining whether there is a fixed obstacle according to the size or shape of the obstacle.
  • the determination of the protrusion/depression obstacle includes determining whether there is a protrusion/depression obstacle according to the two-dimensional plane depth map information.
  • the determination of the user-defined obstacle includes generating an edge profile of the user-defined obstacle, and determining whether there is a user-defined obstacle based on the edge profile.
  • the method further includes: performing fixture verification for the optimal search state to determine whether the object can be grasped correctly using the search state.
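As a concrete illustration of the boundary obstacle determination mentioned above (whether the gripper center lies inside the edge contour of the object), the following Python sketch uses a standard ray-casting point-in-polygon test on the 2D contour points. The contour representation, the function names and the decision convention are assumptions for illustration, not the disclosed implementation.

```python
def inside_contour(point, contour):
    """Ray-casting test: is `point` inside the closed 2D polygon defined
    by `contour`, a list of (x, y) edge contour points?"""
    x, y = point
    inside = False
    n = len(contour)
    for i in range(n):
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        # does the horizontal ray starting at (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def has_boundary_obstacle(gripper_center, item_contour):
    """Boundary obstacle judgment: a gripper whose center falls outside
    the item contour is considered to sit on the item boundary, so the
    corresponding search state fails this check."""
    return not inside_contour(gripper_center, item_contour)
```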
  • a clamp control device comprising:
  • the point cloud acquisition module is used to acquire the point cloud information of the item group to be captured
  • the search state generation module is used to generate the search state based on the point cloud information of the item group to be grasped and the search boundary factor parameters of the fixture;
  • an obstacle judgment module configured, for each search state, to perform obstacle judgment and to set the search states that pass the obstacle judgment as alternative search states;
  • the best search state determination module is used to select the best search state from the alternative search states
  • a grabbing module configured to grab an item to be grabbed based on the optimal search state.
  • the group of items to be grabbed includes one or more items to be grabbed and/or obstacles.
  • optionally, it further includes: using a json file to save the configuration parameters of the fixture.
  • the obstacle judgment module is configured to perform at least one of the following obstacle judgments: boundary obstacle judgment, fixed obstacle judgment, convex/sunken obstacle judgment and user-defined obstacle judgment.
  • the obstacle judging module judges whether there is a border obstacle according to the relationship between the center of the clamp and the edge contour point of the object when performing the border obstacle judgment.
  • the obstacle judging module judges whether there is a fixed obstacle according to the size or shape of the obstacle when performing the fixed obstacle judgment.
  • the obstacle judging module judges whether there is a convex/sunken obstacle according to the two-dimensional plane depth map information when performing the convex/sunken obstacle judgment.
  • the obstacle judging module judges whether there is a custom obstacle according to the edge profile of the custom obstacle when performing the custom obstacle judgment.
  • optionally, it further includes: performing fixture verification for the optimal search state to determine whether the object can be grasped correctly using that search state.
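For the search state generation used by the method and device above (the search boundary factor parameters of the fixture), a minimal Python sketch is given below. It simply enumerates X/Y offsets and rotation angles within configured boundaries at a given granularity; with the illustrative boundary values of the description (X in [-500 mm, 500 mm] at 100 mm steps, Y in [-100 mm, 100 mm] at 50 mm steps, rotation in [-50, 50] degrees at 10 degree steps) this yields 11 x 5 x 11 = 605 states. The parameter and field names are assumptions.

```python
from itertools import product

def generate_search_states(x_range=(-500, 500), x_step=100,
                           y_range=(-100, 100), y_step=50,
                           rot_range=(-50, 50), rot_step=10):
    """Sketch of step S110/S410: enumerate candidate poses from the
    search boundary factor parameters (boundaries plus granularity).
    Further boundary factors (for instance a minimum distance to a
    specified edge) would filter this list afterwards."""
    def steps(bounds, step):
        lo, hi = bounds
        return [lo + i * step for i in range(int((hi - lo) / step) + 1)]

    return [{'dx': dx, 'dy': dy, 'angle': angle}
            for dx, dy, angle in product(steps(x_range, x_step),
                                         steps(y_range, y_step),
                                         steps(rot_range, rot_step))]

# len(generate_search_states()) == 605 with the defaults above
```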
  • a robot-based non-planar structure determination method comprising:
  • acquiring two-dimensional plane depth map information of an area to be determined on the surface of an item;
  • acquiring the depth value of each pixel in the area to be determined according to the two-dimensional plane depth map information;
  • grouping the obtained depth values;
  • calculating the difference between the depth values in each group and comparing it with a preset difference threshold; and/or calculating the number of groups and comparing it with a preset group-number threshold;
  • determining the non-planar structure condition of the area according to the comparison result.
  • the grouping includes grouping two different depth values into one group.
  • the grouping includes grouping all depth values in the following manner: sorting all obtained depth values from high to low, grouping the first depth value with the last depth value, the second depth value with the second-to-last depth value, ..., and the Nth depth value with the Nth-from-last depth value, where N is a natural number greater than or equal to 1.
  • the preset difference threshold and/or group number threshold includes a preset difference threshold and/or group number threshold based on the capability of the clamp.
  • the non-planar structure condition includes the existence or absence of a non-planar structure.
  • the non-planar structure condition includes the degree of undulation of the non-planar structure.
  • a non-planar structure determination device based on a robot comprising:
  • a depth map acquisition module configured to obtain a two-dimensional plane depth map of the area to be determined on the surface of the item
  • a depth value acquisition module configured to acquire the depth value of each pixel in the area to be determined according to the two-dimensional plane depth map
  • a grouping module configured to group obtained depth values
  • the comparison module is used to calculate the difference of the depth value in each group and compare it with the preset difference threshold; and/or calculate the number of groups and compare it with the preset group number threshold;
  • the judging module is used for judging the non-planar structure of the region according to the comparison result.
  • after the depth value acquisition module obtains the depth value of each pixel, only one of multiple identical depth values is retained.
  • the grouping module groups two different depth values into one group.
  • the grouping module groups all the depth values in the following manner: sorting all obtained depth values from high to low, grouping the first depth value with the last depth value, the second depth value with the second-to-last depth value, ..., and the Nth depth value with the Nth-from-last depth value, where N is a natural number greater than or equal to 1.
  • the preset difference threshold and/or group number threshold includes a preset difference threshold and/or group number threshold based on the capability of the clamp.
  • the judging module judges that there is a non-planar structure or that there is no non-planar structure.
  • the judging module judges the undulation degree of the non-planar structure.
  • a fixture set verification method comprising:
  • acquiring status information of the gripper group;
  • determining, based on the status information, the quantity information of the openable grippers and/or the position information of the openable grippers in that state;
  • judging whether the number of openable grippers meets a preset condition and/or whether the position information of the openable grippers meets a preset condition;
  • determining, according to the judgment result, whether the gripper group can perform grasping in this state.
  • determining the position information of the openable grippers includes determining the position information of the openable grippers according to the contour information of the item to be grasped.
  • the condition to be met by the number of openable grippers is preset based on the weight of the item to be grasped.
  • the condition to be met by the positions of the openable grippers is preset based on the center of gravity of the item to be grasped.
  • the conditions to be met for the positions of the openable clamps include: the lines connecting the positions of multiple clamps cannot form a straight line.
  • the determining whether the gripper group can perform grasping in this state according to the judgment result includes determining that the gripping can be performed when the number of the grippers meets the preset condition and the positions of the grippers meet the preset condition.
  • a fixture set verification device comprising:
  • a status information acquisition module configured to acquire status information of the fixture array
  • An information determination module configured to determine the quantity information of the openable clamps and/or the position information of the openable clamps based on the state information
  • a condition determination module configured to determine whether the number of openable fixtures meets a preset condition and/or determine whether the position information of an openable fixture meets a preset condition
  • a grasping determining module configured to determine whether the gripper group can perform grasping in this state according to the judgment result.
  • the information determination module determines the position information of the openable clamp according to the outline information of the object to be grasped.
  • the condition to be met by the number of openable grippers is preset based on the weight of the item to be grasped.
  • the condition to be met by the positions of the openable grippers is preset based on the center of gravity of the item to be grasped.
  • the conditions to be met by the positions of the openable grippers include: the lines connecting the positions of multiple grippers cannot form a straight line.
  • the grasping determining module determines that grasping can be performed when the number of grippers meets the preset conditions and the positions of the grippers meet the preset conditions.
  • a suction cup array control method comprising:
  • acquiring point cloud information of the glass to be grasped and of obstacles;
  • generating search states based on the point cloud information and the search boundary factor parameters of the suction cup array;
  • for each search state, judging whether there is an obstacle under each suction cup and, if so, closing that suction cup in the search state;
  • selecting alternative search states according to which suction cups are open in each search state;
  • selecting the best search state from the alternative search states according to the number of opened suction cups and/or the distance between the suction cups and the center of the glass;
  • grasping the glass using the best search state.
  • the obstacle judgment includes at least one of the following: glass boundary obstacle judgment, rubber mat obstacle judgment, rubber strip obstacle judgment and raised obstacle judgment.
  • the determination of the glass boundary obstacle includes determining whether there is a glass boundary obstacle based on the proportion of the point cloud in the area under the suction cup.
  • the determination of the rubber mat obstacle includes: judging whether there is a rubber mat obstacle based on the area of the obstacle point cloud, wherein the area of the obstacle point cloud is preset according to the area of the rubber mat.
  • the judging of the raised obstacle includes: judging whether there is a raised obstacle based on a preset depth difference threshold and a preset threshold on the number of depth-value pairs.
  • the selecting an alternative search state according to the opening of the suction cup in each search state includes: if there is at least one activated suction cup, selecting the search state as the alternative search state.
  • the selection of the best search state from the alternative search states according to the number of opened suction cups and/or the distance between the suction cups and the center of the glass includes: selecting the search state with the largest number of opened suction cups; if there is more than one such search state, further selecting the one in which the suction cups are closest to the center of the glass.
  • the method further includes: performing fixture verification for the optimal search state to determine whether the glass can be grasped correctly using the search state.
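To make the per-cup checks of this method more concrete, the sketch below shows how the glass boundary judgment (point-cloud proportion under the cup, for which the description cites roughly 5%-40% with about 10% preferred) and the rubber mat judgment (point-cloud area against a threshold preset from the mat size) might be combined in step S420. The mask layout, function names and threshold values are assumptions for illustration.

```python
import numpy as np

def glass_boundary_obstacle(cup_mask, obstacle_mask, ratio_threshold=0.10):
    """Glass boundary judgment: proportion of obstacle point-cloud pixels
    inside the cup footprint; above the threshold, the cup is taken to
    sit on the glass boundary."""
    region = obstacle_mask[cup_mask]
    ratio = float(np.count_nonzero(region)) / max(region.size, 1)
    return ratio > ratio_threshold

def rubber_mat_obstacle(cup_mask, obstacle_mask, pixel_area, mat_area_threshold):
    """Rubber mat judgment: obstacle point-cloud area under the cup
    compared with a threshold preset from the known mat size."""
    area = np.count_nonzero(obstacle_mask[cup_mask]) * pixel_area
    return area > mat_area_threshold

def cup_should_close(cup_mask, obstacle_mask, pixel_area, mat_area_threshold):
    """Partial sketch of step S420: close the cup if any check finds an
    obstacle under it; the glue strip and raised obstacle checks of the
    description would be chained here in the same way."""
    return (glass_boundary_obstacle(cup_mask, obstacle_mask)
            or rubber_mat_obstacle(cup_mask, obstacle_mask,
                                   pixel_area, mat_area_threshold))
```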
  • a sucker array control device comprising:
  • the point cloud acquisition module is used to obtain the point cloud information of glass and obstacles to be grasped
  • a search state generating module configured to generate a search state based on point cloud information and search boundary factor parameters of the sucker array
  • the obstacle judgment module is used to judge whether there is an obstacle under each suction cup for each search state, and if it exists, close the suction cup in the search state;
  • An alternative search state selection module is used to select an alternative search state according to the opening of the suction cup in each search state
  • the best search state selection module is used to select the best search state from the alternative search states according to the number of suction cups opened and/or the distance between the suction cup and the center of the glass;
  • the obstacle judging module is configured to perform at least one of the following obstacle judgments: glass boundary obstacle judgment, rubber mat obstacle judgment, rubber strip obstacle judgment and raised obstacle judgment.
  • when the obstacle judging module performs the glass boundary obstacle judgment, it judges whether there is a glass boundary obstacle according to the proportion of the point cloud in the area under the suction cup.
  • when the obstacle judging module performs the rubber mat obstacle judgment, it judges whether there is a rubber mat obstacle based on the area of the obstacle point cloud, wherein the area of the obstacle point cloud is preset according to the area of the rubber mat.
  • when the obstacle judging module performs the raised obstacle judgment, it judges whether there is a raised obstacle based on a preset depth difference threshold and a preset threshold on the number of depth-value pairs.
  • the alternative search state selection module selects the search state as an alternative search state when there is at least one open suction cup in a certain search state.
  • the optimal search state selection module selects the search state with the largest number of opened suction cups as the optimal search state; if there are multiple such search states, it further selects the one in which the suction cups are closest to the center of the glass.
  • optionally, it further includes: performing gripper verification for the optimal search state to determine whether the glass can be properly grasped using that search state.
  • a sucker array verification method comprising:
  • grouping all the suckers in the array according to the shape of the sucker array;
  • acquiring status information of the sucker array;
  • determining, based on the status information of the sucker array, the quantity information of the openable suckers and the grouping information of the openable suckers;
  • judging whether the number of openable suckers meets a preset condition and/or whether the grouping information of the openable suckers meets a preset condition;
  • determining, according to the judgment result, whether the sucker array can perform grasping in this state.
  • grouping all the suckers in the array according to the shape of the sucker array includes dividing the sucker array into four areas: upper left, lower left, upper right, and lower right, and dividing all the suckers into four groups according to the area where all the suckers are located.
  • the condition to be met by the number of openable suction cups is preset based on the weight of the item to be grasped.
  • the condition to be met by the grouping information of the openable suction cups is preset based on the center of gravity of the item to be grasped.
  • the condition that the grouping information of the openable suction cups needs to meet includes: the openable suction cups are distributed in at least three groups.
  • determining whether the suction cup array can perform grasping in this state according to the judgment result includes: determining that the suction cup array can perform grasping when the number of suction cups satisfies a preset condition and the suction cup grouping information satisfies a preset condition.
  • a sucker array calibration device comprising:
  • the grouping module is used to group all suckers in the array according to the shape of the sucker array
  • a status information acquisition module configured to acquire status information of the suction cup array
  • An information determination module configured to determine the quantity information of the openable suction cups and the group information of the openable suction cups based on the state information of the suction cup array;
  • a condition determination module configured to determine whether the number of openable suction cups meets a preset condition and/or whether the grouping information of the openable suction cups meets a preset condition
  • the grasping determination module is configured to determine whether the suction cup array can perform grasping in this state according to the judgment result.
  • the grouping module is specifically used to divide the suction cup array into four regions: upper left, lower left, upper right, and lower right, and divide all the suckers into four groups according to the regions where all the suckers are located.
  • the condition to be met by the number of openable suction cups is preset based on the weight of the item to be grasped.
  • the condition to be met by the grouping information of the openable suction cups is preset based on the center of gravity of the item to be grasped.
  • the condition that the grouping information of the openable suction cups needs to meet includes: the openable suction cups are distributed in at least three groups.
  • the grasping determination module is specifically configured to: determine that grasping can be performed when the number of suction cups meets the preset condition and the grouping information of the suction cups meets the preset condition.
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate or transmit a program for use in or in conjunction with an instruction execution system, device or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection with one or more wires (electronic device), a portable computer diskette (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber-optic devices, and portable compact disc read-only memory (CD-ROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or otherwise suitably processing it if necessary, and then stored in computer memory.
  • the processor can be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
  • each part of the embodiments of the present application may be realized by hardware, software, firmware or a combination thereof.
  • various steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, it can be implemented by any one or a combination of the following techniques known in the art: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), etc.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, each unit may exist separately physically, or two or more units may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. If the integrated modules are realized in the form of software function modules and sold or used as independent products, they can also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk or an optical disk, and the like.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

本申请公开了一种基于机器人的非平面结构判定方法、装置、电子设备和存储介质。基于机器人的非平面结构判定方法包括:获取物品表面待判定区域的二维平面深度图信息;根据二维平面深度图信息获取待判定区域内每个像素点的深度值;将获得的深度值分组;计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;根据比较结果判定区域的非平面结构情况。本发明使用了数值统计的方式而非确定非平面结构具体位置的方式来判定非平面结构是否存在,因此处理效率高且实用性强,并且还能适用于抓取之外的可能需要判断物体表面是否存在非平面结构的工业场景。

Description

基于机器人的非平面结构判定方法、装置、电子设备和存储介质
优先权声明
本申请要求2021年5月31日递交的、申请号为CN202110598705.8、名称为“基于机器人的非平面结构判定方法、装置、电子设备和存储介质”的中国发明专利的优先权,上述专利的所有内容在此全部引入。
技术领域
本申请涉及图像处理领域,更具体而言,特别涉及基于机器人的非平面结构判定方法、装置、电子设备和存储介质。
背景技术
目前,随着智能程控机器人的广泛普及,越来越多的物品能够借助智能程控机器人实现抓取以及运输操作。例如,物流包装能够通过智能程控机器人进行抓取,从而大幅提升抓取效率。为了提升抓取效率,也为了能够灵活适配多种物品对象,智能程控机器人通常会安装由多个夹具构成的夹具组,以便根据不同的物品对象灵活调用夹具组中的不同夹具。不同的夹具能够抓取的对象不同,例如吸盘阵列,能够吸取类似玻璃材质的物体,而一旦遇到橡胶或者凸起等时,则会导致抓取失败。对于此类夹具无法抓取的物体或结构,称之为障碍。
常规的智能机器人只能针对固定型号的抓取对象以及障碍物单一简单的情况进行避障抓取,这种情况下,抓取对象以及障碍物的大小,形态和位置均是固定的。现有技术中根据型号以及障碍物的位置将夹具设定在抓取对象的中心且非障碍物的位置,从而避开障碍物进行抓取。然而这种避障抓取方法存在以下缺陷:首先,该抓取方法只能用于障碍物在固定位置且型号固定的物品,即在待抓取的物品型号未知时或者待抓取的物品型号已知但是障碍物的位置不固定时,无法准确地避开障碍物进行抓取;其次,该抓取方法并不判断多个夹具是否能稳定地抓取物体,假如使用的夹具数量不够或者多个夹具的排列不合适,例如多个夹具排列成一条直线,在该情形下抓取物体后,重心不稳,抓取过程中物体可能会摆动,导致物体掉落或者与预期之外的其它物品碰撞而损毁。
发明内容
鉴于上述问题,提出了本发明以便克服上述问题或者至少部分地解决上述问题。具体地,根据上述实施例,首先,本发明能够先获取夹具可能的抓取方式,选取其中最佳的且不会抓到障碍的抓取方式执行物品的抓取,使得即便物品上存在不能抓取的障碍时,也能够准确地避障抓取物品;其次,针对突起/凹陷这类导致夹具可能无法正确抓取的障碍,本发明提出了一种通过分组确定特定区域各个像素点的深度值差异以及不同的深度值的数量来确定该区域是否存在非平面结构的方法,由于本方法使用了数值统计的方式而非确定非平面结构具体位置的方式来判定非平面结构是否存在,因此处理效率高且实用性强,并且还能适用于抓取之外的可能需要判断物体表面是否存在非平面结构的工业场景;再次,本发明能够在夹具组执行抓取之前,预先校验所使用的抓取方式能否正确抓取物品,提高了抓取的稳定性,避免了抓取过程中可能出现的重心不稳而导致被抓取物品损毁、灭失等问题;最后,本发明基于通用的避障抓取方法以及夹具校验方法,开发了专用于使用吸盘阵列进行玻璃抓取这一工业场景的避障抓取方法以及吸盘校验方法,能够提高使用吸盘阵列进行玻璃抓取的准确性和稳定性。由此可见,本发明解决了使用夹具进行物品抓取这一工业场景中出现的方方面面的问题。
本申请权利要求和说明书所披露的所有方案均具有上述一个或多个创新之处,相应地,能够解决上述一个或多个技术问题。具体地,本申请提供一种基于机器人的非平面结构判定方法、装置、电子设备和存储介质。
本申请的实施方式的基于机器人的非平面结构判定方法包括:
获取物品表面待判定区域的二维平面深度图信息;
根据二维平面深度图信息获取待判定区域内每个像素点的深度值;
将获得的深度值分组;
计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
根据比较结果判定区域的非平面结构情况。
在某些实施方式中,获得每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
在某些实施方式中,所述分组包括将两个不同的深度值分成一组。
在某些实施方式中,所述分组包括以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
在某些实施方式中,所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
在某些实施方式中,所述非平面结构情况包括存在非平面结构或者不存在非平面结构。
在某些实施方式中,所述非平面结构情况包括非平面结构的起伏程度。
本申请的实施方式的基于机器人的非平面结构判定装置包括:
深度图获取模块,用于获取物品表面待判定区域的二维平面深度图;
深度值获取模块,用于根据二维平面深度图获取待判定区域内每个像素点的深度值;
分组模块,用于将获得的深度值分组;
比较模块,用于计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
判定模块,用于根据比较结果判定区域的非平面结构情况。
在某些实施方式中,所述深度值获取模块获得每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
在某些实施方式中,所述分组模块将两个不同的深度值分成一组。
在某些实施方式中,所述分组模块以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
在某些实施方式中,所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
在某些实施方式中,所述判定模块判定存在非平面结构或者不存在非平面结构。
在某些实施方式中,所述判定模块判定非平面结构的起伏程度。
本申请的实施方式的电子设备包括存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述任一实施方式的基于机器人的非平面结构判定方法。
本申请的实施方式的计算机可读存储介质其上存储有计算机程序,所述计算机程序被处理器执行时实现上述任一实施方式的基于机器人的非平面结构判定方法。
本申请的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请的上述和/或附加的方面和优点从结合下面附图对实施方式的描述中将变得 明显和容易理解,其中:
图1是本申请某些实施方式的夹具避障抓取方法的流程示意图;
图2是本申请某些实施方式的物品表面非平面结构判定方法的流程示意图;
图3是本申请某些实施方式的夹具校验方法的流程示意图;
图4是本申请某些实施方式的使用吸盘的玻璃抓取方法的流程示意图;
图5是本申请某些实施方式的玻璃及障碍点云以及吸盘排列的示意图;
图6是本申请某些实施方式的优选的玻璃抓取时的障碍判断的流程示意图;
图7是本申请某些实施方式的待贴条的涂胶工艺的示意图;
图8是本申请某些实施方式的吸盘阵列校验方法的流程示意图;
图9是本申请某些实施方式的夹具避障抓取装置的结构示意图;
图10是本申请某些实施方式的物品表面非平面结构判定装置的结构示意图;
图11是本申请某些实施方式的夹具校验装置的结构示意图;
图12是本申请某些实施方式的使用吸盘的玻璃抓取装置的结构示意图;
图13是本申请某些实施方式的吸盘阵列校验装置的结构示意图;
图14是本申请某些实施方式的电子设备的结构示意图。
具体实施方式
下面将参照附图更详细地描述本公开的示例性实施例。虽然附图中显示了本公开的示例性实施例,然而应当理解,可以以各种形式实现本公开而不应被这里阐述的实施例所限制。相反,提供这些实施例是为了能够更透彻地理解本公开,并且能够将本公开的范围完整的传达给本领域的技术人员。
图1示出了根据本发明一个实施例的基于机器人的夹具避障抓取方法的流程示意图,如图1所示,该方法包括:
步骤S100,获取待抓取物品组的点云信息。
待抓取物品组可以包括一个或多个待抓取物品,还可以包括一个或多个障碍物。障碍物可能存在与待抓取物品上,例如待抓取物品上的凸起,粘在待抓取物品上的胶条等;障碍物也可能存在于待抓取物品之外,例如待抓取物品可能按层堆叠放置,层之间会有橡胶垫等。相应地,待抓取物品组的点云信息可以包括一个或多个待抓取物品和/或障碍物的点云信息。
作为一个示例,可以通过3D工业相机获取点云信息,3D工业相机一般装配有两个镜头,分别从不同的角度捕捉待抓取物品组,经过处理后能够实现物体的三维图像的展示。将待抓取物品组置于视觉传感器的下方,两个镜头同时拍摄,根据所得到的两个图像的相 对姿态参数,使用通用的双目立体视觉算法计算出待涂胶玻璃的各点的X、Y、Z坐标值及各点的坐标朝向,进而转变为待抓取物品组的点云数据。具体实施时,也可以使用激光探测器、LED等可见光探测器、红外探测器以及雷达探测器等元件生成点云,本发明对具体实现方式不作限定。
通过以上方式获取的点云数据是三维的数据,为了滤除对抓取影响较小的维度对应的数据,减小数据处理量进而加速数据处理速度,提高效率,可将获取的三维的待抓取物品组点云数据正投影映射到二维平面上。
作为一个示例,也可以生成该正投影对应的深度图。可以沿垂直于物品的深度方向获取与三维物品区域相对应的二维彩色图以及对应于二维彩色图的深度图。其中,二维彩色图对应于与预设深度方向垂直的平面区域的图像;对应于二维彩色图的深度图中的各个像素点与二维彩色图中的各个像素点一一对应,且各个像素点的取值为该像素点的深度值。
步骤S110:基于待抓取物品组的点云信息以及夹具的搜索边界因素参数生成搜索状态。
夹具的搜索边界因素参数包括夹具的可控参数的控制边界,控制粒度及其它边界因素参数。根据实际情况,例如,所使用的夹具类型,待抓取物品类型和尺寸等,搜索边界因素参数及数值可以不同。
作为一个示例,可控参数可以包括X方向的移动距离、Y方向的移动距离及旋转的角度,X方向的移动距离边界(即可控边界)可以设定为-500mm到500mm,控制粒度为100mm,则夹具能够在X方向上从-500mm,以100mm为单位,移动到500mm,如此在X方向上共有11个搜索状态;Y方向的移动距离边界可以设定为-100mm到100mm,控制粒度为50mm,则夹具能够在Y方向上从-100mm,以50mm为单位,移动到100mm,如此在Y方向上共有5个搜索状态;旋转的角度可以设定为-50度到+50度,控制粒度为10度,则夹具能够从-50度,以10度为单位,旋转到+50度,如此旋转角度上共有11个搜索状态。在上述搜索边界因素参数的设置下,总计有11×5×11=605个搜索状态。此外,还可以设置其它边界因素参数,例如可以设定夹具的移动距离不能超过某一边的100mm,则夹具向该边移动时只能移动到距离该边的100mm的位置。
为了方便对夹具进行控制,可以对夹具的形态,尺寸等信息进行配置,配置信息可以使用json文件配置保存。根据实际情况的不同,比如使用的夹具或者待抓取物品的不同,配置信息可以不同。例如在使用吸盘阵列进行抓取时,配置信息可以包括单个吸盘相对整个吸盘阵列中心的位置、单个吸盘的编号、半径等。
本发明的方案可以用于多种类型的夹具,例如,包括各类通用夹具,通用夹具是指结构已经标准化,且有较大适用范围的夹具,例如,车床用的三爪卡盘和四爪卡盘,铣床用的平口钳及分度头等。又如,按夹具所用夹紧动力源,可将夹具分为手动夹紧夹具、气动 夹紧夹具、液压夹紧夹具、气液联动夹紧夹具、电磁夹具、真空夹具等。本发明不限定夹具的具体类型,只要能够实现物品抓取操作即可。
步骤S120:针对每一种搜索状态,执行障碍判定,将通过障碍判定的搜索状态设为备选搜索状态。
障碍判定用于判断某种搜索状态下,夹具所面对的障碍情况,例如障碍存在与否,是否影响抓取等,若没有障碍,或障碍不影响抓取,则通过障碍判定,否则不通过。如果使用多个夹具或由多个夹具构成的夹具阵列进行抓取,可以对每个夹具分别执行障碍判定。本发明的重点之一在于遍历所有搜索状态,进行障碍判定,因而并不对障碍判定方式进行限定,任意的障碍判定方式都可以用在本发明中。可以只通过一种障碍判定方式对待抓取物品的障碍情况进行判定,也可以将多种障碍判定方式组合起来进行判定。如果采取多种判定方式的组合,多种判定方式之间可以依照一定顺序进行,也可以并行进行,当多种判定方式中的任一判定方式不通过时,则认为该搜索状态的障碍判定不通过。
在一种可选的实现方式中,本发明示例性地给出了边界避障,固定障碍物避障,凸起\凹陷避障和自定义避障这四种障碍判定方法。
边界障碍判定
一些夹具无法在物品边缘抓取物品,使用这些夹具时,物品边界本身将构成抓取的障碍,因此需要识别出物品的边界以避免在边界处抓取待抓取物品。为了识别出边界,可以获取待抓取物品组的点云数据,具体地,可以使用步骤100的方法获取点云数据,将该点云数据中处于边缘的部分进行截取,得到待抓取物品组的轮廓点云,由于获得的待抓取物品组的轮廓点云是三维点云数据,而三维点云数据会由于各种外界或内在因素而影响物品边缘轮廓的确定。为此,为了能够更加精准的明确物品边缘的轮廓,可对物品边缘的轮廓点云进行投影,将上述轮廓点云映射到二维平面上,得到待抓取物品组边缘的轮廓点。由于此时待抓取物品组边缘的轮廓点是二维数据,基于该二维数据可以更加清楚地明确该待抓取物品组边缘的轮廓点。
为了避免获取的点云不完整,可以从所得到的待抓取物品组的二维图案中获取到待抓取物品的四个角落处的轮廓点。根据上述待涂胶玻璃四个角落处的轮廓点,得到待涂胶玻璃的最小外接矩形,将该最小外接矩形看作待抓取物品组的轮廓四边形。或者将四个角落处的轮廓点按照从左到右、从上到下的顺序顺时针依次连接,形成待抓取物品的边缘线,将边缘线围成的四边形看作待抓取物品组的轮廓四边形。
对于每一种搜索状态,判定该搜索状态下夹具的中心与物品边缘轮廓点的关系,若夹具中心位于所得到的的轮廓之外,表示该夹具位于待抓取物品的边界,此时,判定存在边界障碍,障碍判定不通过;若夹具中心位于轮廓之内,则障碍判定通过。
固定障碍判定
待抓取物品可能同某些固定的障碍物放置在一起从而组成待抓取物品组,例如,待抓取物品是钢板或者玻璃时,多块钢板或者玻璃可能会堆叠放置时,每层钢板或者玻璃之间可能会通过橡胶垫或泡沫塑料隔开,当上层钢板或者玻璃被抓取后,橡胶垫或泡沫塑料会遗留在下层钢板或者玻璃之上,从而成为抓取时的障碍。在此类情况下,需要对这些固定存在但是位置并不确定的,影响夹具抓取待抓取物品的固定障碍物进行障碍判定。
为了识别出固定障碍物,可以获取待抓取物品组的三维点云数据,将其映射到二维平面上,其中,可以使用步骤100的方法获取点云数据并进行映射。由于固定障碍物通常具有统一的规格,例如,统一的尺寸或形状,因此可以预设固定障碍物的规格参数,对于每一个搜索状态,可以基于二维点云数据,判断该搜索状态下,夹具下方是否存在满足该预设规格参数的物品,如果存在,则判定存在固定障碍物,障碍判定不通过;否则,障碍判定通过。
凸起\凹陷障碍判定
根据待抓取物品的不同,物品自身可能存在凸起或凹陷部分,这些凸起/凹陷可能是物品本身就有的,也可能是因为碰撞等原因导致。某些夹具在凸起或凹陷处可能无法正确地抓取物品,例如,如果使用吸盘夹具吸在了凸起棱处,吸盘无法与物体完全贴紧导致漏气,继而也无法在该位置抓取物品。故在某些情形下,需要将凸起/凹陷作为障碍进行障碍判定。
申请人在研发过程中发现,凸起/凹陷作为待抓取物品的一部分,通过轮廓点云数据难以区分,并且,与固定障碍物不同,凸起/凹陷通常也并不具有固定的规格,常规的方式无法识别出物品的凸起/凹陷。
为了解决该问题,本发明提出了一种物体表面非平面结构的判定方法,该方法也是本发明的重点之一,该方法可以用于本发明某些实施例中用于判断凸起/凹陷障碍,也可以在其它需要判断物体表面是否具有非平面结构的情况下使用,而不局限于在避障抓取的方案中使用。图2示出了根据本发明的物体表面非平面结构的判定方法的流程示意图,非平面结构包括凸起,凹陷等结构。如图2所示,该方法包括:
步骤200,获取物品表面待判定区域的二维平面深度图信息。
将物品三维的点云数据正投影映射到二维平面上后,生成该正投影对应的深度图作为待判定区域的二维平面深度图信息。可以采用与步骤100类似的方式获取该深度图。也可以不获取物品整体的深度图,而获取物品部分区域或待判定区域的深度图。
步骤210,根据二维平面深度图信息获取待判定区域内每个像素点的深度值。
深度图中的各个像素点与正投影的二维图中的各个像素点一一对应,且各个像素点的取值为该像素点的深度值。为了确定待判定区域内是否存在凸起或凹陷等非平面结构,本 步骤中可以获取待判定区域内每个像素点的深度值。
步骤220,将获得的深度值分组。
平面结构的各个像素点的深度值相同或者差距较小,而凸起或凹陷等非平面结构的各个像素点的深度值会有较大差异。本步骤中将获得的全部像素点的深度值分组,如果有多个相同的深度值,则可以合并为一个深度值再进行分组。
在分组时,可以将所有获得的深度值按照两个或两个以上的深度值为一组进行分组。分组的方式可以采用随机分组或其它任意的分组方式,本发明对此不作限制。作为一个优选的实施例,可以先将所有的深度值进行排序,之后将最高值和最低值分为一组,次高值和次低值分为一组,依次类推,将所有的深度值分组。
步骤230,计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较。
深度值的差值能够反映出像素点之间的距离,两个像素点之间的深度值差值越大,则表示两点之间的高度差越大,因此深度值的差值能够反映出待判定区域凹凸不平的程度。此外,分组的数量能够反映出具有不同高度的像素点的数量,待判定区域内的分组数量越大,则表示有越多高低起伏的像素点。因此分组数量能够从另一个角度反映出待判定区域内凹凸不平的程度。深度值的差值或分组数量可以单独使用以判断物体是否存在非平面结构。作为一个优选的实施例,也可以使用深度值的差值和分组数量联合判定物体是否存在非平面结构,申请人发现与单独使用相比,此种方式能够大幅增加非平面结构判定的准确率。在另一个实施例中,差值阈值和/或分组数阈值可以根据实际情况的需要进行设定,例如在将方法运用于抓取避障时,对于某些夹具来说,较小的凸起并不影响抓取,因而可以将阈值设置得比较大,这样不会将较小的凸起判定为不可抓取的障碍。
步骤240,根据比较结果判定区域的非平面结构情况。
非平面结构情况可以包括非平面结构是否存在,当深度值的差值和/或分组数量超过阈值时,判定存在非平面结构;还可以包括非平面结构的起伏程度,可以为起伏程度设定多个等级。在其它的实施例中,还可以设置多级阈值,通过不同阈值的组合判定非平面结构是否存在或者起伏程度等级。作为一个优选的实施例,在使用深度值的差值和分组数量联合判定非平面结构情况时,可以设定深度值的差值超过差值阈值且分组数量超过分组数阈值时,判定存在非平面结构。
自定义障碍判定
工业场景中,存在部分无法通过机器人视觉技术识别的障碍,例如在待抓取物是玻璃时,玻璃上可能会涂胶,涂胶通常较为透明,特征也不明显,无法正确识别,而夹具也不能在涂胶位置抓取玻璃。为了识别诸如此类的障碍,可以在抓取前自定义障碍的尺寸,形 态以及位置等信息。如此,对于每一个搜索状态,根据预先定义好的障碍信息判定夹具下是否存在障碍。在一种实施方式中,可以先根据自定义的障碍信息生成该障碍的边缘轮廓,之后对于每一搜索状态,判定该搜索状态下夹具的中心与该自定义的障碍物边缘轮廓的关系,若夹具中心位于所得到的的轮廓之外,判定存在障碍,障碍判定不通过;若夹具中心位于轮廓之内,则障碍判定通过。在另一个实施例中,还可以设置边缘轮廓的内缩距离,即令障碍物的原始边缘轮廓向内部收缩一定的距离,再执行夹具中心与边缘轮廓相对位置的判定。
在步骤S120中,还可以对方案进行如下的扩展或改进:
对于某个搜索状态,若夹具通过障碍判断,则在该搜索状态下,设定开启该夹具;没有通过,则在该搜索状态下,将该夹具设为关闭状态;
如果使用多个夹具或由多个夹具构成的夹具阵列进行抓取,则在执行障碍判定时,可以针对多个夹具中的每一个夹具执行障碍判定;
在某个搜索状态下,若至少一个夹具通过障碍判定,则可以将该搜索状态设定为备选搜索状态;
在执行障碍判定时,也可以在存在障碍但不影响抓取时,判定通过障碍判定,而在障碍存在且影响抓取时,才判定不通过。
步骤S130,从备选搜索状态中选择最佳搜索状态。
根据实际情况,例如待抓取物品以及使用的夹具的不同,选择最佳搜索状态的方式也不同,本发明对此不作限定,任意的选择方式都可以用在本发明的方案中。作为一个较佳的实施例,可以根据每个搜索状态下,夹具距待抓取物品中心的距离选择最佳搜索状态。通常来说,夹具的抓取位置距离物品中心越近,则抓得越稳,因此可以将夹具距物品中心最近的搜索状态设为最佳搜索状态。为了计算夹具距物品中心的距离,可以先确定某一搜索状态下夹具的中心位置P1,接着求取物品的外接矩形,然后根据物品的外接矩形求取物体的中心位置P2,则P1与P2的距离即为夹具距物品中心的距离。
如果使用多个夹具或由多个夹具构成的夹具阵列进行抓取,还可以根据通过障碍判定的夹具的数量选择最佳的搜索状态,或者综合考虑夹具距物品中心的距离以及通过障碍判定的夹具的数量选择最佳搜索状态。在一种优选的实施方式中,可以优先根据数量选取,如果通过数量最多的搜索状态有多个,则从通过的多个搜索状态中,进一步选择夹具距离物品中心位置最近的搜索状态作为最佳搜索状态;在其它实施方式中,也可以预设一数量阈值,先选择所有夹具开启数量超过该阈值的搜索状态,再从这些搜索状态中选取距离中心最近的搜索状态作为最佳搜索状态。
步骤S140,基于所述最佳搜索状态抓取待抓取物品。
机器人根据最佳搜索状态下夹具的各项配置参数设定夹具的位置,角度以及数量等,然后执行物品的抓取并将物品放置在指定的位置。指定的位置包括地面,物品放置架等,一些物品在放置时有位置和状态的要求,例如需要垂直放置,以及不能过高或过低等。因此还可以求取最终规划结果的夹具中心到物体指定边的距离,从而将物品准确地放置在指定位置。
发明人发现在使用多个夹具或由多个夹具构成的夹具阵列进行抓取时,假如最佳搜索状态中使用的夹具数量不够或者多个夹具的排列不合适,例如多个夹具排列成一条直线,会导致抓取时重心不稳,如此抓取过程中物品可能会摆动,掉落或者与预期之外的其它物品碰撞而损毁。为此,本发明还提出了一种夹具抓取校验的方法,该方法可以在抓取物品之前,确定待使用的抓取方式是否能够稳稳抓住物品,从而避免抓取时不必要的损失。该方法可以在本发明的避障抓取方法中使用,也可以在其它抓取场景中使用,本发明对具体的使用情形不作限制,只要某抓取场景中使用了多个夹具或多个夹具组成的夹具阵列,就可以适用本方法进行校验,为了方便说明,本发明中将多个夹具,或多个夹具组成的夹具阵列统称为夹具组。
图3示出了根据本发明一个实施例的夹具组的校验方法的流程示意图。如图3所示,该方法包括:
步骤S300:获取夹具组的状态信息。
夹具组的状态是指夹具组中的各个夹具的特定参数的组合,夹具组在某个状态下执行抓取是指夹具基于该状态中的各项参数进行配置并执行抓取的动作,例如假设某个夹具具有3个夹具,分别为1-3号,1号夹具在待抓取物品的左边缘,且旋转角度为30度,2号夹具在物品的右边缘,且旋转角度为0度,1,2号夹具均设为开启,3号夹具在物品之外,设为不开启,这样的特定参数的组合即为夹具组的一个状态。状态信息可以包括该状态下夹具组的位置信息,夹具组中的各个夹具的位置信息,角度信息,以及这些夹具在当前状态下是否能够开启,哪些夹具能够开启等信息。
步骤S310:基于夹具组的状态信息确定可开启夹具的数量信息和/或可开启夹具的位置信息。
在确定可开启夹具的位置信息时,可以根据待抓取物品的轮廓信息,结合夹具边界参数确定夹具相对于待抓取物品的位置;也可以确定夹具相对于其它夹具的位置,或者夹具相对于夹具阵列的位置作为夹具的位置。在一种实施方式中,可以将多个夹具或夹具阵列划分多个区域,夹具的位置信息可以用夹具所在的区域来表示。
步骤S320:判断可开启夹具的数量是否满足预设的条件;和/或判断可开启夹具的位置信息是否满足预设的条件。
可开启夹具的数量不足,或者可开启夹具的位置不佳,均可能导致抓取不稳。如果可开启夹具的数量不可调节或无需调节,或者夹具的位置不可调节或无需调节时,可以仅判断其中一项。本领域技术人员能够理解,相比于只判断一种条件,同时判断两种条件能够显著提高夹具抓稳的概率。可以根据待抓取物品的信息,例如根据待抓取物品的重量,预先设定需要满足的条件,例如设置可开启夹具的数量应当满足的条件。举个例子,在抓取较重的物品时,可以将夹具数量设为5个,如此可开启夹具的数量超过5个时才满足预设的条件;而对于较轻的物品,则可以设为3个。在待抓取物品的密度均匀时,也可以根据物品的面积大小设定需要使用的夹具数量,例如面积较大的物品,预设条件为5个夹具,面积较小的物品,则为3个,可以根据物品的轮廓信息确定物品的面积。对于可开启夹具的位置信息需要满足的条件,通常至少要避免多个夹具位于一条直线或近似于位于一条直线,根据实际情况的需要,也可以设置更加具体的位置信息,例如可以设置多个可开启夹具的位置的连线必须形成稳定的三角形;也可以根据待抓取物品的重心设置可开启夹具需要满足的位置信息以保证在抓取时具有稳定的重心,例如当待抓取物品中心质量较大时,则预设条件可以为在该待抓取物品的中心附近必须有可开启夹具或必须有特定数量的可开启夹具。
步骤S330:根据所述判断结果确定夹具组在该状态下能否执行抓取。
在具体实施时,可以仅在可开启夹具的数量满足条件或可开启夹具的位置信息满足条件时即执行抓取。也可以在两者同时满足条件时才执行抓取。位置信息的条件包括多个时,可以在满足多个条件时才进行抓取。
在一种实施方式中,可以在确定以某种状态执行物品的抓取之后,且在进行抓取之前,进行夹具校验。如果结合避障抓取方案使用,可以在前述实施例的步骤S130到步骤S140之间执行步骤S300-S330,即在选出最佳搜索状态后,将最佳搜索状态作为夹具阵列的状态执行夹具校验,确定是否可以使用该最佳搜索状态进行抓取。在另一种实施方式中,也可以结合夹具校验的方法来确定以某种状态执行物品的抓取。如果结合避障抓取方案使用,可以在前述实施例的步骤S120或步骤S130中执行步骤S300-S330,例如可以通过夹具校验方式选择备选搜索状态或者在选出备选搜索状态后,先将该备选搜索状态作为夹具阵列的状态,以夹具校验方法从中剔除不可执行抓取的搜索状态,再根据夹具的数量和距中心的距离选择最佳搜索状态。
本发明的避障抓取方法以及夹具抓取校验方法并不限制具体的夹具以及应用场景。为了能够在使用吸盘阵列抓取玻璃的工业场景中应用上述方法,发明人付出了艰辛的劳动对方法进行了进一步细化以适应该场景,这也是本发明的重点之一。
如图4示出了根据本发明优选实施例的使用吸盘阵列的玻璃避障抓取方法的流程图, 该方法包括:
步骤S400:获取待抓取玻璃及障碍的点云信息。
本实施例中,玻璃可以按层堆叠放置,每层玻璃之间设有橡胶垫,将玻璃隔开。可以使用类似步骤S100的方式分别获取橡胶垫点云和物体自身点云,并将点云正投影到2D图并生成正投影后对应的深度图。图5示出了以此方式获取的点云图,图中黑色部分是透明的玻璃,白色部分是非透明部分的点云。
步骤S410:基于点云信息以及吸盘阵列的搜索边界因素参数生成搜索状态。
吸盘的搜索状态包括吸盘在物体上不同的位置状态及吸盘本身的旋转状态。指定搜索边界因素参数可以包括X向、Y向及旋转,还可以包括吸盘边界到指定边的距离。根据不同的搜索边界参数生成多种不同的搜索状态。如图5所示,所使用的吸盘阵列包括10个吸盘,分别编号为1-10,图5示出了某个搜索状态下这10个吸盘的位置,在其他搜索状态下,吸盘可能处于其他位置或者有不同的旋转角度。作为一个示例,X方向的移动距离边界(即可控边界)可以设定为-500mm到500mm,控制粒度为100mm,则夹具能够在X方向上从-500mm,以100mm为单位,移动到500mm,如此在X方向上共有11个搜索状态;Y方向的移动距离边界可以设定为-100mm到100mm,控制粒度为50mm,则夹具能够在Y方向上从-100mm,以50mm为单位,移动到100mm,如此在Y方向上共有5个搜索状态;旋转的角度可以设定为-50度到+50度,控制粒度为10度,则夹具能够从-50度,以10度为单位,旋转到+50度,如此旋转角度上共有11个搜索状态。在上述搜索边界因素参数的设置下,总计有11×5×11=605个搜索状态。还可以设定夹具的移动距离不能超过上边界的100mm,则夹具在Y向上只能移动到距离上边100mm为止。
步骤S420:针对每一种搜索状态,判断每个吸盘下方是否存在障碍,存在则在该搜索状态下关闭该吸盘。
如图6所示,判断吸盘下方是否存在障碍可以包括如下步骤:
步骤S421:判断吸盘下方是否存在玻璃边界障碍。
在当前的搜索状态,对每个吸盘,检查此单个吸盘中心是否在物体轮廓内部。可以通过判断吸盘下方区域内点云占比判断单个吸盘中心是否在物体轮廓内部,点云占比是指吸盘下方区域内,点云区域与整体区域的比例。当吸盘处于玻璃边界时,点云会显著增多,图5中的吸盘4-6即位于玻璃的边界,这些吸盘下方白色的点云区域占比明显较大,因此可以预设点云占比阈值,当吸盘下方区域内点云占比超过该阈值时,则判断吸盘下方存在玻璃边界障碍。占比阈值可以根据实际情况的需要,例如吸盘吸力,障碍物情况,而自行设定,本发明对具体取值不作限定。发明人发现占比阈值在5%-40%之内选取能够提高边界障碍判断的准确率,其中以为10%为最佳。
步骤S422:判断吸盘下方是否存在橡胶垫障碍。
本实施例中,玻璃一层层堆叠摆放,层与层之间通过橡胶垫隔开,除了最上层的玻璃,下层的每个玻璃上面都会存在用于隔开玻璃层的橡胶垫。如此在抓走上层玻璃后,下层玻璃上会有剩下的橡胶垫。由于橡胶块的尺寸是固定且已知的,可以根据尺寸过滤出橡胶块如果在。在一个实施方式中,可以预先设定障碍点云面积,当吸盘下方区域内的点云面积超过障碍点云面积时,则判断吸盘下方存在橡胶垫障碍。障碍点云面积可以根据所使用的橡胶垫的大小任意设定。
步骤S423:判断吸盘下方是否存在胶条障碍。
在工业场景中,可能会在玻璃的边界附近涂胶,涂胶后形成的胶条通常是透明的,这样的胶条点云特征不明显,难以识别,但是也不能让吸盘在胶条处抓取玻璃。为了能正确识别胶条障碍,可以由用户自行设定胶条的位置,机器人根据用户自行设定的胶条障碍信息,判断吸盘阵列下方是否存在胶条障碍。具体地,可以根据用户设置而在玻璃的边缘轮廓附近生成一圈胶条,用户可以设置胶条的位置,内缩距离和胶条宽度,也可以在同一边生成两个胶条。图7示出了4种不同的涂胶工艺,对于工艺1,无需贴条,也无需预设贴条信息;对于工艺2,共两段内外层轨迹,可在内外层轨迹之间不涂的地方进行贴条,也可以不对内层轨迹进行贴条,单独生成指定边的内层轨迹段;对于工艺3,将开口段,即图中右侧无轨迹段,进行贴条;对于工艺4,对内层轨迹外的开口段,即图中与工艺3开口段类似位置的段,进行贴条。对于上述工艺2-4,预先在需要贴条的位置设定贴条障碍信息。如此,在贴条后,机器人能够根据用户设置的胶条信息,判断吸盘下方区域是否存在胶条障碍。
步骤S424:判断吸盘下方是否存在凸起障碍。
玻璃表面可能存在凸起,如果吸盘吸在了凸起棱处,则无法与玻璃表面完全贴紧,从而导致吸盘漏气,吸取失败。可根据吸盘下方区域的点云中各个像素点之间的深度差值及像素点的数量来判断该区域内是否存在凸起障碍。在一种实施方式中,可以获取吸盘下方区域所有点云的深度值,将所有深度值排序,再将最高的深度值与最低的深度值配对,次高的与次低的配对,第三高的与第三低的配对……以此方式将所有深度值配对。计算配对的深度值之间的差值,将深度差值与预设的深度差阈值进行比较,以及将配对数量与预设的对数阈值进行比较,如果高度差值超过阈值且配对数量超过阈值,则判断吸盘下方区域存在凸起障碍。深度差阈值和对数阈值可以根据实际情况的需要,例如吸盘吸力,障碍物情况,而自行设定,本发明对具体取值不作限定。发明人发现对数阈值和深度差阈值在以下范围内选取能够提高凸起障碍判断的准确率:对数阈值可以在10对-50对之内选取,以20对为最佳;深度差阈值可以在0.500mm-0.005mm之内选取,以0.015mm为最佳。
作为优选的实施例,可以严格按照步骤S421-S424的顺序执行障碍判断,在上一步骤 判断存在障碍并关闭吸盘后,之后的步骤则不再执行,例如如果步骤S421中判断吸盘下方存在玻璃边界障碍,则关闭该吸盘,步骤S422-S424不再执行。由于上述四个步骤之间相对独立,因此也可以仅通过S421-S424中的任一步骤的障碍判定方式对玻璃的障碍情况进行判定,或者将其中任意多个步骤组合起来进行判定,例如,使用步骤S421和S422的组合,而不使用步骤S423和S424。如果使用多个步骤的组合,多个步骤之间可以依照一定顺序进行,也可以并行进行。需要说明的是,虽然单一步骤或各种组合均能实现障碍判断,但是如果使用本发明最优实施例的顺序执行步骤S421-S424的障碍判断方法,与其它判断方法相比,不仅能够提高障碍判断的准确度,并且因为先判断出现可能性较大的障碍且先使用较简单的算法,还能提高障碍判断的效率,进而提高机器人抓取的效率,因而在工业场景中具有特别的优势。
步骤S430,根据每个搜索状态的吸盘开启情况,选取备选搜索状态。
可以自行设定选取的条件,例如开启数量,特定吸盘是否开启等。在一种实施方式中,如果至少存在一个开启的吸盘,则将该搜索状态选为备选搜索状态。
步骤S440,根据吸盘开启数量和/或吸盘距玻璃中心的距离从备选搜索状态中选择最佳搜索状态。
可以根据每个搜索状态距待抓取玻璃中心的距离选择最佳搜索状态,通常来说,距离物品中心越近,则抓得越稳,因此可以将距物品中心最近的搜索状态设为最佳搜索状态。对于每一个搜索状态,确定该搜索状态下夹具的中心位置P1,之后通过玻璃的外接矩形求取物体中心位置P2,P1与P2的距离即为夹具距玻璃中心的距离。也可以根据通过障碍判定的吸盘的数量,选择通过数量最多的搜索状态作为最佳搜索状态。在一个优选的实施例中,可以结合吸盘开启数量和吸盘距玻璃中心的距离选择最佳搜索状态,例如可以优先根据数量选取,如果通过数量最多的搜索状态有多个时,则从选出的多个搜索状态中,进一步选择距离物品中心位置最近的搜索状态作为最佳搜索状态;在另外的实施例中,也可以预设一数量阈值,先选择所有吸盘开启数量超过该阈值的搜索状态,再从这些搜索状态中选取距离中心最近的搜索状态作为最佳搜索状态。
步骤S450,使用最佳搜索状态抓取玻璃。
工业场景中通常需要将抓取的玻璃放置在地面或者物品放置架上,如果将玻璃放置在架子上,不能让玻璃超过架子边缘太多,因此还可以求取最终规划结果的吸盘阵列中心到玻璃指定边的距离,如此在放置时机器人能够较为精确地控制玻璃的放置位置,从而将玻璃准确地放置在指定位置。
如图8示出了根据本发明优选实施例的使用吸盘阵列的玻璃抓取校验方法的流程图。如图8所示,该方法包括:
步骤S500:根据吸盘阵列的形态将阵列中的全部吸盘分组。
可以根据吸盘在吸盘阵列上的分布进行分组。如图5所示的实施方式中,吸盘阵列包括12个吸盘,分别编号为1-12,可以将吸盘按照其与吸盘阵列的相对位置,分为左上,右上,左下,右下共4组,具体地,吸盘3和9为第一组,吸盘4和8为第二组,吸盘1、2和10为第三组,吸盘5、6和7为第四组。
步骤S510:获取吸盘阵列的状态信息。
吸盘阵列的状态是指吸盘阵列中的各个吸盘的特定参数的组合,吸盘阵列在某个状态下执行抓取是指吸盘阵列基于该状态中的各项参数进行配置并执行抓取的动作,例如假设某个吸盘阵列具有3个吸盘,分别为1-3号,1号吸盘在待抓取物品的左边缘,且旋转角度为30度,2号吸盘在物品的右边缘,且旋转角度为0度,1,2号吸盘均设为开启,3号吸盘在物品之外,设为不开启,这样的特定参数的组合即为吸盘阵列的一个状态。吸盘阵列的状态信息包括吸盘阵列的位置,吸盘阵列中的各个吸盘的位置,角度,以及这些吸盘在当前状态下是否能够开启,哪些吸盘能够开启等信息。
步骤S520:基于吸盘阵列的状态信息确定可开启吸盘的数量信息和可开启吸盘所在的分组信息。
在该步骤中,对于每种吸盘状态,确定该状态下可开启吸盘的数量及可开启吸盘所在的分组。
步骤S530:判断可开启吸盘的数量是否满足预设的条件和/或可开启吸盘的分组信息是否满足预设的条件。
开启吸盘的数量不足,或者开启吸盘的位置不佳,均可能导致抓取不稳。对于开启吸盘的数量,可以设定单一阈值,例如必须超过5个吸盘才允许抓取。也可以根据待抓取玻璃的大小预设吸盘数量阈值,例如,可以设定一最低数量阈值为3个吸盘,即不管玻璃面积多大,至少得能开启3个吸盘;还可以为1平米-2平米的玻璃面积设置4个吸盘,2平米玻璃以上则为5个吸盘。以此方式判断玻璃的面积以及吸盘开启数量,判断能否执行抓取。如此,在判断可开启吸盘的数量时,还要判断玻璃的面积,例如,可以判断玻璃面积为2平米,可开启吸盘数量为4,当满足面积与数量的对应关系时,认为满足数量条件。
对于开启吸盘的分组信息,可以仅判断哪些分组中具有开启的吸盘。例如,在上述将吸盘分为4组的实施例中,可以根据吸盘开启情况,分别判断第一组到第四组中,哪些组里有能够开启的吸盘。可以预设需要满足的吸盘分布条件,例如可以设定至少三个分组(在此种情况下,吸盘能够排列为稳定的三角形或四边形)中有能够开启的吸盘时,满足吸盘分布条件。
步骤S540:根据所述判断结果确定吸盘阵列在该状态下能否执行抓取。
在具体实施时,可以仅在可开启吸盘的数量满足条件或可开启吸盘的分布满足条件时即执行抓取。也可以在两者同时满足条件时才执行抓取。吸盘分组的预设条件包括多个时,可以在满足多个条件时才进行抓取。
步骤S500可以在S510之前的任意时刻执行,只要保证在顺序执行步骤S510-S540时,已经有确定的分组信息即可。在一个实施方式中,可以在确定以某种状态执行玻璃抓取之后,且在进行抓取之前,进行吸盘校验。如果结合避障抓取方案使用,可以在前述实施例的步骤S430到步骤S440之间执行步骤S510-S540,即在选出最佳搜索状态后,将该最佳搜索状态作为吸盘阵列的状态,执行吸盘校验,确定是否可以使用该最佳搜索状态进行抓取。在另一种实施方式中,也可以结合吸盘校验的方法来确定以某种状态执行物品的抓取。如果结合避障抓取方案使用,可以在前述实施例的步骤S420或步骤S430中执行步骤S510-S540,例如可以通过吸盘校验方法选择备选搜索状态或者在选出备选搜索状态后,先以该备选搜索状态作为吸盘阵列的状态,根据吸盘校验方法从中剔除不可执行抓取的搜索状态,再根据吸盘的数量和距中心的距离选择最佳搜索状态。
另外,需要说明的是,虽然本发明将通用的机器人抓取方法和专用于玻璃抓取的抓取方法分别放在多个实施例中进行描述,并且多个实施例的技术细节不尽相同,然而本领域技术人员能够理解,在专用方法中未描述而在通用方法中描述的技术细节实际上也可以在专用方法中使用,反之亦然。换句话说,虽然本发明的每个实施例都具有特定的特征组合,然而,这些特征在实施例之间的进一步组合和交叉组合也是可行的。
根据上述实施例,首先,本发明能够先获取夹具可能的抓取方式,选取其中最佳的且不会抓到障碍的抓取方式执行物品的抓取,使得即便物品上存在不能抓取的障碍,也能够准确地避障抓取物品;其次,针对可能需要判断物体表面是否存在非平面结构的工业场景,本发明提出了能够识别出物体表面的非平面区域的方案;第三,本发明能够在夹具组执行抓取之前,预先校验所使用的抓取方式能够正确抓取物品,提高了抓取的稳定性,避免了抓取过程中可能出现的重心不稳等问题;第四,本发明基于通用的避障抓取方法以及夹具校验方法,开发了专用于使用吸盘阵列进行玻璃抓取这一工业场景的避障抓取方法以及吸盘校验方法,能够提高使用吸盘阵列进行玻璃抓取的准确性和稳定性。由此可见,本发明解决了使用夹具进行物品抓取这一工业场景中出现的方方面面的问题。
图9示出了根据本发明又一个实施例的夹具控制装置,该装置包括:
点云获取模块600,用于获取待抓取物品组的点云信息,即用于实现步骤S100;
搜索状态生成模块610,用于基于待抓取物品组的点云信息以及夹具的搜索边界因素参数生成搜索状态,即用于实现步骤S110;
障碍判定模块620,用于针对每一种搜索状态,执行障碍判定,将通过障碍判定的搜索 状态设为备选搜索状态,即用于实现步骤S120;
最佳搜索状态确定模块630,用于从备选搜索状态中选择最佳搜索状态,即用于实现步骤S130;
抓取模块640,用于基于所述最佳搜索状态抓取待抓取物品,即用于实现步骤S140。
图10示出了根据本发明又一个实施例的基于机器人的非平面结构判定装置,该装置包括:
深度图获取模块700,用于获取物品表面待判定区域的二维平面深度图,即用于实现步骤S200;
深度值获取模块710,用于根据二维平面深度图获取待判定区域内每个像素点的深度值,即用于实现步骤S210;
分组模块720,用于将获得的深度值分组,即用于实现步骤S220;
比较模块730,用于计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较,即用于实现步骤S230;
判定模块740,用于根据比较结果判定区域的非平面结构情况,即用于实现步骤S240。
图11示出了根据本发明又一个实施例的夹具组校验装置的结构示意图,该装置包括:
状态信息获取模块800,用于获取夹具阵列的状态信息,即用于执行步骤S300;
信息确定模块810,用于基于所述状态信息,确定可开启夹具的数量信息和/或可开启夹具的位置信息,即用于执行步骤S310;
条件判定模块820,用于判断可开启夹具的数量是否满足预设的条件和/或判断可开启夹具的位置信息是否满足预设的条件,即用于执行步骤S320;
抓取确定模块830,用于根据所述判断结果确定夹具组在该状态下能否执行抓取,即用于执行步骤S330。
图12示出了根据本发明又一个实施例的吸盘阵列控制装置的结构示意图,该装置包括:
点云获取模块900,用于获取待抓取玻璃及障碍的点云信息,即用于执行步骤S400;
搜索状态生成模块910,用于基于点云信息以及吸盘阵列的搜索边界因素参数生成搜索状态,即用于执行步骤S410;
障碍判定模块920,用于针对每一种搜索状态,判断每个吸盘下方是否存在障碍,存在则在该搜索状态下关闭该吸盘,即用于执行步骤S420;
备选搜索状态选取模块930,用于根据每个搜索状态的吸盘开启情况,选取备选搜索状态.即用于执行步骤S430;
最佳搜索状态选取模块940,用于根据吸盘开启数量和/或吸盘距玻璃中心的距离从备选搜索状态中选择最佳搜索状态,即用于执行步骤S440;
抓取模块950,用于基于最佳搜索状态抓取玻璃,即用于执行步骤S450。
图13示出了根据本发明又一个实施例的吸盘阵列校验装置的结构示意图,该装置包括:
分组模块1000,用于根据吸盘阵列的形态将阵列中的全部吸盘分组,即用于执行步骤S500;
状态信息获取模块1010,用于获取吸盘阵列的状态信息,即用于执行步骤S510;
信息确定模块1020,用于基于吸盘阵列的状态信息确定可开启吸盘的数量信息和可开启吸盘所在的分组信息,即用于执行步骤S520;
条件判定模块1030,用于判断可开启吸盘的数量是否满足预设的条件和/或可开启吸盘的分组信息是否满足预设的条件,即用于执行步骤S530;
抓取确定模块1040,用于根据所述判断结果确定吸盘阵列在该状态下能否执行抓取,即用于执行步骤S540。
上述图9-图13所示的装置实施例中,仅描述了模块的主要功能,各个模块的全部功能与方法实施例中相应步骤相对应,各个模块的工作原理同样可以参照方法实施例中相应步骤的描述,此处不再赘述。另外,虽然上述实施例中限定了功能模块的功能与方法的对应关系,然而本领域技术人员能够理解,功能模块的功能并不局限于上述对应关系,即特定的功能模块还能够实现其他方法步骤或方法步骤的一部分。例如,上述实施例描述了抓取确定模块1040用于实现步骤S540的方法,然而根据实际情况的需要,抓取确定模块1040也可以用于实现步骤S500、S510、S520或S530的方法或方法的一部分。
本申请还提供了一种计算机可读存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述任一实施方式的方法。需要指出的是,本申请实施方式的计算机可读存储介质存储的计算机程序可以被电子设备的处理器执行,此外,计算机可读存储介质可以是内置在电子设备中的存储介质,也可以是能够插拔地插接在电子设备的存储介质,因此,本申请实施方式的计算机可读存储介质具有较高的灵活性和可靠性。
图14示出了根据本发明实施例的一种电子设备的结构示意图,本发明具体实施例并不对电子设备的具体实现做限定。
如图14所示,该电子设备可以包括:处理器(processor)1102、通信接口(Communications Interface)1104、存储器(memory)1106、以及通信总线1108。
其中:
处理器1102、通信接口1104、以及存储器1106通过通信总线1108完成相互间的通信。
通信接口1104,用于与其它设备比如客户端或其它服务器等的网元通信。
处理器1102,用于执行程序1110,具体可以执行上述方法实施例中的相关步骤。
具体地,程序1110可以包括程序代码,该程序代码包括计算机操作指令。
处理器1102可能是中央处理器CPU,或者是特定集成电路ASIC(Application Specific Integrated Circuit),或者是被配置成实施本发明实施例的一个或多个集成电路。电子设备包括的一个或多个处理器,可以是同一类型的处理器,如一个或多个CPU;也可以是不同类型的处理器,如一个或多个CPU以及一个或多个ASIC。
存储器1106,用于存放程序1110。存储器1106可能包含高速RAM存储器,也可能还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
程序1110具体可以用于使得处理器1102执行上述方法实施例中的各项操作。
概括地说,本发明的发明内容包括:
一种夹具控制方法,包括:
获取待抓取物品组的点云信息;
基于待抓取物品组的点云信息以及夹具的搜索边界因素参数生成搜索状态;
针对每一种搜索状态,执行障碍判定,将通过障碍判定的搜索状态设为备选搜索状态;
从备选搜索状态中选择最佳搜索状态;
基于所述最佳搜索状态抓取待抓取物品。
可选的,所述待抓取物品组包括一个或多个待抓取物品和/或障碍物。
可选的,使用json文件保存夹具的配置参数。
可选的,所述障碍判定至少包括以下一种:边界障碍判定,固定障碍判定,凸起\凹陷障碍判定和自定义障碍判定。
可选的,所述边界障碍判定包括根据夹具的中心与物品边缘轮廓点的关系判定是否存在边界障碍。
可选的,所述固定障碍判定包括根据障碍尺寸或形状判定是否存在固定障碍。
可选的,所述凸起\凹陷障碍判定包括根据二维平面深度图信息判定是否存在凸起\凹陷障碍。
可选的,所述自定义障碍判定包括生成自定义障碍的边缘轮廓,基于该边缘轮廓判定是否存在自定义障碍。
可选的,还包括:针对最佳搜索状态执行夹具校验以确定使用该搜索状态能否正确抓取物品。
一种夹具控制装置,包括:
点云获取模块,用于获取待抓取物品组的点云信息;
搜索状态生成模块,用于基于待抓取物品组的点云信息以及夹具的搜索边界因素参数生成搜索状态;
障碍判定模块,用于针对每一种搜索状态,执行障碍判定,将通过障碍判定的搜索状 态设为备选搜索状态;
最佳搜索状态确定模块,用于从备选搜索状态中选择最佳搜索状态;
抓取模块,用于基于所述最佳搜索状态抓取待抓取物品。
可选的,所述待抓取物品组包括一个或多个待抓取物品和/或障碍物。
可选的,还包括:使用json文件保存夹具的配置参数。
可选的,所述障碍判定模块用于执行以下至少一种障碍判定:边界障碍判定,固定障碍判定,凸起\凹陷障碍判定和自定义障碍判定。
可选的,所述障碍判定模块执行边界障碍判定时根据夹具的中心与物品边缘轮廓点的关系判定是否存在边界障碍。
可选的,所述障碍判定模块执行固定障碍判定时根据障碍尺寸或形状判定是否存在固定障碍。
可选的,所述障碍判定模块执行凸起\凹陷障碍判定时根据二维平面深度图信息判定是否存在凸起\凹陷障碍。
可选的,所述障碍判定模块执行自定义障碍判定时根据自定义障碍的边缘轮廓判定是否存在自定义障碍。
可选的,还包括:针对最佳搜索状态执行夹具校验以确定使用该搜索状态能否正确抓取物品。
一种基于机器人的非平面结构判定方法,包括:
获取物品表面待判定区域的二维平面深度图信息;
根据二维平面深度图信息获取待判定区域内每个像素点的深度值;
将获得的深度值分组;
计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
根据比较结果判定区域的非平面结构情况。
可选的,获得每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
可选的,所述分组包括将两个不同的深度值分成一组。
可选的,所述分组包括以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
可选的,所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
可选的,所述非平面结构情况包括存在非平面结构或者不存在非平面结构。
可选的,所述非平面结构情况包括非平面结构的起伏程度。
一种基于机器人的非平面结构判定装置,包括:
深度图获取模块,用于获取物品表面待判定区域的二维平面深度图;
深度值获取模块,用于根据二维平面深度图获取待判定区域内每个像素点的深度值;
分组模块,用于将获得的深度值分组;
比较模块,用于计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
判定模块,用于根据比较结果判定区域的非平面结构情况。
可选的,所述深度值获取模块获得每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
可选的,所述分组模块将两个不同的深度值分成一组。
可选的,所述分组模块以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
可选的,所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
可选的,所述判定模块判定存在非平面结构或者不存在非平面结构。
可选的,所述判定模块判定非平面结构的起伏程度。
一种夹具组校验方法,包括:
获取夹具组的状态信息;
基于所述状态信息,确定该状态下可开启夹具的数量信息和/或可开启夹具的位置信息;
判断可开启夹具的数量是否满足预设的条件和/或判断可开启夹具的位置信息是否满足预设的条件;
根据所述判断结果确定夹具组在该状态下能否执行抓取。
可选的,所述确定可开启夹具的位置信息包括根据待抓取物品的轮廓信息确定可开启夹具的位置信息。
可选的,基于待抓取物品的重量预设可开启夹具的数量需要满足的条件。
可选的,基于待抓取物品的重心预设可开启夹具的位置需要满足的条件。
可选的,可开启夹具的位置需要满足的条件包括:多个夹具的位置连线不能形成一条直线。
可选的,所述根据所述判断结果确定夹具组在该状态下能否执行抓取包括,当夹具数量满足预设条件且夹具位置满足预设条件时,确定能够执行抓取。
一种夹具组校验装置,包括:
状态信息获取模块,用于获取夹具阵列的状态信息;
信息确定模块,用于基于所述状态信息,确定可开启夹具的数量信息和/或可开启夹具的位置信息;
条件判定模块,用于判断可开启夹具的数量是否满足预设的条件和/或判断可开启夹具的位置信息是否满足预设的条件;
抓取确定模块,用于根据所述判断结果确定夹具组在该状态下能否执行抓取。
可选的,所述所述信息确定模块根据待抓取物品的轮廓信息确定可开启夹具的位置信息。
可选的,基于待抓取物品的重量预设可开启夹具的数量需要满足的条件。
可选的,基于待抓取物品的重心预设可开启夹具的位置需要满足的条件。
可选的,可开启夹具的位置需要满足的条件包括:多个夹具的位置连线不能形成一条直线。。
可选的,所述抓取确定模块在夹具数量满足预设条件且夹具位置满足预设条件时,确定能够执行抓取。
一种吸盘阵列控制方法,包括:
获取待抓取玻璃及障碍的点云信息;
基于点云信息以及吸盘阵列的搜索边界因素参数生成搜索状态;
针对每一种搜索状态,判断每个吸盘下方是否存在障碍,存在则在该搜索状态下关闭该吸盘;
根据每个搜索状态的吸盘开启情况,选取备选搜索状态;
根据吸盘开启数量和/或吸盘距玻璃中心的距离从备选搜索状态中选择最佳搜索状态;
使用最佳搜索状态抓取玻璃。
可选的,所述障碍判定至少包括以下一种:玻璃边界障碍判定,橡胶垫障碍判定,胶条障碍判定和凸起障碍判定。
可选的,所述玻璃边界障碍判定包括根据吸盘下方区域内点云占比判定是否存在玻璃边界障碍。
可选的,所述橡胶垫障碍判定包括:基于障碍点云面积判定是否存在橡胶垫障碍,其中障碍点云面积根据橡胶垫面积预先设定。
可选的,所述凸起障碍判定包括:基于预先设定的深度差阈值和对数阈值判定是否存 在凸起障碍。
可选的,所述根据每个搜索状态的吸盘开启情况,选取备选搜索状态包括:如果至少存在一个开启的吸盘,则将该搜索状态选为备选搜索状态。
可选的,所述根据吸盘开启数量和/或吸盘距玻璃中心的距离从备选搜索状态中选择最佳搜索状态包括:选取吸盘开启数量最多的搜索状态,若吸盘开启数量最多的搜索状态有多个,则进一步选取其中吸盘距玻璃中心最近的搜索状态。
可选的,还包括:针对最佳搜索状态执行夹具校验以确定使用该搜索状态能否正确抓取玻璃。
一种吸盘阵列控制装置,包括:
点云获取模块,用于获取待抓取玻璃及障碍的点云信息;
搜索状态生成模块,用于基于点云信息以及吸盘阵列的搜索边界因素参数生成搜索状态;
障碍判定模块,用于针对每一种搜索状态,判断每个吸盘下方是否存在障碍,存在则在该搜索状态下关闭该吸盘;
备选搜索状态选取模块,用于根据每个搜索状态的吸盘开启情况,选取备选搜索状态;
最佳搜索状态选取模块,用于根据吸盘开启数量和/或吸盘距玻璃中心的距离从备选搜索状态中选择最佳搜索状态;
抓取模块,用于基于最佳搜索状态抓取玻璃。
可选的,所述障碍判定模块用于执行以下至少一种障碍判定:玻璃边界障碍判定,橡胶垫障碍判定,胶条障碍判定和凸起障碍判定。
可选的,所述障碍判定模块执行玻璃边界障碍时,根据吸盘下方区域内点云占比判定是否存在玻璃边界障碍。
可选的,所述障碍判定模块执行橡胶垫障碍判定时,基于障碍点云面积判定是否存在橡胶垫障碍,其中障碍点云面积根据橡胶垫面积预先设定。
可选的,所述障碍判定模块执行凸起障碍判定时,基于预先设定的深度差阈值和对数阈值判定是否存在凸起障碍。
可选的,所述备选搜索状态选取模块在某个搜索状态至少存在一个开启的吸盘时,则将该搜索状态选为备选搜索状态。
可选的,所述最佳搜索状态选取模块选取吸盘开启数量最多的搜索状态作为最佳搜索状态,若吸盘开启数量最多的搜索状态有多个,则进一步选取其中吸盘距玻璃中心最近的搜索状态。
可选的,还包括:针对最佳搜索状态执行夹具校验以确定使用该搜索状态能否正确抓 取玻璃。
一种吸盘阵列校验方法,包括:
根据吸盘阵列的形态将阵列中的全部吸盘分组;
获取吸盘阵列的状态信息;
基于吸盘阵列的状态信息确定可开启吸盘的数量信息和可开启吸盘所在的分组信息;
判断可开启吸盘的数量是否满足预设的条件和/或可开启吸盘的分组信息是否满足预设的条件;
根据所述判断结果确定吸盘阵列在该状态下能否执行抓取。
可选的,所述根据吸盘阵列的形态将阵列中的全部吸盘分组包括将吸盘阵列划分为左上,左下,右上,右下四个区域,并根据全部吸盘所在区域将全部吸盘分为四组。
可选的,基于待抓取物品的重量预设可开启吸盘的数量需要满足的条件。
可选的,基于待抓取物品的重心预设可开启吸盘的分组信息需要满足的条件。
可选的,可开启吸盘的分组信息需要满足的条件包括:可开启的吸盘至少分布在三个组中。
可选的,所述根据所述判断结果确定吸盘阵列在该状态下能否执行抓取包括:当吸盘数量满足预设条件且吸盘分组信息满足预设条件时,确定能够执行抓取。
一种吸盘阵列校验装置,包括:
分组模块,用于根据吸盘阵列的形态将阵列中的全部吸盘分组;
状态信息获取模块,用于获取吸盘阵列的状态信息;
信息确定模块,用于基于吸盘阵列的状态信息确定可开启吸盘的数量信息和可开启吸盘所在的分组信息;
条件判定模块,用于判断可开启吸盘的数量是否满足预设的条件和/或可开启吸盘的分组信息是否满足预设的条件;
抓取确定模块,用于根据所述判断结果确定吸盘阵列在该状态下能否执行抓取。
可选的,所述分组模块具体用于将吸盘阵列划分为左上,左下,右上,右下四个区域,并根据全部吸盘所在区域将全部吸盘分为四组。
可选的,基于待抓取物品的重量预设可开启吸盘的数量需要满足的条件。
可选的,基于待抓取物品的重心预设可开启吸盘的分组信息需要满足的条件。
可选的,可开启吸盘的分组信息需要满足的条件包括:可开启的吸盘至少分布在三个组中。
可选的,抓取确定模块具体用于:当吸盘数量满足预设条件且吸盘分组信息满足预设条件时,确定能够执行抓取。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于实现逻辑功能的可执行指令的定序列表,可以具体实现在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理模块的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
处理器可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
应当理解,本申请的实施方式的各部分可以用硬件、软件、固件或它们的组合来实现。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来实现。例如,如果用硬件来实现,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来实现:具有用于对数据信号实现逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列 (PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解实现上述实施例方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本申请的各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。所述集成的模块如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。
尽管上面已经示出和描述了本申请的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。

Claims (16)

  1. 一种基于机器人的非平面结构判定方法,其特征在于,包括:
    获取物品表面待判定区域的二维平面深度图信息;
    根据二维平面深度图信息获取待判定区域内每个像素点的深度值;
    将获得的深度值分组;
    计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
    根据比较结果判定区域的非平面结构情况。
  2. 根据权利要求1所述的非平面结构判定方法,其特征在于:获得每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
  3. 根据权利要求1所述的非平面结构判定方法,其特征在于:所述分组包括将两个不同的深度值分成一组。
  4. 根据权利要求1或2所述的非平面结构判定方法,其特征在于:所述分组包括以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
  5. 根据权利要求1所述的非平面结构判定方法,其特征在于:所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
  6. 根据权利要求1所述的非平面结构判定方法,其特征在于:所述非平面结构情况包括存在非平面结构或者不存在非平面结构。
  7. 根据权利要求1所述的非平面结构判定方法,其特征在于:所述非平面结构情况包括非平面结构的起伏程度。
  8. 一种基于机器人的非平面结构判定装置,其特征在于,包括:
    深度图获取模块,用于获取物品表面待判定区域的二维平面深度图;
    深度值获取模块,用于根据二维平面深度图获取待判定区域内每个像素点的深度值;
    分组模块,用于将获得的深度值分组;
    比较模块,用于计算每个分组内深度值的差值,并与预设的差值阈值进行比较;和/或,计算分组的数量,并与预设的分组数阈值进行比较;
    判定模块,用于根据比较结果判定区域的非平面结构情况。
  9. 根据权利要求8所述的非平面结构判定装置,其特征在于:所述深度值获取模块获得 每个像素点的深度值后,对于多个相同深度值,只保留其中一个。
  10. 根据权利要求9所述的非平面结构判定装置,其特征在于:所述分组模块将两个不同的深度值分成一组。
  11. 根据权利要求8或9所述的非平面结构判定装置,其特征在于:所述分组模块以如下方式将所有的深度值进行分组:将获取的全部深度值从高到低排序,并将第一个深度值与倒数第一个深度值分为一组,第二个深度值与倒数第二个深度值分为一组……第N个深度值与倒数第N个深度值分为一组,其中N为大于等于1的自然数。
  12. 根据权利要求8所述的非平面结构判定装置,其特征在于:所述预设的差值阈值和/或分组数阈值包括基于夹具的能力预设的差值阈值和/或分组数阈值。
  13. 根据权利要求8所述的非平面结构判定装置,其特征在于:所述判定模块判定存在非平面结构或者不存在非平面结构。
  14. 根据权利要求8所述的非平面结构判定装置,其特征在于:所述判定模块判定非平面结构的起伏程度。
  15. 一种电子设备,其特征在于,包括:存储器、处理器及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现权利要求1至7中任一项所述的基于机器人的非平面结构判定方法。
  16. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现权利要求1至7中任一项所述的基于机器人的非平面结构判定方法。
PCT/CN2021/138577 2021-05-31 2021-12-15 基于机器人的非平面结构判定方法、装置、电子设备和存储介质 WO2022252562A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110598705.8A CN115482350A (zh) 2021-05-31 2021-05-31 基于机器人的非平面结构判定方法、装置、电子设备和存储介质
CN202110598705.8 2021-05-31

Publications (1)

Publication Number Publication Date
WO2022252562A1 true WO2022252562A1 (zh) 2022-12-08

Family

ID=84322833

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138577 WO2022252562A1 (zh) 2021-05-31 2021-12-15 基于机器人的非平面结构判定方法、装置、电子设备和存储介质

Country Status (2)

Country Link
CN (1) CN115482350A (zh)
WO (1) WO2022252562A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039731A1 (en) * 2015-08-05 2017-02-09 Intel Corporation Method and system of planar surface detection for image processing
CN111161263A (zh) * 2020-04-02 2020-05-15 北京协同创新研究院 一种包装平整度检测方法、系统、电子设备和存储介质
CN112414326A (zh) * 2020-11-10 2021-02-26 浙江华睿科技有限公司 物体表面平整度的检测方法、装置、电子装置和存储介质
CN112802113A (zh) * 2021-02-05 2021-05-14 梅卡曼德(北京)机器人科技有限公司 一种任意形状物体的抓取点确定方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170039731A1 (en) * 2015-08-05 2017-02-09 Intel Corporation Method and system of planar surface detection for image processing
CN111161263A (zh) * 2020-04-02 2020-05-15 北京协同创新研究院 一种包装平整度检测方法、系统、电子设备和存储介质
CN112414326A (zh) * 2020-11-10 2021-02-26 浙江华睿科技有限公司 物体表面平整度的检测方法、装置、电子装置和存储介质
CN112802113A (zh) * 2021-02-05 2021-05-14 梅卡曼德(北京)机器人科技有限公司 一种任意形状物体的抓取点确定方法

Also Published As

Publication number Publication date
CN115482350A (zh) 2022-12-16

Similar Documents

Publication Publication Date Title
US20210187735A1 (en) Positioning a Robot Sensor for Object Classification
JP6461369B2 (ja) 視覚光、および、赤外の投射されるパターンを検出するためのイメージャ
US11383380B2 (en) Object pickup strategies for a robotic device
US10754318B2 (en) Robot interaction with objects based on semantic information associated with embedding spaces
US9649767B2 (en) Methods and systems for distributing remote assistance to facilitate robotic object manipulation
CN113351522B (zh) 物品分拣方法、装置及系统
US20160136808A1 (en) Real-Time Determination of Object Metrics for Trajectory Planning
JP2017520417A (ja) 複数の吸着カップの制御
CN112025701B (zh) 抓取物体的方法、装置、计算设备和存储介质
CN110395515B (zh) 一种货物识别抓取方法、设备以及存储介质
CN107009358A (zh) 一种基于单相机的机器人无序抓取装置及方法
US11213958B2 (en) Transferring system and method for transferring an object
WO2023124734A1 (zh) 物体抓取点估计、模型训练及数据生成方法、装置及系统
WO2022252562A1 (zh) 基于机器人的非平面结构判定方法、装置、电子设备和存储介质
CN115476351A (zh) 吸盘阵列校验方法、装置、电子设备和存储介质
CN115476350A (zh) 吸盘阵列控制方法、装置、电子设备和存储介质
CN115476349A (zh) 夹具组校验方法、装置、电子设备和存储介质
Li et al. A workpiece localization method for robotic de-palletizing based on region growing and PPHT
CN115476348A (zh) 夹具控制方法、装置、电子设备和存储介质
CN116175542B (zh) 确定夹具抓取顺序的方法、装置、电子设备和存储介质
Pacheco-Ortega et al. Intelligent flat-and-textureless object manipulation in Service Robots
CN111360822B (zh) 一种基于视觉的机械手抓取空间正方体方法
CN116175541B (zh) 抓取控制方法、装置、电子设备和存储介质
CN116197885B (zh) 基于压叠检测的图像数据过滤方法、装置、设备和介质
CN114359482A (zh) 一种墙面施工粉尘控制方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21943914

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21943914

Country of ref document: EP

Kind code of ref document: A1