CN116175542B - Method, device, electronic equipment and storage medium for determining clamp grabbing sequence - Google Patents

Method, device, electronic equipment and storage medium for determining clamp grabbing sequence

Info

Publication number
CN116175542B
CN116175542B (application CN202111429144.5A)
Authority
CN
China
Prior art keywords
mask
grabbed
grabbing
value
article
Prior art date
Legal status
Active
Application number
CN202111429144.5A
Other languages
Chinese (zh)
Other versions
CN116175542A (en)
Inventor
崔致豪
丁有爽
邵天兰
Current Assignee
Mech Mind Robotics Technologies Co Ltd
Original Assignee
Mech Mind Robotics Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Mech Mind Robotics Technologies Co Ltd
Priority to CN202111429144.5A
Publication of CN116175542A
Application granted
Publication of CN116175542B


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a method, an apparatus, an electronic device and a storage medium for determining the grabbing order of a clamp. The method for determining the grabbing order comprises the following steps: acquiring a mask of the grippable region of at least one object to be grabbed; for each object to be grabbed, acquiring a feature value of at least one feature of the mask of its grippable region; normalizing each of the acquired feature values to obtain at least one normalized feature value; and calculating a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values. Compared with traditional methods, this ranking approach improves ordering accuracy, and because features of the whole object are not processed, the computation speed is not significantly reduced even when multiple factors are considered.

Description

Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
Technical Field
The present application relates to the field of automated control of robotic arms and clamps (program control, B25J), and more particularly to a method, an apparatus, an electronic device and a storage medium for determining the grabbing order of a clamp.
Background
Robots have the basic capabilities of perception, decision-making and execution; they can assist or even replace humans in dangerous, heavy and complex work, improve working efficiency and quality, serve human life, and extend the range of human activity and capability. With the development of industrial automation and computer technology, robots have begun to enter the stage of mass production and practical application. In industrial settings, industrial robots are already widely used and can take over repetitive or dangerous work. Traditional industrial robot design focuses on the design and manufacture of the hardware, which is not "intelligent" by itself: when such a robot is deployed in an industrial field, technicians must plan the hardware, the production line, material positions and the robot's task paths in advance. For example, to sort and carry articles, site workers must first sort the different types of articles and place them neatly into material frames of uniform specification; before the robot can operate, the production line, material frames and carrying positions must be determined, and a fixed motion path, a fixed grabbing position, a fixed rotation angle and a fixed clamp must be configured for the robot according to this information.
As an improvement over traditional robotics, intelligent program-controlled robots based on machine vision have been developed. However, the current "intelligence" is still fairly simple: the main approach is to acquire task-related image data through a vision device such as a camera, obtain 3D point cloud information from the image data, and then plan the robot's operation (movement speed, trajectory, etc.) based on the point cloud in order to control the robot to execute the task. Existing robot control schemes do not work well on complex tasks. For example, in supermarket and logistics scenarios, many stacked articles must be processed: the robot arm must rely on vision equipment to locate and identify the articles in these cluttered, unordered scenes, pick them up with suction cups, clamps or other bionic instruments, and place them at the appropriate positions according to some rule through arm movement, trajectory planning and similar operations. Grabbing with a robot in such an industrial scene runs into several difficulties: the number of objects in the scene can be large and the lighting uneven, so the point clouds of some objects are of poor quality, which degrades the grabbing result; the objects are varied, untidily placed and face in all directions, so the grabbing points differ from object to object and the clamp's grabbing position is hard to determine; and with stacked articles, grabbing one article easily flings or scatters others. With so many factors affecting grabbing difficulty in this kind of scene, traditional grab-ordering methods do not perform well enough. Moreover, when the grabbing algorithm becomes more complex, it creates more barriers for site workers: when a problem occurs, they struggle to find out why it occurred and how to adjust parameters to solve it, and the robot provider often has to send an expert to assist.
Disclosure of Invention
The present invention has been made in view of the above problems and aims to overcome or at least partially solve them. Specifically, the grab-ordering method of this application ranks objects comprehensively according to features of the mask of each object's grippable region. Compared with traditional methods, this improves ranking accuracy, and because features of the whole object are not processed, the computation speed is not significantly reduced even when multiple factors are considered.
The solutions disclosed in the claims and description of the present application contain one or more of the innovations described above and can accordingly solve one or more of the technical problems described above. Specifically, the application provides a grabbing control method, an apparatus, an electronic device and a storage medium for determining the grabbing order of a clamp.
The grabbing control method for determining the grabbing order of a clamp in an embodiment of the application comprises the following steps:
acquiring a mask of the grippable region of at least one object to be grabbed;
for each object to be grabbed, acquiring a feature value of at least one feature of the mask of its grippable region;
normalizing each of the acquired feature values to obtain at least one normalized feature value;
and calculating a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values.
In some embodiments, the features of the mask of the grippable region include: mask height, clamp size, number of point clouds in the mask, mask diagonality, mask stacking degree, mask size and/or pose orientation.
In some embodiments, mask height feature values of the mask of the grippable region are calculated based on depth values of the grippable region.
In some embodiments, the clamp size is determined based on a preset mapping between clamps and clamp sizes.
In some embodiments, the mask diagonality is determined based on the angle between the diagonal of the mask's circumscribed rectangle and one side of that rectangle.
In some embodiments, the priority value is calculated according to the following formula:
P = \sum_{i=1}^{n} \omega_i X_i
where P is the priority value of the object to be grabbed, n is the number of features, \omega_i is the weight of the i-th feature, and X_i is the feature value of the i-th feature.
The grabbing control apparatus for determining the grabbing order of a clamp according to an embodiment of the present application includes:
a mask acquisition module, used to acquire a mask of the grippable region of at least one object to be grabbed;
a feature value acquisition module, used to acquire, for each object to be grabbed, a feature value of at least one feature of the mask of its grippable region;
a feature value normalization module, used to normalize each of the acquired feature values to obtain at least one normalized feature value;
a priority value calculation module, used to calculate a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values.
In some embodiments, the features of the mask of the grippable region include: mask height, clamp size, number of point clouds in the mask, mask diagonality, mask stacking degree, mask size and/or pose orientation.
In some embodiments, mask height feature values of the mask of the grippable region are calculated based on depth values of the grippable region.
In some embodiments, the clamp size is determined based on a preset mapping between clamps and clamp sizes.
In some embodiments, the mask diagonality is determined based on the angle between the diagonal of the mask's circumscribed rectangle and one side of that rectangle.
In some embodiments, the priority value calculation module calculates the priority value according to the following formula:
P = \sum_{i=1}^{n} \omega_i X_i
where P is the priority value of the object to be grabbed, n is the number of features, \omega_i is the weight of the i-th feature, and X_i is the feature value of the i-th feature.
The electronic device of the embodiments of the application comprises a memory, a processor and a computer program stored on the memory and runnable on the processor; when executing the computer program, the processor implements the method for determining the clamp grabbing order of any of the above embodiments.
The computer-readable storage medium of the embodiments of the present application has stored thereon a computer program which, when executed by a processor, implements the method for determining the clamp grabbing order of any of the above embodiments.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic illustration of mask preprocessing according to certain embodiments of the present application;
FIG. 2 is a flow chart of a method of determining a grasping order according to certain embodiments of the present application;
FIG. 3 is a schematic view of the diagonal of an article according to certain embodiments of the present application;
FIG. 4 is a schematic illustration of the impact of the pose orientation of an object to be grabbed on grabbing according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a grasping sequence determining device according to certain embodiments of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to some embodiments of the present application.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the description of the specific embodiments, it should be understood that the terms "center," "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the invention.
Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
The invention can be used in industrial robot control scenarios based on visual recognition. A typical vision-based industrial robot control scenario includes devices for capturing images, control devices such as production line hardware and the PLC of the production line, robot components for performing tasks, and an operating system or software controlling these devices. The image capturing devices may include 2D or 3D smart/non-smart industrial cameras, which, depending on function and application scenario, may include area-scan cameras, line-scan cameras, black-and-white cameras, color cameras, CCD cameras, CMOS cameras, analog cameras, digital cameras, visible-light cameras, infrared cameras, ultraviolet cameras, etc. The production line can be a packaging, sorting, logistics or processing line that needs robots. The robot components used to perform tasks in the industrial scene may be biomimetic robots, such as humanoid or dog-type robots, or conventional industrial robots such as robot arms. The industrial robot may be an operated, program-controlled, teach-and-playback, numerically controlled, sensor-controlled, adaptively controlled, learning-controlled or intelligent robot. By working principle, the robot arm can be a ball-joint arm, a multi-joint arm, a rectangular-coordinate arm, a cylindrical-coordinate arm or a polar-coordinate arm; by function, it can be a grabbing arm, a palletizing arm, a welding arm or a general industrial arm. The end of the arm can carry an end effector, which, depending on the task, may be a robot clamp, a robot gripper, a robot tool quick-changer, a robot collision sensor, a robot rotary connector, a robot pressure tool, a compliance device, a robot spray gun, a robot deburring tool, a robot arc-welding gun, a robot electric-welding gun, etc. The robot clamp can be any universal clamp, i.e. a clamp with a standardized structure and wide application range, such as the three-jaw and four-jaw chucks used on lathes or the flat tongs and index heads used on milling machines. Clamps can also be classified, by the clamping power source used, into manual, pneumatic, hydraulic, gas-liquid linkage, electromagnetic and vacuum clamps, or other bionic devices capable of picking up an article. The image capturing devices, the control devices such as the production line hardware and PLC, the robot components and the operating system or software controlling them can communicate over TCP (Transmission Control Protocol), HTTP (Hypertext Transfer Protocol) or gRPC (Google Remote Procedure Call) to transmit various control instructions or commands.
The operating system or software may be deployed on any electronic device; typically such devices include industrial computers, personal computers, notebook computers, tablet computers, mobile phones, etc., which can communicate with other devices or systems by wired or wireless means. Furthermore, "grabbing" in the present invention refers broadly to any action that controls an article so as to change its position, and is not limited to gripping in the narrow "clamping" sense; in other words, picking up an article by suction, lifting, tightening or the like also falls within the scope of grabbing in this invention. The articles to be grabbed may be cartons, plastic soft packs (including but not limited to snack packages, Tetra-pillow milk packages, plastic milk packages, etc.), cosmeceutical bottles, cosmetics and/or irregular toys, and they may be placed on a floor, a tray, a conveyor belt and/or in a material basket.
Existing ordering schemes generally consider only one or two features of the objects to be grabbed and rank them with simple ordering logic. Because too few factors are considered overall, the ranking result is often inaccurate; and when site staff find the ranking inaccurate, they have no way to adjust parameters so that the grabbing order matches their expectation, so grabbing based on that order gives poor results. To solve these problems, the invention proposes a method that determines the grabbing order of all objects to be grabbed comprehensively, based on multiple features of the objects' grippable regions. It improves ranking accuracy and the freedom to adjust the grabbing order, does not significantly slow down the computation, and is widely applicable; this is one of the key points of the invention.
Fig. 2 shows a flow diagram of a method of processing image data to determine a grabbing order according to an embodiment of the invention. As shown in fig. 2, the method includes:
step S200, acquiring a mask of the grippable region of at least one object to be grabbed;
step S210, for each object to be grabbed, acquiring a feature value of at least one feature of the mask of its grippable region;
step S220, normalizing each of the acquired feature values to obtain at least one normalized feature value;
step S230, calculating a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values.
With respect to step S200:
one possible embodiment of determining the grabber area and generating the mask may be to first, after acquiring image data comprising one or more objects to be grabbed, process the image data to identify each pixel in the image, e.g. for a 256 x 256 image 256 x 65536 pixels should be identified; and classifying all the pixel points included in the whole image based on the characteristics of each pixel point, wherein the characteristics of the pixel points mainly refer to RGB values of the pixel points, and in an actual application scene, RGB color images can be processed into gray images for conveniently classifying the characteristics, and the gray images can be classified by using the gray values. For classification of the pixel points, it may be predetermined which class the pixel points need to be classified into, for example, a large stack of beverage cans, food boxes and frames is included in the RGB image obtained by photographing, so if the purpose is to generate a mask in which the beverage cans, food boxes and frames are to be generated, the predetermined classification may be beverage cans, food boxes and frames. The three different classifications can be provided with a label, wherein the label can be a number, for example, a beverage can is 1, a food box is 2, a material frame is 3, or the label can be a color, for example, a beverage can is red, a food box is blue, and a material frame is green, so that after the classification and the processing are carried out, the beverage can is marked with 1 or red, the food box is marked with 2 or blue, and the material frame is marked with 3 or green in a finally obtained image. In this embodiment, the mask of the grippable region of the object is to be generated, so that only the grippable region is classified, for example, blue, and the blue region in the image processed in this way is the mask of the grippable region of the object to be grippable; a channel of image output is then created for each class, the channel acting to extract as output all class-dependent features in the input image. For example, after we create a channel of image output for the class of grippable region, the acquired RGB color image is input into the channel, and then the image from which the features of the grippable region are extracted can be acquired from the output of the channel. Finally, the feature image of the grippable region obtained by the processing is combined with the original RGB image to generate the composite image data with the grippable region mask identified.
Masks generated in this manner are sometimes unsuitable: some masks have a size or shape that is inconvenient for subsequent processing, and in some areas a mask is generated but the clamp cannot actually perform a grab at the mask's location. An unsuitable mask can significantly affect subsequent processing, so the resulting masks must be preprocessed before the further steps. As shown in fig. 1, the preprocessing of the masks may include:
1. Dilating the mask to fill in defects such as missing or irregular parts of the mask image. For example, for each pixel on the mask, a certain number of surrounding points, e.g. 8-25 points, can be set to the same color as that pixel. This amounts to filling the area around every pixel, so any gaps in the object mask are filled in; afterwards the mask is complete, without defects, and slightly "fatter" due to the dilation, and a proper amount of dilation helps the subsequent image-processing operations.
2. Judging whether the mask's area meets a predetermined condition, and removing the mask if not. First, small mask areas are likely erroneous: because of the continuity of image data, a grippable region will normally comprise a large number of pixels with similar features, and a mask formed of a few scattered pixels may not be a real grippable region. Second, the robot's end effector, i.e. the clamp, needs a landing area of a certain size when executing a grab; if the grippable area is too small, the clamp cannot land in it at all and the object cannot be grabbed, so a too-small mask is meaningless. The predetermined condition can be set according to the clamp size and the noise level, and its value can be an absolute size, a number of pixels, or a ratio. For example, it can be set to 0.1%, meaning that when the ratio of the mask area to the whole image area is below 0.1%, the mask is considered unusable and removed from the image.
3. Judging whether the number of point-cloud points in the mask is below a preset minimum. The number of point-cloud points reflects the acquisition quality of the camera; if a grippable region contains too few points, that region was not captured accurately enough. The point cloud may be used to control the clamp when grabbing, and too few points can affect that control process. A minimum number of points per mask area can therefore be set, for example: when fewer than 10 point-cloud points are covered by a grippable region, remove its mask from the image data, or randomly add points to the region until the number reaches 10.
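A minimal sketch of these three preprocessing steps, using OpenCV and the example thresholds from the text (0.1% of the image area, at least 10 points); the 5x5 kernel and the exact thresholds are assumptions, not values fixed by the patent:

```python
import cv2
import numpy as np

def preprocess_mask(mask: np.ndarray, image_area: int, num_points_in_mask: int,
                    min_area_ratio: float = 0.001, min_points: int = 10):
    """Apply the three steps: dilate, area check, point-cloud count check.
    Returns the cleaned 0/1 mask, or None if the mask should be discarded."""
    # 1. Dilation: fill small gaps; a 3x3..5x5 kernel sets roughly 8-25
    #    neighbours of each mask pixel to the same value
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask.astype(np.uint8), kernel, iterations=1)

    # 2. Discard masks whose area ratio is below the threshold (e.g. 0.1%)
    if mask.sum() / float(image_area) < min_area_ratio:
        return None

    # 3. Discard masks covering too few point-cloud points
    if num_points_in_mask < min_points:
        return None
    return mask
```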
After the masks of the grippable regions are acquired, the grabbing-related features of each mask must be acquired in step S210. In the course of their research, the inventors found that the following features of a mask most affect grabbing: mask height, clamp size, number of point clouds in the mask, mask diagonality, mask stacking degree, mask size and pose orientation. One or more of these features can be combined as required by the actual application to determine the grabbing order. Among these features, mask height, mask size, mask stacking degree and pose orientation influence grabbing the most. As a preferred embodiment, all of the above features may be considered together to determine the grabbing order. The meaning of each feature, its effect on grabbing and how to obtain it are described below:
Mask height
The mask height is the height of the mask of an object's grippable region, i.e. its Z coordinate value. It reflects the height of the object's grabbing surface. Since many objects to be grabbed are stacked together, objects on the upper layer should be grabbed first: this prevents the pile from scattering because a lower object is pulled out from under the upper ones, and prevents upper objects from being knocked down and interfering with grabbing the lower ones; an upper object is clearly better to grab than a lower one. The mask height can be obtained from a depth map or from the point cloud at the mask's position. In one embodiment, a point cloud containing one or more objects to be grabbed is acquired first; the point cloud is a data set of points in a preset coordinate system, and to make the height values easy to compute, the camera can shoot from directly above the objects. The points inside the mask area are then obtained based on the mask area, and the pose key point of the grippable region represented by the mask and the depth value of that key point are calculated; the three-dimensional pose information describes the pose of the object to be grabbed in the three-dimensional world. The pose key point is a pose point that reflects the three-dimensional position features of the grippable region. The calculation can proceed as follows:
First, the three-dimensional position coordinates of each data point of the mask area are obtained, and the position of the pose key point of the corresponding grippable region is determined from the result of a preset operation on those coordinates. For example, suppose the point cloud of the mask region contains 100 data points: obtain the three-dimensional coordinates of the 100 points, compute their average, and take the data point corresponding to the average as the pose key point of the grippable region for that mask. Besides averaging, the preset operation can also be a center-of-gravity, maximum or minimum calculation, etc.; the invention is not limited in this respect. Then find the direction of smallest variation and the direction of largest variation among the 100 data points. The direction of smallest variation is taken as the Z axis (i.e. the depth direction, consistent with the camera's shooting direction), the direction of largest variation as the X axis, and the Y axis is determined by the right-hand coordinate system; this yields the three-dimensional orientation information of the pose key point and reflects its direction features in three-dimensional space.
Finally, the pose key point of the grippable region corresponding to each mask area and its depth value are calculated. The depth value of the pose key point is the coordinate of the grippable region on a depth coordinate axis, where the depth axis is set according to the camera's shooting direction, the direction of gravity, or the normal of the plane in which the grippable region lies. The depth value thus reflects the position of the grippable region along the depth axis. In a specific implementation, the origin and direction of the depth axis can be set flexibly by those skilled in the art; the invention does not limit how the origin is set. For example, when the depth axis follows the camera's shooting direction, its origin can be the camera position and its direction from the camera towards the objects; the depth value of each grippable region's mask is then the opposite of that region's distance to the camera (the farther from the camera, the lower the depth value), and this depth value is taken as the mask height feature value.
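A sketch of the mask height feature under the camera-direction convention described above: the pose key point is taken as the mean of the mask's points (one of the preset operations mentioned), and its depth value is the opposite of the distance to the camera:

```python
import numpy as np

def mask_height_feature(mask_points: np.ndarray) -> float:
    """mask_points: (N, 3) point-cloud coordinates inside one mask, in a camera
    frame whose Z axis points from the camera towards the objects."""
    key_point = mask_points.mean(axis=0)  # pose key point via the averaging operation
    # Farther from the camera means a larger Z, hence a lower mask height value
    return -float(key_point[2])
```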
Clamp size
The clamp size is the size of the clamp configured for a given object to be grabbed. Since the grippable region lies on the object's surface, and grabbing the object essentially means controlling the clamp to execute the grab inside the grippable region, the clamp size can also be counted as a feature of the grippable region's mask. Its influence on grabbing lies mainly in whether the clamp may accidentally bump into articles other than its target. For example, when many stacked objects are present, a large suction cup is more likely than a small one to collide with other objects during the grab, causing the cup to shake or the objects to shift, which may make the grab fail. In an actual industrial scene, which clamps each system uses can be decided in advance, i.e. the clamp size can be determined before the actual grab; the clamp size in this embodiment can therefore be obtained from the configured clamp and a pre-established, stored mapping between clamps and their sizes.
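Since the clamp is configured before grabbing, the clamp size feature reduces to a lookup in the pre-stored mapping; the clamp names and sizes below are illustrative assumptions only:

```python
# Hypothetical clamp-to-size mapping established and stored in advance (values in mm)
CLAMP_SIZE_MM = {
    "small_suction_cup": 30.0,
    "large_suction_cup": 80.0,
    "two_finger_clamp": 60.0,
}

def clamp_size_feature(configured_clamp: str) -> float:
    """Return the size of the clamp configured for the object to be grabbed."""
    return CLAMP_SIZE_MM[configured_clamp]
```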
Number of point clouds in mask
The number of point clouds in the mask is the number of point-cloud points covered by the mask of a given object's grippable region. It reflects the camera's acquisition quality: too few points in a grippable region, possibly due to reflections or occlusion, indicate that the region was captured inaccurately, which can affect the control of the clamp. Objects whose masks contain more point-cloud points can therefore be given higher grabbing priority and be grabbed first. The number is obtained by counting the point-cloud points covered by the mask of the grippable region.
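A sketch of the count, under the assumption of an organized point cloud aligned pixel-for-pixel with the image (a common output format of 3D cameras; not mandated by the patent), where invalid points are NaN:

```python
import numpy as np

def point_count_feature(organized_cloud: np.ndarray, mask: np.ndarray) -> int:
    """Count valid point-cloud points covered by the mask.
    organized_cloud: (H, W, 3) cloud aligned with the image; mask: (H, W) binary."""
    pts = organized_cloud[mask.astype(bool)]          # points under the mask
    return int(np.isfinite(pts).all(axis=-1).sum())   # rows with x, y, z all valid
```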
Mask diagonality
As shown in fig. 3, the mask diagonality describes the inclination of the mask's diagonal. An object whose mask has a high diagonality is "fat" and relatively easy to grab, while one with a low diagonality is relatively thin and relatively hard to grab. As shown in fig. 3, to calculate the diagonality, first compute the minimum circumscribed rectangle of the mask; the rectangle's corner points are the mask's corner points. The angle x° between the diagonal connecting two mutually opposite corner points and a side of the circumscribed rectangle (e.g. the side parallel to X in fig. 3) reflects the diagonality; as a preferred implementation, the mask diagonality can be set to |45° - x°|.
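A sketch of the |45° - x°| computation using OpenCV's minimum-area rectangle; taking x as the angle between the rectangle's diagonal and its side of length w is an assumption consistent with fig. 3:

```python
import cv2
import numpy as np

def mask_diagonality(mask: np.ndarray) -> float:
    """Diagonality |45 - x| of a binary mask, where x is the angle (degrees) between
    the diagonal of the mask's minimum circumscribed rectangle and one of its sides."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    points = np.vstack([c.reshape(-1, 2) for c in contours])
    (_cx, _cy), (w, h), _angle = cv2.minAreaRect(points)
    x_deg = np.degrees(np.arctan2(h, w))   # diagonal vs. the side of length w
    return abs(45.0 - x_deg)
```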
Mask stacking degree
The mask stacking degree is the degree to which the mask of an object's grippable region is pressed on by other articles. Whereas typical overlap detection only determines whether an article is covered, the stacking degree in this embodiment requires calculating a concrete value, the "stacking degree value". This value can be used to rank all the objects to be grabbed; objects with a low stacking degree value get a high grabbing priority.
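The patent does not spell out the stacking-degree formula. One plausible sketch, purely an assumption, scores each mask by the fraction of its pixels overlapped by the masks of items lying above it:

```python
import numpy as np

def stacking_degree(mask: np.ndarray, masks_of_higher_items: list) -> float:
    """Fraction of this mask's pixels covered by masks of items with greater height.
    0.0 means nothing presses on the region; 1.0 means it is fully covered."""
    mask = mask.astype(bool)
    if not masks_of_higher_items or mask.sum() == 0:
        return 0.0
    covered = np.zeros_like(mask)
    for other in masks_of_higher_items:
        covered |= other.astype(bool)
    return float(np.logical_and(mask, covered).sum()) / float(mask.sum())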
Mask size
The mask size of the grippable region can be the mask's area. A large mask area means the object's grippable region is large, so the clamp can grab it more easily; conversely, a small grippable area makes grabbing harder.
Pose orientation
Many objects to be grabbed are piled in the material frame; each has its own unique pose, and the poses change after every grab. The pose of an object, and in particular of its grippable region, determines where the clamp should go and in what posture it should execute the grab. Existing grabbing methods do not give special consideration to object orientation, so the grabbing order is not determined based on it; however, the orientation of an object (or of its grippable region) does affect the grab. As shown in fig. 4, if an object's grippable region faces the frame opening, the object is clearly easier to grab; if the grippable region is inclined towards the frame wall, grabbing is relatively difficult, and when the object lies near the edge of the frame the influence of orientation on grabbing difficulty is especially obvious. The object's position and orientation can be computed by imaging methods or by feeding image data into a neural network, and the object's pose feature value is computed from how difficult it is to grab the object at that position and orientation; understandably, the harder the object is to grab, the lower its pose feature value.
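The patent does not fix a pose-orientation score. As a hedged illustration, the region normal (e.g. the direction of smallest variation found earlier) can be compared against the direction of the frame opening, so that regions facing the opening score highest:

```python
import numpy as np

def pose_orientation_feature(region_normal: np.ndarray, opening_dir: np.ndarray) -> float:
    """Score in [0, 1]: 1 when the grippable region faces the frame opening,
    lower as it tilts towards the frame wall. Purely an illustrative assumption."""
    n = region_normal / np.linalg.norm(region_normal)
    d = opening_dir / np.linalg.norm(opening_dir)
    return float(max(0.0, np.dot(n, d)))   # cosine of the angle, clipped at 0
```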
For step S220, the feature values obtained as above may have different dimensions, for example: the mask height may be a length value, e.g. -182 mm; the number of point clouds in the mask is a count of points, e.g. 100; the mask diagonality is an angle value, e.g. 45°. Values of different dimensions cannot simply be combined in one calculation, so the values of each feature must be normalized. Normalization maps the different dimensions into a uniform interval; for example, the feature values of each feature can be normalized into the interval [0, 10]. In a specific embodiment, if the mask height value of one object to be grabbed is -100 mm and that of another is -120 mm, the -100 mm can be normalized to 8 and the -120 mm to 6, giving normalized mask height values of 8 and 6 respectively; likewise, if the mask diagonality of one object is 30° and that of another is 15°, the 30° can be normalized to 6 and the 15° to 3.
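The text does not fix a particular normalization scheme (the exact mapping behind the example values above is not specified); one common choice is a simple min-max scaling of each feature across all items into [0, 10], sketched here:

```python
import numpy as np

def normalize_feature(values: np.ndarray, low: float = 0.0, high: float = 10.0) -> np.ndarray:
    """Min-max scale one feature's values across all items into [low, high]."""
    v_min, v_max = float(values.min()), float(values.max())
    if v_max == v_min:                  # all items identical for this feature
        return np.full(values.shape, (low + high) / 2.0)
    return low + (values - v_min) * (high - low) / (v_max - v_min)

heights = np.array([-100.0, -120.0])    # mask height values in mm
print(normalize_feature(heights))       # [10. 0.] under this particular scheme
```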
For step S230, after the normalized feature values are obtained, a weight can be preset for each feature, and the priority value P of each object to be grabbed is calculated from the feature values and the corresponding weights. The priority value can be calculated according to the formula P = \sum_{i=1}^{n} \omega_i X_i, where P is the priority value of a given object to be grabbed, n is the number of features, \omega_i is the weight of the i-th feature, and X_i is the feature value of the i-th feature. For example, suppose a grabbing task must grab two objects, using mask height, clamp size, number of point clouds in the mask, mask diagonality, mask stacking degree, mask size and pose orientation as features. Before determining the grabbing order, weights are preset for the features, e.g. mask height 3, clamp size 1, number of point clouds in the mask 2, mask diagonality 0, mask stacking degree 1, mask size 2 and pose orientation 3. Next, the normalized feature values of the first object to be grabbed are obtained, for example: mask height 5, clamp size 6, number of point clouds 4, mask diagonality 9, mask stacking degree 6, mask size 3 and pose orientation 2; the priority value of the first object is then P_1 = 3×5 + 1×6 + 2×4 + 0×9 + 1×6 + 2×3 + 3×2 = 47. Then the normalized feature values of the second object are obtained, for example: mask height 3, clamp size 5, number of point clouds 2, mask diagonality 2, mask stacking degree 5, mask size 6 and pose orientation 5; its priority value is P_2 = 3×3 + 1×5 + 2×2 + 0×2 + 1×5 + 2×6 + 3×5 = 50. Since P_2 > P_1, i.e. the grabbing priority value of the second object is higher than that of the first, the clamp grabs the second object first when the grabbing task is executed, and grabs the first object afterwards.
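The whole ranking then reduces to a weighted sum per item followed by a sort. This sketch reproduces the worked example above (weights 3, 1, 2, 0, 1, 2, 3 and the two feature vectors), yielding P_1 = 47 and P_2 = 50:

```python
import numpy as np

# Feature order: mask height, clamp size, point-cloud count, diagonality,
# stacking degree, mask size, pose orientation (weights from the example above)
WEIGHTS = np.array([3, 1, 2, 0, 1, 2, 3], dtype=float)

def priority(normalized_features: np.ndarray) -> float:
    """P = sum_i w_i * X_i for one object's normalized feature values."""
    return float(np.dot(WEIGHTS, normalized_features))

item1 = np.array([5, 6, 4, 9, 6, 3, 2], dtype=float)
item2 = np.array([3, 5, 2, 2, 5, 6, 5], dtype=float)
p1, p2 = priority(item1), priority(item2)   # 47.0 and 50.0
order = sorted([("item1", p1), ("item2", p2)], key=lambda kv: kv[1], reverse=True)
print(order)   # [('item2', 50.0), ('item1', 47.0)] -> grab item2 first
```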
In addition, it should be noted that although each embodiment of the present invention has a specific combination of features, further combinations and cross combinations of these features between embodiments are also possible.
Fig. 5 shows a grabbing control apparatus according to a further embodiment of the invention; the apparatus comprises:
a mask acquisition module 600, configured to acquire a mask of the grippable region of at least one object to be grabbed, i.e. to implement step S200;
a feature value acquisition module 610, configured to acquire, for each object to be grabbed, a feature value of at least one feature of the mask of its grippable region, i.e. to implement step S210;
a feature value normalization module 620, configured to normalize each of the acquired feature values to obtain at least one normalized feature value, i.e. to implement step S220;
a priority value calculation module 630, configured to calculate a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values, i.e. to implement step S230.
It should be understood that in the apparatus embodiment shown in fig. 5, only the main functions of the modules are described; the full function of each module corresponds to the respective steps of the method embodiment, and the working principle of each module can be found in the description of the corresponding steps. In addition, although the above embodiments define a correspondence between the functions of the functional modules and the method, those skilled in the art will understand that the functions are not limited to that correspondence; a specific functional module can also implement other method steps or parts of them. The present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the method of any of the above embodiments. The computer program stored in this computer-readable storage medium can be executed by the processor of an electronic device, and the storage medium can be built into the electronic device or pluggable into it, so that the computer-readable storage medium of the embodiments of the application has greater flexibility and reliability.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device may be a control system/electronic system configured in an automobile, a mobile terminal (e.g. a smartphone), a personal computer (PC, e.g. a desktop or notebook computer), a tablet computer, a server, etc.; the specific embodiment of the invention does not limit the concrete implementation of the electronic device.
As shown in fig. 6, the electronic device may include: a processor 1202, a communication interface (Communications Interface) 1204, a memory 1206, and a communication bus 1208.
Wherein:
the processor 1202, the communication interface 1204, and the memory 1206 communicate with each other via a communication bus 1208.
A communication interface 1204 for communicating with network elements of other devices, such as clients or other servers, etc.
The processor 1202 is configured to execute the program 1210, and may specifically perform relevant steps in the method embodiments described above.
In particular, program 1210 may include program code including computer operating instructions.
The processor 1202 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The one or more processors included in the electronic device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
Memory 1206 for storing program 1210. The memory 1206 may comprise high-speed RAM memory or may further comprise non-volatile memory (non-volatile memory), such as at least one disk memory.
Program 1210 may be downloaded and installed from a network and/or from a removable medium via the communication interface 1204. When executed by the processor 1202, the program causes the processor 1202 to perform the operations of the method embodiments described above.
In the description of the present specification, reference to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flowcharts or otherwise described herein may be understood as representing modules, segments or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as will be understood by those skilled in the art of the embodiments of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g. an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus or device, such as a computer-based system, a system including a processing module, or another system that can fetch the instructions from the instruction execution system, apparatus or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be captured electronically, for instance by optically scanning the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
The processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It is to be understood that portions of the embodiments of the present application may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, the steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of the following techniques known in the art, or a combination thereof: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
Furthermore, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
Although the embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application; those of ordinary skill in the art may make variations, modifications, substitutions and alterations to the above embodiments within the scope of the application.

Claims (12)

1. A method of determining a clamp grabbing order, comprising:
acquiring a mask of the grippable region of at least one object to be grabbed;
for each object to be grabbed, acquiring a feature value of at least one feature of the mask of its grippable region;
normalizing each of the acquired feature values to obtain at least one normalized feature value;
calculating a grabbing priority value for each object to be grabbed based on its at least one normalized feature value and preset weights, so that when the objects are grabbed, the grabbing order can be controlled according to the priority values;
wherein the features of the mask of the grippable region include: mask height, clamp size, number of point clouds in the mask, mask diagonality, mask stacking degree, mask size and pose orientation.
2. The method of determining a clamp grabbing order according to claim 1, wherein the mask height feature value of the mask of the grippable region is calculated based on the depth value of the grippable region.
3. The method of determining a clip grabbing order as claimed in claim 1, wherein the clip size is determined based on a mapping relationship between a preset clip and a clip size.
4. The method of determining a clip grabbing order as claimed in claim 1, wherein the diagonal of the mask is determined based on the angle between the diagonal of the circumscribed rectangle of the mask and one side of the circumscribed rectangle.
5. The method of determining a clamp grabbing sequence according to any one of claims 1 to 4, wherein the priority value is calculated according to the following formula:
$P = \sum_{i=1}^{n} w_i X_i$
wherein P is the priority value of the article to be grabbed, n is the number of features, $w_i$ is the weight of the i-th feature, and $X_i$ is the feature value of the i-th feature.
6. An apparatus for determining a clamp grabbing sequence, comprising:
a mask acquisition module, configured to acquire a mask of a grabbable region of at least one article to be grabbed;
a feature value acquisition module, configured to acquire, for each article to be grabbed of the at least one article to be grabbed, a feature value of at least one feature of the mask of the grabbable region of the article to be grabbed;
a feature value normalization module, configured to perform normalization processing on each of the acquired feature values of the at least one feature to obtain at least one normalized feature value;
a priority value calculation module, configured to calculate a grabbing priority value of each article to be grabbed based on the at least one normalized feature value and a preset weight value of the article to be grabbed, so that when the at least one article to be grabbed is grabbed, the grabbing sequence can be controlled according to the grabbing priority values;
wherein the features of the mask of the grabbable region comprise: mask height, clamp size, number of point clouds in the mask, mask diagonal degree, mask stacking degree, mask size, and pose direction.
7. The apparatus for determining a clamp grabbing sequence according to claim 6, wherein the mask height feature value of the mask of the grabbable region is calculated based on depth values of the grabbable region.
8. The apparatus for determining a clamp grabbing sequence according to claim 6, wherein the clamp size is determined based on a preset mapping relationship between clamps and clamp sizes.
9. The apparatus for determining a clamp grabbing sequence according to claim 6, wherein the mask diagonal degree is determined based on the angle between a diagonal of the circumscribed rectangle of the mask and one side of the circumscribed rectangle.
10. The apparatus for determining a clamp grabbing sequence according to any one of claims 6 to 9, wherein the priority value calculation module calculates the priority value according to the following formula:
$P = \sum_{i=1}^{n} w_i X_i$
wherein P is the priority value of the article to be grabbed, n is the number of features, $w_i$ is the weight of the i-th feature, and $X_i$ is the feature value of the i-th feature.
11. An electronic device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of determining a clamp grabbing sequence according to any one of claims 1 to 5 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of determining a clamp grabbing sequence according to any one of claims 1 to 5.
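
A minimal sketch of the scoring pipeline of claims 1 to 5, in Python. The helper names (diagonal_degree, features_for, priority_values), the min-max normalization, the depth mean as the mask height feature, and OpenCV's minimum-area rectangle standing in for the circumscribed rectangle of claim 4 are all assumptions for illustration, not the patented implementation; only four of the seven claimed features are computed, with mask stacking degree, mask size, and pose direction omitted for brevity.

    import numpy as np
    import cv2  # minimum-area circumscribed rectangle for the claim-4 feature

    def diagonal_degree(mask):
        # Angle (degrees) between a diagonal of the mask's circumscribed
        # rectangle and the rectangle's longer side (claim 4).
        points = cv2.findNonZero(mask.astype(np.uint8))
        (_, _), (w, h), _ = cv2.minAreaRect(points)
        if min(w, h) == 0:
            return 0.0
        return float(np.degrees(np.arctan2(min(w, h), max(w, h))))

    def features_for(mask, depth, clamp_size):
        # Raw feature values X_i for one grabbable region (claims 1 to 4).
        return np.array([
            float(depth[mask > 0].mean()),  # mask height from depth values (claim 2)
            float(clamp_size),              # preset clamp-to-size mapping (claim 3)
            float(np.count_nonzero(mask)),  # number of points in the mask
            diagonal_degree(mask),          # mask diagonal degree (claim 4)
        ])

    def priority_values(masks, depth, clamp_sizes, weights):
        # Min-max normalize each feature across all articles, then apply
        # the weighted sum P = sum_i w_i * X_i of claim 5.
        raw = np.array([features_for(m, depth, s)
                        for m, s in zip(masks, clamp_sizes)])
        lo, hi = raw.min(axis=0), raw.max(axis=0)
        span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant features
        normalized = (raw - lo) / span
        return normalized @ np.asarray(weights, dtype=float)

Sorting the returned values in descending order, for example order = np.argsort(-priority_values(masks, depth, sizes, [0.4, 0.1, 0.2, 0.3])) with illustrative weights, then yields a grabbing sequence in which the highest-priority article is grabbed first.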
CN202111429144.5A 2021-11-28 2021-11-28 Method, device, electronic equipment and storage medium for determining clamp grabbing sequence Active CN116175542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111429144.5A CN116175542B (en) 2021-11-28 2021-11-28 Method, device, electronic equipment and storage medium for determining clamp grabbing sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111429144.5A CN116175542B (en) 2021-11-28 2021-11-28 Method, device, electronic equipment and storage medium for determining clamp grabbing sequence

Publications (2)

Publication Number Publication Date
CN116175542A (en) 2023-05-30
CN116175542B (en) 2024-01-26

Family

ID=86440834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111429144.5A Active CN116175542B (en) 2021-11-28 2021-11-28 Method, device, electronic equipment and storage medium for determining clamp grabbing sequence

Country Status (1)

Country Link
CN (1) CN116175542B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117067219B (en) * 2023-10-13 2023-12-15 广州朗晴电动车有限公司 Sheet metal mechanical arm control method and system for trolley body molding

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107370764A (en) * 2017-09-05 2017-11-21 北京奇艺世纪科技有限公司 A kind of audio/video communication system and audio/video communication method
CN111344118A (en) * 2017-11-17 2020-06-26 奥卡多创新有限公司 Control device and method for a robot system for positioning items and calculating an appropriate gripping point for each item
DE102020128653A1 (en) * 2019-11-13 2021-05-20 Nvidia Corporation Determination of reaching for an object in disarray
CN111144322A (en) * 2019-12-28 2020-05-12 广东拓斯达科技股份有限公司 Sorting method, device, equipment and storage medium
CN112802105A (en) * 2021-02-05 2021-05-14 梅卡曼德(北京)机器人科技有限公司 Object grabbing method and device
CN112883881A (en) * 2021-02-25 2021-06-01 中国农业大学 Disordered sorting method and device for strip-shaped agricultural products
CN113592855A (en) * 2021-08-19 2021-11-02 山东大学 Heuristic deep reinforcement learning-based autonomous grabbing and boxing method and system
CN113420746A (en) * 2021-08-25 2021-09-21 中国科学院自动化研究所 Robot visual sorting method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Three-dimensional object detection and grasping based on RGB-D images; He Ruotao; Chen Longxin; Liao Yajun; Chen Shili; Mechanical Engineering & Automation (Issue 05); full text *

Also Published As

Publication number Publication date
CN116175542A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
JP6813229B1 (en) Robot system equipped with automatic object detection mechanism and its operation method
US10124489B2 (en) Locating, separating, and picking boxes with a sensor-guided robot
CN110580725A (en) Box sorting method and system based on RGB-D camera
JP5778311B1 (en) Picking apparatus and picking method
JP5558585B2 (en) Work picking device
CA2625163A1 (en) A method and an arrangement for locating and picking up objects from a carrier
JP2000288974A (en) Robot device having image processing function
WO2021039775A1 (en) Image processing device, image capturing device, robot, and robot system
CN113689509A (en) Binocular vision-based disordered grabbing method and system and storage medium
CN116175542B (en) Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
CN113538459A (en) Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection
JP7398662B2 (en) Robot multi-sided gripper assembly and its operating method
CN114092428A (en) Image data processing method, image data processing device, electronic equipment and storage medium
CN116529760A (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
JP2018146347A (en) Image processing device, image processing method, and computer program
CN116197887B (en) Image data processing method, device, electronic equipment and storage medium for generating grabbing auxiliary image
CN116197885B (en) Image data filtering method, device, equipment and medium based on press-fit detection
CN116197888B (en) Method and device for determining position of article, electronic equipment and storage medium
CN116175541B (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
CN116175540B (en) Grabbing control method, device, equipment and medium based on position and orientation
CN116188559A (en) Image data processing method, device, electronic equipment and storage medium
CN116197886A (en) Image data processing method, device, electronic equipment and storage medium
CN114022342A (en) Acquisition method and device for acquisition point information, electronic equipment and storage medium
CN116214494A (en) Grabbing control method, grabbing control device, electronic equipment and storage medium
CN116205837A (en) Image data processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant