CN117021122A - Grabbing robot control method and system - Google Patents


Info

Publication number
CN117021122A
Authority
CN
China
Prior art keywords
grabbing
probability
information
target
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311294693.5A
Other languages
Chinese (zh)
Other versions
CN117021122B (en)
Inventor
白国超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhixing Robot Technology Suzhou Co ltd
Original Assignee
Zhixing Robot Technology Suzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhixing Robot Technology Suzhou Co ltd
Priority to CN202311294693.5A
Publication of CN117021122A
Application granted
Publication of CN117021122B
Legal status: Active
Anticipated expiration legal status

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J 9/1661: characterised by task planning, object-oriented languages
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/161: Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1628: Programme controls characterised by the control loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

The invention provides a control method and system for a grabbing robot, relating to the technical field of intelligent control. The method constructs a three-dimensional model of the target grabbing object, acquires a first grabbing action characteristic of the grabbing robot, and evaluates the grabbing probability. If the grabbing probability is less than or equal to a grabbing probability threshold, the object positioning information is adjusted and the grabbing action characteristic is optimized, after which the grabbing probability is evaluated again; grabbing control is executed once the probability exceeds the threshold. This solves the technical problems of the prior art, in which grabbing actions must be designed manually according to the object state, the degree of intelligence is low, adaptive flexible control of the target is impossible, and the low grabbing success rate reduces operating efficiency. By performing three-dimensional modeling and spatial positioning of the target grabbing object, intelligently evaluating the grabbing probability in combination with the grabbing characteristics, and carrying out position adjustment and grab optimization before control, the method maximizes the guaranteed probability of a successful grab.

Description

Grabbing robot control method and system
Technical Field
The invention relates to the technical field of intelligent control, in particular to a control method and system for a grabbing robot.
Background
With the development of artificial intelligence, robots are gradually replacing manual labor in grabbing operations; to ensure operating efficiency and accuracy, the robot must adapt its control to the actual grabbing scene. In the traditional grabbing process, grabbing actions must be designed manually according to the object state. The degree of intelligence is low and adaptive flexible control of the target is impossible, so the grabbing success rate is low and operating efficiency suffers.
Disclosure of Invention
The application provides a grabbing robot control method and system to solve the technical problems of the traditional grabbing process in the prior art: grabbing actions must be designed manually according to the object state, the degree of intelligence is low, adaptive flexible control of the target is impossible, and the low grabbing success rate reduces operating efficiency.
In view of the above problems, the present application provides a method and a system for controlling a gripping robot.
In a first aspect, the present application provides a method for controlling a gripping robot, the method comprising:
performing multidimensional image acquisition on a target grabbing object to construct a target grabbing object three-dimensional model, wherein the target grabbing object three-dimensional model has first object positioning information;
acquiring a first grabbing action characteristic of the grabbing robot, wherein the first grabbing action characteristic comprises grabbing path information and pre-grabbing joint position information;
according to the first object positioning information, performing grabbing probability evaluation on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information to obtain a first grabbing probability;
when the first grabbing probability is less than or equal to a grabbing probability threshold, obtaining a first grabbing obstacle region;
adjusting the first object positioning information according to the first grabbing obstacle region to obtain second object positioning information, wherein the second object positioning information no longer contains the first grabbing obstacle region;
performing positioning adjustment on the target grabbing object according to the second object positioning information, and optimizing the first grabbing action characteristic based on the second object positioning information to obtain a second grabbing action characteristic;
performing grabbing probability evaluation on the target grabbing object three-dimensional model according to the second grabbing action characteristic and the second object positioning information to obtain a second grabbing probability;
and when the second grabbing probability is greater than the grabbing probability threshold, performing grabbing control based on the second grabbing action characteristic.
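The steps above form a threshold-gated evaluate, adjust, re-evaluate loop. The following is a minimal self-contained sketch of that flow; the toy probability model, the 0.8 threshold, and all names are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

THRESHOLD = 0.8  # assumed grabbing probability threshold

@dataclass
class GrabState:
    positioning: tuple      # object positioning information
    action_quality: float   # stand-in for the grabbing action characteristic
    obstructed: bool        # whether a grabbing obstacle region exists

def evaluate_probability(state: GrabState) -> float:
    # Toy evaluation: an obstacle region caps the probability below threshold.
    return 0.3 if state.obstructed else min(1.0, state.action_quality)

def control_cycle(state: GrabState) -> tuple:
    """Evaluate; if the grab looks unlikely, re-position and re-optimize,
    then evaluate again before committing to grab control."""
    p = evaluate_probability(state)
    if p <= THRESHOLD:
        # adjust positioning so the obstacle region no longer interferes
        state.obstructed = False
        # optimize the action -> second grabbing action characteristic
        state.action_quality = min(1.0, state.action_quality + 0.2)
        p = evaluate_probability(state)
    return (p > THRESHOLD, p)  # (whether to execute the grab, probability)
```

An obstructed grab with a mediocre action is first scored below threshold, then repositioned and re-optimized before the second evaluation, mirroring the two-pass structure of the claims.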
In a second aspect, the present application provides a grasping robot control system, the system comprising:
the model building module is used to perform multidimensional image acquisition on the target grabbing object and construct a target grabbing object three-dimensional model, wherein the target grabbing object three-dimensional model has first object positioning information;
the feature acquisition module is used to acquire a first grabbing action characteristic of the grabbing robot, wherein the first grabbing action characteristic comprises grabbing path information and pre-grabbing joint position information;
the first grabbing probability acquisition module is used to perform grabbing probability evaluation on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information according to the first object positioning information, to obtain a first grabbing probability;
the obstacle region acquisition module is used to obtain a first grabbing obstacle region when the first grabbing probability is less than or equal to a grabbing probability threshold;
the positioning information acquisition module is used to adjust the first object positioning information according to the first grabbing obstacle region to obtain second object positioning information, wherein the second object positioning information no longer contains the first grabbing obstacle region;
the feature optimizing module is used to perform positioning adjustment on the target grabbing object according to the second object positioning information, and to optimize the first grabbing action characteristic based on the second object positioning information to obtain a second grabbing action characteristic;
the second grabbing probability acquisition module is used to perform grabbing probability evaluation on the target grabbing object three-dimensional model according to the second grabbing action characteristic and the second object positioning information, to obtain a second grabbing probability;
and the grabbing control module is used to perform grabbing control based on the second grabbing action characteristic when the second grabbing probability is greater than the grabbing probability threshold.
One or more technical schemes provided by the application have at least the following technical effects or advantages:
The grabbing robot control method provided by the embodiment of the application performs multidimensional image acquisition on the target grabbing object and constructs a target grabbing object three-dimensional model carrying first object positioning information. It acquires the first grabbing action characteristic of the grabbing robot, comprising grabbing path information and pre-grabbing joint position information, and evaluates the grabbing probability of the target grabbing object three-dimensional model to obtain a first grabbing probability. When this probability is less than or equal to the grabbing probability threshold, a first grabbing obstacle region is obtained, the first object positioning information is adjusted to obtain second object positioning information, and the target grabbing object is repositioned while the first grabbing action characteristic is optimized into a second grabbing action characteristic. The grabbing probability of the three-dimensional model is then re-evaluated in combination with the second object positioning information to obtain a second grabbing probability, and when the second grabbing probability exceeds the threshold, grabbing control is executed based on the second grabbing action characteristic. This solves the technical problems of the prior art, in which grabbing actions must be designed manually according to the object state, the degree of intelligence is low, adaptive flexible control of the target is impossible, and the low grabbing success rate reduces operating efficiency. Through three-dimensional modeling and spatial positioning of the target grabbing object, intelligent evaluation of the grabbing probability in combination with the grabbing characteristics, and position adjustment with grab optimization before control, the method maximizes the guaranteed probability of a successful grab.
Drawings
FIG. 1 is a schematic flow chart of a control method of a grabbing robot;
FIG. 2 is a schematic diagram of a three-dimensional model construction process of a target object to be grabbed in a control method of a grabbing robot;
FIG. 3 is a schematic diagram of a first grabbing probability obtaining flow in a grabbing robot control method according to the present application;
FIG. 4 is a schematic structural diagram of a control system of a grabbing robot.
Reference numerals illustrate: the system comprises a model building module 11, a feature acquisition module 12, a first grabbing probability acquisition module 13, an obstacle region acquisition module 14, a positioning information acquisition module 15, a feature optimizing module 16, a second grabbing probability acquisition module 17 and a grabbing control module 18.
Detailed Description
The grabbing robot control method and system provided by the application construct a three-dimensional model of the target grabbing object, acquire the first grabbing action characteristic of the grabbing robot, and evaluate the grabbing probability. If the probability is less than or equal to the grabbing probability threshold, the object positioning information is adjusted and the grabbing action characteristic is optimized, after which the grabbing probability is evaluated again; grabbing control is executed once the probability exceeds the threshold. This solves the technical problems that, in the traditional grabbing process of the prior art, grabbing actions must be designed manually according to the object state, the degree of intelligence is low, adaptive flexible control of the target is impossible, and the low grabbing success rate reduces operating efficiency.
Embodiment one:
as shown in fig. 1, the present application provides a control method of a gripping robot, the method comprising:
step S100: performing multidimensional image acquisition on a target grabber to construct a target grabber three-dimensional model, wherein the target grabber three-dimensional model is provided with first object positioning information;
further, as shown in fig. 2, regarding performing multidimensional image acquisition on the target grabbing object to construct a target grabbing object three-dimensional model with first object positioning information, step S100 of the present application further includes:
step S110: based on a size sensor, a preset positioning point is built at the center position of the target grabbing object, and a main view direction line, a side view direction line and a top view direction line are built based on the preset positioning point;
step S120: controlling a motion camera to acquire a first angle image at a preset distance from the preset positioning point along the main view direction line;
step S130: controlling the motion camera to acquire a second angle image at the preset distance from the preset positioning point along the side view direction line;
step S140: controlling the motion camera to acquire a third angle image at the preset distance from the preset positioning point along the top view direction line;
step S150: constructing the target grabbing object three-dimensional model according to the first angle image, the second angle image and the third angle image;
step S160: constructing a three-dimensional space coordinate system with the preset positioning point as origin and the main view direction line, the side view direction line and the top view direction line as coordinate axes, and obtaining the first object positioning information.
Specifically, with the development of artificial intelligence, robots are gradually replacing manual grabbing operations; however, the traditional grabbing process cannot exert adaptive flexible control over the target, so the grabbing success rate is low and operating efficiency suffers.
Specifically, the target grabbing object is the object to be grabbed by the grabbing robot. Views of different dimensions, including the three standard views, are collected for the target grabbing object by a sensing acquisition device, guaranteeing the completeness of the collected image information and yielding a multidimensional acquired image. From this image, characteristic parameters of the target grabbing object, such as size and material, are extracted, the spatial relative position of the target grabbing object is determined as the first object positioning information, and three-dimensional modeling of the target is performed with this as the data source, obtaining the target grabbing object three-dimensional model for subsequent grabbing analysis.
Specifically, based on the size sensor, the preset positioning point is established at the center of the target grabbing object and serves as the reference point that determines the view acquisition directions, from which the main view direction line, the side view direction line and the top view direction line are constructed. The preset distance, i.e. the distance between the motion camera and the acquisition position of the target grabbing object, can be set as desired. The motion camera is controlled to acquire the first angle image at the preset distance from the preset positioning point along the main view direction line.
Similarly, the motion camera is controlled to acquire the second angle image at the preset distance from the preset positioning point along the side view direction line, and the third angle image at the preset distance along the top view direction line.
Further, with the first angle image, the second angle image and the third angle image as source data, image recognition is performed to extract characteristic parameters and carry out three-dimensional modeling; for example, a physical simulation of the target grabbing object can be built by computer-aided modeling, yielding the target grabbing object three-dimensional model. Then, with the preset positioning point as origin and the main view direction line, the side view direction line and the top view direction line as coordinate axes, a three-dimensional space coordinate system is constructed, the spatial position of the target grabbing object is determined, and the first object positioning information is obtained.
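Steps S110-S160 amount to placing a camera at a fixed offset from the anchor point along each of three orthogonal view lines, and then reusing those same lines as the axes of the object coordinate system. A minimal sketch, where the 0.5 m distance and the axis assignments are assumptions:

```python
PRESET_DISTANCE = 0.5  # metres; user-configurable, per the preset distance of S120

# Orthonormal view direction lines rooted at the preset positioning point.
FRONT = (1.0, 0.0, 0.0)  # main view direction line  -> x axis
SIDE  = (0.0, 1.0, 0.0)  # side view direction line  -> y axis
TOP   = (0.0, 0.0, 1.0)  # top view direction line   -> z axis

def camera_position(anchor, direction, distance=PRESET_DISTANCE):
    """Camera pose: `distance` from the anchor point along a view line."""
    return tuple(a + distance * d for a, d in zip(anchor, direction))

def to_object_frame(point, anchor, axes=(FRONT, SIDE, TOP)):
    """Express a world-frame point in the S160 coordinate system
    (origin at the preset positioning point, view lines as axes)."""
    rel = [p - a for p, a in zip(point, anchor)]
    return tuple(sum(r * ax for r, ax in zip(rel, axis)) for axis in axes)
```

With the anchor at the object center, `camera_position` gives the three acquisition poses of S120-S140, and `to_object_frame` yields coordinates in the frame of S160, i.e. the first object positioning information.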
Step S200: acquiring a first grabbing action characteristic of the grabbing robot, wherein the first grabbing action characteristic comprises grabbing path information and grabbing front joint position information;
Specifically, when the grabbing robot performs grabbing control, the grabbing paths of all joints of the grabbing hand of the mechanical arm are determined as the grabbing path information, and the joint positioning performed before object grabbing, i.e. the joint positions of the grabbing hand of the grabbing robot, for example the relative coordinate position of each joint in the three-dimensional space coordinate system, is determined to obtain the pre-grabbing joint position information. The grabbing path information and the pre-grabbing joint position information together constitute the first grabbing action characteristic, which is the initially configured control execution characteristic of the grabbing robot and is subjected to the subsequent grabbing probability analysis.
Step S300: according to the first object positioning information, carrying out grabbing probability evaluation on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information, and obtaining a first grabbing probability;
Further, as shown in fig. 3, according to the first object positioning information, the grabbing probability evaluation is performed on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information, so as to obtain a first grabbing probability, and step S300 of the present application further includes:
step S310: acquiring object type features and object geometric features according to the target grabbing object three-dimensional model;
step S320: performing a grabbing retrieval in industrial big data based on the object type features and the object geometric features to obtain grabbing record data, wherein the grabbing record data comprise grabbing position record data;
step S330: performing frequent sequence analysis on the grabbing position record data to acquire frequent grabbing position combinations;
step S340: positioning and matching are carried out on the first object positioning information according to the frequent grabbing position combination and the target grabbing object three-dimensional model, and grabbing position positioning information is obtained;
step S350: and carrying out grabbing probability evaluation on the grabbing position positioning information, the grabbing path information and the grabbing front joint position information through a grabbing probability evaluation channel to acquire the first grabbing probability.
Specifically, based on the target grabbing object three-dimensional model, feature recognition and extraction are performed to obtain the object type features, such as object attributes and state, and the object geometric features, such as object size and shape. With the object type features and the object geometric features as indexes, a big-data retrieval is performed in the industrial Internet to obtain the grabbing record data of objects similar to the target grabbing object, improving the match between the subsequent grabbing position analysis results and the target grabbing object; the grabbing record data comprise the grabbing position record data. Further, frequent sequence analysis is performed on the grabbing position record data to determine the better grabbing position combinations for the same kind of object, comprising grabbing position information, grabbing angle information and the like.
Further, based on the frequent grabbing position combination and the target grabbing object three-dimensional model, grabbing position matching of the target grabbing object is performed in combination with the first object positioning information to obtain the grabbing position positioning information, which conforms to the frequent grabbing position combination and is the preferred grabbing position determined by the frequent analysis. The grabbing probability evaluation channel is then built, and grabbing probability evaluation is performed based on the grabbing position positioning information, the grabbing path information and the pre-grabbing joint position information to obtain the first grabbing probability. Based on the first grabbing probability it is judged whether a single grab would succeed; if not, grabbing optimization is carried out.
Further, regarding performing frequent sequence analysis on the grabbing position record data to obtain the frequent grabbing position combination, step S330 of the present application further includes:
step S331: the grabbing position record data comprises a first grabbing position, a second grabbing position and an Nth grabbing position;
step S332: setting a single-position frequent threshold, and traversing and pruning the first grabbing position and the second grabbing position up to the Nth grabbing position to obtain a single-position frequent item set;
step S333: setting a double-position frequent threshold, and pruning the combined items of the single-position frequent item set to obtain a double-position frequent item set;
step S334: setting a three-position frequent threshold, and pruning the combined items of the single-position frequent item set to obtain a three-position frequent item set;
step S335: continuing to prune the combined items of the single-position frequent item set until a k-position frequent threshold is set and a k-position frequent item set is obtained, wherein k is the grabbing position number threshold of the grabbing robot;
step S336: traversing the single-position frequent item set, the double-position frequent item set and the three-position frequent item set until the k-position frequent item set is subjected to independent grabbing analysis to obtain independent grabbing position combinations;
Step S337: and extracting the independent grabbing position combination with the frequency maximum value, and setting the independent grabbing position combination as the frequent grabbing position combination.
Specifically, the grabbing position record data are identified, and the first grabbing position, the second grabbing position and so on up to the Nth grabbing position, each a distinct grabbing position point, are extracted. The single-position frequent threshold, i.e. a user-defined critical frequency for judging whether a single grabbing position is a frequent item, is set. Based on the grabbing position record data, the occurrence frequencies of the first grabbing position through the Nth grabbing position are counted and checked against the single-position frequent threshold; grabbing positions that do not meet the threshold are pruned, and those that do are collected as the single-position frequent item set.
Similarly, the double-position frequent threshold, i.e. a user-defined critical frequency for judging combined pairs of positions, is set. The items of the single-position frequent item set are combined pairwise, each combination is checked against the double-position frequent threshold, and the combinations that meet it are retained as the double-position frequent item set. The three-position frequent threshold is then set, the single-position frequent item set is combined in groups of three grabbing positions, each combination is checked against the three-position frequent threshold, and the qualifying combinations form the three-position frequent item set. These steps are repeated for larger combinations of grabbing positions until the pruning and screening of the k-position frequent item set is completed, where k, the grabbing position number threshold of the grabbing robot, is a positive integer less than or equal to N.
Further, the single-position, double-position and three-position frequent item sets, up to the k-position frequent item set, are traversed; for each frequent item the independent grabbing frequency is identified and extracted, the grabbing frequencies are sorted from large to small, and the frequent item with the maximum grabbing frequency is extracted as the independent grabbing position combination, which comprises grabbing position information and grabbing angle information and is the preferred grabbing position determined by intelligent evaluation based on big data.
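The analysis of steps S331-S337 can be sketched in the style of the Apriori algorithm: count every k-position combination occurring in the grabbing position records, prune those below the per-size frequent threshold, and pick the most frequent survivor. The record data and thresholds below are invented examples.

```python
from itertools import combinations
from collections import Counter

def frequent_combinations(records, thresholds, k_max):
    """records: iterable of sets of grabbing positions per recorded grab;
    thresholds[k]: minimum occurrence count for a k-position combination.
    Returns {k: {combination: count}} for k = 1..k_max."""
    frequent = {}
    for k in range(1, k_max + 1):
        counts = Counter()
        for rec in records:
            for combo in combinations(sorted(rec), k):
                counts[combo] += 1
        # pruning: keep only combinations meeting the k-position threshold
        frequent[k] = {c: n for c, n in counts.items() if n >= thresholds[k]}
    return frequent

def best_combination(frequent):
    """Step S337: extract the combination with the maximum frequency."""
    return max((n, c) for sets in frequent.values() for c, n in sets.items())[1]
```

With, say, `thresholds = {1: 2, 2: 2}` and records listing which positions each logged grab used, `frequent_combinations` reproduces the single- and double-position frequent item sets, and `best_combination` mirrors the frequency-maximum extraction of step S337.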
Further, regarding performing grabbing probability evaluation on the grabbing position positioning information, the grabbing path information and the pre-grabbing joint position information through the grabbing probability evaluation channel to obtain the first grabbing probability, step S350 of the present application further includes:
step S351: the grabbing probability evaluation channel comprises a grabbing simulation module and a probability evaluation module;
step S352: performing simulated grabbing in the grabbing simulation module according to the grabbing path information and the pre-grabbing joint position information, to obtain post-grabbing joint position information;
step S353: performing grabbing probability evaluation on the post-grabbing joint position information and the grabbing position positioning information through the probability evaluation module, to obtain the first grabbing probability.
Further, regarding performing grabbing probability evaluation on the post-grabbing joint position information and the grabbing position positioning information through the probability evaluation module to obtain the first grabbing probability, step S353 of the present application further includes:
step S3531: constructing a grabbing probability evaluation function (the formula is published as an image in the original document), whose inputs for each grabbing position i are the i-th grabbing position positioning information, the overlap area between the i-th grabbing position positioning information and the fixed joint of the grabbing hand, and the grasping angle formed by the grabbing hand and the i-th grabbing position positioning information;
step S3532: collecting grabbing log data, wherein the grabbing log data comprise post-grabbing joint position record data and preset grabbing position positioning record data;
step S3533: performing grabbing probability identification on the post-grabbing joint position record data and the preset grabbing position positioning record data based on the grabbing probability evaluation function, to obtain a grabbing probability identification result;
step S3534: and training the probability evaluation module based on the post-grabbing joint position record data, the preset grabbing position positioning record data and the grabbing probability identification result.
Specifically, the grabbing simulation module is constructed, a visual simulation platform is connected, simulation modeling is conducted on the grabbing robot, the grabbing simulation module is formed by combining the target grabbing object three-dimensional model, the probability evaluation module is placed in the grabbing simulation module and connected with the grabbing simulation module, the grabbing probability evaluation channel is generated, grabbing analysis is conducted through the construction of the grabbing probability evaluation channel, and accuracy and objectivity of a probability evaluation result can be effectively guaranteed.
Further, the grabbing path information and the pre-grabbing joint position information are input into the grabbing simulation module, the simulated grab is executed, and joint position recognition and detection after the grabbing operation yield the post-grabbing joint position information. The grabbing position positioning information is then passed to the probability evaluation module, and grabbing probability evaluation is performed in combination with the post-grabbing joint position information.
Specifically, the grabbing probability evaluation function is constructed, wherein P_i characterizes the i-th grabbing position positioning information, S_i represents the superposition area of the i-th grabbing position positioning information and the grabbing hand fixing joint, and θ_i is the grabbing angle formed by the grabbing hand and the i-th grabbing position positioning information, that is, the grabbing angle information. These parameters can be determined from the earlier processing in the embodiment of the application, so that the grabbing probability is quantified and convenient to judge and analyze intuitively. The post-grabbing joint position record data and the preset grabbing position positioning record data, namely the joint positions before and after grabbing, are collected; the two groups of record data correspond to each other one by one, and the preset grabbing position positioning record data include the preferred grabbing position and the grabbing angle data. The grabbing log data are obtained by integrating these data. Grabbing probability calculation and corresponding identification are then carried out on the mapped post-grabbing joint position record data and preset grabbing position positioning record data based on the grabbing probability evaluation function, obtaining the grabbing probability identification result. Further, the post-grabbing joint position record data, the preset grabbing position positioning record data and the grabbing probability identification result are mapped and correlated, and neural network training is performed to generate the probability evaluation module.
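In the published text the evaluation function itself is rendered only as an image, so its exact form is not recoverable here. As a purely illustrative stand-in (the weights, the normalized area s_i, and the sine angle term are all assumptions, not the patent's formula), a per-position weighted score could look like:

```python
import math

def grasp_probability(positions, w_area=0.6, w_angle=0.4):
    """Illustrative stand-in for the patent's grabbing probability
    evaluation function (which appears only as an image in the source).
    Each entry in `positions` is (s_i, theta_i): s_i is the superposition
    area with the grabbing hand fixing joint, normalized to 0..1, and
    theta_i is the grabbing angle in radians. The angle term sin(theta)
    peaks at a right-angle grasp; the weighted terms are averaged and
    clamped to the 0..1 probability range."""
    if not positions:
        return 0.0
    total = 0.0
    for s_i, theta_i in positions:
        total += w_area * s_i + w_angle * math.sin(theta_i)
    return max(0.0, min(1.0, total / len(positions)))

p = grasp_probability([(0.8, math.pi / 2), (0.6, math.pi / 3)])
```

A trained probability evaluation module would replace such a fixed formula with a learned mapping from the recorded joint positions and position/angle data to a success probability.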
Further, the post-grabbing joint position information and the grabbing position positioning information are input into the probability evaluation module, information matching and association decision analysis are carried out, the first grabbing probability is obtained and output, and the first grabbing probability is the probability of successfully completing grabbing operation.
Step S400: when the first grabbing probability is smaller than or equal to a grabbing probability threshold value, a first grabbing barrier region is obtained;
step S500: adjusting the first object positioning information according to the first grabbing barrier region to obtain second object positioning information, wherein the second object positioning information does not have the first grabbing barrier region;
Specifically, the grabbing probability threshold is set, that is, a critical grabbing probability value for successful grabbing, set empirically by a person skilled in the art. The first grabbing probability is checked against the grabbing probability threshold; if the first grabbing probability is larger than the grabbing probability threshold, the first grabbing action feature can complete the grabbing operation. If the grabbing success probability is lower, operation optimization is necessary, and the first grabbing obstacle region, that is, the local region of the target grabbing object that prevents normal grabbing, is acquired; the obstacle region can be determined based on the simulated grabbing process in the grabbing simulation module. Further, based on the first grabbing obstacle region, the first object positioning information is adjusted, namely, the first grabbing obstacle region is avoided, and the adjusted second object positioning information is obtained.
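A minimal sketch of the steps S400-S500 adjustment, with the obstacle region simplified to a sphere and every coordinate an assumed example value:

```python
def adjust_positioning(pos, obstacle_center, obstacle_radius, margin=0.02):
    """Steps S400-S500 sketch: if the object positioning `pos` (x, y, z)
    falls inside the first grabbing obstacle region -- modeled here as a
    sphere purely for illustration -- push it radially outward past the
    region plus a safety margin, yielding the second object positioning.
    Positions already clear of the region are returned unchanged."""
    offset = [p - c for p, c in zip(pos, obstacle_center)]
    dist = sum(d * d for d in offset) ** 0.5
    clearance = obstacle_radius + margin
    if dist >= clearance:
        return list(pos)                       # no overlap: keep positioning
    if dist == 0.0:
        offset, dist = [1.0, 0.0, 0.0], 1.0    # arbitrary escape direction
    scale = clearance / dist
    return [c + d * scale for c, d in zip(obstacle_center, offset)]

second_pos = adjust_positioning([0.0, 0.0, 0.0], [0.0, 0.0, 0.05], 0.1)
```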
Step S600: positioning adjustment is carried out on the target grabbing object according to the second object positioning information, the first grabbing action feature is optimized based on the second object positioning information, and a second grabbing action feature is obtained;
further, the step S600 of the present application further includes:
step S610: positioning and fitting the frequent grabbing position combination based on the second object positioning information to obtain grabbing position optimizing and positioning results;
step S620: optimizing the post-grabbing joint position information based on the grabbing position optimizing and positioning result and combining the probability evaluation module to obtain a post-grabbing joint position optimizing result, wherein grabbing probability of the post-grabbing joint position optimizing result is larger than the grabbing probability threshold;
step S630: and performing reverse fitting on the grabbing path information and the grabbing front joint position information based on the grabbing rear joint position optimizing result to obtain the second grabbing action feature.
Specifically, since the grabbing success rate of the first grabbing action feature is low, position movement adjustment is performed on the target grabbing object, and the second object positioning information is determined by avoiding the obstacle region. The first grabbing action feature is then optimized in a reverse optimizing mode: the end joint position and the end joint angle are determined first, that is, the post-grabbing joint position optimizing result is obtained, and the pre-grabbing joint position and the grabbing path are optimized afterwards. As long as the optimization target is the post-grabbing joint position optimizing result, namely the end joint position and angle, the initial position and the path can be converged.
Specifically, the second object positioning information is spatial relative positioning information determined based on the three-dimensional space coordinate system, and the frequent grabbing position combination is the relative positioning of an entity with respect to the target grabbing object. Position fitting is performed by combining the second object positioning information with the frequent grabbing position combination, and the actual grabbing position of the target grabbing object is determined in the three-dimensional space coordinate system as the grabbing position optimizing positioning result. Further, the probability evaluation module is used to evaluate the post-grabbing probability of the grabbing position optimizing positioning result and judge whether the probability evaluation result meets the grabbing probability threshold; if not, the post-grabbing joint position information is adjusted and optimized, and probability evaluation and judgment are performed on the adjusted information until the grabbing probability threshold is met. The final optimized and adjusted position information serves as the post-grabbing joint position optimizing result, namely the end joint position and angle.
Further, reverse optimizing is performed based on the post-grabbing joint position optimizing result, namely the determined end joint position and angle, and the grabbing path information and the pre-grabbing joint position information are fitted. As long as the final response target is the post-grabbing joint position optimizing result, the pre-grabbing joint position and grabbing path matched with it can meet the convergence criteria; the grabbing path information and the pre-grabbing joint position information thus determined serve as the second grabbing action feature.
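The "target first, path second" direction of this reverse fitting can be sketched as follows; linear joint-space interpolation stands in for the real trajectory optimization, and all joint values are assumed examples:

```python
import numpy as np

def reverse_fit(pre_grab_joints, post_grab_target, n_waypoints=10):
    """Reverse fitting sketch (step S630): the optimized post-grabbing
    joint position is fixed as the response target, and a grabbing path is
    fitted back from it toward the pre-grabbing joint position by linear
    interpolation in joint space. A real system would use inverse
    kinematics / trajectory optimization under convergence criteria; this
    only illustrates that the end state is decided first."""
    start = np.asarray(pre_grab_joints, dtype=float)
    target = np.asarray(post_grab_target, dtype=float)
    ts = np.linspace(0.0, 1.0, n_waypoints)
    return start + ts[:, None] * (target - start)   # (n_waypoints, n_joints)

path = reverse_fit([0.0, 0.0, 0.0], [1.0, 2.0, 3.0], n_waypoints=5)
```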
Step S700: according to the second grabbing action characteristics and the second object positioning information, carrying out grabbing probability evaluation on the target grabbing object three-dimensional model to obtain a second grabbing probability;
step S800: and when the second grabbing probability is larger than the grabbing probability threshold value, grabbing control is performed based on the second grabbing action characteristic.
Specifically, based on the second grabbing action feature and the second object positioning information, simulated grabbing execution and probability evaluation are performed on the target grabbing object three-dimensional model in combination with the grabbing probability evaluation channel, and the second grabbing probability is acquired through the simulation module and the probability evaluation module. The second grabbing probability is then checked against the grabbing probability threshold. When the second grabbing probability is larger than the grabbing probability threshold, that is, the probability of successful grabbing is high, grabbing control is performed on the target grabbing object based on the second grabbing action feature. If the second grabbing probability is smaller than the grabbing probability threshold, optimization adjustment and grabbing probability prediction are carried out again in the manner of the embodiment of the application until the grabbing probability meets the threshold, so as to carry out intelligent optimization control of the manipulator and ensure the grabbing effect.
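Taken together, steps S300-S800 form an evaluate-optimize loop that only executes once the threshold is exceeded. A schematic sketch, where `evaluate` and `optimize` are caller-supplied stand-ins for the simulation and probability-evaluation modules and the 0.9 threshold is an assumed value:

```python
def grab_control_loop(feature, evaluate, optimize, threshold=0.9, max_iters=5):
    """Evaluate the current grabbing action feature; while the probability
    stays at or below the threshold, optimize (obstacle avoidance plus
    reverse fitting) and re-evaluate. Returns the feature that finally
    clears the threshold together with its probability."""
    for _ in range(max_iters):
        probability = evaluate(feature)
        if probability > threshold:
            return feature, probability        # safe to execute the grab
        feature = optimize(feature)            # e.g. steps S500-S600
    raise RuntimeError("grabbing probability threshold not met")
```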
The control method of the grabbing robot provided by the embodiment of the application has the following technical effects:
1. Frequent analysis and pruning of grabbing positions are carried out in combination with big data, and the optimal grabbing position information and grabbing angle information are determined; the target grabbing object is modeled based on the multidimensional image, grabbing probability evaluation is carried out based on the spatial positioning information of the object and the grabbing action feature, and whether the control standard of one-time grabbing success can be reached is judged; otherwise, scene adaptability adjustment is carried out to ensure the grabbing effect.
2. Target grabbing object spatial position adjustment and grabbing control optimization are performed in a reverse optimization mode: the optimized post-grabbing joint position information meeting the grabbing standard is determined first and taken as the response target, fitting analysis is performed on the grabbing path information and the pre-grabbing joint position information, target-compliant flexible optimization execution is realized, and self-adaptive flexible grabbing regulation of the grabbing operation of the grabbing robot is achieved.
Embodiment two:
Based on the same inventive concept as the grabbing robot control method in the foregoing embodiment, as shown in fig. 4, the present application provides a grabbing robot control system, including:
The model building module 11 is used for performing multidimensional image acquisition on the target grabbing object and building a three-dimensional model of the target grabbing object, wherein the three-dimensional model of the target grabbing object is provided with first object positioning information;
the characteristic acquisition module 12 is configured to acquire a first capturing action characteristic of the capturing robot, where the first capturing action characteristic includes capturing path information and pre-capturing joint position information;
the first grabbing probability acquisition module 13 is configured to perform grabbing probability evaluation on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information according to the first object positioning information, so as to acquire a first grabbing probability;
an obstacle region acquisition module 14, where the obstacle region acquisition module 14 is configured to acquire a first grasping obstacle region when the first grasping probability is less than or equal to a grasping probability threshold;
a positioning information obtaining module 15, where the positioning information obtaining module 15 is configured to adjust the first object positioning information according to the first capture obstacle area, and obtain second object positioning information, where the second object positioning information does not have the first capture obstacle area;
The feature optimizing module 16 is configured to perform positioning adjustment on the target gripping object according to the second object positioning information, optimize the first gripping motion feature based on the second object positioning information, and obtain a second gripping motion feature;
the second grabbing probability obtaining module 17 is configured to perform grabbing probability evaluation on the three-dimensional model of the target grabbing object according to the second grabbing action feature and the second object positioning information, so as to obtain a second grabbing probability;
the grabbing control module 18, the grabbing control module 18 is configured to perform grabbing control based on the second grabbing action feature when the second grabbing probability is greater than the grabbing probability threshold.
Further, the model building module 11 further includes:
the direction line construction module is used for constructing a preset positioning point at the center position of the target grabbing object based on the size sensor, and constructing a main view direction line, a side view direction line and a top view direction line based on the preset positioning point;
the first angle image acquisition module is used for controlling the motion camera to acquire a first angle image from the preset positioning point to the preset distance of the main view direction line;
The second angle image acquisition module is used for controlling the motion camera to acquire a second angle image from the preset positioning point to the preset distance of the side view direction line;
the third angle image acquisition module is used for controlling the motion camera to acquire a third angle image from the preset positioning point to the preset distance of the top view direction line;
the three-dimensional model construction module is used for constructing the target grabbing object three-dimensional model according to the first angle image, the second angle image and the third angle image;
the first object positioning information acquisition module is used for constructing a three-dimensional space coordinate system by taking the preset positioning point as an original point and taking the main view direction line, the side view direction line and the top view direction line as coordinate axes to acquire the first object positioning information.
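The coordinate-system construction performed by this module amounts to re-expressing points in a frame anchored at the preset positioning point, with the three view direction lines as axes. A sketch under the assumption that the three direction lines are mutually orthogonal unit vectors (all coordinates illustrative):

```python
import numpy as np

def first_object_positioning(anchor, front_dir, side_dir, top_dir, point):
    """Express a world point in the frame whose origin is the preset
    positioning point (`anchor`) and whose axes are the main-view,
    side-view and top-view direction lines, assumed here to be mutually
    orthogonal unit vectors. The result plays the role of the first
    object positioning information."""
    axes = np.stack([front_dir, side_dir, top_dir]).astype(float)  # rows = axes
    delta = np.asarray(point, dtype=float) - np.asarray(anchor, dtype=float)
    return axes @ delta

loc = first_object_positioning(
    anchor=[1.0, 2.0, 0.5],
    front_dir=[1.0, 0.0, 0.0],
    side_dir=[0.0, 1.0, 0.0],
    top_dir=[0.0, 0.0, 1.0],
    point=[1.5, 2.5, 1.5],
)
```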
Further, the first capture probability obtaining module 13 further includes:
the object feature acquisition module is used for acquiring object type features and object geometric features according to the target grabbing object three-dimensional model;
The grabbing record data acquisition module is used for carrying out grabbing search in industrial big data based on the object type characteristics and the object geometric characteristics to acquire grabbing record data, wherein the grabbing record data comprises grabbing position record data;
the frequent sequence analysis module is used for carrying out frequent sequence analysis on the grabbing position record data to acquire frequent grabbing position combinations;
the positioning matching module is used for performing positioning matching on the first object positioning information according to the frequent grabbing position combination and the target grabbing object three-dimensional model to obtain grabbing position positioning information;
the probability evaluation module is used for carrying out grabbing probability evaluation on the grabbing position positioning information, the grabbing path information and the grabbing front joint position information through a grabbing probability evaluation channel to acquire the first grabbing probability.
Further, the frequent sequence analysis module further includes:
the data analysis module is used for analyzing the grabbing position record data, wherein the grabbing position record data comprises a first grabbing position and a second grabbing position, up to an Nth grabbing position;
The single-position frequent item acquisition module is used for setting a single-position frequent threshold value, traversing the first grabbing position and the second grabbing position until the Nth grabbing position to prune, and acquiring a single-position frequent item set;
the double-position frequent item acquisition module is used for setting a double-position frequent threshold value, pruning the combined items of the single-position frequent item set and acquiring the double-position frequent item set;
the three-position frequent item acquisition module is used for setting a three-position frequent threshold, pruning the combined items of the double-position frequent item set, and acquiring the three-position frequent item set;
the k-position frequent item acquisition module is used for setting frequent thresholds level by level and pruning the combined items of the previous-level frequent item set, up to a k-position frequent threshold, to acquire the k-position frequent item set, wherein k is the grabbing position number threshold of the grabbing robot;
the independent grabbing analysis module is used for traversing the single-position, double-position and three-position frequent item sets, up to the k-position frequent item set, and performing independent grabbing analysis to obtain independent grabbing position combinations;
The frequent grabbing position combination setting module is used for extracting the independent grabbing position combination with the frequency maximum value and setting the independent grabbing position combination as the frequent grabbing position combination.
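The level-by-level pruning described by these modules follows an Apriori-style scheme. A minimal sketch, where the records, position labels, and per-level thresholds are all assumed example data:

```python
def frequent_position_combinations(records, thresholds):
    """Apriori-style frequent grabbing-position mining sketch: single
    positions meeting thresholds[0] are kept, surviving items are combined
    into 2-position, 3-position, ... candidates, and each level is pruned
    against its own threshold (thresholds[k-1] for k-position items).
    `records` is a list of sets of grabbing-position labels."""
    def support(combo):
        return sum(1 for record in records if set(combo) <= record)

    items = sorted({pos for record in records for pos in record})
    level = [(item,) for item in items if support((item,)) >= thresholds[0]]
    frequent = list(level)
    for size, threshold in enumerate(thresholds[1:], start=2):
        candidates = sorted({tuple(sorted(set(a) | set(b)))
                             for a in level for b in level
                             if len(set(a) | set(b)) == size})
        level = [c for c in candidates if support(c) >= threshold]
        frequent.extend(level)
        if not level:
            break                  # no larger combination can survive
    return frequent
```

Extracting the combination with maximum support from the result then corresponds to setting the frequent grabbing position combination.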
Further, the probability evaluation module further includes:
the channel analysis module is used for analyzing the grabbing probability evaluation channel, wherein the grabbing probability evaluation channel comprises a grabbing simulation module and a probability evaluation module;
the simulation grabbing module is used for performing simulation grabbing in the grabbing simulation module according to the grabbing path information and the grabbing front joint position information to obtain grabbing rear joint position information;
the probability acquisition module is used for carrying out grabbing probability evaluation on the grabbed joint position information and the grabbing position positioning information through the probability evaluation module, and acquiring the first grabbing probability.
Further, the probability acquisition module further includes:
the function construction module is used for constructing a grabbing probability evaluation function:
wherein P_i characterizes the i-th grabbing position positioning information, S_i represents the superposition area of the i-th grabbing position positioning information and the grabbing hand fixing joint, and θ_i is the grabbing angle formed by the grabbing hand and the i-th grabbing position positioning information;
the log acquisition module is used for acquiring grabbing log data, wherein the grabbing log data comprise post-grabbing joint position record data and preset grabbing position positioning record data;
the grabbing probability identification module is used for carrying out grabbing probability identification on the grabbed joint position record data and the preset grabbing position positioning record data based on the grabbing probability evaluation function, and acquiring grabbing probability identification results;
the training module is used for training the probability evaluation module based on the post-grabbing joint position record data, the preset grabbing position positioning record data and the grabbing probability identification result.
Further, the feature optimizing module 16 further includes:
the positioning fitting module is used for performing positioning fitting on the frequent grabbing position combination based on the second object positioning information to obtain grabbing position optimizing positioning results;
the position optimization module is used for optimizing the post-grabbing joint position information based on the grabbing position optimizing and positioning result and combining the probability evaluation module to obtain a post-grabbing joint position optimizing result, wherein grabbing probability of the post-grabbing joint position optimizing result is larger than the grabbing probability threshold;
The grabbing action feature acquisition module is used for carrying out reverse fitting on the grabbing path information and the grabbing front joint position information based on the grabbing rear joint position optimizing result to acquire the second grabbing action feature.
The foregoing detailed description of the grabbing robot control method will be clear to those skilled in the art; since the system disclosed in this embodiment corresponds to that method, its description is relatively simple, and for the relevant points reference is made to the description of the method part.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A grasping robot control method, characterized by comprising:
performing multidimensional image acquisition on a target grabbing object to construct a target grabbing object three-dimensional model, wherein the target grabbing object three-dimensional model is provided with first object positioning information;
acquiring a first grabbing action characteristic of the grabbing robot, wherein the first grabbing action characteristic comprises grabbing path information and grabbing front joint position information;
according to the first object positioning information, carrying out grabbing probability evaluation on the target grabbing object three-dimensional model based on the grabbing path information and the pre-grabbing joint position information, and obtaining a first grabbing probability;
when the first grabbing probability is smaller than or equal to a grabbing probability threshold value, a first grabbing barrier region is obtained;
adjusting the first object positioning information according to the first grabbing barrier region to obtain second object positioning information, wherein the second object positioning information does not have the first grabbing barrier region;
positioning adjustment is carried out on the target grabbing object according to the second object positioning information, the first grabbing action feature is optimized based on the second object positioning information, and a second grabbing action feature is obtained;
According to the second grabbing action characteristics and the second object positioning information, carrying out grabbing probability evaluation on the target grabbing object three-dimensional model to obtain a second grabbing probability;
and when the second grabbing probability is larger than the grabbing probability threshold value, grabbing control is performed based on the second grabbing action characteristic.
2. The method of claim 1, wherein performing multidimensional image acquisition on the target grabbing object to construct the target grabbing object three-dimensional model, wherein the target grabbing object three-dimensional model is provided with the first object positioning information, comprises:
based on a size sensor, a preset positioning point is built at the center position of the target grabbing object, and a main view direction line, a side view direction line and a top view direction line are built based on the preset positioning point;
controlling a motion camera to acquire a first angle image from a preset distance from the preset positioning point to the main view direction line;
controlling a motion camera to acquire a second angle image from the preset distance from the preset positioning point to the side view direction line;
controlling a motion camera to acquire a third angle image from the preset distance from the preset positioning point to the top view direction line;
Constructing the three-dimensional model of the target grabbing object according to the first angle image, the second angle image and the third angle image;
and constructing a three-dimensional space coordinate system by taking the preset positioning point as an origin and taking the main view direction line, the side view direction line and the top view direction line as coordinate axes, and acquiring the positioning information of the first object.
3. The method of claim 1, wherein performing a grasp probability evaluation on the three-dimensional model of the target grasp object based on the grasp path information and the pre-grasp joint position information according to the first object positioning information, obtaining a first grasp probability comprises:
acquiring object type features and object geometric features according to the target grabbing object three-dimensional model;
performing grabbing and searching in industrial big data based on the object type characteristics and the object geometric characteristics to obtain grabbing record data, wherein the grabbing record data comprise grabbing position record data;
performing frequent sequence analysis on the grabbing position record data to acquire frequent grabbing position combinations;
positioning and matching are carried out on the first object positioning information according to the frequent grabbing position combination and the target grabbing object three-dimensional model, and grabbing position positioning information is obtained;
And carrying out grabbing probability evaluation on the grabbing position positioning information, the grabbing path information and the grabbing front joint position information through a grabbing probability evaluation channel to acquire the first grabbing probability.
4. A method according to claim 3, wherein frequent sequence analysis of the grab position record data is performed to obtain frequent grab position combinations, comprising:
the grabbing position record data comprises a first grabbing position and a second grabbing position, up to an Nth grabbing position;
setting a frequent threshold of the single position, traversing the first grabbing position and the second grabbing position until the Nth grabbing position to prune, and obtaining a frequent item set of the single position;
setting a double-position frequent threshold, pruning the combined items of the single-position frequent item set to obtain the double-position frequent item set;
setting a three-position frequent threshold, and pruning the combined items of the double-position frequent item set to obtain the three-position frequent item set;
setting frequent thresholds level by level and pruning the combined items of the previous-level frequent item set, up to a k-position frequent threshold, to obtain the k-position frequent item set, wherein k is a grabbing position number threshold of the grabbing robot;
traversing the single-position, double-position and three-position frequent item sets, up to the k-position frequent item set, and performing independent grabbing analysis to obtain independent grabbing position combinations;
And extracting the independent grabbing position combination with the frequency maximum value, and setting the independent grabbing position combination as the frequent grabbing position combination.
5. The method of claim 3, wherein performing grabbing probability evaluation on the grabbing position positioning information, the grabbing path information and the pre-grabbing joint position information through the grabbing probability evaluation channel to acquire the first grabbing probability comprises:
the grabbing probability evaluation channel comprises a grabbing simulation module and a probability evaluation module;
according to the grabbing path information and the grabbing front joint position information, carrying out simulated grabbing in the grabbing simulation module to obtain grabbing rear joint position information;
and carrying out grabbing probability evaluation on the grabbed joint position information and the grabbing position positioning information through the probability evaluation module to acquire the first grabbing probability.
6. The method of claim 5, wherein performing grabbing probability evaluation on the post-grabbing joint position information and the grabbing position positioning information through the probability evaluation module to acquire the first grabbing probability comprises:
constructing a grabbing probability evaluation function:
wherein P_i characterizes the i-th grabbing position positioning information, S_i represents the superposition area of the i-th grabbing position positioning information and the grabbing hand fixing joint, and θ_i is the grabbing angle formed by the grabbing hand and the i-th grabbing position positioning information;
collecting grabbing log data, wherein the grabbing log data comprise post-grabbing joint position record data and preset grabbing position positioning record data;
performing grabbing probability identification on the grabbed joint position record data and the preset grabbing position positioning record data based on the grabbing probability evaluation function to obtain grabbing probability identification results;
and training the probability evaluation module based on the post-grabbing joint position record data, the preset grabbing position positioning record data and the grabbing probability identification result.
7. The method of claim 6, wherein performing positioning adjustment on the target grabbing object according to the second object positioning information, optimizing the first grabbing action feature based on the second object positioning information, and obtaining the second grabbing action feature comprises:
positioning and fitting the frequent grabbing position combination based on the second object positioning information to obtain grabbing position optimizing and positioning results;
Optimizing the post-grabbing joint position information based on the grabbing position optimizing and positioning result and combining the probability evaluation module to obtain a post-grabbing joint position optimizing result, wherein grabbing probability of the post-grabbing joint position optimizing result is larger than the grabbing probability threshold;
and performing reverse fitting on the grabbing path information and the grabbing front joint position information based on the grabbing rear joint position optimizing result to obtain the second grabbing action feature.
8. A grasping robot control system, characterized by comprising:
the model building module is used for carrying out multidimensional image acquisition on the target grabbing object and constructing a target grabbing object three-dimensional model, wherein the target grabbing object three-dimensional model is provided with first object positioning information;
the device comprises a feature acquisition module, a first control module and a second control module, wherein the feature acquisition module is used for acquiring a first grabbing action feature of the grabbing robot, and the first grabbing action feature comprises grabbing path information and grabbing front joint position information;
the first grabbing probability acquisition module is used for carrying out grabbing probability evaluation on the three-dimensional model of the target grabbing object based on the grabbing path information and the pre-grabbing joint position information according to the first object positioning information to acquire a first grabbing probability;
The obstacle region acquisition module is used for acquiring a first grabbing obstacle region when the first grabbing probability is smaller than or equal to a grabbing probability threshold value;
the positioning information acquisition module is used for adjusting the first object positioning information according to the first grabbing barrier region to acquire second object positioning information, wherein the second object positioning information does not have the first grabbing barrier region;
the characteristic optimizing module is used for carrying out positioning adjustment on the target grabbing object according to the second object positioning information, optimizing the first grabbing action characteristic based on the second object positioning information and obtaining a second grabbing action characteristic;
the second grabbing probability acquisition module is used for carrying out grabbing probability evaluation on the target grabbing object three-dimensional model according to the second grabbing action characteristics and the second object positioning information to acquire a second grabbing probability;
and the grabbing control module is used for carrying out grabbing control based on the second grabbing action characteristic when the second grabbing probability is larger than the grabbing probability threshold value.
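The control flow claimed above (evaluate the first grabbing probability, fall back to obstacle-region removal, repositioning, and action re-optimization when it is at or below the threshold, then re-evaluate before grabbing) can be sketched as a minimal loop. All names, the threshold value, and the toy probability model below are illustrative assumptions for this sketch, not part of the patent; a real system would score grasps against the three-dimensional model of the target grabbing object.

```python
from dataclasses import dataclass

GRASP_PROB_THRESHOLD = 0.8  # assumed value; the claims only require "a grabbing probability threshold"

@dataclass
class GraspAction:
    path: list              # grabbing path information (waypoints)
    pre_grasp_joints: list  # pre-grabbing joint position information

def evaluate_grasp_probability(action, positioning):
    """Toy stand-in for the probability evaluation module:
    probability drops with each remaining obstacle region."""
    return max(0.0, 1.0 - 0.2 * len(positioning["obstacle_regions"]))

def adjust_positioning(positioning, obstacle_region):
    """Second object positioning information: same pose data
    minus the first grabbing obstacle region."""
    return {**positioning,
            "obstacle_regions": [r for r in positioning["obstacle_regions"]
                                 if r != obstacle_region]}

def optimize_action(action, positioning):
    """Stand-in for positioning fitting, joint-position optimization,
    and reverse fitting; returns the second grabbing action feature."""
    return GraspAction(list(action.path), list(action.pre_grasp_joints))

def grasp_control(action, positioning):
    p1 = evaluate_grasp_probability(action, positioning)
    if p1 > GRASP_PROB_THRESHOLD:            # first probability passes: grab directly
        return action, p1
    obstacle = positioning["obstacle_regions"][0]    # first grabbing obstacle region
    positioning2 = adjust_positioning(positioning, obstacle)
    action2 = optimize_action(action, positioning2)  # second grabbing action feature
    p2 = evaluate_grasp_probability(action2, positioning2)
    if p2 > GRASP_PROB_THRESHOLD:
        return action2, p2
    raise RuntimeError("second grabbing probability still below threshold")

action = GraspAction(path=[(0, 0, 0.3), (0, 0, 0.05)], pre_grasp_joints=[0.0] * 6)
positioning = {"pose": (0, 0, 0), "obstacle_regions": ["left_edge"]}
final_action, prob = grasp_control(action, positioning)
```

In this toy run the first probability (0.8) does not exceed the threshold, so the obstacle region is removed, the action is re-optimized, and the second evaluation (1.0) authorizes the grab — mirroring the two-stage evaluation the claims describe.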
CN202311294693.5A 2023-10-09 2023-10-09 Grabbing robot control method and system Active CN117021122B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311294693.5A CN117021122B (en) 2023-10-09 2023-10-09 Grabbing robot control method and system

Publications (2)

Publication Number Publication Date
CN117021122A true CN117021122A (en) 2023-11-10
CN117021122B CN117021122B (en) 2024-01-26

Family

ID=88645288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311294693.5A Active CN117021122B (en) 2023-10-09 2023-10-09 Grabbing robot control method and system

Country Status (1)

Country Link
CN (1) CN117021122B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108058172A (en) * 2017-11-30 2018-05-22 深圳市唯特视科技有限公司 Manipulator grasping method based on an autoregressive model
CN109015640A (en) * 2018-08-15 2018-12-18 深圳清华大学研究院 Grasping method, system, computer device and computer-readable storage medium
US20200164505A1 (en) * 2018-11-27 2020-05-28 Osaro Training for Robot Arm Grasping of Objects
ES2885077A1 (en) * 2020-06-09 2021-12-13 Consejo Superior Investigacion METHOD FOR DETERMINING A HAND GRIP MODEL (Machine-translation by Google Translate, not legally binding)
CN113787521A (en) * 2021-09-24 2021-12-14 上海微电机研究所(中国电子科技集团公司第二十一研究所) Robot grabbing method, system, medium and electronic device based on deep learning
US20220016767A1 (en) * 2020-07-14 2022-01-20 Vicarious Fpc, Inc. Method and system for object grasping
CN114918944A (en) * 2022-06-02 2022-08-19 哈尔滨理工大学 Family service robot grabbing detection method based on convolutional neural network fusion
US20230081119A1 (en) * 2021-09-13 2023-03-16 Osaro Automated Robotic Tool Selection
CN116276973A (en) * 2023-01-31 2023-06-23 中电科机器人有限公司 Visual perception grabbing training method based on deep learning
CN116673962A (en) * 2023-07-12 2023-09-01 安徽大学 Intelligent mechanical arm grabbing method and system based on Faster R-CNN and GRCNN

Also Published As

Publication number Publication date
CN117021122B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN109934115B (en) Face recognition model construction method, face recognition method and electronic equipment
US20190152054A1 (en) Gripping system with machine learning
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN107813310B (en) Multi-gesture robot control method based on binocular vision
CN112297013B (en) Robot intelligent grabbing method based on digital twin and deep neural network
CN108196453B (en) Intelligent calculation method for mechanical arm motion planning group
CN110378325B (en) Target pose identification method in robot grabbing process
JP2021167060A (en) Robot teaching by human demonstration
JP2012518236A (en) Method and system for gesture recognition
CN112906797A (en) Plane grabbing detection method based on computer vision and deep learning
CN110216671A (en) A kind of mechanical gripper training method and system based on Computer Simulation
CN112847374B (en) Parabolic-object receiving robot system
JP2018128897A (en) Detection method and detection program for detecting attitude and the like of object
CN111745640A (en) Object detection method, object detection device, and robot system
CN114463244A (en) Vision robot grabbing system and control method thereof
CN117021122B (en) Grabbing robot control method and system
Schaub et al. 6-DoF grasp detection for unknown objects
CN112070005B (en) Three-dimensional primitive data extraction method and device and storage medium
Boby Hand-eye calibration using a single image and robotic picking up using images lacking in contrast
CN117036470A (en) Object identification and pose estimation method of grabbing robot
Fang et al. A pick-and-throw method for enhancing robotic sorting ability via deep reinforcement learning
Zhang et al. Object detection and grabbing based on machine vision for service robot
CN113436293A (en) Intelligent captured image generation method based on condition generation type countermeasure network
Zhao Application of Robot Sorting Technology in Automatic Sorting of 3D Vision Target Pose Manipulator
CN111797929B (en) Binocular robot obstacle feature detection method based on CNN and PSO

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant