CN118106973A - Mechanical arm grabbing method and device, electronic equipment and storage medium - Google Patents


Publication number
CN118106973A
CN118106973A (application CN202410459359.9A)
Authority
CN
China
Prior art keywords
grabbing
mechanical arm
recognition model
target object
target
Prior art date
Legal status
Pending
Application number
CN202410459359.9A
Other languages
Chinese (zh)
Inventor
徐博文
王蒙
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202410459359.9A priority Critical patent/CN118106973A/en
Publication of CN118106973A publication Critical patent/CN118106973A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The application provides a mechanical arm grabbing method and device, an electronic device, and a storage medium, relating to the technical field of machine vision. The method inputs point cloud data of a grabbing area into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of a target object. Then, considering the influence of the mechanical arm clamp on grabbing, the plurality of selectable grabbing positions are adjusted according to the size of the clamp to obtain an accurate target grabbing position, improving its accuracy. Finally, the mechanical arm is controlled to grab the target object based on its current position and the target grabbing position. This improves the success rate and efficiency of grabbing, reduces the number of re-planning passes, meets practical requirements, and solves the prior-art problem of grabbing failures caused by treating the target grabbing point and the clamp's actual acting point as the same position.

Description

Mechanical arm grabbing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of machine vision, in particular to a mechanical arm grabbing method, a mechanical arm grabbing device, electronic equipment and a storage medium.
Background
Grabbing planning is an important problem in the field of industrial mechanical arms. It is generally organized around a sensing-planning-control process, i.e., three steps: determining grabbing points, planning a grabbing path, and controlling the mechanical arm to reach the position and complete the grabbing action.
In the prior art, vision-based mechanical arm grabbing methods are common. Their main principle is to perform semantic segmentation with a visual perception network, select a suitable object as the grabbing target, determine a target grabbing point from the object's normal vector, plan a trajectory from the current position of the mechanical arm to the target grabbing point, and control the mechanical arm to grab along the planned trajectory.
However, the actual grabbing point of the mechanical arm is the actual acting point of the clamp. Treating the target grabbing point and the clamp's actual acting point as the same position leads to grabbing failures.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a mechanical arm grabbing method and device, an electronic device, and a storage medium, so as to improve the grabbing efficiency of the mechanical arm.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the application is as follows:
in a first aspect, an embodiment of the present application provides a method for grabbing a mechanical arm, where the method includes:
acquiring point cloud data of a grabbing area, wherein a target object to be grabbed is placed in the grabbing area;
inputting the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object;
adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain a target grabbing position of the target object;
and controlling the mechanical arm to grab the target object according to the target grabbing position.
In a second aspect, an embodiment of the present application further provides a robotic arm gripping device, where the device includes:
the acquisition module, configured to acquire point cloud data of a grabbing area, wherein a target object to be grabbed is placed in the grabbing area;
the recognition module, configured to input the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object;
the adjusting module, configured to adjust the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain a target grabbing position of the target object;
and the control module, configured to control the mechanical arm to grab the target object according to the target grabbing position.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the robotic arm gripping method as provided in the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, the computer program when executed by a processor performs the method for gripping a robotic arm according to the first aspect.
The beneficial effects of the application are as follows:
the application provides a mechanical arm grabbing method and device, an electronic device, and a storage medium. In this scheme, the acquired point cloud data of the grabbing area is first input into a pre-trained grabbing pose recognition model, which outputs a plurality of selectable grabbing positions of the target object. Then, considering the influence of the mechanical arm clamp on grabbing, the plurality of selectable grabbing positions are compensated and adjusted according to the size of the clamp to obtain an accurate target grabbing position, improving its accuracy. Finally, the mechanical arm is controlled to grab the target object based on the target grabbing position, which improves the success rate and efficiency of grabbing, reduces the number of re-planning passes, meets practical requirements, and solves the prior-art problem of grabbing failures caused by treating the target grabbing point and the clamp's actual acting point as the same position.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a mechanical arm grabbing application scenario according to an embodiment of the present application;
Fig. 2 is a first flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 3 is a schematic view of a grabbing area according to an embodiment of the present application;
Fig. 4 is a second flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 5 is a third flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 6 is a fourth flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 7 is a fifth flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 8 is a sixth flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 9 is a seventh flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 10 is a schematic diagram of a mechanical arm clamp according to an embodiment of the present application;
Fig. 11 is an eighth flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 12 is a ninth flow chart of the mechanical arm grabbing method according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of a mechanical arm grabbing device according to an embodiment of the present application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Icon: 101-an industrial personal computer; 102-a mechanical arm; 103-depth camera.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described with reference to the accompanying drawings in the embodiments of the present application, and it should be understood that the drawings in the present application are for the purpose of illustration and description only and are not intended to limit the scope of the present application. In addition, it should be understood that the schematic drawings are not drawn to scale. A flowchart, as used in this disclosure, illustrates operations implemented according to some embodiments of the present application. It should be understood that the operations of the flow diagrams may be implemented out of order and that steps without logical context may be performed in reverse order or concurrently. Moreover, one or more other operations may be added to or removed from the flow diagrams by those skilled in the art under the direction of the present disclosure.
In addition, the described embodiments are only some, but not all, embodiments of the application. The components of the embodiments of the present application generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the application, as presented in the figures, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that the term "comprising" will be used in embodiments of the application to indicate the presence of the features stated hereafter, but not to exclude the addition of other features.
First, the prior art to which the present application relates will be described.
In general, grabbing planning for a mechanical arm is a classical problem, generally organized around a sensing-planning-control process: determining grabbing points, planning a grabbing path, and controlling the mechanical arm to reach the position and complete the action. However, mature industrial mechanical arm applications are mainly found on assembly lines, where the arm executes only simple, fixed-trajectory tasks.
At present, a general and practical vision-based mechanical arm grabbing method in the prior art performs semantic segmentation with a visual perception network, selects a suitable object as the grabbing target, determines a target grabbing point from the object's normal vector, plans a trajectory from the current position of the mechanical arm to the target grabbing point, and controls the mechanical arm to grab along the planned trajectory.
However, the actual grabbing point of the mechanical arm is the actual acting point of the clamp. Treating the target grabbing point and the clamp's actual acting point as the same position leads to grabbing failures.
To address these problems, an embodiment of the present application provides a mechanical arm grabbing method. Based on the point cloud data of the grabbing area and a pre-trained grabbing pose recognition model, a plurality of selectable grabbing positions of the target object are obtained. Considering the influence of the mechanical arm clamp on grabbing, these positions are then compensated and adjusted according to the size of the clamp to obtain an accurate target grabbing position, improving the accuracy of the target grabbing point. Finally, the mechanical arm is controlled to grab the target object based on the target grabbing position, improving the success rate and efficiency of grabbing, reducing the number of re-planning passes, and meeting practical requirements.
Before the technical scheme provided by the application is expanded and specifically explained, the application scene of the mechanical arm grabbing related to the application is briefly explained.
Referring to fig. 1, a schematic diagram of an application scenario captured by a mechanical arm according to an embodiment of the present application is shown; as shown in fig. 1, the mechanical arm grabbing application scenario includes: industrial personal computer 101, robotic arm 102, and depth camera 103.
For example, in a grabbing scene with a fixed field of view, the depth camera 103 may be fixedly mounted above the robotic arm 102 performing the grabbing operation, i.e., the position of the depth camera 103 does not change as the arm moves. In a grabbing scene with a dynamic field of view, the depth camera 103 may instead be fixedly mounted on the end effector of the robotic arm 102, i.e., its position changes as the arm moves.
For example, the industrial personal computer may be a terminal device having a data processing function, such as a personal computer, a notebook computer, a smart phone, and a tablet computer.
For example, the mechanical arm may be a Rokae Xmate-series seven-axis arm whose end gripper is a robotiq-2f-140 clamping jaw, and the target object to be grabbed may be a common daily item such as milk tea, shower gel, or a plastic box.
For example, the depth camera may be a Time-of-Flight (TOF) camera, which measures the distance between itself and surrounding objects, i.e., depth information.
The industrial personal computer is respectively in communication connection with the mechanical arm and the depth camera, and the depth camera is used for collecting point cloud data of the grabbing area and sending the collected point cloud data to the industrial personal computer; the industrial personal computer processes the point cloud data to obtain an accurate target grabbing point, performs track planning according to the current position of the mechanical arm and the target grabbing point, and controls the mechanical arm to grab the target object along the planned moving track to achieve accurate grabbing of the target object.
The implementation principle of the steps of the mechanical arm grabbing method provided by the application and the corresponding beneficial effects are described below through a plurality of specific embodiments.
In an embodiment, referring to fig. 2, a method for capturing an arm is provided, and optionally, an execution body of the method may be an industrial personal computer in the application scenario of capturing an arm shown in fig. 1.
It should be understood that in other embodiments, the sequence of some steps in the mechanical arm grabbing method may be interchanged according to actual needs, or some steps in the mechanical arm grabbing method may be omitted or deleted. As shown in fig. 2, the method includes:
s201, acquiring point cloud data of a grabbing area.
Wherein, the target object to be grabbed is placed in the grabbing area. For example, as shown with reference to fig. 3, for example, an object 1, an object 2, an object 3, and the like are placed in the gripping area. Namely, the point cloud data acquired by the depth camera comprises the point cloud data of all objects to be grabbed in the grabbing area.
S202, inputting the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object.
The grabbing pose recognition model is trained on a large number of grabbing pose samples, which include both poses the mechanical arm can reach and poses it cannot reach.
Optionally, the acquired point cloud data is input into the grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object; that is, the model identifies from the point cloud data which grabbing positions the mechanical arm can reach, ensuring that the output poses satisfy the arm's reachability.
S203, adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain a target grabbing position of the target object.
The size of the mechanical arm clamp refers to the width of the mechanical arm clamp after being opened when grabbing objects.
In this embodiment, considering that opening and closing the mechanical arm clamp may shift the central grabbing point and cause grabbing failure, the identified plurality of selectable grabbing positions are dynamically compensated according to the size of the clamp to obtain an accurate target grabbing position. This improves the accuracy of the target grabbing position, solves the prior-art problem of grabbing failures caused by treating the target grabbing point and the clamp's actual acting point as the same position, and improves the grabbing success rate.
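The patent does not give a compensation formula in this passage, so the following is only a minimal sketch of one plausible scheme: shift the predicted grasp center along the approach axis by an offset that grows with the jaw opening. The linear model, the `finger_depth` parameter, and the 0.140 m maximum opening (taken from the robotiq-2f-140 example above) are all assumptions.

```python
import numpy as np

def compensate_grasp_position(grasp_center, approach_dir, gripper_open_width,
                              finger_depth, max_open_width=0.140):
    """Hypothetical clamp-size compensation: wider jaw openings push the
    effective contact point back, so the target point is moved forward
    along the (normalized) approach direction by a fraction of the finger
    depth proportional to the opening width."""
    approach = np.asarray(approach_dir, dtype=float)
    approach /= np.linalg.norm(approach)              # unit approach vector
    offset = finger_depth * gripper_open_width / (2 * max_open_width)
    return np.asarray(grasp_center, dtype=float) + offset * approach
```

For a fully opened 0.140 m jaw and a 0.04 m finger depth, the grasp point is advanced 0.02 m along the approach direction.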
S204, controlling the mechanical arm to grasp the target object according to the target grasping position.
In this embodiment, trajectory planning is performed according to the current position of the mechanical arm and the target grabbing point; for example, a collision-free moving trajectory can be generated with MoveIt, and the mechanical arm is controlled to grab the target object along the planned trajectory. This achieves accurate grabbing of the target object, improves the grabbing efficiency of the mechanical arm, reduces the number of re-planning passes, and meets practical requirements.
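As a stand-in for the planner interface, the sketch below only linearly interpolates waypoints between the current position and the target grabbing position; MoveIt's actual output is a collision-free trajectory, which this toy does not attempt to reproduce.

```python
import numpy as np

def interpolate_trajectory(current_pose, target_pose, n_waypoints=10):
    """Return n_waypoints positions from the current pose to the target
    grabbing position along a straight line (interface sketch only; no
    collision checking)."""
    current = np.asarray(current_pose, dtype=float)
    target = np.asarray(target_pose, dtype=float)
    alphas = np.linspace(0.0, 1.0, n_waypoints)
    return [(1 - a) * current + a * target for a in alphas]
```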
In summary, the embodiment of the present application provides a mechanical arm grabbing method. In this scheme, the acquired point cloud data of the grabbing area is first input into a pre-trained grabbing pose recognition model, which outputs a plurality of selectable grabbing positions of the target object. Then, considering the influence of the mechanical arm clamp on grabbing, the plurality of selectable grabbing positions are compensated and adjusted according to the size of the clamp to obtain an accurate target grabbing position, improving its accuracy. Finally, the mechanical arm is controlled to grab the target object based on the target grabbing position, improving the success rate and efficiency of grabbing, reducing the number of re-planning passes, meeting practical requirements, and solving the prior-art problem of grabbing failures caused by treating the target grabbing point and the clamp's actual acting point as the same position.
Optionally, the step S202 includes:
Inputting the point cloud data into the grabbing pose recognition model; the model identifies a plurality of initial grabbing positions and the reachable attribute of each initial grabbing position based on the point cloud data, where the reachable attribute indicates whether the initial grabbing position is reachable, and a plurality of selectable grabbing positions are determined from the initial grabbing positions according to the reachable attributes.
For example, in the movement space of the robot arm, the initial grasping position 1 is a position point that the robot arm cannot reach, i.e., a singular point, and it may be determined that the reachable property of the initial grasping position 1 is unreachable.
This embodiment addresses the problem that grabbing poses obtained by existing solutions may be unreachable by the mechanical arm, resulting in low grabbing efficiency. The pre-trained grabbing pose recognition model therefore processes the acquired point cloud data to identify a plurality of initial grabbing positions together with the reachable attribute of each. The initial grabbing positions are then screened by their reachable attributes: unreachable positions are removed and reachable ones retained, yielding the plurality of selectable grabbing positions.
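The screening step above reduces to a filter over candidate grasps. A minimal sketch (the dict layout of a grasp candidate is an assumption for illustration, not taken from the patent):

```python
def select_reachable_grasps(initial_grasps):
    """Keep only grasp candidates whose 'reachable' attribute is True.

    `initial_grasps` is a hypothetical list of dicts as the recognition
    model might emit them: a position and a reachability flag."""
    return [g for g in initial_grasps if g["reachable"]]
```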
Therefore, a plurality of selectable grabbing positions output by the grabbing pose recognition model can be guaranteed to be all the position points which can be reached by the mechanical arm, the mechanical arm kinematics constraint is considered by each selectable grabbing position, the repeated re-planning process caused by no solution of inverse kinematics in the planning process is avoided, the planning time is shortened, and the working efficiency of the mechanical arm is improved.
The following embodiment explains how the grabbing pose recognition model referred to in S202 above is trained.
Optionally, referring to fig. 4, the above-mentioned capturing pose recognition model is obtained by training in the following manner:
s401, acquiring a plurality of track points which are randomly generated.
S402, carrying out reachability analysis on each track point to obtain labels of each track point.
Wherein the tag is used to indicate whether the track point is reachable or unreachable.
In this embodiment, a series of track points is randomly generated, and inverse kinematics is used to judge whether each track point is a position point the mechanical arm can reach. Each track point is then labelled according to its reachability; for example, the label of track point 1 is reachable and the label of track point 2 is unreachable.
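Labelling random track points via inverse kinematics can be sketched with a toy arm. A real seven-axis arm needs a full IK solver; for a planar two-link arm, reachability has a closed form (the point's distance from the base must lie in [|l1-l2|, l1+l2]), which is enough to illustrate the labelling loop. The link lengths here are arbitrary.

```python
import math
import random

def is_reachable_2link(x, y, l1=0.4, l2=0.3):
    """Closed-form reachability for a planar 2-link arm."""
    r = math.hypot(x, y)
    return abs(l1 - l2) <= r <= l1 + l2

def make_labelled_dataset(n_points, seed=0):
    """Randomly generate track points and label each reachable/unreachable."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_points):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        label = "reachable" if is_reachable_2link(x, y) else "unreachable"
        data.append(((x, y), label))
    return data
```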
S403, obtaining a training data set according to each track point and the label of each track point.
S404, performing iterative training on the initial grabbing pose recognition model by using a training data set, updating weight parameters of the initial grabbing pose recognition model after each iteration until the initial grabbing pose recognition model meets a preset convergence condition, and taking the initial grabbing pose recognition model meeting the convergence condition as the grabbing pose recognition model.
In this embodiment, the training data set may be used to iteratively train the initial grabbing pose recognition model. After each round of training, the loss value of that round is obtained and used to update the model's weight parameters. When, after several rounds, the loss value falls below a preset threshold, the model is judged to satisfy the preset convergence condition, and the converged initial model is taken as the grabbing pose recognition model.
For example, cross entropy may be used as the loss function. The cross-entropy loss takes as input the predicted class output by the model (i.e., the prediction produced by forward computation) and the actual ground-truth class, and outputs the loss value.
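For a single sample, cross entropy is simply the negative log of the probability the model assigned to the true class; a minimal sketch:

```python
import math

def cross_entropy(pred_probs, true_class):
    """Cross-entropy loss for one sample: -log(p[true_class]).
    `pred_probs` is the model's softmax output; `true_class` is the
    ground-truth label index."""
    eps = 1e-12                      # avoid log(0)
    return -math.log(max(pred_probs[true_class], eps))
```

A confident correct prediction yields a small loss; an uncertain one yields a larger loss.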
In another implementation, the training data set can be divided into a training set, a validation set, and a test set at a preset ratio of 7:2:1. The weight parameters of the initial grabbing pose recognition model are adjusted using the training set and validation set, thereby training the model to obtain the grabbing pose recognition model; the model is then evaluated on the test set to obtain its test metrics. The test metrics evaluate the model's recognition effect and accuracy on point cloud data and may include precision, recall, false-detection rate, and miss rate.
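The 7:2:1 split can be sketched as a shuffle followed by slicing (the seeded shuffle is an implementation choice, not from the patent):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=0):
    """Shuffle and split samples into train/validation/test sets at the
    given ratio (7:2:1 by default)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n = len(samples)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]
    return train, val, test
```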
Specifically, in the process of training the initial recognition model, the learning rate in the training parameters of the initial model is initially used for carrying out first-round training on the initial recognition model, so as to obtain the recognition model of the first-round training and the loss value of the first-round training. And adjusting the learning rate in the initial recognition model according to the loss value of the first training to obtain the learning rate of the second training, and training the second training according to the learning rate of the second training until the model converges to obtain the target recognition model. After the target recognition model is obtained, the test data set is input into the target recognition model for testing, and the test index corresponding to the target recognition model is obtained.
Optionally, referring to fig. 5, iteratively training the initial grabbing pose recognition model with the training data set and updating its weight parameters after each iteration in step S404 includes:
S501, replacing the fully connected layer in the initial grabbing pose recognition model with a first fully connected layer.
The initial grabbing pose recognition model comprises a plurality of training layers, for example training layer 1, training layer 2, training layer 3, and so on; its fully connected layer is replaced with the first fully connected layer.
S502, training the first fully connected layer together with each training layer of the initial grabbing pose recognition model other than the first fully connected layer, using the training data set, to obtain a loss value on the training data set.
S503, updating the weight parameters of the first fully connected layer according to the loss value on the training data set, while keeping the weight parameters of all training layers other than the first fully connected layer unchanged.
In this embodiment, the selected initial grabbing pose recognition model is, for example, the GraspNet model. To improve recognition accuracy, the GraspNet model is fine-tuned: its fully connected layer is removed, and a new fully connected layer (the first fully connected layer) is appended for a new classification task, namely judging whether a generated track point is reachable.
The first fully connected layer and every other training layer of the initial grabbing pose recognition model are then trained with the training data set to obtain the loss value on the training data set. The weight parameters of the first fully connected layer are updated according to this loss value, while every training layer other than the first fully connected layer is frozen so that its weight parameters are not updated during fine-tuning. After several iterations, if the loss value of the current iteration satisfies the preset convergence condition, the resulting model is taken as the finally trained grabbing pose recognition model.
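The replace-the-head-and-freeze recipe can be sketched in PyTorch. GraspNet itself is not reproduced here; a toy MLP stands in for the backbone, and the 2-class head corresponds to the reachable/unreachable decision.

```python
import torch
import torch.nn as nn

# Stand-in backbone (the text's actual model is GraspNet; this tiny MLP
# only demonstrates the fine-tuning recipe).
backbone = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 8),  nn.ReLU(),
    nn.Linear(8, 4),               # original fully connected output layer
)

# 1) Replace the final fully connected layer with a new binary head
#    (reachable / unreachable) -- the "first fully connected layer".
backbone[-1] = nn.Linear(8, 2)

# 2) Freeze every layer except the new head so only its weights update.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("4.")   # "4" = index of the new head

# Only the unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in backbone.parameters() if p.requires_grad), lr=1e-3)
```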
Thus, at the perception stage, this scheme fine-tunes the existing GraspNet model, i.e., adjusts its output layer, so that the grabbing positions produced by the finally trained grabbing pose recognition model are position points the mechanical arm can reach, satisfying the arm's kinematic requirements.
The following embodiments will specifically explain how the plurality of selectable gripping positions are adjusted.
Optionally, referring to fig. 6, the step S203 includes:
s601, clustering a plurality of selectable grabbing positions to obtain clustered positions.
The clustered positions are initial grabbing points of the target object.
In one implementation, for example, a k-means clustering algorithm may be used to cluster a plurality of selectable capture positions output by the capture pose recognition model, so as to obtain an initial capture point of the target object.
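A minimal k-means over 3-D grasp positions, as one way the clustering mentioned here could look (deterministic seeding from the first k points is a simplification; production code would use a library implementation):

```python
import math

def kmeans(points, k, iters=20):
    """Minimal k-means: the first k points seed the centroids, then
    standard assign/update steps. Returns (centroids, assignments)."""
    centroids = [list(p) for p in points[:k]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if assign[i] == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids, assign
```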
S602, adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object.
Optionally, in order to improve the accuracy of the finally obtained target grabbing position, the initial grabbing point obtained after clustering may be adjusted in combination with the size of the mechanical arm clamp to obtain an accurate target grabbing position.
Alternatively, referring to fig. 7, the step S601 includes:
s701, clustering a plurality of selectable grabbing positions to obtain a plurality of clusters.
S702, determining a point set to be grabbed according to the number of the selectable grabbing positions in each cluster.
In this embodiment, for example, a clustering algorithm is used to cluster the plurality of selectable grabbing positions output by the grabbing pose recognition model, dividing grabbing positions that differ greatly into different groups to obtain a plurality of clusters, each containing several grabbing positions. The cluster containing the largest number of grabbing positions is taken as the point set to be grabbed, thereby removing unreasonable grabbing poses that differ greatly from the rest. For example, if cluster 1 contains the largest number of grabbing positions, cluster 1 is taken as the point set to be grabbed.
S703, determining the clustered positions according to the optional grabbing positions in the point set to be grabbed.
Optionally, the grabbing pose recognition model outputs the selectable grabbing positions and the weight values of the selectable grabbing positions.
In one implementation, for example, the plurality of selectable grabbing positions in the point set to be grabbed may be sorted by weight value from high to low, and the first selectable grabbing position in the sorted order is taken as the clustered position, thereby obtaining the initial grabbing point of the target object.
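The clustering-and-selection steps above (S701 through S703) can be sketched compactly as follows; greedy radius grouping stands in for the unspecified clustering algorithm (the text names k-means as one option), and the radius and test data are made up for illustration.

```python
import numpy as np

def initial_grasp_point(positions, weights, radius=0.5):
    """Group the selectable grabbing positions, keep the largest group as
    the point set to be grabbed, and return its highest-weight member as
    the clustered position (initial grabbing point)."""
    clusters = []
    for i, p in enumerate(positions):
        for c in clusters:  # join the first cluster whose seed is close enough
            if np.linalg.norm(positions[c[0]] - p) < radius:
                c.append(i)
                break
        else:
            clusters.append([i])
    point_set = max(clusters, key=len)               # largest cluster wins
    best = max(point_set, key=lambda i: weights[i])  # rank by model weight
    return positions[best]

# Three candidates near the origin, two near (5, 5, 5), and one outlier;
# the origin group is largest, and its highest-weight member is chosen.
pts = np.array([[0.00, 0.00, 0.0], [0.10, 0.00, 0.0], [0.05, 0.05, 0.0],
                [5.00, 5.00, 5.0], [5.10, 5.00, 5.0], [9.00, 9.00, 9.0]])
w = np.array([0.2, 0.9, 0.5, 0.8, 0.7, 0.99])
print(initial_grasp_point(pts, w))  # the (0.1, 0, 0) candidate
```

Note how the outlier with the highest raw weight (0.99) is discarded: it sits in a one-member cluster, so the size filter removes it before the weight ranking is applied.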
Alternatively, referring to fig. 8, the step S703 includes:
S801, determining the width of the target object according to each optional grabbing position.
In the present embodiment, for example, the point set to be grabbed contains 10 selectable grabbing positions, which may be denoted P1(x1, y1, z1), P2(x2, y2, z2), …, P10(x10, y10, z10). From the abscissas and ordinates of the selectable grabbing positions, the maximum and minimum abscissas Xmax and Xmin and the maximum and minimum ordinates Ymax and Ymin are determined; a first difference Xmax - Xmin and a second difference Ymax - Ymin are then computed, and if the first difference is larger than the second difference, the first difference is taken as the width of the target object (otherwise the second difference is taken).
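The width computation of step S801 can be sketched as below; the branch for when the second difference is larger is implied rather than stated in the text, so using `max` of the two extents is an assumption.

```python
import numpy as np

def object_width(grasp_points):
    """Width of the target object from the candidate grasp positions:
    the larger of the x-extent (Xmax - Xmin) and y-extent (Ymax - Ymin)."""
    first_diff = grasp_points[:, 0].max() - grasp_points[:, 0].min()
    second_diff = grasp_points[:, 1].max() - grasp_points[:, 1].min()
    return max(first_diff, second_diff)

pts = np.array([[0.00, 0.00, 0.1],
                [0.40, 0.10, 0.1],
                [0.20, 0.30, 0.1]])
print(object_width(pts))  # x-extent 0.4 beats y-extent 0.3
```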
S802, carrying out a weighted average over the selectable grabbing positions according to each selectable grabbing position and its weight value, to obtain the initial grabbing position of the target object.
In another implementation, for example, each selectable grabbing position is multiplied by its weight value and the results are summed; the ratio of this summation result to the number of selectable grabbing positions in the point set to be grabbed is taken as the initial grabbing position P0(x0, y0, z0) of the target object.
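As described, this average divides the weighted sum by the number of candidates rather than by the sum of the weights; a direct sketch of that computation:

```python
import numpy as np

def initial_grasp_position(points, weights):
    """Multiply each selectable grabbing position by its weight value,
    sum the results, then divide by the candidate count, per the text."""
    weighted_sum = (points * weights[:, None]).sum(axis=0)
    return weighted_sum / len(points)  # P0 = (x0, y0, z0)

pts = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
w = np.array([1.0, 1.0])
print(initial_grasp_position(pts, w))  # → [2. 0. 0.]
```

With uniform unit weights this reduces to the plain centroid of the candidates, as the example shows.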
It should be appreciated that the initial grabbing position includes the center grabbing point and the grabbing direction, which indicate the pose the clamping jaw should reach. Since the grabbing point changes with the opening and closing state of the clamping jaw, the opening degree of the clamping jaw needs to be determined according to the size of the target object, and the position of the grabbing point is then dynamically adjusted; this yields the compensated center grabbing pose, i.e., the end point of the mechanical arm trajectory planning.
Optionally, referring to fig. 9, step 602 includes:
S901, determining a reference rod swinging angle according to the width of the target object, the projected length of the second connecting piece of the mechanical arm clamp in the horizontal direction, the distance between the root of the third connecting piece of the mechanical arm clamp and the center of the mechanical arm clamp, and the length of the third connecting piece of the mechanical arm clamp.
S902, determining grabbing depth information according to the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, the length of the third connecting piece, the swinging angle of the reference rod, the projection length of the second connecting piece in the vertical direction and the length of the first connecting piece of the mechanical arm clamp.
S903, obtaining a target grabbing position of the target object according to the grabbing depth information.
Optionally, referring to fig. 10, which is a schematic diagram of the mechanical arm clamp, a geometric relationship is established with the distal end of the mechanical arm as the reference point, as in the following formula (1):
l/2 = Δ1 + Δ3 + p1·sin θ1 (1)
where l = 2·l1 is the known opening width of the mechanical arm clamp (i.e., the width of the target object calculated above). Fig. 10 shows three connecting pieces in total, namely the first connecting piece, the second connecting piece and the third connecting piece: h2 is the length of the first connecting piece, Δ2 is the projected length of the second connecting piece in the vertical direction, and p1 is the length of the third connecting piece.
With continued reference to fig. 10, Δ3 denotes the distance from the root of the third connecting piece to the center of the mechanical arm clamp, Δ1 is the projected length of the second connecting piece in the horizontal direction, and θ1 is the angle between the third connecting piece and the mid-vertical line of the gripper, which may also be called the reference rod swinging angle. Rearranging formula (1), θ1 can be obtained as:
θ1 = arcsin((l/2 - Δ1 - Δ3) / p1)
Further, h is the grasping depth from the base of the mechanical arm clamp to the center of the gripper; referring to fig. 10, the grasping depth information can be calculated by the following formula (2):
h = h1 + p1·cos θ1 + Δ2 + h2/2 (2)
where h1 in formula (2) is the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, and Δ2 is the projected length of the second connecting piece in the vertical direction. When grabbing the target object, the grabbing depth is expected to be at the center of the first connecting piece, which is why h2/2 appears in formula (2); that is, the distance from the center of the first connecting piece to the base of the mechanical arm clamp is taken as the grasping depth h.
Therefore, the coordinate value z0 of the initial grabbing position P0(x0, y0, z0) on the Z axis can be replaced with the grasping depth h calculated above, giving the final target grabbing position P(x0, y0, h). This adjusts the grabbing depth of the initial grabbing position so that the target object is grabbed at the optimal depth, avoiding grabbing failures caused by the mechanical arm clamp going too deep or too shallow when grabbing the target object.
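The depth adjustment can be sketched as follows. Since formulas (1) and (2) appear as figures in the original patent, the exact expressions used here are inferred from the surrounding variable descriptions and should be treated as assumptions rather than the patent's verbatim formulas.

```python
import math

def grasp_depth(width, d1, d3, p1, h1, d2, h2):
    """Grasping depth h from the clamp geometry.
    width: target object width (clamp opening l); d1: horizontal
    projection of the second connecting piece; d3: root of the third
    connecting piece to the clamp center; p1: third-connecting-piece
    length; h1: shortest vertical base-to-third-connecting-piece
    distance; d2: vertical projection of the second connecting piece;
    h2: first-connecting-piece length."""
    # Formula (1), inferred: width/2 = d1 + d3 + p1*sin(theta1)
    theta1 = math.asin((width / 2 - d1 - d3) / p1)  # reference rod swinging angle
    # Formula (2), inferred: h = h1 + p1*cos(theta1) + d2 + h2/2
    return h1 + p1 * math.cos(theta1) + d2 + h2 / 2

# With width/2 exactly d1 + d3 the third connecting piece hangs vertical
# (theta1 = 0), so the depth is just the stacked vertical lengths.
depth = grasp_depth(width=0.04, d1=0.01, d3=0.01, p1=0.05,
                    h1=0.02, d2=0.015, h2=0.03)
print(round(depth, 6))  # → 0.1
```

The wider the object, the larger θ1 and the smaller p1·cos θ1, so the computed depth h shrinks as the jaws open, which is exactly the clamp-length change the compensation accounts for.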
In this embodiment, since the length of the mechanical arm clamp changes between the fully opened and fully closed states, the grabbing-depth adjustment is calculated from the physical characteristics of the mechanical arm clamp and the width of the target object; that is, the initial grabbing position P0(x0, y0, z0) of the target object is dynamically compensated to obtain the target grabbing position P(x0, y0, h), the end point of the mechanical arm trajectory planning. This improves the accuracy of the target grabbing position and thus the grabbing success rate of the mechanical arm.
Optionally, referring to fig. 11, the step S204 includes:
S1101, generating a moving path according to the current position of the mechanical arm and the target grabbing position.
S1102, controlling the mechanical arm to move along the moving path and grasp the target object.
In one implementation, for example, a mechanical arm trajectory is planned in Cartesian space according to the current position of the mechanical arm and the target grabbing position, a moving path is generated, and the mechanical arm is controlled to move through each position point on the moving path, thereby grabbing the target object.
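A minimal position-only sketch of steps S1101 and S1102; a real Cartesian-space planner would also interpolate orientation and check for collisions, so this is an illustration, not the patent's planner.

```python
import numpy as np

def linear_path(current, target, n_points=5):
    """Straight-line Cartesian path from the arm's current position to
    the target grabbing position, as a sequence of waypoints."""
    return np.linspace(current, target, n_points)

path = linear_path(np.array([0.0, 0.0, 0.3]),   # current arm position
                   np.array([0.2, 0.1, 0.1]))   # target grabbing position
print(path[0], path[-1])  # endpoints match the current and target positions
```

Controlling the arm then amounts to commanding each waypoint in `path` in order until the end effector reaches the target grabbing position.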
Optionally, referring to fig. 12, the method further includes:
S1201, in the process that the mechanical arm moves along the moving path, determining whether the updated position of the mechanical arm is any position point on the moving path.
And S1202, if not, regenerating a new moving path according to the updated position and the target grabbing position of the mechanical arm, and controlling the mechanical arm to move along the new moving path and grab the target object.
Optionally, to ensure the robustness of the whole grabbing process, the system should be able to cope with special situations. In this scheme, while the mechanical arm moves along the moving path, it is judged in real time whether the updated position of the mechanical arm is a position point on the moving path. If not, it can be determined that the arm has deviated from the moving path and the path needs to be re-planned: a new collision-free moving path is regenerated according to the updated position of the mechanical arm and the target grabbing position, and the mechanical arm is controlled to move along the new path to the target grabbing position and complete the grabbing action. If a collision-free trajectory cannot be generated, perception and planning are performed again.
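The on-path check of step S1201 can be sketched as a tolerance test against the planned waypoints; the tolerance is an assumption, since the text only asks whether the updated position is a point on the path.

```python
import numpy as np

def on_path(position, path, tol=1e-3):
    """True if the arm's updated position coincides (within tol) with
    any position point on the planned moving path; False triggers
    re-planning from the updated position to the target."""
    return any(np.linalg.norm(position - p) <= tol for p in path)

path = np.linspace([0.0, 0.0, 0.3], [0.2, 0.1, 0.1], 5)
print(on_path(np.array([0.10, 0.05, 0.20]), path))  # a waypoint on the path
print(on_path(np.array([0.10, 0.30, 0.20]), path))  # deviated: replan
```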
Based on the same inventive concept, the embodiment of the application also provides a mechanical arm grabbing device corresponding to the mechanical arm grabbing method, and because the principle of solving the problem by the device in the embodiment of the application is similar to that of the mechanical arm grabbing method in the embodiment of the application, the implementation of the device can be referred to the implementation of the method, and the repetition is omitted.
Fig. 13 is a schematic structural diagram of a mechanical arm grabbing device according to an embodiment of the present application, and referring to fig. 13, the device includes:
an obtaining module 1301, configured to obtain point cloud data of a capturing area, where a target object to be captured is placed in the capturing area;
the recognition module 1302 is configured to input the point cloud data to a capture pose recognition model obtained by training in advance, so as to obtain a plurality of selectable capture positions of the target object;
The adjusting module 1303 is configured to adjust the plurality of selectable capturing positions according to the size of the mechanical arm fixture, so as to obtain a target capturing position of the target object;
and a control module 1304, configured to control the robotic arm to grasp the target object according to the target grasping position.
As an alternative embodiment, the identification module 1302 is further configured to:
Inputting the point cloud data into the grabbing pose recognition model, recognizing a plurality of initial grabbing positions and reachable attributes of each initial grabbing position based on the point cloud data by the grabbing pose recognition model, and determining a plurality of selectable grabbing positions from the plurality of initial grabbing positions according to the reachable attributes, wherein the reachable attributes are used for indicating whether the initial grabbing positions are reachable or not.
As an alternative embodiment, the apparatus further comprises: a training module;
Training module for:
acquiring a plurality of track points which are randomly generated;
Carrying out reachability analysis on each track point to obtain a label of each track point, wherein the label is used for indicating whether the track point is reachable or not;
obtaining a training data set according to each track point and the label of each track point;
And carrying out iterative training on the initial grabbing pose recognition model by using the training data set, updating weight parameters of the initial grabbing pose recognition model after each iteration until the initial grabbing pose recognition model meets a preset convergence condition, and taking the initial grabbing pose recognition model meeting the convergence condition as the grabbing pose recognition model.
As an alternative embodiment, the training module is further configured to:
replacing a full connection layer in the initial grabbing pose recognition model with a first full connection layer, wherein the initial grabbing pose recognition model comprises a plurality of training layers;
training each training layer except the first full-connection layer in the first full-connection layer and the initial grabbing pose recognition model by using the training data set to obtain a loss value of the training data set;
And updating the weight parameters of the first full-connection layer according to the loss value of the training data set, wherein the weight parameters of all training layers except the first full-connection layer in the initial grabbing pose recognition model are kept unchanged.
As an alternative embodiment, the adjusting module 1303 is further configured to:
Clustering the plurality of selectable grabbing positions to obtain clustered positions;
and adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object.
As an alternative embodiment, the adjusting module 1303 is further configured to:
Clustering the plurality of selectable grabbing positions to obtain a plurality of clusters;
Determining a point set to be grabbed according to the number of the selectable grabbing positions in each cluster;
and determining the clustered positions according to the optional grabbing positions in the point set to be grabbed.
As an alternative embodiment, the adjusting module 1303 is further configured to:
Determining the width of the target object according to each selectable grabbing position;
And carrying out weighted average on each optional grabbing position according to the optional grabbing positions and the weight value of each optional grabbing position to obtain the initial grabbing position of the target object.
As an optional implementation manner, the weight value is output by the grabbing pose recognition model.
As an alternative embodiment, the adjusting module 1303 is further configured to:
Determining a reference rod swinging angle according to the width of the target object, the projected length of the second connecting piece of the mechanical arm clamp in the horizontal direction, the distance between the root of the third connecting piece of the mechanical arm clamp and the center of the mechanical arm clamp, and the length of the third connecting piece of the mechanical arm clamp;
determining grabbing depth information according to the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, the length of the third connecting piece, the swinging angle of the reference rod, the projection length of the second connecting piece in the vertical direction and the length of the first connecting piece of the mechanical arm clamp;
and obtaining the target grabbing position of the target object according to the grabbing depth information.
As an alternative embodiment, the control module 1304 is further configured to:
generating a moving path according to the current position of the mechanical arm and the target grabbing position;
and controlling the mechanical arm to move along the moving path and grabbing the target object.
As an alternative embodiment, the device further comprises:
the judging module is used for determining whether the updated position of the mechanical arm is any position point on the moving path or not in the moving process of the mechanical arm along the moving path;
the control module 1304 is further configured to:
If not, regenerating a new moving path according to the updated position of the mechanical arm and the target grabbing position, and controlling the mechanical arm to move along the new moving path and grab the target object.
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs), etc. For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU) or another processor that can invoke the program code. For another example, the modules may be integrated together and implemented in the form of a System-on-a-Chip (SoC).
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where the electronic device includes: a processor 1401, a storage medium 1402 and a bus 1403, the storage medium 1402 storing machine-readable instructions executable by the processor 1401, the processor 1401 and the storage medium 1402 communicating over the bus 1403 when the electronic device is running, the processor 1401 executing the machine-readable instructions to perform the steps of:
acquiring point cloud data of a grabbing area, wherein a target object to be grabbed is placed in the grabbing area;
inputting the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object;
the plurality of selectable grabbing positions are adjusted according to the size of the mechanical arm clamp, and the target grabbing position of the target object is obtained;
And controlling the mechanical arm to grasp the target object according to the target grasping position.
As an optional implementation manner, the processor 1401 obtains a plurality of optional grabbing positions of the target object after executing the input of the point cloud data into a pre-trained grabbing pose recognition model, which is specifically configured to:
Inputting the point cloud data into the grabbing pose recognition model, recognizing a plurality of initial grabbing positions and reachable attributes of each initial grabbing position based on the point cloud data by the grabbing pose recognition model, and determining a plurality of selectable grabbing positions from the plurality of initial grabbing positions according to the reachable attributes, wherein the reachable attributes are used for indicating whether the initial grabbing positions are reachable or not.
As an alternative embodiment, the processor 1401 is further configured to perform:
acquiring a plurality of track points which are randomly generated;
Carrying out reachability analysis on each track point to obtain a label of each track point, wherein the label is used for indicating whether the track point is reachable or not;
obtaining a training data set according to each track point and the label of each track point;
And carrying out iterative training on the initial grabbing pose recognition model by using the training data set, updating weight parameters of the initial grabbing pose recognition model after each iteration until the initial grabbing pose recognition model meets a preset convergence condition, and taking the initial grabbing pose recognition model meeting the convergence condition as the grabbing pose recognition model.
As an alternative embodiment, the processor 1401 performs iterative training on the initial capture pose recognition model after performing the using the training data set, and updates the weight parameters of the initial capture pose recognition model after each iteration, specifically for:
replacing a full connection layer in the initial grabbing pose recognition model with a first full connection layer, wherein the initial grabbing pose recognition model comprises a plurality of training layers;
training each training layer except the first full-connection layer in the first full-connection layer and the initial grabbing pose recognition model by using the training data set to obtain a loss value of the training data set;
And updating the weight parameters of the first full-connection layer according to the loss value of the training data set, wherein the weight parameters of all training layers except the first full-connection layer in the initial grabbing pose recognition model are kept unchanged.
As an optional implementation manner, the processor 1401 is configured to, when executing the adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm fixture, obtain a target grabbing position of the target object, specifically:
Clustering the plurality of selectable grabbing positions to obtain clustered positions;
and adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object.
As an optional implementation manner, the processor 1401 is configured to, when performing the clustering on the plurality of optional grabbing positions, obtain a clustered position, specifically configured to:
Clustering the plurality of selectable grabbing positions to obtain a plurality of clusters;
Determining a point set to be grabbed according to the number of the selectable grabbing positions in each cluster;
and determining the clustered positions according to the optional grabbing positions in the point set to be grabbed.
As an optional implementation manner, the processor 1401 is configured to determine the post-cluster position after executing the determining according to each optional grabbing position in the to-be-grabbed point set, specifically:
Determining the width of the target object according to each selectable grabbing position;
And carrying out weighted average on each optional grabbing position according to the optional grabbing positions and the weight value of each optional grabbing position to obtain an initial grabbing point of the target object.
As an optional implementation manner, the weight value is output by the grabbing pose recognition model.
As an optional implementation manner, the processor 1401 adjusts the post-clustering position according to the size of the mechanical arm fixture to obtain a target capturing position of the target object, which is specifically configured to:
Determining a reference rod swinging angle according to the width of the target object, the projected length of the second connecting piece of the mechanical arm clamp in the horizontal direction, the distance between the root of the third connecting piece of the mechanical arm clamp and the center of the mechanical arm clamp, and the length of the third connecting piece of the mechanical arm clamp;
determining grabbing depth information according to the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, the length of the third connecting piece, the swinging angle of the reference rod, the projection length of the second connecting piece in the vertical direction and the length of the first connecting piece of the mechanical arm clamp;
and obtaining the target grabbing position of the target object according to the grabbing depth information.
As an alternative embodiment, the processor 1401 is configured to control the robotic arm to grasp the target object when executing the step of grasping the target object according to the target grasping position, specifically:
generating a moving path according to the current position of the mechanical arm and the target grabbing position;
and controlling the mechanical arm to move along the moving path and grabbing the target object.
As an alternative embodiment, the processor 1401 is further configured to, when executing:
Determining whether the updated position of the mechanical arm is any position point on the moving path in the moving process of the mechanical arm along the moving path;
If not, regenerating a new moving path according to the updated position of the mechanical arm and the target grabbing position, and controlling the mechanical arm to move along the new moving path and grab the target object.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program which when executed by a processor is adapted to perform the steps of:
acquiring point cloud data of a grabbing area, wherein a target object to be grabbed is placed in the grabbing area;
inputting the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object;
the plurality of selectable grabbing positions are adjusted according to the size of the mechanical arm clamp, and the target grabbing position of the target object is obtained;
And controlling the mechanical arm to grasp the target object according to the target grasping position.
As an optional implementation manner, the processor is configured to obtain a plurality of optional grabbing positions of the target object after executing the step of inputting the point cloud data into a pre-trained grabbing pose recognition model, where the steps are specifically:
Inputting the point cloud data into the grabbing pose recognition model, recognizing a plurality of initial grabbing positions and reachable attributes of each initial grabbing position based on the point cloud data by the grabbing pose recognition model, and determining a plurality of selectable grabbing positions from the plurality of initial grabbing positions according to the reachable attributes, wherein the reachable attributes are used for indicating whether the initial grabbing positions are reachable or not.
As an alternative embodiment, the processor is further configured to perform:
acquiring a plurality of track points which are randomly generated;
Carrying out reachability analysis on each track point to obtain a label of each track point, wherein the label is used for indicating whether the track point is reachable or not;
obtaining a training data set according to each track point and the label of each track point;
And carrying out iterative training on the initial grabbing pose recognition model by using the training data set, updating weight parameters of the initial grabbing pose recognition model after each iteration until the initial grabbing pose recognition model meets a preset convergence condition, and taking the initial grabbing pose recognition model meeting the convergence condition as the grabbing pose recognition model.
As an optional implementation manner, the processor performs iterative training on the initial grabbing pose recognition model after performing the training using the training data set, and updates the weight parameters of the initial grabbing pose recognition model after each iteration, which is specifically used for:
replacing a full connection layer in the initial grabbing pose recognition model with a first full connection layer, wherein the initial grabbing pose recognition model comprises a plurality of training layers;
training each training layer except the first full-connection layer in the first full-connection layer and the initial grabbing pose recognition model by using the training data set to obtain a loss value of the training data set;
And updating the weight parameters of the first full-connection layer according to the loss value of the training data set, wherein the weight parameters of all training layers except the first full-connection layer in the initial grabbing pose recognition model are kept unchanged.
As an optional implementation manner, the processor is configured to, when executing the adjustment of the plurality of selectable grabbing positions according to the size of the mechanical arm fixture, obtain a target grabbing position of the target object, specifically configured to:
Clustering the plurality of selectable grabbing positions to obtain clustered positions;
and adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object.
As an optional implementation manner, the processor is configured to, after executing the clustering on the plurality of optional grabbing positions, obtain a clustered position, specifically configured to:
Clustering the plurality of selectable grabbing positions to obtain a plurality of clusters;
Determining a point set to be grabbed according to the number of the selectable grabbing positions in each cluster;
and determining the clustered positions according to the optional grabbing positions in the point set to be grabbed.
As an optional implementation manner, the processor is configured to determine the post-cluster position after executing the step according to each optional grabbing position in the to-be-grabbed point set, specifically:
Determining the width of the target object according to each selectable grabbing position;
And carrying out weighted average on each optional grabbing position according to the optional grabbing positions and the weight value of each optional grabbing position to obtain an initial grabbing point of the target object.
As an optional implementation manner, the weight value is output by the grabbing pose recognition model.
As an optional implementation manner, the processor is configured to adjust the post-clustering position according to the size of the mechanical arm fixture to obtain a target capturing position of the target object, where the target capturing position is specifically configured to:
Determining a reference rod swinging angle according to the width of the target object, the projected length of the second connecting piece of the mechanical arm clamp in the horizontal direction, the distance between the root of the third connecting piece of the mechanical arm clamp and the center of the mechanical arm clamp, and the length of the third connecting piece of the mechanical arm clamp;
Determining grabbing depth information according to the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, the length of the third connecting piece, the swinging angle of the reference rod, the projection length of the second connecting piece in the vertical direction and the length of the first connecting piece of the mechanical arm clamp; and obtaining the target grabbing position of the target object according to the grabbing depth information.
As an optional implementation manner, the processor is configured, when executing the step of controlling the mechanical arm to grasp the target object according to the target grabbing position, specifically to:
generating a moving path according to the current position of the mechanical arm and the target grabbing position;
and controlling the mechanical arm to move along the moving path and grabbing the target object.
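Path generation from the current position to the target grabbing position can be sketched as below. Straight-line Cartesian interpolation is a placeholder assumption; a real system would use a motion planner with inverse kinematics and collision checking.

```python
def generate_path(current, target, steps=10):
    """Interpolate a straight-line Cartesian path from the arm's
    current position to the target grabbing position, returning
    steps + 1 waypoints including both endpoints. (Illustrative
    stand-in for a real planner.)"""
    return [
        tuple(c + (t - c) * k / steps for c, t in zip(current, target))
        for k in range(steps + 1)
    ]
```

The arm controller then visits the waypoints in order until the gripper reaches the target grabbing position and closes on the object.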
As an optional implementation manner, the processor is further configured to:
Determining whether the updated position of the mechanical arm is any position point on the moving path in the moving process of the mechanical arm along the moving path;
If not, regenerating a new moving path according to the updated position of the mechanical arm and the target grabbing position, and controlling the mechanical arm to move along the new moving path and grab the target object.
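The re-planning check above can be sketched as follows: if the arm's updated position no longer lies on any waypoint of the planned path, a new path is generated from the updated position. The tolerance, the waypoint-membership test, and the inline straight-line re-planner are assumptions for illustration.

```python
import math

def follow_or_replan(arm_position, path, target, tolerance=1e-3, steps=10):
    """Return the existing path if the arm's updated position matches
    some waypoint (within `tolerance`); otherwise regenerate a new
    path from the updated position to the target grabbing position."""
    if any(math.dist(arm_position, wp) <= tolerance for wp in path):
        return path  # still on the planned path: keep moving
    # Off-path: re-plan (straight-line interpolation as a stand-in).
    return [
        tuple(a + (t - a) * k / steps for a, t in zip(arm_position, target))
        for k in range(steps + 1)
    ]
```

This mirrors the stated goal of reducing the number of re-plans: the path is regenerated only when the arm has actually drifted off it, not on every control cycle.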
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or the like.

Claims (14)

1. A robotic arm gripping method, the method comprising:
acquiring point cloud data of a grabbing area, wherein a target object to be grabbed is placed in the grabbing area;
inputting the point cloud data into a pre-trained grabbing pose recognition model to obtain a plurality of selectable grabbing positions of the target object;
adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain a target grabbing position of the target object;
And controlling the mechanical arm to grasp the target object according to the target grasping position.
2. The method according to claim 1, wherein said inputting the point cloud data into a pre-trained capture pose recognition model to obtain a plurality of selectable capture positions of the target object comprises:
inputting the point cloud data into the grabbing pose recognition model, recognizing, by the grabbing pose recognition model, a plurality of initial grabbing positions and a reachability attribute of each initial grabbing position based on the point cloud data, and determining the plurality of selectable grabbing positions from the plurality of initial grabbing positions according to the reachability attributes, wherein the reachability attribute is used for indicating whether the corresponding initial grabbing position is reachable.
3. The method according to claim 2, wherein the grabbing pose recognition model is trained by:
acquiring a plurality of track points which are randomly generated;
Carrying out reachability analysis on each track point to obtain a label of each track point, wherein the label is used for indicating whether the track point is reachable or not;
obtaining a training data set according to each track point and the label of each track point;
And carrying out iterative training on the initial grabbing pose recognition model by using the training data set, updating weight parameters of the initial grabbing pose recognition model after each iteration until the initial grabbing pose recognition model meets a preset convergence condition, and taking the initial grabbing pose recognition model meeting the convergence condition as the grabbing pose recognition model.
4. The method according to claim 3, wherein the carrying out iterative training on the initial grabbing pose recognition model by using the training data set and updating the weight parameters of the initial grabbing pose recognition model after each iteration comprises:
replacing a full connection layer in the initial grabbing pose recognition model with a first full connection layer, wherein the initial grabbing pose recognition model comprises a plurality of training layers;
training each training layer except the first full-connection layer in the first full-connection layer and the initial grabbing pose recognition model by using the training data set to obtain a loss value of the training data set;
And updating the weight parameters of the first full-connection layer according to the loss value of the training data set, wherein the weight parameters of all training layers except the first full-connection layer in the initial grabbing pose recognition model are kept unchanged.
5. The method according to claim 1, wherein the adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object comprises:
Clustering the plurality of selectable grabbing positions to obtain clustered positions;
and adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object.
6. The method of claim 5, wherein clustering the plurality of selectable grip locations to obtain clustered locations comprises:
Clustering the plurality of selectable grabbing positions to obtain a plurality of clusters;
Determining a point set to be grabbed according to the number of the selectable grabbing positions in each cluster;
and determining the clustered positions according to each selectable grabbing position in the point set to be grabbed.
7. The method according to claim 6, wherein the determining the clustered positions according to each selectable grabbing position in the point set to be grabbed comprises:
Determining the width of the target object according to each selectable grabbing position;
and carrying out weighted average on each selectable grabbing position according to the selectable grabbing positions and the weight value of each selectable grabbing position to obtain the initial grabbing position of the target object.
8. The method according to claim 7, wherein the weight value is output by the grabbing pose recognition model.
9. The method according to claim 5, wherein the adjusting the clustered positions according to the size of the mechanical arm clamp to obtain the target grabbing position of the target object comprises:
determining a reference rod swing angle according to the width of the target object, the length of the projection of the second connecting piece of the mechanical arm clamp in the horizontal direction, the distance between the root of the third connecting piece of the mechanical arm clamp and the center of the mechanical arm clamp, and the length of the third connecting piece of the mechanical arm clamp;
determining grabbing depth information according to the shortest distance between the base of the mechanical arm clamp and the third connecting piece in the vertical direction, the length of the third connecting piece, the reference rod swing angle, the length of the projection of the second connecting piece in the vertical direction, and the length of the first connecting piece of the mechanical arm clamp;
and obtaining the target grabbing position of the target object according to the grabbing depth information.
10. The method according to any one of claims 1-9, wherein controlling the robotic arm to grasp the target object according to the target grasping position comprises:
generating a moving path according to the current position of the mechanical arm and the target grabbing position;
and controlling the mechanical arm to move along the moving path and grabbing the target object.
11. The method according to claim 10, further comprising:
Determining whether the updated position of the mechanical arm is any position point on the moving path in the moving process of the mechanical arm along the moving path;
If not, regenerating a new moving path according to the updated position of the mechanical arm and the target grabbing position, and controlling the mechanical arm to move along the new moving path and grab the target object.
12. A robotic arm gripping device, the device comprising:
the acquisition module is used for acquiring point cloud data of a grabbing area, and a target object to be grabbed is placed in the grabbing area;
The recognition module is used for inputting the point cloud data into a grabbing pose recognition model obtained through pre-training, so as to obtain a plurality of selectable grabbing positions of the target object;
The adjusting module is used for adjusting the plurality of selectable grabbing positions according to the size of the mechanical arm clamp to obtain a target grabbing position of the target object;
And the control module is used for controlling the mechanical arm to grasp the target object according to the target grasping position.
13. An electronic device, comprising: a processor, a storage medium and a bus, the storage medium storing machine-readable instructions executable by the processor, the processor and the storage medium communicating over the bus when the electronic device is running, the processor executing the machine-readable instructions to perform the steps of the method of any one of claims 1-11.
14. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, performs the method according to any of claims 1-11.
CN202410459359.9A 2024-04-16 2024-04-16 Mechanical arm grabbing method and device, electronic equipment and storage medium Pending CN118106973A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410459359.9A CN118106973A (en) 2024-04-16 2024-04-16 Mechanical arm grabbing method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN118106973A true CN118106973A (en) 2024-05-31

Family

ID=91217293


Country Status (1)

Country Link
CN (1) CN118106973A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination