CN114299116A - Dynamic object grabbing method, device and storage medium - Google Patents

Dynamic object grabbing method, device and storage medium

Info

Publication number
CN114299116A
Authority
CN
China
Prior art keywords
target
grabbing
coordinate data
encoder
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111640698.XA
Other languages
Chinese (zh)
Inventor
黄朋生
潘桐
黄少华
覃宝钻
郭鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Borunte Robot Co Ltd
Original Assignee
Borunte Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Borunte Robot Co Ltd filed Critical Borunte Robot Co Ltd
Priority to CN202111640698.XA priority Critical patent/CN114299116A/en
Publication of CN114299116A publication Critical patent/CN114299116A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a dynamic object grabbing method, a device and a storage medium, applied to a grabbing device and relating to the technical field of object grabbing. The method comprises the following steps: acquiring an image to be processed corresponding to the dynamic target; performing data analysis processing on the image to be processed to obtain grabbing point coordinate data of the point to be grabbed; inputting the grabbing point coordinate data, encoder data and a position offset corresponding to the dynamic target into a meeting point algorithm model to obtain target coordinate data corresponding to a target meeting point; and, when the target coordinate data falls within a preset grabbing area, grabbing the dynamic target at the target meeting point corresponding to the target coordinate data. The method improves the positioning precision and reliability of the grabbing device with respect to the dynamic target, and thereby the accuracy with which the robot grabs it.

Description

Dynamic object grabbing method, device and storage medium
Technical Field
The invention relates to the technical field of object grabbing, and in particular to a dynamic object grabbing method, a device and a storage medium.
Background
Dynamic grabbing means that, throughout the process in which a grabbing device such as a robot grabs an object, the conveyor belt does not need to be stopped: the object keeps moving on the belt and its position changes in real time while the robot grabs it. Because the object runs continuously on the belt and its position changes continuously, the robot accelerates toward it during the grab, and the object on the belt and the grabbing tool at the end of the robot converge at a certain position; that position is the meeting point.
Dynamic grabbing of objects by grabbing devices is now widely applied in actual production scenarios, and obtaining the meeting point is a key part of dynamic grabbing. In the related art, however, the meeting point calculated from encoder position feedback has low accuracy, which hinders accurate grabbing of dynamic objects by the grabbing device.
Disclosure of Invention
The present invention aims to solve at least one of the problems existing in the prior art. It therefore provides a dynamic object grabbing method, a device and a storage medium that improve the accuracy with which a grabbing device grabs a dynamic target.
The dynamic object grabbing method according to the embodiment of the first aspect of the invention comprises the following steps:
acquiring an image to be processed corresponding to the dynamic target;
performing data analysis processing on the image to be processed to obtain grabbing point coordinate data corresponding to the point to be grabbed, wherein the dynamic target carries the point to be grabbed;
inputting the grabbing point coordinate data, encoder data and a position offset corresponding to the dynamic target into a meeting point algorithm model to obtain target coordinate data corresponding to a target meeting point, wherein the target meeting point is the position where the dynamic target and the grabbing device converge;
and when the target coordinate data falls within a preset grabbing area, grabbing the dynamic target at the target meeting point corresponding to the target coordinate data.
One or more of the technical solutions provided in the embodiments of the invention have at least the following beneficial effects: the target coordinate data corresponding to the target meeting point is obtained through the meeting point algorithm model, and when that data falls within the preset grabbing area, the grabbing device grabs the dynamic target at the corresponding target meeting point. This dynamic object grabbing method improves the positioning precision and reliability of the grabbing device with respect to the dynamic target, and thereby the accuracy with which the robot grabs it.
According to some embodiments of the invention, the data analysis processing includes image processing and hand-eye calibration processing, and performing the data analysis processing on the image to be processed to obtain the grabbing point coordinate data corresponding to the point to be grabbed comprises:
performing the image processing on the image to be processed to obtain camera coordinate data of the point to be grabbed in a camera coordinate system;
and performing the hand-eye calibration processing on the camera coordinate data to obtain the grabbing point coordinate data of the point to be grabbed in a machine coordinate system.
According to some embodiments of the invention, the encoder data is obtained by:
acquiring an encoder period value, an encoder initial value, the metering wheel circumference of the encoder and the encoder resolution;
and calculating the encoder data from the encoder period value, the encoder initial value, the metering wheel circumference and the encoder resolution.
According to some embodiments of the invention, the encoder data is calculated as follows:
E1 = (E − E′) × D / H
where E1 represents the encoder data, E the encoder period value, E′ the encoder initial value, D the metering wheel circumference of the encoder, and H the encoder resolution.
According to some embodiments of the invention, the position offset is obtained by:
acquiring speed data and total duration data corresponding to the dynamic target, wherein the total duration data represents the total time taken for the grabbing device to reach the target meeting point;
and calculating the position offset from the speed data and the total duration data.
According to some embodiments of the invention, the position offset is calculated as follows:
s = T′_robot × V
where V represents the speed data and T′_robot the total duration data.
According to some embodiments of the invention, grabbing the dynamic target at the target meeting point corresponding to the target coordinate data when the target coordinate data falls within a preset grabbing area comprises:
polling and updating the target coordinate data at a preset time period to obtain updated target coordinate data;
and when the updated target coordinate data falls within the preset grabbing area, grabbing the dynamic target at the target meeting point corresponding to the updated target coordinate data.
According to some embodiments of the invention, grabbing the dynamic target at the target meeting point corresponding to the updated target coordinate data when the updated target coordinate data falls within the preset grabbing area comprises:
when at least two pieces of updated target coordinate data fall within the preset grabbing area, extracting target coordinate data to be grabbed from the at least two pieces of updated target coordinate data, wherein the target coordinate data includes the target coordinate data to be grabbed;
and grabbing the dynamic target at the target meeting point corresponding to the target coordinate data to be grabbed.
A dynamic object grabbing device according to an embodiment of the second aspect of the invention comprises: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the dynamic object grabbing method of the first aspect when executing the computer program.
According to an embodiment of the third aspect of the invention, a computer-readable storage medium stores computer-executable instructions for causing a computer to perform the dynamic object grabbing method of the first aspect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification; they illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention without limiting it.
Fig. 1 is a schematic flowchart of a dynamic object grabbing method according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the data analysis processing provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of acquiring the encoder data provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of obtaining the position offset according to an embodiment of the present invention;
Fig. 5 is a schematic flowchart of grabbing a dynamic target according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a specific process for grabbing a dynamic target according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of the relationship between the grabbing device and the target meeting point according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of a camera-plus-encoder assisted visual sorting system provided according to an embodiment of the present invention.
Reference numerals:
a target meeting point 100, a position 110 directly above the meeting point, and a grabbing device waiting point 120;
an identification point 200, a grabbing boundary point 300 and an angle encoder 400;
a grabbing device 500, a preset grabbing area 510, a conveyor belt 520 and a camera 530.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Dynamic grabbing means that, throughout the process in which a grabbing device such as a robot grabs an object, the conveyor belt does not need to be stopped: the object keeps moving on the belt and its position changes in real time while the robot grabs it. Because the object runs continuously on the belt and its position changes continuously, the robot accelerates toward it during the grab, and the object on the belt and the grabbing tool at the end of the robot converge at a certain position; that position is the meeting point.
Dynamic grabbing of objects by grabbing devices is now widely applied in actual production scenarios, and obtaining the meeting point is a key part of dynamic grabbing. In the related art, however, the meeting point calculated from encoder position feedback has low accuracy, which hinders accurate grabbing of dynamic objects by the grabbing device.
On this basis, embodiments of the invention provide a dynamic object grabbing method, a device and a storage medium that improve the accuracy with which a grabbing device grabs a dynamic target.
The embodiments of the present invention will be further explained with reference to the drawings.
An embodiment of the first aspect of the invention provides a dynamic object grabbing method applied to a grabbing device 500, as shown in fig. 1, which is a schematic flowchart of the dynamic object grabbing method according to an embodiment of the invention. The method includes, but is not limited to, the following steps:
Step S100, acquiring an image to be processed corresponding to the dynamic target;
Step S200, performing data analysis processing on the image to be processed to obtain grabbing point coordinate data corresponding to the point to be grabbed, wherein the dynamic target carries the point to be grabbed;
Step S300, inputting the grabbing point coordinate data, the encoder data and the position offset corresponding to the dynamic target into a meeting point algorithm model to obtain target coordinate data corresponding to a target meeting point 100, wherein the target meeting point 100 is the position where the dynamic target and the grabbing device 500 converge;
Step S400, when the target coordinate data falls within the preset grabbing area 510, grabbing the dynamic target at the target meeting point 100 corresponding to the target coordinate data.
According to the invention, the target coordinate data corresponding to the target meeting point 100 is obtained through the meeting point algorithm model, and when the target coordinate data falls within the preset grabbing area 510, the grabbing device 500 grabs the dynamic target at the target meeting point 100 corresponding to that data. This dynamic object grabbing method improves the positioning precision and reliability of the grabbing device 500 with respect to the dynamic target, and thereby the accuracy with which the robot grabs it.
In this embodiment, the dynamic target is an object moving on the conveyor belt 520, the grabbing device 500 may be a robot, and the preset grabbing area 510 is the robot's workspace. The dynamic target may also move in other manners and the grabbing device 500 may be another kind of device, in which case the preset grabbing area 510 is that device's workspace; the embodiment is not limited in this respect, and details are not repeated here.
Referring to fig. 7, the grabbing plan by which the grabbing device 500 grabs the dynamic target at the target meeting point 100 is as follows: the plan yields a motion trajectory; using the target coordinate data corresponding to the target meeting point 100 computed by the meeting point algorithm model, the grabbing device 500 moves to the position 110 directly above the meeting point, and when the target coordinate data falls within the preset grabbing area 510, the grabbing device 500 grabs the dynamic target at the target meeting point 100.
It is understood that, referring to fig. 2, step S200 includes, but is not limited to, the following steps:
step S210, processing the image to be processed to obtain camera coordinate data of the point to be grabbed in a camera coordinate system;
and step S220, performing hand-eye calibration processing on the camera coordinate data to obtain the coordinate data of the grabbing point of the point to be grabbed in the machine coordinate system.
In this embodiment, the camera coordinate data is Gc = (x, y) and the grabbing point coordinate data in the machine coordinate system is Gr = (x′, y′). A hand-eye calibration conversion matrix (shown only as a figure in the original publication) converts the camera coordinate data Gc in the camera coordinate system into the grabbing point coordinate data Gr = (x′, y′) in the machine coordinate system. The grabbing point coordinate data Gr, the encoder data and the position offset corresponding to the dynamic target are then input into the meeting point algorithm model to obtain the target coordinate data Gr′ = (X′, y′) corresponding to the target meeting point 100. In this embodiment, the running direction of the conveyor belt 520 is set parallel to the X direction of the machine coordinate system.
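Purely for illustration (not part of the original disclosure), the hand-eye conversion can be sketched in Python as a homogeneous 2D transform; the matrix entries below are placeholders, since the publication shows the calibration matrix only as an image:

```python
import numpy as np

# Placeholder 3x3 homogeneous hand-eye matrix; the real entries come from the
# hand-eye calibration procedure, not from this sketch.
M = np.array([
    [0.0, -1.0, 420.0],
    [1.0,  0.0, -35.0],
    [0.0,  0.0,   1.0],
])

def camera_to_machine(gc, matrix=M):
    """Map camera coordinates Gc = (x, y) to machine coordinates Gr = (x', y')."""
    x, y = gc
    xp, yp, w = matrix @ np.array([x, y, 1.0])
    return xp / w, yp / w

print(camera_to_machine((52.3, 17.8)))  # grabbing point Gr in the machine frame
```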
It is understood that, referring to fig. 3, the method of acquiring encoder data includes, but is not limited to, the following steps:
step S310, acquiring a period value of an encoder, an initial value of the encoder, the circumference of a metering wheel of the encoder and the resolution of the encoder;
step S311, calculating to obtain encoder data according to the encoder period value, the encoder initial value, the circumference of the metering wheel of the encoder, and the encoder resolution.
In this embodiment, in accordance with the characteristics of the encoder, a first timer is used, whose period is set to 5 ms. After the first timer is started, it fires a signal every 5 ms, and the encoder period value, the metering wheel circumference of the encoder, the encoder resolution and the encoder sampling frequency are polled and updated. Referring to fig. 7, after the start-up process the camera 530 photographs the identification point 200, at which moment the encoder initial value is acquired.
It should be noted that the identification point 200 is the initial position of the dynamic target as it starts to move.
This embodiment implements dynamic target grabbing based on a monocular 2D camera and an encoder; using a monocular 2D camera reduces the cost at which the grabbing device 500 grabs a dynamic target. Other cameras 530 may also be used, and the embodiment is not limited in this respect.
It should be noted that the encoder of this embodiment is an angle encoder 400; in other embodiments another type of encoder may be used, and the first timer may be set to other periods. The embodiment is not limited in these respects.
It is understood that the calculation formula of the encoder data is as follows:
E1 = (E − E′) × D / H
where E1 represents the encoder data, E the encoder period value, E′ the encoder initial value, D the metering wheel circumference of the encoder, and H the encoder resolution.
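As a minimal sketch (again not part of the original text), the formula maps directly to Python; the example numbers are assumptions chosen only to exercise the function:

```python
def encoder_displacement(e, e0, d, h):
    """E1 = (E - E') * D / H: belt travel since the initial snapshot.

    e  -- current encoder period value E
    e0 -- encoder initial value E', latched when camera 530 sees the
          identification point 200
    d  -- metering wheel circumference D of the encoder (e.g. in mm)
    h  -- encoder resolution H (counts per revolution)
    """
    return (e - e0) * d / h

# Example: 2000 counts elapsed on a 200 mm wheel at 1000 counts/rev -> 400 mm.
print(encoder_displacement(e=52000, e0=50000, d=200.0, h=1000))
```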
It is understood that, referring to fig. 4, the method for obtaining the position offset includes, but is not limited to, the following steps:
step S320, acquiring speed data and total duration data corresponding to the dynamic target, wherein the total duration data represents the total duration from the capturing device 500 to the target junction 100;
and step S330, calculating to obtain the position offset according to the speed data and the total duration data.
It should be noted that the speed data is calculated from the encoder period value obtained by the polling updates, the metering wheel circumference of the encoder, the encoder resolution and the encoder sampling frequency, according to:
V = E × 1000 × D × H × S
where V represents the speed data, E the encoder period value, D the metering wheel circumference of the encoder, H the encoder resolution and S the encoder sampling frequency.
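A one-line Python sketch of this computation follows. It implements the formula exactly as stated above; as flagged in the comment, whether H and S multiply or divide depends on how the hardware defines them, so treat this as an assumption to validate against the encoder datasheet:

```python
def belt_speed(e, d, h, s):
    """V = E * 1000 * D * H * S, implemented verbatim from the description.

    Depending on how the period value E, resolution H and sampling frequency S
    are defined on the actual hardware, the working formula may instead be
    V = E * 1000 * D / (H * S); verify before relying on this sketch.
    """
    return e * 1000 * d * h * s
```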
The total duration data is calculated as follows:
T′_robot = T1 + t2
where T′_robot represents the total duration data, T1 the time taken for the grabbing device 500 to traverse its preset path threshold, and t2 the time taken for the grabbing device 500 to descend from the position 110 directly above the meeting point to the target meeting point 100.
It should be noted that, referring to fig. 7, the ideal total movement time from the grabbing device 500 to the target meeting point 100 is T_robot = t1 + t2, where t1 is the time taken by the grabbing device 500 to travel from its starting point to the position 110 directly above the meeting point. The dynamic target takes time t3 to be conveyed from the grabbing boundary point 300 to the target meeting point 100, so to ensure that the grabbing device 500 grabs the dynamic target accurately, T_robot = t3, where T1 ≥ t1 and t3 denotes the conveying time. In fig. 7, {c} denotes the camera coordinate system and {r} the machine coordinate system; the two straight lines between them represent the surface of the conveyor belt 520, on which the identified dynamic target moves from left to right, its position and speed being acquired by the angle encoder 400.
In this embodiment, as long as the vertical distance between the position 110 directly above the meeting point and the conveyor belt 520 is kept constant, t2 is a constant. Since the location of the target meeting point 100 is uncertain, t1 varies with the position of the meeting point 100. A fixed time T1 is therefore chosen based on the parameters of the grabbing device 500, such as its operating speed and range of travel, as the time for the grabbing device 500 to traverse its preset path threshold. The selection principle is as follows:
setting the grasping apparatus 500 to move according to a determined linear velocity, such as 1000mm/s, testing a time period for the grasping apparatus 500 to move to a preset path threshold of the grasping apparatus 500 in a preset grasping area 510, such as 0.4s, and then specifying T1Not less than 0.4; alternatively, it is ensured that the grabbing device 500 can be at T no matter where the target meeting point 100 is in the preset grabbing area 5101The location may be reached in time. When the grabbing device 500 moves to a position 110 right above the target junction, a period of time is needed to be delayed, and then the dynamic target is grabbed, wherein the calculation formula of the delay time is as follows: t is t4=(T1-t1),t4The predetermined path threshold is the horizontal distance between the position right above the recognition point 200 and the waiting point 120 of the grasping apparatus.
The total duration data is therefore calculated as T′_robot = T1 + t2.
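A minimal Python sketch of these two bookkeeping quantities (the numeric values in the example are assumptions, not figures from the publication):

```python
def total_duration(t1_fixed, t2_descend):
    """T'_robot = T1 + t2: fixed travel budget plus the constant descent time."""
    return t1_fixed + t2_descend

def dwell_time(t1_fixed, t1_actual):
    """t4 = T1 - t1: wait above the meeting point before descending."""
    return t1_fixed - t1_actual

print(total_duration(0.4, 0.15))  # e.g. T1 = 0.4 s, t2 = 0.15 s -> 0.55 s
print(dwell_time(0.4, 0.25))      # e.g. t1 = 0.25 s -> delay 0.15 s
```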
In this embodiment, before the grabbing device 500 grabs the dynamic target, the relevant trajectory parameters must first be set: the speed limit, the acceleration limit and the jerk limit. The grabbing device 500 then grabs the dynamic target along a trajectory planned as an S-shaped motion curve using these parameters; the time t2 taken by the grabbing device 500 to descend from the position 110 directly above the meeting point to the target meeting point 100 is computed by the standard S-curve plan within the S-shaped motion-curve planning.
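For a rough sense of how t2 could be obtained, the sketch below gives the closed-form duration of a symmetric jerk-limited S-curve; this is a simplification under the stated assumption, not the planner of the patent:

```python
def s_curve_time(distance, v_max, a_max, j_max):
    """Total time of a symmetric 7-segment S-curve that reaches all limits:
    T = d / v_max + v_max / a_max + a_max / j_max.

    The closed form only holds when the move is long enough for the full
    profile to develop; shorter moves need a complete planner.
    """
    return distance / v_max + v_max / a_max + a_max / j_max

# e.g. a 400 mm move at 1000 mm/s, 5000 mm/s^2, 50000 mm/s^3 -> 0.7 s
print(s_curve_time(400.0, 1000.0, 5000.0, 50000.0))
```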
It is understood that the position offset is calculated as follows:
s = T′_robot × V
where V represents the speed data and T′_robot the total duration data.
It should be noted that the meeting point algorithm model is calculated as follows:
X′ = x′ + (E − E′) × D / H + T′_robot × V
where X′ represents the X coordinate of the target coordinate data, x′ the X coordinate of the grabbing point coordinate data, E the encoder period value, E′ the encoder initial value, D the metering wheel circumference of the encoder, H the encoder resolution, T′_robot the total duration data and V the speed data.
It should be noted that the grabbing device 500 takes the time T′_robot to reach the target meeting point 100, so a position-value compensation must be applied to the grabbing point coordinate data corresponding to the target grabbing point, i.e. the position offset is added to the target meeting point 100 in the X direction.
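Putting the pieces together, a compact Python sketch of the meeting point algorithm model might read as follows (every numeric value in the example is an assumption):

```python
def meeting_point_x(x_grab, e, e0, d, h, t_robot, v):
    """X' = x' + (E - E') * D / H + T'_robot * V.

    x_grab  -- x' of the grabbing point in machine coordinates
    e, e0   -- current and initial encoder values E, E'
    d, h    -- metering wheel circumference D and encoder resolution H
    t_robot -- total duration T'_robot = T1 + t2
    v       -- belt speed V
    """
    displacement = (e - e0) * d / h  # belt travel since the camera snapshot
    offset = t_robot * v             # s = T'_robot * V, the position offset
    return x_grab + displacement + offset

# 100 mm grabbing point + 400 mm belt travel + 0.55 s * 300 mm/s look-ahead
print(meeting_point_x(100.0, 52000, 50000, 200.0, 1000, 0.55, 300.0))  # 665.0
```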
It is understood that, referring to fig. 5, step S400 includes, but is not limited to, the following steps:
step S410, polling and updating target coordinate data according to a preset time period to obtain updated target coordinate data;
in step S420, when the updated target coordinate data falls into the preset capture area 510, the dynamic target is captured according to the target junction 100 corresponding to the updated target coordinate data.
In this embodiment, while the conveyor belt 520 runs continuously, a second timer with a period of 100 ms is used. Once started, it fires a signal every 100 ms and the target coordinate data of the target meeting point 100 is recalculated in real time. Because the moving direction of the conveyor belt 520 is parallel to the X direction of the grabbing device 500, the y coordinate of the grabbing point coordinate data remains unchanged while the dynamic target moves, and only the X coordinate of the target coordinate data is updated by the polling; the target coordinate data corresponding to the target meeting point 100 is therefore Gr′ = (X′, y′).
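A schematic polling loop is sketched below; compute_target, in_grab_area and grab are hypothetical callbacks supplied by the cell controller, not names from the patent:

```python
import time

def poll_and_grab(compute_target, in_grab_area, grab, period_s=0.1):
    """Second-timer loop: every 100 ms recompute Gr' = (X', y') and grab
    once the meeting point enters the preset grabbing area 510."""
    while True:
        target = compute_target()   # updated target coordinate data Gr'
        if in_grab_area(target):
            grab(target)            # grab at the updated meeting point
            return target
        time.sleep(period_s)        # wait for the next 100 ms tick
```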
It should be noted that the period of the second timer may also be set to other values; the embodiment is not limited in this respect.
It is understood that, referring to fig. 6, step S420 includes, but is not limited to, the following steps:
step S421, when at least two updated target coordinate data fall into the preset capturing area 510, extracting a target coordinate data to be captured from the at least two updated target coordinate data, where the target coordinate data includes the target coordinate data to be captured;
step S422, capturing the dynamic object according to the object intersection 100 corresponding to the coordinate data to be captured of the object.
In addition, an embodiment of the second aspect of the invention provides a dynamic object grabbing device, which comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor.
The processor and memory may be connected by a bus or other means.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The non-transitory software programs and instructions required to implement the dynamic object grabbing method of the first-aspect embodiment are stored in the memory and, when executed by the processor, perform the dynamic object grabbing method of the above embodiment, for example method steps S100 to S400 in fig. 1, method steps S210 to S220 in fig. 2, method steps S310 to S311 in fig. 3, method steps S320 to S330 in fig. 4, method steps S410 to S420 in fig. 5, and method steps S421 to S422 in fig. 6.
The device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, i.e. they may be located in one place or distributed over several network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, an embodiment of the invention provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor or controller, for example a processor in the device embodiment above, cause that processor to perform the dynamic object grabbing method of the above embodiment, for example method steps S100 to S400 in fig. 1, method steps S210 to S220 in fig. 2, method steps S310 to S311 in fig. 3, method steps S320 to S330 in fig. 4, method steps S410 to S420 in fig. 5, and method steps S421 to S422 in fig. 6.
One of ordinary skill in the art will appreciate that all or some of the steps and systems of the methods disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor or microprocessor, as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as is well known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media.
While the preferred embodiments of the present invention have been described in detail, it will be understood by those skilled in the art that the foregoing and various other changes, omissions and deviations in the form and detail thereof may be made without departing from the scope of this invention.

Claims (10)

1. A dynamic object grabbing method applied to a grabbing device, characterized by comprising the following steps:
acquiring an image to be processed corresponding to the dynamic target;
performing data analysis processing on the image to be processed to obtain grabbing point coordinate data corresponding to the point to be grabbed, wherein the dynamic target carries the point to be grabbed;
inputting the grabbing point coordinate data, encoder data and a position offset corresponding to the dynamic target into a meeting point algorithm model to obtain target coordinate data corresponding to a target meeting point, wherein the target meeting point is the position where the dynamic target and the grabbing device converge;
and when the target coordinate data falls within a preset grabbing area, grabbing the dynamic target at the target meeting point corresponding to the target coordinate data.
2. The dynamic object grabbing method according to claim 1, wherein the data analysis processing includes image processing and hand-eye calibration processing, and performing the data analysis processing on the image to be processed to obtain the grabbing point coordinate data corresponding to the point to be grabbed comprises:
performing the image processing on the image to be processed to obtain camera coordinate data of the point to be grabbed in a camera coordinate system;
and performing the hand-eye calibration processing on the camera coordinate data to obtain the grabbing point coordinate data of the point to be grabbed in a machine coordinate system.
3. The dynamic object grabbing method according to claim 1, wherein the encoder data is obtained by:
acquiring an encoder period value, an encoder initial value, the metering wheel circumference of the encoder and the encoder resolution;
and calculating the encoder data from the encoder period value, the encoder initial value, the metering wheel circumference and the encoder resolution.
4. The dynamic object grabbing method according to claim 3, wherein the encoder data is calculated as follows:
E1 = (E − E′) × D / H
where E1 represents the encoder data, E the encoder period value, E′ the encoder initial value, D the metering wheel circumference of the encoder, and H the encoder resolution.
5. The dynamic object grabbing method according to claim 1, wherein the position offset is obtained by:
acquiring speed data and total duration data corresponding to the dynamic target, wherein the total duration data represents the total time taken for the grabbing device to reach the target meeting point;
and calculating the position offset from the speed data and the total duration data.
6. The dynamic object capture method of claim 5, wherein the position offset is calculated as follows:
s = T′_robot × V
where V represents the speed data and T′_robot the total duration data.
7. The dynamic object grabbing method according to claim 1, wherein grabbing the dynamic target at the target meeting point corresponding to the target coordinate data when the target coordinate data falls within a preset grabbing area comprises:
polling and updating the target coordinate data at a preset time period to obtain updated target coordinate data;
and when the updated target coordinate data falls within the preset grabbing area, grabbing the dynamic target at the target meeting point corresponding to the updated target coordinate data.
8. The method according to claim 7, wherein grabbing the dynamic target at the target meeting point corresponding to the updated target coordinate data when the updated target coordinate data falls within the preset grabbing area comprises:
when at least two pieces of updated target coordinate data fall within the preset grabbing area, extracting target coordinate data to be grabbed from the at least two pieces of updated target coordinate data, wherein the target coordinate data includes the target coordinate data to be grabbed;
and grabbing the dynamic target at the target meeting point corresponding to the target coordinate data to be grabbed.
9. A dynamic object grabbing device, comprising: a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the dynamic object grabbing method of any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, characterized in that it stores computer-executable instructions for causing a computer to perform the dynamic object grabbing method of any one of claims 1 to 8.
CN202111640698.XA 2021-12-29 2021-12-29 Dynamic object grabbing method, device and storage medium Pending CN114299116A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111640698.XA CN114299116A (en) 2021-12-29 2021-12-29 Dynamic object grabbing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111640698.XA CN114299116A (en) 2021-12-29 2021-12-29 Dynamic object grabbing method, device and storage medium

Publications (1)

Publication Number Publication Date
CN114299116A true CN114299116A (en) 2022-04-08

Family

ID=80971305

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111640698.XA Pending CN114299116A (en) 2021-12-29 2021-12-29 Dynamic object grabbing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN114299116A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024113216A1 (en) * 2022-11-30 2024-06-06 青岛理工大学(临沂) High-precision grasping method of industrial mold intelligent manufacturing robot


Similar Documents

Publication Publication Date Title
CN110450129B (en) Carrying advancing method applied to carrying robot and carrying robot thereof
US9604365B2 (en) Device and method of transferring articles by using robot
US10407250B2 (en) Image processing system, image processing apparatus, workpiece pickup method, and workpiece pickup program
US7283661B2 (en) Image processing apparatus
CN106574961B (en) Use the object identification device of multiple objects detection unit
US20170327127A1 (en) Method and device for processing image data, and driver-assistance system for a vehicle
CN113561171B (en) Robot system with dynamic motion adjustment mechanism and method of operating the same
CN113269085B (en) Linear conveyor belt tracking control method, system, device and storage medium
CN111604898B (en) Livestock retrieval method, robot, terminal equipment and storage medium
CN114299116A (en) Dynamic object grabbing method, device and storage medium
CN110295728A (en) Handling system and its control method, floor tile paving system
CN115609594A (en) Planning method and device for mechanical arm path, upper control end and storage medium
CN108347577A (en) A kind of imaging system and method
CN109322513B (en) Emergency supporting mechanism for universal vehicle platform
JPH1139464A (en) Image processor for vehicle
CN115170608A (en) Material tracking method and device
CN114812539A (en) Map search method, map using method, map searching device, map using device, robot and storage medium
CN114785955A (en) Motion compensation method, system and storage medium for dynamic camera in complex scene
CN115229780A (en) Mechanical arm motion path planning method and device
CN116408790A (en) Robot control method, device, system and storage medium
CN109344677B (en) Method, device, vehicle and storage medium for recognizing three-dimensional object
US20230027659A1 (en) Self-position estimation device, moving body, self-position estimation method, and self-position estimation program
CN113226666A (en) Method and apparatus for monitoring a robotic system
CN110210367B (en) Training data acquisition method, electronic device and storage medium
EP3093690A1 (en) Adaptive point cloud window selection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination