CN112775967A - Mechanical arm grabbing method, device and equipment based on machine vision - Google Patents


Info

Publication number
CN112775967A
Authority
CN
China
Prior art keywords: grabbing, mechanical arm, grabbed, determining, track
Legal status: Pending
Application number
CN202011642610.3A
Other languages
Chinese (zh)
Inventor
郑禄
王珏
帖军
田莎莎
汪红
毛腾跃
谢勇
Current Assignee
Wuhan Qingchuan University
South Central Minzu University
Original Assignee
Wuhan Qingchuan University
South Central University for Nationalities
Priority date
Filing date
Publication date
Application filed by Wuhan Qingchuan University and South Central University for Nationalities
Priority to CN202011642610.3A
Publication of CN112775967A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention belongs to the technical field of mechanical arms, and discloses a mechanical arm grabbing method, device and equipment based on machine vision. The method continuously acquires multiple frames of images of an object to be grabbed and determines the current state of the object according to the multi-frame images; when the current state is a motion state, it predicts the motion track of the object according to the multi-frame images; it analyzes the multi-frame images and determines object grabbing parameters according to the analysis result; it acquires the current coordinate of the mechanical arm and determines the movement track of the mechanical arm according to the current coordinate and the motion track; and it controls the mechanical arm to grab the object according to the object grabbing parameters based on the mechanical arm movement track. Because the motion track of the object is predicted from the multi-frame images, the object grabbing parameters are determined from image analysis, and the mechanical arm movement track is determined from the arm's current coordinate and the predicted motion track, the object to be grabbed can still be grabbed when the grabbing scene is complex.

Description

Mechanical arm grabbing method, device and equipment based on machine vision
Technical Field
The invention relates to the technical field of mechanical arms, in particular to a mechanical arm grabbing method, device and equipment based on machine vision.
Background
With the rise of artificial intelligence, robots play an increasingly important role in many industries. For a robot, grabbing is an indispensable skill for operating in the real world, for example when sorting objects in the logistics industry or assembling parts on an industrial production line. In practical application scenarios, however, the moving track, position, posture and bearable strength of the grasped object all differ, so mechanical arm grabbing is difficult to apply to the full variety of complex application scenarios.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a mechanical arm grabbing method, device and equipment based on machine vision, so as to solve the technical problem that prior-art grabbing schemes are difficult to apply to complex application scenarios.
To achieve the above object, the present invention provides a mechanical arm grabbing method based on machine vision, the method comprising the following steps:
continuously acquiring multi-frame images of an object to be grabbed, and determining the current state of the object to be grabbed according to the multi-frame images;
when the current state is a motion state, predicting the motion track of the object to be grabbed according to the multi-frame image;
analyzing the multi-frame image, and determining object grabbing parameters according to the analysis result;
acquiring the current coordinate of the mechanical arm, and determining the movement track of the mechanical arm according to the current coordinate and the motion track;
and controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
Preferably, the step of analyzing the plurality of frames of images and determining the object grabbing parameters according to the analysis result includes:
analyzing the multi-frame image to obtain object characteristic data and object texture data of the object to be grabbed;
analyzing the object characteristic data, and determining a grabbing position and a grabbing angle;
determining a bearable gripping-force interval of the object to be gripped according to the object texture data, and determining the gripping force according to the bearable gripping-force interval;
and establishing object grabbing parameters according to the grabbing positions, the grabbing angles and the grabbing strength.
Preferably, the step of analyzing the multiple frames of images to obtain object feature data and object texture data of the object to be grabbed includes:
performing texture recognition on the multi-frame image to obtain object texture data;
analyzing the multi-frame image according to a preset multilayer perception network to obtain multilayer characteristic image data;
and constructing object characteristic data according to the multilayer characteristic image data.
Preferably, the step of analyzing the object feature data to determine a grasping position and a grasping angle includes:
performing data analysis on the object characteristic data through prior boxes to generate candidate regions of grabbing positions;
and analyzing the candidate region of the grabbing position through a preset neural network model to obtain the grabbing position and the grabbing angle.
Preferably, the step of determining the bearable gripping force interval of the object to be gripped according to the object texture data includes:
extracting texture features according to the object texture data;
matching the texture features with the texture features of the objects of all material classes in a preset storage space to determine the material classes of the objects to be grabbed;
and determining the bearable grabbing force interval of the object to be grabbed according to the material category.
Preferably, the step of obtaining the current coordinate of the mechanical arm and determining the movement track of the mechanical arm according to the current coordinate and the motion track includes:
acquiring the current coordinate of the mechanical arm, and determining a target grabbing position according to the motion track and a preset grabbing distance;
and determining the moving track of the mechanical arm according to the current coordinate and the target grabbing position.
Preferably, the step of controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track includes:
controlling the mechanical arm to move to a target grabbing position according to the mechanical arm moving track;
and controlling the mechanical arm to grab the object to be grabbed according to the target grabbing position and the object grabbing parameters.
Preferably, the step of controlling the robot arm to move to the target gripping position according to the robot arm movement track includes:
controlling the mechanical arm to move according to the mechanical arm moving track;
acquiring obstacle image data in the moving process of the mechanical arm;
analyzing the obstacle image data to obtain obstacle contour information;
and adjusting the movement track of the mechanical arm according to the obstacle profile information, and returning to the step of controlling the mechanical arm to move according to the movement track of the mechanical arm until the mechanical arm moves to a target grabbing position.
In addition, in order to achieve the above object, the present invention further provides a robot arm gripping device based on machine vision, where the robot arm gripping device based on machine vision includes the following modules:
the state confirmation module is used for continuously acquiring multi-frame images of the object to be grabbed and determining the current state of the object to be grabbed according to the multi-frame images;
the track calculation module is used for predicting the motion track of the object to be grabbed according to the multi-frame images when the current state is the motion state;
the parameter determining module is used for analyzing the multi-frame images and determining object grabbing parameters according to the analysis result;
the track determining module is used for acquiring the current coordinate of the mechanical arm and determining the movement track of the mechanical arm according to the current coordinate and the motion track;
and the grabbing control module is used for controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
In addition, in order to achieve the above object, the present invention further provides a robot arm gripping apparatus based on machine vision, including: a memory, a processor and a machine vision based robot gripping program stored on the memory and executable on the processor, the machine vision based robot gripping program when executed by the processor implementing the steps of the machine vision based robot gripping method as described above.
According to the invention, multiple frames of images of an object to be grabbed are continuously acquired, and the current state of the object is determined according to the multi-frame images; when the current state is a motion state, the motion track of the object is predicted according to the multi-frame images; the multi-frame images are analyzed, and object grabbing parameters are determined according to the analysis result; the current coordinate of the mechanical arm is acquired, and the movement track of the mechanical arm is determined according to the current coordinate and the motion track; and the mechanical arm is controlled to grab the object according to the object grabbing parameters based on the mechanical arm movement track. Because the motion track of the object is predicted from the multi-frame images, the grabbing parameters are determined from image analysis, and the arm movement track is determined from the arm's current coordinate and the predicted motion track, the object to be grabbed can still be grabbed when the grabbing scene is complex.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device in a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a robot arm grabbing method based on machine vision according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a robot arm grabbing method based on machine vision according to a second embodiment of the present invention;
fig. 4 is a structural block diagram of a first embodiment of the robot arm gripping device based on machine vision according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a robot arm gripping device based on machine vision in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the electronic device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as disk storage. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a robot grasping program based on machine vision.
In the electronic apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 of the electronic device according to the present invention may be disposed in a robot grasping device based on machine vision, and the electronic device calls a robot grasping program based on machine vision stored in the memory 1005 through the processor 1001 and executes the robot grasping method based on machine vision provided by the embodiment of the present invention.
An embodiment of the present invention provides a mechanical arm grabbing method based on machine vision, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of a mechanical arm grabbing method based on machine vision according to the present invention.
In this embodiment, the mechanical arm grabbing method based on machine vision includes the following steps:
step S10: continuously acquiring multi-frame images of an object to be grabbed, and determining the current state of the object to be grabbed according to the multi-frame images;
it should be noted that, the execution main body of this embodiment may be a robot gripping device based on machine vision, and the robot gripping device based on machine vision may be an electronic device such as a microcomputer, a server, etc., or may be another device capable of implementing the same or similar functions.
It should be noted that the object to be grabbed may be any object ready for grabbing, and the multiple frames of images may be collected continuously at a preset interval, for example one image every 20 milliseconds for a total of 10 images. The current state may be a motion state or a stationary state, and it can be determined from the collected images which of the two applies. For example: a reference object is selected in the acquired images, and the relative position of the object to be grabbed and the reference object is compared between earlier and later images; if the relative position stays unchanged throughout, the current state of the object to be grabbed is judged to be stationary, and if the relative position changes, the current state is judged to be a motion state.
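For illustration only (this sketch is not part of the original disclosure), the state decision above can be expressed in a few lines of Python; the position list, tolerance value and function name are assumptions introduced here, not terms of the invention.

    def classify_state(relative_positions, tolerance=2.0):
        """Decide 'motion' vs 'static' from the object's position relative
        to a fixed reference object in successively captured frames.

        relative_positions: one (x, y) offset per frame, e.g. sampled
        every 20 ms for 10 frames as in the example above.
        tolerance: largest pixel displacement still treated as static.
        """
        x0, y0 = relative_positions[0]
        for x, y in relative_positions[1:]:
            if abs(x - x0) > tolerance or abs(y - y0) > tolerance:
                return "motion"
        return "static"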
Step S20: when the current state is a motion state, predicting the motion track of the object to be grabbed according to the multi-frame image;
it can be understood that the motion state of the object to be grabbed in the current application scene is generally regular motion, and therefore, the motion track corresponding to the object to be grabbed can be determined through the change of the relative position of the selected reference object and the object to be grabbed in the multi-frame images.
Step S30: analyzing the multi-frame image, and determining object grabbing parameters according to the analysis result;
it should be noted that the object grabbing parameters may include grabbing position, grabbing angle and grabbing strength. By analyzing the multi-frame images, the object grabbing parameters required when the object to be grabbed is grabbed can be determined.
Further, in step S30 of this embodiment, the following steps may be performed:
step S301: analyzing the multi-frame image to obtain object characteristic data and object texture data of the object to be grabbed;
it should be noted that the object feature data may be feature data of the shape, size, contour, and the like of the object to be captured, which is identified according to an image identification technique, and the object texture data may be texture data of the object to be captured, which is extracted by performing texture identification on the object to be captured.
Further, in step S301 of this embodiment, the following steps may be performed:
performing texture recognition on the multi-frame image to obtain object texture data; analyzing the multi-frame image according to a preset multilayer perception network to obtain multilayer characteristic image data; and constructing object characteristic data according to the multilayer characteristic image data.
It should be noted that texture recognition performed on the multiple frames of images yields the material texture data of the object to be grabbed, that is, the object texture data. The preset multi-layer perception network may be built by introducing Inception modules (Inception structure) on top of an SSD (Single Shot MultiBox Detector) network; it can fuse bottom-layer features to enrich the shallow features. This preset network analyzes the images to obtain multi-layer feature image data, and the object feature data can be constructed from that multi-layer feature image data.
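Purely as a sketch of the feature-fusion idea (the patent gives no architecture details), a PyTorch module with Inception-style branches whose deep features are upsampled and added to a shallow map could look like this; all layer sizes and names are assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InceptionBlock(nn.Module):
        """Parallel 1x1 / 3x3 / 5x5 convolutions concatenated on channels."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            branch = out_ch // 3
            self.b1 = nn.Conv2d(in_ch, branch, kernel_size=1)
            self.b3 = nn.Conv2d(in_ch, branch, kernel_size=3, padding=1)
            self.b5 = nn.Conv2d(in_ch, out_ch - 2 * branch, kernel_size=5, padding=2)

        def forward(self, x):
            return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

    class FusedFeatureNet(nn.Module):
        """Backbone yielding multi-layer feature maps; the deep map is
        upsampled and fused into the shallow one to enrich shallow features."""
        def __init__(self):
            super().__init__()
            self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
            self.stage2 = nn.Sequential(InceptionBlock(32, 64), nn.MaxPool2d(2))
            self.stage3 = nn.Sequential(InceptionBlock(64, 128), nn.MaxPool2d(2))
            self.lateral = nn.Conv2d(128, 64, kernel_size=1)  # match channels for fusion

        def forward(self, x):
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            f3 = self.stage3(f2)
            # Fuse: upsample the deep map and add it to the shallow map
            fused = f2 + F.interpolate(self.lateral(f3), size=f2.shape[-2:], mode="nearest")
            return [fused, f3]  # the "multi-layer feature image data"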
Step S302: analyzing the object characteristic data, and determining a grabbing position and a grabbing angle;
it can be understood that the contour data of the object can be simulated and constructed according to the object characteristic data, and calculation and analysis can be performed according to the contour data, so that the available grabbing position and grabbing angle during grabbing can be obtained.
Further, in step S302 of this embodiment, the following steps may be performed:
performing data analysis on the object characteristic data through a priori frame to generate a candidate region of a grabbing position; and analyzing the candidate region of the grabbing position through a preset neural network model to obtain the grabbing position and the grabbing angle.
It should be noted that the prior boxes play the same role as the proposal boxes produced in the early stage of a region proposal algorithm: they generate candidate regions of grabbing positions in which a grippable part may exist. The preset neural network model is a neural network model trained on a large number of grabbing samples; it analyzes the candidate regions, extracts the grippable positions they contain together with the corresponding grabbing angles, ranks the grippable positions by feasibility, selects the grabbing position from all grippable positions according to the ranking result, and determines the corresponding grabbing angle.
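The prior-box generation and feasibility ranking could be sketched as below; the tiling scheme, the box sizes and the idea of passing in externally computed scores are illustrative assumptions, since the patent does not fix them.

    import numpy as np

    def generate_prior_boxes(feat_h, feat_w, stride, sizes=(32, 64)):
        """Tile prior boxes over a feature map; each box is a candidate
        region of a grabbing position that may contain a grippable part."""
        boxes = []
        for i in range(feat_h):
            for j in range(feat_w):
                cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
                for s in sizes:
                    boxes.append((cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2))
        return np.array(boxes)

    def select_grasp(boxes, scores, angles):
        """Rank candidates by the feasibility score the preset neural
        network model would assign, then return the best grabbing
        position (box centre) and its grabbing angle."""
        order = np.argsort(scores)[::-1]      # descending feasibility
        x1, y1, x2, y2 = boxes[order[0]]
        return ((x1 + x2) / 2, (y1 + y2) / 2), angles[order[0]]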
Step S303: determining a gripping force bearable interval of the object to be gripped according to the object texture data, and determining gripping force according to the gripping force bearable interval;
It should be noted that an object can bear different forces depending on the material it is made of. The material of the object to be grabbed can therefore be determined from the object texture data, the bearable force interval can be determined from that material, and the grabbing force can be chosen within the bearable interval, for example by taking the minimum value of the interval as the grabbing force.
Further, to illustrate how the bearable gripping-force interval is determined, in step S303 of this embodiment the following steps may be performed:
extracting texture features according to the object texture data; matching the texture features with the texture features of the objects of all material classes in a preset storage space to determine the material classes of the objects to be grabbed; and determining the bearable grabbing force interval of the object to be grabbed according to the material category.
It should be noted that the preset storage space may store texture features of objects of different material categories and bearing strength ranges corresponding to the material categories.
In actual use, texture features are extracted from the object texture data and matched against the texture features of every material category; this yields a texture matching degree for each category, the category with the highest matching degree is selected as the material category of the object to be grabbed, and the bearable gripping-force interval stored for that category is used as the bearable interval of the object.
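A minimal sketch of this matching step follows; the material entries, feature vectors and force intervals are invented placeholders standing in for the preset storage space.

    import numpy as np

    # Hypothetical preset storage space: texture feature vector and
    # bearable gripping-force interval (newtons) per material category.
    MATERIAL_LIBRARY = {
        "glass":   {"texture": np.array([0.9, 0.1, 0.3]), "force": (2.0, 8.0)},
        "plastic": {"texture": np.array([0.4, 0.6, 0.2]), "force": (3.0, 20.0)},
        "metal":   {"texture": np.array([0.2, 0.8, 0.9]), "force": (5.0, 80.0)},
    }

    def choose_grip_force(texture_feature):
        """Match the extracted texture feature against every material
        category, keep the category with the highest matching degree
        (cosine similarity here), and take the minimum of its bearable
        interval as the grabbing force."""
        def matching_degree(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        material = max(MATERIAL_LIBRARY,
                       key=lambda m: matching_degree(texture_feature,
                                                     MATERIAL_LIBRARY[m]["texture"]))
        low, _high = MATERIAL_LIBRARY[material]["force"]
        return material, low   # most conservative force in the interval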
Step S304: and establishing object grabbing parameters according to the grabbing positions, the grabbing angles and the grabbing strength.
It can be understood that the object grabbing parameters can be constructed by combining the grabbing position, the grabbing angle and the grabbing strength.
Step S40: acquiring the current coordinate of the mechanical arm, and determining the movement track of the mechanical arm according to the current coordinate and the motion track;
it should be noted that the current coordinate may represent the current position of the robot arm, and a suitable gripping distance may be calculated according to the gripping speed of the robot arm.
In actual use, the current coordinates of the mechanical arm can be obtained, and a target grabbing position is determined according to the motion track and the preset grabbing distance; and determining the moving track of the mechanical arm according to the current coordinate and the target grabbing position.
It should be noted that the preset grabbing distance may be a grabbing distance calculated from the grabbing speed of the mechanical arm, and the target grabbing position may be a position coordinate suitable for grabbing, calculated from the object's motion track, the moving speed of the mechanical arm and the preset grabbing distance.
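One plausible reading of this computation, sketched under the assumption that the target is the earliest predicted track point within reach of the arm:

    import numpy as np

    def target_grab_position(track, arm_xy, grab_distance):
        """Pick, from the predicted motion track, the first point whose
        distance to the arm does not exceed the preset grabbing distance.

        track: (N, 2) predicted object positions (see the prediction
        sketch above); arm_xy: current arm coordinates; grab_distance:
        reach derived from the arm's grabbing speed.
        """
        track = np.asarray(track, dtype=float)
        dists = np.linalg.norm(track - np.asarray(arm_xy, dtype=float), axis=1)
        reachable = np.nonzero(dists <= grab_distance)[0]
        if reachable.size == 0:
            return None                    # object never comes within reach
        return track[reachable[0]]         # earliest reachable track point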
In actual use, when the movement track of the mechanical arm is determined from the current coordinate and the target grabbing position, information on all obstacles between the two, such as obstacle coordinates and obstacle sizes, may be acquired, and the movement track planned so as to avoid the obstacles between the current coordinate and the target grabbing position.
Step S50: and controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
It can be understood that the mechanical arm can be controlled to move to the target grabbing position according to the mechanical arm moving track, and after the movement is completed, the mechanical arm can be controlled to grab the object to be grabbed according to the object grabbing parameters.
In this embodiment, multiple frames of images of the object to be grabbed are continuously collected, and the current state of the object is determined according to the multi-frame images; when the current state is a motion state, the motion track of the object is predicted according to the multi-frame images; the multi-frame images are analyzed, and object grabbing parameters are determined according to the analysis result; the current coordinate of the mechanical arm is acquired, and the movement track of the mechanical arm is determined according to the current coordinate and the motion track; and the mechanical arm is controlled to grab the object according to the object grabbing parameters based on the mechanical arm movement track. In this way, the object to be grabbed can still be grabbed when the grabbing scene is complex.
Referring to fig. 3, fig. 3 is a flowchart illustrating a robot arm grabbing method based on machine vision according to a second embodiment of the present invention.
Based on the first embodiment, in the step S50, the mechanical arm grabbing method based on machine vision in this embodiment specifically includes:
step S501: controlling the mechanical arm to move to a target grabbing position according to the mechanical arm moving track;
it should be noted that the target grabbing position may be a position coordinate suitable for grabbing according to the moving track, the moving speed of the mechanical arm, and the preset grabbing distance, and the mechanical arm may be controlled to move to the target grabbing position according to the moving track of the mechanical arm.
It should be noted that, while the mechanical arm is moving, the position of an obstacle may change due to accidents or other tasks, or an obstacle may appear on the planned movement track because obstacle information was not updated in time; in such cases, if the movement track of the mechanical arm is not corrected promptly, the arm may collide with the obstacle.
Further, in order to avoid collision between the mechanical arm and the obstacle, step S501 of this embodiment specifically includes:
controlling the mechanical arm to move according to the mechanical arm moving track; acquiring obstacle image data in the moving process of the mechanical arm; analyzing the obstacle image data to obtain obstacle contour information; and adjusting the movement track of the mechanical arm according to the obstacle profile information, and returning to the step of controlling the mechanical arm to move according to the movement track of the mechanical arm until the mechanical arm moves to a target grabbing position.
It should be noted that the obstacle contour information may include the size, shape and similar properties of an obstacle. Obstacle image data covering the region ahead of the arm's movement track is acquired while the arm moves, and analyzing it determines the obstacle contour information, from which it can be judged whether continuing the motion would cause a collision. When a collision is possible, the movement track of the mechanical arm is adjusted according to the obstacle contour information so that the arm avoids the obstacle; the arm then continues along the adjusted movement track, and these steps are repeated until the arm moves to the target grabbing position.
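The re-planning loop might be organized as below; the arm interface and the three injected callables (planner, obstacle detector, collision check) are placeholders for components the patent leaves unspecified.

    import math

    def move_with_avoidance(arm, target, plan_path, detect_obstacle,
                            will_collide, tolerance=1e-3):
        """Drive the arm along its movement track, re-planning whenever a
        newly observed obstacle contour intersects the remaining path.

        arm exposes .position and .step_along(path); plan_path,
        detect_obstacle and will_collide stand in for the trajectory
        planner, the obstacle-image analysis and the collision check.
        """
        path = plan_path(arm.position, target, obstacles=[])
        known_contours = []
        while math.dist(arm.position, target) > tolerance:
            contour = detect_obstacle()          # obstacle image data -> contour info
            if contour is not None and will_collide(path, contour):
                known_contours.append(contour)   # remember the new contour
                path = plan_path(arm.position, target,
                                 obstacles=known_contours)  # adjust the movement track
            arm.step_along(path)                 # continue along the (adjusted) track
        return arm.position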
Step S502: and controlling the mechanical arm to grab the object to be grabbed according to the target grabbing position and the object grabbing parameters.
It can be understood that after the robot arm moves to the target grabbing position, the robot arm can be controlled to grab the object to be grabbed at the target grabbing position according to the object grabbing parameters.
In this embodiment, the mechanical arm is controlled to move to the target grabbing position along the mechanical arm movement track, and then to grab the object to be grabbed according to the target grabbing position and the object grabbing parameters. By collecting obstacle image data during the movement, determining obstacle contour information from it, and adjusting the movement track accordingly, collisions caused by delayed data updates or changing obstacle positions can be effectively avoided, which further improves the safety of the machine vision-based mechanical arm grabbing method.
In addition, an embodiment of the present invention further provides a storage medium, where a machine vision-based robot grabbing program is stored on the storage medium, and the machine vision-based robot grabbing program, when executed by a processor, implements the steps of the machine vision-based robot grabbing method described above.
Referring to fig. 4, fig. 4 is a block diagram illustrating a first embodiment of a robot arm gripping device based on machine vision according to the present invention.
As shown in fig. 4, a robot arm gripping device based on machine vision according to an embodiment of the present invention includes:
the state confirmation module 401 is configured to continuously acquire multiple frames of images of an object to be grabbed, and determine a current state of the object to be grabbed according to the multiple frames of images;
a track calculating module 402, configured to predict a motion track of the object to be grabbed according to the multiple frames of images when the current state is a motion state;
a parameter determining module 403, configured to analyze the multiple frames of images, and determine an object capture parameter according to an analysis result;
a track determining module 404, configured to obtain the current coordinate of the mechanical arm, and determine the movement track of the mechanical arm according to the current coordinate and the motion track;
and a grabbing control module 405, configured to control the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
Through the cooperation of the above modules, multiple frames of images of the object to be grabbed are continuously collected and the current state of the object is determined according to the multi-frame images; when the current state is a motion state, the motion track of the object is predicted according to the multi-frame images; the multi-frame images are analyzed and object grabbing parameters are determined according to the analysis result; the current coordinate of the mechanical arm is acquired and the movement track of the mechanical arm is determined according to the current coordinate and the motion track; and the mechanical arm is controlled to grab the object according to the object grabbing parameters based on the mechanical arm movement track, so that the object to be grabbed can still be grabbed when the grabbing scene is complex.
Further, the parameter determining module 403 is further configured to analyze the multiple frames of images to obtain object feature data and object texture data of the object to be grabbed; analyze the object feature data and determine a grabbing position and a grabbing angle; determine a bearable gripping-force interval of the object to be gripped according to the object texture data, and determine the gripping force according to that interval; and establish the object grabbing parameters according to the grabbing position, the grabbing angle and the grabbing force.
Further, the parameter determining module 403 is further configured to perform texture recognition on the multiple frames of images to obtain object texture data; analyzing the multi-frame image according to a preset multilayer perception network to obtain multilayer characteristic image data; and constructing object characteristic data according to the multilayer characteristic image data.
Further, the parameter determining module 403 is further configured to perform data analysis on the object feature data through prior boxes to generate candidate regions of grabbing positions, and to analyze the candidate regions through a preset neural network model to obtain the grabbing position and the grabbing angle.
Further, the parameter determining module 403 is further configured to extract texture features according to the object texture data; matching the texture features with the texture features of the objects of all material classes in a preset storage space to determine the material classes of the objects to be grabbed; and determining the bearable grabbing force interval of the object to be grabbed according to the material category.
Further, the trajectory determining module 404 is further configured to obtain a current coordinate of the mechanical arm, and determine a target grabbing position according to the motion trajectory and a preset grabbing distance; and determining the moving track of the mechanical arm according to the current coordinate and the target grabbing position.
Further, the grabbing control module 405 is further configured to control the mechanical arm to move to a target grabbing position according to the mechanical arm movement track; and controlling the mechanical arm to grab the object to be grabbed according to the target grabbing position and the object grabbing parameters.
Further, the grabbing control module 405 is further configured to control the mechanical arm to move according to the mechanical arm movement track; acquiring obstacle image data in the moving process of the mechanical arm; analyzing the obstacle image data to obtain obstacle contour information; and adjusting the movement track of the mechanical arm according to the obstacle profile information, and returning to the step of controlling the mechanical arm to move according to the movement track of the mechanical arm until the mechanical arm moves to a target grabbing position.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may be referred to a robot arm grabbing method based on machine vision provided in any embodiment of the present invention, and are not described herein again.
Further, it is to be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A mechanical arm grabbing method based on machine vision is characterized by comprising the following steps:
continuously acquiring multi-frame images of an object to be grabbed, and determining the current state of the object to be grabbed according to the multi-frame images;
when the current state is a motion state, predicting the motion track of the object to be grabbed according to the multi-frame image;
analyzing the multi-frame image, and determining object grabbing parameters according to the analysis result;
acquiring the current coordinate of the mechanical arm, and determining the movement track of the mechanical arm according to the current coordinate and the motion track;
and controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
2. The machine-vision-based mechanical arm grabbing method according to claim 1, wherein the step of analyzing the multiple frames of images and determining object grabbing parameters according to the analysis result comprises:
analyzing the multi-frame image to obtain object characteristic data and object texture data of the object to be grabbed;
analyzing the object characteristic data, and determining a grabbing position and a grabbing angle;
determining a bearable gripping-force interval of the object to be gripped according to the object texture data, and determining the gripping force according to the bearable gripping-force interval;
and establishing object grabbing parameters according to the grabbing positions, the grabbing angles and the grabbing strength.
3. The machine-vision-based mechanical arm grabbing method of claim 2, wherein the step of analyzing the multiple frames of images to obtain object feature data and object texture data of an object to be grabbed comprises:
performing texture recognition on the multi-frame image to obtain object texture data;
analyzing the multi-frame image according to a preset multilayer perception network to obtain multilayer characteristic image data;
and constructing object characteristic data according to the multilayer characteristic image data.
4. The machine-vision-based mechanical arm grabbing method of claim 2, wherein the step of analyzing the object characteristic data to determine a grabbing position and a grabbing angle comprises:
performing data analysis on the object characteristic data through prior boxes to generate candidate regions of grabbing positions;
and analyzing the candidate region of the grabbing position through a preset neural network model to obtain the grabbing position and the grabbing angle.
5. The machine vision-based mechanical arm grabbing method of claim 2, wherein the step of determining the bearable gripping-force interval of the object to be grabbed according to the object texture data comprises the following steps:
extracting texture features according to the object texture data;
matching the texture features with the texture features of the objects of all material classes in a preset storage space to determine the material classes of the objects to be grabbed;
and determining the bearable grabbing force interval of the object to be grabbed according to the material category.
6. The machine-vision-based mechanical arm grabbing method according to any one of claims 1-5, wherein the step of acquiring the current coordinate of the mechanical arm and determining the movement track of the mechanical arm according to the current coordinate and the motion track comprises the following steps:
acquiring the current coordinate of the mechanical arm, and determining a target grabbing position according to the motion track and a preset grabbing distance;
and determining the moving track of the mechanical arm according to the current coordinate and the target grabbing position.
7. The machine-vision-based mechanical arm grabbing method according to any one of claims 1-5, wherein the step of controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track comprises the following steps:
controlling the mechanical arm to move to a target grabbing position according to the mechanical arm moving track;
and controlling the mechanical arm to grab the object to be grabbed according to the target grabbing position and the object grabbing parameters.
8. The machine-vision-based robot arm gripping method according to claim 7, wherein the step of controlling the robot arm to move to the target gripping position according to the robot arm movement trajectory comprises:
controlling the mechanical arm to move according to the mechanical arm moving track;
acquiring obstacle image data in the moving process of the mechanical arm;
analyzing the obstacle image data to obtain obstacle contour information;
and adjusting the movement track of the mechanical arm according to the obstacle profile information, and returning to the step of controlling the mechanical arm to move according to the movement track of the mechanical arm until the mechanical arm moves to a target grabbing position.
9. A mechanical arm grabbing device based on machine vision is characterized in that the mechanical arm grabbing device based on machine vision comprises the following modules:
the state confirmation module is used for continuously acquiring multi-frame images of the object to be grabbed and determining the current state of the object to be grabbed according to the multi-frame images;
the track calculation module is used for predicting the motion track of the object to be grabbed according to the multi-frame images when the current state is the motion state;
the parameter determining module is used for analyzing the multi-frame images and determining object grabbing parameters according to the analysis result;
the track determining module is used for acquiring the current coordinate of the mechanical arm and determining the movement track of the mechanical arm according to the current coordinate and the motion track;
and the grabbing control module is used for controlling the mechanical arm to grab the object to be grabbed according to the object grabbing parameters based on the mechanical arm moving track.
10. A machine vision-based mechanical arm grabbing equipment is characterized by comprising: memory, a processor and a machine vision based robot grabbing program stored on the memory and executable on the processor, which when executed by the processor performs the steps of the machine vision based robot grabbing method of any of claims 1-8.
CN202011642610.3A (priority date 2020-12-30, filing date 2020-12-30): Mechanical arm grabbing method, device and equipment based on machine vision. Status: Pending. Publication: CN112775967A (en)

Priority Applications (1)

Application Number: CN202011642610.3A
Priority Date: 2020-12-30
Filing Date: 2020-12-30
Title: Mechanical arm grabbing method, device and equipment based on machine vision

Publications (1)

Publication number: CN112775967A; publication date: 2021-05-11

Family ID: 75755157

Family Applications (1)

CN202011642610.3A (priority and filing date 2020-12-30), published as CN112775967A (en), country: CN



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3020303A1 (en) * 2014-04-25 2015-10-30 Sileane METHOD AND INSTALLATION FOR AUTOMATIC PRETENSION OF AN OBJECT.
CN105468033A (en) * 2015-12-29 2016-04-06 上海大学 Control method for medical suspension alarm automatic obstacle avoidance based on multi-camera machine vision
CN107363837A (en) * 2017-09-08 2017-11-21 桂林加宏汽车修理有限公司 A kind of method of manipulator control system and control machinery hand captures object
CN110271007A (en) * 2019-07-24 2019-09-24 广东工业大学 A kind of the grasping body method and relevant apparatus of mechanical arm
CN110744546A (en) * 2019-11-01 2020-02-04 云南电网有限责任公司电力科学研究院 Method and system for grabbing non-stationary lead by defect repairing robot
CN111015655A (en) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张向荣 et al.: "Pattern Recognition" (Artificial Intelligence Frontier Technology Series), 30 September 2019 *
陈云霁: "Intelligent Computing Systems", 28 February 2020 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111015667A (en) * 2019-12-27 2020-04-17 深圳前海达闼云端智能科技有限公司 Robot arm control method, robot, and computer-readable storage medium
CN113334395A (en) * 2021-08-09 2021-09-03 常州唯实智能物联创新中心有限公司 Multi-clamp mechanical arm disordered grabbing method and system
CN113334395B (en) * 2021-08-09 2021-11-26 常州唯实智能物联创新中心有限公司 Multi-clamp mechanical arm disordered grabbing method and system
CN114619465A (en) * 2022-03-28 2022-06-14 加西贝拉压缩机有限公司 Grabbing control method of robot clamp
CN114619465B (en) * 2022-03-28 2023-10-20 加西贝拉压缩机有限公司 Grabbing control method of robot clamp
CN115366098A (en) * 2022-07-29 2022-11-22 山东浪潮科学研究院有限公司 Sheet-like object grabbing system based on visual guidance
CN115648232A (en) * 2022-12-30 2023-01-31 广东隆崎机器人有限公司 Mechanical arm control method and device, electronic equipment and readable storage medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20210511)