CN108656107B - Mechanical arm grabbing system and method based on image processing - Google Patents


Info

Publication number
CN108656107B
Authority
CN
China
Prior art keywords
target
mechanical arm
targets
image processing
track
Prior art date
Legal status
Expired - Fee Related
Application number
CN201810300186.0A
Other languages
Chinese (zh)
Other versions
CN108656107A (en)
Inventor
刘晓锋
孙旭
罗晨爽
黎延熹
袁野
高旭宏
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN201810300186.0A
Publication of CN108656107A
Application granted
Publication of CN108656107B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a mechanical arm grabbing system based on image processing, comprising a mechanical arm module, a track simulation module and an image processing module. The track simulation module comprises a track optimization unit that optimizes the track of the mechanical arm according to the mechanical arm parameters, and an upper computer simulation unit that simulates and verifies the optimized track. The image processing module comprises a target identification unit that identifies HSV segmentation targets and non-HSV segmentation targets, the latter comprising feature point feature targets, OCR feature targets, artificial visual feature targets and difficult-to-identify targets. The grabbing system offers a wide application range, high target identification and tracking quality, high movement efficiency, ease of learning and upgrading, and rich application scenarios.

Description

Mechanical arm grabbing system and method based on image processing
Technical Field
The present application relates to the field of intelligent robots, and in particular, but not exclusively, to a robot arm grasping system and method based on image processing.
Background
The intelligent mechanical arm based on image processing is an industry with broad application prospects. Taking a static or dynamic target and a mechanical arm as the research objects, the target is identified and located by acquiring and processing target images, its motion state is obtained by target tracking, and the rotation angles of the mechanical arm are obtained by inverse solution of the arm's motion equations, so that target grabbing is realized. Image-processing-based intelligent mechanical arm technology is a core technology of industrial target grabbing, target detection and manufacturing automation, and a key technology for realizing Industry 4.0. In recent years, intelligent mechanical arms based on image processing have become a research hotspot in fields such as industrial AGVs (Automated Guided Vehicles), home service robots, rescue robots and logistics sorting robots, attracting the attention of scholars and engineers in many fields at home and abroad.
However, the existing image-processing-based intelligent mechanical arm technology suffers from a low recognition rate for complex targets, difficulty in tracking under complex target and background interference, low mechanical arm movement efficiency, single-purpose designs, and difficulty in learning and upgrading, and it lacks a fully functional visual trajectory optimization and simulation platform.
Disclosure of Invention
To at least partially remedy the defects of the prior art, the invention provides a mechanical arm grabbing system and method based on image processing, which offer a wide application range, high target identification and tracking quality, high movement efficiency, ease of learning and upgrading, and rich application scenarios.
According to an aspect of the invention, an image processing-based mechanical arm grabbing system is provided, which comprises a mechanical arm module, a track simulation module and an image processing module,
the mechanical arm module comprises an image acquisition unit, a communication unit and a control unit;
the track simulation module comprises a track optimization unit for optimizing the track of the mechanical arm according to the parameters of the mechanical arm and an upper computer simulation unit for simulating and verifying the optimized track of the mechanical arm,
the image processing module comprises a target identification unit for identifying HSV segmentation targets and non-HSV segmentation targets, wherein the non-HSV segmentation targets comprise feature point feature targets, OCR feature targets, artificial visual feature targets and difficult-to-identify targets;
the image acquisition unit transmits the acquired image information to the target identification unit through the communication unit, the target identification unit realizes the identification and positioning of the target to be grabbed, the position information of the target to be grabbed is transmitted to the control unit through the communication unit, the control unit carries out binocular vision positioning calculation to obtain the distance between the target to be grabbed and the mechanical arm, and the grabbing of the target to be grabbed is realized according to the mechanical arm track optimized by the track optimization unit.
For feature point feature class targets, the target identification unit extracts ORB feature points, screens feature point pairs with a brute-force (BM) matching algorithm under a Hamming distance threshold, computes the homography (H) matrix with the RANSAC algorithm, filters out outlier pairs by thresholding the reprojection error to obtain the feature matching pairs, and realizes identification and positioning of the target through template feature point matching and clustering.
For difficult-to-identify targets, the target identification unit downloads a data set using a Scrapy-based image crawler engine, performs target recognition training with a Caffe-SSD-based convolutional neural network, and realizes identification and positioning of the target through the feedforward pass of the Caffe SSD network.
Further, the Scrapy-based image crawler engine can comprise a Python environment setup, a Scrapy framework setup, a PyQT4 environment configuration and an image searching and crawling program, where the latter comprises a Header request wrapper, a multi-threaded file downloader, a timed Proxy updater, a reservoir-sampling content monitor and a Bloom filter link deduplicator.
Further, the image processing module may include a target tracking unit. When background interference is not excluded, the target tracking unit tracks simple color feature class targets using the LK optical flow method; for targets with large brightness changes and large movement distances, it tracks with MeanShift, using the previous frame's target velocity to predict and narrow the target search area. When background interference is excluded, the target tracking unit performs background modeling with the initial tracking frame and reduces the interference by the frame difference method.
Further, the track optimization unit can use the reciprocal of the sum of the minimum value of the mechanical arm movement time and the minimum value of the mechanical arm corner as a cost function, and solve the cost function through a genetic algorithm to obtain the optimal solution of the mechanical arm corner; and carrying out network training on the optimal solution of the discrete mechanical arm corner and the distance between the target to be grabbed and the mechanical arm through a BP neural network visualization platform, and establishing a distance-corner interpolation table.
Further, the BP neural network visualization platform may include Python environment setting, Matplotlib library configuration, PyGame configuration, a BP neural network program, and a visualization program, the BP neural network program includes a parameter configuration program, a feedforward calculation program, a back propagation error update program, a JSON and TXT format model persistence and a loading program, and the visualization program includes a network topology visualization program, a network parameter and weight visualization program.
Further, for HSV (hue, saturation, value) segmentation type targets, a control unit of the mechanical arm module can perform binocular vision positioning calculation through the shape center of the target to obtain the distance between the target to be grabbed and the mechanical arm; for non-HSV segmentation targets, the control unit can perform binocular vision positioning calculation through feature point matching to obtain the distance between the target to be grabbed and the mechanical arm.
According to another aspect of the invention, an image processing-based mechanical arm grabbing method is provided, which comprises the following steps:
S01: calibrating a binocular vision camera and obtaining camera calibration parameters to obtain the transformation matrix under the camera model;
S02: optimizing the motion track of the mechanical arm with a genetic algorithm, performing network training with a BP neural network visualization platform to construct a distance-corner interpolation table, and performing simulation verification;
S03: according to the type of the target to be grabbed: for HSV segmentation class targets, importing the segmentation prior knowledge into the image processing module; for feature point feature class targets and artificial visual feature class targets, importing the template into the image processing module; for OCR feature class targets, pre-training a LeNet convolutional neural network and importing the trained LeNet network into the image processing module; for difficult-to-identify targets, downloading a data set with a Scrapy-based image crawler engine, performing target recognition training through a Caffe-SSD-based convolutional neural network, and realizing identification and positioning of the target through the feedforward pass of the Caffe SSD network;
S04: transmitting images acquired by the binocular camera to the PC-side image processing module in real time through serial port communication;
S05: setting in the image processing module whether the target to be grabbed is static or dynamic; if static, identifying and positioning the target according to step S03 and then rotating the mechanical arm until the target is in the central area of the image,
if dynamic, tracking and positioning the target with the target tracking unit and then rotating the mechanical arm until the target is in the central area of the image;
S06: the control unit of the mechanical arm module performs binocular vision positioning calculation to obtain the distance of the target to be grabbed relative to the mechanical arm;
S07: looking up the distance-corner interpolation table obtained in step S02 with the distance obtained in step S06 to obtain the optimal rotation angles of the mechanical arm, and sending them to the mechanical arm module;
S08: the control unit of the mechanical arm module receives the optimal rotation angles, converts them into steering engine PWM values through a linear transformation, outputs the PWM values to the steering engines, completes grabbing of the target to be grabbed and resets the mechanical arm;
S09: during the communication of steps S04 to S08, monitoring in real time according to the communication protocol between the upper and lower computers, and stopping work when a fault occurs.
Further, in step S05, when background interference is not excluded, the target tracking unit tracks simple color feature class targets using the LK optical flow method; for targets with large brightness changes and large movement distances, it tracks with MeanShift, using the previous frame's target velocity to predict and narrow the target search area; when background interference is excluded, the target tracking unit performs background modeling with the initial tracking frame and reduces the interference by the frame difference method.
further, in step S06, for HSV split targets, the control unit may perform binocular vision positioning calculation through the shape center of the target, and for non-HSV split targets, the control unit may perform binocular vision positioning calculation through feature point matching.
Further, in step S01, camera calibration may be performed using Zhang Zhengyou's method and the MATLAB toolbox; in steps S03 and S06, the HSV segmentation may smooth the image with opening-then-closing morphological filtering and Gaussian filtering.
The invention has the beneficial effects that:
(1) The invention classifies common target images and provides corresponding detection methods for HSV segmentation targets, feature point feature targets, OCR feature targets, artificial visual feature targets and difficult-to-identify targets; it therefore has wider applicability and can solve the problem of mechanical arms recognizing more complex targets.
(2) For identifying feature point feature class targets, the invention extracts ORB feature points and screens feature point pairs with a brute-force (BM) matching algorithm under a Hamming distance threshold. After the matched pairs are preliminarily obtained, the RANSAC algorithm computes the H matrix, reprojection errors are obtained by iterating back through the H matrix, and outlier pairs are filtered out by thresholding. The image feature matches surviving the Hamming distance screening and the H matrix reprojection filtering are then K-means clustered and non-maximum suppressed to obtain the target position. This algorithm markedly reduces the feature matching error and yields more accurate matching results.
(3) The invention independently develops a Scrapy-based image crawler engine that realizes Header request wrapping, multi-threaded file downloading and timed Proxy updating, performs real-time sampled monitoring through reservoir sampling, and realizes link deduplication through a Bloom filter; it has real engineering value and solves problems such as the difficulty of producing data sets and the high cost of crowdsourced acquisition. After the data set is produced, the invention trains a Caffe SSD deep convolutional neural network to complete the recognition of difficult-to-identify targets. This scheme expands the application range of the mechanical arm, greatly enriches its application scenarios, overcomes the fixed functionality and poor upgradability of current mechanical arm models, and achieves a degree of intelligence.
(4) The invention limits the search area by combining the direction and the size of the target motion speed of the previous frame, obviously reduces the search range and improves the real-time performance of the algorithm. Meanwhile, the similarity of the RGB distribution histograms of the target and the search result thereof is used as a measurement standard of the tracking effect, when the similarity is smaller than a threshold value, the target is considered to be lost, the target is identified and positioned again, and the robustness of the system is improved.
(5) The BP neural network visualization platform has the functions of network layer node and weight visualization display, JSON and TXT file model persistence, user-defined activation function and parameters and the like. Compared with the traditional BP neural network algorithm platform, the BP neural network visualization platform has the advantages of more vividly and intuitively selecting the number of nodes and adjusting parameters.
(6) The upper computer simulation unit of the invention has two working modes of motion simulation and three-dimensional trajectory analysis. Under a motion simulation mode, displaying data such as a three-dimensional space state, the motion speed of the mechanical arm, angles of all connecting rods and the like in real time, and obtaining a top view and a left view of the motion of the mechanical arm in real time through space geometric projection; under the three-dimensional track analysis mode, the upper computer can store the spatial position of the motion of each connecting rod of the mechanical arm, record the motion track of the mechanical arm and perform projection recording and collision detection in real time. The upper computer simulation unit has the characteristics of simple codes and short development period, and can realize the functions of three-dimensional simulation, real-time projection, track recording, collision detection and the like.
(7) The mechanical arm grabbing system has good expandability due to the introduction of the deep learning network, namely, more complex functions of target identification, such as face identification detection, abnormal object detection and the like, can be added subsequently, and in addition, the grabbing system has low cost and wide application range.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments will be briefly described below.
Fig. 1 is a block diagram of the components of the image processing-based robotic arm grasping system of the present invention.
Fig. 2 is a plot of the cost function of the mechanical arm movement versus the mutation rate of the genetic algorithm of the present invention.
FIG. 3 is a diagram of the results of the present invention using genetic algorithm and engineering iteration respectively for planning and solving the trajectory of the mechanical arm.
Fig. 4 is an operation visualization effect diagram of the BP neural network visualization platform of the present invention.
Fig. 5 is a network visualization effect diagram of the BP neural network visualization platform of the present invention.
FIG. 6 is a result diagram of the simulation platform in the 3D simulation mode of the present invention, in which the 3D motion model of the robot arm is marked with a "3D" identifier and rendered by three-dimensional projection; the lower area of the interface shows the physical parameters and operating parameters of the robot arm.
Fig. 7 is a result diagram of the track-recording operating mode of the mechanical arm simulation platform of the present invention, in which the "3D" mark denotes the 3-dimensional motion model of the mechanical arm, the lower left shows the physical parameters and operating parameters of the mechanical arm, and the lower right area shows the three-dimensional motion track of the mechanical arm.
FIG. 8 is a graph of the experimental results of the HSV threshold segmentation algorithm of the present invention.
FIG. 9 is a graph of the effect of shape fitting using the Hu moment of the present invention.
FIG. 10 is a flow chart of the improved algorithm for feature point matching in recognition positioning of the present invention.
Fig. 11 is a comparison graph of the result of the improved algorithm for feature point matching in the identification and positioning of the present invention and the result of the conventional non-improved algorithm.
FIG. 12 is a diagram illustrating the image preprocessing and segmentation effect of identifying and locating an object of the OCR feature class according to the present invention.
Fig. 13 is a graph of the LeNet convolutional neural network (top panel) and the operating results (bottom panel) used in the present invention.
FIG. 14 is a Caffe SSD convolutional neural network topology (top) and a deep learning run monitor (bottom) used with the present invention.
Fig. 15 shows the operation results of the MeanShift-based tracking algorithm used in the present invention, which realizes tracking of an intelligent vehicle.
Fig. 16 is an operation flowchart of the robot gripping method based on image processing according to the present invention.
Fig. 17 is a diagram of a binocular vision model (left) and a pinhole camera model (right) of the present invention.
Fig. 18 is a schematic structural diagram (left view) and a block diagram (right view) of the stereoscopic robot arm module according to the present invention.
FIG. 19 shows the software interface and operation results of the image processing module of the present invention; the module is implemented in C++ with Qt 5.7 under a MinGW compilation environment, and the UI was designed with Qt Designer 4.3.
Fig. 20 shows the software interface and collection results of the image collection unit of the present invention. The upper diagram is the image data collection platform interface built on PyQT and Scrapy, which implements Header request wrapping, multi-threaded file downloading and timed Proxy updating, performs real-time sampled monitoring through reservoir sampling, and implements link deduplication through a Bloom filter; the lower diagram shows the collection results.
FIG. 21 is a data set authoring platform and editing software of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only illustrative and are not intended to limit the present application.
Fig. 1 is a block diagram of the robot grasping system based on image processing according to the present invention, which includes a robot arm module, a trajectory simulation module and an image processing module. The robot arm module comprises an image acquisition unit, a communication unit, a control unit, steering engines and the mechanical arm. The image acquisition unit consists of a binocular CCD camera and acquires the image information; the control unit comprises an embedded control board built around an Arduino UNO controller, which can power the system through its USB interface; the communication unit comprises serial communication and Bluetooth communication, where the serial link transmits the images collected by the binocular CCD camera to the image processing module on the PC side, and the Bluetooth link, realized with an HC05 module in master-slave mode, transmits the rotation angle information produced by the image processing module to the embedded control board.
The track simulation module comprises a track optimization unit for optimizing the track of the mechanical arm according to the parameters of the mechanical arm and an upper computer simulation unit for simulating and verifying the optimized track of the mechanical arm.
The track optimization unit uses the sum of the mechanical arm motion time and the mechanical arm rotation angle (both to be minimized) as the cost function and solves it with a genetic algorithm, the optimization aim being to minimize the cost function. Network training is then performed, through the BP neural network visualization platform, on the discrete optimal corner solutions and the corresponding distances of the target to be grabbed relative to the mechanical arm, and a distance-corner interpolation table is established (where the distance is that of the target relative to the mechanical arm and the corner is the rotation angle of the mechanical arm). Finally, the real-time track of the mechanical arm is optimized by looking up this table.
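As an illustration of the final lookup step, a minimal Python sketch is given below; the table values, array names and the use of np.interp as the linear-interpolation routine are assumptions for illustration, not data from the patent.

```python
import numpy as np

# Hypothetical distance-corner table: target distances (cm) and, per distance,
# the three link rotation angles (deg) produced by the GA + BP training stage.
TABLE_DIST = np.array([10.0, 15.0, 20.0, 25.0, 30.0])
TABLE_ANGLE = np.array([[12.0, 35.0, 70.0],
                        [18.0, 41.0, 64.0],
                        [25.0, 48.0, 57.0],
                        [33.0, 56.0, 49.0],
                        [42.0, 65.0, 40.0]])

def lookup_corners(distance_cm):
    """Linearly interpolate each link angle against the measured distance."""
    return np.array([np.interp(distance_cm, TABLE_DIST, TABLE_ANGLE[:, j])
                     for j in range(TABLE_ANGLE.shape[1])])

print(lookup_corners(17.3))  # angles for a target 17.3 cm away
```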
The genetic algorithm used by the invention computes the sum of the absolute values of the mechanical arm rotation angles with a three-link mathematical model; the motion time of the mechanical arm is the ratio of its maximum rotation angle to its movement speed, and the reciprocal of the sum of the motion time and the total rotation is used as the fitness function of the genetic algorithm.
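A minimal sketch of this fitness function under the three-link model follows; the uniform joint speed of 60 deg/s is an assumed value, not one given in the patent.

```python
import numpy as np

JOINT_SPEED = 60.0  # assumed uniform joint speed, deg/s

def fitness(corners_deg):
    """GA fitness = 1 / (motion time + total rotation), per the description.

    corners_deg: candidate rotation angles (deg) of the three links.
    The joints move simultaneously, so the motion time is the largest
    rotation divided by the joint speed.
    """
    total_rotation = float(np.sum(np.abs(corners_deg)))
    motion_time = float(np.max(np.abs(corners_deg))) / JOINT_SPEED
    return 1.0 / (motion_time + total_rotation)

print(fitness([30.0, -45.0, 20.0]))  # larger is better for the GA
```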
Fig. 2 plots the cost function of the mechanical arm movement against the mutation rate of the genetic algorithm. The plot reflects how the cost function value changes with the mutation rate: when the mutation rate is large, the genetic algorithm tends toward random search and has difficulty stabilizing at an optimal value; when the mutation rate is small, it is difficult to find the optimum within a limited number of runs. In particular, the cost function value is smallest at a mutation rate of 0.3.
For a clearer illustration, the invention compares the solution precision and operating efficiency of the genetic-algorithm trajectory optimization with those of a traditional engineering iterative algorithm. As shown in Fig. 3, the genetic algorithm obtains a smaller value of the motion cost function than the engineering iteration with a step size of 0.001 cm. As shown in Tables 1 and 2, with an iteration step size of 0.0001 cm the engineering iteration's running time (16.89 s) is far higher than that of the genetic algorithm (2.411 s), while with a step size of 0.001 cm its running time (1.641 s) is slightly lower but its solution accuracy is worse. Therefore, compared with traditional numerical iteration, the genetic algorithm offers better overall efficiency and a more accurate solution.
TABLE 1 track optimization solution of genetic algorithm of the present invention and the conventional engineering iterative algorithm (step size of 0.001cm) solution precision comparison results
[Table 1 is reproduced only as an image in the original publication.]
TABLE 2 comparison of the trajectory optimization solution of the genetic algorithm of the present invention and the efficiency of the conventional engineering iterative algorithm
Algorithm                       Mean time (s)
Iteration (precision 0.001)     1.641
Iteration (precision 0.0001)    16.89
Genetic algorithm               2.411
The BP neural network visualization platform used by the invention comprises a Python environment setup, Matplotlib library configuration, PyGame configuration, the BP neural network program and the visualization program. The BP neural network program comprises a parameter configuration program, a feedforward calculation program, a back-propagation error update program, and JSON/TXT-format model persistence and loading programs. The visualization program comprises a network topology visualization program and a network parameter and weight visualization program. The platform is developed in Python with the open-source Numpy and Matplotlib libraries, and provides visualization of network layer nodes and weights, JSON and TXT file model persistence, and user-defined activation functions and parameters.
Fig. 4 shows the operation visualization of the BP neural network visualization platform of the present invention, which visualizes the weights and biases of two hidden layers. The two hidden layers (the two middle layers) serve as a simple illustration: dark connecting lines (four in total) represent positive weights, light connecting lines (six in total) represent negative weights, the thickness of a line represents the magnitude of its weight, and the numerical value is annotated at the head of each line, as shown in the figure. The platform supports four initialization rules: random initialization, Gaussian-distribution initialization, custom initialization, and a "1/0" initialization. Random initialization draws w and b uniformly from [0, 1]; Gaussian initialization draws w and b from a Gaussian distribution on [0, 1]; custom initialization draws random numbers from a user-defined interval; and the "1/0" rule initializes all w to 1 and all b to 0.
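The four initialization rules can be sketched as follows; the Gaussian parameters (mean 0.5, sigma 0.25, clipped into [0, 1]) are an assumption, since the description only states that the samples follow a Gaussian distribution on [0, 1].

```python
import numpy as np

def init_params(shape, rule="random", low=0.0, high=1.0):
    """Initialize a weight array under one of the four rules above.

    For the 'gaussian' rule, the mean/sigma (0.5 / 0.25, clipped to [0, 1])
    are assumptions; 'custom' uses the caller-supplied interval; the '1/0'
    rule returns ones (use np.zeros(shape) for the biases).
    """
    if rule == "random":
        return np.random.uniform(0.0, 1.0, shape)
    if rule == "gaussian":
        return np.clip(np.random.normal(0.5, 0.25, shape), 0.0, 1.0)
    if rule == "custom":
        return np.random.uniform(low, high, shape)
    if rule == "one_zero":
        return np.ones(shape)
    raise ValueError("unknown rule: %s" % rule)
```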
Fig. 5 shows the network visualization of the BP neural network visualization platform of the present invention; the upper diagram is the network parameter description panel and the lower diagram is the network topology visualization result, where "Type" denotes the node type; "Nodes Num" the number of nodes; "Input" the input layer; "Hidden" the hidden layer; "Output" the output layer; "Layer" the layer; "Avi function" the activation function; and "equivalent" the identity function. In total, the activation function field offers four types: "sigmod" (sigmoid), "tanh", "equivalent" (identity) and "ReLu" (ReLU).
Obviously, as can also be seen from fig. 4 and 5, compared with the conventional BP neural network algorithm platform, the BP neural network visualization platform of the present invention has the advantage of more visually and intuitively performing node number selection and parameter adjustment.
The upper computer simulation unit is developed by using an open source platform Processing library based on Java, and three-dimensional simulation of the mechanical arm is performed by using an OpenGL rendering technology. The upper computer program comprises a three-dimensional simulation program, a real-time projection program, a track recording program and a collision detection program.
Fig. 6 is a result diagram of the simulation platform in the 3D simulation working mode of the present invention, and Fig. 7 is a result diagram of the trajectory-recording working mode of the robotic arm simulation platform of the present invention, wherein "View Angle" denotes the user viewing angle; "h" the robot base height; "l1" the length of link 1; "l2" the length of link 2; "l3" the length of link 3 (the manipulator); "Speed 1" the movement speed of link 1; "Speed 2" the movement speed of link 2; "Speed 3" the movement speed of link 3 (the manipulator); "Angle 1" the rotation angle of the base; "Angle 2" the rotation angle of link 1; "Angle 3" the rotation angle of link 2; "Real-angle 1" the real-time rotation angle of the base; "Real-angle 2" the real-time rotation angle of link 1; "Real-angle 3" the real-time rotation angle of link 2; "Objx", "Objy" and "Objz" the x-, y- and z-axis coordinates of the grasped object; "HandFlag" the grab-detection flag; "Conditions" the grab state; "Close" closed; "Finished" complete; "Stragedy" the mechanical arm motion strategy; and "Run constraint" the synchronous motion strategy. As shown in Figs. 6 and 7, the upper computer simulation unit has two working modes: motion simulation and three-dimensional trajectory analysis. In motion simulation mode, the unit displays the three-dimensional spatial state, the movement speed of the mechanical arm and the angles of all links in real time, and obtains top and left views of the arm's motion in real time by spatial geometric projection; in three-dimensional trajectory analysis mode, the unit stores the spatial position of each link's motion, records the motion track of the mechanical arm, and performs projection recording and collision detection in real time. The upper computer simulation unit features simple code and a short development cycle, and realizes three-dimensional simulation, real-time projection, track recording, collision detection and related functions.
Particularly, the image processing module comprises a target identification unit for identifying HSV segmentation targets and non-HSV segmentation targets, wherein the non-HSV segmentation targets comprise feature point feature targets, OCR feature targets, artificial visual feature targets and difficult-to-identify targets.
For HSV segmentation targets, target identification and positioning are realized by HSV visual segmentation, binarization, morphological operations, Gaussian filtering and Hu moment fitting approximation.
FIG. 8 shows the experimental results of the HSV threshold segmentation algorithm of the present invention, which successfully identifies and locates a purple spherical object of radius 4 cm at a distance of 2 m. Fig. 9 shows the shape-fitting effect using Hu moments: when the HSV segmentation target is occluded, causing large interference in the HSV binary segmentation (left), Hu moments are introduced to fit the shape and compute the centroid, successfully restoring the target shape (right) and reducing the errors caused by occlusion.
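A minimal OpenCV sketch of this HSV pipeline (segmentation, opening-then-closing morphology, Gaussian smoothing, centroid from image moments) is shown below; the HSV bounds for the purple target are assumed values, not those used in the experiment.

```python
import cv2
import numpy as np

# Assumed HSV prior for the purple target; the system imports such priors
# per target class (see step S03).
LOWER = np.array([125,  80,  60], dtype=np.uint8)
UPPER = np.array([155, 255, 255], dtype=np.uint8)

def locate_hsv_target(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)                   # HSV segmentation

    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # opening first
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # then closing
    mask = cv2.GaussianBlur(mask, (5, 5), 0)                # Gaussian smoothing

    m = cv2.moments(mask)              # image moments give the shape centroid
    if m["m00"] == 0:
        return None                    # no target pixels found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```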
For feature point feature class targets, ORB features (FAST keypoints with BRIEF descriptors) are extracted from the template and the image, and feature point pairs are screened with a brute-force (BM) matching algorithm under a Hamming distance threshold to preliminarily reduce matching errors. After the preliminary point pairs are obtained, the H matrix is computed with the RANSAC algorithm, the reprojection error is obtained by iterating back through the H matrix, and outlier pairs are filtered out by thresholding. The doubly filtered image feature matches are then matched against the template, K-means clustered and non-maximum suppressed to obtain the position of the target to be grabbed.
Fig. 10 is a flowchart of the improved feature point matching algorithm used in the recognition and positioning of the present invention. The improvement is to solve for the H matrix after the threshold-based screening of feature point pairs and to detect and remove outliers by iterating back through the H matrix, improving the accuracy of feature matching. Fig. 11 compares the results of the improved algorithm (right) with those of the conventional, unimproved algorithm (left). Compared with the traditional algorithm, the improved algorithm reduces the feature matching error and yields more accurate matching results.
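The improved pipeline maps closely onto standard OpenCV primitives; the following sketch uses assumed thresholds (HAMMING_MAX, REPROJ_MAX) and leaves the final K-means clustering step to the caller.

```python
import cv2
import numpy as np

HAMMING_MAX = 40    # assumed Hamming-distance screening threshold
REPROJ_MAX = 3.0    # assumed reprojection-error threshold (pixels)

def match_target(template_gray, scene_gray):
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template_gray, None)
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return None

    # First filter: brute-force matching of the binary descriptors,
    # screened by the Hamming-distance threshold.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = [m for m in bf.match(des_t, des_s) if m.distance < HAMMING_MAX]
    if len(matches) < 4:
        return None                      # too few pairs for a homography

    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Second filter: RANSAC estimates H, and its inlier mask drops pairs
    # whose reprojection error exceeds the threshold.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, REPROJ_MAX)
    if H is None:
        return None
    inliers = dst.reshape(-1, 2)[mask.ravel() == 1]
    return H, inliers   # the inliers are then clustered (e.g. K-means)
```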
For OCR feature class targets, adaptive HSV segmentation performs the image preprocessing, a LeNet deep convolutional network is trained, and classification by that network yields the position of the target to be grabbed and the values of the OCR characters, realizing target recognition and positioning. FIG. 12 shows the image preprocessing and segmentation effect for OCR feature class targets: the left image is the original input, the middle image shows its initial thresholding and binarization, and the right image shows the segmentation result.
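As a sketch, a trained LeNet Caffe model can be run through OpenCV's DNN module; the file names and the 28x28 input size below are assumptions.

```python
import cv2
import numpy as np

# Placeholder paths for the trained LeNet deployment files.
net = cv2.dnn.readNetFromCaffe("lenet_deploy.prototxt", "lenet.caffemodel")

def classify_character(roi_gray):
    """Feed one segmented character image through LeNet; return the class id."""
    blob = cv2.dnn.blobFromImage(cv2.resize(roi_gray, (28, 28)),
                                 scalefactor=1.0 / 255.0)
    net.setInput(blob)
    scores = net.forward()        # one score per character class
    return int(np.argmax(scores))
```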
And for the artificial visual feature class target, carrying out target identification and positioning by adopting a scanning discrimination mode.
For difficult-to-identify targets, a data set is downloaded with the Scrapy-based image crawler engine, target recognition training is performed with the Caffe-SSD-based convolutional neural network, and target identification and positioning are realized through the feedforward pass of the network.
The Scrapy-based image crawler engine comprises a Python (Python 2.7) environment setup program, a Scrapy framework setup program, a PyQT4 environment configuration program and an image searching and crawling program; the latter comprises a Header request wrapper, a multi-threaded file downloader, a timed Proxy updater, a reservoir-sampling content monitor and a Bloom filter link deduplicator. As shown in Fig. 20, the engine implements Header request wrapping, multi-threaded file downloading and timed Proxy updating, performs real-time sampled monitoring through reservoir sampling, and realizes link deduplication through a Bloom filter; it has real engineering value and solves problems such as the difficulty of producing data sets and the high cost of crowdsourced acquisition.
In Fig. 20, the interface labels have the following meanings:

English label                 Meaning
Crawler For Deep Learning     deep-learning crawler
Begin                         start
Stop                          stop
Settings                      settings
Progress                      progress
Display                       display
Count Time                    total elapsed time
Current Time                  time spent so far
Need Time                     time remaining
PROXY_IP                      use IP proxies
LOG                           log
PhoNums                       total number of pictures
Threads                       number of threads
BeginNums                     starting number
DispNums                      number of pictures displayed
DispGaps                      display interval
Recent                        show the latest pictures
Pooling                       reservoir-sampling switch for sampled picture display
MosiTimes                     crawl interval
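Of the crawler components above, the reservoir-sampling content monitor is the most self-contained; a minimal sketch is given below (class and method names are illustrative, not taken from the patent's program).

```python
import random

class ReservoirMonitor:
    """Keep a uniform random sample of k downloaded items for display
    without storing the whole (unbounded) download stream."""

    def __init__(self, k):
        self.k = k
        self.seen = 0
        self.sample = []

    def add(self, item):
        self.seen += 1
        if len(self.sample) < self.k:
            self.sample.append(item)          # fill the reservoir first
        else:
            j = random.randrange(self.seen)   # uniform over all items seen
            if j < self.k:
                self.sample[j] = item         # kept with probability k/seen
```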
FIG. 21 shows the data set production platform and editing software of the present invention: the upper view is the data set production platform, with functions including file import, uniform filename formatting, image scaling, image deletion and progress display; the lower view is the image editing software, which realizes manual labeling of target objects and generation of the data set.
In Fig. 21, the interface labels have the following meanings:

English label        Meaning
Image Dir            picture directory
Pre                  previous
Next                 next
Load                 load
Delete               delete
Clear All            clear all
Go to Images No.     go to picture number
Bounding boxs        bounding-box boundaries
After the data set is produced, the Caffe SSD deep convolutional neural network is trained with Caffe and CUDA 8.0, under Ubuntu with a GT630 graphics card, to complete the recognition of difficult-to-identify targets. Fig. 14 shows the Caffe SSD convolutional neural network topology used in the present invention (top; running under the Linux Ubuntu 16.04 system) and the deep-learning run-monitoring program (bottom). The monitoring program watches the deep-learning program while it runs; when it receives a "JP" message sent from a mobile phone to a monitored mailbox (retrieved over the POP3 protocol via the Python SMTP/POP libraries), it captures the screen and sends the screenshot back to the designated target mailbox over the SMTP protocol.
In particular, the image processing module may further comprise a target tracking unit. When background interference is not excluded, simple color feature class targets are tracked with the LK optical flow method; for targets with large brightness changes and large movement distances, the MeanShift-based tracking algorithm is optimized: the direction and magnitude of the previous frame's target velocity markedly reduce the search range, improving the real-time performance of the algorithm. Meanwhile, the similarity between the RGB distribution histograms of the target and of the search result serves as the measure of tracking quality; when the similarity falls below a threshold, the target is considered lost and is re-identified and re-positioned, improving the robustness of the system. When background interference is excluded, the initial tracking frame is used for background modeling and the interference is reduced by the frame difference method, strengthening the robustness of target tracking.
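A minimal OpenCV sketch of one tracking step, combining the velocity-based window prediction with the RGB-histogram loss check, is shown below; the similarity threshold and the use of the correlation metric are assumptions.

```python
import cv2
import numpy as np

SIM_THRESH = 0.5   # assumed similarity cutoff below which the target is "lost"
CRIT = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

def rgb_hist(patch_bgr):
    hist = cv2.calcHist([patch_bgr], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist)

def track_step(frame_bgr, window, hue_hist, model_rgb_hist, velocity):
    # Pre-shift the search window along the previous frame's velocity so
    # MeanShift starts near the predicted target position.
    x, y, w, h = window
    x, y = int(x + velocity[0]), int(y + velocity[1])

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hue_hist, [0, 180], 1)
    _, window = cv2.meanShift(back_proj, (x, y, w, h), CRIT)

    # RGB-histogram similarity between the model and the found region
    # decides whether the target was lost.
    x, y, w, h = window
    sim = cv2.compareHist(model_rgb_hist,
                          rgb_hist(frame_bgr[y:y + h, x:x + w]),
                          cv2.HISTCMP_CORREL)
    return window, sim >= SIM_THRESH   # False -> re-run detection
```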
On the basis of locating the target to be grabbed, the control unit of the mechanical arm module obtains the distance of the target relative to the mechanical arm through binocular matching disparity calculation in the binocular vision positioning program, then looks up the distance-corner interpolation table with the linear interpolation program to find the corner values bracketing the measured distance, and obtains the optimal rotation angles of the mechanical arm by linear interpolation.
In particular, for HSV segmentation targets, Canny edge detection and Hu moment fitting are performed on the images from the binocular CCD cameras to obtain the edge point sets; the centroids are then computed and the difference of their pixel coordinates is taken as the binocular disparity. For non-HSV segmentation targets, ORB feature points are extracted inside the target identification box, feature point pairs are screened by Hamming distance, the H matrix is computed with RANSAC, outliers are removed by the second filtering stage to achieve feature-matched positioning, and the average disparity of the matched pairs is taken as the binocular disparity. The spatial position of the target to be grabbed and its distance relative to the mechanical arm are then computed from the camera parameters and the binocular disparity.
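Once the binocular disparity is in hand, the distance follows from the calibrated pinhole stereo model of Fig. 17; a minimal sketch with an assumed focal length and baseline:

```python
def stereo_distance_mm(disparity_px, focal_px, baseline_mm):
    """Calibrated pinhole stereo model (cf. Fig. 17): Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# With an assumed 700 px focal length and 60 mm baseline, a 21 px disparity
# places the target 2000 mm away:
print(stereo_distance_mm(21.0, focal_px=700.0, baseline_mm=60.0))
```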
Tables 3 and 4 give the stereo-vision positioning experiment results for the HSV segmentation targets and the non-HSV segmentation targets of the present invention, respectively. The positioning errors of the two average roughly 2% and stay below 4%, which is sufficient for practical use.
TABLE 3 stereovision positioning experiment results of HSV targets of the present invention
Actual distance (mm)             100      150      200      250      270      300      350
Vision-positioned distance (mm)  96.985   147.68   196.91   245.21   274.45   288.81   341.62
Positioning error                3.01%    1.565%   1.55%    1.92%    1.65%    3.73%    2.68%
TABLE 4 stereovision localization experiment results for non-HSV targets of the invention
Actual distance (mm)             100      150      200      250      270      300      350
Vision-positioned distance (mm)  97.13    147.29   196.14   244.35   263.66   306.43   339.52
Positioning error                2.73%    1.81%    1.94%    2.26%    2.35%    2.41%    3.18%
Particularly, the invention provides a mechanical arm grabbing method based on image processing, which comprises the following steps:
s01: and (3) performing binocular vision camera calibration by using a Zhang-friend method and an MATLAB tool box, calculating camera calibration parameters to obtain a conversion matrix under a pinhole camera model, and reducing radial and tangential distortions of the camera. The binocular stereoscopic vision algorithm uses a calibrated pinhole camera model binocular vision algorithm and has low error.
S02: and (3) optimizing and calculating the cost function by using a genetic algorithm to obtain the optimal solution of the rotation angle of the mechanical arm at the step length of 10-30cm under the step length of 0.2 cm. And (3) performing network training by using the obtained series of data as training data and using a BP neural network visualization platform, adjusting parameters such as learning rate, step size, batch size, network layer number, hidden node number, activation function and the like to enable the BP neural network model to be optimal on a test set, constructing a distance-corner interpolation table, and performing simulation verification by using an upper computer simulation unit.
S03: the method comprises the steps of specifying the type of a target to be identified, and importing prior knowledge of segmentation to an image processing module for HSV segmentation type targets; for the feature point feature class target and the artificial visual feature class target, the template is led into an image processing module; for OCR characteristic targets, pre-training a LeNet convolutional neural network, and importing the trained LeNet network into an image processing module; and for the target which is difficult to identify, downloading a data set by an image capture engine based on Scapy, identifying and training the target by a convolutional neural network based on Caffe SSD, and identifying and positioning the target by a feedforward process of the convolutional neural network of the Caffe SSD.
S04: the mechanical arm module is started, and images collected by the binocular camera are transmitted to the PC-side image processing module in real time through serial port communication.
S05: in the image processing module, the static type and the dynamic type of the target to be grabbed are set, if the target is set to be the static type, the target is identified and positioned according to the step S03, and then the mechanical arm rotates until the target is in the central area of the image; if the target is set to be in a dynamic type, the target is tracked and positioned by using the target tracking unit, and then the mechanical arm rotates until the target is in the central area of the image.
S06: whether background interference is eliminated or not is set on the image processing module, and if the background interference is not eliminated, target tracking is carried out by using the MeanShift algorithm of the invention; and if the background interference is set to be eliminated, background modeling is carried out, and the target tracking is carried out by using a frame difference method. And simultaneously, the mechanical arm rotates until the moving target object is in the central area of the image.
S07: the control unit of the mechanical arm module performs binocular vision positioning calculation to obtain the distance of the target to be grabbed relative to the mechanical arm.
S08: the distance-rotation angle interpolation table obtained in step S02 is checked using the distance obtained in step S07 to obtain the optimum rotation angle of the robot arm, and the optimum rotation angle is sent to the stereoscopic vision robot arm module.
S09: the embedded control board of the mechanical arm module receives the optimal rotation angles, converts them into steering engine PWM values through a linear transformation, and outputs the PWM values to the steering engines, completing the grabbing of the target to be grabbed and resetting the mechanical arm.
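The linear angle-to-PWM transformation can be sketched as follows; the 500-2500 microsecond pulse range over 0-180 degrees is a common hobby-servo convention assumed here, not a value from the patent.

```python
def corner_to_pwm_us(angle_deg, min_us=500, max_us=2500, max_deg=180.0):
    """Linearly map a joint angle to a servo pulse width in microseconds."""
    angle_deg = max(0.0, min(max_deg, angle_deg))   # clamp into range
    return int(round(min_us + (max_us - min_us) * angle_deg / max_deg))

print(corner_to_pwm_us(90.0))   # -> 1500 (mid position)
```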
S10: in the communication process from the step S04 to the step S09, detection is performed in real time according to the communication protocol of the upper computer (i.e., the image processing module and the trajectory simulation module) and the lower computer (i.e., the mechanical arm hardware module), and the operation is stopped when a fault occurs.
In particular, the software preparation of the image processing module mainly comprises setting up the MinGW C++ compilation environment, configuring the Qt 5.7 environment, and the upper computer program; the UI is designed with Qt Designer 4.3, and the program executes efficiently. The upper computer program comprises a Bluetooth communication program, a serial port communication program, a parameter configuration program, a target identification and tracking program, and binocular vision positioning and linear interpolation programs.
Particularly, the software preparation of the stereoscopic vision mechanical arm module mainly comprises HC-05 Bluetooth module master-slave mode configuration, hardware drive, steering engine drive configuration and stereoscopic vision mechanical arm control program design. The stereoscopic vision mechanical arm control program comprises a Bluetooth wireless communication program, a steering engine PWM calculation and control program.
In particular, the embedded development carrier of the stereoscopic vision manipulator module of the invention is an open-source hardware Arduino platform.
The above applications are only some embodiments of the present application. It will be apparent to those skilled in the art that various changes and modifications can be made without departing from the inventive concept herein, and it is intended to cover all such modifications and variations as fall within the scope of the invention.

Claims (9)

1. A mechanical arm grabbing system based on image processing is characterized by comprising a mechanical arm module, a track simulation module and an image processing module,
the mechanical arm module comprises an image acquisition unit, a communication unit and a control unit;
the track simulation module comprises a track optimization unit for optimizing the track of the mechanical arm according to the parameters of the mechanical arm and an upper computer simulation unit for simulating and verifying the optimized track of the mechanical arm,
the image processing module comprises a target identification unit for identifying HSV segmentation targets and non-HSV segmentation targets, wherein the non-HSV segmentation targets comprise feature point feature targets, OCR feature targets, artificial visual feature targets and difficult-to-identify targets;
wherein the image acquisition unit transmits the acquired image information to the target identification unit through the communication unit, the target identification unit realizes the identification and positioning of the target to be grabbed and transmits the position information of the target to be grabbed to the control unit through the communication unit, the control unit carries out binocular vision positioning calculation to obtain the distance between the target to be grabbed and the mechanical arm and realizes the grabbing of the target to be grabbed according to the mechanical arm track optimized by the track optimization unit,
the track optimization unit uses the reciprocal of the sum of the minimum value of the motion time of the mechanical arm and the minimum value of the mechanical arm corner as a cost function, and solves the cost function through a genetic algorithm to obtain the optimal solution of the mechanical arm corner; and carrying out network training on the optimal solution of the discrete mechanical arm corner and the distance between the target to be grabbed and the mechanical arm through a BP neural network visualization platform, and establishing a distance-corner interpolation table.
2. The system of claim 1,
for feature point feature class targets, the target identification unit extracts ORB feature points, screens feature point pairs with a BM matching algorithm under a Hamming distance threshold, computes the H matrix with the RANSAC algorithm, filters out outlier pairs by threshold filtering to obtain the feature matching pairs, and realizes identification and positioning of the feature point feature class target through template feature point matching and clustering;
for difficult-to-identify targets, the target identification unit downloads a data set using a Scrapy-based image crawler engine, performs target recognition training with a Caffe-SSD-based convolutional neural network, and realizes identification and positioning of the target through the feedforward pass of the Caffe SSD network.
3. The system of claim 2, wherein the Scrapy-based image crawling engine comprises a Python environment setting, a Scrapy framework setting, a PyQT4 environment configuration, and an image search and crawling program, the image search and crawling program comprising a Header request wrapper, a multi-threaded file downloading program, a timed Proxy update program, a reservoir-sample content monitor program, and a Bloom Filter link deduplication program.
4. The system of claim 1, wherein the image processing module comprises an object tracking unit, wherein,
when background interference is not eliminated, the target tracking unit tracks the simple color feature type target by using an LK optical flow method; for the target with large brightness change and large movement distance, using Meanshift to track, and combining the target speed of the previous frame to predict and reduce a target tracking area;
when the background interference is eliminated, the target tracking unit uses the tracking initial frame to perform background modeling, and reduces the background interference by a frame difference method.
5. The system of claim 1, wherein the BP neural network visualization platform comprises Python environment settings, Matplotlib library configuration, PyGame configuration, BP neural network programs, visualization programs, the BP neural network programs comprise a parameter configuration program, a feed-forward calculation program, a back propagation error update program, JSON and TXT format model persistence and loading programs, and the visualization programs comprise a network topology visualization program, a network parameter and weight visualization program.
6. The system of claim 1, wherein, for an HSV segmentation class target, the control unit of the mechanical arm module performs the binocular vision positioning calculation from the shape center of the target to obtain the distance between the target to be grabbed and the mechanical arm; and for a non-HSV segmentation class target, the control unit performs the binocular vision positioning calculation through feature point matching to obtain that distance.
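For illustration, a minimal OpenCV sketch of the shape-center branch: segment the target in both rectified views, take the mask centroids, and convert the horizontal disparity to depth via Z = f·B/d. The focal length, baseline, HSV range, and file names are assumed example values, and a non-empty mask is assumed in both views.

```python
import cv2

FOCAL_PX, BASELINE_M = 700.0, 0.06        # assumed calibration results
LOW, HIGH = (35, 80, 80), (85, 255, 255)  # assumed HSV range for the target

def centroid(img):
    # shape center of the segmented target, from image moments
    mask = cv2.inRange(cv2.cvtColor(img, cv2.COLOR_BGR2HSV), LOW, HIGH)
    m = cv2.moments(mask, binaryImage=True)
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

left = cv2.imread("left.png")             # assumed rectified stereo pair
right = cv2.imread("right.png")
(xl, _), (xr, _) = centroid(left), centroid(right)

disparity = xl - xr                       # horizontal shift between views
depth = FOCAL_PX * BASELINE_M / disparity # Z = f * B / d
print(f"target is {depth:.3f} m from the camera baseline")
```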
7. A mechanical arm grabbing method based on image processing, characterized by comprising the following steps:
S01: calibrating a binocular vision camera and obtaining the camera calibration parameters to determine the transformation matrix of the camera model;
S02: optimizing the motion track of the mechanical arm with a genetic algorithm, performing network training on the BP neural network visualization platform to construct the distance-angle interpolation table, and verifying the optimized track by simulation;
S03: according to the type of the target to be grabbed: for an HSV segmentation class target, importing the segmentation prior knowledge into the image processing module; for a feature point feature class target or an artificial visual feature class target, importing the template into the image processing module; for an OCR feature class target, pre-training a LeNet convolutional neural network and importing the trained LeNet network into the image processing module; for a difficult-to-identify target, downloading a data set with the Scrapy-based image crawling engine, performing target recognition training with the Caffe SSD convolutional neural network, and identifying and locating the target through the feed-forward pass of the Caffe SSD network;
S04: transmitting the images acquired by the binocular camera in real time to the image processing module on the PC side through serial port communication;
S05: setting the target to be grabbed as static or dynamic in the image processing module; if static, identifying and locating the target according to step S03 and rotating the mechanical arm until the target lies in the central area of the image;
if dynamic, tracking and locating the target with the target tracking unit and then rotating the mechanical arm until the target lies in the central area of the image;
S06: the control unit of the mechanical arm module performs the binocular vision positioning calculation to obtain the distance between the target to be grabbed and the mechanical arm;
S07: looking up the distance-angle interpolation table constructed in step S02 with the distance obtained in step S06 to obtain the optimal mechanical arm rotation angle, and sending the optimal rotation angle to the mechanical arm module, as sketched after this claim;
S08: the control unit of the mechanical arm module receives the optimal rotation angle, converts it into a steering engine PWM value through linear transformation, outputs the PWM value to the steering engine, completes the grabbing of the target to be grabbed, and resets the mechanical arm;
S09: during the communication of steps S04 to S08, monitoring in real time according to the communication protocol between the upper computer and the lower computer, and stopping operation when a fault occurs.
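For illustration of steps S07 and S08, a minimal Python sketch of the table lookup and the linear angle-to-PWM transform. The table values and the 500-2500 microsecond pulse range are assumptions, not values from the patent.

```python
import numpy as np

# hypothetical distance-angle table produced by the BP network in step S02
table_dist = np.array([0.10, 0.15, 0.20, 0.25, 0.30])   # metres
table_angle = np.array([25.0, 38.0, 52.0, 67.0, 85.0])  # degrees

def optimal_angle(distance_m):
    # S07: look up the interpolation table with the measured distance
    return float(np.interp(distance_m, table_dist, table_angle))

def angle_to_pwm(angle_deg, pwm_min=500, pwm_max=2500):
    # S08: linear transform from rotation angle to steering engine PWM value,
    # assuming 0-180 degrees maps onto the full pulse range
    return pwm_min + (pwm_max - pwm_min) * angle_deg / 180.0

angle = optimal_angle(0.22)
print(f"angle {angle:.1f} deg -> PWM {angle_to_pwm(angle):.0f} us")
```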
8. The method of claim 7, wherein,
in step S05,
when background interference is not eliminated, the target tracking unit tracks simple color feature class targets with the LK optical flow method, and tracks targets with large brightness changes or large movement distances with Meanshift, using the target velocity from the previous frame to predict and shrink the target tracking area;
when background interference is eliminated, the target tracking unit performs background modeling from the initial tracking frame and suppresses the background interference with a frame difference method; and
in step S06,
for an HSV segmentation class target, the control unit performs the binocular vision positioning calculation from the shape center of the target, and for a non-HSV segmentation class target, the control unit performs the binocular vision positioning calculation through feature point matching.
9. The method of claim 8, wherein,
in step S01, the camera is calibrated using the Zhang Zhengyou calibration method and the MATLAB calibration toolbox;
in steps S03 and S06, the HSV segmentation smooths the image with open-then-close morphological filtering and Gaussian filtering.
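For illustration, a minimal OpenCV sketch of the smoothing described in claim 9: Gaussian filtering followed by open-then-close morphological filtering on the HSV mask. The HSV thresholds, kernel size, and file name are assumed example values, and at least one contour is assumed to survive the filtering.

```python
import cv2

frame = cv2.imread("scene.png")                          # assumed input image
blurred = cv2.GaussianBlur(frame, (5, 5), 0)             # Gaussian filtering
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))    # assumed HSV range

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # open: remove speckle
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # close: fill small holes

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
target = max(contours, key=cv2.contourArea)              # largest blob = target
```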
CN201810300186.0A 2018-04-04 2018-04-04 Mechanical arm grabbing system and method based on image processing Expired - Fee Related CN108656107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810300186.0A CN108656107B (en) 2018-04-04 2018-04-04 Mechanical arm grabbing system and method based on image processing

Publications (2)

Publication Number Publication Date
CN108656107A (en) 2018-10-16
CN108656107B (en) 2020-06-26

Family

ID=63783220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810300186.0A Expired - Fee Related CN108656107B (en) 2018-04-04 2018-04-04 Mechanical arm grabbing system and method based on image processing

Country Status (1)

Country Link
CN (1) CN108656107B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109472235B (en) * 2018-11-01 2021-07-27 深圳蓝胖子机器智能有限公司 Attitude determination method, attitude determination device and computer-readable storage medium
CN109531570A (en) * 2018-12-10 2019-03-29 浙江树人学院 The mechanical arm grasping means of view-based access control model sensor
CN109816728B (en) * 2019-01-30 2022-06-14 国网江苏省电力有限公司苏州供电分公司 Mechanical arm grabbing point positioning detection method based on query network generation
CN109969178B (en) * 2019-03-26 2021-09-21 齐鲁工业大学 Multi-material autonomous carrying device and method based on multi-sensor
CN111768449B (en) * 2019-03-30 2024-05-14 北京伟景智能科技有限公司 Object grabbing method combining binocular vision with deep learning
CN110046626B (en) * 2019-04-03 2024-03-15 工极智能科技(苏州)有限公司 PICO algorithm-based image intelligent learning dynamic tracking system and method
CN110000785B (en) * 2019-04-11 2021-12-14 上海交通大学 Agricultural scene calibration-free robot motion vision cooperative servo control method and equipment
CN110176041B (en) * 2019-05-29 2021-05-11 西南交通大学 Novel train auxiliary assembly method based on binocular vision algorithm
CN112287728A (en) * 2019-07-24 2021-01-29 鲁班嫡系机器人(深圳)有限公司 Intelligent agent trajectory planning method, device, system, storage medium and equipment
CN110509273B (en) * 2019-08-16 2022-05-06 天津职业技术师范大学(中国职业培训指导教师进修中心) Robot manipulator detection and grabbing method based on visual deep learning features
CN111428815B (en) * 2020-04-16 2022-05-17 重庆理工大学 Mechanical arm grabbing detection method based on Anchor angle mechanism
CN112381173B (en) * 2020-11-30 2022-06-14 华南理工大学 Image recognition-based mechanical arm multitask autonomous learning control method and system
CN113011486A (en) * 2021-03-12 2021-06-22 重庆理工大学 Chicken claw classification and positioning model construction method and system and chicken claw sorting method
CN112926503B (en) * 2021-03-23 2023-07-18 上海大学 Automatic generation method of grabbing data set based on rectangular fitting
CN114029941B (en) * 2021-09-22 2023-04-07 中国科学院自动化研究所 Robot grabbing method and device, electronic equipment and computer medium
CN114029946A (en) * 2021-10-14 2022-02-11 五邑大学 Method, device and equipment for guiding robot to position and grab based on 3D grating
CN116197918B (en) * 2023-05-05 2023-07-21 北京华晟经世信息技术股份有限公司 Manipulator control system based on action record analysis
CN117644516B (en) * 2024-01-09 2024-06-11 中核四川环保工程有限责任公司 Nuclear waste treatment monitoring robot based on image processing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9424470B1 (en) * 2014-08-22 2016-08-23 Google Inc. Systems and methods for scale invariant 3D object detection leveraging processor architecture

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101289785B1 (en) * 2011-12-28 2013-07-26 한국과학기술원 System for generating optimal trajectory of robot manipulator that minimized the joint torque variation and method therefor
CN102902271A (en) * 2012-10-23 2013-01-30 上海大学 Binocular vision-based robot target identifying and gripping system and method
JP2014161917A (en) * 2013-02-21 2014-09-08 Seiko Epson Corp Robot control system, robot, robot control method, and program
CN104463108A (en) * 2014-11-21 2015-03-25 山东大学 Monocular real-time target recognition and pose measurement method
JP2017185578A (en) * 2016-04-05 2017-10-12 株式会社リコー Object gripping device and gripping control program
CN106649490A (en) * 2016-10-08 2017-05-10 中国人民解放军理工大学 Depth feature-based image retrieval method and apparatus
CN106970594A (en) * 2017-05-09 2017-07-21 京东方科技集团股份有限公司 A kind of method for planning track of flexible mechanical arm
CN107486858A (en) * 2017-08-08 2017-12-19 浙江工业大学 Multi-mechanical-arm collaborative offline programming method based on RoboDK
CN107767423A (en) * 2017-10-10 2018-03-06 大连理工大学 A kind of mechanical arm target positioning grasping means based on binocular vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Robot Visual Servo Systems; Chen Wenqiao; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series; 2018-03-15 (No. 3); 23-53 *
Research on the Target Recognition and Positioning System of an Unstructured Manipulator for Assisting the Elderly and Disabled; Fan Binghui et al.; Science Technology and Engineering; 2017-01-31; Vol. 17 (No. 1); 49-53, 78 *

Also Published As

Publication number Publication date
CN108656107A (en) 2018-10-16

Similar Documents

Publication Publication Date Title
CN108656107B (en) Mechanical arm grabbing system and method based on image processing
CN112476434B (en) Visual 3D pick-and-place method and system based on cooperative robot
CN112070818B (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN113330490B (en) Three-dimensional (3D) assisted personalized home object detection
CN112836734A (en) Heterogeneous data fusion method and device and storage medium
CN110084243B (en) File identification and positioning method based on two-dimensional code and monocular camera
CN107953329A (en) Object identification and Attitude estimation method, apparatus and mechanical arm grasping system
CN114952809B (en) Workpiece identification and pose detection method, system and mechanical arm grabbing control method
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN112801977B (en) Assembly body part relative pose estimation and monitoring method based on deep learning
Lin et al. Using synthetic data and deep networks to recognize primitive shapes for object grasping
CN111998862B (en) BNN-based dense binocular SLAM method
CN109444146A (en) A kind of defect inspection method, device and the equipment of industrial processes product
CN115903541A (en) Visual algorithm simulation data set generation and verification method based on twin scene
CN115330734A (en) Automatic robot repair welding system based on three-dimensional target detection and point cloud defect completion
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN114131603B (en) Deep reinforcement learning robot grabbing method based on perception enhancement and scene migration
Liu et al. Robotic picking in dense clutter via domain invariant learning from synthetic dense cluttered rendering
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network
CN115239779A (en) Three-dimensional point cloud registration method, device, medium and electronic equipment
Zhang et al. Object detection and grabbing based on machine vision for service robot
CN117769724A (en) Synthetic dataset creation using deep-learned object detection and classification
Figueiredo et al. Shape-based attention for identification and localization of cylindrical objects
KR102590730B1 (en) Learning data collection system and method
Zhang et al. Robotic grasp detection using effective graspable feature selection and precise classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200626