CN117415051A - Robot intelligent sorting method based on RGB-D image and teaching experiment platform - Google Patents


Info

Publication number
CN117415051A
CN117415051A (application CN202311398187.0A)
Authority
CN
China
Prior art keywords
images
workpiece
sorting
rgb
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311398187.0A
Other languages
Chinese (zh)
Inventor
吕红强
钟黄伟
李珂馨
李垚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Jiaotong University
Priority to CN202311398187.0A priority Critical patent/CN117415051A/en
Publication of CN117415051A publication Critical patent/CN117415051A/en
Pending legal-status Critical Current


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/36Sorting apparatus characterised by the means used for distribution
    • B07C5/361Processing or control devices therefor, e.g. escort memory
    • B07C5/362Separating or distributor mechanisms
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1692Calibration of manipulator
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B25/00Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B25/00Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes
    • G09B25/02Models for purposes not provided for in G09B23/00, e.g. full-sized devices for demonstration purposes of industrial processes; of machinery
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C2501/00Sorting according to a characteristic or feature of the articles or material to be sorted
    • B07C2501/0063Using robots
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot intelligent sorting method based on RGB-D images, and a teaching experiment platform. In step 1, several checkerboard images are captured from different angles with an RGB-D camera to obtain the transformation between the robot base coordinate system and workpiece pixel coordinates. In step 2, color images and depth images of the workpieces in the sorting area are collected online; the collected color images are preprocessed and decomposed, and synchronized depth and color images are obtained with a three-dimensional image synchronization algorithm. In step 3, the online-acquired workpiece color images are processed with a multi-scale feature-attention deep neural network to detect the workpieces in the sorting area, and the workpiece pixel coordinates are converted into world coordinates in the robot base coordinate system. In step 4, the workpiece world coordinates are taken as the target position of the mechanical arm, which is controlled to grasp each workpiece and sort it to a designated location. The invention improves sorting efficiency and sorting consistency and saves labor.

Description

Robot intelligent sorting method based on RGB-D image and teaching experiment platform
Technical Field
The invention relates to the technical field of workpiece sorting, and in particular to a robot intelligent sorting method based on RGB-D images and a teaching experiment platform.
Background
Workpiece sorting is an important step in industrial production; its precision and efficiency directly affect the throughput and product quality of the whole production line. With the development of manufacturing and rising productivity demands, more and more enterprises adopt automated production lines to reduce production cost and improve efficiency. On conventional lines, raw materials or parts are sorted manually and sent to the next process, a method that is labor-intensive, time-consuming, and error-prone. An automated workpiece sorting system greatly reduces manual intervention and improves production efficiency and product quality: the equipment quickly and accurately classifies the workpieces and forwards them to the next process, compressing the material-arranging step upstream of the line and raising the efficiency and capacity of the whole production line.
In flexible intelligent manufacturing, robot intelligent sorting of mixed materials helps solve practical problems on industrial production lines such as disordered, dense, and mixed incoming material; it compresses the upstream material-sorting procedure, improves the reliability of material sorting, and reduces the floor space of equipment. Conventional industrial robots can only repeat a single action along a given trajectory and lack the flexible, vision-guided intelligent processing capability demanded by modern production characterized by many varieties, small batches, and short cycles.
The prior art has the following disadvantages:
(1) Insufficient visual intelligence of industrial robots. Industrial robots generally cannot perceive visual scenes or interact naturally in unstructured environments. Compared with flexible manual labor, their event-driven capability is insufficient and their level of autonomous intelligence is relatively low.
(2) Rigid division of work among industrial robot stations. The mechanical arm at each station repeatedly executes a single action along a given trajectory. This suits mass production of a single variant, but mixed production and line changeover are costly, and a fault at one specific station can easily paralyze the whole processing line.
Therefore, industrial production urgently needs AI computer vision methods introduced into industrial robot target sorting, giving the industrial robot high autonomy and intelligence: a vision-guided flexible industrial robot can formulate the subsequent processing and inspection flow according to the workpiece type and its current processing state, continuously pushing the workpiece toward a qualified finished product.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a robot intelligent sorting method based on RGB-D images, and a teaching experiment platform. The accompanying sorting system can effectively promote teaching and development of machine vision technology, help students understand the principles and applications of machine vision, and improve their practical and innovative abilities.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
A robot intelligent sorting method based on RGB-D images comprises the following steps:
step 1: fixing a checkerboard calibration plate at the tail end of the multi-degree-of-freedom mechanical arm, adjusting the position and the posture of the tail end of the multi-degree-of-freedom mechanical arm, shooting a plurality of checkerboard images with different angles through an RGB-D depth camera, calibrating the camera by utilizing the checkerboard images to eliminate the lens distortion of the camera, and calibrating the hand and the eye to obtain the conversion relation between the base coordinates of the robot and the pixel coordinates of the workpiece;
step 2: collect color images and depth images of the workpieces in the sorting area online; preprocess and decompose the collected workpiece color images, extract the edges of the color images, and complete the depth images with those edges as constraints; finally, match feature points between the two images, substitute the intrinsic and extrinsic parameters of the RGB-D depth camera, and obtain synchronized depth and color images with a three-dimensional image synchronization algorithm;
step 3, for the workpieces in the sorting area, a target detection network under a Pytorch frame is built by adopting a multi-scale feature attention depth neural network algorithm, the on-line acquired color images of the workpieces are detected to obtain category information of different workpieces and position information in a pixel coordinate system, and then the corresponding depth information is acquired through the synchronous depth images, so that the pixel coordinates of the workpieces are converted into world coordinates under a robot base coordinate system;
and 4, taking the world coordinates of the workpiece as a target position of the mechanical arm, controlling the mechanical arm with multiple degrees of freedom to finish grabbing the workpiece, and classifying the workpiece to a designated position.
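The coordinate conversion in steps 3 and 4, from a detected pixel plus its synchronized depth value to a grasp target in the robot base frame, can be sketched as follows (a minimal numpy illustration; the function name and the assumption that the hand-eye result is packaged as a 4x4 homogeneous matrix `T_base_cam` are ours, not the patent's):

```python
import numpy as np

def pixel_to_base(u, v, depth_m, K, T_base_cam):
    """Back-project pixel (u, v) with its measured depth (metres) through
    the pinhole model, then map the camera-frame point to the robot base."""
    x = (u - K[0, 2]) * depth_m / K[0, 0]   # X_c = (u - c_x) * Z / f_x
    y = (v - K[1, 2]) * depth_m / K[1, 1]   # Y_c = (v - c_y) * Z / f_y
    p_cam = np.array([x, y, depth_m, 1.0])  # homogeneous camera-frame point
    return (T_base_cam @ p_cam)[:3]         # coordinates in the base frame
```

The returned 3-vector is what step 4 would hand to the arm controller as the grasp target.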
Preferably, compared with an ordinary color camera, the images acquired by the RGB-D depth camera used in step 1 contain both color information and spatial depth information. This is better suited to target positioning and pose estimation during workpiece sorting and enables more accurate intelligent sorting of mixed workpieces.
Preferably, the RGB-D depth camera is arranged right above the sorting area, the lens faces the working area, and the total field of view of the intelligent multi-dimensional camera system can completely cover the workpiece sorting platform.
Preferably, the specific image synchronization processing in step 2 is as follows: the workpiece color image acquired by the RGB-D depth camera is preprocessed and decomposed into a luminance component and a chrominance component; the edges of the color image are extracted; the depth image is then completed by dual-threshold filtering combined with region growing, with the edges as constraints; finally the two images are matched on their feature points, yielding the synchronized color image and depth image.
Preferably, the workpiece classification and positioning in step 3 is divided into the following steps:
step 3.1: data set preparation and enhancement. Based on workpiece sorting scenes from actual industrial production, an experimental data set is self-built from actual workpieces or 3D-printed substitute workpieces and collected with an RGB-D depth camera; the data set is augmented by random mirroring, random flipping, random rotation, and photometric distortion to improve network generalization;
step 3.2: model structure design and training. The target detection algorithm YOLOv7 is adopted as the base network; dilated (atrous) convolution enlarges the receptive field of the feature maps extracted by the convolutional layers, and feature maps at different scales are generated and fused by channel concatenation, yielding features that combine low-level color and texture with high-level abstract semantics. A CBAM attention module and a small-object detection head are added after the YOLOv7 convolutional layers to address small-workpiece detection and complex-environment interference. The data preprocessed in step 3.1 serves as the training set; training is accelerated on a server GPU, and the network hyperparameters are continuously tuned until workpiece detection accuracy is highest;
step 3.3: micro-service model deployment and testing. A SpringCloud micro-server is built on a distributed micro-service architecture; the deep learning models involved in the robot sorting task are treated as separate micro-service components, completing deployment of the models above. A MySQL database is configured on the micro-server, providing user-facing database connections and managing various data, especially image data. Workpieces are placed in the working area to simulate an actual sorting scene, the model's detection performance is tested, and the results are shown on a visual display;
step 3.4: optimization and inference acceleration. To raise the processing speed of online workpiece sorting, the detection model is optimized and rebuilt with the TensorRT acceleration engine, improving the inference speed of the deep learning network model. Sorting is further accelerated through a thread pool: new threads are created as needed, and tasks in the task queue are processed by multiple threads.
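The thread-pool acceleration described in step 3.4 can be sketched as follows (a hedged illustration: `run_inference` merely stands in for the TensorRT-optimized detector call, which the patent does not spell out, and the function names are ours):

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference(task):
    # Stand-in for a call into the TensorRT-optimised detection engine;
    # here it just echoes the task id with an empty detection list.
    return {"task_id": task["id"], "detections": []}

def process_queue(tasks, max_workers=4):
    # Worker threads overlap image I/O and result handling around inference;
    # pool.map preserves the order of the task queue.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_inference, tasks))
```

Threads suit this workload because the heavy inference call releases the interpreter while waiting on the GPU engine, so queued sorting tasks overlap rather than serialize.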
A robot intelligent sorting teaching experiment platform based on RGB-D images comprises an RGB-D depth camera fixed directly above the sorting operation table, which fully captures depth images and color images of the workpieces in the area to be sorted; a multi-degree-of-freedom mechanical arm fixed on the workbench; and a pneumatic end effector mounted on the end flange joint of the multi-degree-of-freedom mechanical arm.
The RGB-D depth camera receives acquisition commands from the visual configuration software controller. The processor is connected to the micro-server over the network; acquired data is sent through the processor to the micro-server for data processing and target-detection model invocation. When the micro-service finishes, the result is returned to the visual configuration software controller, which, according to the received information, moves the multi-degree-of-freedom mechanical arm to the designated position over serial communication and, by controlling the pneumatic end effector at the end of the arm's gripper, sorts the workpieces into the designated areas.
The invention has the beneficial effects that:
The invention introduces AI computer vision methods into industrial robot target sorting, giving the robot high autonomy and intelligence. It can be used for teaching and practice in courses such as robotics, computer vision, and machine learning, helping students learn the principles and applications of machine vision and improving their practical and innovative abilities, which is of significant educational value. It has the following advantages:
First: compared with an ordinary color camera, the RGB-D camera acquires source images containing both color information and spatial depth information, so target positioning and pose estimation during workpiece sorting are more accurate and target grasping success is higher.
Second: a multi-scale feature-fusion deep neural network is constructed to detect the sorting targets, offering high detection precision, accurate localization, and strong robustness over the various targets to be sorted.
Third: a distributed micro-service architecture is adopted; the micro-server splits the deep learning models involved in the robot sorting task into separate micro-service components. The system is highly automated and portable, suited to online sorting of various workpieces.
Fourth: configurable vision software is built and, together with the robot control system, forms a machine-vision teaching experiment platform. The software allows free design of configuration functions and modularizes image acquisition, workpiece detection, communication, robot control, and other functions, so vision algorithms and motion control can be customized, combined, and extended for different scenes, forming machine-vision cases ranging from basic theory to practical application and constituting a highly flexible, modular teaching experiment system.
Drawings
Fig. 1 is a diagram of an intelligent robot sorting teaching experiment device based on RGB-D images.
Fig. 2 is a flow chart of the intelligent sorting method of the robot.
Fig. 3 is a schematic diagram of a color and depth image synchronization method according to the present invention.
FIG. 4 is a flow chart of a micro-service invocation deep learning model of the present invention.
Fig. 5 is a flowchart of a target detection and pose estimation method according to the present invention.
FIG. 6 is a diagram of a configurable visual software platform interface in accordance with the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings.
Fig. 1 is an intelligent robot sorting teaching experiment platform based on RGB-D images, and the system comprises an RGB-D depth camera 1, a multi-degree-of-freedom mechanical arm 2, a pneumatic end effector 3, a small air pump 4, a sorting operation table 5, a visual display 6, a processor 7, a mechanical arm demonstrator 8, a visual configuration software controller 9 and a workbench 10. The RGB-D depth camera 1 is fixed right above the surface of the sorting operation table 5, and is used for completely collecting depth images and color images of workpieces in a region to be sorted; the multi-degree-of-freedom mechanical arm 2 is fixed on the workbench 10, and the pneumatic end effector 3 is arranged on the end flange joint of the multi-degree-of-freedom mechanical arm 2;
specifically, the RGB-D depth camera 1 is configured to receive an instruction sent by the visual configuration software controller 9 to perform image acquisition, the processor 7 is connected with the micro server through a network, acquired data is sent to the micro server through the processor 7 to perform data processing and model calling of target detection, after the micro server is processed, a processing result is sent to the visual configuration software controller 9, the controller controls the mechanical arm to move to a designated position through serial port communication according to received information, and the pneumatic end effector 3 at the tail end of the clamping jaw of the mechanical arm is controlled to sort workpieces to a designated area according to categories.
Each function such as image acquisition, workpiece detection, communication connection, robot control and the like is processed in a modularized mode in a configuration machine vision software MicroVT, is packaged into code blocks, is reserved with interfaces, and has the functions of customizing, combining and expanding vision algorithms and motion control in different sorting scenes. And loading the database system to finish the storage and management of system data, and providing good man-machine interaction through the upper computer.
As shown in fig. 2, the overall flow of the robot intelligent sorting method based on RGB-D images can be divided into four links: color and depth image synchronization, micro-service invocation of the deep learning model, target detection and pose estimation, and robot grasping.
the method comprises the following steps:
Step 1: open serial communication, connect the robot, and move it to an initial position in which the multi-degree-of-freedom mechanical arm 2 does not block the view of the RGB-D depth camera 1; the camera is then triggered through the vision configuration software to acquire a synchronized color map and depth map.
Specifically, as shown in fig. 3, the color map is first preprocessed and decomposed into a luminance component and a chrominance component; preliminary segmentation uses the luminance component, and the target contour is refined with the chrominance component. After the RGB-D depth camera 1 sensor acquires the depth image and color image of the same scene, the color image is processed and its edges extracted; the reconstructed depth image is then processed with these edges as constraints, so that the depth image is accurately matched to the color image.
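A much-simplified sketch of this edge-constrained completion idea follows. The patent's dual-threshold filtering and region growing are not specified in detail, so this hedged numpy version just fills depth holes by averaging valid 4-neighbours while refusing to fill across colour-image edges; all names are ours:

```python
import numpy as np

def complete_depth(depth, edge_mask, iters=10):
    """Fill zero-valued depth holes by averaging valid 4-neighbours,
    without filling pixels flagged as colour-image edges (edge_mask True).
    A simplified stand-in for dual-threshold filtering + region growing."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        holes = (d == 0) & ~edge_mask
        if not holes.any():
            break
        padded = np.pad(d, 1)
        # Neighbours above, below, left, right of every pixel
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh > 0
        counts = valid.sum(axis=0)
        fill = np.where(counts > 0,
                        neigh.sum(axis=0) / np.maximum(counts, 1), 0)
        d[holes] = fill[holes]
    return d
```

Holes grow inward from valid measurements over the iterations, while edge pixels stay untouched so that depth values never bleed across object boundaries.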
Step 2: the target detection model is invoked through the micro-service system. The MicroVT software end of the visual configuration software controller 9 sends a model-invocation command, transmitted to the micro-service's MySQL database; the corresponding independent service is found by request ID, the model is run on the transmitted data, the detection result is written to the MySQL database, and the result is fed back to the software end. The specific flow is shown in figure 4.
Specifically, workpiece image data is collected locally, a YOLOv7-based multi-scale feature-attention target detection model is trained, the model's inference is accelerated with the TensorRT depth optimization engine, the business logic is written, and everything is deployed to the micro-service system.
Step 3: when the RGB-D depth camera 1 detects a target object, its three-dimensional pixel coordinates are obtained from the result returned by the micro-service system; the calibration parameters of the RGB-D depth camera 1 are read, the pose is estimated, and the three-dimensional pixel coordinates of the target are converted into three-dimensional coordinates in the robot coordinate system, completing the grasp-coordinate calculation.
Specifically, as shown in fig. 5, the method comprises two parts of off-line calibration and real-time operation, wherein the off-line calibration comprises the calibration of the RGB-D depth camera 1 and the calibration of the eyes and hands, and the real-time operation comprises an image synchronization algorithm and a target detection algorithm.
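The eye-to-hand geometry underlying the offline calibration can be illustrated with homogeneous transforms (a numpy sketch under assumed naming, not the patent's: `T_a_b` denotes the 4x4 transform taking points from frame b to frame a):

```python
import numpy as np

def compose(*Ts):
    """Chain 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for T in Ts:
        out = out @ T
    return out

def solve_base_cam(T_base_end, T_end_board, T_cam_board):
    """With the board fixed to the end effector, a board point seen through
    the camera and the same point reached through the arm's kinematics must
    coincide in the base frame:
        T_base_cam @ T_cam_board == T_base_end @ T_end_board
    which gives T_base_cam directly for one ideal, noise-free pose."""
    return compose(T_base_end, T_end_board, np.linalg.inv(T_cam_board))
```

In practice many arm poses are recorded and the redundant equations are solved jointly (the AX = XB formulation mentioned later), but the single-pose identity above is the constraint each measurement contributes.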
Step 4: the flexible gripper of the robot's pneumatic end effector 3 is moved directly above the grasp point of the target object; the robot is driven over serial communication to grasp the workpiece and place it in the designated area by category. Referring to fig. 6, after sensing the workpiece, the robot plans a grasping strategy and executes the grasp.
Specifically, the grasping strategy is optimized during robot motion to minimize the time needed to grasp and move objects. Grasp time is reduced by adjusting the specific posture of the pneumatic end effector 3 during grasping while keeping the robot balanced in motion, and the robot's travel path is shortened by computing and arranging the placement order of objects and the order in which the robot grasps the targets, ensuring the task is completed in the shortest time.
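The patent does not specify how the pick order is computed; a plain greedy nearest-neighbour ordering, a common baseline for shortening travel between pick points, might look like this (function name and interface are ours):

```python
import math

def greedy_pick_order(start, picks):
    """Order pick points by repeatedly choosing the nearest unvisited one.
    start and picks are (x, y) tuples in the robot workspace plane."""
    remaining = list(picks)
    order, here = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(here, p))
        remaining.remove(nxt)
        order.append(nxt)
        here = nxt
    return order
```

Greedy ordering is not globally optimal, but it is cheap enough to recompute each cycle as new detections arrive.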
According to the invention, AI computer vision methods are introduced into industrial robot target sorting, and a complete intelligent online workpiece sorting system is provided, addressing the heavy dependence on labor and equipment, insufficient intelligence, and low efficiency of workpiece sorting on existing industrial production lines. The overall flow divides into four links: color and depth image synchronization, micro-service invocation of the deep learning model, target detection and pose estimation, and robot grasping. The configured machine-vision software gives the system high autonomy and intelligence, so the vision-guided flexible industrial robot can act according to the workpiece type and its current processing state.
Specifically, in step 3, the detection algorithm mainly comprises two parts, namely off-line calibration and real-time operation, wherein the off-line calibration comprises camera calibration and hand-eye calibration, and the real-time operation comprises two parts, namely an image synchronization algorithm and a target detection algorithm. In the target detection algorithm, the specific workpiece classification and positioning method can be divided into the following steps:
step 3.1: data set preparation and enhancement. Based on a workpiece sorting scene in actual industrial production, an image is acquired by using an RGB-D depth camera 1 through an actual workpiece or a 3D printing generation workpiece self-built data set, and an experimental data set is enhanced through methods of random mirroring, random overturning, random rotation, luminosity distortion and the like, so that the network generalization capability is improved.
Step 3.2: model structure design and training. The target detection algorithm YOLOv7 is used as a basic network, cavity convolution is applied to increase the receptive field of the feature images, the feature images containing different scale features are generated and fused in a channel splicing mode, and the feature images containing low-level colors, texture features and high-level abstract semantic features can be obtained. And by adding the CBAM attention module and the small target detection head, the problems of small-size workpiece detection and complex environment interference are solved. And accelerating model training through the server GPU graphic card, and continuously adjusting network super parameters to ensure that the workpiece detection accuracy is highest.
Step 3.3: and (5) deploying and testing a micro service model. And constructing a SpringCloud micro server by adopting a distributed micro service architecture, regarding the deep learning model related in the robot sorting task as different micro service components, and completing the deployment of the deep model. And configuring a MySQL database on the micro server, realizing user-oriented database connection, and supporting management of various data, especially image data. And placing a workpiece in the working area to simulate an actual sorting scene, testing the detection effect of the model, and displaying the result on a display screen.
Step 3.4: optimization and inference acceleration. To raise the processing speed of on-line workpiece sorting, the detection model is optimized and rebuilt with the TensorRT acceleration engine, improving the inference speed of the model. Sorting is then further accelerated with a thread pool, in which multiple threads process the tasks in a task queue.
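The thread-pool arrangement described above can be sketched with Python's standard library. The task function is a hypothetical placeholder for the real "detect, compute grasp coordinates, dispatch arm" work; only the queue-plus-pool structure is taken from the text.

```python
import queue
from concurrent.futures import ThreadPoolExecutor

# Pending sorting jobs are pushed into a FIFO task queue.
task_queue: "queue.Queue[int]" = queue.Queue()
for workpiece_id in range(8):
    task_queue.put(workpiece_id)

def handle_task(workpiece_id: int) -> str:
    # Placeholder for the per-workpiece pipeline (detection, grasp, dispatch).
    return f"workpiece {workpiece_id} sorted"

# A pool of worker threads drains the queue concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = []
    while not task_queue.empty():
        futures.append(pool.submit(handle_task, task_queue.get()))
    results = [f.result() for f in futures]

print(len(results))  # 8
```

Because TensorRT inference releases the interpreter while running on the GPU, overlapping several such tasks in threads can hide per-workpiece latency.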
The method for calculating the grabbing coordinates of the workpiece in the step 3 is as follows:
First, the RGB-D depth camera 1 is calibrated as a color camera using a checkerboard, which yields the camera intrinsic matrix K and the distortion coefficients D. According to the pinhole imaging model, the conversion between the pixel coordinates and the world coordinates of the workpiece follows from the correspondences among world, camera, image and pixel coordinates under the basic rigid-body coordinate transformation principle:

Z_c [u, v, 1]^T = K (R [X_w, Y_w, Z_w]^T + t)

where R is the rotation matrix, t is the translation vector, K is the camera intrinsic matrix, (u, v) are the pixel coordinates of the workpiece, Z_c is its depth along the optical axis, and (X_w, Y_w, Z_w) are its world coordinates.
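The pinhole relation can be inverted to back-project a detected pixel with its measured depth. The sketch below uses made-up intrinsics (fx = fy = 600, principal point at the image center) purely for illustration:

```python
import numpy as np

def pixel_to_camera(u: float, v: float, depth: float, K: np.ndarray) -> np.ndarray:
    """Back-project pixel (u, v) at depth Z_c: Z_c [u, v, 1]^T = K P_c."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def camera_to_world(p_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Invert P_c = R P_w + t, giving P_w = R^T (P_c - t)."""
    return R.T @ (p_cam - t)

K = np.array([[600.0,   0.0, 320.0],    # illustrative intrinsics
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)           # camera aligned with the world frame
p_world = camera_to_world(pixel_to_camera(320.0, 240.0, 0.5, K), R, t)
print(p_world)  # the principal point at 0.5 m lies on the optical axis: [0, 0, 0.5]
```

A real system would first undistort (u, v) with the distortion coefficients D before applying K⁻¹.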
In the eye-to-hand calibration process, the calibration plate is fixed on the pneumatic end effector 3 at the end of the multi-degree-of-freedom mechanical arm 2, the multi-degree-of-freedom mechanical arm 2 is fixed on the workbench 10, and the pose of the calibration plate relative to the end of the arm is kept unchanged. A point P_r in the robot coordinate system and the corresponding point P_b in the calibration plate coordinate system are related as shown in the following formula:

P_r = T_c^r · T_b^c · P_b

where T_b^c is the pose of the calibration plate in the camera coordinate system and T_c^r is the homogeneous transformation from the camera coordinate system to the robot coordinate system.
Further, the conversion from the pixel coordinate system to the robot coordinate system is shown in the following formula:

[X_r, Y_r, Z_r, 1]^T = T_c^r · [Z_c K^{-1} [u, v, 1]^T; 1]

where T_c^r is the homogeneous transformation matrix from the camera coordinate system to the robot coordinate system. It is obtained during the calibration experiment by computing the camera pose from multiple pictures and solving the equation AX = XB with Tsai's method. When the coordinates of a workpiece are calculated, the deep learning network detects its category and its position in the pixel coordinate system; substituting this position together with its depth into the formula above gives the position of the workpiece in the robot coordinate system.
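The structure of the AX = XB relation that Tsai's method solves can be checked numerically. The sketch below uses entirely synthetic, illustrative poses (a real calibration would estimate the unknown transform X from many observed pose pairs, e.g. with OpenCV's calibrateHandEye): it builds two robot poses, derives the corresponding camera observations of the plate, and verifies that X satisfies the equation.

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """Rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def homog(R: np.ndarray, t) -> np.ndarray:
    """Pack a rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Synthetic eye-to-hand setup (all numbers illustrative):
X = homog(rot_z(0.3), [0.2, -0.1, 0.5])             # base -> camera (the unknown)
T_end_board = homog(rot_z(-0.2), [0.0, 0.05, 0.1])  # plate fixed on the gripper

# Two robot poses (base -> end); the fixed camera observes the plate in each.
T_be = [homog(rot_z(0.1), [0.4, 0.0, 0.3]),
        homog(rot_z(0.7), [0.3, 0.2, 0.35])]
# base->board = X @ (cam->board) = (base->end) @ (end->board)
T_cb = [np.linalg.inv(X) @ T @ T_end_board for T in T_be]

# Relative motions between the two poses give the classic hand-eye equation.
A = T_be[1] @ np.linalg.inv(T_be[0])
B = T_cb[1] @ np.linalg.inv(T_cb[0])
print(np.allclose(A @ X, X @ B))  # True: X satisfies A X = X B
```

Tsai's method solves this equation in two stages (rotation first, then translation) over many such pose pairs to average out measurement noise.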
The foregoing description of the embodiments of the invention is provided merely for the purpose of illustrating the general principles of the invention and should not be construed as limiting the scope of the invention in any way. Based on the teachings herein, one skilled in the art may recognize additional embodiments of the present invention without further inventive faculty, and such structures would be within the scope of the present invention.

Claims (6)

1. An intelligent robot sorting method based on RGB-D images, characterized by comprising the following steps:
step 1: fixing a checkerboard calibration plate at the end of a multi-degree-of-freedom mechanical arm (2), adjusting the position and posture of the end of the multi-degree-of-freedom mechanical arm (2), shooting a plurality of checkerboard images at different angles with an RGB-D depth camera (1), calibrating the camera with the checkerboard images to eliminate lens distortion, and performing hand-eye calibration to obtain the conversion relation between the robot base coordinates and the pixel coordinates of the workpiece;
step 2, collecting color images and depth images of workpieces in the sorting area on line, preprocessing and decomposing the color images, extracting their edges, completing the depth images with the edges as constraints, then matching based on the feature points of the two images, substituting the internal and external parameters of the RGB-D depth camera (1), and obtaining synchronized depth and color images with a three-dimensional image synchronization algorithm;
step 3, for the workpieces in the sorting area, a target detection network under a Pytorch frame is built by adopting a multi-scale feature attention depth neural network algorithm, the on-line acquired color images of the workpieces are detected to obtain category information of different workpieces and position information in a pixel coordinate system, and then the corresponding depth information is acquired through the synchronous depth images, so that the pixel coordinates of the workpieces are converted into world coordinates under a robot base coordinate system;
and 4, taking the world coordinates of the workpiece as a target position of the mechanical arm, controlling the multi-degree-of-freedom mechanical arm (2) to finish grabbing the workpiece, and classifying the workpiece to a designated position.
2. The intelligent sorting method of the robot based on the RGB-D image according to claim 1, wherein the images collected by the RGB-D depth camera (1) used in the step 1 simultaneously contain color information and spatial depth information.
3. The robot intelligent sorting method based on RGB-D images according to claim 1, characterized in that the RGB-D depth camera (1) is arranged right above the sorting area with its lens towards the working area, so that its total field of view fully covers the workpiece sorting platform.
4. The intelligent robot sorting method based on RGB-D images according to claim 1, wherein the specific image synchronization processing method in step 2 is as follows: the color images of the workpiece collected by the RGB-D depth camera (1) are preprocessed and decomposed to obtain a luminance component and a chrominance component, the edges of which are extracted; with these components as constraints, the depth images are completed by double-threshold filtering combined with region growing; finally, synchronized color and depth images are obtained based on the feature points of the two images.
5. The intelligent robot sorting method based on the RGB-D image according to claim 1, wherein the specific sorting and positioning of the workpiece in the step 3 is divided into the following steps:
step 3.1: data set preparation and enhancement; based on workpiece sorting scenes in actual industrial production, an experimental data set is self-built from actual workpieces or 3D-printed substitute workpieces, images are collected with the RGB-D depth camera (1), and the experimental data set is enhanced by random mirroring, random flipping, random rotation and photometric distortion;
step 3.2: model structure design and training; the target detection algorithm YOLOv7 is adopted as the base network; the receptive field of the feature maps extracted by the convolution layers is enlarged through dilated convolution, feature maps of different scales are generated and fused by channel concatenation, and feature maps containing low-level color and texture features as well as high-level abstract semantic features are obtained; a CBAM attention module and a small-target detection head are added after the convolutional layers of YOLOv7, the experimental data preprocessed in step 3.1 are used as the data set, model training is accelerated on a server GPU, and the network hyper-parameters are tuned continuously until the workpiece detection accuracy is highest;
step 3.3: micro-service model deployment and testing; a SpringCloud micro server is built on a distributed micro-service architecture, the deep learning models involved in the robot sorting task are treated as separate micro-service components, and the deployment of the above deep learning models is completed; a MySQL database is configured on the micro server to provide user-facing database connections and to manage various kinds of data, especially image data; workpieces are placed in the working area to simulate an actual sorting scene, the detection performance of the model is tested, and the result is shown on the visual display (6);
step 3.4: optimization and inference acceleration; the detection model is first optimized and rebuilt with the TensorRT acceleration engine to improve the inference speed of the deep learning network model, and sorting is then accelerated with a thread pool, in which multiple threads process the tasks in a task queue.
6. A sorting teaching experiment platform for realizing the intelligent robot sorting method based on RGB-D images according to any one of claims 1-5, which is characterized by comprising an RGB-D depth camera (1), wherein the RGB-D depth camera (1) is fixed right above the surface of a sorting operation table (5), and depth images and color images of workpieces in a region to be sorted are completely collected; the multi-degree-of-freedom mechanical arm (2) is fixed on the workbench (10), and the pneumatic end effector (3) is arranged on the end flange joint of the multi-degree-of-freedom mechanical arm (2);
the RGB-D depth camera (1) receives instructions sent by the visual configuration software controller (9) to collect images; the processor (7) is connected with the micro server through a network, and the collected data are sent via the processor (7) to the micro server for data processing and target detection model invocation; after the micro-service processing is completed, the processing result is sent to the visual configuration software controller (9), which, according to the received information, controls the multi-degree-of-freedom mechanical arm (2) through serial communication to move to the designated position and sorts the workpieces into designated areas by category through the pneumatic end effector (3) at the end of the mechanical arm.
CN202311398187.0A 2023-10-26 2023-10-26 Robot intelligent sorting method based on RGB-D image and teaching experiment platform Pending CN117415051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311398187.0A CN117415051A (en) 2023-10-26 2023-10-26 Robot intelligent sorting method based on RGB-D image and teaching experiment platform

Publications (1)

Publication Number Publication Date
CN117415051A true CN117415051A (en) 2024-01-19

Family

ID=89526031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311398187.0A Pending CN117415051A (en) 2023-10-26 2023-10-26 Robot intelligent sorting method based on RGB-D image and teaching experiment platform

Country Status (1)

Country Link
CN (1) CN117415051A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118357903A (en) * 2024-06-19 2024-07-19 安徽大学 Multi-objective sorting method with cooperation of multiple mechanical arms


Similar Documents

Publication Publication Date Title
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
CN110000785B (en) Agricultural scene calibration-free robot motion vision cooperative servo control method and equipment
CN111046948B (en) Point cloud simulation and deep learning workpiece pose identification and robot feeding method
CN114912287B (en) Robot autonomous grabbing simulation system and method based on target 6D pose estimation
CN111421539A (en) Industrial part intelligent identification and sorting system based on computer vision
CN111347411B (en) Two-arm cooperative robot three-dimensional visual recognition grabbing method based on deep learning
CN113379849B (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
CN117415051A (en) Robot intelligent sorting method based on RGB-D image and teaching experiment platform
CN111906788B (en) Bathroom intelligent polishing system based on machine vision and polishing method thereof
US20220203548A1 (en) Creating training data variability in machine learning for object labelling from images
CN109318227B (en) Dice-throwing method based on humanoid robot and humanoid robot
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN114851201A (en) Mechanical arm six-degree-of-freedom vision closed-loop grabbing method based on TSDF three-dimensional reconstruction
CN109079777B (en) Manipulator hand-eye coordination operation system
CN114998573A (en) Grabbing pose detection method based on RGB-D feature depth fusion
CN113793383A (en) 3D visual identification taking and placing system and method
CN117483268A (en) Flexible sorting method and flexible sorting system
CN210589323U (en) Steel hoop processing feeding control system based on three-dimensional visual guidance
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN115213122B (en) Disorder sorting method based on 3D depth network
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
CN113436293B (en) Intelligent captured image generation method based on condition generation type countermeasure network
Walck et al. Automatic observation for 3d reconstruction of unknown objects using visual servoing
CN211890823U (en) Four-degree-of-freedom mechanical arm vision servo control system based on RealSense camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination