CN114078152A - Robot carbon block cleaning method based on three-dimensional reconstruction - Google Patents

Robot carbon block cleaning method based on three-dimensional reconstruction

Info

Publication number
CN114078152A
Authority
CN
China
Prior art keywords
carbon block
carbon
dimensional
conveyor belt
cleaning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010841548.4A
Other languages
Chinese (zh)
Other versions
CN114078152B (en)
Inventor
孙银健
张友权
容桂淦
辛梓
陈凯
岳彩卫
于明华
李龙
余成建
陈洪
陈仁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Watman Intelligent Technology Co ltd
Original Assignee
Beijing Watman Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Watman Technology Co ltd filed Critical Beijing Watman Technology Co ltd
Priority to CN202010841548.4A priority Critical patent/CN114078152B/en
Publication of CN114078152A publication Critical patent/CN114078152A/en
Application granted granted Critical
Publication of CN114078152B publication Critical patent/CN114078152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • B25J9/1605: Simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators, characterised by motion, path, trajectory planning
    • B25J9/1697: Vision controlled systems
    • G06N3/045: Combinations of networks
    • G06N3/048: Activation functions
    • G06N3/08: Learning methods
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/30108: Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a robot carbon block cleaning method based on three-dimensional reconstruction, which comprises the following steps: acquiring pictures and three-dimensional coordinate data of N carbon blocks on a conveyor belt with a depth camera; collecting three-dimensional coordinate data of the carbon blocks with a laser radar; preprocessing the pictures into image data and feeding it to a pre-trained carbon block identification model to obtain an identification result for each carbon block, and from it the carbon bowl region and groove region of each block; splicing the two sets of three-dimensional coordinate data into dense three-dimensional coordinate data; performing real-time three-dimensional reconstruction of the carbon blocks to obtain three-dimensional models of the N blocks, and from them the surface three-dimensional coordinates and the normal vector of each surface; tracking and positioning the carbon blocks with SLAM to obtain the real-time position of each block on the conveyor belt; and planning a motion trajectory from the real-time positions and the surface normal vectors, and controlling the mechanical arm to clean the side surfaces, upper surface and inner grooves of each carbon block.

Description

Robot carbon block cleaning method based on three-dimensional reconstruction
Technical Field
The invention relates to the field of carbon block cleaning, in particular to a robot carbon block cleaning method based on three-dimensional reconstruction.
Background
In the prior art, a large amount of filler adheres to the carbon block surface, and traditional manual cleaning is still used: a worker holds a scraper to remove the adhered residues, which places certain demands on the worker's skill, and when the filler adheres tightly the scraper struggles to remove it, so the cleaning result is not ideal. The existing mechanical alternative is a cleaning machine with multiple layers of scrapers: guide rails perpendicular to the direction of carbon block travel are mounted on both sides of the top of a frame, opposed side-scraper assemblies slide between the guide rails, and several parallel bottom-scraper assemblies are fitted at the bottom of the frame along the direction of travel.
Residues attached to the carbon block surface are currently removed by hand: carbon particles adhered in the carbon bowls are scraped out with a special iron shovel and then blown away with compressed air, so a large amount of dust spreads through the anode roasting workshop, the working environment is harsh, and workers' health suffers greatly. The harsh environment and high-intensity workload cause high staff turnover, so the work is often done by temporary workers, which brings safety hazards. Manual cleaning of the carbon block surface is inefficient and its results are poor; for example, when the filler adheres tightly, it is very difficult to clean by hand. Because the four carbon bowls on the upper surface of a carbon block have complex shapes, cleaning the adhered carbon particles efficiently and fully automatically is technically difficult and places high demands on the degree of automation of the equipment; a cleaning machine with multiple layers of scrapers enables batch processing of products and raises cleaning efficiency. With rising environmental-protection requirements and the current shortage of labour, enterprises urgently need equipment that cleans the carbon block surface automatically, can be placed in series in the production line without affecting production efficiency, and meets the cleaning quality required by production.
The existing cleaning machine was formed through later-stage improvements that adapt it to the actual production conditions of the anode roasting workshop: a press-roller mechanism first pre-presses and breaks the filler, after which several scraper mechanisms scrape it away. This both protects the integrity of the carbon block and, to a large extent, reduces scraper wear.
Abbreviations and Key term definitions
Carbon block: a block produced from petroleum coke and pitch coke as aggregate with coal tar pitch as binder, used as the anode material of pre-baked aluminium electrolysis cells. The carbon block has been calcined and has a stable geometry.
SLAM technique: short for simultaneous localization and mapping, also rendered as instant positioning and map building, or concurrent mapping and localization. In an unknown environment, the robot localizes itself using its internal sensors (encoders, an IMU, etc.) and the external sensors it carries (laser or visual sensors), and on that basis incrementally builds a map of the environment from the information the external sensors acquire.
3D vision technology: this includes passive 3D vision using a stereoscopic camera system; laser 3D scanning, in which cameras capture projected laser images to make stereoscopic measurements; structured-light 3D scanning, in which a projector realizes 3D view reconstruction through structured-light coding; TOF cameras, which emit high-frequency light signals from LEDs and measure distance from the signal's round-trip time; and others. Monocular 3D vision is simple in structure but low in precision; binocular and multi-ocular 3D vision is more precise, but its algorithms and structure are relatively complex; laser 3D is fast but limited by laser speckle defects and hard to push to high precision; structured-light 3D is precise and fast but cannot be used on transparent, black, or strongly reflective surfaces; TOF cameras are fast and accurate but relatively costly. At present, passive 3D, laser 3D, and structured-light 3D vision are mature and widely applied.
Structured light method: according to the form of the projected beam, structured light methods divide into point structured light, light-bar (line) structured light, surface structured light, and others. The surface structured light method is now widely applied to depth measurement and has clear advantages: surface patterns of various kinds are projected onto the measured object, for example a densely distributed uniform grating. Because the object's surface is uneven and varies in depth, the grating stripes reflected from the surface are distorted according to that depth; the process can be viewed as the surface depth information modulating the grating stripes. The surface information of the measured object is thus modulated into the reflected grating, and the height differences and depth information of each measured point are recovered by analysing the geometric relationship between the grating reflected by the object and a reference grating.
Three-dimensional reconstruction: obtaining a three-dimensional model of an environment or object through a series of processing steps applied to photographs of it taken from different angles. Image features are extracted; feature matches between images are computed from them; sparse reconstruction based on the matched features yields each image's camera pose and a sparse feature point cloud; dense reconstruction based on the camera poses yields a dense point cloud; and a mesh reconstructed from the point cloud gives the three-dimensional model of the measured object.
Dense point cloud: the point cloud is a massive collection of points that represent the spatial distribution of the target and the characteristics of the target surface in the same spatial reference system. When a laser beam irradiates the surface of an object, the reflected laser beam carries information such as direction, distance and the like. If the laser beam is scanned along a certain track, the reflected laser point information is recorded while scanning, and because the scanning is extremely fine, a large number of laser points can be obtained, and a point data set, namely dense point cloud, of the product appearance surface is formed.
A depth camera: the light with certain structural characteristics is projected to a shot object and collected by a special infrared camera. The light with a certain structure can acquire different image phase information according to different depth areas of a shot object, and then the change of the structure is converted into depth information through an arithmetic unit, so that a three-dimensional structure is obtained. The structured light method used by the depth camera does not depend on the color and texture of an object, and the method of actively projecting known patterns is adopted to realize fast and robust matching of feature points, so that higher precision can be achieved, and the application range is greatly expanded.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a robot carbon block cleaning method based on three-dimensional reconstruction.
The invention provides a robot carbon block cleaning method based on three-dimensional reconstruction, which comprises the following steps:
acquiring pictures and three-dimensional coordinate data of N carbon blocks on a conveyor belt by a depth camera;
collecting three-dimensional coordinate data of N carbon blocks on a conveyor belt by a laser radar;
preprocessing the pictures of the N carbon blocks to obtain image data, inputting a pre-trained carbon block identification model to obtain an identification result of each carbon block, and thus obtaining a carbon bowl area and a groove area of each carbon block;
splicing the three-dimensional coordinate data collected by the depth camera and the three-dimensional coordinate data collected by the laser radar to obtain dense three-dimensional coordinate data;
performing real-time three-dimensional reconstruction of the carbon blocks based on the dense three-dimensional coordinate data to obtain three-dimensional models of the N carbon blocks, thereby obtaining surface three-dimensional coordinates of the N carbon blocks, and calculating a normal vector of each surface of each carbon block;
tracking and positioning each carbon block on the conveyor belt by using SLAM technology to obtain the real-time position of each carbon block on the conveyor belt;
and planning a motion track according to the real-time position of each carbon block on the conveyor belt and the normal vector of the surface of each carbon block, and controlling the mechanical arm to sequentially clean the side surface, the upper surface and the inner groove of each carbon block.
As an improvement of the above method, the preprocessing resizes each picture to obtain image data of fixed length and width.
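The patent does not specify the resizing procedure. As a minimal sketch, assuming images arrive as H x W x C NumPy arrays and using the 340 x 340 input size mentioned later in the description, a nearest-neighbour resize could look like this:

```python
import numpy as np

def preprocess(img, size=340):
    """Nearest-neighbour resize of an H x W x C image array to a fixed
    size x size shape, as required by the identification model."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    return img[rows][:, cols]
```

In practice a library resize with proper interpolation (e.g. OpenCV or Pillow) would normally be used; the point is only that every picture is mapped to a fixed-size array before entering the network.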
As an improvement of the above method, the input of the carbon block identification model is the image data of a carbon block and the output is the carbon block identification result; the model consists of convolution layers, an RPN layer and a Proposal layer, wherein:
the convolution layers comprise 4 blocks with 3×3 convolution kernels, a sliding stride of 1, the ReLU activation function, and a 5×5 pooling window;
the RPN layer consists of one 3×3 convolution kernel followed by a first branch and a second branch connected in parallel: the first branch classifies the anchor regions as positive or negative through a softmax function, and the second branch calculates the bounding-box regression offsets of the candidate boxes;
the Proposal layer synthesizes the positively classified anchor regions with the corresponding bounding-box regression offsets to obtain candidate regions, eliminates candidate boxes whose intersection with the labelled box is smaller than a first threshold or which exceed the image boundary, and selects the candidate boxes whose positive-classification score is greater than a second threshold, thereby obtaining the identification result of the carbon block.
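A minimal sketch of the Proposal-layer filtering described above, using the illustrative thresholds 0.5 for the overlap test and 0.95 for the positive-classification score given later in the description (the function and argument names are not from the patent, and the overlap test is implemented here as intersection-over-union, one common reading of "intersection area smaller than a first threshold"):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def filter_proposals(boxes, scores, gt_box, img_w, img_h,
                     iou_thresh=0.5, score_thresh=0.95):
    """Keep candidate boxes that overlap the labelled box enough, stay
    inside the image boundary, and score above the positive threshold."""
    keep = []
    for box, score in zip(boxes, scores):
        inside = (box[0] >= 0 and box[1] >= 0
                  and box[2] <= img_w and box[3] <= img_h)
        if inside and iou(box, gt_box) >= iou_thresh and score > score_thresh:
            keep.append(box)
    return keep
```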
As an improvement of the above method, the method further includes a step of training the carbon block identification model, specifically including:
preprocessing the collected picture to obtain image data with fixed length and width, selecting partial image data to be manually marked as a training set, and using the other partial image data as a test set;
inputting the training set data into a carbon block recognition model;
training until the accuracy index is highest by adjusting the learning rate to obtain a trained carbon block identification model;
inputting the test set into a trained carbon block identification model to complete verification.
As an improvement of the above method, the real-time three-dimensional reconstruction of the carbon blocks is performed based on the dense three-dimensional coordinate data to obtain a three-dimensional model of the N carbon blocks; the method specifically comprises the following steps:
filtering data noise of the dense three-dimensional coordinate data by using a probability model and a global pose estimation method;
obtaining a pose graph by adopting a sub-graph dividing method based on scene characteristics;
extracting the characteristic points to match the pose graphs, and constructing three-dimensional models of the N carbon blocks by combining image information.
As an improvement of the above method, the normal vector of each surface of each carbon block is specifically:
the coordinate values (x, y, z) of the normal vector are obtained by the following equation:
(x2-x1)·x+(y2-y1)·y+(z2-z1)·z=0
(x3-x1)·x+(y3-y1)·y+(z3-z1)·z=0
(x3-x2)·x+(y3-y2)·y+(z3-z2)·z=0
wherein (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are respectively the coordinates of three points on a plane of the carbon block surface;
the direction of the normal vector satisfies the right-hand spiral rule.
As an improvement of the above method, each carbon block on the conveyor belt is tracked and positioned by means of SLAM technology to obtain its real-time position on the conveyor belt; the method specifically comprises:
extracting SIFT features from the dense three-dimensional coordinate data by a feature-based method, and from them obtaining the pose estimate of each key frame;
managing local key frames and map points through local mapping, and optimizing the poses of the local key frames and the positions of the local map points, thereby obtaining the real-time position of each carbon block on the conveyor belt.
As an improvement of the above method, motion trajectory planning is performed according to the real-time position of each carbon block on the conveyor belt and the normal vectors of its surfaces, and the mechanical arm is controlled to clean the side surfaces, the upper surface and the inner grooves of the carbon block in sequence; the method specifically comprises:
matching the three-dimensional coordinate data acquired by the depth camera into the carbon block coordinate system according to the calibration between the depth camera coordinate system and the mechanical arm coordinate system;
in the carbon block coordinate system, selecting a number of points for motion trajectory planning according to the real-time position of each carbon block on the conveyor belt and the normal vectors of its surfaces;
and controlling the mechanical arm to sequentially clean the side surface, the upper surface and the inner groove of the carbon block according to the motion trail.
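Matching camera data into the carbon block coordinate system amounts to applying the rigid transform obtained from calibration. A sketch, assuming the calibration yields a 4 x 4 homogeneous matrix (the name `T_cam_to_block` is illustrative):

```python
import numpy as np

def to_block_frame(points_cam, T_cam_to_block):
    """Apply a 4x4 homogeneous transform, as obtained from camera/arm
    calibration, to an N x 3 array of camera-frame points, returning
    the same points expressed in the carbon block coordinate system."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_cam_to_block @ homo.T).T[:, :3]
```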
Compared with the prior art, the invention has the advantages that:
1. the method disclosed by the invention needs no manual assistance: industrial, intelligent carbon block cleaning eliminates safety hazards, improves product quality, and makes production automated and intelligent;
2. the invention has the characteristics of low cost, full automation, high efficiency and the like.
Drawings
FIG. 1 is a schematic diagram of a track of a robot carbon block cleaning based on three-dimensional reconstruction according to the invention;
fig. 2 is a schematic view of the control arm of the present invention performing a carbon block cleaning operation.
Detailed Description
The method takes an industrial mechanical arm as a carrier, carries a depth camera, and performs three-dimensional reconstruction and analysis on a target object by using a laser point cloud perception algorithm. The carbon block cleaning robot replaces manual work to realize the tasks of cleaning and repairing the surface of the target object.
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and examples.
The flow of the robot carbon block cleaning method based on three-dimensional reconstruction specifically comprises the following steps:
1) high precision three dimensional reconstruction
A carbon block three-dimensional reconstruction and attached-residue and crack detection method based on line structured light provides the key technical support for cleaning the carbon block surface and repairing cracks. Three-dimensional data of the carbon block are collected from multiple angles by rotating the depth camera. The three-dimensional reconstruction method has three key steps: model representation and aggregation, subgraph construction, and closed-loop detection. For model representation and aggregation, a point cloud with a probability model is used: the influence of data noise is fully considered during aggregation, which improves model quality, while the point-cloud representation reduces data volume and offers more flexibility. A global pose estimation method is designed to increase the parallelism of the algorithm and improve hardware utilization. This three-dimensional reconstruction method based on dense point clouds is more robust in closed-loop detection and markedly improves both model quality and pose accuracy.
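The probabilistic noise filtering is not detailed in the patent. A common stand-in is statistical outlier removal, where points whose distance from the cloud is improbably large under the empirical distribution are treated as noise; a deliberately crude sketch against the global centroid:

```python
import numpy as np

def remove_outliers(points, k=2.0):
    """Drop points whose distance to the cloud centroid exceeds
    mean + k * std of all such distances (a crude probabilistic filter;
    the threshold factor k is illustrative)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    return points[d <= d.mean() + k * d.std()]
```

Real pipelines usually test each point against its local neighbourhood rather than the global centroid, but the principle (reject points unlikely under a fitted distribution) is the same.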
2) Real-time positioning
The SLAM positioning technique mainly comprises 3 parts: tracking, local mapping and closed-loop detection. SLAM is used to position the carbon block accurately, and the three-dimensional point cloud of the depth camera is spliced with that of the laser radar to obtain a dense three-dimensional point cloud, improving the real-time performance of the system and its robustness in dim light. SIFT features are extracted by a feature-based method to obtain the pose estimate of each key frame. The local mapping part manages local key frames and map points and optimizes the poses of the local key frames and the positions of the local map points.
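The splicing of the two point clouds reduces to expressing both clouds in one frame and concatenating them. A sketch, assuming the extrinsic calibration between laser radar and depth camera is known as a 4 x 4 homogeneous matrix (the names are illustrative):

```python
import numpy as np

def splice_clouds(cam_points, lidar_points, T_lidar_to_cam):
    """Transform N x 3 lidar points into the depth-camera frame and
    concatenate the two clouds into one dense point set."""
    homo = np.hstack([lidar_points, np.ones((len(lidar_points), 1))])
    lidar_in_cam = (T_lidar_to_cam @ homo.T).T[:, :3]
    return np.vstack([cam_points, lidar_in_cam])
```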
3) Carbon block identification
Firstly, the carbon block identification network model is built. Secondly, the collected data are preprocessed (resized to 340×340) and manually annotated to complete the ground-truth data set, and training on a GPU server optimizes the model parameters to obtain a baseline model. The training parameters are then tuned: for example, the learning rate lr is set in turn to 0.1, 0.5, 0.01 and 0.001 to improve the model evaluation metrics, and the model with the best accuracy is selected as the optimal model. Finally, 100 carbon block pictures are used as a test set to complete the model test and ensure that the carbon block identification accuracy reaches 100%, and the recognition results are visualized by drawing the carbon bowl region and the groove region.
The convolution layers of the carbon block identification network consist of 4 blocks (conv + relu + conv + relu + pooling; conv: kernel = 3, stride = 1; activation: ReLU; pooling: average pooling, kernel = 5). The RPN layer that follows first applies a 3x3 convolution and then splits into 2 branches: the first obtains positive and negative classifications of the anchors through softmax, and the second calculates the bounding-box regression offsets of the candidate boxes to obtain candidate regions. The final Proposal layer synthesizes the positive anchors (candidate boxes classified as carbon block) with the corresponding bounding-box regression offsets to obtain candidate regions, eliminates candidate boxes whose intersection with the labelled box is smaller than 0.5 or which exceed the image boundary, and finally selects the candidate boxes whose positive-classification score is greater than 0.95 as the final carbon block localization result.
4) Calculating the normal vector of the surface of the carbon block
The normal vector of each point can be calculated by obtaining the three-dimensional point coordinates of the surface of the carbon block through three-dimensional reconstruction.
The coordinate values (x, y, z) of the normal vector are obtained by the following equation:
(x2-x1)·x+(y2-y1)·y+(z2-z1)·z=0
(x3-x1)·x+(y3-y1)·y+(z3-z1)·z=0
(x3-x2)·x+(y3-y2)·y+(z3-z2)·z=0
wherein (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are respectively the coordinates of three points on a plane of the carbon block surface;
the direction of the normal vector satisfies the right-hand spiral rule.
5) Mechanical arm on-line control
Calibration is performed between the camera coordinate system and the mechanical arm coordinate system, and the camera's point cloud is matched into the carbon block coordinate system. A number of points are then selected in the carbon block coordinate system as the motion trajectory of the mechanical arm. Fig. 1 shows the trajectory along which the mechanical arm cleans the side surfaces, upper surface and inner grooves of the carbon block in sequence, and Fig. 2 shows the control of the mechanical arm performing the carbon block cleaning operation.
Real-time position adjustment of the mechanical arm is achieved by recording, through real-time three-dimensional modelling, the carbon block's position at the previous moment and its current position, subtracting the relative displacement between the two moments from the motion trajectory, and completing comprehensive cleaning of the carbon block along the preset trajectory.
Track point coordinates at the previous moment: (a1, b1, c1), (a2, b2, c2), ..., (an, bn, cn), n point coordinates in total;
Track point coordinates at the current moment: (a1 - s, b1, c1), (a2 - s, b2, c2), ..., (an - s, bn, cn), n point coordinates in total, where s is the distance the carbon block has moved horizontally along the conveyor belt from the previous moment to the current moment.
Through the ROS-based vision detection and mechanical-arm grasping system, the robot can be operated efficiently in real time to complete the specified control operations; the adaptability of the system to the environment is improved, and it grasps accurately with a high object-identification rate. The motion trajectory of the mechanical arm is planned so that the moving carbon block is cleaned accurately. The moving speed of the carbon block is measured with an accuracy that can reach millimetres per second.
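The trajectory update described above, subtracting the belt displacement s from each planned track point, can be sketched as follows (the tuple layout is illustrative):

```python
def updated_trajectory(points, s):
    """Shift every planned track point by the belt travel s along the
    horizontal axis, so the arm follows the carbon block as it moves."""
    return [(a - s, b, c) for (a, b, c) in points]
```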
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and are not limiting. Although the present invention has been described in detail with reference to the embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A method for cleaning a carbon block of a robot based on three-dimensional reconstruction, the method comprising:
acquiring pictures and three-dimensional coordinate data of N carbon blocks on a conveyor belt by a depth camera;
collecting three-dimensional coordinate data of N carbon blocks on a conveyor belt by a laser radar;
preprocessing the pictures of the N carbon blocks to obtain image data, inputting a pre-trained carbon block identification model to obtain an identification result of each carbon block, and thus obtaining a carbon bowl area and a groove area of each carbon block;
splicing the three-dimensional coordinate data collected by the depth camera and the three-dimensional coordinate data collected by the laser radar to obtain dense three-dimensional coordinate data;
performing real-time three-dimensional reconstruction of the carbon blocks based on the dense three-dimensional coordinate data to obtain three-dimensional models of the N carbon blocks, thereby obtaining surface three-dimensional coordinates of the N carbon blocks, and calculating a normal vector of each surface of each carbon block;
tracking and positioning each carbon block on the conveyor belt by using an SLAM technology to obtain the real-time position quantity of each carbon block on the conveyor belt;
and planning a motion track according to the real-time position of each carbon block on the conveyor belt and the normal vector of the surface of each carbon block, and controlling the mechanical arm to sequentially clean the side surface, the upper surface and the inner groove of each carbon block.
2. The method for cleaning the carbon block of the robot based on the three-dimensional reconstruction as recited in claim 1, wherein the preprocessing is to modify the picture size to obtain the image data with fixed length and width.
3. The method for cleaning a carbon block of a robot based on three-dimensional reconstruction as claimed in claim 2, wherein the input of the carbon block identification model is image data of the carbon block, the output is a carbon block identification result, and the carbon block identification model is composed of a convolutional layer, an RPN layer and a Proposal layer; wherein:
the convolutional layer comprises 4 blocks with 3×3 convolution kernels, a sliding step of 1, a ReLU activation function and a 5×5 pooling window;
the RPN layer consists of one 3×3 convolution kernel followed by a first branch and a second branch connected in parallel, wherein the first branch obtains positive and negative classifications of the anchor regions through a softmax function, and the second branch calculates the bounding-box regression offsets of the candidate frames;
the Proposal layer integrates the anchor regions to obtain positively classified candidate regions and the bounding-box regression offsets of the corresponding candidate frames, eliminates candidate regions whose intersection area with the labeled frame is smaller than a first threshold or which exceed the image boundary, and selects candidate frames whose positive classification score is larger than a second threshold, thereby obtaining the identification result of the carbon block.
4. The method for cleaning the carbon block of the robot based on the three-dimensional reconstruction as recited in claim 3, further comprising a step of training a carbon block recognition model, specifically comprising:
preprocessing the collected picture to obtain image data with fixed length and width, selecting partial image data to be manually marked as a training set, and using the other partial image data as a test set;
inputting the training set data into a carbon block recognition model;
training until the accuracy index is highest by adjusting the learning rate to obtain a trained carbon block identification model;
inputting the test set into a trained carbon block identification model to complete verification.
5. The method for cleaning the carbon block of the robot based on the three-dimensional reconstruction as recited in claim 1, wherein the real-time three-dimensional reconstruction of the carbon block is performed based on the dense three-dimensional coordinate data to obtain a three-dimensional model of the N carbon blocks; the method specifically comprises the following steps:
filtering data noise of the dense three-dimensional coordinate data by using a probability model and a global pose estimation method;
obtaining a pose graph by adopting a sub-graph dividing method based on scene characteristics;
extracting the characteristic points to match the pose graphs, and constructing three-dimensional models of the N carbon blocks by combining image information.
6. The method for cleaning the carbon block of the robot based on the three-dimensional reconstruction as claimed in claim 5, wherein the normal vector of each surface of each carbon block is obtained as follows:
the coordinate values (x, y, z) of the normal vector are obtained by the following equation:
(x2 - x1)·x + (y2 - y1)·y + (z2 - z1)·z = 0
(x3 - x1)·x + (y3 - y1)·y + (z3 - z1)·z = 0
(x3 - x2)·x + (y3 - y2)·y + (z3 - z2)·z = 0
wherein (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) are respectively the coordinate values of three points on a certain plane on the surface of the carbon block;
the direction of the normal vector satisfies the right-hand screw rule.
7. The method for cleaning the robot carbon block based on the three-dimensional reconstruction as recited in claim 6, wherein the tracking and positioning of each carbon block on the conveyor belt are performed by SLAM technology, so as to obtain the real-time position quantity of each carbon block on the conveyor belt; the method specifically comprises the following steps:
extracting SIFT features from the dense three-dimensional coordinate data by a feature-based method, and further obtaining the pose estimation of the key frames;
and managing local key frames and map points through local map building, and optimizing the position and the posture of the local key frames and the positions of the local map points so as to obtain the real-time position quantity of each carbon block on the conveyor belt.
8. The method for cleaning the carbon block of the robot based on the three-dimensional reconstruction as recited in claim 7, wherein the movement path planning is performed according to the real-time position quantity of each carbon block on the conveyor belt and the normal vector of the surface of the carbon block, and the mechanical arm is controlled to sequentially perform the cleaning operation of the side surface, the upper surface and the inner groove on the carbon block; the method specifically comprises the following steps:
according to the calibration of a depth camera coordinate system and a mechanical arm coordinate system, matching three-dimensional coordinate data acquired by a depth camera into a carbon block coordinate system;
in a carbon block coordinate system, selecting a plurality of points for planning a motion track according to the real-time position quantity of each carbon block on a conveyor belt and the normal vector of the surface of the carbon block;
and controlling the mechanical arm to sequentially clean the side surface, the upper surface and the inner groove of the carbon block according to the motion trail.
CN202010841548.4A 2020-08-20 2020-08-20 Robot carbon block cleaning method based on three-dimensional reconstruction Active CN114078152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841548.4A CN114078152B (en) 2020-08-20 2020-08-20 Robot carbon block cleaning method based on three-dimensional reconstruction


Publications (2)

Publication Number Publication Date
CN114078152A true CN114078152A (en) 2022-02-22
CN114078152B CN114078152B (en) 2023-05-02

Family

ID=80281661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841548.4A Active CN114078152B (en) 2020-08-20 2020-08-20 Robot carbon block cleaning method based on three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN114078152B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070076946A1 (en) * 2005-09-30 2007-04-05 Nachi-Fujikoshi Corp. Object search apparatus, robot system equipped with object search apparatus, and object search method
CN104694970A (en) * 2013-12-10 2015-06-10 沈阳铝镁设计研究院有限公司 Automatic tracking method for stacking of prebaked anode carbon blocks
CN109948514A (en) * 2019-03-15 2019-06-28 中国科学院宁波材料技术与工程研究所 Workpiece based on single goal three-dimensional reconstruction quickly identifies and localization method
CN110288695A (en) * 2019-06-13 2019-09-27 电子科技大学 Single-frame images threedimensional model method of surface reconstruction based on deep learning
CN111476899A (en) * 2020-03-24 2020-07-31 清华大学 Three-dimensional reconstruction method for dense texture coordinates of human hand based on single-viewpoint RGB camera


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yang Juqing: "Research on Dynamic Correction Technology of Robot Pose Error for Intelligent Manufacturing" *

Also Published As

Publication number Publication date
CN114078152B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN106041937A (en) Control method of manipulator grabbing control system based on binocular stereoscopic vision
CN101161151A (en) Method and system for automatic generating shoe sole photopolymer coating track based on linear structure optical sensor
CN107253192A (en) It is a kind of based on Kinect without demarcation human-computer interactive control system and method
CN109632822A (en) A kind of quasi-static high-precision road surface breakage intelligent identification device and its method
CN110065068B (en) Robot assembly operation demonstration programming method and device based on reverse engineering
CN103247056B (en) Human bone articular system three-dimensional model-bidimensional image spatial registration method
Ferreira et al. A low-cost laser scanning solution for flexible robotic cells: spray coating
Hu et al. 3D vision technologies for a self-developed structural external crack damage recognition robot
CN103162659B (en) A kind of method constructing three-dimensional vehicle scan table and generate goods stochastic sampling point
CN106228570A (en) A kind of Truth data determines method and apparatus
Liu et al. Real-time 3D surface measurement in additive manufacturing using deep learning
CN112504123A (en) Automatic detection equipment and method for plates of power transmission tower
Flores-Fuentes et al. 3D spatial measurement for model reconstruction: A review
Tang et al. Grand challenges of machine-vision technology in civil structural health monitoring
Alamdari et al. A multi-scale robotic approach for precise crack measurement in concrete structures
CN114078152B (en) Robot carbon block cleaning method based on three-dimensional reconstruction
CN104266594B (en) Thickness compensation method for block frozen shrimp net content detection based on different visual technologies
CN105783782B (en) Surface curvature is mutated optical profilometry methodology
Zhang et al. Neural rendering-enabled 3D modeling for rapid digitization of in-service products
Mondal et al. Applications of depth sensing for advanced structural condition assessment in smart cities
CN1609894A (en) Steel products on-line counting system and method based on virtual multisensor fusion
CN110717981A (en) Method and device for acquiring indoor passable area of small robot
Urbanic et al. Targeted reverse engineering techniques for generating architectural solid models for additive manufacturing fabrication
CN114812408B (en) Method and system for measuring height of stone sweeper from rail surface
CN111366086B (en) Carriage pose measurement system and method for automatic loading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Sun Yinjian

Inventor after: Chen Hong

Inventor after: Chen Ren

Inventor after: Zhang Youquan

Inventor after: Rong Guigan

Inventor after: Xin Zi

Inventor after: Chen Kai

Inventor after: Yue Caiwei

Inventor after: Yu Minghua

Inventor after: Li Long

Inventor after: Yu Chengjian

Inventor before: Sun Yinjian

Inventor before: Chen Hong

Inventor before: Chen Ren

Inventor before: Zhang Youquan

Inventor before: Rong Guigan

Inventor before: Xin Zi

Inventor before: Chen Kai

Inventor before: Yue Caiwei

Inventor before: Yu Minghua

Inventor before: Li Long

Inventor before: Yu Chengjian

GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 100038 A-20, 155 jijiamiao, Huaxiang, Fengtai District, Beijing

Patentee after: Beijing watman Intelligent Technology Co.,Ltd.

Address before: 100038 A-20, 155 jijiamiao, Huaxiang, Fengtai District, Beijing

Patentee before: Beijing watman Technology Co.,Ltd.