CN112011750B - Slag dragging method based on machine vision and robot system - Google Patents

Slag dragging method based on machine vision and robot system

Info

Publication number
CN112011750B
CN112011750B (application CN202010847127.2A)
Authority
CN
China
Prior art keywords
slag
thick
picture
dragging
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010847127.2A
Other languages
Chinese (zh)
Other versions
CN112011750A (en)
Inventor
但斌斌
陈刚
张振洲
李克波
熊凌
容芷君
牛清勇
付婷
李颖
罗钟邱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE filed Critical Wuhan University of Science and Engineering WUSE
Priority to CN202010847127.2A priority Critical patent/CN112011750B/en
Publication of CN112011750A publication Critical patent/CN112011750A/en
Application granted granted Critical
Publication of CN112011750B publication Critical patent/CN112011750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • C CHEMISTRY; METALLURGY
    • C23 COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; CHEMICAL SURFACE TREATMENT; DIFFUSION TREATMENT OF METALLIC MATERIAL; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL; INHIBITING CORROSION OF METALLIC MATERIAL OR INCRUSTATION IN GENERAL
    • C23C COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; SURFACE TREATMENT OF METALLIC MATERIAL BY DIFFUSION INTO THE SURFACE, BY CHEMICAL CONVERSION OR SUBSTITUTION; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL
    • C23C 2/00 Hot-dipping or immersion processes for applying the coating material in the molten state without affecting the shape; Apparatus therefor
    • C23C 2/003 Apparatus
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 Manipulators not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/021 Optical sensing devices
    • B25J 19/023 Optical sensing devices including video camera means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/08 Programme-controlled manipulators characterised by modular constructions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • C CHEMISTRY; METALLURGY
    • C23 COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; CHEMICAL SURFACE TREATMENT; DIFFUSION TREATMENT OF METALLIC MATERIAL; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL; INHIBITING CORROSION OF METALLIC MATERIAL OR INCRUSTATION IN GENERAL
    • C23C COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; SURFACE TREATMENT OF METALLIC MATERIAL BY DIFFUSION INTO THE SURFACE, BY CHEMICAL CONVERSION OR SUBSTITUTION; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL
    • C23C 2/00 Hot-dipping or immersion processes for applying the coating material in the molten state without affecting the shape; Apparatus therefor
    • C23C 2/04 Hot-dipping or immersion processes for applying the coating material in the molten state without affecting the shape; Apparatus therefor characterised by the coating material
    • C23C 2/06 Zinc or cadmium or alloys based thereon
    • C CHEMISTRY; METALLURGY
    • C23 COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; CHEMICAL SURFACE TREATMENT; DIFFUSION TREATMENT OF METALLIC MATERIAL; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL; INHIBITING CORROSION OF METALLIC MATERIAL OR INCRUSTATION IN GENERAL
    • C23C COATING METALLIC MATERIAL; COATING MATERIAL WITH METALLIC MATERIAL; SURFACE TREATMENT OF METALLIC MATERIAL BY DIFFUSION INTO THE SURFACE, BY CHEMICAL CONVERSION OR SUBSTITUTION; COATING BY VACUUM EVAPORATION, BY SPUTTERING, BY ION IMPLANTATION OR BY CHEMICAL VAPOUR DEPOSITION, IN GENERAL
    • C23C 2/00 Hot-dipping or immersion processes for applying the coating material in the molten state without affecting the shape; Apparatus therefor
    • C23C 2/34 Hot-dipping or immersion processes for applying the coating material in the molten state without affecting the shape; Apparatus therefor characterised by the shape of the material to be treated
    • C23C 2/36 Elongated material
    • C23C 2/40 Plates; Strips

Landscapes

  • Engineering & Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Chemical Kinetics & Catalysis (AREA)
  • Materials Engineering (AREA)
  • Metallurgy (AREA)
  • Organic Chemistry (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine-vision-based slag dragging method and robot system. A thick-slag region that affects galvanized-sheet quality is identified and located by vision technology, a dragging sequence is then derived from a priority rule over the dragging regions, and the slag dragging robot is finally controlled to scoop all thick slag into a nearby slag hopper. Compared with conventional robotic slag dragging systems, the machine-vision scheme determines the optimal dragging sequence from the zinc-dross distribution characteristics and the dragging strategy, cuts the zinc and power consumed by unnecessary dragging passes, and markedly improves both the autonomy and the working efficiency of the slag dragging robot.

Description

Slag dragging method based on machine vision and robot system
Technical Field
The invention belongs to the technical field of hot galvanizing slag salvaging, and particularly relates to an intelligent slag salvaging method based on machine vision and a robot system using the method.
Background
Hot galvanizing, also called hot-dip galvanizing, is an effective means of metal corrosion protection: a de-rusted steel member is immersed in molten zinc at about 460 °C so that a zinc layer adheres to its surface. During galvanizing, zinc dross is unavoidable, because iron on the strip surface continuously dissolves into the zinc bath, the composition and temperature of the zinc pot are uneven, and air-knife blowing causes oxidation. Zinc-dross defects on the surface of hot-dip galvanized strip have become one of the main quality defects of galvanized products and seriously degrade their appearance. Because dross forms continuously, reducing dross defects requires timely and frequent dragging, improved dragging tools, and less dross being stirred back into the zinc bath by the dragging itself.
The slag dragging robots in industrial use today mostly rely on six-axis, six-degree-of-freedom arms of relatively simple structure that repeat a fixed dragging path on a timer. Such robots can only perform simple repetitive dragging motions; they cannot solve key problems such as effective coverage of the dragging region or determination of the optimal dragging frequency. Their on-site adaptability is extremely poor: because they can neither recognize zinc dross nor sense the zinc-bath level, frequent temporary manual intervention is still required. They relieve some of the heavy labor of manual dragging but lack its flexibility.
Robot products developed so far are adaptations of this early stage: they optimize only a local dragging path or method, or solve the specific working conditions of one production line (every enterprise's zinc-pot layout differs). A few iron and steel enterprises have deployed six-axis dragging robots at the zinc pot; with programming and a growing number of measurement and control instruments, the robot can cope with the confined space behind the furnace nose and partially replace workers in flexible dragging. But these products are only incremental improvements on the early dragging robots: they cannot automatically identify and classify zinc dross, they do not fundamentally improve galvanized-sheet quality, dragging effect, or dragging efficiency, and their blind dragging is inefficient and imprecise.
Disclosure of Invention
Aiming at the low efficiency and poor precision of the blind dragging performed by current slag dragging robots, the invention provides a machine-vision-based slag dragging method and robot system that automatically identifies and classifies zinc dross, continuously iterates on the acquired data to obtain the optimal dragging sequence, reduces the dragging frequency, improves dragging efficiency, cuts losses, and achieves autonomous, green slag dragging.
In order to solve the technical problem, the invention adopts the following technical scheme:
a slag salvaging method based on machine vision is characterized in that a thick slag area influencing the quality of a galvanized plate is identified and positioned through a vision technology, and then a robot is controlled according to priority to sequentially salvage all thick slag; the method comprises the following steps:
s1: collecting a color picture and a depth map of the slag dragging area, preprocessing the color picture and then storing the color picture for image processing and coordinate transformation in the subsequent steps;
s2: cutting the collected color pictures of the slag salvaging area to form small pictures, extracting deep image features of the small pictures by adopting a convolutional neural network, and classifying the small pictures after training to obtain the serial number of the thick slag pictures;
s3: determining the position of the zinc dross by combining the depth map and the center of the zinc dross picture, calculating the mass center coordinate of the thick dross picture, and calculating the three-dimensional coordinate of the mass center of the thick dross by combining the depth map of the depth camera;
s4: according to the distribution rule of the zinc slag, giving a priority rule of a slag dragging area, and sequencing the thick slag pictures in sequence under the constraint of the rule to obtain a slag dragging sequence;
s5: and controlling the slag dragging robot to drag the thick slag in sequence according to the slag dragging sequence and the three-dimensional coordinates of the camera of the thick slag mass center.
Further, in the step S1, a depth camera fixed on a base outside the slag dragging robot performs the collection; the depth camera measures the straight-line distance between each target pixel and the camera by the time-of-flight method. A zinc-dross color image and the corresponding depth map are collected with the depth camera; the depth map is saved as-is, while the color picture is denoised, cropped to the region of interest, and stored with the extension ".jpg".
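A minimal sketch of this preprocessing, assuming frames arrive as NumPy arrays: the time-of-flight camera's capture API is vendor-specific and therefore omitted, and a 3 × 3 mean filter stands in for the denoising method, which the patent does not specify.

```python
import numpy as np

def preprocess(color, roi):
    # Denoise a grayscale frame with a 3x3 mean filter, then crop the
    # region of interest. roi = (y0, y1, x0, x1) in pixel coordinates.
    # Both the filter and the ROI format are illustrative assumptions.
    y0, y1, x0, x1 = roi
    h, w = color.shape
    pad = np.pad(color.astype(float), 1, mode="edge")
    smoothed = sum(pad[dy:dy + h, dx:dx + w]
                   for dy in range(3) for dx in range(3)) / 9.0
    return smoothed[y0:y1, x0:x1]
```

The cropped result would then be written out with the ".jpg" extension, while the depth map is stored unmodified for the coordinate computation of step S3.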
Further, in the step S2:
When the collected color picture of the slag dragging area is cut, let the picture shot on site be X × Y pixels, the actuating mechanism of the slag dragging robot measure a × b (centimetres), and the zinc-pot slag dragging area measure M × N (centimetres); the picture area corresponding to one scooping is then
x × y = (aX/M) × (bY/N) pixels,
so each picture is cut into
(M/a) × (N/b)
equal small pictures; after segmentation, the small pictures are named by serial number in order from left to right and from top to bottom;
a training set and a test set of thick-slag and thin-slag pictures are made, and the small pictures pass through a convolutional neural network that extracts deep image features and classifies them: the convolutional layers extract the deep features of the pictures, the pooling layers reduce the data dimensionality, the fully connected layer combines the extracted features nonlinearly, and the output layer outputs the classification label of each picture;
after training is finished, the trained network model is run on the test set; the test-set pictures are classified to obtain the serial numbers of all thick-slag and thin-slag pictures, and the serial numbers of all pictures classified as thick slag are stored.
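Assuming the pixel and centimetre dimensions divide evenly, the cut described above can be sketched as follows; returning the tiles in a dict keyed by serial number (counted from 0) is an illustrative choice, not the patent's representation.

```python
import numpy as np

def cut_tiles(img, X, Y, M, N, a, b):
    # One scoop covers x = a*X/M by y = b*Y/N pixels, so the picture
    # splits into (M/a) * (N/b) small pictures, named by serial number
    # from left to right, top to bottom.
    x, y = a * X // M, b * Y // N
    cols, rows = M // a, N // b
    return {c: img[(c // cols) * y:(c // cols + 1) * y,
                   (c % cols) * x:(c % cols + 1) * x]
            for c in range(rows * cols)}
```

Each tile would then be fed to the trained convolutional network, and the serial numbers of tiles labelled "thick slag" stored for step S3.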
Further, step S3 locates the centroid of the thick slag from the centre of each thick-slag picture, so that the whole piece of thick slag can be scooped accurately and completely:
According to the segmentation result, the centre pixel coordinate of each thick-slag picture is computed from its name, i.e. its serial number; for a picture with serial number c (serial numbers counted from 0), the centroid pixel coordinate of the picture is
( (c % (M/a) + 1/2) · (aX/M), (c / (M/a) + 1/2) · (bY/N) ),
where X, Y, M, N, a and b are as given above, "%" is the remainder of integer division and "/" is its quotient; the centroid pixel coordinate of each thick-slag picture is stored;
the segmentation result is then analysed to determine the number of thick-slag pieces, and the centroid coordinate of each piece is calculated and stored.
After the two-dimensional centroid coordinate of each piece of thick slag is obtained, the depth of the corresponding pixel in the depth map is used: the depth map is aligned with the color picture, the depth camera's built-in function is called to compute the three-dimensional centroid coordinate of each piece of thick slag in the camera coordinate system, and a coordinate transformation then yields the three-dimensional centroid information of the thick slag in the robot coordinate system.
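The centre-pixel arithmetic and the depth-to-3D step can be sketched as below. tile_centre_pixel mirrors the serial-number computation of step S3 (serial numbers counted from 0, M/a tiles per row); back_project is an assumed pinhole-model equivalent of the depth camera's built-in deprojection function, with intrinsics fx, fy, cx, cy.

```python
def tile_centre_pixel(c, X, Y, M, N, a, b):
    # Centre pixel of the small picture with serial number c:
    # "%" is the remainder and "/" the integer quotient, as in the text.
    cols = M // a                    # tiles per row
    x, y = a * X // M, b * Y // N    # tile size in pixels
    u = (c % cols) * x + x // 2
    v = (c // cols) * y + y // 2
    return u, v

def back_project(u, v, d, fx, fy, cx, cy):
    # Camera-frame 3D point of pixel (u, v) observed at depth d
    # (assumed pinhole equivalent of the camera's own deprojection).
    return ((u - cx) * d / fx, (v - cy) * d / fy, d)
```

A further rigid-body transform (hand-eye calibration) would map the camera-frame point into the robot coordinate system, as the text describes.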
Further, the priority rule and the slag dragging sequence of the slag dragging area in the step S4 are carried out according to the following steps:
according to the influence of accumulation of zinc slag in each area of slag dragging on the quality of strip steel, the area of slag dragging is divided into 5 types according to the area 1-area 5, and the priority of each slag dragging area is sequentially increased from the area 1 to the area 5;
5 empty lists are newly built, and the 5 lists respectively correspond to five priorities from the area 1 to the area 5; inputting a two-dimensional coordinate of the center of a certain thick slag picture to obtain a square area of an empty list where the picture is located, namely the slag-dragging priority, and storing the picture sequence number into a corresponding list; circularly traversing all the thick slag pictures to obtain slag salvaging priorities of all the thick slag pictures, and storing the slag salvaging priorities in corresponding 5 lists; and sequentially combining the 5 lists according to the reverse order to obtain the slag dragging sequence of all thick slag in a sequence from high to low.
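The list handling above amounts to a bucket sort; a sketch, where region_of is an assumed lookup that maps a picture-centre coordinate to its zone 1–5:

```python
def drag_order(thick_pictures, region_of):
    # thick_pictures: list of (serial_number, centre_xy) pairs.
    # region_of(centre_xy) -> zone 1..5; zone 5 has the highest priority.
    buckets = {zone: [] for zone in range(1, 6)}   # five empty lists
    for serial, centre in thick_pictures:
        buckets[region_of(centre)].append(serial)
    order = []
    for zone in range(5, 0, -1):                   # merge in reverse order
        order.extend(buckets[zone])
    return order                                   # high priority first
```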
Further, in the step S4, area 5 covers the slag-driving channels B and D on the two sides of the furnace nose of the zinc pot; areas 1-4 lie in the slag-collecting area C behind the furnace nose; the depth camera 1 is arranged above the slag-collecting area C.
Further, in the step S5, the controller computes the rotation angle of each joint of the robot arm from the three-dimensional centroid coordinates of the thick slag and the dragging sequence obtained in step S4, sends the control signals, drives the arm to scoop up all thick slag in sequence, and dumps the dross into the slag hopper beside the zinc pot.
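The patent leaves the arm's kinematics to the controller; as a purely illustrative stand-in for the joint-angle computation, here is the closed-form inverse kinematics of a planar two-link arm (the link lengths l1, l2 and the planar model are assumptions, not the patent's six-axis arm):

```python
import math

def two_link_ik(px, py, l1, l2):
    # Joint angles (th1, th2) placing a planar two-link arm's tip at
    # (px, py); returns None when the target is out of reach. A real
    # six-axis slag dragging arm needs a full 6-DOF solver instead.
    d2 = px * px + py * py
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1.0:
        return None
    th2 = math.acos(c2)
    th1 = math.atan2(py, px) - math.atan2(l2 * math.sin(th2),
                                          l1 + l2 * math.cos(th2))
    return th1, th2
```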
In the invention, because dross accumulation in different zones affects strip quality differently, the dragging area is divided into five zones (1, 2, 3, 4 and 5), with dragging priority increasing zone by zone. Under the dragging strategy, the zones are traversed from area 5 down to area 1, and any thick slag found in a zone is sorted first; its coordinate values are stored until all thick slag has been traversed. This method traverses all thick slag effectively and, with the help of manual experience, orders it by the zone priorities that reflect strip-quality impact, protecting strip quality to a degree and reducing surface defects.
It should be noted that this embodiment identifies and scoops out all thick slag present in the dragging area at a given moment; because zinc dross, and thick slag in particular, forms relatively slowly, all the algorithms run fast enough to scoop the thick slag before the dross distribution changes.
It should also be noted that the dragging interval can be set from the working schedule of the galvanizing site: once enough thick slag has formed, the program is restarted to recognize and scoop it out.
Since a certain thickness of dross must be kept in the zinc pot to prevent oxidation of the zinc bath, the dragging robot can work intermittently; the vision system, however, must stay on to monitor the dross distribution, which also saves cost.
The method collects a color image and depth map of the zinc-pot dragging area with a depth camera, selects one frame as the sample image, cuts, identifies and classifies it, computes the centre of each thick-slag picture, locates the thick-slag centroid in space using the depth map, optimizes the robot's dragging sequence with the dedicated dragging strategy, and finally passes the thick-slag coordinates, the dragging sequence and related information to the controller, which drives the robot to drag the slag. Zinc-dross image acquisition: a depth camera captures and stores pictures of the zinc-pot dragging area. Zinc-dross image recognition: the dross picture is cut into small images; a training set is made and fed to a convolutional neural network to learn deep picture features, and classification yields the serial numbers of all thick-slag pictures. Thick-slag target positioning: the centre of each thick-slag picture is computed, and the spatial position of the thick-slag centroid is obtained from the camera's depth information. Dragging-strategy optimization: the thick slag is ordered by the proposed dragging strategy, and all thick slag in the zinc pot is scooped out. Robot control module: the robot is driven to scoop all thick slag following the dragging sequence and the corresponding centroid coordinate values.
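The modules recapped above can be tied together in one sensing-and-control loop. The following self-contained sketch runs on synthetic arrays; a mean-intensity threshold stands in for the CNN classifier of step S2, and every name, grid size and threshold here is an illustrative assumption, not the patent's implementation.

```python
import numpy as np

def classify_tile(tile):
    # Stand-in for the CNN classifier: a tile counts as "thick" slag
    # when its mean intensity exceeds a threshold (the real system
    # uses deep convolutional features instead).
    return "thick" if tile.mean() > 0.5 else "thin"

def slag_dragging_cycle(color, depth, rows=2, cols=3):
    # S1: color picture and depth map arrive as aligned arrays.
    h, w = color.shape
    th, tw = h // rows, w // cols
    found = []
    for c in range(rows * cols):          # tiles numbered left-right, top-down
        r, k = divmod(c, cols)
        tile = color[r * th:(r + 1) * th, k * tw:(k + 1) * tw]
        if classify_tile(tile) == "thick":            # S2
            u = k * tw + tw // 2                      # S3: tile-centre pixel
            v = r * th + th // 2
            found.append((c, u, v, float(depth[v, u])))
    return found                          # S4/S5: ordering and robot control follow

color = np.zeros((4, 6))
color[0:2, 0:2] = 1.0                     # synthetic thick slag in tile 0
depth = np.full((4, 6), 0.8)              # constant depth of 0.8 m
print([t[0] for t in slag_dragging_cycle(color, depth)])  # prints [0]
```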
A slag dragging robot system based on machine vision, characterized by comprising:
the zinc slag image acquisition module is used for acquiring a color picture and a depth map of a slag dragging area, preprocessing the color picture and then storing the color picture for image processing and coordinate transformation in subsequent steps;
the zinc slag image recognition module cuts the collected color pictures of the slag dragging area to form small pictures, extracts deep image features of the small pictures by adopting a convolutional neural network, classifies the small pictures after training, and obtains the serial number of the thick slag pictures;
the thick slag positioning module is used for determining the position of the zinc slag by combining the depth map and the center of the zinc slag picture, calculating the mass center coordinate of the thick slag picture and calculating the three-dimensional coordinate of the mass center of the thick slag by combining the depth map of the depth camera;
the slag dragging strategy optimization module gives a priority rule of a slag dragging area according to a zinc slag distribution rule, and sorts the thick slag pictures in sequence under the constraint of the rule to obtain a slag dragging sequence;
and the slag dragging robot control module controls the slag dragging robot to drag thick slag in sequence according to the slag dragging sequence and the three-dimensional coordinates of the mass center of the thick slag.
Furthermore, the zinc dross image acquisition module comprises a depth camera fixed on a base outside the dross dragging robot, and the depth camera comprises a color camera, a pair of laser transmitters and a receiver.
Furthermore, the slag dragging robot control module controls the mechanical arm to perform the dragging; the working range of the arm extends to the slag-collecting area C, and the depth camera is arranged above area C.
The invention discloses a slag salvaging method and a robot system based on machine vision, which identify and position a thick slag area influencing the quality of a galvanized plate through a vision technology, and then control a robot to sequentially salvage all thick slag according to priority; acquiring a color picture and a depth picture of a slag salvaging site by using a depth camera; classifying the cut small zinc slag images by using a convolutional neural network; calculating the three-dimensional coordinate of the mass center of the thick slag under a camera coordinate system by utilizing the specific depth information of the depth camera, and obtaining the three-dimensional coordinate of the mass center of the thick slag under a robot coordinate system after coordinate conversion; and then obtaining a slag dragging sequence according to the priority rule of the slag dragging area, and finally controlling the slag dragging robot to drag and pour all thick slag into a nearby slag bucket.
By applying machine vision and image processing to the slag dragging system, the method and system automatically identify and classify zinc dross and continuously iterate on the acquired data to obtain the optimal dragging sequence, reducing the dragging frequency and improving dragging efficiency. Compared with conventional robotic slag dragging systems, the machine-vision scheme determines the optimal dragging sequence from the zinc-dross distribution characteristics and the dragging strategy, cuts the zinc and power consumed by unnecessary dragging passes, and markedly improves both the autonomy and the working efficiency of the slag dragging robot.
Replacing manual dragging with robot dragging reduces the harm the harsh dragging environment does to workers and improves the quality of the galvanized sheet.
Compared with the prior art, the invention has the advantages and positive effects mainly reflected in the following aspects:
1) Compared with existing manual dragging, the system integrates an intelligent zinc-dross recognition method and has three advantages. Unmanned: it works autonomously, cutting the time operators spend in harsh environments (high temperature, strong noise, dust, strong magnetic radiation) and the risk those environments pose. Efficient: replacing labor with the robot system allows continuous 24-hour dragging, more than doubling dragging efficiency and raising the product-quality pass rate by more than 8%. Green: the power-to-output ratio of the dragging process is minimized; the robot's path planning follows set specifications, each pass removes the maximum amount of slag, and power loss is reduced.
2) An optimized dragging strategy. Existing dragging robots blindly cycle over the zinc pot along a fixed route, wasting energy and losing part of the liquid zinc, which is not environmentally friendly. Because dross accumulation in different zones affects galvanized-sheet quality to different degrees, the invention proposes a dedicated zone-priority dragging strategy for the zinc pot: first judge whether a zone contains thick slag, then scoop all thick slag in priority order. This reduces thick-slag accumulation in the galvanizing zone, improves sheet quality, and cuts unnecessary energy consumption.
Drawings
FIG. 1 is a flow chart of a slag dragging robot system based on machine vision;
FIG. 2 is a plan structure view (top view) of the slag dragging robot system based on machine vision;
FIG. 3 is a flow chart of the operation of the zinc dross image recognition module according to the embodiment of the invention;
FIG. 4 is a flowchart of the thick slag positioning module of the embodiment of the present invention;
FIG. 5 is a flowchart of a slag salvaging strategy optimization algorithm of the embodiment of the present invention;
fig. 6 is a slag-dragging priority region division diagram according to the embodiment of the present invention.
Detailed Description
The machine-vision-based slag dragging method and robot system of the invention, shown in figures 1-6, solve the low efficiency of the slag dragging robots of the related art.
The invention provides a slag dragging robot system based on machine vision; the system flow chart is shown in figure 1 and a plan view of the dragging system in figure 2. The system comprises a depth camera 1, the mechanical arm 2 of the slag dragging robot, and a PLC (programmable logic controller) 5. The working range of the arm 2 extends to the slag-collecting area C, the depth camera 1 is arranged above area C, and area C lies behind the furnace nose 3. Area A is the galvanizing zone, where the zinc dross 4 is mainly generated; areas B and D are dross-driving channels along which the dross-driving device pushes the dross to area C, the dross-collecting zone where the dross gathers. Since dross tends to accumulate in areas B, C and D, the invention designs an unmanned dragging system for the dragging areas B, C and D.
The image storage unit and the image preprocessing unit in the depth camera 1 and the controller 5 together form a zinc slag image acquisition module 51, and the controller 5 further comprises a zinc slag image recognition module 52, a thick slag positioning module 53, a slag salvaging strategy optimization module 54 and a slag salvaging robot control module 55.
The hardware of the image acquisition module 51 is the fixed depth camera 1; because the temperature at the dragging site is high, the depth camera 1 is generally mounted on a base outside the robot. An industrial depth camera can capture sharp on-site color pictures and the corresponding depth maps.
It should be noted that the zinc dross in the zinc-pot area can be divided by eye into thick dross and thin dross. Thick dross sinks when it accumulates excessively and is carried into the galvanizing zone by the flow of the zinc bath, harming galvanized-sheet quality. Thin dross is light and does not sink in the short term; by preventing direct contact between the zinc bath and the air, it even reduces zinc oxidation to a degree. The system therefore gives priority to scooping the thick-dross portion in the zinc pot.
The invention provides a slag dragging method based on machine vision, which comprises the following steps:
s1: the zinc dross image acquisition module 51 acquires a picture (color picture) and a depth map of a dross dragging area for image processing and coordinate transformation in subsequent steps;
s2: equally dividing the slag dragging area picture into small pictures, extracting deep image features of the small pictures by adopting a convolutional neural network, and classifying the small pictures after training to obtain the serial number of the thick slag picture.
S3: and calculating the coordinates of the mass center of the thick slag picture, and calculating the three-dimensional coordinates of the camera of the mass center of the thick slag by combining the depth map of the depth camera.
S4: according to the zinc slag distribution rule in actual production, giving a priority rule of a slag dragging area, and sequencing thick slag pictures in sequence under the constraint of the rule to obtain a slag dragging sequence.
S5: and according to the slag fishing sequence obtained in the previous step, the controller drives the robot to sequentially fish up the thick slag.
The invention provides an intelligent slag salvaging system based on machine vision; the flow chart of the system is shown in figure 1 and the plan view of the slag salvaging system is shown in figure 2. The autonomous machine-vision slag salvaging system comprises: 1) a zinc slag image acquisition module 51, 2) a zinc slag image recognition module 52, 3) a thick slag positioning module 53, 4) a slag salvaging strategy optimization module 54, and 5) a slag salvaging robot control module 55.
The implementation steps of each module are described in detail as follows:
1) The zinc dross image acquisition module 51 acquires the picture of the dross dragging field, the work flow chart of the zinc dross image recognition module is shown in fig. 3, and the specific implementation comprises the following steps:
collecting a zinc slag color image and a corresponding depth map by using a depth camera 1 in a slag dragging area;
the depth map is saved.
And denoising the color picture and extracting the region of interest.
The color pictures are saved with the extension ".jpg" as experimental material.
It should be noted that the depth camera 1 used in the present invention includes a color camera and a pair of laser transmitters and receivers. The depth camera measures the linear distance between a target pixel and the camera by adopting a flight time method.
2) The zinc dross image recognition module 52 classifies the zinc dross images, and the specific flow chart is shown in fig. 4, and the specific implementation comprises the following steps:
Let the picture of the slag salvaging site measure X × Y pixels, let the slag salvaging area be M centimeters long and N centimeters wide, and let the actuating mechanism (scoop) of the slag salvaging robot measure a × b centimeters. By conversion, each scoop then corresponds to (aX/M) × (bY/N) pixels in the picture.

The python image library PIL is introduced and each picture is cut equally into (M/a) × (N/b) small pictures; after segmentation, the segmented pictures are named by their picture serial number, in order from left to right and from top to bottom.
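As a minimal sketch of this tiling step, the crop boxes and serial numbers can be computed as below; the function returns PIL-style (left, upper, right, lower) boxes that could be passed to `Image.crop`, and the concrete values of X, Y, M, N, a, b used in the usage note are illustrative assumptions, not values given in the patent.

```python
def tile_boxes(X, Y, M, N, a, b):
    """Split an X-by-Y pixel picture of an M-by-N cm slag area into tiles
    whose footprint matches an a-by-b cm scoop.  Returns (serial, box)
    pairs, numbered left-to-right, top-to-bottom; each box is a PIL-style
    (left, upper, right, lower) crop rectangle in pixels."""
    tile_w = a * X // M            # scoop width in pixels, aX/M
    tile_h = b * Y // N            # scoop height in pixels, bY/N
    cols, rows = M // a, N // b    # (M/a) x (N/b) small pictures in total
    boxes = []
    for r in range(rows):
        for c in range(cols):
            serial = r * cols + c  # name used for the cropped file
            boxes.append((serial,
                          (c * tile_w, r * tile_h,
                           (c + 1) * tile_w, (r + 1) * tile_h)))
    return boxes
```

For example, a 1200 × 800 pixel picture of a 120 × 80 cm area with a 20 × 20 cm scoop yields 6 × 4 = 24 tiles of 200 × 200 pixels each.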
A training set and a test set of thick slag and thin slag pictures are made, and deep image features are extracted from the pictures and classified by a convolutional neural network: the convolutional layers extract the deep features of the picture, the pooling layers reduce the data dimensionality, the fully-connected layer combines the extracted features nonlinearly, and the output layer outputs the classification label of the picture;

finally, the serial numbers of all pictures classified as thick slag are counted and saved.
It should be noted that before the network is trained and deployed in actual production, the training set should contain zinc dross pictures from as many working conditions as possible, so that the network acquires good generalization performance. In actual production, one frame shot by the camera at a given time, cut into small pictures, serves as the test set.
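The patent does not give the network architecture; as one hedged sketch of the conv → pool → fully-connected → output pipeline described above, a PyTorch model might look as follows. The layer counts, channel widths and the 64 × 64 tile size are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SlagCNN(nn.Module):
    """Minimal sketch of the tile classifier of step 2: convolutional
    layers extract deep features, max-pooling reduces dimensionality,
    a fully-connected head combines the features nonlinearly, and the
    output layer yields a thick-slag / thin-slag label."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 2),          # two classes: thick slag, thin slag
        )

    def forward(self, x):              # x: (batch, 3, 64, 64) RGB tile
        return self.head(self.features(x))
```

The serial numbers of tiles whose predicted label is "thick" would then be collected as the module's output.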
3) The thick slag positioning module 53 performs spatial positioning on the thick slag picture, and a positioning flow chart is shown in fig. 4, which specifically comprises the following steps:
it should be noted that the module locates the mass center of the thick slag, which corresponds to the center of the thick slag picture, so that the whole thick slag can be accurately and completely fished.
The center pixel coordinate of the picture is calculated from the file name of the thick slag picture, i.e. the picture serial number: if the serial number of the picture is c (numbering the pictures from 0, left to right and top to bottom), the centroid pixel coordinate of the picture is

((c % (M/a)) · aX/M + aX/(2M), (c / (M/a)) · bY/N + bY/(2N))

where X, Y, M, N, a and b are given above, "%" is the remainder of integer division and "/" is the quotient of integer division.

The centroid pixel coordinates of each thick slag picture are stored.
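The centroid computation above can be sketched directly; 0-based serial numbering is an assumption, and the sample dimensions in the usage note are illustrative, not from the patent.

```python
def tile_centroid(c, X, Y, M, N, a, b):
    """Centre pixel of the tile with serial number c (0-based, numbered
    left-to-right, top-to-bottom).  '%' gives the column index and
    integer '/' the row index, as in step 3 of the method."""
    tile_w, tile_h = a * X // M, b * Y // N   # tile size in pixels
    cols = M // a                             # tiles per row
    col, row = c % cols, c // cols
    return (col * tile_w + tile_w // 2, row * tile_h + tile_h // 2)
```

With the 1200 × 800 pixel / 120 × 80 cm / 20 × 20 cm example, tile 7 (second row, second column) has its centroid at pixel (300, 300).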
The depth camera acquires the depth map at the same time as the color RGB picture. The depth map is first aligned with the color picture, and the camera's built-in function MapColorFrameToCameraSpace is called to obtain, for each pixel of the slag-dragging site picture, the corresponding three-dimensional coordinates in the camera coordinate system; the three-dimensional camera coordinates of each thick slag centroid follow directly.

The thick slag centroid coordinates obtained above in the camera coordinate system are then converted: a spatial coordinate system is established with the slag dragging robot base as origin, and the coordinates of the target in the camera-origin system are transformed into coordinates in this robot-base system, giving the three-dimensional position of the thick slag centroid in the robot coordinate system.
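The camera-to-robot conversion is a rigid-body transform; a minimal sketch follows, where the rotation R and translation t describing the camera pose in the robot-base frame would come from a hand-eye calibration and are assumptions here, not values given in the patent.

```python
import numpy as np

def camera_to_robot(p_cam, R, t):
    """Map a 3-D point from the camera coordinate system into the
    robot-base coordinate system: p_robot = R @ p_cam + t.
    R is the 3x3 rotation and t the 3-vector translation of the
    camera frame expressed in the robot-base frame."""
    return R @ np.asarray(p_cam, dtype=float) + np.asarray(t, dtype=float)
```

For example, with R = I and t = (1, 2, 3) the camera origin maps to the point (1, 2, 3) in the robot frame.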
4) The slag salvaging strategy optimization module 54 gives a slag salvaging sequence according to a pre-defined slag salvaging strategy optimization algorithm, a flow chart of the optimization algorithm is shown in fig. 5, and the specific implementation steps are as follows:
First, a slag-removing priority division diagram (figure 6) is derived from the industrial hot-galvanizing slag-dragging path and the layout of figure 2: the thick-frame blocks 1-4 in the diagram correspond to the slag collecting area C in figure 2, and the thick-frame blocks labeled 5 correspond to the dross-driving channel areas B and D respectively. Each thin-frame small square in figure 6 corresponds to one small zinc dross picture of step 2; the size of the division must be chosen with both the zinc pot structure and the working space of the mechanical arm in mind, and the areas of the thick-frame blocks need not be equal. To keep accumulated dross in the channels from degrading the galvanizing quality, the zinc dross in the dross-driving channels (blocks 5) is fished first, then the dross in the collecting area (blocks 1-4); the slag-dragging priority is determined by the block number, and the higher the number, the higher the priority.
The algorithm of figure 5 runs as follows: five empty lists are created, corresponding to the five priorities 1-5. For each thick slag picture, the two-dimensional coordinate of its center is used to find the square block containing it, i.e. its slag-dragging priority, and its serial number is stored in the corresponding list. After all thick slag pictures have been traversed in a loop, the five lists are concatenated in reverse order, yielding the slag-dragging sequence of all thick slag from highest to lowest priority.
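The five-list algorithm above can be sketched as follows; `priority_of` stands in for the block lookup of figure 6 and is a hypothetical callback, since the patent defines the block geometry only pictorially.

```python
def scoop_order(tiles, priority_of):
    """Five-list ordering of fig. 5.  `tiles` is a list of
    (serial, centre) pairs for the thick-slag pictures; `priority_of`
    maps a tile centre to its region number 1-5 (5 = dross-driving
    channel, highest priority).  Serials are bucketed by priority and
    the buckets are concatenated in reverse order, so the returned
    slag-dragging sequence runs from highest to lowest priority."""
    buckets = {p: [] for p in range(1, 6)}   # one empty list per priority
    for serial, centre in tiles:
        buckets[priority_of(centre)].append(serial)
    order = []
    for p in range(5, 0, -1):                # reverse order: 5 first
        order.extend(buckets[p])
    return order
```

For example, three tiles with priorities 5, 1 and 3 are returned in the order tile-0, tile-2, tile-1.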
It should be noted that the slag-salvaging strategy optimization algorithm yields a definite slag-salvaging sequence; compared with blind fishing or fixed-trajectory fishing, every piece of thick slag is fished in a targeted order, which saves time and energy and improves the quality of the galvanized sheet.
5) The slag salvaging robot control module 55 controls the mechanical arm 2 of the slag salvaging robot to fish out thick slag, and the concrete implementation steps are as follows:
according to the three-dimensional coordinates of the mass center of the thick slag obtained by the thick slag positioning module 53 and the slag fishing sequence obtained by the slag fishing strategy optimization module 54, the controller 5 sends out a control signal to drive the robot arm 2 to fish up all the thick slag, and the zinc slag is placed in a slag bucket beside the zinc pot.
It should be noted that this embodiment identifies and fishes out all the thick slag present in the slag-fishing area at a given time; since zinc slag, and thick slag in particular, forms slowly, the algorithms run fast enough to fish out the slag before its distribution changes.
It should be noted that the slag dragging interval can be defined by the working time of the galvanizing site, and when enough thick slag is generated, the program is started again to recognize the thick slag and drag the thick slag out.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (9)

1. A slag salvaging method based on machine vision is characterized in that a thick slag area influencing the quality of a galvanized plate is identified and positioned through a vision technology, and then a robot is controlled according to priority to sequentially salvage all thick slag; the method comprises the following steps:
s1: collecting a color picture and a depth map of the slag dragging area, preprocessing the color picture and then storing the color picture for image processing and coordinate transformation in the subsequent steps;
s2: cutting the collected color pictures of the slag salvaging area to form small pictures, extracting deep image features of the small pictures by adopting a convolutional neural network, and classifying the small pictures after training to obtain the serial number of the thick slag pictures;
in the step S2:
when the collected color pictures of the slag salvaging area are cut, the picture shot on site is assumed to measure X × Y pixels; given that the slag salvaging robot actuating mechanism measures a centimeters × b centimeters and the zinc pot slag salvaging area measures M centimeters × N centimeters, each scoop corresponds by conversion to (aX/M) × (bY/N) pixels in the picture, and each picture is cut into (M/a) × (N/b) equal small pictures; after segmentation, the segmented pictures are named by their picture serial numbers in order from left to right and from top to bottom;
making a training set and a test set of thick slag and thin slag pictures, extracting deep image features from the small pictures through a convolutional neural network and classifying them, wherein the deep features of the pictures are extracted by the convolutional layers, the data dimensionality is reduced by the pooling layers, the extracted features are combined nonlinearly by the fully-connected layer, and the classification label of the picture is output by the output layer;
after training is finished, testing the test set by using the trained network model; classifying the pictures in the test set to obtain the serial numbers of all thick slag pictures and all thin slag pictures, and storing the serial numbers of all the pictures classified as the thick slag pictures;
s3: determining the position of the zinc dross by combining the depth map and the center of the zinc dross picture, calculating the coordinate of the mass center of the thick dross picture, and calculating the three-dimensional coordinate of the mass center of the thick dross by combining the depth map of the depth camera;
s4: according to the distribution rule of the zinc slag, giving a priority rule of a slag dragging area, and sequencing the thick slag pictures in sequence under the constraint of the rule to obtain a slag dragging sequence;
s5: and controlling the slag salvaging robot to sequentially salvage the thick slag according to the slag salvaging sequence and the camera three-dimensional coordinates of the mass center of the thick slag.
2. The slag dragging method based on machine vision according to claim 1, characterized in that in the step S1, a depth camera fixed on a base outside the slag dragging robot performs the collection, the depth camera measuring the linear distance between a target pixel and the camera by the time-of-flight method; collecting a zinc slag color picture and a corresponding depth map with the depth camera; saving the depth map; denoising the color picture, extracting the region of interest, and saving the color picture with the extension ".jpg".
3. The machine vision-based slag salvaging method according to claim 1, wherein the step S3 is mainly to position the center of mass of the thick slag according to the center of the thick slag picture, so that the whole thick slag can be accurately and completely salvaged:
according to the segmentation result, the center pixel coordinate of the picture is calculated from the name of the thick slag picture, i.e. the picture serial number; if the picture serial number is c, the centroid pixel coordinate of the picture is ((c % (M/a)) · aX/M + aX/(2M), (c / (M/a)) · bY/N + bY/(2N)), wherein "%" is the remainder of integer division and "/" is the quotient of integer division; the centroid pixel coordinates of each thick slag picture are stored;
analyzing the segmentation result, determining the number of the thick slag, calculating the mass center coordinate of each thick slag and storing the mass center coordinate;
and after the two-dimensional coordinates of each thick slag centroid are obtained, combining the depth of the corresponding pixel point in the depth map, the depth map is aligned with the color picture, the depth camera's built-in function is called to calculate the three-dimensional coordinates of each thick slag centroid in the camera coordinate system, and these coordinates are then converted to obtain the three-dimensional position of the thick slag centroid in the robot coordinate system.
4. The slag dragging method based on machine vision according to claim 1, characterized in that the priority rule and the slag dragging sequence of the slag dragging area in the step S4 are performed as follows:
according to the influence of accumulation of zinc slag in each region of slag dragging, the slag dragging region is set to be 5 types of regions according to the regions 1-5, and the priority of each slag dragging region is sequentially increased from the region 1 to the region 5;
5 empty lists are created, the 5 lists corresponding to the five priorities of areas 1 to 5; the two-dimensional coordinate of the center of a thick slag picture is input to obtain the square area in which the picture lies, i.e. its slag dragging priority, and the picture serial number is stored in the corresponding list; all thick slag pictures are traversed in a loop to obtain their slag salvaging priorities, stored in the corresponding 5 lists; the 5 lists are then concatenated in reverse order to obtain the slag dragging sequence of all thick slag from highest to lowest priority.
5. The slag dragging method based on machine vision according to claim 4, characterized in that in the step S4, areas 5 are arranged in the dross-driving channel areas B and D on the two sides of the zinc pot furnace nose; areas 1-4 are distributed over the slag collecting area C behind the furnace nose; and the depth camera 1 is arranged above the slag collecting area C.
6. The slag dragging method based on machine vision as claimed in claim 1, wherein in step S5, the controller obtains the rotation angle of each joint of the robot arm according to the three-dimensional coordinates of the mass center of the thick slag and the slag dragging sequence obtained in step S4, sends out a control signal, drives the robot arm to drag up all the thick slag in sequence, and puts the zinc slag into a slag bucket beside the zinc pot.
7. A slag dragging robot system for dragging slag according to the slag dragging method based on machine vision of any one of claims 1-6, which is characterized by comprising:
the zinc slag image acquisition module is used for acquiring a color picture and a depth map of a slag dragging area, preprocessing the color picture and then storing the preprocessed color picture for image processing and coordinate transformation in the subsequent steps;
the zinc slag image recognition module cuts the collected color pictures of the slag dragging area to form small pictures, extracts deep image features of the small pictures by adopting a convolutional neural network, classifies the small pictures after training, and obtains the serial number of the thick slag pictures;
the thick slag positioning module is used for determining the position of the zinc slag by combining the depth map and the center of the zinc slag picture, calculating the mass center coordinate of the thick slag picture and calculating the three-dimensional coordinate of the mass center of the thick slag by combining the depth map of the depth camera;
the slag dragging strategy optimization module gives a priority rule of a slag dragging area according to a zinc slag distribution rule, and the thick slag pictures are sequentially sequenced under the constraint of the rule to obtain a slag dragging sequence;
and the slag dragging robot control module controls the slag dragging robot to drag thick slag in sequence according to the slag dragging sequence and the three-dimensional coordinates of the mass center of the thick slag.
8. The slag salvaging robot system of claim 7, wherein the zinc slag image collecting module comprises a depth camera fixed on a base outside the slag salvaging robot, the depth camera comprises a color camera and a pair of laser emitter and receiver.
9. The slag salvaging robot system as claimed in claim 7, wherein the slag salvaging robot control module controls the mechanical arm to carry out slag salvaging, the working range of the mechanical arm extends to the slag salvaging area C, and the depth camera is arranged at the upper part of the slag collecting area C.
CN202010847127.2A 2020-08-21 2020-08-21 Slag dragging method based on machine vision and robot system Active CN112011750B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010847127.2A CN112011750B (en) 2020-08-21 2020-08-21 Slag dragging method based on machine vision and robot system


Publications (2)

Publication Number Publication Date
CN112011750A CN112011750A (en) 2020-12-01
CN112011750B true CN112011750B (en) 2023-02-17

Family

ID=73505419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010847127.2A Active CN112011750B (en) 2020-08-21 2020-08-21 Slag dragging method based on machine vision and robot system

Country Status (1)

Country Link
CN (1) CN112011750B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN212495386U (en) * 2020-08-12 2021-02-09 烟台盛利达工程技术有限公司 Slag dragging system based on machine vision
CN113337789B (en) * 2021-04-29 2023-05-02 武汉钢铁有限公司 Hot-dip plating slag salvaging control method and device
CN113802077A (en) * 2021-08-16 2021-12-17 武汉钢铁有限公司 Zinc pot surface slag treatment method and system
CN116766210B (en) * 2023-08-12 2023-12-01 中天智能装备(天津)有限公司 Double-robot collaborative slag-fishing track planning method for large-scale anode smelting pool

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105714228A (en) * 2014-12-04 2016-06-29 重庆聆益机械有限公司 Zinc residue removing method used for galvanization process
CN110796046A (en) * 2019-10-17 2020-02-14 武汉科技大学 Intelligent steel slag detection method and system based on convolutional neural network
CN111101085A (en) * 2020-01-06 2020-05-05 宝钢湛江钢铁有限公司 Full-automatic slag salvaging system and method for zinc liquid surface scum of zinc pot
CN111394671A (en) * 2020-03-19 2020-07-10 武汉钢铁有限公司 Intelligent cooperative deslagging method and system for zinc pot




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant