CN114581859B - Converter slag discharging monitoring method and system - Google Patents

Converter slag discharging monitoring method and system

Info

Publication number
CN114581859B
CN114581859B
Authority
CN
China
Prior art keywords
module
stream
fusion
convolution
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210489189.XA
Other languages
Chinese (zh)
Other versions
CN114581859A (en)
Inventor
李江昀
皇甫玉彬
申昊然
张议夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology Beijing USTB
Original Assignee
University of Science and Technology Beijing USTB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology Beijing USTB filed Critical University of Science and Technology Beijing USTB
Priority to CN202210489189.XA priority Critical patent/CN114581859B/en
Publication of CN114581859A publication Critical patent/CN114581859A/en
Application granted granted Critical
Publication of CN114581859B publication Critical patent/CN114581859B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P10/00Technologies related to metal processing
    • Y02P10/25Process efficiency

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method and a system for monitoring converter slag discharging, belonging to the field of metallurgical equipment and processes. The method comprises: collecting pictures at different inclination angles during the converter tapping process and generating a picture data set after labeling the pixel points; constructing an image double-flow segmentation model comprising a Stem module, a first stream convolution module, a second stream P-E module, Transformer model-based fusion modules, first stream down-sampling modules, second stream down-sampling modules and a segmentation head module; training and validating the image double-flow segmentation model; and capturing real-time tapping pictures of the converter on site, preprocessing them, and inputting them into the trained model to obtain the real-time position of the steel slag. Through repeated crossing and combination of the splitting and fusion mechanisms, the invention enhances the expression capability of the model; the Transformer model requires no pre-trained weights, and the model structure can be adjusted flexibly; the steel slag is identified accurately in real time, which improves monitoring precision and ensures the safety of operators.

Description

Converter slag discharging monitoring method and system
Technical Field
The invention belongs to the field of metallurgical technology and equipment, and particularly relates to a converter slag tapping monitoring method and system.
Background
Converter steelmaking uses molten iron, scrap steel and ferroalloy as the main raw materials and completes the steelmaking process in a converter by means of the physical heat of the molten iron and the heat generated by chemical reactions among its components, without external energy. During smelting, steel slag forms on top of the molten steel; during tapping, the slag is held back so that an excessively thick upper slag layer does not affect the quality of the steel in the ladle, a process generally called slag blocking or slag discharging. At a steelmaking production site, reducing the amount of slag discharged during converter tapping is an important factor in improving molten steel quality. If the slag amount is too large, manganese, phosphorus and silicon reversion readily occur in the molten steel, making its composition difficult to control and prone to inclusions, consuming more deoxidizer in subsequent refining and increasing smelting cost. In addition, if the converter inclination angle is too large during slag discharging, the steel slag not only flows into the ladle but, if the overflow is excessive, also splashes onto the ladle car rails and damages surrounding production equipment. It is therefore necessary to monitor and handle the converter slag in real time and to reduce the slag amount as much as possible.
In the prior art, slag-off monitoring methods at steelmaking sites generally comprise manual observation, sensor methods and image processing methods. When the converter slag-off condition is monitored by operator observation, the monitoring quality depends on the operators' state and experience, the flexibility is poor, and chemical substances generated during production can harm the human body. Sensor methods are strongly affected by the site environment, have poor interference resistance, are prone to failure, and require costly maintenance. Image processing methods comprise traditional image processing and deep-neural-network-based image processing. Traditional image processing algorithms suffer because the steelmaking environment is complex and the acquired pictures are noisy; their anti-interference capability is poor and their precision low, failing to meet actual requirements. Existing deep-neural-network-based methods extract global information poorly in complex steelmaking scenes, so the recognition results are unsatisfactory; in particular, when the steel slag is close to the furnace mouth, mis-segmentation easily occurs. In addition, high-precision network models have high complexity and poor real-time performance, failing to meet real-time requirements.
Disclosure of Invention
In view of this, the embodiment of the invention provides a method and a system for monitoring converter slag tapping, so as to improve the real-time performance and accuracy of slag tapping monitoring.
In order to achieve the above purpose, the embodiment of the invention adopts the following technical scheme:
in a first aspect, an embodiment of the present invention provides a method for monitoring converter slag tapping, including the following steps:
step S1, acquiring pictures at different inclination angles in the converter tapping process, wherein the pictures cover the whole process of tapping, and each picture at least comprises a steel slag image of a converter mouth;
step S2, labeling each pixel point in the picture, respectively labeling the pixel points as four types of segmentation labels of a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture;
step S3, generating a picture data set from all the pictures marked with the labels, and dividing the picture data set into a training set and a verification set according to a preset proportion;
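The split in step S3 can be illustrated with a minimal sketch; the patent does not specify the preset proportion or any tooling, so the `split_dataset` helper, the 80/20 ratio, and the record layout below are assumptions for illustration only.

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Split labeled records into a training set and a verification set
    by a preset proportion, as described in step S3 (ratio is assumed)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

# Hypothetical records: (picture path, label path, converter tilt angle),
# the label being bound to the inclination angle as in step S2.
records = [(f"img_{i}.png", f"mask_{i}.png", 60 + i) for i in range(10)]
train_set, val_set = split_dataset(records, train_ratio=0.8)
```

The fixed seed merely keeps the sketch reproducible; in practice the proportion is chosen per the patent's "preset proportion".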
step S4, constructing an image double-stream segmentation model, wherein the image double-stream segmentation model comprises a Stem module, a first stream convolution module, a second stream P-E module, at least two Transformer model-based fusion modules, at least two first stream down-sampling modules, at least two second stream down-sampling modules and a segmentation head module; the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are equal in number, the number of fusion modules is the maximum number of stages, and the fusion modules, first stream down-sampling modules and second stream down-sampling modules are arranged sequentially by stage; the input ends of the first stream convolution module and the second stream P-E module are both connected with the Stem module, and their output ends are both connected with the first-stage fusion module; the output end of each fusion module is connected with both the first stream down-sampling module and the second stream down-sampling module of the same stage; except for the first-stage fusion module, the input end of each fusion module is connected with both the first stream down-sampling module and the second stream down-sampling module of the previous stage; the last-stage first stream down-sampling module and second stream down-sampling module are both connected with the segmentation head module;
step S5, training and verifying the image double-flow segmentation model by adopting a training set and a verification set to obtain a mature image double-flow segmentation model;
and step S6, capturing a real-time tapping picture of the converter on site, preprocessing the picture, inputting a mature image double-flow segmentation model, and outputting a segmentation result to obtain the real-time monitoring positions of the steel slag, the furnace mouth and the inner wall of the furnace.
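The dimension flow implied by steps S4 to S6 can be traced with plain shape arithmetic. This is a bookkeeping sketch only, not the patent's implementation: it assumes (per the embodiments below) that the Stem halves the resolution, that each fusion module preserves shapes, and that each down-sampling stage halves the resolution while doubling the channel and sequence dimensions; the function name and example values (H=W=512, C=64, D=96, N=4) are hypothetical.

```python
def dual_stream_shapes(H, W, C, D, N):
    """Trace tensor sizes through the double-stream model of step S4:
    Stem gives C x H/2 x W/2; each of the N stages keeps shapes in the
    fusion module, then halves resolution and doubles channels (first
    stream) or sequence dimension (second stream) when down-sampling."""
    h, w = H // 2, W // 2            # after the Stem module
    c, d = C, D
    stages = []
    for _ in range(N):
        # fusion module: shapes unchanged (c x h x w and d x (h*w))
        h, w = h // 2, w // 2        # down-sampling halves resolution
        c, d = 2 * c, 2 * d          # channel / sequence dims doubled
        stages.append(((c, h, w), (d, h * w)))
    return stages
```

For example, with H=W=512, C=64, D=96 and N=4 stages, the final first-stream feature map comes out at 1024 × 16 × 16, consistent with the closed form 2^N·C × H/2^(N+1) × W/2^(N+1) given in the preferred embodiment below.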
As a preferred embodiment of the present invention, the fusion module includes at least one group of fusion unit and shunting unit; wherein:
each fusion unit comprises a convolution submodule, a lightweight Transformer submodule and a splicing fusion submodule, wherein the convolution submodule comprises a continuous convolution layer and/or a residual convolution layer, a batch normalization layer and a ReLU activation function layer;
each shunting unit comprises a 1 × 1 convolution submodule and a P-E submodule, wherein the 1 × 1 convolution submodule comprises a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E submodule comprises a feature-map dimension conversion layer and a linear mapping layer;
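The P-E submodule's two layers (dimension conversion followed by linear mapping) can be sketched in a few lines of numpy. This is an illustrative reading of the embodiment, not the patent's code: the `patch_embed` name is hypothetical, and `weight` stands in for the learned projection of the linear mapping layer.

```python
import numpy as np

def patch_embed(feature_map, weight):
    """Sketch of the P-E submodule: the dimension conversion layer
    flattens a C x H x W feature map into an (H*W) x C sequence, and
    the linear mapping layer projects each position from C to D."""
    C, H, W = feature_map.shape
    seq = feature_map.reshape(C, H * W).T      # (H*W) x C sequence
    return seq @ weight                        # (H*W) x D tokens

rng = np.random.default_rng(0)
fmap = rng.standard_normal((64, 8, 8))         # C=64, H=W=8 (assumed)
W_proj = rng.standard_normal((64, 96))         # D=96 (assumed)
tokens = patch_embed(fmap, W_proj)             # 64 tokens of dimension 96
```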
in each fusion unit, the input end of the convolution submodule is connected with the first stream convolution module, and its output end is connected with the splicing fusion submodule; the input end of the lightweight Transformer submodule is connected with the second stream P-E module, and its output end is connected with the splicing fusion submodule; the output end of the splicing fusion submodule is connected with the shunting unit, and the shunting unit is connected either with the next group of convolution submodule and lightweight Transformer submodule in the same fusion module, or with the first stream down-sampling module and second stream down-sampling module of the same stage.
As a preferred embodiment of the present invention, the Stem module in step S4 comprises a convolution layer with kernel size 7 × 7 and stride 2, a batch normalization layer, and a ReLU activation function; the first stream convolution module comprises a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function; the second stream P-E module comprises a feature-map dimension conversion layer and a linear mapping layer.
As a preferred embodiment of the present invention, the first stream down-sampling module comprises a max pooling layer and a 1 × 1 convolution layer with stride 1, and the second stream down-sampling module comprises a sequence-feature dimension conversion layer, a max pooling layer, a 1 × 1 convolution layer with stride 1, and a feature-map dimension conversion layer; the segmentation head module comprises a sequence-feature dimension conversion layer, a splicing merging layer, an up-sampling layer, a 1 × 1 convolution layer with stride 1, and a softmax (normalized exponential) layer.
As a preferred embodiment of the present invention, in step S5, assume that the height of the input picture is H, the width is W, C is the base channel dimension of the model, and D is the base sequence dimension of the model; the image double-flow segmentation model comprises N stages of fusion modules, first stream down-sampling modules and second stream down-sampling modules. The training and validation process is as follows:
step S51, after the picture is input into the Stem module, the size of the output feature map is C × H/2 × W/2;
step S52, the feature map is split into two streams: the first stream passes through the 1 × 1 convolution module to obtain a feature map of size C × H/2 × W/2, and the second stream passes through the P-E module to obtain sequence features of size D × (HW/4);
step S53, the feature map output by the first stream convolution module and the sequence features output by the second stream P-E module are input into the first-stage fusion module simultaneously; through the fusion of the first-stage fusion module, the feature extraction capability of the model is enhanced, yielding a feature map of size C × H/2 × W/2 and sequence features of size D × (HW/4);
step S54, the feature map of size C × H/2 × W/2 and the sequence features of size D × (HW/4) are input into the first stream down-sampling module and the second stream down-sampling module, respectively; in the first stream down-sampling module, the resolution of the first-stream feature map is halved and the number of channels doubled by a max pooling layer and a 1 × 1 convolution layer with stride 1, giving a feature map of size 2C × H/4 × W/4; in the second stream down-sampling module, the second-stream sequence features first pass through a sequence-feature dimension conversion layer to obtain a feature map of size D × H/2 × W/2; the resolution is then halved and the channel number doubled by a max pooling layer and a 1 × 1 convolution layer with stride 1, giving a feature map of size 2D × H/4 × W/4, which a feature-map dimension conversion layer finally converts into sequence features of size 2D × (HW/16);
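The resolution-halving in step S54 comes from the max pooling layer; a minimal numpy sketch of 2 × 2 max pooling with stride 2 is shown below (the 1 × 1 convolution that doubles the channel count is deliberately omitted, and the function name is hypothetical).

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a C x H x W array, halving the
    resolution as in the first stream down-sampling module; assumes H
    and W are even."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)
y = max_pool_2x2(x)          # shape (2, 2, 2): resolution halved
```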
step S55, the cycle of fusion by each stage's fusion module and down-sampling by the first and second stream down-sampling modules continues until the final-stage fused feature map is split and enters the final-stage first and second stream down-sampling modules; the final first-stream output is a feature map of size 2^N·C × H/2^(N+1) × W/2^(N+1), and the final second-stream output is sequence features of size 2^N·D × (HW/(2^(N+1))^2);
step S56, the double-flow features output by the last stage converge at the segmentation head module, are merged by the splicing merging layer, and the segmentation result is output through the convolution layer, the up-sampling layer and the softmax layer;
step S57, the loss is computed between the segmentation result output by the model and the segmentation labels of the picture data set; according to the value of the loss function, the model parameters are updated by back-propagating gradients, and a mature image double-flow segmentation model is obtained after validation on the verification set.
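Step S57's loss between the four-class segmentation output and the pixel labels can be illustrated with a per-pixel softmax cross-entropy in numpy. The patent does not name its loss function, so this particular loss, the `pixel_cross_entropy` name, and the class-index convention (0..3 for background, steel slag, furnace inner wall, furnace mouth) are assumptions for illustration.

```python
import numpy as np

def pixel_cross_entropy(logits, labels):
    """Per-pixel softmax cross-entropy between a (classes x H x W)
    logit map and an (H x W) integer label map with values 0..3."""
    K, H, W = logits.shape
    z = logits - logits.max(axis=0, keepdims=True)   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    # pick the log-probability of each pixel's true class
    picked = log_probs[labels, np.arange(H)[:, None], np.arange(W)[None, :]]
    return -picked.mean()

logits = np.zeros((4, 2, 2))            # uniform prediction over 4 classes
labels = np.array([[0, 1], [2, 3]])
loss = pixel_cross_entropy(logits, labels)   # log(4) for a uniform output
```

In training, the gradient of such a loss is back-propagated to update the model parameters, as the step describes.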
As a preferred embodiment of the present invention, the fusion performed by the fusion modules in steps S53 and S55 comprises the following operations:
the feature map and the sequence features enter a fusion unit in the stage-i fusion module; the first stream passes through the convolution submodule to obtain a feature map of size 2^(i-1)·C × H/2^i × W/2^i, and the second stream passes through the lightweight Transformer submodule to obtain sequence features of size 2^(i-1)·D × (HW/(2^i)^2); the first-stream feature map and the second-stream features then enter the splicing fusion submodule simultaneously, in which a sequence-feature dimension conversion layer converts the second-stream sequence features of size 2^(i-1)·D × (HW/(2^i)^2) into a feature map of size 2^(i-1)·D × H/2^i × W/2^i, which is spliced with the first-stream feature map along the channel dimension to obtain a fused feature map of size 2^(i-1)·(C+D) × H/2^i × W/2^i; the fused feature map is input into the shunting unit, in which the 1 × 1 convolution submodule produces a first-stream feature map of size 2^(i-1)·C × H/2^i × W/2^i and the P-E submodule produces second-stream sequence features of size 2^(i-1)·D × (HW/(2^i)^2);
the first stream feature map and the second stream sequence feature enter a next group of fusion unit and shunt unit or a first stream down-sampling module and a second stream down-sampling module at the same stage; and if entering the next group of the fusion unit and the shunting unit, repeating the operation of the fusion unit and the shunting unit.
As a preferred embodiment of the present invention, before the training in step S51, the pictures are augmented by random horizontal flipping, random vertical flipping, random multi-scale transformation, random rotation and/or MixUp transformation, and the corresponding labels are transformed in the same way.
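The key property of this augmentation is that picture and label transform together. A minimal numpy sketch of the flip part follows; the `random_flip` helper and the 0.5 probabilities are assumptions, and the multi-scale, rotation and MixUp transforms are omitted.

```python
import numpy as np

def random_flip(picture, label, rng):
    """Random horizontal and vertical flips applied identically to the
    picture (C x H x W) and its segmentation label (H x W), so each
    pixel stays aligned with its class label."""
    if rng.random() < 0.5:                       # horizontal flip
        picture, label = picture[..., ::-1], label[..., ::-1]
    if rng.random() < 0.5:                       # vertical flip
        picture, label = picture[:, ::-1, :], label[::-1, :]
    return picture, label

rng = np.random.default_rng(3)
img = np.arange(1 * 2 * 2).reshape(1, 2, 2)      # toy 1-channel picture
mask = np.arange(4).reshape(2, 2)                # toy label, same layout
img2, mask2 = random_flip(img, mask, rng)
```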
In a second aspect, an embodiment of the present invention further provides a converter slag tapping monitoring system, where the monitoring system includes: a data acquisition subsystem, an image double-flow segmentation model subsystem, and a real-time picture acquisition and monitoring result output subsystem; wherein:
the data acquisition subsystem includes: the system comprises a historical picture acquisition module, a segmentation label labeling module and a data set generation module; the historical picture acquisition module is used for acquiring pictures at different inclination angles in the converter tapping process, the pictures cover the complete process of tapping, and each picture at least comprises a steel slag image of a converter mouth; the segmentation label marking module is used for marking each pixel point in the picture, respectively marking the pixel points as segmentation labels of four categories including a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture; the data set generating module is used for generating picture data sets from all the pictures marked with the labels and dividing the picture data sets into a training set and a verification set according to a preset proportion;
the image double-flow segmentation model subsystem is used for providing an image double-flow segmentation model, completing training and verification and obtaining a mature image double-flow segmentation model; wherein the image dual-stream segmentation model comprises: the system comprises a Stem module, a first stream convolution module, a second stream P-E module, at least two fusion modules, at least two first stream down-sampling modules, at least two second stream down-sampling modules and a segmentation head module, wherein the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are the same in number, the number of the fusion modules is the maximum number of stages, and the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are sequentially arranged according to the number of stages; the input ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the Stem module, and the output ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the first-stage fusion module; the output end of the fusion module is simultaneously connected with a first stream down-sampling module and a second stream down-sampling module in the same stage; except the first-stage fusion module, the input ends of the fusion modules of other stages are simultaneously connected with the first-stage down-sampling module and the second-stage down-sampling module of the previous stage; the last stage of the first stream down-sampling module and the second stream down-sampling module are simultaneously connected with the segmentation head module;
the real-time picture acquisition and monitoring result output subsystem is used for capturing a real-time tapping picture of a field converter, preprocessing the picture and sending the preprocessed picture to the image double-flow segmentation model subsystem; and receiving a segmentation result obtained by a mature image double-flow segmentation model, and outputting the real-time monitoring positions of the steel slag, the furnace mouth and the inner wall of the furnace.
As a preferred embodiment of the present invention, the fusion module includes at least one group of fusion unit and shunting unit; wherein:
each fusion unit comprises a convolution submodule, a lightweight Transformer submodule and a splicing fusion submodule, wherein the convolution submodule comprises a continuous convolution layer and/or a residual convolution layer, a batch normalization layer and a ReLU activation function layer;
each shunting unit comprises a 1 × 1 convolution submodule and a P-E submodule, wherein the 1 × 1 convolution submodule comprises a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E submodule comprises a feature-map dimension conversion layer and a linear mapping layer;
in each fusion unit, the input end of the convolution submodule is connected with the first stream convolution module, and its output end is connected with the splicing fusion submodule; the input end of the lightweight Transformer submodule is connected with the second stream P-E module, and its output end is connected with the splicing fusion submodule; the output end of the splicing fusion submodule is connected with the shunting unit, and the shunting unit is connected either with the next group of convolution submodule and lightweight Transformer submodule in the same fusion module, or with the first stream down-sampling module and the second stream down-sampling module of the same stage.
According to the converter slag-off monitoring method and system provided by the embodiments of the invention, an image segmentation model is constructed based on the Transformer model to monitor converter slag discharging. Through repeated crossing and combination of the splitting and fusion mechanisms, the convolution features obtain the global information extracted by the lightweight Transformer submodule, the sequence features obtain the local information extracted by the convolution submodule, and the different features interact while being fused, which enlarges the receptive field of the model and achieves information complementation, thereby enhancing the model's expression capability. In addition, the fusion and complementation of features allow the Transformer to extract features directly without the pre-trained weights it would ordinarily require, so the model structure can be adjusted flexibly according to actual requirements: each fusion module can comprise several groups of fusion units and shunting units, and multi-stage fusion can be performed after splitting to obtain the best information expression. The method thus meets the real-time requirements of industrial field application, accurately identifies the steel slag, the furnace inner wall and the furnace mouth, avoids interference from the harsh on-site environment, and ensures the safety of operators; the monitoring precision of the steel slag is improved, the robustness is strong, and the converter operating condition can be handled accurately; meanwhile, resources are saved and steelmaking production efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; it is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a method for monitoring converter slag tapping according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an image dual-flow segmentation model in an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a fusion module according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a training and validation process in an embodiment of the present invention;
FIG. 5 is a schematic diagram of training data flow and variation in an embodiment of the present invention;
FIG. 6 is the original image and the segmentation result of the steel slag position at a first inclination angle of the converter during tapping, monitored by an embodiment of the invention;
FIG. 7 is the original image and the segmentation result of the steel slag position at a second inclination angle of the converter during tapping, monitored by an embodiment of the invention;
FIG. 8 is the original image and the segmentation result of the steel slag position at a third inclination angle of the converter during tapping, monitored by an embodiment of the invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to solve the above problems, embodiments of the present invention provide a method and a system for monitoring converter slag discharging, which segment converter images acquired in real time based on the Transformer model, detect the background, steel slag, furnace inner wall and furnace mouth, and identify and handle the steel slag image, thereby ensuring normal operation of steelmaking production equipment, improving the yield of alloy elements, saving material cost and improving steelmaking quality. The method has strong model generalization and global-information extraction capability, effectively suppresses interference from complex background areas, allows the network structure to be adjusted flexibly according to actual requirements, achieves high segmentation precision and strong real-time performance, and accurately identifies the steel slag. Meanwhile, the metallurgical site can be monitored in real time, effectively protecting on-site workers, reducing smelting cost and improving smelting quality.
As shown in fig. 1, the method for monitoring converter slag tapping provided by the embodiment of the invention includes the following steps:
and step S1, acquiring pictures at different inclination angles in the converter tapping process, wherein the pictures cover the whole process of tapping, and each picture at least comprises a steel slag image of a converter mouth.
In this step, during converter tapping, as slag is discharged, the steel slag floats on the molten steel while the molten steel below is poured into the ladle; it must be ensured, on the one hand, that the steel slag is not poured into the ladle and, on the other hand, that the converter inclination angle does not cause the slag on top of the molten steel to overflow the furnace mouth. As the molten steel is poured, the converter inclination angle changes; keeping the pouring angle and speed within preset ranges ensures high-quality pouring. This step first collects tapping picture data of the converter under normal operation as basic data to support and inform subsequent converter operation. The different inclination angles are chosen according to the actual conditions of the converter, with data acquired at preset angle or time intervals. The acquired pictures cover the complete tilting process of the converter and each contains a steel slag image of the furnace mouth.
And step S2, labeling each pixel point in the picture, labeling the pixel points as four types of segmentation labels of a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture.
And step S3, generating a picture data set from all the pictures marked with the labels, and dividing the picture data set into a training set and a verification set according to a preset proportion.
Step S4: construct an image dual-stream segmentation model. As shown in fig. 2, the image dual-stream segmentation model includes a Stem module, a first-stream convolution module, a second-stream P-E (Patch Embedding) module, at least two Transformer-based fusion modules, at least two first-stream down-sampling modules, at least two second-stream down-sampling modules and a segmentation head module. The fusion modules, first-stream down-sampling modules and second-stream down-sampling modules are equal in number, that number being the number of stages, and they are arranged sequentially by stage. The input ends of the first-stream convolution module and the second-stream P-E module are both connected to the Stem module, and their output ends are both connected to the first-stage fusion module; the output end of each fusion module is connected to both the first-stream and second-stream down-sampling modules of the same stage; except for the first-stage fusion module, the input ends of the fusion module of each stage are connected to both the first-stream and second-stream down-sampling modules of the previous stage; the last-stage first-stream and second-stream down-sampling modules are both connected to the segmentation head module.
In this step, after a picture is input into the constructed image dual-stream segmentation model, it is split into two streams starting from the Stem module: the first stream passes through a 1 × 1 convolution module and the second stream through the P-E module; the first-stage fusion module then performs first-stage fusion, after which the two streams are down-sampled separately. Taking four stages as an example, after four rounds of fusion and down-sampling the streams converge in the segmentation head module, which outputs the final segmentation image. Because the image dual-stream segmentation model is built around a Transformer model, the network receptive field is enlarged and the model's ability to capture context is strengthened, which reduces mis-segmentation of the background around the steel slag and improves the accuracy of the model.
The Stem module in this step includes a convolution layer with stride 2 and kernel size 7 × 7, a batch normalization layer and a ReLU activation function; the first-stream convolution module includes a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function; the second-stream P-E module includes a conversion feature map dimension layer and a linear mapping layer.
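The Stem and P-E modules described above can be sketched in PyTorch. This is a minimal sketch under stated assumptions, not the patent's implementation: the class names, the channel counts C = 32 and D = 64, and the 256 × 256 input size are all illustrative.

```python
# Hedged PyTorch sketch of the Stem module (7x7 stride-2 conv + BN + ReLU,
# giving a C x H/2 x W/2 map) and the Patch-Embedding (P-E) module
# (flatten the map to a sequence, then linearly map channels C -> D).
import torch
import torch.nn as nn

class Stem(nn.Module):
    """7x7 stride-2 convolution + BatchNorm + ReLU -> C x H/2 x W/2."""
    def __init__(self, in_ch=3, c=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, c, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(c),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class PatchEmbed(nn.Module):
    """Flatten the feature map to a sequence, then map channels C -> D."""
    def __init__(self, c=32, d=64):
        super().__init__()
        self.proj = nn.Linear(c, d)
    def forward(self, x):                   # x: B x C x H x W
        seq = x.flatten(2).transpose(1, 2)  # B x (HW) x C
        return self.proj(seq)               # B x (HW) x D

stem = Stem(c=32).eval()
pe = PatchEmbed(c=32, d=64)
with torch.no_grad():
    feat = stem(torch.randn(1, 3, 256, 256))   # 1 x 32 x 128 x 128
    seq = pe(feat)                             # 1 x 16384 x 64
```

With a 256 × 256 input the map becomes 32 × 128 × 128 and the sequence 16384 × 64, matching the C × H/2 × W/2 and D × (HW/4) sizes quoted later in step S5.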
As shown in fig. 3, the fusion module includes at least one group of fusion units and a shunting unit. Each fusion unit includes a convolution submodule, a lightweight Transformer submodule and a splicing fusion submodule, wherein the convolution submodule includes a continuous convolution layer and/or a residual convolution layer, a batch normalization layer and a ReLU activation function layer; the lightweight Transformer submodule includes a self-attention layer, a normalization layer and a multilayer perceptron; and the splicing fusion submodule includes a conversion sequence feature dimension layer and a splicing merging layer. Each shunting unit includes a 1 × 1 convolution submodule and a P-E submodule, wherein the 1 × 1 convolution submodule includes a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E submodule includes a conversion feature map dimension layer and a linear mapping layer. In each fusion unit, the input end of the convolution submodule is connected to the first-stream convolution module and its output end to the splicing fusion submodule; the input end of the lightweight Transformer submodule is connected to the second-stream P-E module and its output end to the splicing fusion submodule; the output end of the splicing fusion submodule is connected to the shunting unit. Data output from the splicing fusion submodule of a fusion unit is split by the shunting unit, and the split streams are fed into the next group of convolution and lightweight Transformer submodules, or into the first-stream and second-stream down-sampling modules.
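The splicing fusion submodule followed by the shunting unit can be sketched as one PyTorch module. This is a hedged sketch, not the patent's code: it shows only the splice-then-split data flow (sequence reshaped to a map, channel concatenation, then a 1 × 1 convolution back to C channels and a linear projection back to D sequence dimensions); the class name and sizes are assumptions.

```python
# Hedged sketch of splice-and-split: the second-stream sequence is reshaped
# back into a map, concatenated with the first-stream map on the channel
# dimension, then re-split into a C-channel map and a D-dim sequence.
import torch
import torch.nn as nn

class SpliceAndSplit(nn.Module):
    def __init__(self, c=32, d=64):
        super().__init__()
        self.conv1x1 = nn.Sequential(        # shunting unit, first stream
            nn.Conv2d(c + d, c, kernel_size=1), nn.BatchNorm2d(c), nn.ReLU())
        self.proj = nn.Linear(c + d, d)      # shunting unit, second stream (P-E)

    def forward(self, feat, seq):            # feat: B x C x h x w, seq: B x (hw) x D
        b, c, h, w = feat.shape
        seq_map = seq.transpose(1, 2).reshape(b, -1, h, w)      # B x D x h x w
        fused = torch.cat([feat, seq_map], dim=1)               # B x (C+D) x h x w
        new_feat = self.conv1x1(fused)                          # B x C x h x w
        new_seq = self.proj(fused.flatten(2).transpose(1, 2))   # B x (hw) x D
        return new_feat, new_seq

m = SpliceAndSplit(c=32, d=64).eval()
with torch.no_grad():
    f, s = m(torch.randn(1, 32, 16, 16), torch.randn(1, 256, 64))
```

Both outputs keep their original sizes, so a chain of fusion units can be stacked inside one fusion module without any shape bookkeeping between them.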
The first-stream down-sampling module includes a maximum pooling layer and a 1 × 1 convolution layer with stride 1; the second-stream down-sampling module includes a conversion sequence feature dimension layer, a maximum pooling layer, a 1 × 1 convolution layer with stride 1 and a conversion feature map dimension layer; the segmentation head module includes a conversion sequence feature dimension layer, a splicing merging layer, an up-sampling layer, a 1 × 1 convolution layer with stride 1 and a normalization index (softmax) layer.
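A minimal sketch of the segmentation head under the same assumptions: merge the two streams, upsample, project to the four classes (background, steel slag, furnace inner wall, furnace mouth) with a 1 × 1 convolution, and normalize with softmax. The upsampling mode and all sizes are illustrative assumptions.

```python
# Hedged sketch of the segmentation head: sequence -> map, channel concat,
# upsample, 1x1 conv to 4 classes, softmax over the class dimension.
import torch
import torch.nn as nn

class SegHead(nn.Module):
    def __init__(self, c=32, d=64, num_classes=4, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.classify = nn.Conv2d(c + d, num_classes, kernel_size=1)

    def forward(self, feat, seq):            # feat: B x C x h x w, seq: B x (hw) x D
        b, c, h, w = feat.shape
        seq_map = seq.transpose(1, 2).reshape(b, -1, h, w)  # sequence -> map
        merged = torch.cat([feat, seq_map], dim=1)          # channel concat
        logits = self.classify(self.up(merged))             # upsample + 1x1 conv
        return logits.softmax(dim=1)                        # normalized scores

head = SegHead().eval()
with torch.no_grad():
    out = head(torch.randn(1, 32, 8, 8), torch.randn(1, 64, 64))
```

The softmax output gives one probability map per class, from which the per-pixel class is the arg-max.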
Step S5: train and validate the image dual-stream segmentation model with the training set and the validation set to obtain a mature image dual-stream segmentation model.
In this step, when training and validating the model with the training set and the validation set, assume the input picture has height H and width W, C is the base channel dimension of the model, and D is the base sequence dimension of the model. Before training, the pictures undergo random horizontal flipping, random vertical flipping, random multi-scale transformation, random rotation and/or MixUp augmentation, and the corresponding labels undergo the same transformations.
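The key constraint in the augmentation above is that every random transform applied to a picture must be applied identically to its label mask, or the pixel-wise labels no longer line up. A hedged NumPy sketch of the paired flips (function name and shapes are assumptions):

```python
# Paired augmentation sketch: the same random horizontal/vertical flips
# are applied to the image and to its segmentation mask.
import numpy as np

def random_flip_pair(image, mask, rng):
    """Apply identical random flips to an image and its label mask."""
    if rng.random() < 0.5:                   # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                   # vertical flip
        image, mask = image[::-1, :], mask[::-1, :]
    return np.ascontiguousarray(image), np.ascontiguousarray(mask)

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)
lbl = np.arange(16).reshape(4, 4)            # toy mask, same layout as img
aug_img, aug_lbl = random_flip_pair(img, lbl, rng)
```

Because image and mask go through the same branch of each random decision, feeding identical arrays in returns identical arrays out, whichever flips fire.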
As shown in fig. 4, the training and validation process is as follows:
Step S51: after the picture is input into the Stem module, the output feature map has size C × H/2 × W/2.
Step S52: the feature map is split through the 1 × 1 convolution module and the P-E module; the first stream passes through the 1 × 1 convolution module to give a feature map of size C × H/2 × W/2, and the second stream passes through the P-E module to give a sequence feature of size D × (HW/4).
Step S53: the feature map output by the first-stream convolution module and the sequence feature output by the second-stream P-E module are input into the first-stage fusion module simultaneously; its fusion yields a feature map of size C × H/2 × W/2 and a sequence feature of size D × (HW/4). In this embodiment, a first-stage fusion module containing two groups of fusion units and shunting units is taken as an example.
The feature map and the sequence feature first enter the first fusion unit of the first-stage fusion module: the first stream passes through the convolution submodule to give a feature map of size C × H/2 × W/2, and the second stream passes through the lightweight Transformer submodule to give a sequence feature of size D × (HW/4). The first-stream feature map and the second-stream sequence feature then enter the splicing fusion submodule simultaneously. There, the conversion sequence feature dimension layer converts the second-stream sequence feature into a feature map of size D × H/2 × W/2, and the splicing merging layer concatenates it with the first-stream feature map along the channel dimension to give a fused feature map of size (C + D) × H/2 × W/2. The fused feature map is then input into the shunting unit, where the 1 × 1 convolution submodule yields a first-stream feature map of size C × H/2 × W/2 and the P-E submodule yields a second-stream sequence feature of size D × (HW/4). The first-stream feature map and second-stream sequence feature then enter the second group of fusion unit and shunting unit, repeating the same operations as the first group, and are finally input into the first-stage first-stream and second-stream down-sampling modules respectively. Processing by the two groups of fusion units and shunting units strengthens the overall feature extraction capability.
Step S54: the C × H/2 × W/2 feature map and the D × (HW/4) sequence feature are input into the first-stream and second-stream down-sampling modules respectively. In the first-stream down-sampling module, the maximum pooling layer and the 1 × 1 convolution layer with stride 1 halve the resolution of the first-stream feature map and double its channel number, giving a feature map of size 2C × H/4 × W/4. In the second-stream down-sampling module, the conversion sequence feature dimension layer converts the second-stream sequence feature into a feature map of size D × H/2 × W/2; the maximum pooling layer and the 1 × 1 convolution layer with stride 1 then halve the resolution and double the channel number, giving a feature map of size 2D × H/4 × W/4, which the conversion feature map dimension layer converts into a sequence feature of size 2D × (HW/16).
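The shape bookkeeping of one down-sampling stage can be checked in pure Python; the function and its arguments are illustrative, following the H, W, C, D notation of step S5.

```python
# One down-sampling stage: spatial resolution halves, channel / sequence
# dimension doubles, for both streams.
def downsample_shapes(c, d, h, w):
    """Return the first-stream map shape and second-stream sequence shape
    after one down-sampling stage, given the incoming map size c x h x w
    and sequence size d x (h*w)."""
    first = (2 * c, h // 2, w // 2)          # 2C x H/2 x W/2 map
    second = (2 * d, (h // 2) * (w // 2))    # 2D x (HW/4) sequence
    return first, second

# Stage-1 example from the text: a C x H/2 x W/2 map (here 32 x 128 x 128)
# becomes 2C x H/4 x W/4, and the sequence length drops by a factor of 4.
first, second = downsample_shapes(c=32, d=64, h=128, w=128)
```

This matches the text: 2C × H/4 × W/4 for the first stream and 2D × (HW/16) for the second when h = H/2, w = W/2.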
Step S55: the cycle of fusion by each stage's fusion module and down-sampling by the first-stream and second-stream down-sampling modules continues until the last-stage fused feature map is split and enters the last-stage first-stream and second-stream down-sampling modules. The final first-stream output is a feature map of size 2^N C × H/2^(N+1) × W/2^(N+1), and the final second-stream output is a sequence feature of size 2^N D × (HW/(2^(N+1))^2). Taking four stages of fusion modules containing 2, 2, 4 and 2 groups of fusion units and shunting units respectively, as shown in fig. 5, the final first-stream output is a feature map of size 16C × H/32 × W/32 and the second-stream output is a sequence feature of size 16D × (HW/1024).
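The closed-form sizes above can be verified with a few lines of arithmetic; the helper function is illustrative.

```python
# After N fusion/down-sampling stages: first stream 2^N*C x H/2^(N+1) x
# W/2^(N+1), second stream 2^N*D x HW/2^(2(N+1)).
def final_shapes(c, d, h, w, n):
    first = (2**n * c, h // 2**(n + 1), w // 2**(n + 1))
    second = (2**n * d, (h * w) // 2**(2 * (n + 1)))
    return first, second

# With N = 4 and a 512 x 512 input this reproduces the 16C x H/32 x W/32
# map and the 16D x (HW/1024) sequence quoted in the text (here C = D = 1).
first, second = final_shapes(c=1, d=1, h=512, w=512, n=4)
```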
Throughout this cycle, in each fusion unit of a fusion module the features extracted by the convolution submodule and the lightweight Transformer submodule are fused by the splicing fusion submodule: the first stream receives the global information features extracted by the lightweight Transformer submodule, and the second stream receives the local information features extracted by the convolution submodule. This interaction during fusion enlarges the receptive field of the model and makes the two kinds of information complementary, strengthening its expressive capacity. Moreover, the fusion and complementation of features allow the Transformer to extract features directly without pre-trained weights, so the model structure can be adjusted flexibly: each fusion module may contain several groups of fusion units and shunting units, and multi-level fusion after splitting yields the best information representation.
Step S56: the dual-stream features output by the last stage converge in the segmentation head module, where the conversion sequence feature dimension layer and the splicing merging layer merge the second-stream sequence feature with the first-stream feature map, and the up-sampling layer, convolution layer and normalization index layer output the segmentation result.
As shown in fig. 5, taking four stages of fusion modules containing 2, 2, 4 and 2 groups of fusion units and shunting units respectively as an example: the first-stage fusion module includes two groups of fusion units and shunting units; the second-stage fusion module includes two groups; the third-stage fusion module includes four groups; and the fourth-stage fusion module includes two groups. The flow and transformation of the data from step S51 to step S55 can be followed in fig. 5.
Step S57: the loss between the segmentation result output by the model and the segmentation labels of the picture data set is computed; according to the loss function value, the model parameters are updated by back-propagating gradients, and after validation on the validation set a mature image segmentation model is obtained.
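The loss-and-update step can be sketched in PyTorch. This is a hedged sketch only: the patent does not name its loss function or optimizer, so pixel-wise cross-entropy and SGD are assumptions, and a 1 × 1 convolution stands in for the full dual-stream network.

```python
# Hedged sketch of step S57: pixel-wise loss between predicted logits and
# the four-class label map, then back-propagation and a parameter update.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 4, kernel_size=1)       # toy stand-in for the full model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()            # assumed loss; 4 classes

images = torch.randn(2, 3, 32, 32)
labels = torch.randint(0, 4, (2, 32, 32))    # 0=background .. 3=furnace mouth

logits = model(images)                       # B x 4 x 32 x 32
loss = criterion(logits, labels)             # scalar loss over all pixels
optimizer.zero_grad()
loss.backward()                              # gradients back-propagate
optimizer.step()                             # model parameters are updated
```

Validation would run the same forward pass on the validation set without the backward/step calls, tracking the loss or a segmentation metric such as mIoU.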
Step S6: capture real-time tapping pictures of the converter on site, preprocess them and input them into the mature image dual-stream segmentation model, which outputs segmentation results giving the real-time monitored positions of the steel slag, furnace mouth and furnace inner wall.
FIG. 6 is an original picture and segmentation effect picture of the steel slag position at a first inclination angle of the converter during tapping; FIG. 7 is an original picture and segmentation effect picture at a second inclination angle; FIG. 8 is an original picture and segmentation effect picture at a third inclination angle. As shown in figs. 6 to 8, when the converter slag tapping monitoring method is used to monitor the steel slag position during tapping, the segmentation result of the model accurately identifies the background, steel slag, furnace inner wall and furnace mouth, so the slag line position is obtained accurately and provides a positional basis for controlling the inclination angle of the converter.
In the technical scheme above, an image segmentation model based on a Transformer network is applied to converter slag tapping monitoring. It has strong global information extraction capability, its network structure can be adjusted flexibly to actual requirements, and it meets the real-time demands of industrial field application. It monitors the steel slag position and segments the steel slag, furnace mouth and furnace inner wall, avoids interference from the harsh on-site environment and safeguards operators; it improves the monitoring precision of the steel slag with strong robustness and handles converter operating conditions accurately; at the same time it saves resources and improves steelmaking production efficiency.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
Based on the same idea, an embodiment of the present invention further provides a converter slag tapping monitoring system, and as shown in fig. 8, the monitoring system includes: the system comprises a data acquisition subsystem, an image double-flow segmentation model subsystem and a real-time image acquisition and monitoring result output subsystem.
Wherein the data acquisition subsystem comprises: the system comprises a historical picture acquisition module, a segmentation label labeling module and a data set generation module;
the historical picture acquisition module is used for acquiring pictures at different inclination angles in the converter tapping process, the pictures cover the complete process of tapping, and each picture at least comprises a steel slag image of a converter mouth; the segmentation label marking module is used for marking each pixel point in the picture, respectively marking the pixel points as segmentation labels of four categories including a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture; the data set generating module is used for generating a picture data set from all the pictures marked with the labels and dividing the picture data set into a training set and a verification set according to a preset proportion;
the image double-flow segmentation model subsystem is used for providing an image double-flow segmentation model, completing training and verification and obtaining a mature image double-flow segmentation model; as shown in fig. 2 and 3, the image dual-stream segmentation model includes: the system comprises a Stem module, a first stream convolution module, a second stream P-E module, at least two fusion modules, at least two first stream down-sampling modules, at least two second stream down-sampling modules and a segmentation head module, wherein the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are the same in number, the number of the fusion modules is the maximum number of stages, and the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are sequentially arranged according to the number of stages; the input ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the Stem module, and the output ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the first-stage fusion module; the output end of the fusion module is simultaneously connected with a first stream down-sampling module and a second stream down-sampling module at the same level; except the first-stage fusion module, the input ends of the fusion modules of other stages are simultaneously connected with the first-stage down-sampling module and the second-stage down-sampling module of the previous stage; the last stage of the first stream down-sampling module and the second stream down-sampling module are simultaneously connected with the segmentation head module.
The fusion module includes at least one group of fusion units and a shunting unit. Each fusion unit includes a convolution submodule, a lightweight Transformer submodule and a splicing fusion submodule, wherein the convolution submodule includes a continuous convolution layer and/or a residual convolution layer, a batch normalization layer and a ReLU activation function layer; the lightweight Transformer submodule includes a self-attention layer, a normalization layer and a multilayer perceptron; and the splicing fusion submodule includes a conversion sequence feature dimension layer and a splicing merging layer. Each shunting unit includes a 1 × 1 convolution submodule and a P-E submodule, wherein the 1 × 1 convolution submodule includes a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E submodule includes a conversion feature map dimension layer and a linear mapping layer. In each fusion unit, the input end of the convolution submodule is connected to the first-stream convolution module and its output end to the splicing fusion submodule; the input end of the lightweight Transformer submodule is connected to the second-stream P-E module and its output end to the splicing fusion submodule; the output end of the splicing fusion submodule is connected to the shunting unit, which in turn is connected either to the next group of convolution and lightweight Transformer submodules in the same fusion module or to the first-stream and second-stream down-sampling modules of the same stage.
The real-time picture acquisition and monitoring result output subsystem is used for capturing a real-time tapping picture of a field converter, preprocessing the picture and sending the preprocessed picture to the image double-flow segmentation model subsystem; and receiving a segmentation result obtained by a mature image double-flow segmentation model, and outputting the real-time monitoring positions of the steel slag, the furnace mouth and the inner wall of the furnace.
In this embodiment, each subsystem and module is implemented by a processor, and when storage is needed, a memory is added appropriately. The Processor may be, but is not limited to, a microprocessor MPU, a Central Processing Unit (CPU), a Network Processor (NP), a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), other programmable logic devices, discrete gates, transistor logic devices, discrete hardware components, and the like. The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
It should be noted that, the converter slag tapping monitoring system and the converter slag tapping monitoring method described in this embodiment correspond to each other, and the definition and description of the method are also applicable to the system, and are not repeated herein.
In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware or any combination thereof. When implemented in software, it may be realized in whole or in part as a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server or data center to another via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. The method for monitoring the converter slag discharging is characterized by comprising the following steps:
step S1, acquiring pictures at different inclination angles in the converter tapping process, wherein the pictures cover the whole process of tapping, and each picture at least comprises a steel slag image of a converter mouth;
step S2, labeling each pixel point in the picture, respectively labeling the pixel points as four types of segmentation labels of a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture;
step S3, generating a picture data set from all the pictures marked with the labels, and dividing the picture data set into a training set and a verification set according to a preset proportion;
step S4, constructing an image double-stream segmentation model, wherein the image double-stream segmentation model comprises a Stem module, a first stream convolution module, a second stream P-E module, at least two transform model-based fusion modules, at least two first stream down-sampling modules, at least two second stream down-sampling modules and a segmentation head module, the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are the same in number, the number of the fusion modules is the maximum number of stages, and the fusion modules, the first stream down-sampling modules and the second stream down-sampling modules are sequentially arranged according to the number of stages; the input ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the Stem module, and the output ends of the first stream convolution module and the second stream P-E module are simultaneously connected with the first-stage fusion module; the output end of the fusion module is simultaneously connected with a first stream down-sampling module and a second stream down-sampling module at the same level; except the first-stage fusion module, the input ends of the fusion modules of other stages are simultaneously connected with the first-stage down-sampling module and the second-stage down-sampling module of the previous stage; the last stage of the first stream down-sampling module and the second stream down-sampling module are simultaneously connected with the segmentation head module;
step S5, training and verifying the image double-flow segmentation model by adopting a training set and a verification set to obtain a mature image double-flow segmentation model;
and step S6, capturing a real-time tapping picture of the converter on site, preprocessing the picture, inputting a mature image double-flow segmentation model, and outputting a segmentation result to obtain the real-time monitoring positions of the steel slag, the furnace mouth and the inner wall of the furnace.
2. The converter slag tapping monitoring method according to claim 1, wherein the fusion module comprises at least one group of fusion units and a shunting unit; wherein,
each fusion unit comprises a convolution submodule, a lightweight Transformer submodule and a splicing fusion submodule, wherein the convolution submodule comprises a continuous convolution layer and/or a residual convolution layer, a batch standardization layer and a ReLU activation function layer;
each shunting unit comprises a 1 × 1 convolution submodule and a P-E submodule, wherein the 1 × 1 convolution submodule comprises a 1 × 1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E submodule comprises a conversion feature map dimension layer and a linear mapping layer;
in each fusion unit, the input end of the convolution submodule is connected with the first stream convolution module, and the output end of the convolution submodule is connected with the splicing fusion submodule; the input end of the lightweight Transformer sub-module is connected with the second stream P-E module, and the output end of the lightweight Transformer sub-module is connected with the splicing fusion sub-module; the output end of the splicing fusion sub-module is connected with the shunt unit, and the shunt unit is respectively connected with the next group of convolution sub-modules and the light-weight transform sub-modules in the same fusion module or connected with the first stream down-sampling module and the second stream down-sampling module in the same stage.
3. The method as claimed in claim 1 or 2, wherein the Stem module in step S4 comprises convolution layer with convolution kernel size of 7 x 7 and step size of 2, batch normalization layer and ReLU activation function; the first stream convolution module comprises a convolution layer with 1 multiplied by 1 and step size of 1, a batch normalization layer and a ReLU activation function; the second stream P-E module includes a transformed feature map dimension layer and a linear mapping layer.
4. The method for monitoring converter slag tapping according to claim 3, wherein the first-stream down-sampling module comprises a maximum pooling layer and a 1 × 1 convolution layer with stride 1, and the second-stream down-sampling module comprises a conversion sequence feature dimension layer, a maximum pooling layer, a 1 × 1 convolution layer with stride 1 and a conversion feature map dimension layer; the segmentation head module comprises a conversion sequence feature dimension layer, a splicing merging layer, an up-sampling layer, a 1 × 1 convolution layer with stride 1 and a normalization index layer.
5. The method for monitoring converter slag tapping according to claim 4, wherein in step S5 it is assumed that the input picture has height H and width W, C is the base channel dimension of the model, and D is the base sequence dimension of the model; the image double-flow segmentation model comprises N stages of fusion modules, first-stream down-sampling modules and second-stream down-sampling modules; the training and validation process is as follows:
step S51, after the picture is input into the Stem module, the size of the output characteristic graph is C multiplied by H/2 multiplied by W/2;
step S52, the feature map is split through a 1 × 1 convolution module and a P-E module; the first stream passes through the 1 × 1 convolution module to obtain a feature map of size C × H/2 × W/2, and the second stream passes through the P-E module to obtain a sequence feature of size D × (HW/4);
step S53, inputting the feature graph output by the first stream convolution module and the sequence feature output by the second stream PE module into the first-stage fusion module at the same time, and enhancing the feature extraction capability of the model through the fusion of the first-stage fusion module to obtain the feature graph with the size of C × H/2 × W/2 and the sequence feature of D × (HW/4);
a step S54 of inputting the feature map of C × H/2 × W/2 and the sequence feature of D × (HW/4) into the first stream down-sampling module and the second stream down-sampling module, respectively; in a first stream down-sampling module, the resolution of a first stream feature map is halved through a maximum pooling layer and a convolution layer with 1 × 1 and the step length being 1, and the channel number is doubled to obtain the feature map with the size of 2 CxH/4 xW/4; in a second stream down-sampling module, converting the second stream sequence characteristics into a sequence characteristic dimension layer to obtain a characteristic diagram with the size of D multiplied by H/2 multiplied by W/2, then reducing the resolution of the characteristic diagram by half and doubling the number of channels through a maximum pooling layer and a convolution layer with the size of 1 multiplied by 1 to obtain a characteristic diagram with the size of 2D multiplied by H/4 multiplied by W/4, and then converting the characteristic diagram dimension layer to obtain the sequence characteristics of 2D multiplied by (HW/16);
step S55, the loop of fusion by each stage's fusion module and down-sampling by the first- and second-stream down-sampling modules is repeated, until the feature streams split off by the final-stage fusion module enter the final-stage first- and second-stream down-sampling modules; the final first-stream output is a feature map of size 2^N·C×H/2^(N+1)×W/2^(N+1), and the final second-stream output is a sequence feature of size 2^N·D×(HW/(2^(N+1))^2);
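The stage loop of step S55 amounts to pure size bookkeeping: each down-sampling step doubles the channels and halves the spatial resolution. The sketch below computes the final sizes for assumed example values C=64, D=96, H=W=512, N=3 (no learned layers involved).

```python
# Size bookkeeping for the stage loop of step S55.
def final_sizes(C, D, H, W, N):
    c1, h, w = C, H // 2, W // 2        # first stream after Stem / stage 1
    d2, L = D, (H // 2) * (W // 2)      # second-stream sequence: D x (HW/4)
    for _ in range(N):                  # N down-sampling steps
        c1, h, w = 2 * c1, h // 2, w // 2
        d2, L = 2 * d2, L // 4
    return (c1, h, w), (d2, L)          # 2^N C x H/2^(N+1) x W/2^(N+1), 2^N D x HW/(2^(N+1))^2

first, second = final_sizes(C=64, D=96, H=512, W=512, N=3)
print(first, second)                    # (512, 32, 32) (768, 1024)
```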
step S56, the dual-stream features output by the final stage converge at the segmentation head module, are merged by the concatenation-fusion layer, and the segmentation result is output through the convolution layer, the up-sampling layer and the softmax (normalized exponential) layer;
and step S57, the loss between the segmentation result output by the model and the segmentation labels of the picture data set is calculated; based on the value of the loss function, the model parameters are updated by back-propagating its gradients, and a mature image segmentation model is obtained after verification on the verification set.
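The loss computation of step S57 can be sketched as per-pixel cross-entropy after a softmax over the four classes (background, steel slag, furnace inner wall, furnace mouth). The patent only states that a loss function is computed and its gradients back-propagated; the specific loss below is an assumption for illustration.

```python
import numpy as np

# Assumed per-pixel softmax cross-entropy for 4-class segmentation.
def softmax_ce(logits, labels):         # logits: K x H x W, labels: H x W (ints)
    z = logits - logits.max(axis=0, keepdims=True)      # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    h, w = labels.shape
    picked = p[labels, np.arange(h)[:, None], np.arange(w)]
    return -np.log(picked + 1e-12).mean()

rng = np.random.default_rng(0)
logits = rng.standard_normal((4, 8, 8))                 # toy model output
labels = rng.integers(0, 4, size=(8, 8))                # toy segmentation label
loss = softmax_ce(logits, labels)
print(float(loss))
```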
6. The converter slag tapping monitoring method according to claim 5, wherein the fusion performed by the fusion modules in steps S53 and S55 proceeds as follows:
the feature map and the sequence feature enter a fusion unit in the stage-i fusion module: the first stream passes through the convolution sub-module to obtain a feature map of size 2^(i-1)·C×H/2^i×W/2^i, and the second stream passes through the lightweight Transformer sub-module to obtain a sequence feature of size 2^(i-1)·D×(HW/(2^i)^2); the first-stream feature map and the second-stream sequence feature then enter the concatenation-fusion sub-module simultaneously, in which a sequence-to-feature-map dimension conversion layer converts the 2^(i-1)·D×(HW/(2^i)^2) sequence feature into a feature map of size 2^(i-1)·D×H/2^i×W/2^i, which is concatenated with the first-stream feature map along the channel dimension to obtain a fused feature map of size 2^(i-1)·(C+D)×H/2^i×W/2^i; the fused feature map is input into a shunting unit, in which a 1×1 convolution sub-module produces a first-stream feature map of size 2^(i-1)·C×H/2^i×W/2^i and a P-E sub-module produces a second-stream sequence feature of size 2^(i-1)·D×(HW/(2^i)^2);
the first-stream feature map and the second-stream sequence feature then enter either the next group of fusion unit and shunting unit, or the first-stream and second-stream down-sampling modules at the same stage; if they enter the next group, the operations of the fusion unit and shunting unit are repeated.
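One fusion-unit/shunting-unit pass can be followed at the shape level. Assumptions: the convolution, lightweight Transformer, 1×1 convolution and P-E sub-modules are all modelled as channel-mixing matrix multiplies with random stand-in weights; stage i=2 with toy sizes C=8, D=12, H=W=64.

```python
import numpy as np

# Shape-level sketch of one stage-i fusion-unit / shunting-unit pass.
rng = np.random.default_rng(0)
C, D, H, W, i = 8, 12, 64, 64, 2
Ci, Di = C * 2 ** (i - 1), D * 2 ** (i - 1)
Hi, Wi = H // 2 ** i, W // 2 ** i

f_map = rng.standard_normal((Ci, Hi, Wi))        # 2^(i-1)C x H/2^i x W/2^i
f_seq = rng.standard_normal((Di, Hi * Wi))       # 2^(i-1)D x (HW/(2^i)^2)

# Concatenation-fusion: sequence -> map, then channel-dimension concat.
fused = np.concatenate([f_map, f_seq.reshape(Di, Hi, Wi)], axis=0)
assert fused.shape == (Ci + Di, Hi, Wi)          # 2^(i-1)(C+D) x H/2^i x W/2^i

# Shunting unit: stand-in 1x1 conv and P-E sub-modules re-split the streams.
flat = fused.reshape(Ci + Di, -1)
out_map = (rng.standard_normal((Ci, Ci + Di)) @ flat).reshape(Ci, Hi, Wi)
out_seq = rng.standard_normal((Di, Ci + Di)) @ flat

print(out_map.shape, out_seq.shape)              # (16, 16, 16) (24, 256)
```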
7. The converter slag tapping monitoring method according to claim 5, wherein before the training in step S51, random horizontal flipping, random vertical flipping, random multi-scale transformation, random rotation and/or MixUp transformation are applied to the picture, and the corresponding labels are transformed in the same way.
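The paired augmentation of claim 7 hinges on applying the identical random transform to picture and label. The sketch below covers only the two flips (multi-scale, rotation and MixUp are omitted for brevity); shapes and probabilities are assumed.

```python
import numpy as np

# Apply the same random flips to a C x H x W picture and its H x W label.
def random_flip(img, label, rng):
    if rng.random() < 0.5:                       # random horizontal flip
        img, label = img[..., ::-1], label[..., ::-1]
    if rng.random() < 0.5:                       # random vertical flip
        img, label = img[..., ::-1, :], label[::-1, :]
    return img, label

rng = np.random.default_rng(0)
img = rng.standard_normal((3, 4, 4))             # toy picture
label = rng.integers(0, 4, size=(4, 4))          # toy segmentation label
img2, label2 = random_flip(img, label, rng)
print(img2.shape, label2.shape)                  # (3, 4, 4) (4, 4)
```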
8. A converter slag tapping monitoring system, characterized in that the monitoring system comprises: a data acquisition subsystem, an image double-flow segmentation model subsystem, and a real-time picture acquisition and monitoring result output subsystem; wherein:
the data acquisition subsystem includes: the system comprises a historical picture acquisition module, a segmentation label labeling module and a data set generation module; the historical picture acquisition module is used for acquiring pictures at different inclination angles in the converter tapping process, the pictures cover the complete process of tapping, and each picture at least comprises a steel slag image of a converter mouth; the segmentation label marking module is used for marking each pixel point in the picture, respectively marking the pixel points as segmentation labels of four categories including a background, steel slag, a furnace inner wall and a furnace mouth, and binding the labels with the inclination angle of the picture; the data set generating module is used for generating picture data sets from all the pictures marked with the labels and dividing the picture data sets into a training set and a verification set according to a preset proportion;
the image double-flow segmentation model subsystem is used for providing an image double-flow segmentation model, completing training and verification, and obtaining a mature image double-flow segmentation model; wherein the image double-flow segmentation model comprises: a Stem module, a first-stream convolution module, a second-stream P-E module, at least two fusion modules, at least two first-stream down-sampling modules, at least two second-stream down-sampling modules and a segmentation head module; the fusion modules, first-stream down-sampling modules and second-stream down-sampling modules are equal in number, the number of fusion modules is the maximum number of stages, and the three kinds of modules are arranged sequentially by stage; the input ends of the first-stream convolution module and the second-stream P-E module are both connected with the Stem module, and their output ends are both connected with the first-stage fusion module; the output end of each fusion module is connected with both the first-stream and second-stream down-sampling modules of the same stage; except for the first-stage fusion module, the input end of each fusion module is connected with both the first-stream and second-stream down-sampling modules of the previous stage; the final-stage first-stream and second-stream down-sampling modules are both connected with the segmentation head module;
the real-time picture acquisition and monitoring result output subsystem is used for capturing real-time tapping pictures of the field converter, preprocessing the pictures and sending them to the image double-flow segmentation model subsystem; and for receiving the segmentation result obtained by the mature image double-flow segmentation model and outputting the real-time monitored positions of the steel slag, the furnace mouth and the furnace inner wall.
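The data-set generation module of claim 8 divides the labelled pictures into training and verification sets by a preset proportion. A minimal sketch (the 0.8 ratio and the shuffle are assumed example choices, not specified by the patent):

```python
import random

# Shuffle labelled samples and split by a preset train/verification ratio.
def split_dataset(samples, train_ratio=0.8, seed=0):
    samples = list(samples)
    random.Random(seed).shuffle(samples)     # fixed seed for reproducibility
    k = int(len(samples) * train_ratio)
    return samples[:k], samples[k:]

train, val = split_dataset(range(100))
print(len(train), len(val))                  # 80 20
```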
9. The converter slag tapping monitoring system according to claim 8, wherein the fusion module comprises at least one group of a fusion unit and a shunting unit; wherein:
each fusion unit comprises a convolution sub-module, a lightweight Transformer sub-module and a concatenation-fusion sub-module, wherein the convolution sub-module comprises consecutive convolution layers and/or residual convolution layers, a batch normalization layer and a ReLU activation function layer;
each shunting unit comprises a 1×1 convolution sub-module and a P-E sub-module, wherein the 1×1 convolution sub-module comprises a 1×1 convolution layer with stride 1, a batch normalization layer and a ReLU activation function, and the P-E sub-module comprises a feature-map dimension conversion layer and a linear mapping layer;
in each fusion unit, the input end of the convolution sub-module is connected with the first-stream convolution module, and its output end is connected with the concatenation-fusion sub-module; the input end of the lightweight Transformer sub-module is connected with the second-stream P-E module, and its output end is connected with the concatenation-fusion sub-module; the output end of the concatenation-fusion sub-module is connected with the shunting unit, and the shunting unit is connected either with the next group of convolution and lightweight Transformer sub-modules in the same fusion module, or with the first-stream and second-stream down-sampling modules at the same stage.
CN202210489189.XA 2022-05-07 2022-05-07 Converter slag discharging monitoring method and system Active CN114581859B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210489189.XA CN114581859B (en) 2022-05-07 2022-05-07 Converter slag discharging monitoring method and system


Publications (2)

Publication Number Publication Date
CN114581859A CN114581859A (en) 2022-06-03
CN114581859B true CN114581859B (en) 2022-09-13

Family

ID=81769265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210489189.XA Active CN114581859B (en) 2022-05-07 2022-05-07 Converter slag discharging monitoring method and system

Country Status (1)

Country Link
CN (1) CN114581859B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456427B (en) * 2023-12-19 2024-04-02 武汉新科冶金设备制造有限公司 Molten steel pouring method of steelmaking converter

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109815785A (en) * 2018-12-05 2019-05-28 四川大学 A kind of face Emotion identification method based on double-current convolutional neural networks
CN110438284A (en) * 2019-08-26 2019-11-12 杭州谱诚泰迪实业有限公司 A kind of converter intelligence tapping set and control method
CN110532902A (en) * 2019-08-12 2019-12-03 北京科技大学 A kind of molten iron drossing detection method based on lightweight convolutional neural networks
CN110782462A (en) * 2019-10-30 2020-02-11 浙江科技学院 Semantic segmentation method based on double-flow feature fusion
CN110781944A (en) * 2019-10-21 2020-02-11 中冶南方(武汉)自动化有限公司 Automatic molten iron slag-off control method based on deep learning
CN110796046A (en) * 2019-10-17 2020-02-14 武汉科技大学 Intelligent steel slag detection method and system based on convolutional neural network
CN110929696A (en) * 2019-12-16 2020-03-27 中国矿业大学 Remote sensing image semantic segmentation method based on multi-mode attention and self-adaptive fusion
CN111681252A (en) * 2020-05-30 2020-09-18 重庆邮电大学 Medical image automatic segmentation method based on multipath attention fusion
CN111944942A (en) * 2020-07-30 2020-11-17 北京科技大学 Dynamic tapping control method and device for eccentric furnace bottom of converter
CN112767451A (en) * 2021-02-01 2021-05-07 福州大学 Crowd distribution prediction method and system based on double-current convolutional neural network
CN113077450A (en) * 2021-04-12 2021-07-06 大连大学 Cherry grading detection method and system based on deep convolutional neural network
CN113505759A (en) * 2021-09-08 2021-10-15 北京科技大学 Multitasking method, multitasking device and storage medium
CN114049384A (en) * 2021-11-09 2022-02-15 北京字节跳动网络技术有限公司 Method and device for generating video from image and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"改进ResNet101网络下渣出钢状态识别研究";李爱莲 等;《中国测试》;20201130;第46卷(第11期);第116-119页 *


Similar Documents

Publication Publication Date Title
CN110438284B (en) Intelligent tapping device of converter and control method
CN109858367B (en) Visual automatic detection method and system for worker through supporting unsafe behaviors
CN114581859B (en) Converter slag discharging monitoring method and system
KR102015947B1 (en) Method for extracting image of learning object for autonomous driving and apparatus thereof
CN112560980B (en) Training method and device of target detection model and terminal equipment
US20170308768A1 (en) Character information recognition method based on image processing
CN116384901B (en) Petrochemical wharf digital twin management method and system
CN111291684A (en) Ship board detection method in natural scene
CN106462397A (en) Program generation device, program generation method and program
CN116493735B (en) Real-time tracking method for motion splash in Wanwave-level ultra-high power laser welding process
CN115222697A (en) Container damage detection method based on machine vision and deep learning
CN112396042A (en) Real-time updated target detection method and system, and computer-readable storage medium
CN107797784A (en) Obtain the method and device of the adaptation resolution ratio of splicing device
CN111951289A (en) BA-Unet-based underwater sonar image data segmentation method
Kumar et al. Semi-supervised transfer learning-based automatic weld defect detection and visual inspection
CN111274872B (en) Video monitoring dynamic irregular multi-supervision area discrimination method based on template matching
CN116258946B (en) Precondition-based multi-granularity cross-modal reasoning method and device
CN112669269A (en) Pipeline defect classification and classification method and system based on image recognition
JP6681965B2 (en) Apparatus and method for extracting learning target image for autonomous driving
CN113673478A (en) Port large-scale equipment detection and identification method based on depth panoramic stitching
CN111583341B (en) Cloud deck camera shift detection method
JPH06103967B2 (en) Image processor for overhead line inspection
CN114463300A (en) Steel surface defect detection method, electronic device, and storage medium
CN111476311B (en) Anchor chain flash welding quality online detection method based on increment learning
CN117910620A (en) Tundish molten steel weight prediction method and system thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Jiangyun

Inventor after: HuangFu Yubin

Inventor after: Shen Haoran

Inventor after: Zhang Yifu

Inventor before: HuangFu Yubin

Inventor before: Li Jiangyun

Inventor before: Shen Haoran

Inventor before: Zhang Yifu

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant