CN111815604B - Blast furnace tuyere monitoring method and device, electronic equipment and storage medium - Google Patents

Blast furnace tuyere monitoring method and device, electronic equipment and storage medium

Info

Publication number: CN111815604B
Application number: CN202010652956.5A
Authority: CN (China)
Prior art keywords: sequence, image, blast furnace, features, furnace tuyere
Legal status: Active, application granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111815604A (en)
Inventors: 韩涛, 李梓赫, 张天放, 谭昶
Current assignee: Iflytek Information Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Iflytek Information Technology Co Ltd
Application filed by Iflytek Information Technology Co Ltd
Priority claimed from application CN202010652956.5A (the priority date is an assumption and is not a legal conclusion)
Publications: CN111815604A (application), CN111815604B (grant)

Classifications

    • G PHYSICS · G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis · 7/0002 Inspection of images, e.g. flaw detection · 7/0004 Industrial image inspection
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition · 18/20 Analysing · 18/24 Classification techniques · 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches · 18/2415 based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models · 3/02 Neural networks · 3/04 Architecture, e.g. interconnection topology · 3/045 Combinations of networks · 3/047 Probabilistic or stochastic networks · 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs · 3/08 Learning methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality · 2207/10016 Video; image sequence · 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details · 2207/20076 Probabilistic image processing · 2207/20081 Training; learning · 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; context of image processing · 2207/30108 Industrial image inspection · 2207/30116 Casting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Manufacture Of Iron (AREA)

Abstract

Embodiments of the invention provide a blast furnace tuyere monitoring method and device, electronic equipment, and a storage medium. The method comprises the following steps: acquiring video data of a blast furnace tuyere and extracting an image sequence from the video data; and monitoring the blast furnace tuyere based on the image sequence and on the relation between each frame of the sequence and the sequence as a whole. By applying an image sequence rather than a single image, the method, device, electronic equipment and storage medium display the actual condition of the blast furnace tuyere comprehensively and dynamically, effectively avoiding false alarms caused by the small visual differences between the tuyere's different working states. Monitoring that combines each frame with its relation to the whole image sequence further strengthens the robustness and accuracy of blast furnace tuyere monitoring.

Description

Blast furnace tuyere monitoring method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of blast furnace detection, and in particular to a blast furnace tuyere monitoring method and device, electronic equipment, and a storage medium.
Background
In the blast furnace smelting process, local smelting conditions inside the furnace can be observed directly through the tuyeres. The image seen through a tuyere peephole effectively reflects the furnace's thermal regime, its air-supply system, fuel injection, and the movement of burden and gas flow. Monitoring the blast furnace tuyeres is therefore a very important task in blast furnace smelting.
At present, tuyere monitoring generally requires collecting tuyere videos, which technicians inspect one by one with the naked eye to judge the working condition of each tuyere. Because a blast furnace has many tuyeres, and the differences the tuyeres show in video under different working states are small, such monitoring depends entirely on the technicians' expertise and on how attentively their eyes can capture and distinguish these differences. The accuracy and reliability of the monitoring results therefore cannot be guaranteed, which greatly affects the stability of the blast furnace.
Disclosure of Invention
Embodiments of the invention provide a blast furnace tuyere monitoring method and device, electronic equipment, and a storage medium, which solve the poor accuracy and reliability of existing blast furnace tuyere monitoring methods.
In a first aspect, an embodiment of the present invention provides a method for monitoring a tuyere of a blast furnace, including:
acquiring video data of a blast furnace tuyere, and extracting an image sequence from the video data;
and carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
Optionally, the blast furnace tuyere monitoring is performed based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence, and specifically includes:
inputting the image sequence into a blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the blast furnace tuyere monitoring model;
the blast furnace tuyere monitoring model is trained based on a sample image sequence and a corresponding sample monitoring result, and is used for carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
Optionally, the inputting the image sequence to a blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the blast furnace tuyere monitoring model specifically includes:
inputting each frame of image in the image sequence to a semantic feature layer of the blast furnace tuyere monitoring model to obtain semantic features of each frame of image output by the semantic feature layer;
inputting semantic features of each frame of image into a relation feature layer of the blast furnace tuyere monitoring model to obtain sequence relation features output by the relation feature layer, wherein the sequence relation features comprise the relation between each frame of image and the whole image sequence;
and inputting the sequence relation characteristics to a result output layer of the blast furnace tuyere monitoring model to obtain the blast furnace tuyere monitoring result output by the result output layer.
Optionally, the inputting the semantic feature of each frame of image to the relation feature layer of the blast furnace tuyere monitoring model to obtain the sequence relation feature output by the relation feature layer specifically includes:
inputting semantic features of each frame of image into a non-local feature layer of the relation feature layer to obtain a sequence non-local feature output by the non-local feature layer;
and inputting the sequence non-local features to the attention feature layer of the relation feature layer to obtain the sequence relation features output by the attention feature layer.
Optionally, the inputting the semantic feature of each frame of image to the non-local feature layer of the relation feature layer to obtain the sequence non-local feature output by the non-local feature layer specifically includes:
inputting the semantic features of each frame of image to a feature combination layer of the non-local feature layer to obtain sequence semantic features output by the feature combination layer;
inputting the sequence semantic features to a non-local feature correlation layer of the non-local feature layer to obtain non-local correlation features output by the non-local feature correlation layer;
and inputting the sequence semantic features and the non-local associated features to a residual error connecting layer of the non-local feature layer to obtain the sequence non-local features output by the residual error connecting layer.
Optionally, the inputting the sequence non-local features to the attention feature layer of the relation feature layer to obtain the sequence relation features output by the attention feature layer specifically includes:
inputting the sequence non-local features to a spatial attention feature layer of the attention feature layer to obtain spatial attention features output by the spatial attention feature layer;
inputting the sequence non-local features to a channel attention feature layer of the attention feature layer to obtain channel attention features output by the channel attention feature layer;
and inputting the spatial attention features and the channel attention features to a feature fusion layer of the attention feature layer to obtain the sequence relation features output by the feature fusion layer.
Optionally, the sample image sequence is obtained by image preprocessing including at least one of random cropping, random rotation, and random brightness variation.
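The three preprocessing operations named above can be sketched as follows. This is a minimal NumPy illustration; the 90% crop ratio, the restriction of rotation to 90-degree steps, and the ±20% brightness range are assumed values for the sketch, not parameters taken from the patent.

```python
import numpy as np

def augment(img, rng):
    """Apply the three augmentations the text names: random crop,
    random rotation (limited here to 90-degree steps for simplicity),
    and random brightness scaling. `img` is an HxWxC float array in [0, 1]."""
    h, w, _ = img.shape
    ch, cw = int(h * 0.9), int(w * 0.9)                   # crop to 90% of each side
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    img = img[top:top + ch, left:left + cw]
    img = np.rot90(img, k=rng.integers(0, 4))             # random 90-degree rotation
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)  # brightness jitter
    return img

rng = np.random.default_rng(0)
out = augment(np.full((100, 100, 3), 0.5, dtype=np.float32), rng)
print(out.shape)  # (90, 90, 3)
```

In a real pipeline the same random parameters would be drawn once per sample image sequence so that every frame of a sequence receives an identical transformation.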
In a second aspect, an embodiment of the present invention provides a blast furnace tuyere monitoring device, including:
the sequence extraction unit is used for obtaining video data of the blast furnace tuyere and extracting an image sequence from the video data;
and the tuyere monitoring unit is used for carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor, a communication interface, a memory, and a bus, where the processor, the communication interface, and the memory are in communication with each other via the bus, and the processor may invoke logic commands in the memory to perform the steps of the method as provided in the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method as provided by the first aspect.
According to the blast furnace tuyere monitoring method, device, electronic equipment, and storage medium provided by embodiments of the invention, applying the image sequence to tuyere monitoring displays the actual condition of the blast furnace tuyere comprehensively and dynamically, effectively avoiding false alarms caused by the small differences between the tuyere's different working states. Monitoring that combines each frame with its relation to the whole image sequence further strengthens the robustness and accuracy of blast furnace tuyere monitoring.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for monitoring a tuyere of a blast furnace according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for operating a blast furnace tuyere monitoring model according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a method for extracting sequence relation features according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a non-local feature layer according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a non-local feature layer according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of the structure of an attention feature layer according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a spatial attention feature layer according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a blast furnace tuyere monitoring model according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a blast furnace tuyere monitoring device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, tuyere monitoring generally requires collecting tuyere videos, which technicians inspect one by one with the naked eye to judge the working condition of each tuyere. Because a blast furnace has many tuyeres, and the differences shown in video under different working states are small, such monitoring depends entirely on the technicians' expertise and concentration. Alternatively, images of the blast furnace tuyere can be captured and analyzed automatically to perform tuyere monitoring. However, because the positions of different tuyeres are not uniform, deformation during coal injection is large, and the differences between abnormal categories are small, traditional image classification methods cannot accurately distinguish tuyere images under different working states. The accuracy and reliability of the monitoring results therefore cannot be guaranteed, which greatly affects the stability of the blast furnace.
Aiming at the problems, the embodiment of the invention provides a blast furnace tuyere monitoring method. Fig. 1 is a schematic flow chart of a blast furnace tuyere monitoring method according to an embodiment of the present invention, as shown in fig. 1, the method includes:
step 110, obtaining video data of the blast furnace tuyere, and extracting an image sequence from the video data.
The video data of the blast furnace tuyere is the data obtained by video acquisition of the tuyere to be monitored. It may be tuyere video data of the current period acquired in real time, or tuyere video data of a historical period acquired and stored in advance; the embodiment of the present invention does not specifically limit this.
Considering that in a real production scene the working state of a blast furnace tuyere always changes continuously, a single image cannot reflect this continuity, and different working states are difficult to distinguish accurately from it; an image sequence is therefore extracted from the video data. The extraction may take one frame from the video data at every preset extraction interval, or may take a preset number of frames at intervals to form the image sequence; the embodiment of the present invention does not specifically limit this. For example, the extraction interval may be set to 8 frames, so that one frame is extracted from the video data every 8 frames to construct the image sequence.
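The sampling strategy described above reduces to choosing frame indices. A minimal sketch follows; the helper name `sample_frame_indices` and the optional `seq_len` cap are illustrative assumptions, and in practice the selected frames would be read from the video with a library such as OpenCV.

```python
def sample_frame_indices(total_frames, interval=8, seq_len=None):
    """Indices of the frames to extract: one frame every `interval` frames,
    optionally capped at `seq_len` frames. Mirrors the text's example of
    taking one frame every 8 frames to build the image sequence."""
    idx = list(range(0, total_frames, interval))
    return idx[:seq_len] if seq_len is not None else idx

print(sample_frame_indices(80, 8))             # [0, 8, 16, 24, 32, 40, 48, 56, 64, 72]
print(sample_frame_indices(100, 8, seq_len=5)) # [0, 8, 16, 24, 32]
```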
Step 120, performing blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
Specifically, after the image sequence is obtained from the video data, the blast furnace tuyere can be monitored according to the image sequence. Compared with the traditional scheme of monitoring the tuyere through a single image, synthesizing multiple images displays the actual condition of the blast furnace tuyere comprehensively and dynamically, effectively avoiding false alarms caused by the small differences between the tuyere's different working states.
When the multiple frames of the image sequence are combined for blast furnace tuyere monitoring, the relation between each frame and the whole image sequence must be analyzed in order to balance each frame's contribution to the monitoring result. Combining each frame of the sequence with its relation to the whole sequence highlights the importance of the frames whose features most clearly distinguish working states, and strengthens the representation of how those distinguishing features change across the sequence, for example whether the background of the sequence contains a shadow caused by insufficient coal combustion and how large that shadow is. This realizes blast furnace tuyere monitoring with high robustness and high accuracy, yielding the blast furnace tuyere monitoring result.
The relationship between each frame image and the whole image sequence may be represented by a time sequence relationship between each frame image and the whole image sequence, and may also be represented by a relationship between a local image feature in each frame image and a global image feature of the whole image sequence, for example, a relationship between a background texture feature in each frame image and a global image feature of the whole image sequence, which is not particularly limited in the embodiment of the present invention.
The blast furnace tuyere monitoring result may be normal or abnormal, and the abnormality may be a large falling block or a small falling block, both caused by insufficient coal combustion: a large falling block appears as a large shadow in the background part of the image, while a small falling block appears as a small shadow in the background part.
According to the method provided by the embodiment of the invention, applying the image sequence to blast furnace tuyere monitoring displays the actual condition of the tuyere comprehensively and dynamically, effectively avoiding false alarms caused by the small differences between the tuyere's different working states. Monitoring that combines each frame with its relation to the whole image sequence further strengthens the robustness and accuracy of blast furnace tuyere monitoring.
Based on the above embodiment, step 120 specifically includes:
inputting the image sequence into a blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the blast furnace tuyere monitoring model;
the blast-furnace tuyere monitoring model is trained based on a sample image sequence and a corresponding sample monitoring result, and is used for carrying out blast-furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
Specifically, after the image sequence is obtained, blast furnace tuyere monitoring of the image sequence can be realized through a pre-trained blast furnace tuyere monitoring model. The model analyzes the relation between each frame of the input image sequence and the whole sequence, performs blast furnace tuyere monitoring on this basis, and outputs the blast furnace tuyere monitoring result.
Before step 120 is executed, the blast furnace tuyere monitoring model may be trained in advance, specifically as follows: first, a large number of sample image sequences are collected, and the sample monitoring result corresponding to each sequence is labeled; then an initial model is trained on the sample image sequences and sample monitoring results, yielding the blast furnace tuyere monitoring model.
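The training procedure above can be sketched as follows. This is a hedged illustration, not the patent's actual model: the class name `TuyereModel`, its tiny backbone, and the three-class label set (0 = normal, 1 = large falling block, 2 = small falling block) are all assumptions for the sketch.

```python
import torch
import torch.nn as nn

class TuyereModel(nn.Module):
    """Toy stand-in for the blast furnace tuyere monitoring model:
    a shared 2D backbone encodes each frame, and frame features are
    averaged over the sequence before classification."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, num_classes)

    def forward(self, seq):                      # seq: (B, T, 3, H, W)
        b, t = seq.shape[:2]
        feats = self.backbone(seq.flatten(0, 1)).view(b, t, -1)
        return self.head(feats.mean(dim=1))      # pool over the sequence

model = TuyereModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

seqs = torch.randn(4, 10, 3, 32, 32)             # a toy batch of image sequences
labels = torch.tensor([0, 1, 2, 0])              # assumed sample monitoring results
logits = model(seqs)
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()                                       # one training step
print(logits.shape)
```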
According to the method provided by the embodiment of the invention, a neural network model is applied to extract the relation between each frame of the image sequence and the whole sequence, realizing rapid and accurate automatic blast furnace tuyere monitoring.
Currently, neural network models applied to video classification usually adopt one of two schemes: a 3D CNN (3D Convolutional Neural Network, three-dimensional convolutional neural network), or a 3D CNN combined with an LSTM (Long Short-Term Memory network). The 3D CNN extracts three-dimensional features from the sequence of images in the video, and its large number of three-dimensional convolution kernels makes the computation enormous and the memory footprint large.
In view of this problem, based on any one of the above embodiments, the blast furnace tuyere monitoring model includes a semantic feature layer, a relational feature layer and a result output layer, and fig. 2 is a schematic flow chart of an operation method of the blast furnace tuyere monitoring model according to an embodiment of the present invention, as shown in fig. 2, step 120 specifically includes:
step 121, inputting each frame of image in the image sequence to a semantic feature layer of the blast furnace tuyere monitoring model to obtain semantic features of each frame of image output by the semantic feature layer.
Specifically, the semantic feature layer is used for extracting semantic features of each frame of image in the image sequence respectively, so that the semantic features of each frame of image are output. Here, for any frame image, the semantic feature of the frame image is used to represent image semantic information of the frame image, and specifically may be boundary information, color information, texture information, and the like of the frame image, which is not specifically limited in the embodiment of the present invention.
Here, the semantic feature layer extracts semantic features from each two-dimensional frame, so no three-dimensional convolution kernels are needed and the amount of computation is greatly reduced. The semantic feature layer may reuse the feature extraction part of a pre-trained natural image classification model; for example, the features output by the penultimate layer of a ResNet-34 (residual network) trained for natural image classification may be used as the semantic features of each image.
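A hedged sketch of per-frame semantic feature extraction follows. A tiny convolutional backbone stands in for the truncated ResNet-34 the text suggests (with torchvision that would be roughly `nn.Sequential(*list(torchvision.models.resnet34(weights=...).children())[:-1])`); the point illustrated is that a 2D backbone applied frame by frame needs no three-dimensional convolution kernels.

```python
import torch
import torch.nn as nn

# Stand-in backbone for the semantic feature layer; in practice this would
# be the penultimate layer of a pretrained ResNet-34 as the text suggests.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # one 32-d vector per frame
)

seq = torch.randn(2, 10, 3, 64, 64)              # (batch, frames, C, H, W)
b, t = seq.shape[:2]
frame_feats = backbone(seq.flatten(0, 1))        # treat every frame as a batch item
frame_feats = frame_feats.view(b, t, -1)         # regroup frames into sequences
print(frame_feats.shape)                         # torch.Size([2, 10, 32])
```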
And step 122, inputting the semantic features of each frame of image into a relation feature layer of the blast furnace tuyere monitoring model to obtain a sequence relation feature output by the relation feature layer, wherein the sequence relation feature comprises the relation between each frame of image and the whole image sequence.
Specifically, the relation feature layer extracts, from the semantic features of each frame, features that can represent the relation between each frame and the whole image sequence, and combines them with the per-frame semantic features to obtain and output the sequence relation features. The sequence relation features can represent the temporal position of each frame within the whole sequence, the relation between the local image features of each frame and the global image features of the whole sequence, and so on, thereby comprehensively representing the relation between the image sequence and each of its frames across these dimensions.
And step 123, inputting the sequence relation characteristics into a result output layer of the blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the result output layer.
Specifically, the result output layer is used for analyzing and monitoring the working state of the blast furnace tuyere reflected by the image sequence according to the input sequence relation characteristics, and outputting a blast furnace tuyere monitoring result.
According to the method provided by the embodiment of the invention, the sequence relation features are obtained through a semantic feature layer and a relation feature layer. Compared with extracting image sequence features directly through a 3D CNN, this reduces the computation needed for feature extraction while effectively strengthening the feature extraction capability of the blast furnace tuyere monitoring model, improving the accuracy of the subsequent monitoring results.
Based on any one of the above embodiments, fig. 3 is a flowchart of a sequence relation feature extraction method according to an embodiment of the present invention, as shown in fig. 3, step 122 specifically includes:
step 1221, inputting the semantic features of each frame of image to the non-local feature layer of the relational feature layer to obtain the sequence non-local features output by the non-local feature layer.
The non-local feature layer is used for establishing a relation between each pixel point of each frame of the input image, so that semantic features of each pixel point in each frame of the image which is originally input are enriched into a sequence non-local feature capable of representing the semantic features of each pixel point in each frame of the image and the relation between each pixel point in each frame of the image, and the relation between each pixel point in each frame of the image can be particularly a spatial, time sequence or space-time relation.
The sequence non-local features obtained in this way pay more attention to the texture information of the tuyere background in each frame of the image sequence, and to the relation between this texture information and the overall information of the image sequence.
Step 1222, inputting the sequence non-local feature to the attention feature layer of the relation feature layer to obtain the sequence relation feature output by the attention feature layer.
Specifically, the attention feature layer is used for carrying out attention transformation on the input sequence non-local features, so that features which can be used for distinguishing the working states of the blast furnace tuyere in the sequence non-local features are further highlighted, time sequence connection among each frame of images is enhanced, and further the accuracy of blast furnace tuyere monitoring is improved.
According to the method provided by the embodiment of the invention, the relation between the local features and the global features in the image sequence is established through the non-local feature layer, and feature strengthening and timing connection are further carried out through the attention feature layer, which ensures that the sequence relation features extracted by the blast furnace tuyere monitoring model can distinguish the working states of the blast furnace tuyere, and improves the accuracy and reliability of blast furnace tuyere monitoring.
Based on any of the above embodiments, fig. 4 is a schematic structural diagram of a non-local feature layer according to an embodiment of the present invention, as shown in fig. 4, step 1221 specifically includes:
inputting the semantic features of each frame of image to a feature combination layer of the non-local feature layer to obtain sequence semantic features output by the feature combination layer;
inputting the sequence semantic features to a non-local feature correlation layer of the non-local feature layer to obtain non-local correlation features output by the non-local feature correlation layer;
and inputting the sequence semantic features and the non-local associated features into a residual error connecting layer of the non-local feature layer to obtain the sequence non-local features output by the residual error connecting layer.
Specifically, the feature combination layer is used for combining the semantic features of each frame of image, so as to obtain a sequence semantic feature capable of reflecting the semantic features of each frame of image in the image sequence. The feature combination layer can directly splice the semantic features of each frame of image, can directly carry out dimension superposition on the semantic features of each frame of image, and can further carry out operations such as dimension adjustment or convolution on the basis of dimension superposition so as to obtain sequence semantic features.
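An illustrative sketch of the simplest variant described above — direct dimension superposition of the per-frame semantic features into one T × H × W × C sequence semantic feature (the shapes here are chosen arbitrarily for the example, not taken from the embodiment):

```python
import numpy as np

def combine_frame_features(frame_features):
    """Stack T per-frame semantic feature maps (each H x W x C) along a
    new leading time axis into one sequence semantic feature T x H x W x C."""
    return np.stack(frame_features, axis=0)

# Example: 4 frames of 8 x 8 feature maps with 1024 feature channels
frames = [np.random.default_rng(i).standard_normal((8, 8, 1024)).astype(np.float32)
          for i in range(4)]
X = combine_frame_features(frames)
print(X.shape)  # (4, 8, 8, 1024)
```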
The non-local feature association layer is used for excavating the relation among all pixel points in each frame of image on the basis of the sequence semantic features, so that the sequence non-local features of the whole image sequence are obtained. The sequence non-local features here may be used to characterize the relationship between the local features of each frame of image and the global features of the image sequence.
The residual connection layer is used for fusing the sequence semantic features and the non-local associated features, so that the finally output sequence non-local features can reflect the relation between the local features of each frame of image and the global features of the image sequence, the semantic features of each frame of image are not omitted due to the extraction of the relation, and the integrity of the sequence non-local features is further ensured.
Based on any of the above embodiments, fig. 5 is a schematic structural diagram of a non-local feature layer according to another embodiment of the present invention. As shown in fig. 5, X is the sequence semantic feature obtained by combining the semantic features of each frame of image at the feature combination layer, and its size may be expressed as T×H×W×1024, where T is the number of images in the image sequence, H and W are the height and width of one frame of image, and 1024 is the feature dimension.
At the non-local feature correlation layer, the sequence semantic feature X is respectively passed through three 1×1×1 3D convolution kernels θ, φ and g, obtaining the corresponding convolution features W_θX, W_φX and g(X), which further reflect the sequence semantic feature X in different feature dimensions. The sizes of W_θX, W_φX and g(X) are each T×H×W×512. Thereupon, the convolution features W_θX, W_φX and g(X) are combined to obtain the non-local associated features, which may be realized by the following formula:

y = softmax( (W_θX) · (W_φX)ᵀ ) · g(X)

where W_θ and W_φ are the parameters corresponding to the convolution kernels θ and φ, and (W_φX)ᵀ is the transpose of W_φX. That is, the convolution feature W_θX is multiplied by the transposed convolution feature (W_φX)ᵀ, a softmax calculation is performed on the product, and the result is multiplied by the convolution feature g(X) to obtain the feature y; finally, a 3D convolution is applied to the feature y to obtain the non-local associated features. The size of the feature y may be expressed as T×H×W×512.
At the residual connection layer, the non-local associated features and the sequence semantic feature X are added to obtain the sequence non-local feature Z, the size of which may be expressed as T×H×W×1024.
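The computation above can be sketched in NumPy. This is an illustrative sketch, not the patented implementation: the 1×1×1 convolutions are modeled as per-position linear projections with random placeholder weights instead of trained parameters, and the shapes are kept small:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(X, seed=0):
    """Sketch of the non-local block on a sequence semantic feature X of
    shape (T, H, W, C): theta/phi/g halve the channel dimension, pairwise
    position affinities are softmax-normalized, and a final projection W_z
    restores C before the residual addition Z = W_z(y) + X."""
    rng = np.random.default_rng(seed)
    T, H, W, C = X.shape
    n, c2 = T * H * W, C // 2
    flat = X.reshape(n, C)                        # flatten all positions
    W_theta, W_phi, W_g, W_z = (rng.standard_normal(s) * 0.01 for s in
                                [(C, c2), (C, c2), (C, c2), (c2, C)])
    theta, phi, g = flat @ W_theta, flat @ W_phi, flat @ W_g
    attn = softmax(theta @ phi.T, axis=-1)        # (n, n) position affinities
    y = attn @ g                                  # (n, C//2)
    return (y @ W_z).reshape(T, H, W, C) + X      # residual connection

X = np.random.default_rng(1).standard_normal((2, 4, 4, 16))
Z = non_local_block(X)
print(Z.shape)  # (2, 4, 4, 16) — same size as the input, as in the text
```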
In neural network models generally applied to video classification, the 3D CNN + LSTM scheme suffers from an LSTM structure that is difficult to parallelize and slow to converge; an unsuitable learning rate may even cause non-convergence, so the difficulty of tuning and optimization is high. In view of this problem, based on any of the above embodiments, fig. 6 is a schematic structural diagram of an attention feature layer provided in an embodiment of the present invention, and as shown in fig. 6, step 1222 specifically includes:
inputting the sequence non-local features to a spatial attention feature layer of the attention feature layer to obtain spatial attention features output by the spatial attention feature layer;
inputting the sequence non-local features to a channel attention feature layer of the attention feature layer to obtain channel attention features output by the channel attention feature layer;
and inputting the spatial attention features and the channel attention features into a feature fusion layer of the attention feature layer to obtain the sequence relation features output by the feature fusion layer.
Specifically, at the attention feature layer, further attention feature extraction can be performed on the sequence non-local features from two dimensions of spatial attention and channel attention, respectively:
the spatial attention feature layer uses the relation between any two pixel points in the sequence non-local features to mutually enhance the expression of their respective features, thereby obtaining spatial attention features that further highlight both the image-level relations between pixel points within each frame and the timing-level relations between pixel points in different frames. Any two pixel points here may be two pixel points on the same frame of image or two pixel points on different frames of image.
The channel attention feature layer is used for mining the interdependence relation between different channels of the sequence non-local features to enhance the expression of the interdependent channel features, so that the channel attention features focusing on different channel information are obtained.
On the basis, the spatial attention features and the channel attention features can be fused through a feature fusion layer, so that the sequence relation features are obtained. The sequence relation feature may be obtained by adding the spatial attention feature and the channel attention feature according to pixel positions.
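As an illustrative sketch (with the simplifying assumption that the query/key/value projections are identities, i.e. the affinities are computed from X itself), the two attention branches and the pixel-wise fusion can be written for a flattened feature X of shape N positions × C channels:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(X):
    """Position-to-position affinities (N x N) re-weight and residually
    enhance the spatial responses."""
    S = softmax(X @ X.T, axis=-1)
    return X + S @ X

def channel_attention(X):
    """Channel-to-channel affinities (C x C) re-weight and residually
    enhance the channel responses."""
    A = softmax(X.T @ X, axis=-1)
    return X + X @ A

def fuse(X):
    # Feature fusion layer: element-wise sum of the two attention branches
    return spatial_attention(X) + channel_attention(X)

X = np.random.default_rng(0).standard_normal((12, 16))
print(fuse(X).shape)  # (12, 16)
```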
According to the method provided by the embodiment of the invention, through the combined application of the spatial attention mechanism and the channel attention mechanism, the expression of time sequence information and channel information in the sequence non-local characteristics is further enhanced, and compared with the traditional LSTM model, the application of the attention characteristic layer does not need to consider the problem of parallelization training, so that the convergence speed and the convergence effect of the blast furnace tuyere monitoring model training are greatly improved.
Based on any of the above embodiments, fig. 7 is a schematic structural diagram of a spatial attention feature layer provided by the embodiment of the present invention. As shown in fig. 7, the features Q, K and V are all obtained by convolving the sequence non-local features. On this basis, the feature Q is transposed and multiplied by the feature K, and the result is normalized through a softmax operation (denoted by the corresponding symbol in fig. 7) to obtain an attention map S of each position with respect to the other positions, where S can represent the weight of each pixel position in space. The dimension of the attention map S is then adjusted through a Reshape operation, S is multiplied by the feature V, and the product is added to the sequence non-local features, thereby realizing the spatial enhancement of the sequence non-local features and obtaining the spatial attention features.
Wherein, the attention map S may be expressed by the following formula:

S(x, y) = exp( K(x) · Q(y)ᵀ ) / Σₓ₌₁ᴺ exp( K(x) · Q(y)ᵀ )

where S(x, y) is the influence of the x-th pixel position on the y-th pixel position in the attention map S, K(x) is the feature K at the x-th pixel position, Q(y)ᵀ is the transpose of the feature Q at the y-th pixel position, and N is the total number of pixel positions.
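Under the assumption that K and Q are stored as N × d matrices of per-position features (N pixel positions, d channels), the formula above amounts to a column-wise softmax and can be checked numerically:

```python
import numpy as np

def attention_map(K, Q):
    """S[x, y] = exp(K[x] . Q[y]) / sum over x of exp(K[x] . Q[y]):
    the influence of pixel position x on pixel position y, normalized
    over all N source positions x."""
    logits = K @ Q.T                              # (N, N): rows x, columns y
    logits -= logits.max(axis=0, keepdims=True)   # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=0, keepdims=True)

K = np.random.default_rng(0).standard_normal((6, 8))
Q = np.random.default_rng(1).standard_normal((6, 8))
S = attention_map(K, Q)
print(np.allclose(S.sum(axis=0), 1.0))  # True: each column is a distribution over x
```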
Based on any of the above embodiments, the sample image sequence is obtained by image preprocessing including at least one of random cropping, random rotation, and random brightness variation.
Specifically, before model training of the blast furnace tuyere monitoring model, image preprocessing may be performed on each frame of sample image in the sample image sequence extracted from the sample video: random cropping at a fixed size, random angle rotation, random brightness adjustment, or a random combination of these three preprocessing modes. This simulates the condition of an uncertain blast furnace tuyere position, so that the blast furnace tuyere monitoring model obtained by training on the sample image sequence has better generalization capability and stronger robustness.
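A minimal sketch of these three preprocessing modes using only NumPy; the crop ratio (7/8 of each side), the brightness range (±20 %) and the restriction of rotation to multiples of 90° are simplifying assumptions of this example, not values from the embodiment:

```python
import numpy as np

def augment(img, rng=None):
    """Apply random crop, random rotation and random brightness variation
    to one H x W x 3 uint8 sample frame."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    ch, cw = h * 7 // 8, w * 7 // 8               # fixed crop size
    y0 = int(rng.integers(0, h - ch + 1))         # random crop position
    x0 = int(rng.integers(0, w - cw + 1))
    out = img[y0:y0 + ch, x0:x0 + cw]
    out = np.rot90(out, k=int(rng.integers(0, 4)))  # random rotation
    gain = rng.uniform(0.8, 1.2)                  # random brightness gain
    return np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 256, (32, 48, 3), dtype=np.uint8)
aug = augment(frame, np.random.default_rng(1))
print(aug.dtype)  # uint8
```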
Based on any of the above embodiments, fig. 8 is a schematic structural diagram of a blast furnace tuyere monitoring model according to an embodiment of the present invention, and as shown in fig. 8, the blast furnace tuyere monitoring model includes a semantic feature layer, a non-local feature layer, an attention feature layer, and a result output layer, where the attention feature layer specifically includes a spatial attention feature layer, a channel attention feature layer, and a feature fusion layer.
The semantic feature layer is used for extracting the semantic features of each frame of image in the image sequence; on the basis of these semantic features, the non-local feature layer establishes relations between the pixel points of each frame of image, thereby obtaining sequence non-local features capable of representing both the semantic features of each pixel point in each frame of image and the relations between the pixel points of each frame of image.
In the attention characteristic layer, the spatial attention characteristic layer mutually enhances the expression of the respective characteristics by utilizing the relation between any two pixel points in the sequence non-local characteristics, so as to obtain the spatial attention characteristics; the channel attention feature layer digs the interdependence relation among different channels of the non-local features of the sequence to enhance the expression of the interdependent channel features, thereby obtaining the channel attention features; the feature fusion layer fuses the space attention feature and the channel attention feature, thereby obtaining the sequence relation feature.
The result output layer analyzes and monitors the working state of the blast furnace tuyere reflected by the image sequence based on the sequence relation characteristics and outputs a blast furnace tuyere monitoring result.
Based on any of the above embodiments, fig. 9 is a schematic structural diagram of a blast furnace tuyere monitoring device according to an embodiment of the present invention, as shown in fig. 9, the device includes a sequence extracting unit 910 and a tuyere monitoring unit 920;
the sequence extracting unit 910 is configured to obtain video data of a tuyere of a blast furnace, and extract an image sequence from the video data;
the tuyere monitoring unit 920 is configured to perform blast furnace tuyere monitoring based on the image sequence and a relationship between each frame of image in the image sequence and the whole image sequence.
The device provided by the embodiment of the invention applies the image sequence to the blast furnace tuyere monitoring to comprehensively and dynamically display the actual condition of the blast furnace tuyere, thereby effectively avoiding the false alarm problem caused by small difference of different working states of the blast furnace tuyere. The blast furnace tuyere monitoring is carried out by combining the relation between each frame of image in the image sequence and the whole image sequence, so that the robustness and the accuracy of the blast furnace tuyere monitoring can be further enhanced.
Based on any of the above embodiments, the tuyere monitoring unit 920 is specifically configured to:
inputting the image sequence into a blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the blast furnace tuyere monitoring model;
the blast furnace tuyere monitoring model is trained based on a sample image sequence and a corresponding sample monitoring result, and is used for carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
Based on any of the above embodiments, the tuyere monitoring unit 920 includes:
the semantic extraction subunit is used for inputting each frame of image in the image sequence to a semantic feature layer of the blast furnace tuyere monitoring model to obtain semantic features of each frame of image output by the semantic feature layer;
the relation extraction subunit is used for inputting semantic features of each frame of image into a relation feature layer of the blast furnace tuyere monitoring model to obtain sequence relation features output by the relation feature layer, wherein the sequence relation features comprise the relation between each frame of image and the whole image sequence;
and the result output subunit is used for inputting the sequence relation characteristics to a result output layer of the blast furnace tuyere monitoring model to obtain the blast furnace tuyere monitoring result output by the result output layer.
Based on any of the above embodiments, the relationship extraction subunit includes:
a non-local extraction subunit, configured to input semantic features of each frame of image to a non-local feature layer of the relational feature layer, and obtain a sequence non-local feature output by the non-local feature layer;
and the attention extraction subunit is used for inputting the sequence non-local features into the attention feature layer of the relation feature layer to obtain the sequence relation features output by the attention feature layer.
Based on any of the above embodiments, the non-local extraction subunit is specifically configured to:
inputting the semantic features of each frame of image to a feature combination layer of the non-local feature layer to obtain sequence semantic features output by the feature combination layer;
inputting the sequence semantic features to a non-local feature correlation layer of the non-local feature layer to obtain non-local correlation features output by the non-local feature correlation layer;
and inputting the sequence semantic features and the non-local associated features to a residual error connecting layer of the non-local feature layer to obtain the sequence non-local features output by the residual error connecting layer.
Based on any of the above embodiments, the attention extraction subunit is specifically configured to:
inputting the sequence non-local features to a spatial attention feature layer of the attention feature layer to obtain spatial attention features output by the spatial attention feature layer;
inputting the sequence non-local features to a channel attention feature layer of the attention feature layer to obtain channel attention features output by the channel attention feature layer;
and inputting the spatial attention features and the channel attention features to a feature fusion layer of the attention feature layer to obtain the sequence relation features output by the feature fusion layer.
Based on any of the above embodiments, the sample image sequence is obtained by image preprocessing including at least one of random cropping, random rotation, and random brightness variation.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, as shown in fig. 10, the electronic device may include: a processor 1010, a communication interface (Communications Interface) 1020, a memory 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. Processor 1010 may invoke logic commands in memory 1030 to perform the following methods: acquiring video data of a blast furnace tuyere, and extracting an image sequence from the video data; and carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
In addition, the logic commands in the memory 1030 described above may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand alone product. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in the form of a software product stored in a storage medium, comprising several commands for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Embodiments of the present invention also provide a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, is implemented to perform the methods provided by the above embodiments, for example, comprising: acquiring video data of a blast furnace tuyere, and extracting an image sequence from the video data; and carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
The apparatus embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course may be implemented by means of hardware. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several commands for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A blast furnace tuyere monitoring method, characterized by comprising:
acquiring video data of a blast furnace tuyere, and extracting an image sequence from the video data;
based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence, carrying out blast furnace tuyere monitoring, wherein the relation between each frame of image in the image sequence and the whole image sequence comprises the time sequence relation of each frame of image in the whole image sequence, and the relation between the local image characteristic in each frame of image and the global image characteristic of the whole image sequence;
the relation between each frame of image and the whole image sequence is expressed as a sequence relation feature, wherein the sequence relation feature is obtained by extracting attention features of sequence non-local features from two dimensions of spatial attention and channel attention, and the sequence non-local features are used for representing the relation between the local features of each frame of image and the global features of the image sequence.
2. The method according to claim 1, wherein the blast furnace tuyere monitoring is performed based on the image sequence and a relationship between each frame of image in the image sequence and the whole image sequence, and specifically comprises:
inputting the image sequence into a blast furnace tuyere monitoring model to obtain a blast furnace tuyere monitoring result output by the blast furnace tuyere monitoring model;
the blast furnace tuyere monitoring model is trained based on a sample image sequence and a corresponding sample monitoring result, and is used for carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence.
3. The method for monitoring the blast-furnace tuyere according to claim 2, wherein the inputting the image sequence into the blast-furnace tuyere monitoring model to obtain the blast-furnace tuyere monitoring result output by the blast-furnace tuyere monitoring model specifically comprises:
inputting each frame of image in the image sequence to a semantic feature layer of the blast furnace tuyere monitoring model to obtain semantic features of each frame of image output by the semantic feature layer;
inputting semantic features of each frame of image into a relation feature layer of the blast furnace tuyere monitoring model to obtain sequence relation features output by the relation feature layer;
and inputting the sequence relation characteristics to a result output layer of the blast furnace tuyere monitoring model to obtain the blast furnace tuyere monitoring result output by the result output layer.
4. The blast-furnace tuyere monitoring method according to claim 3, wherein the inputting semantic features of each frame of image into a relational feature layer of the blast-furnace tuyere monitoring model to obtain the sequence relational features outputted by the relational feature layer specifically comprises:
inputting semantic features of each frame of image into a non-local feature layer of the relation feature layer to obtain a sequence non-local feature output by the non-local feature layer;
and inputting the sequence non-local features to the attention feature layer of the relation feature layer to obtain the sequence relation features output by the attention feature layer.
5. The method for monitoring the tuyere of a blast furnace according to claim 4, wherein the step of inputting semantic features of each frame of image into a non-local feature layer of the relational feature layer to obtain a sequence non-local feature outputted by the non-local feature layer comprises the following steps:
inputting the semantic features of each frame of image to a feature combination layer of the non-local feature layer to obtain sequence semantic features output by the feature combination layer;
inputting the sequence semantic features to a non-local feature correlation layer of the non-local feature layer to obtain non-local correlation features output by the non-local feature correlation layer;
and inputting the sequence semantic features and the non-local associated features to a residual error connecting layer of the non-local feature layer to obtain the sequence non-local features output by the residual error connecting layer.
6. The method for monitoring a tuyere of a blast furnace according to claim 4, wherein the step of inputting the sequential non-local features into the attention feature layer of the relationship feature layer to obtain the sequential relationship features outputted from the attention feature layer comprises the steps of:
inputting the sequence non-local features to a spatial attention feature layer of the attention feature layer to obtain spatial attention features output by the spatial attention feature layer;
inputting the sequence non-local features to a channel attention feature layer of the attention feature layer to obtain channel attention features output by the channel attention feature layer;
and inputting the spatial attention features and the channel attention features to a feature fusion layer of the attention feature layer to obtain the sequence relation features output by the feature fusion layer.
7. The blast furnace tuyere monitoring method according to any one of claims 2 to 6, wherein the sample image sequence is obtained by image preprocessing including at least one of random cropping, random rotation and random brightness variation.
8. A blast furnace tuyere monitoring device, characterized by comprising:
the sequence extraction unit is used for obtaining video data of the blast furnace tuyere and extracting an image sequence from the video data;
the blast furnace tuyere monitoring unit is used for carrying out blast furnace tuyere monitoring based on the image sequence and the relation between each frame of image in the image sequence and the whole image sequence, wherein the relation between each frame of image in the image sequence and the whole image sequence comprises a time sequence relation of each frame of image in the whole image sequence and a relation between local image characteristics in each frame of image and global image characteristics of the whole image sequence;
the relation between each frame of image and the whole image sequence is expressed as a sequence relation feature, wherein the sequence relation feature is obtained by extracting attention features of sequence non-local features from two dimensions of spatial attention and channel attention, and the sequence non-local features are used for representing the relation between the local features of each frame of image and the global features of the image sequence.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the blast furnace tuyere monitoring method according to any one of claims 1 to 7 when executing the program.
10. A non-transitory computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the blast furnace tuyere monitoring method according to any one of claims 1 to 7.
CN202010652956.5A 2020-07-08 2020-07-08 Blast furnace tuyere monitoring method and device, electronic equipment and storage medium Active CN111815604B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010652956.5A CN111815604B (en) 2020-07-08 2020-07-08 Blast furnace tuyere monitoring method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010652956.5A CN111815604B (en) 2020-07-08 2020-07-08 Blast furnace tuyere monitoring method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111815604A CN111815604A (en) 2020-10-23
CN111815604B true CN111815604B (en) 2023-07-28

Family

ID=72842620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010652956.5A Active CN111815604B (en) 2020-07-08 2020-07-08 Blast furnace tuyere monitoring method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111815604B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114015825B (en) * 2021-11-09 2022-12-06 上海交通大学 Method for monitoring abnormal state of blast furnace heat load based on attention mechanism

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255392A (en) * 2018-09-30 2019-01-22 百度在线网络技术(北京)有限公司 Video classification methods, device and equipment based on non local neural network
CN110458838A (en) * 2019-08-23 2019-11-15 讯飞智元信息科技有限公司 A kind of detection method of fault type, device, storage medium and equipment
CN110929780A (en) * 2019-11-19 2020-03-27 腾讯科技(深圳)有限公司 Video classification model construction method, video classification device, video classification equipment and media
CN111028166A (en) * 2019-11-30 2020-04-17 温州大学 Video deblurring method based on iterative neural network

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9792534B2 (en) * 2016-01-13 2017-10-17 Adobe Systems Incorporated Semantic natural language vector space
JP6897585B2 (en) * 2018-01-24 2021-06-30 Konica Minolta, Inc. Radiation image processing apparatus, scattered radiation correction method, and program
CN109711277B (en) * 2018-12-07 2020-10-27 Institute of Automation, Chinese Academy of Sciences Behavior feature extraction method, system and device based on spatio-temporal frequency-domain hybrid learning
CN111027519B (en) * 2019-12-26 2023-08-01 iFlytek Zhiyuan Information Technology Co., Ltd. Blast furnace tuyere monitoring method and device
CN111274995B (en) * 2020-02-13 2023-07-14 Tencent Technology (Shenzhen) Co., Ltd. Video classification method, apparatus, device and computer-readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jeff Donahue et al., "Long-Term Recurrent Convolutional Networks for Visual Recognition and Description", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, Vol. 39, No. 4, full text. *

Also Published As

Publication number Publication date
CN111815604A (en) 2020-10-23

Similar Documents

Publication Publication Date Title
US10769487B2 (en) Method and device for extracting information from pie chart
CN108197618B (en) Method and device for generating human face detection model
JP2022526513A (en) Video frame information labeling methods, appliances, equipment and computer programs
CN110929622A (en) Video classification method, model training method, device, equipment and storage medium
WO2022077978A1 (en) Video processing method and video processing apparatus
CN111228821B (en) Method, device and equipment for intelligently detecting wall-penetrating plug-in and storage medium thereof
CN111309222B (en) Sliding block notch positioning and dragging track generation method for sliding block verification code
CN108764176A (en) A kind of action sequence recognition methods, system and equipment and storage medium
CN111310156B (en) Automatic identification method and system for slider verification code
CN111310155B (en) System architecture for automatic identification of slider verification code and implementation method
CN111815604B (en) Blast furnace tuyere monitoring method and device, electronic equipment and storage medium
CN112417947A (en) Method and device for optimizing key point detection model and detecting face key points
CN111950457A (en) Oil field safety production image identification method and system
CN111402156A (en) Restoration method and device for smear image, storage medium and terminal equipment
CN108921138B (en) Method and apparatus for generating information
CN113538254A (en) Image restoration method and device, electronic equipment and computer readable storage medium
CN117037244A (en) Face security detection method, device, computer equipment and storage medium
CN111314665A (en) Key video segment extraction system and method for video post-scoring
CN113516298A (en) Financial time sequence data prediction method and device
Tran et al. Predicting Media Memorability Using Deep Features with Attention and Recurrent Network.
CN114093027A (en) Dynamic gesture recognition method and device based on convolutional neural network and readable medium
CN114387443A (en) Image processing method, storage medium and terminal equipment
WO2024104068A1 (en) Video detection method and apparatus, device, storage medium, and product
US20240212392A1 (en) Determining inconsistency of local motion to detect edited video
CN117058599B (en) Ship lock operation data processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant