CN114743152A - Automatic extraction method and system for video key frames of blast furnace burden surface

Automatic extraction method and system for video key frames of blast furnace burden surface

Info

Publication number
CN114743152A
CN114743152A (application CN202210520149.7A)
Authority
CN
China
Prior art keywords
charge level
image
pixel
video
representing
Prior art date
Legal status
Pending
Application number
CN202210520149.7A
Other languages
Chinese (zh)
Inventor
蒋朝辉
黄建才
桂卫华
潘冬
周科
易遵辉
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN202210520149.7A priority Critical patent/CN114743152A/en
Publication of CN114743152A publication Critical patent/CN114743152A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The automatic extraction method and system for blast furnace charge level video key frames collect the charge level video of the blast furnace smelting process, locate the salient region of the charge level image based on boundary prior and the image feature space, extract accurate feature points in the salient region, and calculate the pixel displacement between two adjacent frames with an optical flow method. A method fusing local density maxima with GMM clustering is then proposed to identify the charge level state. Finally, the video acquired by the high-temperature industrial endoscope is divided into different periods according to the change in image feature point density, and a key frame is extracted for each burden-distribution period based on the charge level state recognition result. The method can accurately identify the charge level state, eliminate the redundant information of the charge level video, and automatically extract key frames with a stable central gas flow and obvious image features from the blast furnace charge level video. It is also applicable to the automatic key frame extraction of other videos with periodic variation and non-uniform image quality.

Description

Automatic extraction method and system for video key frames of blast furnace burden surface
Technical Field
The invention mainly relates to the technical field of blast furnace smelting, in particular to a blast furnace charge level video key frame automatic extraction method and system.
Background
The blast furnace is a large closed black-box reaction vessel and the key equipment for producing molten iron. In the blast furnace ironmaking process, burden (iron ore, limestone, coke and the like) is charged into the furnace from the top in a certain proportion, while hot air is blown in through the tuyeres at the bottom. Under the combined action of the buoyancy of the hot gas and its own gravity, the burden forms a charge level that varies within a certain height range at the furnace top. The coke in the burden combusts at the tuyeres to generate high-temperature reducing gas, the iron-bearing ore is reduced to iron as the gas ascends, and molten iron forms in the high-temperature environment and is tapped from the iron notch. Obtaining clear blast furnace charge level video is of great significance for monitoring the operating state in the furnace in real time and avoiding abnormal working conditions. However, because blast furnace burden distribution is periodic and intermittent, the collected charge level video also exhibits a periodically changing pattern. Under normal working conditions the charge level shape changes slowly, so the video within the same period is repetitive and contains much redundant information. In addition, owing to interference from the charging burden and the strong dust intermittently ejected from the furnace top, the image quality of the video often differs greatly between periods.
Compared with charge level images occluded by burden, dust and the like, clear and stable charge level images provide field personnel with more accurate and reliable information for timely grasping the operating condition in the furnace, the change of the charge level morphology and the distribution of the gas flow. However, when the blast furnace charge level video is acquired, field personnel need to spend considerable time locating the key information in the video, and analysing every frame in later processing greatly reduces efficiency. A blast furnace charge level video key frame is a clear image sequence with a stable central gas flow and obvious features; research on its automatic extraction can quickly screen out high-quality, representative video frames from a large amount of charge level video, greatly improving the precision and efficiency of later video processing.
The extraction of video key frames is an important means of reducing video redundancy and eliminating invalid repeated information, and is widely applied in video retrieval, video compression and storage, video summarisation, video classification, industrial video monitoring and other fields. Existing video key frame extraction methods can be classified into image feature-based methods, shot detection-based methods, clustering-based methods, and motion information-based methods. Image feature-based methods compute differences in low-level features such as color, texture, brightness and motion and compare them with a set threshold; shot detection-based methods divide a video into several shots and select the first frame, the last frame and several fixed middle frames as key frames; clustering-based methods first group similar frames into the same class with a clustering algorithm and then select representative key frames from the different classes; motion information-based methods detect images containing moving objects in the video. Considering the harsh in-furnace environment, the periodic shape change of the blast furnace charge level video and the non-uniform image quality, automatic key frame extraction for the blast furnace charge level is more complicated than key frame extraction in ordinary scenes, and it has important industrial value for timely monitoring the operating state of the blast furnace smelting process.
Chinese patent CN114023345A "a method for extracting key frames of sintering machine tail based on audio information" discloses a method for extracting key frames of sintering machine tail by using audio information, which adopts wiener filtering and spectral subtraction to denoise the audio signals of the sintering machine, extracts the abrupt change position in the audio as the position of the dropped frame, and uses the next frame of the dropped frame as the dropped frame to realize the extraction of key frames of sintering machine tail. The method mainly utilizes the audio information of the sinter falling from the tail of the sintering machine, and is not suitable for extracting the key frame of the blast furnace charge level video.
Chinese patent CN107832694A "a video key frame extraction algorithm" discloses a video key frame extraction algorithm. The algorithm calculates the width, height and characteristic information of an effective image area of a certain frame in the current input video stream, compares the difference with the difference of the previous frame and the similarity of all frames in a cache area, obtains a difference frame through a set threshold value, counts the variance value of the difference frame and outputs a key frame. The method mainly judges whether the video frames are key frames according to the image difference of the video frames, a threshold value needs to be set in advance, and the method cannot be applied to extraction of the high-dynamic-change blast furnace charge level video key frames.
Disclosure of Invention
The method and the system for automatically extracting the key frame of the blast furnace burden surface video solve the technical problem that the clear key frame with obvious image characteristics cannot be automatically extracted from the blast furnace burden surface video which is periodically changed and has non-uniform image quality in the prior art.
In order to solve the technical problem, the automatic extraction method of the blast furnace burden surface video key frame provided by the invention comprises the following steps:
acquiring a significance area of the charge level image;
extracting characteristic points of the salient region;
carrying out feature point matching on two adjacent frames of charge level images, and calculating the number of feature points and feature point optical flows of the two adjacent frames of charge level images;
obtaining the local density maxima of the distribution of the feature point number and average optical flow vector data based on kernel density estimation, and taking them as the initial clustering centers of a Gaussian mixture model, wherein the average optical flow vector is the average value of the optical flows of all the feature points in the charge level image;
based on the initial clustering centers, performing state clustering on the feature point optical flows of the charge level image by adopting a Gaussian mixture model to obtain the state of the charge level, wherein the state of the charge level comprises a fast sinking state, an unstable state, a stable state and a slow sinking state;
and dividing the material distribution period according to the density change condition of the feature points of the material surface image, and extracting key frames in videos of different material distribution periods based on the state of the material surface.
Further, acquiring the salient region of the material surface image comprises:
dividing the charge level image into a preset number of super-pixel blocks through SLIC clustering;
calculating the boundary connectivity of the superpixel blocks and the boundary;
obtaining the probability that the superpixel block belongs to the background according to the boundary connectivity;
acquiring a significant value of the superpixel block according to the probability that the superpixel block belongs to the background;
and acquiring a saliency area of the charge level image according to the saliency value of the super pixel block.
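The boundary-prior steps above (boundary connectivity of each super-pixel block, its background probability, and a per-block saliency value) can be sketched as follows. This is an illustrative simplification, not the patent's implementation: the SLIC over-segmentation is taken as given in the `labels` map, the function name is hypothetical, and σ_b = 1 follows the value stated later in the description.

```python
import numpy as np

def boundary_connectivity_saliency(labels, sigma_b=1.0):
    """Per-superpixel saliency from boundary connectivity.

    BndCon(b) = L(b) / sqrt(A(b)), where L(b) counts the superpixel's
    pixels lying on the image border and A(b) is its total area;
    P(b) = 1 - exp(-BndCon(b)^2 / (2 * sigma_b^2)) is the background
    probability, and saliency is taken as its complement.
    """
    border = np.zeros_like(labels, dtype=bool)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    sal = {}
    for b in np.unique(labels):
        mask = labels == b
        L = np.count_nonzero(mask & border)   # length on the image boundary
        A = np.count_nonzero(mask)            # area of the superpixel block
        bnd_con = L / np.sqrt(A)
        p_bg = 1.0 - np.exp(-bnd_con**2 / (2 * sigma_b**2))
        sal[int(b)] = 1.0 - p_bg              # salient = unlikely background
    return sal

# Toy 6x6 label map: block 0 hugs the border, block 1 is interior.
labels = np.ones((6, 6), dtype=int)
labels[0, :] = labels[-1, :] = labels[:, 0] = labels[:, -1] = 0
sal = boundary_connectivity_saliency(labels)
assert sal[0] < sal[1]   # the border-touching block reads as background
```

The interior block never touches the border, so its boundary connectivity is zero and its saliency is exactly 1, matching the intuition that furnace-wall regions along the image edge belong to the background.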
Further, acquiring the saliency region of the material surface image according to the saliency value of the super-pixel block comprises:
based on the characteristic space of the charge level image, obtaining the total difference degree among pixels of the charge level image, wherein the calculation formula for obtaining the total difference degree among the pixels of the charge level image is as follows:
d(p_i, p_j) = (d_1(p_i, p_j) + d_2(p_i, p_j) + d_3(p_i, p_j)) / (1 + α · d_s(p_i, p_j))
wherein d_1(p_i, p_j), d_2(p_i, p_j) and d_3(p_i, p_j) respectively represent the Euclidean distances between pixel p_i and pixel p_j of the charge level image in the color, luminance and texture features, d_s(p_i, p_j) represents the Euclidean distance of the spatial features between them, and α represents a parameter, set to 3;
acquiring the significant value of the pixel according to the significant value of the super pixel block and the total difference degree between the pixels, wherein the calculation formula for acquiring the significant value of the pixel is as follows:
S_{p_i} = (1 - P(b_{p_i})) · (1/N) · Σ_{n=1..N} d(p_i, p_n)
wherein S_{p_i} represents the significant value of pixel p_i, d(p_i, p_n) represents the overall degree of difference between pixel p_i and pixel p_n, b_{p_i} represents the super-pixel block containing pixel p_i, N denotes the number of pixel points within the 8 × 8 neighborhood centered on pixel p_i, and P(b_{p_i}) represents the probability that the super-pixel block b_{p_i} belongs to the background;
and carrying out binarization processing on the charge level image according to the significant value of the pixel to obtain a significant area of the charge level image.
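The binarisation step can be sketched as below. The patent states only that the per-pixel saliency map is binarised; the adaptive mean-ratio threshold and the function name here are assumptions for illustration.

```python
import numpy as np

def salient_region_mask(sal_map, ratio=1.5):
    """Binarise a per-pixel saliency map into the salient region.

    Keeps pixels whose saliency exceeds `ratio` times the image-wide
    mean saliency (an assumed adaptive rule).  Returns a boolean mask
    of the charge-level salient area.
    """
    sal = np.asarray(sal_map, float)
    return sal > ratio * sal.mean()

sal = np.zeros((4, 4))
sal[1:3, 1:3] = 1.0               # a bright 2x2 salient patch
mask = salient_region_mask(sal)
assert mask.sum() == 4 and mask[1, 1] and not mask[0, 0]
```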
Further, extracting the feature points of the salient region includes:
down-sampling the area images corresponding to the salient areas to obtain area images with different sizes, arranging the area images with different sizes from large to small, and obtaining a Gaussian difference pyramid through Gaussian convolution and Gaussian difference;
comparing each pixel point of the image in the Gaussian difference pyramid with adjacent pixel points and pixels of adjacent areas of the upper and lower layers of images, and searching for an extreme point so as to obtain a discrete extreme point;
and fitting the discrete extreme points into continuous extreme points through a Taylor formula, eliminating unstable points, eliminating edge influence based on a Harris angular point detection algorithm, and obtaining a continuous characteristic point set so as to obtain characteristic points of the salient region.
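The Gaussian-difference pyramid and 26-neighbour extremum search above can be sketched with a toy single-octave detector. This is an illustrative stand-in, not the patent's extractor: the sigma ladder and contrast threshold are assumptions, and the Taylor refinement and Harris edge rejection mentioned above are omitted for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigmas=(0.8, 1.6, 3.2, 6.4), thresh=0.02):
    """Toy difference-of-Gaussians keypoint detector.

    Blurs the image at increasing sigmas, takes adjacent differences,
    and keeps pixels that are maxima over their 3x3x3 scale-space
    neighbourhood (the comparison with the 8 in-plane neighbours and
    the 9 pixels in each adjacent scale described above).
    """
    blurred = np.stack([gaussian_filter(img.astype(float), s) for s in sigmas])
    dog = blurred[:-1] - blurred[1:]                 # DoG stack (3 slices)
    local_max = maximum_filter(dog, size=3) == dog   # 3x3x3 maxima (plateaus kept)
    strong = dog > thresh                            # low-contrast rejection
    s, y, x = np.nonzero(local_max & strong)
    keep = (s > 0) & (s < dog.shape[0] - 1)          # interior scales only
    return list(zip(y[keep], x[keep]))

# A single bright blob should yield a keypoint near its centre.
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
kps = dog_keypoints(img)
assert any(abs(y - 15) <= 1 and abs(x - 15) <= 1 for y, x in kps)
```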
Further, the calculation formula for calculating the number of the feature points and the feature point optical flow of the two adjacent frames of the charge level images is as follows:
(u_x, v_y) = (x + Δx - x, y + Δy - y) = (Δx, Δy),
wherein u_x and v_y respectively represent the motion components in the horizontal and vertical directions from the ith feature point P_i(x, y) of the t-th frame charge level image to the ith feature point P'_i(x + Δx, y + Δy) of the (t+1)-th frame charge level image, (x, y) denotes the coordinates of the feature point P_i, and (x + Δx, y + Δy) denotes the coordinates of the feature point P'_i.
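Given matched feature points, the per-point flow and the two state features used later (feature point count and mean flow magnitude) follow directly from the displacement formula above. A minimal sketch, assuming the refined matching has already produced the one-to-one point sets; the function name is illustrative.

```python
import numpy as np

def flow_stats(pts_t, pts_t1):
    """Optical flow (u, v) = (dx, dy) for matched feature points.

    `pts_t` and `pts_t1` are (N, 2) arrays of matched coordinates in
    frames t and t+1.  Returns the flow vectors, the feature point
    count, and the mean flow magnitude (the "average optical flow
    vector" feature used for state clustering).
    """
    flow = np.asarray(pts_t1, float) - np.asarray(pts_t, float)  # (dx, dy)
    mags = np.linalg.norm(flow, axis=1)
    return flow, len(flow), float(mags.mean())

pts_t  = np.array([[10.0, 20.0], [30.0, 40.0]])
pts_t1 = np.array([[13.0, 24.0], [30.0, 40.0]])  # first point moved by (3, 4)
flow, n, mean_mag = flow_stats(pts_t, pts_t1)
assert n == 2 and np.allclose(flow[0], [3.0, 4.0]) and np.isclose(mean_mag, 2.5)
```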
Further, obtaining the number of the feature points and the local density maximum of the average optical flow vector data distribution based on the kernel density estimation, and obtaining the initial clustering center of the gaussian mixture model comprises:
collecting charge level images in different periods and different states, and calculating the number of feature points and the average optical flow vector of two adjacent frames of images, wherein the average optical flow vector is the average value of the optical flows of all the feature points in the charge level image;
eliminating abnormal values by adopting equal confidence probability, and normalizing the number of feature points and the average optical flow vector;
obtaining a probability density graph of the number of the feature points and the average optical flow vector data distribution by adopting a kernel density estimation function, wherein the kernel density estimation function specifically comprises the following steps:
f_h(d) = (1/M) · Σ_{i=1..M} K_h(d - d_i)
wherein h represents a smoothing parameter, K_h(·) represents a kernel function, d represents an observed value, f_h(d) represents the probability density function, M represents the total number of samples, and d_i represents the ith sample in the feature space sample set;
and acquiring four points with the maximum local density as initial clustering centers according to the probability density graph, and taking the four points as the initial clustering centers of the Gaussian mixture model.
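The kernel density estimate and its local maxima can be sketched in one dimension as below. This is a simplified illustration under stated assumptions: the real method works on the two-dimensional (feature count, average flow) distribution and extracts four maxima, while this sketch uses a 1-D Gaussian kernel, a fixed bandwidth, and two sample clumps to keep the example short.

```python
import numpy as np

def kde_local_maxima(samples, h=0.05, grid_n=200, k=4):
    """Local density maxima of a 1-D Gaussian kernel density estimate.

    Evaluates f_h(d) = (1/(M*h)) * sum_i K((d - d_i)/h) on a grid and
    returns up to `k` grid points that are local maxima, sorted by
    position - a sketch of how the initial GMM cluster centres are
    picked from the probability density graph.
    """
    d = np.asarray(samples, float)
    grid = np.linspace(d.min() - 3 * h, d.max() + 3 * h, grid_n)
    z = (grid[:, None] - d[None, :]) / h
    dens = np.exp(-0.5 * z**2).sum(axis=1) / (len(d) * h * np.sqrt(2 * np.pi))
    is_max = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    idx = np.nonzero(is_max)[0] + 1
    idx = idx[np.argsort(dens[idx])[::-1][:k]]   # keep the k densest peaks
    return np.sort(grid[idx])

# Two well-separated sample clumps -> two density peaks near 0.2 and 0.8.
rng = np.random.default_rng(0)
samples = np.concatenate([0.2 + 0.01 * rng.standard_normal(50),
                          0.8 + 0.01 * rng.standard_normal(50)])
peaks = kde_local_maxima(samples, k=2)
assert abs(peaks[0] - 0.2) < 0.05 and abs(peaks[1] - 0.8) < 0.05
```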
Further, based on the initial clustering centers, performing state clustering on the feature point optical flows of the charge level image by adopting a Gaussian mixture model, wherein acquiring the state of the charge level comprises the following steps:
updating the parameters of the Gaussian mixture model by adopting an EM algorithm until the log-likelihood function reaches the maximum value, and ending the parameter updating, wherein the log-likelihood function specifically comprises the following steps:
L(α_i, μ_i, Σ_i) = Σ_{j=1..m} log( Σ_{i=1..k} α_i · P(x_j | θ_i) )
wherein α_i indicates the probability that the current frame belongs to class i, μ_i and Σ_i respectively represent the mean and covariance matrix of the ith Gaussian distribution, f(α_i, μ_i, Σ_i) represents the objective function of the parameter update, L(α_i, μ_i, Σ_i) represents the log-likelihood function, m denotes the number of samples, k denotes the number of Gaussian functions, x_j represents the current sample, and P(x_j | θ_i) represents the probability that sample x_j belongs to the ith Gaussian distribution;
drawing a Gaussian distribution curve, and classifying data in the Gaussian distribution curve or close to the curve into the same state.
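The EM update seeded with the KDE maxima can be sketched as below. This is a simplified stand-in for the full method: it fits a spherical (rather than full-covariance) mixture, runs a fixed number of iterations instead of checking the log-likelihood for convergence, and the data layout is illustrative.

```python
import numpy as np

def gmm_em(X, means_init, n_iter=50):
    """Minimal EM for a spherical Gaussian mixture, seeded externally.

    `means_init` holds the local-density maxima used as initial cluster
    centres; EM then alternates responsibilities (E-step) and
    weight/mean/variance updates (M-step).
    """
    X = np.asarray(X, float)
    mu = np.asarray(means_init, float).copy()
    k, d = mu.shape
    w = np.full(k, 1.0 / k)
    var = np.full(k, X.var() + 1e-6)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        logp = np.log(w) - 0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var)
        r = np.exp(logp - logp.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(0) + 1e-12
        w = nk / len(X)
        mu = (r.T @ X) / nk[:, None]
        var = (r * sq).sum(0) / (d * nk) + 1e-6
    return mu, r.argmax(1)

# Two blobs in (feature count, mean flow) space, seeded near their centres.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.2, 0.2], 0.02, (40, 2)),
               rng.normal([0.8, 0.8], 0.02, (40, 2))])
mu, labels = gmm_em(X, means_init=[[0.3, 0.3], [0.7, 0.7]])
assert (labels[:40] == labels[0]).all() and labels[0] != labels[40]
```

Seeding the means from the density maxima is what makes the clustering deterministic here; with random initialisation the four charge-level states could be assigned inconsistently across runs.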
Further, dividing the material distribution cycle according to the density change condition of the feature points of the material surface image, and extracting key frames in videos of different material distribution cycles based on the material surface state comprises:
obtaining the density degree of the feature points according to the average value of the feature points of the charge level image per second;
comparing the density degree of the characteristic points of the burden surface image at the current moment with the density degree of the characteristic points of the burden surface image in the last second, and dividing the burden distribution period according to a preset comparison threshold;
calculating Euclidean distances between the characteristic value of the current frame and different clustering centers, acquiring the state of the current frame, judging the current frame to be a non-key frame if the current frame is in a fast sinking state and an unstable state, keeping the number of frames unchanged, judging the current frame to be a candidate key frame if the current frame is in a stable state or a slow sinking state, and adding 1 to the number of frames;
determining sampling frequency according to the number of candidate key frames, ensuring that the number of key frames extracted in each period is the same, and respectively taking an image frame corresponding to a clustering center, an image frame with the largest number of characteristic points and the smallest characteristic point optical flow and an image obtained according to the fixed sampling frequency as key frames from the candidate key frames to obtain a key frame set of one period;
and extracting the key frame sets of different material distribution periods to obtain a blast furnace charge level video frame set.
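The period-division step above (comparing each second's feature point density with the previous second against a preset comparison threshold) can be sketched as follows. The relative-drop rule, the threshold value, and the density trace are illustrative assumptions, not the patent's calibrated values.

```python
import numpy as np

def split_periods(density_per_sec, thresh=0.5):
    """Split a charge level video into burden-distribution periods.

    Marks a new period whenever the feature-point density of the current
    second falls below (1 - thresh) times the previous second's density,
    i.e. a sharp drop such as the one caused by charging dust.
    """
    d = np.asarray(density_per_sec, float)
    cuts = [0]
    for t in range(1, len(d)):
        if d[t] < (1.0 - thresh) * d[t - 1]:   # sharp density drop
            cuts.append(t)
    return [(a, b) for a, b in zip(cuts, cuts[1:] + [len(d)])]

# Density collapses when charging dust obscures the surface, then recovers.
density = [90, 95, 92, 20, 25, 80, 85, 88, 30, 70, 75]
periods = split_periods(density)
assert periods[0] == (0, 3) and (3, 8) in periods
```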
The invention provides a blast furnace charge level video key frame automatic extraction system, which comprises:
the system comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the steps of the automatic extraction method of the video key frames of the blast furnace burden level when executing the computer program.
Compared with the prior art, the invention has the advantages that:
according to the automatic extraction method and system for the key frames of the blast furnace burden surface video, the burden surface video in the blast furnace smelting process is collected by adopting the high-temperature industrial endoscope, the burden surface image saliency area is positioned based on the boundary prior and the image feature space, the accurate feature points of the image saliency area are extracted, and the pixel displacement of two adjacent frames of images is calculated according to an optical flow method; then, a method for fusing local density maximum and GMM model clustering is provided to identify the charge level state; and finally, dividing the video acquired by the high-temperature industrial endoscope into different periods according to the change condition of the density of the image feature points, and extracting a key frame of each material distribution period based on the material surface state identification result. The method can accurately identify the charge level state and eliminate the redundant information of the charge level video, and automatically extract the key frame of the charge level video with stable central airflow and obvious image characteristics from the video of the charge level of the blast furnace. Meanwhile, the method is suitable for automatic extraction of other video key frames with periodic variation and non-uniform image quality.
The key points of the invention comprise:
(1) significant values of different pixels of the charge level image are obtained based on boundary prior and an image feature space, and a charge level image significant area is extracted, so that the influence of non-charge level areas and gas flow changes on feature extraction is greatly reduced;
(2) taking the density degree of the feature points and the average pixel displacement as two key features for recognizing the material surface image state, extracting the feature points of the material surface salient region, performing refined matching to obtain a feature point set corresponding to two adjacent frames one by one, and calculating the size of an optical flow vector of the feature points;
(3) the local density maxima and the GMM model are fused to cluster the number of feature points and the size of the optical flow vector of the charge level images in different periods, and the charge level image is divided into four states: unstable, stable, slowly sinking and fast sinking;
(4) dividing the charge level period according to the density degree of the characteristic points, and extracting key frames with stable airflow and obvious characteristics in the video center of the charge level of the blast furnace based on the result of state recognition;
(5) the method combines methods of period division, motion information extraction, state clustering and the like, and can realize automatic extraction of the key frames of the complex industrial video with periodic variation, multiple states and uneven image quality according to different requirements.
Drawings
FIG. 1 is an overall schematic of the automatic extraction method for blast furnace charge level video key frames according to the second embodiment of the present invention;
fig. 2 is a result of extracting feature points in the significant region of the charge level image according to the second embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the distribution cycle division according to the second embodiment of the present invention;
FIG. 4 is a schematic diagram of a key frame extraction process of a level video according to a second embodiment of the present invention;
fig. 5 is a block diagram of an automatic extraction system for key frames of a blast furnace burden level video according to an embodiment of the present invention.
Reference numerals:
10. a memory; 20. a processor.
Detailed Description
In order to facilitate an understanding of the invention, the invention will be described more fully and in detail below with reference to the accompanying drawings and preferred embodiments, but the scope of the invention is not limited to the specific embodiments below.
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Example one
The automatic extraction method of the video key frames of the blast furnace burden surface provided by the embodiment of the invention comprises the following steps:
step S101, a significant area of a charge level image is obtained;
step S102, extracting characteristic points of a salient region;
step S103, carrying out feature point matching on two adjacent frames of charge level images, and calculating the number of feature points and feature point optical flows of the two adjacent frames of charge level images;
step S104, obtaining the number of the feature points and the local density maximum value of the average optical flow vector data distribution based on kernel density estimation, and obtaining the initial clustering center of the Gaussian mixture model, wherein the average optical flow vector is the average value of the optical flows of all the feature points in the charge level image;
step S105, based on the initial clustering centers, performing state clustering on the feature point optical flows of the charge level image by adopting a Gaussian mixture model to obtain the state of the charge level, wherein the state of the charge level comprises a fast sinking state, an unstable state, a stable state and a slow sinking state;
and S106, dividing the material distribution period according to the density change condition of the feature points of the material surface image, and extracting key frames in videos of different material distribution periods based on the material surface state.
The automatic extraction method for blast furnace charge level video key frames of this embodiment obtains the salient region of the charge level image, extracts feature points of the salient region, matches the feature points of two adjacent frames, and calculates the number of feature points and the feature point optical flows of the two frames. It then obtains the local density maxima of the distribution of the feature point number and the average optical flow vector based on kernel density estimation, takes them as the initial clustering centers of a Gaussian mixture model, and clusters the feature point optical flows with the model to obtain the charge level state. Finally, it divides the burden-distribution period according to the change in feature point density and extracts key frames from the videos of different periods based on the charge level state. This solves the technical problem that the prior art cannot automatically extract clear key frames with obvious image features from blast furnace charge level video that changes periodically and has non-uniform image quality: the method accurately identifies the charge level state, eliminates redundant information, and automatically extracts key frames with a stable central gas flow and obvious image features. It is also applicable to the automatic key frame extraction of other videos with periodic variation and non-uniform image quality.
Example two
The second embodiment of the present invention provides a method for automatically extracting a clear key frame with obvious image characteristics from a blast furnace charge level video with periodic variation and non-uniform image quality, and the overall idea is shown in fig. 1, and the method includes the following steps:
(1) the significant area of the charge level image is positioned based on the boundary prior and the image characteristic space, and the influence of non-charge level areas and gas flow changes in the charge level image is reduced.
(2) And taking the density degree of the feature points and the average pixel displacement as two key features for recognizing the material surface image state, extracting the feature points of the material surface salient region, performing refined matching, and calculating the feature point optical flows of two adjacent frames of images.
(3) Obtaining the local density maxima of the distribution of the charge level image feature point number and average optical flow vector based on kernel density estimation, using them as the initial clustering centers of a Gaussian Mixture Model (GMM), and clustering the image feature point optical flows based on the GMM to obtain the state of the charge level.
(4) And dividing the material distribution period according to the density change condition of the feature points of the material surface image, and extracting key frames in videos of different material distribution periods based on the state identification result.
The specific implementation scheme is as follows:
(1) the significant area of the charge level image is positioned based on the boundary prior and the image characteristic space, and the influence of non-charge level areas and gas flow changes in the charge level image is reduced.
The charge level image comprises non-charge level areas such as a coal gas flow area and a furnace wall area and a charge level area consisting of furnace materials, wherein the charge level area is an interested area. When the charge level image is processed, the accuracy of the processing result is affected by the non-charge level area. In the blast furnace smelting process, the smaller the change of the charge level appearance is, the more stable the furnace is, and under the normal working condition, the more obvious the change in the charge level image is the charge level areas with obvious brightness and texture in the edge profile of the central gas flow and the image, and the characteristic points of the areas are the densest and the most number, can also attract the most human visual attention, and are defined as the visual saliency areas of the charge level image. In order to reduce the influence of non-charge level areas and gas flow changes on post-image processing, the method extracts the significant area of the charge level image.
For the charge level image, the furnace wall region, the gas flow region, and the parts of the charge level region with blurred detail and low brightness can be regarded as the image background, and the places where the background meets non-background regions are regarded as boundaries. The invention introduces a boundary prior to obtain a preliminary saliency map. To improve the efficiency of salient region extraction and avoid excessive processing time caused by high image resolution, pixels with similar features are clustered into the same superpixel block by SLIC:
B = {b_1, b_2, ..., b_n} (1)
where B denotes the superpixel block set and b denotes a single superpixel block.
Boundary connectivity is adopted to characterize the degree of connection between the different pixel blocks of the image and the boundary: the larger the boundary connectivity, the smaller the probability of belonging to a salient region. Boundary connectivity can be expressed as the ratio of the length of a region along the image boundary to the square root of the region's area. The boundary connectivity of a superpixel block is expressed as follows:
BC(b_m) = L(b_m) / √A(b_m) (2)
where L(b_m) denotes the number of pixels of superpixel block b_m on the image boundary and A(b_m) denotes the total number of pixels of the superpixel block.
The probability that image pixel block b_m belongs to the background is obtained from the boundary connectivity as follows:
P(b_m) = 1 − exp(−BC²(b_m) / (2σ_b²)) (3)
where P(b_m) denotes the probability that the superpixel belongs to the background, and σ_b denotes a parameter, set to 1.
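The boundary-connectivity computation of equations (2)-(3) can be sketched as follows. This is a minimal illustration, assuming `labels` is an integer superpixel label map such as the output of SLIC; the function name and the `sigma_b` default are illustrative, not from the patent.

```python
import numpy as np

def background_probability(labels, sigma_b=1.0):
    """Per-superpixel background probability from boundary connectivity
    (a sketch of Eqs. (2)-(3)); `labels` is an integer superpixel map."""
    n = labels.max() + 1
    # A(b_m): total pixel count of each superpixel block
    area = np.bincount(labels.ravel(), minlength=n)
    # L(b_m): pixel count of each block lying on the image boundary
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    boundary_len = np.bincount(border, minlength=n)
    # Eq. (2): boundary connectivity = boundary length / sqrt(area)
    bc = boundary_len / np.sqrt(np.maximum(area, 1))
    # Eq. (3): larger connectivity -> higher background probability
    return 1.0 - np.exp(-bc ** 2 / (2.0 * sigma_b ** 2))
```

An interior superpixel that never touches the image border gets probability 0, while a block hugging the border approaches 1.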
According to formula (3), the saliency values of the different pixel blocks of the charge level image can be obtained, but the resulting values are discontinuous, and the saliency values of pixels close to the gas flow region are also low. To improve the accuracy of salient region extraction, the pixel saliency values are corrected based on the charge level image feature space. The feature space of the charge level image comprises color, texture, brightness and spatial features, and the overall difference between pixels is defined as follows:
d(p_i, p_j) = (d_1(p_i, p_j) + d_2(p_i, p_j) + d_3(p_i, p_j)) / (1 + α·d_s(p_i, p_j)) (4)
where d_1(p_i, p_j), d_2(p_i, p_j) and d_3(p_i, p_j) respectively denote the Euclidean distances between pixels p_i and p_j of the charge level image in the color, brightness and texture features, d_s(p_i, p_j) denotes the Euclidean distance between pixels p_i and p_j in the spatial features, and α denotes a parameter, set to 3. Formula (4) can be understood as follows: the salient region is a region with large differences in brightness, texture and color features and relatively concentrated positions.
Based on the boundary prior and the overall difference in the feature space, the saliency value of pixel p_i can be expressed as:
S_{p_i} = (1 − P(b_{p_i})) · Σ_{n=1}^{N} d(p_i, p_n) (5)
where S_{p_i} denotes the saliency value of pixel p_i, d(p_i, p_n) denotes the overall difference between pixel p_i and pixel p_n, b_{p_i} denotes the superpixel block containing pixel p_i, N denotes the number of pixels in the 8 × 8 neighborhood centered on pixel p_i, and P(b_{p_i}) denotes the probability that superpixel block b_{p_i} belongs to the background; the larger d(p_i, p_n), the larger S_{p_i}.
After the saliency values of the different pixels of the charge level image are obtained, the charge level image is binarized with the OTSU method to divide it into a gas flow region and a non-gas-flow region; once the position of the gas flow region is obtained, its saliency value is reduced. The charge level image is then binarized according to the calculated saliency values to obtain the final salient region.
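The OTSU binarization step amounts to searching for the threshold that maximizes the between-class variance of the saliency histogram. Below is a minimal sketch, assuming the saliency map is scaled to [0, 1] and quantized to 256 bins; the function returns the integer threshold in the 0..255 domain, and the names are illustrative.

```python
import numpy as np

def otsu_threshold(saliency):
    """Threshold maximizing between-class variance (the OTSU step used
    to binarize the saliency map); `saliency` holds values in [0, 1]."""
    vals = np.clip(saliency.ravel() * 255, 0, 255).astype(int)
    prob = np.bincount(vals, minlength=256).astype(float)
    prob /= prob.sum()
    cum_p = np.cumsum(prob)                   # class-0 probability w0(t)
    cum_m = np.cumsum(prob * np.arange(256))  # cumulative first moment
    mu_total = cum_m[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_m[t] / w0
        mu1 = (mu_total - cum_m[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2      # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels with quantized saliency above the returned threshold form the binary salient mask.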
(2) Taking the feature-point density and the average pixel displacement as the two key features for charge level image state identification, extracting the feature points of the charge level salient region, performing refined matching, and calculating the feature-point optical flows of two adjacent frames.
The feature-point density and pixel displacement of the charge level image are strongly correlated with the motion state of the blast furnace charge level. Generally, the denser the feature points and the smaller the pixel displacement, the slower the charge level descends; conversely, the sparser the feature points and the larger the pixel displacement, the faster the charge level descends and the more unstable the charge level. Therefore, the embodiment of the invention selects the feature-point density and the pixel displacement of charge level motion as the two key features for charge level video key frame extraction, where the feature-point density is represented by the number of feature points and the pixel displacement of image motion is represented by the average optical flow vector.
To reduce the influence of the scale space on feature extraction, the charge level image is downsampled to obtain charge level images of different sizes, which are arranged from large to small, and a difference-of-Gaussians (DoG) pyramid is obtained through Gaussian convolution and Gaussian differencing:
L(x,y,σ)=G(x,y,σ)*I(x,y) (6)
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ) (7)
where I(x, y) denotes the original image, σ denotes the spatial scale, k denotes the scale factor, G(x, y, σ) denotes a variable-scale two-dimensional Gaussian function, L(x, y, σ) denotes the scale space of the image, and D(x, y, σ) denotes the difference-of-Gaussians image.
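Equations (6)-(7) amount to blurring the image with Gaussians of increasing scale and subtracting adjacent blurred copies. A minimal NumPy sketch of one octave follows; the base scale σ = 1.6 and factor k = √2 are assumptions in the style of SIFT pyramids, not values stated in the patent.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian convolution G(x, y, sigma) * I(x, y), Eq. (6)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)

def dog_octave(img, sigma=1.6, k=2 ** 0.5, levels=4):
    """One octave of the DoG pyramid, Eq. (7): adjacent Gaussian-blurred
    copies are subtracted, D = L(k*sigma) - L(sigma)."""
    gaussians = [blur(img, sigma * k ** i) for i in range(levels)]
    return [g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])]
```

A constant image yields a DoG response of zero away from the borders, since both blurs preserve it there.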
Each pixel in the DoG images is compared with its adjacent pixels and with the pixels in the corresponding neighborhoods of the images one level above and below to search for extreme points, which form a discrete candidate set of feature points.
After the discrete extreme points are obtained, they are fitted into continuous extreme points by the Taylor formula, unstable points are removed, and edge effects are eliminated based on the Harris corner detection algorithm, yielding a more accurate feature point set L.
The scale value corresponding to each feature point and the gradient orientations and magnitudes of the charge level image within its 16 × 16 neighborhood are calculated, and the descriptor of the feature point is obtained by accumulating the gradient orientations and magnitudes of the pixels in the neighborhood into a histogram, with the orientation of largest magnitude taken as the main direction.
The feature points of the feature point sets L and L′ of adjacent charge level frames are coarsely matched with the Brute-Force matching method: the distance between the descriptor of a feature point and the descriptors of the other feature points is calculated, and the closest point is taken as its coarse match. Because the features of the charge level image are relatively uniform, the correspondence between the feature points of an image pair is difficult to identify accurately and mismatches can occur, so mismatched points are removed with the RANSAC algorithm to obtain the final matching result, as shown in FIG. 2. The feature point sets of the two adjacent frames are then expressed as:
P={P1,P2,P3,...,Pn} (8)
P′={P′1,P′2,P′3,...,P′n} (9)
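The coarse Brute-Force matching step can be sketched as a nearest-neighbour search over descriptor distances; the RANSAC mismatch rejection that follows it in the text is omitted here, and the function name is illustrative.

```python
import numpy as np

def brute_force_match(desc1, desc2):
    """Nearest-neighbour descriptor matching (the coarse Brute-Force
    step): each descriptor in desc1 is paired with the closest
    descriptor in desc2 by Euclidean distance."""
    # pairwise distance matrix, shape (len(desc1), len(desc2))
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    return d.argmin(axis=1)  # index into desc2 for each desc1 row
```

On real descriptors, Lowe-style ratio tests and RANSAC would then prune the ambiguous and geometrically inconsistent pairs.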
After the feature point sets P and P′ are acquired, the embodiment of the present invention calculates the optical flows of all feature points using the optical flow method. Suppose a feature point P_i(x, y) in frame t of the charge level video has motion components u_x and v_y in the horizontal and vertical directions, and moves to P′_i(x + Δx, y + Δy) in frame t + 1. The optical flow vector of feature point P_i(x, y) in the horizontal and vertical directions is calculated as:
(u_x, v_y) = (x + Δx − x, y + Δy − y) = (Δx, Δy) (10)
where x, y represent the coordinates of the feature points.
After the optical flow vectors of all feature points in the image are obtained, considering that the charge level is relatively stable under normal working conditions and moves as a whole, and to simplify computation, the average feature-point optical flow vector is taken as the moving pixel displacement of the charge level at a given moment:
(ū, v̄) = ((1/m)·Σ_{a=1}^{m} u_a, (1/m)·Σ_{a=1}^{m} v_a) (11)
where m denotes the number of image feature points, and u_a, v_a respectively denote the horizontal and vertical components of the pixel displacement of image feature point a.
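Equations (10)-(11) reduce to a per-point coordinate difference followed by an average. A minimal sketch, assuming `P` and `P_next` are the matched coordinate arrays of two adjacent frames (names illustrative):

```python
import numpy as np

def average_flow(P, P_next):
    """Mean optical-flow vector of matched feature points, Eqs. (10)-(11):
    per-point flow is the coordinate difference, and the charge level's
    pixel displacement is the average over all m matched points."""
    P, P_next = np.asarray(P, float), np.asarray(P_next, float)
    flows = P_next - P             # (u_x, v_y) = (dx, dy) per feature point
    return flows.mean(axis=0)      # (u_bar, v_bar)
```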
(3) Obtaining the local density maxima of the distribution of the charge level image feature-point counts and average optical flow vectors based on kernel density estimation.
After the feature-point density of the charge level image salient region and the feature-point pixel displacement are obtained, the state of the charge level is judged based on these two features. Although a Gaussian mixture model can fit a distribution of any shape, the traditional Gaussian mixture model (GMM) is very sensitive to the selection of initial values and easily falls into a local optimum when the initial values are poorly chosen. To identify the charge level state accurately, the invention proposes a charge level state clustering method that fuses the local density maxima with the GMM model. The specific steps are as follows:
step 1: collecting the charge level images in different periods and different states, and calculating the density degree (the number of characteristic points) and the average pixel displacement (the average optical flow vector) of the characteristic points of two adjacent frames of images.
Step 2: and eliminating abnormal values by adopting equal confidence probability, and normalizing the number of the characteristic points and the pixel displacement vector.
d* = (d_i − d_min) / (d_max − d_min), with d_0 rejected as an outlier when |d_0 − d̄| > w·σ_d (12)
where w denotes the Chauvenet coefficient, d_0 denotes an outlier point, d̄ and σ_d denote the mean and standard deviation of the data, and d*, d_i, d_min, d_max respectively denote the normalized data, the data before normalization, and the minimum and maximum of the data.
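Step 2 can be sketched as Chauvenet's criterion for outlier rejection followed by min-max normalization as in equation (12). The rejection rule shown (drop a point when the expected number of samples at least that extreme falls below 0.5) is the standard form of the criterion and is an assumption about the patent's "equal confidence probability" step:

```python
import math
import numpy as np

def chauvenet_filter(data):
    """Drop outliers by Chauvenet's criterion: a point is rejected when
    the expected count of samples as extreme as it is below 0.5."""
    d = np.asarray(data, float)
    mean, std = d.mean(), d.std()
    if std == 0:
        return d
    z = np.abs(d - mean) / std
    # two-sided tail probability times sample size
    expected = d.size * np.array([math.erfc(v / math.sqrt(2.0)) for v in z])
    return d[expected >= 0.5]

def min_max_normalize(d):
    """Min-max normalization to [0, 1], Eq. (12)."""
    d = np.asarray(d, float)
    return (d - d.min()) / (d.max() - d.min())
```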
Step 3: after data preprocessing, a probability density map of data distribution is obtained by using a kernel density estimation function, wherein the kernel density estimation function is as follows:
f_h(d) = (1/M)·Σ_{i=1}^{M} K_h(d − d_i) = (1/(Mh))·Σ_{i=1}^{M} K((d − d_i)/h) (13)
where h denotes the smoothing parameter, K_h(·) denotes the kernel function, d denotes an observed value, f_h(d) denotes the probability density function, M denotes the total number of samples, and d_i denotes the ith sample in the feature space sample set.
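With a Gaussian kernel, equation (13) plus a simple search for local density maxima (used in Step 4 to seed the clustering) might look like this; the evaluation grid, bandwidth and Gaussian kernel choice are assumptions:

```python
import numpy as np

def kde(samples, grid, h=0.1):
    """Gaussian kernel density estimate f_h(d) on a grid, Eq. (13)."""
    samples = np.asarray(samples, float)[:, None]
    grid = np.asarray(grid, float)
    u = (grid[None, :] - samples) / h
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return K.mean(axis=0) / h

def local_maxima(f):
    """Grid indices where the density exceeds both neighbours, sorted by
    density; the largest such peaks seed the GMM cluster centers."""
    f = np.asarray(f, float)
    idx = np.where((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:]))[0] + 1
    return idx[np.argsort(f[idx])[::-1]]
```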
Step 4: take the four points with the largest local density as the initial clustering centers, set the initial values of the GMM model parameters accordingly, and update the GMM model parameters with the expectation-maximization (EM) algorithm until the log-likelihood function reaches its maximum, at which point the parameter update ends:
f(α_i, μ_i, Σ_i) = Σ_{i=1}^{k} α_i·N(x_j | μ_i, Σ_i) (14)
L(α_i, μ_i, Σ_i) = Σ_{j=1}^{m} ln Σ_{i=1}^{k} α_i·N(x_j | μ_i, Σ_i) (15)
where α_i denotes the probability that the current frame belongs to class i, μ_i and Σ_i respectively denote the mean and covariance matrix of the ith Gaussian distribution, N(x_j | μ_i, Σ_i) denotes the density of the ith Gaussian distribution at sample x_j, f(α_i, μ_i, Σ_i) denotes the objective function of the parameter update, L(α_i, μ_i, Σ_i) denotes the log-likelihood function, m denotes the number of samples, k denotes the number of Gaussian functions, x_j denotes the current sample, and P(x_j | μ_i) denotes the probability that sample x_j belongs to the ith Gaussian distribution;
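The EM update of Step 4, with the GMM seeded at the local-density maxima, can be sketched as below. This is a generic full-covariance EM loop, not the patent's exact implementation; the fixed iteration count stands in for the log-likelihood convergence test, and the small diagonal regularizer is an added numerical safeguard.

```python
import numpy as np

def gmm_em(X, centers, iters=50):
    """Minimal EM for a Gaussian mixture seeded with given initial means
    (the local-density maxima of Step 4). Returns mixing weights alpha,
    means mu, covariances and per-sample responsibilities."""
    X = np.asarray(X, float)
    m, dim = X.shape
    k = len(centers)
    mu = np.array(centers, float)
    alpha = np.full(k, 1.0 / k)
    cov = np.array([np.eye(dim) for _ in range(k)])
    for _ in range(iters):
        # E-step: responsibilities P(component i | sample x_j)
        dens = np.empty((m, k))
        for i in range(k):
            diff = X - mu[i]
            inv = np.linalg.inv(cov[i])
            norm = np.sqrt((2 * np.pi) ** dim * np.linalg.det(cov[i]))
            dens[:, i] = alpha[i] * np.exp(
                -0.5 * np.sum(diff @ inv * diff, axis=1)) / norm
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate alpha_i, mu_i, Sigma_i
        Nk = resp.sum(axis=0)
        alpha = Nk / m
        mu = (resp.T @ X) / Nk[:, None]
        for i in range(k):
            diff = X - mu[i]
            cov[i] = ((resp[:, i, None] * diff).T @ diff / Nk[i]
                      + 1e-6 * np.eye(dim))
    return alpha, mu, cov, resp
```

In the patent's setting, k = 4 and each sample is the (feature-point count, average optical flow) pair of a frame.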
Step 5: draw the Gaussian distribution curves, and classify the data within or close to the same Gaussian distribution curve into the same state.
(4) Dividing the material distribution periods according to the change in the density of the charge level image feature points, and extracting key frames from the videos of different material distribution periods based on the state identification results.
The charge level state is the main basis for judging whether the current frame is a key frame, and key frame sets meeting different conditions can be obtained according to different requirements. As shown in fig. 3, since furnace-top burden distribution is periodic and intermittent, and the charge level shape and clarity differ between periods, the charge level video is divided into segments by distribution period to obtain charge level videos of different periods, and the key frames of each period's video are then extracted. Referring to fig. 4, the specific steps for extracting the charge level video key frames in this embodiment are as follows:
Step 1: input the charge level video and initialize the relevant parameters (current frame t, frame count N_0, period T_0).
Step 2: at the charging interval the charge level is relatively clear and the image feature points are dense, while during burden distribution the effective charge level information is reduced by the occlusion of burden and dust and the feature points are sparse, so the material distribution period can be divided according to the change in feature-point density. To reduce the random error of feature-point counting, the average number of feature points of the charge level images in each second is calculated and normalized to [0, 1] to represent the feature-point density f_t of the charge level at the current moment, which is compared with the density f_{t−1} of the previous second. If f_t − f_{t−1} ≤ T, the two belong to the same period and T_0 is unchanged. If f_t − f_{t−1} > T, it is checked whether f_{t+i} − f_{t+i−1} > 0 (i = 1, 2, 3) holds over the following seconds; if so, the charge level is considered to be gradually clearing, one burden distribution has finished, and the next period begins, i.e., T_0 + 1, while the frame count is re-initialized to N_0 = 1.
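The period-splitting rule of Step 2 can be sketched over a per-second density sequence. The three-second confirmation window follows the text; the threshold default `T = 0.3` and the function shape are assumptions for illustration.

```python
def split_charging_periods(density, T=0.3):
    """Return indices where a new burden distribution period begins:
    the per-second feature-point density jumps by more than T and keeps
    rising for the next three seconds (the charge level is clearing).
    `density` is the normalized per-second density sequence."""
    starts = [0]
    for t in range(1, len(density) - 3):
        jump = density[t] - density[t - 1]
        rising = all(density[t + i + 1] > density[t + i] for i in range(3))
        if jump > T and rising:
            starts.append(t)
    return starts
```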
Step 3: calculate the Euclidean distances from the current frame to the different clustering centers and obtain its state. If the current frame is in the fast-sinking or unstable state, it is judged a non-key frame and the frame count N_0 is unchanged; if it is in the stable or slow-sinking state, it is judged a candidate key frame and N_0 is increased by 1.
Step 4: determine the sampling frequency according to the number of candidate key frames, ensuring that the same number of key frames is extracted in each period. From the candidate key frames, the image frame corresponding to the clustering center, the image frame with the most feature points and the smallest optical flow vector, and the images obtained at the fixed sampling frequency are respectively taken as key frames to obtain the key frame set of one period.
Step 5: repeat Steps 2 to 4 to obtain the key frames of the different material distribution periods in the charge level video, yielding the final key frame set. It should be noted that the key frame judgment rule may be modified according to actual requirements so that the extracted key frames satisfy the required conditions.
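The fixed-count sampling in Step 4 (choosing the sampling frequency so that every period yields the same number of key frames) can be sketched as even-stride selection from the candidate list; `n_keys` is an assumed per-period quota.

```python
def sample_key_frames(candidates, n_keys=5):
    """Pick n_keys frames at an even stride from one period's candidate
    key frames, so each period contributes the same number of keys."""
    if len(candidates) <= n_keys:
        return list(candidates)
    stride = len(candidates) / n_keys
    return [candidates[int(i * stride)] for i in range(n_keys)]
```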
According to the automatic extraction method for blast furnace charge level video key frames, the charge level video of the blast furnace smelting process is collected with a high-temperature industrial endoscope; the salient region of the charge level image is located based on the boundary prior and the image feature space, accurate feature points of the salient region are extracted, and the pixel displacement of two adjacent frames is calculated by the optical flow method. A method fusing local density maxima with GMM clustering is then proposed to identify the charge level state. Finally, the video acquired by the high-temperature industrial endoscope is divided into different periods according to the change in image feature-point density, and a key frame of each material distribution period is extracted based on the charge level state identification result. The invention can accurately identify the charge level state, eliminate redundant charge level video information, and automatically extract from the blast furnace charge level video the key frames with stable central gas flow and distinct image features. The method is also suitable for the automatic extraction of key frames from other videos with periodic variation and non-uniform image quality.
Furthermore, the automatic extraction method for blast furnace charge level video key frames provided by the embodiment of the invention can automatically extract key frames with stable central gas flow and distinct features from blast furnace charge level videos whose appearance changes periodically and whose image quality is non-uniform. It can provide field operators with more accurate and reliable information for timely grasping the in-furnace operating conditions, charge level shape changes and gas flow distribution, and greatly improves the precision and efficiency of subsequent processing such as deep feature extraction based on the charge level images. The method first acquires the charge level video during blast furnace operation with a high-temperature industrial endoscope, analyzes it, and divides the charge level into four states: unstable, stationary, slow-sinking and fast-sinking. To eliminate redundant video information, the feature-point optical flow of the charge level image salient region is calculated, a method fusing local density maxima with GMM clustering is proposed to identify the charge level state, and key frames with stable central gas flow and distinct image features in different material distribution periods are judged automatically according to the change in feature-point counts and the charge level state. The embodiment of the invention is suitable for extracting key frames from videos with periodic variation, multiple states and uneven image quality, and can extract key frames meeting different actual requirements.
EXAMPLE III
A 2650 m³ blast furnace of an iron and steel plant serves as the experimental platform. The charge level video of the blast furnace smelting process is collected by a high-temperature industrial endoscope extending into the furnace, and key frames with stable central gas flow and distinct features are automatically extracted from a large number of videos of periodic variation and uneven quality using the automatic key frame extraction method provided by the invention. The specific implementation steps are as follows:
1. based on a high-temperature industrial endoscope, a large number of charge level videos of blast furnace smelting processes in different periods are collected.
2. In order to reduce the influence of image non-material surface areas and gas flow changes, a salient area of an image is extracted based on boundary prior and an image feature space.
3. Extract and refine the feature points of the image salient region to obtain feature point sets in one-to-one correspondence between two adjacent frames, and calculate the feature-point optical flow values.
4. Collect charge level images of different periods and different states, and separately count the feature-point density and the average pixel displacement, where the feature-point density is represented by the number of feature points and the average pixel displacement by the feature-point average optical flow vector.
5. The number of feature points and local density maxima of the average optical flow vector are obtained based on kernel density estimation. And performing state clustering on the density degree of the feature points and the average pixel displacement based on a Gaussian mixture model, wherein the initial clustering center is data corresponding to the local density maximum.
6. Calculate the feature-point density of the charge level images every second and compare the change between adjacent seconds. If the feature-point density increases suddenly and continues increasing at the next moment, this is taken as the moment when one material distribution period ends and the next begins.
7. Judge the images in the fast-sinking and unstable states as non-key frames, and select the charge level images in the stationary and slow-sinking states as candidate key frames. Determine the sampling frequency according to the number of candidate key frames in the same period, extract from the candidates the clustering center and the charge level images obtained at the fixed sampling frequency, and output them as key frames; the extracted key frames have stable central gas flow and distinct features.
Referring to fig. 5, the system for automatically extracting a video key frame of a blast furnace burden surface according to the embodiment of the present invention includes:
the system comprises a memory 10, a processor 20 and a computer program stored on the memory 10 and executable on the processor 20, wherein the processor 20 implements the steps of the automatic extraction method of the blast furnace burden level video key frames proposed in the present embodiment when executing the computer program.
The specific working process and working principle of the automatic extraction system for the blast furnace burden surface video key frames in this embodiment can refer to the working process and working principle of the automatic extraction method for the blast furnace burden surface video key frames in this embodiment.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A blast furnace charge level video key frame automatic extraction method is characterized by comprising the following steps:
acquiring a significance area of the charge level image;
extracting characteristic points of the salient region;
carrying out feature point matching on two adjacent frames of charge level images, and calculating the number of feature points and feature point optical flows of the two adjacent frames of charge level images;
obtaining the number of the feature points and the local density maximum value of the average optical flow vector data distribution based on kernel density estimation, and obtaining an initial clustering center of a Gaussian mixture model, wherein the average optical flow vector is the average value of optical flows of all the feature points in the charge level image;
based on the initial clustering center, performing state clustering on the characteristic point optical flow of the charge level image by adopting a Gaussian mixture model to obtain the state of the charge level, wherein the state of the charge level comprises a rapid sinking state, an unstable state, a stable state and a slow sinking state;
and dividing the material distribution period according to the density change condition of the feature points of the material surface image, and extracting key frames in videos of different material distribution periods based on the state of the material surface.
2. The method of claim 1, wherein the obtaining of the salient region of the charge level image comprises:
dividing the charge level image into a preset number of superpixel blocks through SLIC clustering;
calculating the boundary connectivity of each superpixel block with the image boundary;
obtaining the probability that the superpixel block belongs to the background according to the boundary connectivity;
acquiring a significant value of the superpixel block according to the probability that the superpixel block belongs to the background;
and acquiring a saliency area of the charge level image according to the saliency value of the super pixel block.
3. The method of claim 2, wherein the obtaining the saliency region of the burden surface image according to the saliency values of the super pixel blocks comprises:
based on the characteristic space of the charge level image, obtaining the total difference degree among pixels of the charge level image, wherein the calculation formula for obtaining the total difference degree among the pixels of the charge level image is as follows:
d(p_i, p_j) = (d_1(p_i, p_j) + d_2(p_i, p_j) + d_3(p_i, p_j)) / (1 + α·d_s(p_i, p_j))
where d_1(p_i, p_j), d_2(p_i, p_j) and d_3(p_i, p_j) respectively denote the Euclidean distances between pixels p_i and p_j of the charge level image in the color, brightness and texture features, d_s(p_i, p_j) denotes the Euclidean distance between pixels p_i and p_j in the spatial features, and α denotes a parameter, set to 3;
acquiring the saliency value of each pixel according to the saliency value of the superpixel block and the overall difference between pixels, wherein the calculation formula for acquiring the saliency value of a pixel is as follows:

S_{p_i} = (1 − P(b_{p_i})) · Σ_{n=1}^{N} d(p_i, p_n)

where S_{p_i} denotes the saliency value of pixel p_i, d(p_i, p_n) denotes the overall difference between pixel p_i and pixel p_n, b_{p_i} denotes the superpixel block containing pixel p_i, N denotes the number of pixels in the 8 × 8 neighborhood centered on pixel p_i, and P(b_{p_i}) denotes the probability that superpixel block b_{p_i} belongs to the background;
and carrying out binarization processing on the charge level image according to the significant value of the pixel to obtain a significant area of the charge level image.
4. The automatic extraction method of the blast furnace burden surface video key frames as claimed in claim 3, wherein the extracting the feature points of the salient region comprises:
down-sampling the area images corresponding to the salient areas to obtain area images with different sizes, arranging the area images with different sizes from large to small, and obtaining a Gaussian difference pyramid through Gaussian convolution and Gaussian difference;
comparing each pixel point of the image in the Gaussian difference pyramid with adjacent pixel points and pixels of adjacent areas of the upper and lower layers of images, and searching for an extreme point so as to obtain a discrete extreme point;
and fitting the discrete extreme points into continuous extreme points through a Taylor formula, eliminating unstable points, eliminating edge influence based on a Harris angular point detection algorithm, and obtaining a continuous characteristic point set so as to obtain characteristic points of the salient region.
5. The automatic extraction method of blast furnace charge level video key frames as claimed in claim 4, wherein the calculation formula used in calculating the number of feature points and the feature point optical flows of two adjacent frames of charge level images is as follows:
(u_x, v_y) = (x + Δx − x, y + Δy − y) = (Δx, Δy),
where u_x and v_y respectively denote the horizontal and vertical motion components of the ith feature point P_i(x, y) of the tth-frame charge level image moving to the ith feature point P′_i(x + Δx, y + Δy) of the (t + 1)th-frame charge level image, x, y denote the coordinates of feature point P_i(x, y), and x + Δx, y + Δy denote the coordinates of feature point P′_i(x + Δx, y + Δy).
6. The automatic extraction method of blast furnace burden surface video key frames as claimed in any one of claims 1 to 5, wherein obtaining the number of feature points and the local density maximum of the average optical flow vector data distribution based on kernel density estimation comprises:
collecting charge level images in different periods and different states, and calculating the number of feature points and an average optical flow vector of two adjacent frames of images, wherein the average optical flow vector is the average value of optical flows of all the feature points in the charge level images;
eliminating abnormal values by adopting equal confidence probability, and normalizing the number of feature points and the average optical flow vector;
obtaining a probability density graph of the number of the feature points and the average optical flow vector data distribution by adopting a kernel density estimation function, wherein the kernel density estimation function specifically comprises the following steps:
f_h(d) = (1/M)·Σ_{i=1}^{M} K_h(d − d_i) = (1/(Mh))·Σ_{i=1}^{M} K((d − d_i)/h)
where h denotes the smoothing parameter, K_h(·) denotes the kernel function, d denotes an observed value, f_h(d) denotes the probability density function, M denotes the total number of samples, and d_i denotes the ith sample in the feature space sample set;
and acquiring four points with the maximum local density as initial clustering centers according to the probability density graph, and taking the four points as the initial clustering centers of the Gaussian mixture model.
7. The automatic extraction method of the blast furnace charge level video key frames according to claim 6, wherein the state clustering is performed on the characteristic point light stream of the charge level image by adopting a Gaussian mixture model based on an initial clustering center, and the obtaining of the state of the charge level comprises:
updating the parameters of the Gaussian mixture model by adopting an EM algorithm until the log-likelihood function reaches the maximum value, and ending the parameter updating, wherein the log-likelihood function specifically comprises the following steps:
L(α_i, μ_i, Σ_i) = Σ_{j=1}^{m} ln Σ_{i=1}^{k} α_i·N(x_j | μ_i, Σ_i)
where α_i denotes the probability that the current frame belongs to class i, μ_i and Σ_i respectively denote the mean and covariance matrix of the ith Gaussian distribution, N(x_j | μ_i, Σ_i) denotes the density of the ith Gaussian distribution at sample x_j, L(α_i, μ_i, Σ_i) denotes the log-likelihood function, m denotes the number of samples, k denotes the number of Gaussian functions, x_j denotes the current sample, and P(x_j | μ_i) denotes the probability that sample x_j belongs to the ith Gaussian distribution;
drawing a Gaussian distribution curve, and classifying data in the Gaussian distribution curve or close to the curve into the same state.
8. The automatic extraction method of blast furnace charge level video key frames according to claim 7, wherein dividing the burden distribution cycle according to the density variation of the feature points of the charge level image, and extracting key frames from the videos of different burden distribution cycles based on the state of the charge level, comprises the steps of:
obtaining the density of the feature points from the per-second average of the feature-point count of the charge level image;
comparing the feature-point density of the charge level image at the current moment with that of the previous second, and dividing the burden distribution cycle according to a preset comparison threshold;
calculating the Euclidean distances between the feature value of the current frame and the different clustering centers to obtain the state of the current frame; if the current frame is in the fast-sinking state or the unstable state, judging it to be a non-key frame and keeping the candidate frame count unchanged; if the current frame is in the stable state or the slow-sinking state, judging it to be a candidate key frame and adding 1 to the candidate frame count;
determining the sampling frequency from the number of candidate key frames so that the same number of key frames is extracted in every cycle, and taking, from the candidate key frames, the image frame corresponding to each clustering center, the image frame with the most feature points and the smallest feature-point optical flow, and the images obtained at the fixed sampling frequency as the key frames, to obtain the key-frame set of one cycle;
and combining the key-frame sets of the different burden distribution cycles to obtain the blast furnace charge level video key-frame set.
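The steps above can be sketched in Python as follows; the ordering of the four states, the comparison threshold `ratio`, and all function names are assumed placeholders (the patent fixes none of them), and only the fixed-frequency sampling branch of the key-frame selection is shown:

```python
import numpy as np

# Assumed state indices; the patent names four states but not their order.
STABLE, SLOW_SINK, FAST_SINK, UNSTABLE = 0, 1, 2, 3

def split_periods(density_per_sec, ratio=2.0):
    """Start a new burden distribution cycle whenever the per-second
    feature-point density jumps by more than `ratio` (placeholder
    threshold) relative to the previous second."""
    boundaries = [0]
    for t in range(1, len(density_per_sec)):
        prev = density_per_sec[t - 1]
        if prev > 0 and density_per_sec[t] / prev > ratio:
            boundaries.append(t)
    return boundaries

def frame_state(feature, centers):
    """Assign a frame to the cluster center nearest in Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centers - feature, axis=1)))

def candidate_keyframes(features, centers):
    """Frames in the stable or slow-sinking state become candidate key
    frames; fast-sinking and unstable frames are discarded."""
    keep = {STABLE, SLOW_SINK}
    return [i for i, f in enumerate(features) if frame_state(f, centers) in keep]

def sample_keyframes(candidates, n_keyframes):
    """Choose a sampling step from the candidate count so that every
    cycle contributes the same number of key frames."""
    if len(candidates) <= n_keyframes:
        return list(candidates)
    step = len(candidates) / n_keyframes   # fractional sampling interval
    return [candidates[int(i * step)] for i in range(n_keyframes)]
```

Running the three helpers in sequence over one cycle's features yields the fixed-size key-frame subset that claim 8 accumulates across cycles.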
9. An automatic extraction system for blast furnace charge level video key frames, the system comprising:
a memory (10), a processor (20), and a computer program stored on the memory (10) and executable on the processor (20), characterized in that the processor (20) implements the steps of the method of any one of claims 1 to 8 when executing the computer program.
CN202210520149.7A 2022-05-12 2022-05-12 Automatic extraction method and system for video key frames of blast furnace burden surface Pending CN114743152A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210520149.7A CN114743152A (en) 2022-05-12 2022-05-12 Automatic extraction method and system for video key frames of blast furnace burden surface

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210520149.7A CN114743152A (en) 2022-05-12 2022-05-12 Automatic extraction method and system for video key frames of blast furnace burden surface

Publications (1)

Publication Number Publication Date
CN114743152A true CN114743152A (en) 2022-07-12

Family

ID=82286301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210520149.7A Pending CN114743152A (en) 2022-05-12 2022-05-12 Automatic extraction method and system for video key frames of blast furnace burden surface

Country Status (1)

Country Link
CN (1) CN114743152A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115578612A (en) * 2022-10-11 2023-01-06 浙江大学 Method and device for identifying material distribution stage at top of blast furnace based on marker target detection
CN115908280A (en) * 2022-11-03 2023-04-04 广东科力新材料有限公司 Data processing-based performance determination method and system for PVC calcium zinc stabilizer
CN116188460A (en) * 2023-04-24 2023-05-30 青岛美迪康数字工程有限公司 Image recognition method and device based on motion vector and computer equipment
CN116188460B (en) * 2023-04-24 2023-08-25 青岛美迪康数字工程有限公司 Image recognition method and device based on motion vector and computer equipment
CN117061189A (en) * 2023-08-26 2023-11-14 上海六坊信息科技有限公司 Data packet transmission method and system based on data encryption
CN117061189B (en) * 2023-08-26 2024-01-30 上海六坊信息科技有限公司 Data packet transmission method and system based on data encryption

Similar Documents

Publication Publication Date Title
CN114743152A (en) Automatic extraction method and system for video key frames of blast furnace burden surface
WO2022099598A1 (en) Video dynamic target detection method based on relative statistical features of image pixels
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
JP5604256B2 (en) Human motion detection device and program thereof
CN108876820B (en) Moving target tracking method under shielding condition based on mean shift
CN111062974B (en) Method and system for extracting foreground target by removing ghost
CN112184759A (en) Moving target detection and tracking method and system based on video
CN107230188B (en) Method for eliminating video motion shadow
CN101916448A (en) Moving object detecting method based on Bayesian frame and LBP (Local Binary Pattern)
CN104966305B (en) Foreground detection method based on motion vector division
CN111260738A (en) Multi-scale target tracking method based on relevant filtering and self-adaptive feature fusion
CN109919053A (en) A kind of deep learning vehicle parking detection method based on monitor video
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
TWI415032B (en) Object tracking method
CN105139417A (en) Method for real-time multi-target tracking under video surveillance
CN106997599B (en) A kind of video moving object subdivision method of light sensitive
CN113989276A (en) Detection method and detection device based on depth image and camera equipment
CN116452506A (en) Underground gangue intelligent visual identification and separation method based on machine learning
JP7096175B2 (en) Object extraction method and device
CN103578121B (en) Method for testing motion based on shared Gauss model under disturbed motion environment
KR101690050B1 (en) Intelligent video security system
CN111626107B (en) Humanoid contour analysis and extraction method oriented to smart home scene
CN107301652B (en) Robust target tracking method based on local sparse representation and particle swarm optimization
Zhou et al. Dynamic background subtraction using spatial-color binary patterns
CN112288765A (en) Image processing method for vehicle-mounted infrared pedestrian detection and tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination