CN113221674A - Video stream key frame extraction system and method based on rough set reduction and SIFT - Google Patents


Info

Publication number
CN113221674A
Authority
CN
China
Prior art keywords
sift
frame sequence
reduction
video frame
feature points
Prior art date
Legal status
Granted
Application number
CN202110449176.5A
Other languages
Chinese (zh)
Other versions
CN113221674B (en)
Inventor
刘通
袁展图
梁伟民
潘盛
方孖计
冼庆祺
赵善龙
萧镜辉
翟少翩
林钦文
李文丁
王宇斌
Current Assignee
Dongguan Power Supply Bureau of Guangdong Power Grid Co Ltd
Original Assignee
Dongguan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority date
Filing date
Publication date
Application filed by Dongguan Power Supply Bureau of Guangdong Power Grid Co Ltd filed Critical Dongguan Power Supply Bureau of Guangdong Power Grid Co Ltd
Priority to CN202110449176.5A priority Critical patent/CN113221674B/en
Publication of CN113221674A publication Critical patent/CN113221674A/en
Application granted granted Critical
Publication of CN113221674B publication Critical patent/CN113221674B/en
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/48: Matching video sequences
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video stream key frame extraction system based on rough set reduction and SIFT. A frame sequence extraction module converts monitoring video data from a preset time period into a corresponding monitoring video frame sequence. A feature point extraction module extracts all SIFT feature points of the sequence using a feature extraction method based on distributed frame SIFT. An attribute dimensionality reduction module reduces the dimensionality of the feature point attributes. A similarity calculation module computes a similarity curve between adjacent frames of the sequence after the attribute dimensionality reduction. A key frame identification module finds the inflection points in the inter-frame similarity curve and takes them as key frames. The invention overcomes the low accuracy and large transmitted video data volume of prior-art key frame extraction from on-site monitoring video data.

Description

Video stream key frame extraction system and method based on rough set reduction and SIFT
Technical Field
The invention relates to the technical field of video information processing for on-site monitoring data, and in particular to a video stream key frame extraction system and method based on rough set reduction and scale-invariant feature transform (SIFT).
Background
In the field of electric power emergency repair command, disaster site data acquisition and information uploading are realized with technologies such as intelligent sensing and mobile interconnection, while data analysis techniques provide decision support for disaster repair. Together these have become important supporting technologies for emergency command based on intelligent sensing and mobile interconnection.
Video monitoring is currently a widely applied data acquisition means for electric power emergency repair. Massive video data are acquired, transmitted, circulated, mined, and analyzed to guide emergency repair decisions, bringing new means and directions to electric power emergency repair command. In a severe environment, however, wireless channel reliability drops and bandwidth resources are limited, which greatly challenges reliable real-time transmission of video data. Extracting the key image frames of a video stream and transmitting only those frames reduces the dependence of transmission on the network and improves transmission efficiency, and has become a mainstream video transmission technique. However, existing key frame techniques suffer from low extraction accuracy, large video data volume, and a time-consuming extraction process that lengthens video data processing before transmission. Researching key frame extraction for video data streams in severe environments, so as to intercept key video frames effectively, reduce the amount of data transmitted, and achieve reliable real-time video transmission, is therefore an urgent problem in the field of electric power emergency repair.
Disclosure of Invention
The invention aims to provide a video stream key frame extraction system and method based on rough set reduction and SIFT (scale-invariant feature transform), which solve the prior-art problems of low accuracy and large transmitted video data volume when extracting key frames from on-site monitoring video data.
In order to achieve the purpose, the video stream key frame extraction system based on rough set reduction and SIFT is characterized in that: the system comprises a frame sequence extraction module, a feature point extraction module, an attribute dimension reduction module, a similarity calculation module and a key frame identification module;
the frame sequence extraction module is used for converting the monitoring video data in a preset time period into a corresponding monitoring video frame sequence;
the feature point extraction module is used for extracting all SIFT feature points of the monitoring video frame sequence by adopting a feature extraction method based on distributed frame SIFT for the monitoring video frame sequence;
the attribute dimension reduction module is used for performing attribute dimension reduction on feature points of all SIFT feature points of the monitoring video frame sequence by adopting a rough set attribute reduction method;
the similarity calculation module is used for calculating a similarity curve between adjacent frames of the monitored video frame sequence after the dimension reduction of the attribute of the characteristic points according to the number of matched characteristic points between the adjacent frames in the monitored video frame sequence;
the key frame identification module is used for finding out inflection points in the similarity curve between adjacent frames by adopting a sliding-window-based similarity segmentation algorithm, taking the inflection points as key frames, and extracting them.
According to the video key frame extraction method, image feature points are extracted concurrently through distributed SIFT, and feature point selection is performed by rough set attribute reduction. This effectively reduces the feature point dimensionality and increases the speed and efficiency of video data key frame extraction, further improving the identification of disaster-damaged equipment and guiding on-site emergency command decisions.
Drawings
FIG. 1 is a schematic structural view of the present invention;
FIG. 2 is a flowchart of a key frame extraction model of the present invention;
FIG. 3 is a flow chart of distributed frame SIFT feature extraction according to the present invention;
FIG. 4 is a block diagram of image frame division data according to the present invention;
FIG. 5 is a flowchart of the inter-frame similarity segmentation algorithm based on sliding window according to the present invention;
the system comprises a frame sequence extraction module 1, a feature point extraction module 2, an attribute dimension reduction module 3, a similarity calculation module 4 and a key frame identification module 5.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific examples:
the video stream key frame extraction system based on rough set reduction and SIFT as shown in FIG. 1 is characterized in that: the system comprises a frame sequence extraction module 1, a feature point extraction module 2, an attribute dimension reduction module 3, a similarity calculation module 4 and a key frame identification module 5;
the frame sequence extraction module 1 is used for converting the monitoring video data in a preset time period into a corresponding monitoring video frame sequence;
the feature point extraction module 2 is used for extracting all SIFT feature points of the monitoring video frame sequence by adopting a feature extraction method based on distributed frame SIFT for the monitoring video frame sequence;
the attribute dimensionality reduction module 3 is used for performing feature point attribute dimensionality reduction on all SIFT feature points of the monitored video frame sequence by adopting a rough set attribute reduction method, and the feature point dimensionality reduction can effectively increase the feature point extraction efficiency;
the similarity calculation module 4 is used for calculating a similarity curve between adjacent frames of the monitored video frame sequence after feature point attribute dimension reduction, according to the number of matched feature points between adjacent frames; the higher the inter-frame similarity, the closer the content expressed by the adjacent frames, and frames where the similarity changes abruptly are candidate key frames;
the key frame identification module 5 is configured to find an inflection point in a similarity curve between adjacent frames by using a sliding window-based similarity segmentation algorithm, take the inflection point as a key frame, and extract the key frame, where the inflection point in the similarity curve is where the similarity is mutated, where the mutation means that the adjacent frames have sudden changes, and the extraction of the key frame is a frame where the changes are extracted from a video sequence.
In this technical scheme, SIFT feature point extraction is a mature method for extracting video frame feature points with high accuracy, but its high feature point dimensionality makes extraction inefficient; dimensionality reduction processing is therefore needed to improve the feature extraction speed.
In the above technical solution, the frame sequence extraction module 1 selects and downloads a live monitoring video, intercepts a section of the live monitoring video as original video data, performs size scaling on the image, and numbers video frames in sequence according to a time sequence to obtain a video frame image sequence with uniform size.
In the above technical solution, the specific method for extracting all SIFT feature points of the monitored video frame sequence by the feature point extraction module 2 is as follows:
firstly, preprocessing an input monitoring video frame sequence;
secondly, dividing the preprocessed monitoring video frame sequence into a plurality of data blocks by adopting an equal division data block division method;
then, the different data blocks are allocated to specified computing nodes (computing resources, which may be different servers) for SIFT feature point extraction; the allocation rule generally assigns different data blocks to different computing resources, which enables synchronous extraction and a higher extraction speed;
then, in a designated computing node, taking the received data blocks as input based on an SIFT feature point extraction algorithm, and concurrently extracting SIFT feature points of each data block;
and finally, combining the SIFT feature points of all the data blocks belonging to the same image frame to obtain all the SIFT feature points of the monitoring video frame sequence.
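The five steps above amount to a scatter/extract/gather pipeline. A minimal runnable sketch of the distribution-and-merge logic follows; the real extractor would be a SIFT implementation (e.g. OpenCV's `cv2.SIFT_create()`), so a trivial stand-in extractor is used here only to keep the sketch self-contained:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_block_features(block_id, block):
    # Stand-in for a real SIFT extractor (e.g. OpenCV's cv2.SIFT_create());
    # it reports one dummy "feature" per block so the sketch stays runnable.
    return [(block_id, len(block))]

def distributed_extract(blocks, workers=4):
    # Scatter: submit each data block to a worker for concurrent extraction.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(extract_block_features, i, b)
                   for i, b in enumerate(blocks)]
        # Gather: merge per-block results back in block order, mirroring the
        # final step that recombines blocks belonging to the same image frame.
        features = []
        for f in futures:
            features.extend(f.result())
    return features
```

In a real deployment the workers would be separate compute nodes rather than threads, but the scatter/gather structure is the same.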
In the above technical solution, the specific method for the attribute dimension reduction module 3 to perform feature point attribute dimension reduction on all SIFT feature points of the monitored video frame sequence is as follows: based on the rough set attribute reduction method, an approximate attribute reduction method is applied to the extracted SIFT feature point vector data (128-dimensional feature points) to obtain a monitored video frame sequence with dimension-reduced feature points.
In the above technical solution, the specific method for calculating the similarity curve between adjacent frames of the monitored video frame sequence after the feature point attribute dimension reduction by the similarity calculation module 4 is as follows:
firstly, calculating the matching degree of feature points between adjacent frames according to the Euclidean distance between the feature points of the adjacent frames in the monitored video frame sequence after the dimension reduction of the feature points;
then, calculating the number of all matched feature points between adjacent frames according to a preset matching degree threshold;
then, based on the number of all matched feature points and the total number of feature points between adjacent frames, calculating the similarity between the adjacent frames;
and finally, calculating the similarity between adjacent frames of all adjacent frames in a section of monitoring video frame sequence to form a similarity curve between the adjacent frames.
More specifically, the similarity calculation module 4 computes the curve as follows:
first, for a certain feature point A of frame 1, the two feature points B and C nearest to A are found in frame 2, with Euclidean distances x1 and x2 (the distances AB and AC, respectively);
then, the matching degree of the feature points is calculated: a threshold α (generally 0.5) is set, and if x1/x2 ≤ α, A and B are considered a pair of mutually matched feature points; otherwise they are not matched;
then, the number of matching points between frame 1 and frame 2 is determined from the number of matched feature point pairs;
then, the inter-frame similarity is calculated as m/(n1+n2), where m is the total number of matched feature point pairs between the two adjacent frames and n1 and n2 are the numbers of feature points of frames 1 and 2, respectively;
and finally, calculating the similarity between adjacent frames of all adjacent frames in a section of monitoring video frame sequence to form a similarity curve between the adjacent frames.
In this technical scheme, the preprocessing of the monitoring video frame sequence includes decoloring the color video frames, converting the color images into gray images to obtain groups of gray image frames, so as to reduce the amount of data transmitted during the feature extraction process.
In the above technical solution, the frame SIFT feature point attribute dimension reduction based on rough set attribute reduction proceeds as follows:
firstly, inputting all SIFT feature points of a monitoring video frame sequence, namely feature vectors described as 128 dimensions;
then, an initial decision table is generated from the sample data: the video frame sequence yields a sample data set (e.g. frame number, feature point, feature vector) through SIFT feature extraction, and the decision table is built from the feature vectors and feature points (a feature point is a multi-dimensional vector; attribute reduction operates on the feature vector that makes up the feature point);
then, an approximate reduction algorithm based on condition and attribute consistency judgment is adopted to carry out attribute reduction on the SIFT feature points, unimportant attributes are deleted, and the important attributes are reserved;
and finally, outputting the reduced SIFT feature point dimension, namely finishing the SIFT feature point dimension reduction.
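The reduction steps can be illustrated with a simplified positive-region-based sketch. This is not the patent's exact condition/attribute consistency algorithm, only a common greedy rough-set variant: an attribute is dropped when removing it leaves the positive region (the set of objects whose equivalence class is decision-consistent) unchanged. The function names and the discretized toy decision table are illustrative:

```python
from collections import defaultdict

def positive_region_size(rows, attrs, decision):
    # Size of the positive region: objects whose equivalence class under
    # `attrs` is consistent (all members share one decision value).
    decisions = defaultdict(set)
    counts = defaultdict(int)
    for row in rows:
        key = tuple(row[a] for a in attrs)
        decisions[key].add(row[decision])
        counts[key] += 1
    return sum(counts[k] for k, d in decisions.items() if len(d) == 1)

def approximate_reduct(rows, cond_attrs, decision):
    # Greedy approximate reduction: drop an attribute whenever removing it
    # leaves the positive region (dependency degree) unchanged.
    full = positive_region_size(rows, cond_attrs, decision)
    reduct = list(cond_attrs)
    for a in cond_attrs:
        trial = [x for x in reduct if x != a]
        if trial and positive_region_size(rows, trial, decision) == full:
            reduct = trial
    return reduct
```

Applied to the 128-dimensional SIFT descriptors, the condition attributes would be the (discretized) descriptor components and the decision attribute a frame or feature label, keeping only the components that preserve discernibility.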
A video stream key frame extraction method based on rough set reduction and SIFT comprises the following steps:
step 1: converting monitoring video data in a preset time period into a corresponding monitoring video frame sequence;
step 2: extracting all SIFT feature points of the monitoring video frame sequence by adopting a feature extraction method based on distributed frame SIFT for the monitoring video frame sequence;
and step 3: performing feature point attribute dimension reduction on all SIFT feature points of the monitoring video frame sequence by adopting a rough set attribute reduction method;
and 4, step 4: calculating a similarity curve between adjacent frames of the monitored video frame sequence after the attribute dimension reduction of the characteristic points according to the number of matched characteristic points between the adjacent frames in the monitored video frame sequence;
and 5: and finding out inflection points in a similarity curve between adjacent frames by adopting a similarity segmentation algorithm between the adjacent frames based on a sliding window, taking the inflection points as key frames, and extracting.
In step 5, the inter-frame similarity of on-site monitoring video tends to be stable under normal conditions; the similarity mutates only when the content changes. When the content changes suddenly, the inter-frame similarity in the boundary area drops sharply, and after the change finishes, the inter-frame similarity stabilizes again. The key frames can therefore be identified with a sliding-window-based inter-frame similarity segmentation algorithm.
Referring to FIGS. 1 and 2, this embodiment comprises the following steps: monitoring video data are collected at the disaster site, and a video segment is extracted as the video frame sequence to be processed (frame 1, frame 2, ..., frame n), of length n; all feature points of the n video frames are extracted with the distributed SIFT feature extraction method; the feature points are reduced with the rough set attribute reduction method to determine the final feature points of each video frame; the inter-frame feature point matching degree is calculated and the inter-frame similarity computed from it, forming an inter-frame similarity curve; the key frames are identified and extracted from the inter-frame similarity curve with the sliding-window-based inter-frame similarity segmentation algorithm. The key technical points are the distributed frame SIFT feature extraction method, the feature point selection method based on rough set attribute reduction, and the sliding-window-based inter-frame similarity segmentation algorithm. The method is realized by the following steps:
collecting the monitoring video data of the emergency repair site, intercepting a section of video as a video section to be processed, and then extracting a frame sequence of the video section to form a section of image frame sequence with the length of n, wherein the resolution of each image frame is m x h.
Referring to the attached figure 3, the method for extracting the feature points by adopting the distributed SIFT feature extraction method comprises the following steps:
firstly, preprocessing the video frame sequence, i.e. decoloring the color images to form a gray image frame sequence of length n;
and dividing the data blocks. The method comprises the following steps of dividing a complete image frame into L data blocks by adopting an equal division method, wherein the division rule is as follows: if the image resolutions m and h are even numbers, the image frame can be divided into 2dA data block, d is an integer larger than 1 and is determined by the resolution of the image, and the larger the image, the larger d is; if at least one of m and h is odd, the row or column with odd resolution is reduced by one row or column, the rest data blocks are divided equally according to the previous rule, and the last data block is merged with the redundant row or column. And after the division is finished, numbering the data blocks in sequence according to the sequence of the front row and the rear row.
Allocate the data blocks. The different data blocks are distributed to specified computing nodes for processing and feature extraction;
and extracting sub-features. The received data blocks are used as input based on an SIFT feature point extraction algorithm, and the SIFT feature point extraction algorithm comprises the following basic steps: constructing a pyramid, detecting an extreme value, calculating the direction of the characteristic point and calculating the characteristic point, extracting the characteristic point of each data block in a concurrent manner, and outputting a result for each data block;
and (6) merging the features. Combining the feature points of all the data blocks belonging to the same image frame according to the coordinate positions of the feature points in the data blocks, and deleting the feature points overlapped in the middle to finally obtain the feature points of each image frame, as shown in fig. 4.
Select the frame SIFT feature points based on rough set attribute reduction: generate an initial decision table for the 128-dimensional feature point vector data extracted in the previous step, and perform dimension reduction with the approximate attribute reduction method based on condition-attribute consistency judgment, keeping the important attributes. After dimension reduction, 32-dimensional feature point vector data are obtained;
the interframe similarity calculation method based on interframe feature point matching comprises the following steps of:
for a certain feature point A of the frame 1, two feature points B and C which are closest to the feature point A are found in the frame 2, and Euclidean distances of the two feature points are x1And x2
The matching degree of the feature points is calculated by setting the threshold α to 0.5 if x1/x2When the alpha is less than or equal to alpha, the A and the B are considered to be a pair of feature points which are matched with each other, otherwise, the A and the B are not matched;
determining the number of matching points of the frame 1 and the frame 2 according to the matching logarithm among the characteristic points;
calculating the similarity between frames, wherein the similarity between frames is m/(n)1+n2) Where m is the total number of pairs of point-of-visit matches between two frames, n1And n2The number of characteristic points of frames 1 and 2 respectively;
for the above step loop operation, an inter-frame similarity curve of a video frame sequence with the length of n is obtained.
Extract the key frames. Key frame identification is performed with the sliding-window-based inter-frame similarity segmentation algorithm, as follows:
firstly, inputting all inter-frame similarity values in the order of the frames in the video sequence;
calculating the mean value and the standard deviation of the inter-frame similarity values over the whole sequence, recorded as u and δ;
inputting the size of the sliding window; this parameter is a fixed value, and an optimal value can be determined by repeated tuning;
sliding the window from left to right over the video sequence, and calculating the mean value and standard deviation of the inter-frame similarity within the window, recorded as u1 and δ1;
given maximum error rates u0 and δ0 for the mean and the standard deviation, calculate the mean error rate |u1 - u| / u and the standard deviation error rate |δ1 - δ| / δ, and compare them with u0 and δ0: if the former exceeds the latter, the last frame in the window is output as a video key frame; otherwise the window continues sliding rightward until its right side reaches the rightmost end of the video frame sequence.
And outputting all inflection points as key frames of the video segment, and finishing the algorithm.
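Under this reading of the algorithm (global statistics u, δ; per-window u1, δ1; thresholds u0, δ0 on the relative errors), a minimal sketch might look like the following. The exact error-rate formulas are not spelled out in the text, so the relative-deviation form used here is an assumption, as are the default threshold values:

```python
import statistics

def detect_keyframes(sims, window=5, u0=0.2, d0=0.5):
    # Global statistics over the whole similarity curve.
    u = statistics.mean(sims)
    delta = statistics.pstdev(sims)
    keys = []
    # Slide the window left to right over the curve.
    for start in range(len(sims) - window + 1):
        w = sims[start:start + window]
        u1 = statistics.mean(w)
        d1 = statistics.pstdev(w)
        # Relative deviation of window statistics from the global ones;
        # this formula is an assumption (not given explicitly in the text).
        mean_err = abs(u1 - u) / u if u else 0.0
        std_err = abs(d1 - delta) / delta if delta else 0.0
        if mean_err > u0 or std_err > d0:
            keys.append(start + window - 1)  # last frame in the window
    return keys
```

With illustrative thresholds, every window overlapping a sudden similarity drop is flagged; in practice the run of adjacent detections around one change point would be merged into a single key frame.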
In conclusion, a key frame extraction method based on rough set attribute reduction and the SIFT frame feature point extraction algorithm is established. By transmitting the monitoring video key frames in real time, the main video data are delivered in real time, achieving the effect of compressed video transmission. This increases the video data transmission speed at the emergency repair site, improves data processing efficiency on the center side, speeds up intelligent decision making, assists command decision generation at the repair site in real time, and realizes intelligent, digitized emergency repair command at the disaster site.
Details not described in this specification are well known to those skilled in the art.

Claims (8)

1. A video stream key frame extraction system based on rough set reduction and SIFT is characterized in that: the system comprises a frame sequence extraction module (1), a feature point extraction module (2), an attribute dimension reduction module (3), a similarity calculation module (4) and a key frame identification module (5);
the frame sequence extraction module (1) is used for converting monitoring video data in a preset time period into a corresponding monitoring video frame sequence;
the feature point extraction module (2) is used for extracting all SIFT feature points of the monitoring video frame sequence by adopting a feature extraction method based on distributed frame SIFT for the monitoring video frame sequence;
the attribute dimensionality reduction module (3) is used for performing attribute dimensionality reduction on feature points of all SIFT feature points of the monitored video frame sequence by adopting a rough set attribute reduction method;
the similarity calculation module (4) is used for calculating a similarity curve between adjacent frames of the monitored video frame sequence after the dimension reduction of the attribute of the feature points according to the number of matched feature points between the adjacent frames in the monitored video frame sequence;
the key frame identification module (5) is used for finding out an inflection point in a similarity curve between adjacent frames by adopting a similarity segmentation algorithm between the adjacent frames based on a sliding window, and taking the inflection point as a key frame.
2. The rough set reduction and SIFT based video stream key frame extraction system of claim 1, wherein: the frame sequence extraction module (1) selects and downloads an on-site monitoring video, intercepts a section of it as original video data, scales the images, and numbers the video frames sequentially in time order to obtain a video frame image sequence of uniform size.
3. The rough set reduction and SIFT based video stream key frame extraction system of claim 1, wherein the specific method for the feature point extraction module (2) to extract all SIFT feature points of the monitoring video frame sequence is as follows:
firstly, preprocessing an input monitoring video frame sequence;
secondly, dividing the preprocessed monitoring video frame sequence into a plurality of data blocks by adopting an equal division data block division method;
then, distributing different data blocks to specified computing nodes for SIFT feature point extraction;
then, in a designated computing node, taking the received data blocks as input based on an SIFT feature point extraction algorithm, and concurrently extracting SIFT feature points of each data block;
and finally, combining the SIFT feature points of all the data blocks belonging to the same image frame to obtain all the SIFT feature points of the monitoring video frame sequence.
4. The rough set reduction and SIFT based video stream key frame extraction system of claim 1, wherein the attribute dimension reduction module (3) adopts a rough set attribute reduction method to perform feature point attribute dimension reduction on all SIFT feature points of the monitored video frame sequence, specifically: based on the rough set attribute reduction method, an approximate attribute reduction method is applied to the extracted SIFT feature point vector data to obtain a monitored video frame sequence with dimension-reduced feature points.
5. The rough set reduction and SIFT based video stream key frame extraction system of claim 1, wherein the specific method for the similarity calculation module (4) to calculate the similarity curve between adjacent frames of the monitored video frame sequence after feature point attribute dimension reduction is as follows:
firstly, calculating the matching degree of feature points between adjacent frames according to the Euclidean distance between the feature points of the adjacent frames in the monitored video frame sequence after the dimension reduction of the feature points;
then, calculating the number of all matched feature points between adjacent frames according to a preset matching degree threshold;
then, based on the number of all matched feature points and the total number of feature points between adjacent frames, calculating the similarity between the adjacent frames;
and finally, calculating the similarity between adjacent frames of all adjacent frames in a section of monitoring video frame sequence to form a similarity curve between the adjacent frames.
6. The rough set reduction and SIFT based video stream key frame extraction system of claim 3, wherein: the preprocessing of the monitoring video frame sequence comprises decoloring the color video frames, converting the color images into gray images to obtain groups of gray image frames.
7. The rough set reduction and SIFT based video stream key frame extraction system of claim 4, wherein the feature-point attribute dimension reduction proceeds as follows:
first, all SIFT feature points of the monitored video frame sequence are input;
next, an initial decision table is generated from the feature vectors and feature points in the sample data;
then, attribute reduction is performed on the SIFT feature points with an approximate reduction algorithm based on condition-attribute consistency judgment;
finally, the reduced SIFT feature point dimensions are output, completing the SIFT feature-point dimension reduction.
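The approximate reduction of claim 7 can be illustrated on a toy decision table. This is a generic greedy sketch of rough-set approximate attribute reduction, not the patent's algorithm: the table encoding, the consistency measure, and the tolerance `epsilon` are all assumptions.

```python
def consistency(rows, attrs):
    """Fraction of rows whose condition-attribute projection uniquely
    determines the decision. rows: list of (condition_tuple, decision);
    attrs: indices of the condition attributes kept."""
    seen = {}
    for cond, dec in rows:
        seen.setdefault(tuple(cond[i] for i in attrs), set()).add(dec)
    consistent = sum(1 for cond, dec in rows
                     if len(seen[tuple(cond[i] for i in attrs)]) == 1)
    return consistent / len(rows)

def approximate_reduct(rows, n_attrs, epsilon=0.0):
    """Greedily drop condition attributes while the decision table stays
    within epsilon of its original consistency (an approximate reduct)."""
    kept = list(range(n_attrs))
    base = consistency(rows, kept)
    for a in list(kept):
        trial = [i for i in kept if i != a]
        if trial and consistency(rows, trial) >= base - epsilon:
            kept = trial
    return kept
```

On a table whose decision depends only on the second attribute, the reduct drops the redundant first one; on SIFT data each descriptor dimension would play the role of a condition attribute.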
8. A video stream key frame extraction method based on rough set reduction and SIFT, characterized by comprising the following steps:
Step 1: converting the monitoring video data in a preset time period into a corresponding monitored video frame sequence;
Step 2: extracting all SIFT feature points of the monitored video frame sequence with a feature extraction method based on distributed-frame SIFT;
Step 3: performing feature-point attribute dimension reduction on all SIFT feature points of the monitored video frame sequence with a rough set attribute reduction method;
Step 4: calculating the inter-frame similarity curve of the dimension-reduced monitored video frame sequence from the number of matched feature points between adjacent frames;
Step 5: finding the inflection points of the inter-frame similarity curve with a sliding-window inter-frame similarity segmentation algorithm, and extracting the frames at those inflection points as key frames.
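Step 5's sliding-window inflection detection can be sketched as a local-minimum search over the similarity curve. The window size, the strict-minimum test, and the below-window-mean criterion are illustrative choices, not details disclosed in the patent.

```python
import numpy as np

def find_key_frames(similarity, window=5):
    """Return frame indices at dips ("inflection points") of the
    inter-frame similarity curve, found within a sliding window.

    similarity[i] is the similarity between frames i and i+1, so a dip
    at position i marks frame i+1 as the start of new content.
    """
    sim = np.asarray(similarity, dtype=float)
    half = window // 2
    keys = []
    for i in range(len(sim)):
        lo, hi = max(0, i - half), min(len(sim), i + half + 1)
        # A key frame where this point is the window's minimum and sits
        # clearly below the window's average level.
        if sim[i] == sim[lo:hi].min() and sim[i] < sim[lo:hi].mean():
            keys.append(i + 1)
    return keys
```

A flat curve yields no key frames; each sharp dip yields exactly one, which matches the intent of extracting one representative frame per shot change.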
CN202110449176.5A 2021-04-25 2021-04-25 Video stream key frame extraction system and method based on rough set reduction and SIFT Active CN113221674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110449176.5A CN113221674B (en) 2021-04-25 2021-04-25 Video stream key frame extraction system and method based on rough set reduction and SIFT


Publications (2)

Publication Number Publication Date
CN113221674A true CN113221674A (en) 2021-08-06
CN113221674B CN113221674B (en) 2023-01-24

Family

ID=77088888


Country Status (1)

Country Link
CN (1) CN113221674B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103559196A (en) * 2013-09-23 2014-02-05 浙江大学 Video retrieval method based on multi-core canonical correlation analysis
CN104954791A (en) * 2015-07-01 2015-09-30 中国矿业大学 Method for selecting key frame from wireless distributed video coding for mine in real time
WO2017000465A1 (en) * 2015-07-01 2017-01-05 中国矿业大学 Method for real-time selection of key frames when mining wireless distributed video coding
CN106203277A (en) * 2016-06-28 2016-12-07 华南理工大学 Fixed lens real-time monitor video feature extracting method based on SIFT feature cluster
CN109190637A (en) * 2018-07-31 2019-01-11 北京交通大学 A kind of image characteristic extracting method
CN109828996A (en) * 2018-12-21 2019-05-31 西安交通大学 A kind of Incomplete data set rapid attribute reduction
CN110826491A (en) * 2019-11-07 2020-02-21 北京工业大学 Video key frame detection method based on cascading manual features and depth features

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIA Dong et al.: "Image matching algorithm based on rough set approximate reduction and SIFT features", Journal of Signal Processing *
ZHU Yaling et al.: "A key frame extraction method based on RS and I-frames", Computer Development & Applications *
MIN D. et al.: "A brief analysis of current major video key frame extraction techniques", Journal of Guangxi Police Academy *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116665101A (en) * 2023-05-30 2023-08-29 石家庄铁道大学 Method for extracting key frames of monitoring video based on contourlet transformation
CN116665101B (en) * 2023-05-30 2024-01-26 石家庄铁道大学 Method for extracting key frames of monitoring video based on contourlet transformation
CN117115718A (en) * 2023-10-20 2023-11-24 思创数码科技股份有限公司 Government affair video data processing method, system and computer readable storage medium
CN117115718B (en) * 2023-10-20 2024-01-09 思创数码科技股份有限公司 Government affair video data processing method, system and computer readable storage medium

Also Published As

Publication number Publication date
CN113221674B (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN107682319B (en) Enhanced angle anomaly factor-based data flow anomaly detection and multi-verification method
CN113221674B (en) Video stream key frame extraction system and method based on rough set reduction and SIFT
CN110909658A (en) Method for recognizing human body behaviors in video based on double-current convolutional network
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN109086777B (en) Saliency map refining method based on global pixel characteristics
CN109948721B (en) Video scene classification method based on video description
CN110689482A (en) Face super-resolution method based on supervised pixel-by-pixel generation countermeasure network
CN111008978B (en) Video scene segmentation method based on deep learning
CN111462261A (en) Fast CU partition and intra decision method for H.266/VVC
CN108154158B (en) Building image segmentation method for augmented reality application
CN116055413B (en) Tunnel network anomaly identification method based on cloud edge cooperation
CN107808391B (en) Video dynamic target extraction method based on feature selection and smooth representation clustering
US6594375B1 (en) Image processing apparatus, image processing method, and storage medium
CN112770116B (en) Method for extracting video key frame by using video compression coding information
CN112348033B (en) Collaborative saliency target detection method
CN111741313B (en) 3D-HEVC rapid CU segmentation method based on image entropy K-means clustering
CN106022310B (en) Human body behavior identification method based on HTG-HOG and STG characteristics
CN109829377A (en) A kind of pedestrian's recognition methods again based on depth cosine metric learning
CN114119577B (en) High-speed rail tunnel leakage cable buckle detection method
CN112446245A (en) Efficient motion characterization method and device based on small displacement of motion boundary
CN113205010B (en) Intelligent disaster-exploration on-site video frame efficient compression system and method based on target clustering
CN110460840B (en) Shot boundary detection method based on three-dimensional dense network
Liang et al. Object tracking algorithm based on multi-channel extraction of ahlbp texture features
CN111832336B (en) Improved C3D video behavior detection method
WO2020168526A1 (en) Image encoding method and device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant