CN105357594B - Massive video summary generation method based on clustering and an H264 video condensation algorithm - Google Patents


Info

Publication number
CN105357594B
CN105357594B (application CN201510802199.4A)
Authority
CN
China
Prior art keywords
video
frame
background
target
segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510802199.4A
Other languages
Chinese (zh)
Other versions
CN105357594A (en)
Inventor
张真
刘鹏
杨雪松
曹骝
秦恩泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Innovative Data Technologies Inc
Original Assignee
Nanjing Innovative Data Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Innovative Data Technologies Inc filed Critical Nanjing Innovative Data Technologies Inc
Priority to CN201510802199.4A priority Critical patent/CN105357594B/en
Publication of CN105357594A publication Critical patent/CN105357594A/en
Application granted granted Critical
Publication of CN105357594B publication Critical patent/CN105357594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a massive video summary generation method based on clustering and an H264 video condensation algorithm, comprising the following steps: select an original video and cut it into n segments of approximately equal length, where the coded format is H264 and n is a natural number; decode each segment after cutting, obtain foreground targets from the motion estimation and the background image, refine the detection rate of each segment with a sparse-optical-flow-based false-alarm deletion and missed-detection repair algorithm, and update the background image; treat each single segment containing motion information as a condensation unit, condense it, and splice the condensed segments to generate one complete video summary.

Description

Massive video summary generation method based on clustering and an H264 video condensation algorithm
Technical field
The invention belongs to the technical field of massive video data condensation, and in particular relates to a massive video summary generation method based on clustering and an H264 video condensation algorithm.
Background art
It is well known that video surveillance systems are penetrating all kinds of public settings and play an increasingly important role in many industries, such as security, transportation and industrial production. As the number of surveillance cameras grows rapidly, massive amounts of video data are generated every day; at present these videos are mostly browsed manually to extract the meaningful information in them.
On the one hand, the more video there is, the more personnel are required; on the other hand, manual processing becomes less and less efficient as the data grows, and the results inevitably contain omissions and errors, while the processing cost remains considerable. Video summarization technology has therefore emerged: it automatically retains meaningful video data and discards useless data, so that only the meaningful portion needs to be browsed manually, effectively reducing cost.
Video summarization is also known as video condensation. The typical processing pipeline of this technology is: first obtain foreground objects by background modeling; then preserve their motion trajectories with a tracking algorithm; finally combine the object trajectories in some manner and copy them into the background image to form the condensed video. However, although the condensed video is generally much shorter than the original, the processing itself is slow: for a 10-hour HD video, a common background modeling algorithm such as GMM runs at roughly real-time playback speed, so condensing the entire video still consumes about 10 hours, and efficiency is not substantially improved.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the shortcomings of the prior art and provide a massive video summary generation method based on clustering and an H264 video condensation algorithm.
To solve the above technical problem, the present invention provides a massive video summary generation method based on clustering and an H264 video condensation algorithm, comprising the following steps:
1. Select an original video and cut it into n segments of approximately equal length, where the coded format is H264 and n is a natural number.
2. Decode each segment after cutting, obtain foreground targets from the motion estimation and the background image, refine the detection rate of each segment with a sparse-optical-flow-based false-alarm deletion and missed-detection repair algorithm, and update the background image.
3. Treat each single segment containing motion information as a condensation unit, condense it, and splice the condensed segments after condensation to generate one complete video summary.
Step 1 specifically comprises the following processing:
Suppose the i-th frame of the original video is the video cut point set by the user. Define the frame range F ∈ [i-k×f, i+k×f], where k is the iteration count and f is a constant. Search within this range for a frame j such that frame j contains no foreground target and |i-j| is minimal.
If within the range [i-k×f, i+k×f] the motion-estimation values of m consecutive frames are all below the threshold Tmv, those m frames are considered to contain no foreground target, i.e. they are all background frames, and a background image is thereby obtained. If there is no background frame in F, set k = k+1 and go back to step 1. If frame j is a background frame (j ∈ [i-k×f, i+k×f]), choose j so that |i-j| is minimal; j is then the cut point of the video, and the loop exits.
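The cut-point search above can be sketched in Python. The function name, the representation of per-frame motion-estimation values as a plain list, and the exact window-widening policy are illustrative assumptions, not the patent's implementation:

```python
def find_cut_point(motion, i, k_max, f, m, t_mv):
    """Search near the user-chosen cut point i for the closest frame j that
    starts a run of m consecutive frames whose motion-estimation value is
    below t_mv (i.e. background frames).  `motion` is a per-frame list of
    motion-estimation magnitudes.  Returns j, or None if no background frame
    exists within the widest search window."""
    n = len(motion)
    for k in range(1, k_max + 1):            # widen the window each iteration
        lo, hi = max(0, i - k * f), min(n - 1, i + k * f)
        best = None
        for j in range(lo, hi - m + 2):
            if all(motion[j + t] < t_mv for t in range(m)):
                # the whole run is background; its first frame is a candidate
                if best is None or abs(i - j) < abs(i - best):
                    best = j
        if best is not None:
            return best
    return None
```

For example, with motion values that stay high except for a quiet run at frames 10-12, the search widens until that run enters the window and returns frame 10 as the cut point.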
Step 2 specifically comprises the following processing: if the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is thereby obtained. For a P frame or B frame, first judge whether the motion estimation in the current frame exceeds the threshold Tmv; if it does, convert both the current image and the background image to grayscale and subtract corresponding pixels. If the absolute value of a difference exceeds a threshold Tdiff, the result pixel is assigned 255, otherwise 0, which yields a binary image M.
If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way. Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the length and the width of the rectangle exceed a threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
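A minimal sketch of the differencing and extraction step, using plain Python lists in place of real image buffers. The morphological closing is omitted for brevity (in practice it would precede component extraction, e.g. with OpenCV's `morphologyEx`), and a flood fill stands in for contour extraction; all names are illustrative:

```python
from collections import deque

def binarize_diff(cur, bg, t_diff):
    """Pixelwise threshold of |cur - bg|: 255 where the absolute difference
    exceeds t_diff, 0 elsewhere.  Images are 2-D lists of gray values."""
    return [[255 if abs(c - b) > t_diff else 0 for c, b in zip(cr, br)]
            for cr, br in zip(cur, bg)]

def bounding_rects(mask, t_len):
    """Flood-fill the connected components of the binary mask and keep the
    bounding rectangle (x, y, w, h) of each component whose width and height
    both exceed t_len; smaller blobs are treated as noise."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0][x0] and not seen[y0][x0]:
                xmin = xmax = x0
                ymin = ymax = y0
                q = deque([(y0, x0)])
                seen[y0][x0] = True
                while q:                      # 4-connected flood fill
                    y, x = q.popleft()
                    xmin, xmax = min(xmin, x), max(xmax, x)
                    ymin, ymax = min(ymin, y), max(ymax, y)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                rw, rh = xmax - xmin + 1, ymax - ymin + 1
                if rw > t_len and rh > t_len:   # size filter against noise
                    rects.append((xmin, ymin, rw, rh))
    return rects
```

A 3×3 moving blob survives the Tlen filter while a single changed pixel is discarded as noise.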
Target tracking: compute the pairwise rectangle overlap between each target of the current frame and each target of the next frame. If the maximum overlap between a target and the next frame exceeds a threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image boundary, the object is assumed to have left the field of view or the region of interest in the next frame, and no further tracking is needed.
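The overlap-based matching can be sketched as follows. The patent does not define its "registration" measure precisely; intersection-over-union is assumed here as one reasonable choice, and the greedy best-match policy is likewise an assumption:

```python
def iou(a, b):
    """Overlap of two rectangles given as (x, y, w, h), measured as
    intersection-over-union (an assumed stand-in for the patent's
    'registration' measure)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / float(aw * ah + bw * bh - inter) if inter else 0.0

def match_targets(cur, nxt, t_overlap):
    """For every rectangle of the current frame, pick the next-frame
    rectangle with maximum overlap; the match counts as a successful track
    only above t_overlap.  Returns {index_in_cur: index_in_nxt}."""
    matches = {}
    for i, a in enumerate(cur):
        best_j, best_v = None, 0.0
        for j, b in enumerate(nxt):
            v = iou(a, b)
            if v > best_v:
                best_j, best_v = j, v
        if best_j is not None and best_v > t_overlap:
            matches[i] = best_j
    return matches
```

A target that barely shifts between frames matches its next-frame rectangle; a target with no overlapping counterpart is left unmatched and falls through to the missed-detection repair below.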
False-alarm removal and missed-detection repair: if a target is not tracked successfully in the next frame, a missed detection is assumed. For a missed detection, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Then track the remaining corners with an optical-flow method, compute the corners' average horizontal and vertical displacements dx and dy, and translate the current target by dx and dy pixels to obtain its position in the next frame. If a target is missed over consecutive frames and the number of consecutively missed frames exceeds Tm, the target is considered a false alarm and is deleted.
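The repair step can be sketched as below. The corner positions and their displacements are assumed to be supplied by a Harris detector and a sparse optical-flow tracker (e.g. pyramidal Lucas-Kanade); only the mask filtering and the mean-displacement translation from the patent are shown:

```python
def repair_missed_detection(rect, corners, flows, mask):
    """Estimate a missed target's next-frame position from corner flow.
    `corners` are (x, y) points inside `rect`, `flows` the per-corner
    (dx, dy) displacements, `mask` the binary image M.  Corners whose mask
    pixel is 0 are rejected, then the rectangle is translated by the mean
    displacement of the surviving corners."""
    kept = [(f, c) for c, f in zip(corners, flows) if mask[c[1]][c[0]] != 0]
    if not kept:
        return rect                     # nothing to go on; keep position
    dx = sum(f[0] for f, _ in kept) / len(kept)
    dy = sum(f[1] for f, _ in kept) / len(kept)
    x, y, w, h = rect
    return (int(round(x + dx)), int(round(y + dy)), w, h)
```

Rejecting corners that fall on background pixels keeps a wildly wrong flow vector (here the corner at (5, 5)) from dragging the rectangle off target.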
Update of the background image Bg: after the above processing steps it is known which regions of the image belong to the foreground and which to the background, and the background update applies only to points in non-target regions. If a pixel pxl of the current frame Fcur does not lie inside any target rectangle, the background pixel is replaced by the mean of Fcur and Bg at the corresponding coordinates. The coordinates, sizes, sub-images and motion-segment information of the detected foreground objects are saved.
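The masked running-average update is a one-liner per pixel; a minimal sketch with list-based images (function name illustrative):

```python
def update_background(bg, fcur, rects):
    """Background update: every pixel outside all target rectangles is
    replaced by the mean of the current frame and the old background;
    pixels inside a target rectangle are left untouched."""
    def inside(x, y):
        return any(rx <= x < rx + rw and ry <= y < ry + rh
                   for rx, ry, rw, rh in rects)
    return [[bg[y][x] if inside(x, y) else (bg[y][x] + fcur[y][x]) / 2.0
             for x in range(len(bg[0]))]
            for y in range(len(bg))]
```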
Step 3 comprises the following specific processing:
(1) Background modeling based on H264: save the coordinates, sizes and sub-images of the detected foreground objects, together with information such as the start frame and end frame of each motion segment. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the number of segments exceeds 1, go to step (2); if the last frame has been reached and the number of segments is 0, exit the program.
(2) Add the first segment to set A and copy all target images in the segment's first frame into the background image.
(3) For each remaining segment in turn, judge whether it satisfies the user-set values with respect to the segments in set A in two respects, the average and the maximum object overlap; if satisfied, add the segment to set A.
(4) Copy the target images of all segments in set A, aligned by frame number, into the background image. If a segment has finished copying its last frame, delete it from set A. If all segments in set A have been copied and no segments remain, go to step (1); otherwise move to the next frame and go to step (3).
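The grouping logic of steps (2)-(4) can be sketched as a greedy batching loop. The admission direction (a segment is admitted when its overlap against set A stays *below* the user-set limits, so the admitted objects can play simultaneously without colliding) is an interpretation of step (3), and `overlap` is an assumed user-supplied function:

```python
def condense(segments, max_avg_overlap, max_peak_overlap, overlap):
    """Greedy grouping of motion segments: seed set A with the first pending
    segment, then admit each remaining segment whose average and maximum
    object overlap against the segments already in A stay below the
    user-set limits.  `overlap(s, t)` returns per-frame object-overlap
    values between two segments.  Returns the groups in rendering order."""
    pending = list(segments)
    groups = []
    while pending:
        group = [pending.pop(0)]          # step (2): seed set A
        rest = []
        for seg in pending:               # step (3): admission test
            vals = [v for g in group for v in overlap(seg, g)]
            if vals and (sum(vals) / len(vals) > max_avg_overlap
                         or max(vals) > max_peak_overlap):
                rest.append(seg)          # clashes; defer to a later batch
            else:
                group.append(seg)
        groups.append(group)              # step (4): render this batch
        pending = rest
    return groups
```

Segments whose objects would collide end up in different batches, which is what suppresses the flicker and adhesion artifacts mentioned in the beneficial effects.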
As a preferred technical solution of the present invention:
Further, in the aforementioned massive video summary generation method based on clustering and an H264 video condensation algorithm, the condensation operations of steps 2 and 3 are carried out on the n segments of approximately equal length in parallel, independently of one another.
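Because the segments are mutually independent, the per-segment work can be fanned out with a worker pool. The sketch below uses a thread pool for simplicity; the patent's experiments use one independent process per segment, and `condense_segment` is a placeholder for the real steps-2-and-3 work:

```python
from concurrent.futures import ThreadPoolExecutor

def condense_segment(segment):
    """Placeholder for the per-segment work of steps 2 and 3 (decoding,
    foreground detection, condensation); here it just tags the segment."""
    return ("condensed", segment)

def summarize(segments, workers=4):
    """Condense the n independent segments concurrently, then splice the
    results in their original order into one complete summary."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        condensed = list(pool.map(condense_segment, segments))
    return condensed        # splicing step: concatenate in order
```

`pool.map` preserves input order, so the final splice needs no re-sorting.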
The beneficial effects of the invention are:
1. The massive video summary generation method based on clustering and an H264 video condensation algorithm designed by the present invention greatly improves the efficiency of video condensation by means of parallel processing.
2. The massive video summary generation method based on clustering and an H264 video condensation algorithm designed by the present invention can solve the two classes of screen flicker caused by missed detections and object adhesion.
Description of the drawings
Fig. 1 is the overall flowchart of the clustering-based video summary generation method of the present invention;
Fig. 2 is the detailed flowchart of the clustering-based video summary generation method of the present invention;
Fig. 3 is a schematic diagram of the hardware devices of the clustering-based video summary generation method of the present invention;
Fig. 4 is a schematic diagram of motion-segment condensation in the clustering-based video summary generation method of the present invention;
Fig. 5 shows the relation between the total condensation time and the number of parallel processes in the present invention.
Specific embodiments
As shown in Figs. 1-4, this embodiment provides a massive video summary generation method based on clustering and an H264 video condensation algorithm, comprising the following steps:
1. Select an original video and cut it into n segments of approximately equal length, where the coded format is H264 and n is a natural number.
2. Decode each segment after cutting, obtain foreground targets from the motion estimation and the background image, refine the detection rate of each segment with a sparse-optical-flow-based false-alarm deletion and missed-detection repair algorithm, and update the background image.
3. Treat each single segment containing motion information as a condensation unit, condense it, and splice the condensed segments after condensation to generate one complete video summary.
Step 1 specifically comprises the following processing:
Suppose the i-th frame of the original video is the video cut point set by the user. Define the frame range F ∈ [i-k×f, i+k×f], where k is the iteration count and f is a constant. Search within this range for a frame j such that frame j contains no foreground target and |i-j| is minimal.
If within the range [i-k×f, i+k×f] the motion-estimation values of m consecutive frames are all below the threshold Tmv, those m frames are considered to contain no foreground target, i.e. they are all background frames, and a background image is thereby obtained. If there is no background frame in F, set k = k+1 and go back to step 1. If frame j is a background frame (j ∈ [i-k×f, i+k×f]), choose j so that |i-j| is minimal; j is then the cut point of the video, and the loop exits.
Step 2 specifically comprises the following processing: if the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is thereby obtained. For a P frame or B frame, first judge whether the motion estimation in the current frame exceeds the threshold Tmv; if it does, convert both the current image and the background image to grayscale and subtract corresponding pixels. If the absolute value of a difference exceeds a threshold Tdiff, the result pixel is assigned 255, otherwise 0, which yields a binary image M.
If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way. Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the length and the width of the rectangle exceed a threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
Target tracking: compute the pairwise rectangle overlap between each target of the current frame and each target of the next frame. If the maximum overlap between a target and the next frame exceeds a threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image boundary, the object is assumed to have left the field of view or the region of interest in the next frame, and no further tracking is needed.
False-alarm removal and missed-detection repair: if a target is not tracked successfully in the next frame, a missed detection is assumed. For a missed detection, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Then track the remaining corners with an optical-flow method, compute the corners' average horizontal and vertical displacements dx and dy, and translate the current target by dx and dy pixels to obtain its position in the next frame. If a target is missed over consecutive frames and the number of consecutively missed frames exceeds Tm, the target is considered a false alarm and is deleted.
Update of the background image Bg: after the above processing steps it is known which regions of the image belong to the foreground and which to the background, and the background update applies only to points in non-target regions. If a pixel pxl of the current frame Fcur does not lie inside any target rectangle, the background pixel is replaced by the mean of Fcur and Bg at the corresponding coordinates. The coordinates, sizes, sub-images and motion-segment information of the detected foreground objects are saved.
Step 3 comprises the following specific processing:
(1) Background modeling based on H264: save the coordinates, sizes and sub-images of the detected foreground objects, together with information such as the start frame and end frame of each motion segment. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the number of segments exceeds 1, go to step (2); if the last frame has been reached and the number of segments is 0, exit the program.
(2) Add the first segment to set A and copy all target images in the segment's first frame into the background image.
(3) For each remaining segment in turn, judge whether it satisfies the user-set values with respect to the segments in set A in two respects, the average and the maximum object overlap; if satisfied, add the segment to set A.
(4) Copy the target images of all segments in set A, aligned by frame number, into the background image. If a segment has finished copying its last frame, delete it from set A. If all segments in set A have been copied and no segments remain, go to step (1); otherwise move to the next frame and go to step (3).
Further, in the aforementioned massive video summary generation method based on clustering and an H264 video condensation algorithm, the condensation operations of steps 2 and 3 are carried out on the n segments of approximately equal length in parallel, independently of one another.
In an actual experiment, an 8-minute surveillance video with resolution 1280 × 720 was selected and processed with mixture-of-Gaussians (GMM) background modeling, ViBe background modeling, and the modeling algorithm of the present invention (the video was cut into 5 sections, each condensed by one independent process). The total time of the invention comprises: 1. the video cutting time; 2. the condensation time of each section; 3. the time spent splicing the sections after condensation. The performance comparison is shown in Table 1.
Table 1

                          GMM modeling    ViBe modeling    Algorithm of the invention
Average time per frame    41.46 ms        22.45 ms         12.24 ms
Total time                550.66 s        316.44 s         47.82 s
As can be seen from Table 1, compared with the prior art, the massive video summary generation method based on clustering and an H264 video condensation algorithm designed by the invention improves efficiency greatly, by 100% or more. Fig. 5 shows, for a selected 100-minute video, the relation between the total condensation time and the number of parallel processes.
The above embodiment merely illustrates the technical idea of the present invention and cannot limit its protection scope; any change made on the basis of the technical solution according to the technical idea proposed by the present invention falls within the protection scope of the present invention.

Claims (2)

1. A massive video summary generation method based on clustering and an H264 video condensation algorithm, characterized by comprising the following steps:
1. Select an original video and cut it into n segments of approximately equal length, where the coded format is H264 and n is a natural number;
2. Decode each segment after cutting, obtain foreground targets from the motion estimation and the background image, refine the detection rate of each segment with a sparse-optical-flow-based false-alarm deletion and missed-detection repair algorithm, and update the background image;
3. Treat each single segment containing motion information as a condensation unit, condense it, and splice the condensed segments after condensation to generate one complete video summary;
said step 1 specifically comprising the following processing:
Suppose the i-th frame of the original video is the video cut point set by the user. Define the frame range F ∈ [i-k×f, i+k×f], where k is the iteration count and f is a constant. Search within this range for a frame j such that frame j contains no foreground target and |i-j| is minimal.
If within the range [i-k×f, i+k×f] the motion-estimation values of m consecutive frames are all below the threshold Tmv, those m frames are considered to contain no foreground target, i.e. they are all background frames, and a background image is thereby obtained. If there is no background frame in F, set k = k+1 and go back to step 1. If frame j is a background frame (j ∈ [i-k×f, i+k×f]), choose j so that |i-j| is minimal; j is then the cut point of the video, and the loop exits;
said step 2 specifically comprising the following processing: if the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is thereby obtained. For a P frame or B frame, first judge whether the motion estimation in the current frame exceeds the threshold Tmv; if it does, convert both the current image and the background image to grayscale and subtract corresponding pixels. If the absolute value of a difference exceeds a threshold Tdiff, the result pixel is assigned 255, otherwise 0, which yields a binary image M.
If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way. Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the length and the width of the rectangle exceed a threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
Target tracking: compute the pairwise rectangle overlap between each target of the current frame and each target of the next frame. If the maximum overlap between a target and the next frame exceeds a threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image boundary, the object is assumed to have left the field of view or the region of interest in the next frame, and no further tracking is needed.
False-alarm removal and missed-detection repair: if a target is not tracked successfully in the next frame, a missed detection is assumed. For a missed detection, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Then track the remaining corners with an optical-flow method, compute the corners' average horizontal and vertical displacements dx and dy, and translate the current target by dx and dy pixels to obtain its position in the next frame. If a target is missed over consecutive frames and the number of consecutively missed frames exceeds Tm, the target is considered a false alarm and is deleted.
Update of the background image Bg: after the above processing steps it is known which regions of the image belong to the foreground and which to the background, and the background update applies only to points in non-target regions. If a pixel pxl of the current frame Fcur does not lie inside any target rectangle, the background pixel is replaced by the mean of Fcur and Bg at the corresponding coordinates. The coordinates, sizes, sub-images and motion-segment information of the detected foreground objects are saved;
said step 3 comprising the following specific processing:
(1) Background modeling based on H264: save the coordinates, sizes and sub-images of the detected foreground objects, together with information such as the start frame and end frame of each motion segment. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the number of segments exceeds 1, go to step (2); if the last frame has been reached and the number of segments is 0, exit the program.
(2) Add the first segment to set A and copy all target images in the segment's first frame into the background image.
(3) For each remaining segment in turn, judge whether it satisfies the user-set values with respect to the segments in set A in two respects, the average and the maximum object overlap; if satisfied, add the segment to set A.
(4) Copy the target images of all segments in set A, aligned by frame number, into the background image. If a segment has finished copying its last frame, delete it from set A. If all segments in set A have been copied and no segments remain, go to step (1); otherwise move to the next frame and go to step (3).
2. The massive video summary generation method based on clustering and an H264 video condensation algorithm according to claim 1, characterized in that the condensation operations of steps 2 and 3 are carried out on the n segments of approximately equal length in parallel, independently of one another.
CN201510802199.4A 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video condensation algorithm Active CN105357594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510802199.4A CN105357594B (en) 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video condensation algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510802199.4A CN105357594B (en) 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video condensation algorithm

Publications (2)

Publication Number Publication Date
CN105357594A CN105357594A (en) 2016-02-24
CN105357594B true CN105357594B (en) 2018-08-31

Family

ID=55333431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510802199.4A Active CN105357594B (en) 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video condensation algorithm

Country Status (1)

Country Link
CN (1) CN105357594B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657626B (en) * 2016-07-25 2021-06-01 浙江宇视科技有限公司 Method and device for detecting moving target
CN107018367A (en) * 2017-04-11 2017-08-04 深圳市粮食集团有限公司 A kind of method and system for implementing grain monitoring
CN107360476B (en) * 2017-08-31 2019-09-20 苏州科达科技股份有限公司 Video abstraction generating method and device
CN107943837B (en) * 2017-10-27 2022-09-30 江苏理工学院 Key-framed video abstract generation method for foreground target
CN110996169B (en) * 2019-07-12 2022-03-01 北京达佳互联信息技术有限公司 Method, device, electronic equipment and computer-readable storage medium for clipping video
CN113051415B (en) * 2019-12-27 2023-02-21 浙江宇视科技有限公司 Image storage method, device, equipment and storage medium
CN111526434B (en) * 2020-04-24 2021-05-18 西北工业大学 Converter-based video abstraction method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN104284057A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Video processing method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BRPI0620497B1 (en) * 2005-11-15 2018-09-25 Yissum Research Development Company Of The Hebrew Univ Of Jerusalem method for creating a video synopsis, and system for transforming a source sequence of video frames from a first dynamic scene into a synopsis sequence of at least two video frames illustrating a second dynamic scene.

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN104284057A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Video processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Application and research of object-separation-based video condensation technology in the security industry"; 马婷婷; Computer Knowledge and Technology (电脑知识与技术); 2015-01-09; Vol. 10, No. 35; 8529-8530 *
"Application of intelligent video condensation and retrieval technology in substation monitoring"; 周刚, 谢善益; Communications World (通讯世界); 2014-01-22; No. 20; 1-3 *

Also Published As

Publication number Publication date
CN105357594A (en) 2016-02-24

Similar Documents

Publication Publication Date Title
CN105357594B (en) Massive video summary generation method based on clustering and an H264 video condensation algorithm
CN104331905A (en) Surveillance video abstraction extraction method based on moving object detection
US20150326833A1 (en) Image processing method, image processing device and monitoring system
US10043105B2 (en) Method and system to characterize video background changes as abandoned or removed objects
CN107679495B (en) Detection method for movable engineering vehicles around power transmission line
Lai et al. Video summarization of surveillance cameras
CN104392461A (en) Video tracking method based on texture features
CN106296677A (en) A kind of remnant object detection method of double mask context updates based on double-background model
Ullah et al. Gaussian mixtures for anomaly detection in crowded scenes
CN103824074A (en) Crowd density estimation method based on background subtraction and texture features and system
CN104253994A (en) Night monitored video real-time enhancement method based on sparse code fusion
Pan et al. A robust video object segmentation scheme with prestored background information
CN111612681A (en) Data acquisition method, watermark identification method, watermark removal method and device
CN104978734A (en) Foreground image extraction method and foreground image extraction device
Yang A moving objects detection algorithm in video sequence
Hung et al. Exemplar-based video inpainting approach using temporal relationship of consecutive frames
CN110443134B (en) Face recognition tracking system based on video stream and working method
Zhang et al. Flame image segmentation algorithm based on background subtraction
CN107749068A (en) Particle filter realizes object real-time tracking method with perceptual hash algorithm
Kuang et al. Computer Vision and Normalizing Flow-Based Defect Detection
Dewan et al. Segmentation of moving object for content based applications
Rosell-Ortega et al. Feature sets for people and luggage recognition in airport surveillance under real-time constraints
Yuan et al. Research on Multi-Object Tracking of Overhead Transmission Line Components Based on Improved YOLOv8+ ByteTrack
Zhou et al. A shadow elimination method based on color and texture
Chen et al. Supervised video object segmentation using a small number of interactions

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Massive video summarization generation method based on clustering and H264 video concentration algorithm

Effective date of registration: 20221121

Granted publication date: 20180831

Pledgee: Nanjing Branch of Jiangsu Bank Co.,Ltd.

Pledgor: NANJING YUNCHUANG BIG DATA TECHNOLOGY Co.,Ltd.

Registration number: Y2022980022505