CN102708182B - Rapid video concentration abstracting method - Google Patents


Info

Publication number
CN102708182B
CN102708182B (application CN201210142026.0A)
Authority
CN
China
Prior art keywords
video
target
concentration
detection
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210142026.0A
Other languages
Chinese (zh)
Other versions
CN102708182A (en)
Inventor
尚凌辉
刘嘉
陈石平
张兆生
高勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Icare Vision Technology Co ltd
Original Assignee
ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd filed Critical ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority to CN201210142026.0A
Publication of CN102708182A
Application granted
Publication of CN102708182B
Legal status: Expired - Fee Related

Landscapes

  • Television Signal Processing For Recording (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a rapid video concentration abstracting method. Conventional video concentration techniques suffer from poor detection and tracking rates for moving targets and cannot effectively shorten the video length. In the present method, a server side detects and tracks the moving targets in a preprocessed video, cuts the video into multiple concentration segments according to the video length or the number of detected targets, performs collision detection and rearrangement on the target trajectories within each concentration segment, and then records the concentration-segment information into an index file; a client side parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback. The method achieves good target-tracking continuity, complete contour regions, a high detection rate, a low false-detection rate, and a substantially consistent target density at every time point of the concentrated video.

Description

Rapid video concentration abstracting method
Technical field
The invention belongs to the fields of video retrieval and video abstraction, and in particular relates to a rapid video concentration abstracting method.
Background technology
[1] Method and system for video indexing and video synopsis - CN200780050610.0
[2] Method and system for producing a video synopsis - CN200680048754.8
[3] Intelligent video summarization method based on temporal-spatial fusion - CN201110170308.7
[4] Video summarization system - CN201020660533.X
[5] Automatic video concentration method based on a video surveillance network - CN201110208090.X
[6] Pritch Y., Rav-Acha A., Peleg S. Nonchronological video synopsis and indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(11): 1971-1984.
[7] Pritch Y., Ratovitch S., Hendel A., Peleg S. Clustered synopsis of surveillance video. 6th IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS'09), Genoa, Italy, Sept. 2-4, 2009.
With the spread of video monitoring and the development of video surveillance technology, massive amounts of surveillance video are produced and recorded on equipment every day. How to browse and analyze these massive data effectively has become a problem of wide concern in the field. Users are usually interested only in certain targets (mainly moving targets) and content in a video, and wish to browse the content of interest from a long video within a short time. Video concentration technology analyzes the video content, segments the moving targets, and rearranges their appearance times, so that all targets can be presented to the user effectively in the shortest possible time.
References [1], [2], [6] and [7] propose video concentration schemes based on moving-object segmentation, background modeling and collision detection. These schemes can obtain fairly satisfactory concentration results, but their collision-detection steps must compute several different collision costs; the computation is heavy and unfavorable for real-time processing of high-definition video. Reference [3] proposes an intelligent video summarization method based on temporal-spatial fusion. This method obtains target contours by the frame-difference method and tracks targets by their rectangular contours; the positions of the tracked target sequences are rearranged on the time axis to form a new concentrated video, and transparency blending is applied when different targets overlap. Its main drawbacks are: collisions between targets are not detected, so a good visual effect is not guaranteed; and long trajectories are not segmented, so a target that loiters for a long time limits how much the concentrated video can be shortened.
Reference [4] proposes a video concentration system comprising an input module, an analysis module, a database module and an output module. After the input module obtains the video, it is fed to the analysis module for target detection and tracking; the tracked target contours are cut out and stored in the database. The output module presents targets that appear in different frames within the same video frame. The system does not describe how to support outputting partially processed video, how to avoid target collisions, or how to ensure that the cut-out target lengths are moderate so as to obtain a good visual effect.
Reference [5] proposes an automatic video concentration method based on a video surveillance network. The method processes video sources shot by two cameras with overlapping fields of view, matches the trajectories projected from the different cameras based on graph matching and random-walk ideas, and achieves cross-camera target tracking. Concentration is carried out on a panorama matched across the cameras, so a concentrated video of a large scene can be obtained. During concentration, five energy costs are defined for the rearrangement, a compression ratio is defined, and simulated annealing is used to optimize the trajectory rearrangement. The method does not describe how to segment trajectories, so a target that loiters for a long time limits how much the concentrated video can be shortened; moreover, the energy terms and simulated-annealing optimization are computationally heavy and unfavorable for real-time processing of high-definition video.
Summary of the invention
In view of the defects of the prior art, the present invention provides a rapid video concentration abstracting method that effectively improves the detection rate and tracking rate of moving targets, effectively shortens the video length, and achieves effective density control.
To this end, the present invention adopts the following technical scheme. A rapid video concentration abstracting method includes a server side, characterized in that the server side detects and tracks the moving targets in a preprocessed video, cuts the video into multiple concentration segments according to the video length or the number of detected targets, performs collision detection and rearrangement on the target trajectories within each concentration segment, and then records the concentration-segment information into an index file. It also includes a client side that parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback.
The moving-object detection performs background modeling of the scene with an adaptive-threshold Gaussian mixture method, detects the foreground in combination with inter-frame changes, refines the region contours with multi-scale information when extracting foreground regions, locates random regions with a density-estimation method, and finally updates the background model by random-region sampling, so that low-contrast targets can be detected effectively. The target tracking associates the motion-detection regions of multiple frames with a multiple-hypothesis method, predicts the target contour, and locates the contour position in the current frame based on edge information; when a target splits, collides, or is missed, hypotheses are generated from the predicted position; finally the Hungarian algorithm gives the optimal hypothesis, the hypothesis history is pruned, and the tracking trajectory of the target is obtained.
The concentration segments are generated as follows: when the accumulated time length of a video section exceeds Tmax, or the number of targets exceeds Nmax (positively correlated with the maximum allowed trajectory length Lmax and the preset density d), a new concentration segment is produced.
Moving-object detection and tracking yield the trajectory information of every target appearing in the video, including frames, regions and bounding boxes. According to the per-frame bounding-box positions of the trajectories, long trajectories in the video are cut, ensuring that every trajectory length is greater than Lmin and less than Lmax.
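The trajectory-cutting rule above (every piece longer than Lmin and shorter than Lmax) can be sketched as follows. The merge-the-short-tail policy and the list-of-boxes data layout are illustrative assumptions, not the patent's exact procedure.

```python
def cut_trajectory(track, l_min, l_max):
    """Cut a long trajectory (a list of per-frame bounding boxes) into
    pieces no longer than l_max; a tail piece shorter than l_min is
    folded into the previous piece so no segment falls below the
    minimum visible length (tolerating a slight overshoot of l_max)."""
    pieces = [track[i:i + l_max] for i in range(0, len(track), l_max)]
    if len(pieces) > 1 and len(pieces[-1]) < l_min:
        pieces[-2].extend(pieces.pop())
    return pieces
```

For a 22-frame trajectory with Lmax = 10 and Lmin = 4, this yields two pieces of lengths 10 and 12; no frame is lost and no piece flickers.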
Collisions between targets are judged; an energy term is defined to penalize collisions; a variable-step iterative greedy method then guarantees that the energy decreases at every iteration and converges quickly, while randomization avoids being trapped in a local optimum, completing the detection and rearrangement of target collisions.
The optimization steps of the variable-step iterative greedy method are as follows:
A. Initialization: set the initial iteration step S1 and the final iteration step S2, where S2 < S1; set the step decrement ds and the number of iterations N per step; set the current step S = S1.
B. Iterate N times with the current step S:
a) compute the current collision cost E1;
b) randomly select a trajectory;
c) with stride S, reposition the trajectory's appearance time over all possible positions in the concentration segment;
d) compute the minimum collision cost E2 over all positions;
e) if E2 < E1, place the trajectory at the position of minimum collision cost.
C. Set S = S - ds. If S >= S2, repeat step B; otherwise terminate.
When rendering the video frame by frame, the client first looks up the background image corresponding to the current frame ID in the index file, then looks up the region pixel values of all targets appearing at that moment and superimposes the target regions onto the background image; if several targets appear at the same position, the pixel value there is the average of the targets' pixel values.
The background image is obtained by cumulative averaging over multiple frames. An accumulation interval is set first; if the background images of adjacent accumulation intervals differ by more than a threshold T1, a new background image is recorded; if the difference exceeds a threshold T2 (T2 > T1), a new concentration segment is marked.
When a concentration segment is generated, the default concentration density d is used. During client playback the video can be adjusted dynamically according to the desired playback density: when a new playback density dn is set, the appearance time of each target is rearranged. Let To be the original appearance time of a target; the new time is Tn = To * d / dn.
The present invention has the following advantages:
1. Good target-tracking continuity, complete contour regions, a high detection rate and a low false-detection rate;
2. The target density at each time point of the concentrated video is basically consistent;
3. Long targets can be cut into segments for playback; the video compression efficiency is high and the visual effect is good;
4. Collision detection and rearrangement are fast;
5. Videos that require long processing times can be played while still being processed;
6. The density can be adjusted as needed during playback.
Brief description of the drawings
Fig. 1 is a flow chart of the invention.
Embodiment
The technical scheme is described in further detail below by way of an embodiment.
The rapid video concentration abstracting method shown in Fig. 1 includes a server side and a client side; the processing steps are as follows. The server side first detects and segments the moving objects appearing in the video: background modeling of the scene uses an adaptive-threshold Gaussian mixture method; the foreground is detected in combination with inter-frame changes; region contours are refined with multi-scale information when foreground regions are extracted. Combining texture features and inter-frame consistency changes effectively suppresses the interference of illumination variation and allows low-contrast targets to be detected. Random regions such as swaying leaves and rippling water are located with a density-estimation method. Finally the background model is updated by random-region sampling, which enhances its robustness.
The motion-detection regions of multiple frames are associated with a multiple-hypothesis method; the target contour is predicted and its position in the current frame is located based on edge information; when a target splits, collides, or is missed, hypotheses are generated from the predicted position. Finally the Hungarian algorithm gives the optimal hypothesis, the hypothesis history is pruned, and the tracking trajectory of the target is obtained.
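The association step names the Hungarian algorithm for picking the optimal hypothesis. As an illustration of what that assignment computes, the sketch below brute-forces the optimal track-to-detection assignment for a small cost matrix; a production system would use a proper Hungarian implementation (e.g. scipy.optimize.linear_sum_assignment) instead, and the cost matrix itself (e.g. distances between predicted contours and detected regions) is an assumption here.

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive optimal assignment between tracked targets (rows) and
    current detections (columns) for a square cost matrix.  Brute force
    over permutations reaches the same optimum as the Hungarian
    algorithm, but only scales to a handful of targets."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost
```

For the cost matrix [[4, 1, 3], [2, 0, 5], [3, 2, 2]] the optimum assigns track 0 to detection 1, track 1 to detection 0 and track 2 to detection 2, with total cost 5.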
Input video generally comes in two forms: video files and live video streams. For a video file the time length and frame rate are known, while the time length of a live stream is uncertain. The target density may also differ between time periods of a video; for example, the stream of people in a daytime street surveillance video is rather dense, while the stream at night is sparse. Over time, changes in illumination or the addition and removal of objects in the field of view cause the scene to change. To ensure that the concentrated video has a similar density in different time periods while the background changes over time, segment-wise concentration is adopted. A new concentration segment is produced when any of the following conditions is met: the accumulated video time length exceeds Tmax; the number of targets exceeds Nmax (positively correlated with the maximum allowed trajectory length Lmax and the preset density d); or the scene background changes significantly. Combined with trajectory rearrangement, this ensures that the target density is basically consistent at every time point after concentration.
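The three segmentation conditions above can be sketched as a single predicate; the concrete limits t_max (seconds) and n_max (targets; in the text Nmax grows with Lmax and the preset density d) are illustrative values, not the patent's.

```python
def new_segment_needed(elapsed, n_targets, bg_changed,
                       t_max=600.0, n_max=200):
    """Start a new concentration segment when the accumulated time
    exceeds t_max, the target count exceeds n_max, or the scene
    background has changed significantly."""
    return elapsed > t_max or n_targets > n_max or bg_changed
```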
Moving-object detection and tracking yield the trajectory information of every target appearing in the video, including frames (or absolute times), regions and bounding boxes. Long trajectories appearing in the video are cut to ensure that no trajectory exceeds the maximum allowed length Lmax. Because too-short target trajectories flicker when browsed, to preserve the human visual effect no trajectory after cutting may be shorter than the predefined minimum visible length Lmin.
Collisions between targets can be judged from the per-frame bounding-box positions of the cut target trajectories. Let the i-th trajectory be Ti and the j-th trajectory be Tj. The total collision cost of the whole concentration segment is the sum of the total overlapped area between trajectories and the cost of temporal dislocation: E = Eo + Et.
The overlap cost of two trajectories is defined as the bounding-box overlap area normalized by the video image size: Eo = Σ_{i,j} area(Ti ∩ Tj) / Area.
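The overlap cost Eo can be computed directly from the per-frame bounding boxes. In this sketch each trajectory is a dict mapping frame index to an axis-aligned box (x1, y1, x2, y2); that data layout, like the helper names, is an assumption for illustration.

```python
def box_overlap(a, b):
    """Overlap area of two axis-aligned boxes (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def collision_cost(tracks, frame_area):
    """Total overlap cost Eo: summed bounding-box overlap between every
    pair of trajectories at every frame they share, normalized by the
    image area, per Eo = sum_{i,j} area(Ti ∩ Tj) / Area."""
    cost = 0.0
    for i in range(len(tracks)):
        for j in range(i + 1, len(tracks)):
            for f in tracks[i].keys() & tracks[j].keys():
                cost += box_overlap(tracks[i][f], tracks[j][f])
    return cost / frame_area
```

Two 10x10 boxes offset by 5 pixels in each direction overlap over 25 pixels; in a 100-pixel frame that single shared frame contributes a cost of 0.25.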
Because the video is sliced into concentration segments along the time axis, the dislocation cost can be ignored, so approximately E ≈ Eo. This approximation greatly reduces the computation of the collision cost. To minimize the total collision energy, the variable-step iterative greedy method is used. The optimization steps are as follows:
1. Initialization: set the initial iteration step S1 and the final iteration step S2, where S2 < S1; set the step decrement ds and the number of iterations N per step; set the current step S = S1.
2. Iterate N times with the current step S:
A. compute the current collision cost E1;
B. randomly select a trajectory;
C. with stride S, reposition the trajectory's appearance time over all possible positions in the concentration segment;
D. compute the minimum collision cost E2 over all positions;
E. if E2 < E1, place the trajectory at the position of minimum collision cost.
3. Set S = S - ds. If S >= S2, repeat step 2; otherwise terminate.
The optimization steps above ensure that the energy decreases progressively during iteration. To speed up the computation, the calculation of the minimum collision cost E2 in step 2 can be replaced by directly checking whether the collision cost decreases; that is, after each iteration, the placement with the lowest collision cost against the other trajectories among all possible positions is chosen. Because the trajectory selection is randomized, the energy optimization avoids being trapped in a local optimum. The variable-step search follows a coarse-to-fine idea and is more efficient than searching directly with the finest step. The energy definition and search strategy above guarantee fast target collision detection and rearrangement.
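The variable-step greedy rearrangement can be sketched as below. Tracks are reduced to (start, length) pairs on the segment's time axis and the cost function is pluggable; this interface, the example temporal-overlap cost, and the default parameters are illustrative assumptions, not the patent's exact formulation (which uses the spatial overlap cost Eo).

```python
import random

def temporal_overlap(starts, lengths):
    """Example cost: total pairwise overlap of the tracks' time spans."""
    cost = 0
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            lo = max(starts[i], starts[j])
            hi = min(starts[i] + lengths[i], starts[j] + lengths[j])
            cost += max(hi - lo, 0)
    return cost

def rearrange(tracks, seg_len, cost_fn, s1=8, s2=1, ds=1, n_iter=50, seed=0):
    """Variable-step iterative greedy rearrangement of track start
    times, following steps 1-3 above: at each step size, repeatedly
    pick a random track, scan candidate start times with stride S,
    and move the track only if the cost strictly decreases."""
    rng = random.Random(seed)
    starts = [s for s, _ in tracks]
    lengths = [l for _, l in tracks]
    s = s1
    while s >= s2:                       # coarse-to-fine step schedule
        for _ in range(n_iter):
            e1 = cost_fn(starts, lengths)           # a) current cost E1
            k = rng.randrange(len(tracks))          # b) random track
            best_t, e2 = starts[k], e1
            for t in range(0, seg_len - lengths[k] + 1, s):  # c) stride S
                trial = starts[:]
                trial[k] = t
                e = cost_fn(trial, lengths)
                if e < e2:                          # d) minimum cost E2
                    best_t, e2 = t, e
            if e2 < e1:                             # e) move if better
                starts[k] = best_t
        s -= ds                                     # shrink the step
    return starts
```

Starting two length-10 tracks at the same instant of a 30-frame segment, the first coarse pass already finds a placement with zero temporal overlap, and later fine passes cannot increase the cost because moves require a strict decrease.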
The background image is obtained by cumulative averaging over multiple frames. An accumulation interval is set; if the background images of adjacent accumulation intervals differ by more than a threshold T1, a new background image is recorded; if the difference exceeds a threshold T2 (T2 > T1), a new concentration segment is marked.
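The two-threshold background update can be sketched as follows. For brevity each interval's accumulated background is stood in for by a single mean brightness value rather than a full image, and the thresholds are illustrative; both are assumptions.

```python
def update_backgrounds(interval_means, t1=8.0, t2=20.0):
    """Replay per-interval mean backgrounds and report which intervals
    trigger a new background image (change > T1) and which additionally
    mark a new concentration segment (change > T2, with T2 > T1)."""
    new_bg, new_seg = [], []
    prev = interval_means[0]
    for k, mean in enumerate(interval_means[1:], start=1):
        diff = abs(mean - prev)
        if diff > t1:
            new_bg.append(k)
        if diff > t2:
            new_seg.append(k)
        prev = mean
    return new_bg, new_seg
```

With interval means [100, 103, 115, 140], the changes are 3, 12 and 25: the second change records a new background, and the third both records a background and starts a new segment.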
The invention also proposes a client-server (C/S) video concentration system architecture that supports playing the concentrated video while it is still being processed. During server-side processing, the video is segmented dynamically for parallel processing. Each parallel unit processes its video section by the foregoing method and adaptively cuts it into concentration segments. Each concentration segment stores: the target trajectories appearing in the video section; the appearance and disappearance time of each trajectory; the accumulated background images of the segment; and the start and end time of each background image.
The information of all concentration segments of a video is stored in one index file. The index file header records the number of concentration segments, the start and end times of the original video corresponding to each segment, and the position of each segment within the index file.
After obtaining the index file, the client parses the file header, obtains the completed concentration segments and plays them, achieving play-while-processing and a better user experience. When the client renders the video frame by frame, it first looks up the background image corresponding to the current frame ID in the index file, then looks up the region pixel values of all targets appearing at that moment and superimposes the target regions onto the background image. If several targets appear at the same position, the pixel value there is the average of the targets' pixel values (i.e. transparency overlay).
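The transparency-overlay compositing can be sketched as below. Images are plain nested lists of grey values and each target is a (left, top, patch) triple; these simplifications are assumptions for illustration, not the client's actual pixel format.

```python
def render_frame(background, targets):
    """Compose one output frame: start from the background image and
    superimpose each target patch; where several targets cover the same
    pixel, average their values (the 'transparency overlay' above);
    untouched pixels keep the background value."""
    h, w = len(background), len(background[0])
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for x0, y0, patch in targets:
        for dy, row in enumerate(patch):
            for dx, v in enumerate(row):
                acc[y0 + dy][x0 + dx] += v
                cnt[y0 + dy][x0 + dx] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else background[y][x]
             for x in range(w)] for y in range(h)]
```

Two single-pixel targets of grey values 10 and 30 dropped on the same position of a blank background blend to 20, while untouched pixels stay at the background value.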
When the concentration segments described above are generated, the default concentration density d is used. During client playback it may be desirable to adjust the playback density dynamically. When the client sets a new playback density dn, the appearance time of each target is rearranged. Let To be the original appearance time of a target; the new appearance time is Tn = To * d / dn.
This rearrangement guarantees that when the concentration density is reduced, the collision energy is also reduced and the visual effect improves. Because only the appearance time of each target trajectory needs to be computed directly, without recomputing the collision energy, real-time density adjustment can be achieved.
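The density remapping is a single rescaling per trajectory, which is why it runs in real time; a minimal sketch:

```python
def remap_time(t_original, d_default, d_new):
    """Rescale a target's appearance time when the playback density is
    changed, per Tn = To * d / dn: raising the density packs targets
    closer together in time; lowering it spreads them out."""
    return t_original * d_default / d_new
```

For example, doubling the density from 0.5 to 1.0 halves every appearance time, while halving it to 0.25 doubles them.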
It should be particularly noted that the above embodiment is only illustrative; the present invention is not limited to the manner described above, and those skilled in the art can easily modify it without departing from the scope of the present invention. Therefore the scope of the present invention should include the maximum scope of the disclosed principles and novel features.

Claims (5)

1. A rapid video concentration abstracting method, characterized in that a server side first detects and tracks the moving targets in a preprocessed video, cuts the video into multiple concentration segments according to the video length or the number of detected targets, performs collision detection and rearrangement on the target trajectories within each concentration segment, and then records the concentration-segment information into an index file; a client side then parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback;
the moving-object detection performs background modeling of the scene with an adaptive-threshold Gaussian mixture method, detects the foreground in combination with inter-frame changes, refines the region contours with multi-scale information when extracting foreground regions, locates random regions with a density-estimation method, and finally updates the background model by random-region sampling, so that low-contrast targets can be detected effectively; the target tracking associates the motion-detection regions of multiple frames with a multiple-hypothesis method, predicts the target contour, and locates the contour position in the current frame based on edge information; when a target splits, collides, or is missed, hypotheses are generated from the predicted position; finally the Hungarian algorithm gives the optimal hypothesis, the hypothesis history is pruned, and the tracking trajectory of the target is obtained;
the concentration segments are generated as follows: when the accumulated time length of a video section exceeds Tmax, or the number of targets exceeds Nmax, a new concentration segment is produced;
moving-object detection and tracking yield the trajectory information of every target appearing in the video; long trajectories in the video are cut, ensuring that every trajectory length is greater than Lmin and less than Lmax;
moving-object detection and tracking yield the trajectory information of every target appearing in the video, including frames, regions and bounding boxes; according to the per-frame bounding-box positions of the cut target trajectories, collisions between targets are judged; an energy term is defined to penalize collisions; a variable-step iterative greedy method then guarantees that the energy decreases at every iteration and converges quickly, while randomization avoids being trapped in a local optimum, completing the detection and rearrangement of target collisions.
2. The rapid video concentration abstracting method according to claim 1, characterized in that the optimization steps of the variable-step iterative greedy method are as follows:
(1) Initialization: set the initial iteration step S1 and the final iteration step S2, where S2 < S1; set the step decrement ds and the number of iterations N per step; set the current step S = S1;
(2) Iterate N times with the current step S:
a) compute the current collision cost E1;
b) randomly select a trajectory;
c) with stride S, reposition the trajectory's appearance time over all possible positions in the concentration segment;
d) compute the minimum collision cost E2 over all positions;
e) if E2 < E1, place the trajectory at the position of minimum collision cost;
(3) Set S = S - ds; if S >= S2, repeat step (2), otherwise terminate.
3. The rapid video concentration abstracting method according to claim 1 or 2, characterized in that when the client renders the video frame by frame, it first looks up the background image corresponding to the current frame ID in the index file, then looks up the region pixel values of all targets appearing at that moment and superimposes the target regions onto the background image; if several targets appear at the same position, the pixel value there is the average of the targets' pixel values.
4. The rapid video concentration abstracting method according to claim 3, characterized in that the background image is obtained by cumulative averaging over multiple frames: an accumulation interval is set first; if the background images of adjacent accumulation intervals differ by more than a threshold T1, a new background image is recorded; if the difference exceeds a threshold T2, where T2 > T1, a new concentration segment is marked.
5. The rapid video concentration abstracting method according to claim 4, characterized in that when a concentration segment is generated, the default concentration density d is used; during client playback the video can be adjusted dynamically according to the desired playback density: when a new playback density dn is set, the appearance time of each target is rearranged; let To be the original appearance time of a target, then the new time is Tn = To * d / dn.
CN201210142026.0A 2012-05-08 2012-05-08 Rapid video concentration abstracting method Expired - Fee Related CN102708182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210142026.0A CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210142026.0A CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Publications (2)

Publication Number Publication Date
CN102708182A (en) 2012-10-03
CN102708182B true CN102708182B (en) 2014-07-02

Family

ID=46900948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210142026.0A Expired - Fee Related CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Country Status (1)

Country Link
CN (1) CN102708182B (en)

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930061B (en) * 2012-11-28 2016-01-06 安徽水天信息科技有限公司 A kind of video summarization method based on moving object detection
CN103079117B (en) * 2012-12-30 2016-05-25 信帧电子技术(北京)有限公司 Video abstraction generating method and video frequency abstract generating apparatus
CN103096185B (en) * 2012-12-30 2016-04-20 信帧电子技术(北京)有限公司 A kind of video abstraction generating method and device
CN103226586B (en) * 2013-04-10 2016-06-22 中国科学院自动化研究所 Video summarization method based on Energy distribution optimal strategy
CN104284057B (en) * 2013-07-05 2016-08-10 浙江大华技术股份有限公司 A kind of method for processing video frequency and device
CN103345764B (en) * 2013-07-12 2016-02-10 西安电子科技大学 A kind of double-deck monitor video abstraction generating method based on contents of object
CN104301699B (en) * 2013-07-16 2016-04-06 浙江大华技术股份有限公司 A kind of image processing method and device
CN103455625B (en) * 2013-09-18 2016-07-06 武汉烽火众智数字技术有限责任公司 A kind of quick target rearrangement method for video abstraction
CN104618681B (en) * 2013-11-01 2019-03-26 南京中兴力维软件有限公司 Multi-channel video concentration method and device thereof
CN103617234B (en) * 2013-11-26 2017-10-24 公安部第三研究所 Active video enrichment facility and method
CN103686095B (en) * 2014-01-02 2017-05-17 中安消技术有限公司 Video concentration method and system
CN103793477B (en) * 2014-01-10 2017-02-08 同观科技(深圳)有限公司 System and method for video abstract generation
CN103778237B (en) * 2014-01-27 2017-02-15 北京邮电大学 Video abstraction generation method based on space-time recombination of active events
CN103957472B (en) * 2014-04-10 2017-01-18 华中科技大学 Timing-sequence-keeping video summary generation method and system based on optimal reconstruction of events
CN104284158B (en) * 2014-10-23 2018-09-14 南京信必达智能技术有限公司 Method applied to event-oriented intelligent monitoring camera
CN105007433B (en) * 2015-06-03 2020-05-15 南京邮电大学 Moving object arrangement method based on energy constraint minimization of object
CN105262932B (en) * 2015-10-20 2018-06-29 深圳市华尊科技股份有限公司 A kind of method and terminal of video processing
CN105357594B (en) * 2015-11-19 2018-08-31 南京云创大数据科技股份有限公司 The massive video abstraction generating method of algorithm is concentrated based on the video of cluster and H264
CN107680117B (en) * 2017-09-28 2020-03-24 江苏东大金智信息系统有限公司 Method for constructing concentrated video based on irregular target boundary object
TWI638337B (en) * 2017-12-21 2018-10-11 晶睿通訊股份有限公司 Image overlapping method and related image overlapping device
CN110166851B (en) * 2018-08-21 2022-01-04 腾讯科技(深圳)有限公司 Video abstract generation method and device and storage medium
CN110322471B (en) * 2019-07-18 2021-02-19 华中科技大学 Method, device and equipment for concentrating panoramic video and storage medium
CN111107376A (en) * 2019-12-09 2020-05-05 国网辽宁省电力有限公司营口供电公司 Video enhancement concentration method suitable for security protection of power system
CN112446358A (en) * 2020-12-15 2021-03-05 北京京航计算通讯研究所 Target detection method based on video image recognition technology
CN112507913A (en) * 2020-12-15 2021-03-16 北京京航计算通讯研究所 Target detection system based on video image recognition technology
CN112580548A (en) * 2020-12-24 2021-03-30 北京睿芯高通量科技有限公司 Video concentration system and method in novel intelligent security system
CN115941997B (en) * 2022-12-01 2023-06-30 石家庄铁道大学 Segment-adaptive monitoring video concentration method
CN116156206B (en) * 2023-04-04 2023-06-27 石家庄铁道大学 Monitoring video concentration method taking target group as processing unit
CN117376638B (en) * 2023-09-02 2024-05-21 石家庄铁道大学 Video concentration method for segment segmentation

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101262568A (en) * 2008-04-21 2008-09-10 中国科学院计算技术研究所 A method and system for generating video outline
CN101689394A (en) * 2007-02-01 2010-03-31 耶路撒冷希伯来大学伊森姆研究发展有限公司 The method and system that is used for video index and video summary
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7881505B2 (en) * 2006-09-29 2011-02-01 Pittsburgh Pattern Recognition, Inc. Video retrieval system for human face content

Also Published As

Publication number Publication date
CN102708182A (en) 2012-10-03

Similar Documents

Publication Publication Date Title
CN102708182B (en) Rapid video concentration abstracting method
CN106856577B (en) Video abstract generation method capable of solving multi-target collision and occlusion problems
Maksai et al. What players do with the ball: A physically constrained interaction modeling
CN103347167B (en) Segmentation-based surveillance video content description method
CN102222104B (en) Method for intelligently extracting video abstract based on time-space fusion
CN103929685A (en) Video abstract generating and indexing method
CN104217417B (en) Video multi-target tracking method and device
CN103413444A (en) Traffic flow surveying and handling method based on unmanned aerial vehicle high-definition video
CN103413330A (en) Method for reliably generating video abstraction in complex scene
CN103646257A (en) Video monitoring image-based pedestrian detecting and counting method
CN102323948A (en) Automatic detection method for opening and closing credits of TV series video
CN104270608A (en) Intelligent video player and playing method thereof
CN108769598A (en) Cross-camera video concentration method based on pedestrian re-identification
CN110674886A (en) Video target detection method fusing multi-level features
CN101304479B (en) Method and apparatus for detecting motion in video sequence
CN110519532A (en) Information acquisition method and electronic device
CN112884808B (en) Video concentration set partitioning method that preserves real target interaction behavior
CN109063630B (en) Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy
CN109766743A (en) Intelligent bionic policing system
Wang et al. A semi-automatic video labeling tool for autonomous driving based on multi-object detector and tracker
CN115330837A (en) Robust target tracking method and system based on graph attention Transformer network
CN104301699A (en) Image processing method and device
CN115661683A (en) Vehicle identification statistical method based on multi-attention machine system network
CN112307895A (en) Crowd gathering abnormal behavior detection method under community monitoring scene
CN103957472A (en) Time-order-preserving video summary generation method and system based on optimal event reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: East, Building 7, No. 998 Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Applicant after: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Address before: South Block, 4th Floor, Kun Building, No. 398 Tian Shan Road, Xihu District, Hangzhou, Zhejiang 310013

Applicant before: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: ZHEJIANG ICARE VISION TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: HANGZHOU ICARE VISION TECHNOLOGY CO., LTD.

CP01 Change in the name or title of a patent holder

Address after: East, Building 7, No. 998 Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Patentee after: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Address before: East, Building 7, No. 998 Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Patentee before: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Rapid video concentration abstracting method

Effective date of registration: 20190820

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2019330000016

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20200917

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2019330000016

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A fast video summarization method

Effective date of registration: 20200921

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2020330000737

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140702
