CN105469425A - Video condensation method - Google Patents

Video condensation method

Info

Publication number
CN105469425A
CN105469425A
Authority
CN
China
Prior art keywords
target
moving
background
frame
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510818289.2A
Other languages
Chinese (zh)
Inventor
阳海华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Junesh Information Technology Co Ltd
Original Assignee
Shanghai Junesh Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Junesh Information Technology Co Ltd filed Critical Shanghai Junesh Information Technology Co Ltd
Priority to CN201510818289.2A priority Critical patent/CN105469425A/en
Publication of CN105469425A publication Critical patent/CN105469425A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30241: Trajectory

Abstract

The invention discloses a video condensation method. The method detects the foreground of moving objects with a Gaussian mixture background model: statistics are collected over the pixels of each frame, a background is built, and each pixel is classified by its statistical deviation. For each moving object, connected regions are searched and the fragmented regions belonging to the same object are merged. The moving objects are tracked by combining a Kalman filter with a minimum-spatial-distance method to predict the coordinate position of each object; the tracker is updated, objects whose motion has ended are removed, and the centroid coordinates and velocity of each moving object are recorded and tallied. The motion trajectories are then optimized: moving objects that appeared in different time periods are translated into the same time period, and the foregrounds are pasted onto the background while avoiding object loss and severe overlap. The method enables rapid browsing of surveillance video and raises the utilization rate of video data.

Description

Video condensation method
Technical field
The present invention relates to the technical field of video processing, and in particular to a video condensation method.
Background technology
With the rapid development of safe cities and smart cities, more and more cameras are being deployed, bringing with them massive volumes of surveillance video. Reviewing these videos in the traditional way wastes a great deal of manpower and resources, while fast-forward and rewind browsing often misses a great deal of information. Neither approach comes close to meeting users' needs for accessing video, which is why video condensation technology has emerged.
Video condensation produces a simplified summary of video content. A foreground detection algorithm first analyzes and extracts the moving objects; a tracking algorithm then analyzes the trajectory of each object, and the different objects are recombined and spliced into a common background scene to form a new video. In a surveillance system, video condensation therefore allows rapid browsing and locking onto a search target, which has clear practical value for production and daily life.
Traditional video condensation techniques fall mainly into the following categories. The first condenses video based on key frames: the key frames of the video are extracted and then synthesized into a new summary video. Typical key-frame extraction algorithms include extraction based on shot boundaries, on image information, on motion analysis, and on video clustering, as well as algorithms for particular video types, such as extracting news-video key frames from image and text information. The second obtains a condensed video by extracting moving objects and analyzing how densely they occur.
Neither kind of method satisfies a user who needs information about a specific target, and both splice together video segments, so the result is still long and the motion is not compact.
Summary of the invention
The object of the present invention is to provide a video condensation method that enables rapid browsing of surveillance video and raises the utilization rate of video data, so as to solve the problems identified in the background art above.
To achieve the above object, the invention provides the following technical scheme:
A video condensation method, which analyzes and extracts moving objects, then analyzes the trajectory of each object, splices the different objects into one common background scene, and combines them together; the method specifically comprises the following steps:
1) Foreground detection module: detect the foreground of moving objects with a Gaussian mixture background model; collect statistics over the pixels of each frame, build the background from probability-density statistics of a large number of sample values of each pixel over a long period, and classify each pixel by its statistical deviation: if the deviation of a pixel from the mean is less than a threshold it is considered a background point, otherwise a foreground point;
2) after a moving object is detected, search its connected regions to judge whether the same moving object has been split into two or more regions, and merge those separated regions until object detection is complete;
3) Moving-object tracking module: track the moving objects; use a Kalman filter combined with a minimum-spatial-distance method to predict the coordinate position of each moving object; update and track the objects, remove objects whose motion has ended, and record the centroid coordinates and velocity of each moving object;
4) on the basis of the moving-object detection and tracking, tally the number of objects in step 3), the initial time at which each object appears, the object center, and the width and height of the foreground box;
5) Summary-video condensation module: optimize the trajectories, translate moving objects that appeared in different time periods into the same time period, and paste the foregrounds onto the background without target loss or severe overlap.
As a further scheme of the present invention, the specific method of step 1) is: in the Gaussian mixture background model, the color information of the pixels within a frame is assumed to be mutually uncorrelated, so each pixel is processed independently. For each pixel of the video image, the change of its value across the image sequence is regarded as a random process that continually produces pixel values; that is, Gaussian distributions describe the statistical regularity of the color each pixel presents. A K-mode Gaussian mixture model is built, and each image pixel is modeled by the superposition of several Gaussian distributions with different weights. Each pixel is then compared against the K models; if its deviation from a model mean is within the threshold of 2.5σ, the pixel is considered background, otherwise foreground.
In the multi-mode Gaussian mixture model, each image pixel is modeled by the superposition of several Gaussian distributions with different weights; each Gaussian corresponds to one state that may produce the color the pixel presents, and the weight and distribution parameters of each Gaussian are updated over time. When processing color images, the three color channels R, G, B of an image pixel are assumed to be mutually independent and to have the same variance.
For the observation data set {x_1, x_2, …, x_n} of the random variable X, where x_t = (r_t, g_t, b_t) is the sample of a pixel at time t, a single sample obeys the Gaussian mixture background model with probability density function:
p(x_t) = Σ_{i=1}^{K} w_{i,t} · η(x_t, μ_{i,t}, τ_{i,t})   (1)
η(x_t, μ_{i,t}, τ_{i,t}) = |τ_{i,t}|^{-1/2} · exp( -(1/2)(x_t - μ_{i,t})^T τ_{i,t}^{-1} (x_t - μ_{i,t}) )   (2)
τ_{i,t} = δ_{i,t}^2 · I   (3)
where K is the total number of modes, η(x_t, μ_{i,t}, τ_{i,t}) is the i-th Gaussian distribution at time t, μ_{i,t} is its mean, τ_{i,t} its covariance matrix, δ_{i,t} its standard deviation, I the identity matrix, and w_{i,t} the weight of the i-th Gaussian distribution.
K Gaussian background models are initialized; each pixel has a background model containing K samples:
M(x) = {p_1, p_2, …, p_K}   (4)
Each new pixel value is compared with the current K models until a background model matching the new pixel is found; the distribution model to which a pixel belongs is judged according to:
|X_t − μ_{i,t−1}| ≤ 2.5 σ_{i,t−1}   (5)
For a new frame, each pixel is compared with the K models until a model is found whose mean deviates from the pixel by no more than 2.5σ. If the matched mode satisfies the background requirement, the pixel belongs to the background; otherwise it belongs to the foreground.
The weight of each mode is updated by the following formula, where α is the learning rate and, for the matched mode, M_{k,t} = 1 (otherwise M_{k,t} = 0):
w_{k,t} = (1 − α) · w_{k,t−1} + α · M_{k,t}   (6)
The mode weights are then normalized. The mean μ and standard deviation σ of unmatched modes remain unchanged, while the parameters of the matched mode are updated according to:
ρ = α · η(X_t | μ_k, σ_k)   (7)
μ_t = (1 − ρ) · μ_{t−1} + ρ · X_t   (8)
σ_t^2 = (1 − ρ) · σ_{t−1}^2 + ρ · (X_t − μ_t)^T (X_t − μ_t)   (9)
If no mode matches, the mode with the smallest weight is replaced: its mean is set to the current pixel value, its standard deviation to a large initial value, and its weight to a small value. The modes are then sorted in descending order of w/α^2, so that modes with large weight and small standard deviation rank first. The first B modes are taken as the background, where the parameter T represents the proportion occupied by the background; whether a new pixel value belongs to foreground or background is judged by:
B = argmin_b ( Σ_{k=1}^{b} w_k > T )   (10)
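The per-pixel update of equations (5)-(9) can be sketched as follows. This is a minimal single-channel (grayscale) illustration, not the patent's implementation: ρ is simplified to the constant learning rate α, a matched mode is treated directly as background (the B-mode test of equation (10) is omitted), and all names (`GaussianMode`, `update_pixel`) are illustrative.

```python
# Minimal per-pixel Gaussian mixture update, a sketch of equations (5)-(9).
# Grayscale, K modes per pixel; simplifications are noted in comments.

K = 3            # number of modes per pixel
ALPHA = 0.05     # learning rate alpha
INIT_SIGMA = 30.0    # "large initial value" for a replaced mode
LOW_WEIGHT = 0.05    # "small value" for a replaced mode

class GaussianMode:
    def __init__(self, mu, sigma, w):
        self.mu, self.sigma, self.w = mu, sigma, w

def update_pixel(modes, x):
    """Update the K modes for one pixel value x; return True if x is background."""
    matched = None
    for m in modes:
        if abs(x - m.mu) <= 2.5 * m.sigma:        # match test, eq. (5)
            matched = m
            break
    for m in modes:                               # weight update, eq. (6)
        m.w = (1 - ALPHA) * m.w + ALPHA * (1.0 if m is matched else 0.0)
    total = sum(m.w for m in modes)               # normalize the weights
    for m in modes:
        m.w /= total
    if matched is None:                           # replace the weakest mode
        weakest = min(modes, key=lambda m: m.w)
        weakest.mu, weakest.sigma, weakest.w = float(x), INIT_SIGMA, LOW_WEIGHT
        return False                              # no match -> foreground
    rho = ALPHA                                   # simplification of eq. (7)
    matched.mu = (1 - rho) * matched.mu + rho * x                   # eq. (8)
    var = (1 - rho) * matched.sigma ** 2 + rho * (x - matched.mu) ** 2  # eq. (9)
    matched.sigma = var ** 0.5
    return True
```

Feeding a stream of pixel values through `update_pixel` converges the dominant mode toward the stable background color, while transient foreground values fail the 2.5σ test.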
As a further scheme of the present invention, the specific method of step 2) is as follows. Foreground detection with the Gaussian mixture background model presents the following situation: after foreground segmentation, the same object may be split into two or more object blocks, and the minimum enclosing rectangles of these blocks stand in certain relative positions: first, a large box contains a small box; second, two boxes intersect without one fully containing the other; third, two boxes lie one above the other, with both the horizontal and the vertical distance between their centroids less than a threshold. These rectangle relations are handled as follows: for containment, the inner rectangle is removed and the outer one retained; for intersecting rectangles, a large rectangle is drawn to enclose the two small ones; for stacked rectangles, a large rectangle is drawn around the two small ones and the two inner small rectangles are deleted.
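The three rectangle relations and their handling can be sketched as follows. Rectangles are (x, y, w, h) tuples, the centroid threshold is an arbitrary illustrative value, and all function names are assumptions rather than the patent's code:

```python
# Sketch of the bounding-box merging rules of step 2).

def contains(a, b):
    """True if rectangle a fully contains rectangle b."""
    return (a[0] <= b[0] and a[1] <= b[1] and
            a[0] + a[2] >= b[0] + b[2] and a[1] + a[3] >= b[1] + b[3])

def intersects(a, b):
    """True if the two rectangles overlap."""
    return not (a[0] + a[2] <= b[0] or b[0] + b[2] <= a[0] or
                a[1] + a[3] <= b[1] or b[1] + b[3] <= a[1])

def enclosing(a, b):
    """Smallest rectangle covering both a and b."""
    x0, y0 = min(a[0], b[0]), min(a[1], b[1])
    x1 = max(a[0] + a[2], b[0] + b[2])
    y1 = max(a[1] + a[3], b[1] + b[3])
    return (x0, y0, x1 - x0, y1 - y0)

def stacked(a, b, thresh=20):
    """True if centroids are within thresh both horizontally and vertically."""
    cax, cay = a[0] + a[2] / 2, a[1] + a[3] / 2
    cbx, cby = b[0] + b[2] / 2, b[1] + b[3] / 2
    return abs(cax - cbx) < thresh and abs(cay - cby) < thresh

def merge_pair(a, b):
    """Apply the three merging rules to one pair; None means 'keep both'."""
    if contains(a, b):
        return a                      # containment: keep the outer box
    if contains(b, a):
        return b
    if intersects(a, b) or stacked(a, b):
        return enclosing(a, b)        # intersection / stacked: enclose both
    return None
```

In practice this pairwise rule would be applied repeatedly over all detected blocks until no pair merges.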
As a further scheme of the present invention, step 3) specifically is: tracking a moving object, i.e. the process of following the position of the same object across different frames of the image sequence; given an object in the current frame, the position of that object is searched for in the next frame, and success or failure of tracking is judged accordingly.
As a further scheme of the present invention, the specific method of step 4) comprises the following steps:
Step 1: for every detected moving object i, use the Kalman filter to predict its coordinate position in the next frame; this position is recorded as the predicted position;
Step 2: for every moving object detected in the next frame, record the coordinate position of each object j as a candidate position;
Step 3: for the predicted position of an object i, if the nearest candidate position is that of object j, and at the same time the nearest predicted position to the candidate position of object j is that of i, then i and object j match and tracking succeeds; the information of i is updated with the information of object j, and the Kalman filter is simultaneously updated with the position of object j so that its parameters are adjusted for robust prediction;
Step 4: a predicted position of an object i with no successful match indicates that the object has disappeared, and the object is removed from the tracking list; a candidate position of an object j with no successful match indicates that j is a newly appearing object, so new storage space is opened for it and its trajectory is initialized.
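The four matching steps can be sketched as follows. For brevity the Kalman predictor is replaced by a constant-velocity prediction, so this illustrates only the mutual nearest-neighbour matching; all names (`Track`, `match_tracks`) are assumptions:

```python
# Sketch of the mutual-nearest-neighbour matching of steps 1-4.
import math

class Track:
    def __init__(self, tid, pos):
        self.tid = tid
        self.pos = pos          # current centroid (x, y)
        self.vel = (0.0, 0.0)   # estimated velocity per frame

    def predict(self):
        """Predicted next-frame position (stand-in for the Kalman step)."""
        return (self.pos[0] + self.vel[0], self.pos[1] + self.vel[1])

def match_tracks(tracks, detections):
    """Return (surviving tracks, new tracks) via mutual nearest neighbours."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    preds = {t.tid: t.predict() for t in tracks}
    survivors, used = [], set()
    for t in tracks:
        if not detections:
            continue
        j = min(range(len(detections)),
                key=lambda k: dist(preds[t.tid], detections[k]))
        # mutual test: is this track also the nearest prediction to detection j?
        back = min(tracks, key=lambda s: dist(preds[s.tid], detections[j]))
        if back.tid == t.tid and j not in used:
            d = detections[j]
            t.vel = (d[0] - t.pos[0], d[1] - t.pos[1])  # update the state
            t.pos = d
            survivors.append(t)       # tracked successfully
            used.add(j)
        # otherwise the track has disappeared and is dropped (step 4)
    # unmatched detections become newly appearing objects (tid -1 is a placeholder)
    new = [Track(tid=-1, pos=detections[k])
           for k in range(len(detections)) if k not in used]
    return survivors, new
```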
As a further scheme of the present invention, in step 4), the minimum enclosing rectangles of all successfully tracked objects are saved in a foreground container, the position and size of each object in each frame are saved in an object list, and all backgrounds obtained from the Gaussian mixture background model are kept in a background container.
As a further scheme of the present invention, in step 5), when the summary video is generated, a target-trajectory description file provides the spatial and temporal information of the foreground patch sequence of each trajectory, ensuring that after the moving objects are translated there is no target loss or severe overlap.
As a further scheme of the present invention, in step 5), when the summary video is generated, the translation of a moving object refers to translation in time; regions where target trajectories overlap are rendered as transparent patches so that every target remains visible; and each target is labeled with a time stamp and a foreground rectangle.
As a further scheme of the present invention, in step 5), when the summary video is generated, the background is handled by the average-background method: each background image obtained from the Gaussian mixture background model during detection and tracking is saved, and when the video is condensed the mean of these background frames is taken as the background onto which the foregrounds are pasted.
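A minimal sketch of the average-background method, assuming the background frames saved during detection are available as NumPy arrays (the function name is illustrative):

```python
# Sketch of the average-background method of step 5).
import numpy as np

def average_background(background_frames):
    """Mean of the background frames produced by the mixture model."""
    stack = np.stack([f.astype(np.float64) for f in background_frames])
    return stack.mean(axis=0).astype(np.uint8)
```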
As a further scheme of the present invention, in step 5), when the summary video is generated, the specific method comprises:
(5-1) when each moving-object foreground is pasted and two objects occlude each other, translucent processing is applied so that both objects remain visible;
(5-2) after all moving objects have been pasted, the saved foreground and background caches are cleared for the next round of saving and condensation.
As a further scheme of the present invention, in step (5-1), when the summary video is generated and two moving objects occlude each other, each occluded object loses part of its information. In the occluded region, translucent processing is applied: the corresponding pixels of the two objects are each multiplied by 0.5 and then spliced onto the background, so that both parts of the occluded region remain visible.
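The 0.5/0.5 translucent pasting can be sketched as follows, assuming NumPy image arrays and boolean foreground masks; all names are illustrative:

```python
# Sketch of the translucent pasting of step (5-1): overlapping foreground
# pixels each contribute half their value, matching the 0.5 weighting above.
import numpy as np

def paste_foregrounds(background, fg_a, mask_a, fg_b, mask_b):
    """Paste two foreground patches; overlapping pixels are blended 0.5/0.5."""
    out = background.astype(np.float64)
    only_a = mask_a & ~mask_b
    only_b = mask_b & ~mask_a
    both = mask_a & mask_b
    out[only_a] = fg_a[only_a]
    out[only_b] = fg_b[only_b]
    out[both] = 0.5 * fg_a[both] + 0.5 * fg_b[both]   # translucent overlap
    return out.astype(np.uint8)
```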
Compared with the prior art, the beneficial effects of the invention are: the invention proposes and implements a video condensation method which extracts the objects saved in the motion list from massive surveillance video and splices the dynamically generated background with these moving objects, so that objects appearing in different time periods are translated into the same time period. This achieves the goal of video condensation, thereby enabling rapid browsing of surveillance video and raising the utilization rate of video data.
The invention also includes the following features:
1. the density of the condensed video can be set manually;
2. the condensation density can also be set according to the monitored environment;
3. by integrating a multi-connected-region merging algorithm into the foreground detection module, the moving-object detection results are made more accurate.
Brief description of the drawings
Fig. 1 is the main flowchart of the present invention;
Fig. 2 is a schematic diagram of the Gaussian mixture background model;
Fig. 3 is the flowchart of the object detection algorithm;
Fig. 4 illustrates the basic principle of video condensation;
Fig. 5 is the outline flowchart of an embodiment of the present invention.
Detailed description of the embodiments
The technical schemes in the embodiments of the present invention are described below clearly and completely in conjunction with the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from the embodiments of the invention without creative work fall within the protection scope of the invention.
Embodiment 1
Referring to Fig. 1 to Fig. 5, the present invention is divided mainly into three modules:
(1) Foreground detection module
The main function of this module is foreground detection, i.e. classifying every pixel of every frame as belonging to foreground or background. In view of the real-time requirements of a video condensation system, the Gaussian mixture background model is adopted for detection.
The basic Gaussian mixture background model classifies each pixel by the statistical-deviation method: whether each pixel obeys the Gaussian distributions determines foreground or background, as shown in Fig. 2.
Because of dim lighting in the environment and shortcomings such as low camera quality, the captured video often has low contrast, which degrades the results of the Gaussian mixture background model. The present invention therefore first obtains the spatial distribution of the segmented connected regions, and then merges the separated regions of the same moving object. Motion detection with the improved algorithm yields more accurate results.
After the foreground is segmented with the Gaussian mixture background model, if the same object is split into different object blocks, the minimum enclosing rectangles of these blocks may stand in certain relative positions, such as containment, intersection without containment, disjointness, or a stacked relation.
For these rectangle relations, the present invention proceeds as follows: for containment, the inner rectangle is removed and the outer one retained; for intersecting rectangles, a large rectangle is drawn to enclose the two small ones; for stacked rectangles, a large rectangle is drawn around the two small ones and the two inner small rectangles are deleted.
Moving-object detection is the basis of moving-object tracking: the detected moving objects are passed to the tracking module, which then tracks them, as shown in Fig. 3.
(2) Moving-object tracking module
This module tracks the moving objects obtained earlier and determines the position of the same object in different frames of the image sequence. Given an object in frame n of the video, its information is searched for in frame n+1 and subsequent frames; if found, tracking succeeds; if not, tracking fails or the object has disappeared.
In the moving-object tracking module, the present invention proposes tracking based on minimum spatial-distance matching, comparing the Euclidean distance of targets in consecutive video frames. For convenience of processing, the center (i.e. centroid) of the minimum enclosing rectangle of a moving object represents the position of the object in the image.
In the present invention, the concrete steps of the moving-object tracking module are as follows:
Step 1: for every detected moving object i, first use the Kalman filter to predict its coordinate position in the next frame; this position is the predicted position;
Step 2: for every moving object detected in the next frame, record the coordinate position of each object j as a candidate position;
Step 3: for the predicted position of an object i, if the nearest candidate position is that of object j, and at the same time the nearest predicted position to the candidate position of object j is that of i, then i and object j match and tracking succeeds; the information of object i is then updated with the information of object j, the Kalman filter is updated with the position and motion information of object j, its parameters are adjusted, and stability is analyzed;
Step 4: a predicted position of an object i with no successful match means the object has disappeared, and the object is removed from the tracking list; a candidate position of an object j with no successful match means j is a newly appearing object, so space is opened for it and its trajectory is initialized.
The colored foregrounds within the minimum enclosing rectangles of all successfully tracked objects are saved in the foreground container, the position and size of each object in each frame are saved in the object list, and all backgrounds obtained from the Gaussian mixture background model are saved in the background container, in preparation for the video condensation of module three.
(3) Summary-video condensation module
On the basis of the earlier moving-object detection and tracking, and with a trajectory length set, the number k of targets saved by the moving-object tracking module is counted. When this number equals the set threshold K, all foreground targets saved by the second module are spliced simultaneously into a prepared background frame, so that moving objects occurring at different times are presented at the same time. Playing the spliced video frames one by one forms the dynamic condensed video.
The background is handled by the average-background method: during moving-object detection and tracking, each background image obtained from the Gaussian mixture background model is saved, and when the video is condensed the mean of these backgrounds is taken as the background spliced with the foregrounds.
As shown in Fig. 4, the horizontal axis represents position and the vertical axis represents time. The goal of video condensation is to make targets and events that occurred in different time intervals occur simultaneously within the same period, greatly compressing the time span, making the best use of time and space, and raising their utilization.
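The time shifting illustrated by Fig. 4 can be sketched as follows. The patent does not specify a particular shifting algorithm, so the greedy earliest-slot layout and the `max_concurrent` parameter below are assumptions for illustration only:

```python
# Illustrative sketch of the time shifting of Fig. 4: each trajectory keeps
# its own duration, but its start time is moved so that all targets play
# within one short common time period.

def shift_trajectories(trajectories, max_concurrent=3):
    """trajectories: list of (start_frame, length); returns shifted (start, length)."""
    shifted = []      # (new_start, length) pairs in condensed time
    for _, length in sorted(trajectories):
        # place the track at the earliest slot with < max_concurrent overlaps
        t = 0
        while sum(1 for s, l in shifted
                  if s < t + length and t < s + l) >= max_concurrent:
            t += 1
        shifted.append((t, length))
    return shifted
```

Four 10-frame tracks originally spread over 310 frames fit into a 20-frame condensed period when at most three may overlap.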
The summary-video condensation module executes the following steps:
Step 1: count the number of targets saved in the moving-object tracking module; when the trajectory count k equals the set threshold K, execute step 2; otherwise continue step 1.
Step 2: take the mean of the video background frames obtained from the Gaussian mixture model between the appearance of the first target and that of the N-th target as the background frame to be spliced.
Step 3: substitute all pixels within the minimum enclosing rectangles of the previously saved foreground targets into the average background. Moving objects that appeared in different time periods thus appear at the same moment, completing the static condensation.
For occlusion, the occluded pixels are weighted: the pixel values of the two objects are each multiplied by 0.5 and then spliced into the respective background, so that the whole occluded region remains visible.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments and can be realized in other specific forms without departing from the spirit or essential characteristics of the invention. The embodiments should therefore be regarded in every respect as exemplary and non-restrictive; the scope of the invention is defined by the appended claims rather than by the above description, and all changes falling within the meaning and range of equivalency of the claims are intended to be embraced in the invention.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only one independent technical scheme; this manner of narration is adopted only for clarity. Those skilled in the art should take the specification as a whole, as the technical schemes in the embodiments may also be combined appropriately to form other embodiments understandable to those skilled in the art.

Claims (11)

1. A video condensation method which analyzes and extracts moving objects, then analyzes the trajectory of each object, splices the different objects into one common background scene, and combines them together, characterized by specifically comprising the following steps:
1) Foreground detection module: detect the foreground of moving objects with a Gaussian mixture background model; collect statistics over the pixels of each frame, build the background from probability-density statistics of a large number of sample values of each pixel over a long period, and classify each pixel by its statistical deviation: if the deviation of a pixel from the mean is less than a threshold it is considered a background point, otherwise a foreground point;
2) after a moving object is detected, search its connected regions to judge whether the same moving object has been split into two or more regions, and merge those separated regions until object detection is complete;
3) Moving-object tracking module: track the moving objects; use a Kalman filter combined with a minimum-spatial-distance method to predict the coordinate position of each moving object; update and track the objects, remove objects whose motion has ended, and record the centroid coordinates and velocity of each moving object;
4) on the basis of the moving-object detection and tracking, tally the number of objects in step 3), the initial time at which each object appears, the object center, and the width and height of the foreground box;
5) Summary-video condensation module: optimize the trajectories, translate moving objects that appeared in different time periods into the same time period, and paste the foregrounds onto the background without target loss or severe overlap.
2. The video condensation method according to claim 1, characterized in that the specific method of step 1) is: in the Gaussian mixture background model, the color information of the pixels within a frame is assumed to be mutually uncorrelated, so each pixel is processed independently; for each pixel of the video image, the change of its value across the image sequence is regarded as a random process that continually produces pixel values, i.e. Gaussian distributions describe the statistical regularity of the color each pixel presents; a K-mode Gaussian mixture model is built, and each image pixel is modeled by the superposition of several Gaussian distributions with different weights; each pixel is then compared against the K models, and if its deviation from a model mean is within the threshold of 2.5σ, the pixel is considered background, otherwise foreground.
3. The video condensation method according to claim 1, characterized in that the specific method of step 2) is: foreground detection with the Gaussian mixture background model presents the following situation: after foreground segmentation, the same object may be split into two or more object blocks, and the minimum enclosing rectangles of these blocks stand in certain relative positions: first, a large box contains a small box; second, two boxes intersect without one fully containing the other; third, two boxes lie one above the other, with both the horizontal and the vertical distance between their centroids less than a threshold; these rectangle relations are handled as follows: for containment, the inner rectangle is removed and the outer one retained; for intersecting rectangles, a large rectangle is drawn to enclose the two small ones; for stacked rectangles, a large rectangle is drawn around the two small ones and the two inner small rectangles are deleted.
4. The video condensation method according to claim 1, characterized in that step 3) specifically is: tracking a moving object, i.e. the process of following the position of the same object across different frames of the image sequence; given an object in the current frame, the position of that object is searched for in the next frame, and success or failure of tracking is judged accordingly.
5. The video condensation method according to claim 1, characterized in that the specific method of step 4) comprises the following steps:
Step 1: for each detected moving target i, use a Kalman filter to predict its coordinate position in the next frame; this position is denoted the predicted position;
Step 2: for each moving target j detected in the next frame, denote its coordinate position the candidate position;
Step 3: for the predicted position of a target i, if the nearest candidate position is that of target j, and at the same time, for the candidate position of target j, the nearest predicted position is that of target i, then targets i and j match and tracking succeeds; the information of target i is then updated with that of target j, and the Kalman filter is updated with the position of target j so as to adjust its parameters for robust prediction;
Step 4: for the predicted position of a target i with no successful match, the target is deemed to have disappeared and is removed from the tracking list; for the candidate position of a target j with no successful match, target j is a newly appearing target, so new storage space is allocated for it and its motion trajectory is initialized.
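The mutual-nearest-neighbour matching of steps 3 and 4 can be sketched as below. The Kalman prediction itself is abstracted away: the function simply receives predicted `(x, y)` positions for the existing tracks and candidate positions for the new detections. Function and variable names are assumptions, not the patent's.

```python
import math

def mutual_match(predicted, detected):
    """Match predicted track positions to detected candidate positions.

    Returns (matches, lost, new): matches is a list of (track_index,
    detection_index) pairs that are mutually nearest; lost holds track
    indices with no match (targets that disappeared); new holds detection
    indices with no match (newly appearing targets).
    """
    def nearest(p, pts):
        return min(range(len(pts)), key=lambda k: math.dist(p, pts[k]))

    matches, matched_t, matched_d = [], set(), set()
    for i, p in enumerate(predicted):
        if not detected:
            break
        j = nearest(p, detected)                    # nearest detection to track i
        if nearest(detected[j], predicted) == i:    # ...and i is nearest track to j
            matches.append((i, j))
            matched_t.add(i)
            matched_d.add(j)
    lost = [i for i in range(len(predicted)) if i not in matched_t]
    new = [j for j in range(len(detected)) if j not in matched_d]
    return matches, lost, new
```

A matched pair would then feed the detection's position back into that track's Kalman filter; lost tracks are dropped from the tracking list and new detections spawn fresh tracks, as in step 4.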
6. The video condensation method according to claim 1, characterized in that in step 4), the minimum enclosing rectangles of all successfully tracked targets are saved in a foreground container, the position and size of each target in each frame are saved in an object list, and all backgrounds obtained from the Gaussian mixture background model are saved in a background container.
7. The video condensation method according to claim 1, characterized in that in step 5), when generating the synopsis video, a target trajectory description file provides the spatial and temporal information of the trajectory foreground matrix sequences, ensuring that moving targets do not overlap or collide after translation.
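The no-overlap guarantee of claim 7 implies a collision test: after a trajectory is shifted in time, its foreground boxes must not collide with already-placed trajectories in any shared frame. A minimal sketch, assuming each trajectory is a dict mapping frame index to an `(x, y, w, h)` box (the data layout and the greedy search are illustrative, not from the patent):

```python
def boxes_overlap(a, b):
    """True if two (x, y, w, h) rectangles overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def shift_collides(track, placed, shift):
    """True if track, delayed by `shift` frames, overlaps any placed track.

    track: {local_frame: box}; placed: list of {synopsis_frame: box}.
    """
    for f, box in track.items():
        for other in placed:
            o = other.get(f + shift)  # the other track's box at the shifted time
            if o is not None and boxes_overlap(box, o):
                return True
    return False

def earliest_shift(track, placed, max_shift=10000):
    """Greedily pick the smallest temporal shift with no collision."""
    for s in range(max_shift):
        if not shift_collides(track, placed, s):
            return s
    return max_shift
```

Two identical single-box trajectories, for instance, collide at shift 0 but not at shift 1, so the second one is delayed by one frame in the synopsis.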
8. The video condensation method according to claim 1, characterized in that in step 5), when generating the synopsis video, the translation of a moving target refers to translation in time; regions where target trajectories overlap are rendered as transparent overlays so that every target remains visible; and each target is labeled with a timestamp and its foreground rectangle.
9. The video condensation method according to claim 1, characterized in that in step 5), when generating the synopsis video, the background is processed with an average background method: during target detection and tracking, each background frame obtained from the Gaussian mixture background model is saved, and during video condensation the mean of these background frames is taken as the background onto which the foreground is pasted.
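The average background method of claim 9 reduces to a pixel-wise mean over the saved background frames. A minimal sketch, assuming the frames arrive as NumPy `uint8` arrays (the function name is an assumption):

```python
import numpy as np

def mean_background(background_frames):
    """Pixel-wise mean of the saved background frames (average background method)."""
    # Accumulate in float64 to avoid uint8 overflow, then convert back.
    stack = np.stack([f.astype(np.float64) for f in background_frames])
    return stack.mean(axis=0).astype(np.uint8)
```

Averaging in a wider dtype matters: summing many `uint8` frames directly would wrap around at 255 and corrupt the composite background.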
10. The video condensation method according to claim 1, characterized in that in step 5), when generating the synopsis video, the specific method comprises:
(5-1) when pasting each moving target's foreground, if two objects occlude each other, apply translucent processing so that both objects remain visible;
(5-2) after all moving targets have been pasted, clear the saved foreground and background caches and begin a new round of saving and condensation.
11. The video condensation method according to claim 10, characterized in that in step (5-1), when generating the synopsis video, when two moving targets occlude each other, each occluded target loses part of its information; the occluded region is therefore processed translucently: the corresponding pixels of the two objects are each multiplied by 0.5 and then composited onto the background, so that both parts of the occluded region remain visible.
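The 50/50 translucent blend of claim 11 can be sketched directly: within the occluded region, each object's pixels are weighted by 0.5 and summed before compositing. The pixel arrays are assumed to be NumPy `uint8` images of the same shape; the function name is illustrative.

```python
import numpy as np

def blend_occlusion(obj_a, obj_b):
    """Composite the region where two foreground objects overlap.

    Each object's pixels are multiplied by 0.5 and summed, so both
    remain visible through the occlusion (claim 11).
    """
    a = obj_a.astype(np.float64)
    b = obj_b.astype(np.float64)
    return (0.5 * a + 0.5 * b).astype(np.uint8)
```

Blending in float before converting back keeps the arithmetic exact; with equal weights the result can never exceed 255, so no clipping step is needed.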
CN201510818289.2A 2015-11-24 2015-11-24 Video condensation method Pending CN105469425A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510818289.2A CN105469425A (en) 2015-11-24 2015-11-24 Video condensation method

Publications (1)

Publication Number Publication Date
CN105469425A true CN105469425A (en) 2016-04-06

Family

ID=55607084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510818289.2A Pending CN105469425A (en) 2015-11-24 2015-11-24 Video condensation method

Country Status (1)

Country Link
CN (1) CN105469425A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103929685A (en) * 2014-04-15 2014-07-16 中国华戎控股有限公司 Video abstract generating and indexing method
US8787730B2 (en) * 2012-05-21 2014-07-22 Yahoo! Inc. Creating video synopsis for use in playback
CN104063883A (en) * 2014-07-07 2014-09-24 杭州银江智慧医疗集团有限公司 Surveillance video abstract generating method based on combination of object and key frames
CN104093001A (en) * 2014-07-23 2014-10-08 山东建筑大学 Online dynamic video compression method
CN104581437A (en) * 2014-12-26 2015-04-29 中通服公众信息产业股份有限公司 Video abstract generation and video backtracking method and system

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10283166B2 (en) 2016-11-10 2019-05-07 Industrial Technology Research Institute Video indexing method and device using the same
CN106781498A (en) * 2017-01-10 2017-05-31 成都通甲优博科技有限责任公司 A kind of statistical method of traffic flow of highway
CN107835381A (en) * 2017-10-17 2018-03-23 浙江大华技术股份有限公司 A kind of generation is dynamic to call the roll of the contestants in athletic events as the method and device of preview graph
CN108022258A (en) * 2017-10-20 2018-05-11 南京邮电大学 Real-time multi-target tracking based on the more frame detectors of single and Kalman filtering
CN108022258B (en) * 2017-10-20 2020-07-03 南京邮电大学 Real-time multi-target tracking method based on single multi-frame detector and Kalman filtering
CN109803112A (en) * 2017-11-16 2019-05-24 中兴通讯股份有限公司 Video analysis management method based on big data, apparatus and system, storage medium
CN108288047A (en) * 2018-02-07 2018-07-17 成都新舟锐视科技有限公司 A kind of pedestrian/vehicle checking method
CN109862313B (en) * 2018-12-12 2022-01-14 科大讯飞股份有限公司 Video concentration method and device
CN109862313A (en) * 2018-12-12 2019-06-07 科大讯飞股份有限公司 A kind of video concentration method and device
WO2020228296A1 (en) * 2019-05-13 2020-11-19 北京三快在线科技有限公司 Annotate object in image sequence
CN110719474A (en) * 2019-10-18 2020-01-21 山东浪潮人工智能研究院有限公司 Monitoring video secondary compression method based on connected domain analysis
CN111107376A (en) * 2019-12-09 2020-05-05 国网辽宁省电力有限公司营口供电公司 Video enhancement concentration method suitable for security protection of power system
CN111709972A (en) * 2020-06-11 2020-09-25 石家庄铁道大学 Space constraint-based method for quickly concentrating wide-area monitoring video
CN111709972B (en) * 2020-06-11 2022-03-11 石家庄铁道大学 Space constraint-based method for quickly concentrating wide-area monitoring video
CN112312087A (en) * 2020-10-22 2021-02-02 中科曙光南京研究院有限公司 Method and system for quickly positioning event occurrence time in long-term monitoring video
CN112312087B (en) * 2020-10-22 2022-07-29 中科曙光南京研究院有限公司 Method and system for quickly positioning event occurrence time in long-term monitoring video
CN113283279A (en) * 2021-01-25 2021-08-20 广东技术师范大学 Deep learning-based multi-target tracking method and device in video
CN113283279B (en) * 2021-01-25 2024-01-19 广东技术师范大学 Multi-target tracking method and device in video based on deep learning
WO2022241805A1 (en) * 2021-05-20 2022-11-24 广州广电运通金融电子股份有限公司 Video synopsis method, system and device
CN114099234A (en) * 2021-11-23 2022-03-01 复旦大学 Intelligent rehabilitation robot data processing method and system for assisting rehabilitation training
CN114099234B (en) * 2021-11-23 2023-08-22 复旦大学 Intelligent rehabilitation robot data processing method and system for assisting rehabilitation training

Similar Documents

Publication Publication Date Title
CN105469425A (en) Video condensation method
CN105844234B (en) Method and equipment for counting people based on head and shoulder detection
CN101729872B (en) Video monitoring image based method for automatically distinguishing traffic states of roads
Jana et al. YOLO based Detection and Classification of Objects in video records
Xie et al. Real-time illegal parking detection system based on deep learning
US9147259B2 (en) Multi-mode video event indexing
CN102915544B (en) Video image motion target extracting method based on pattern detection and color segmentation
CN110097026B (en) Paragraph association rule evaluation method based on multi-dimensional element video segmentation
CN103246896B (en) A kind of real-time detection and tracking method of robustness vehicle
CN110197135B (en) Video structuring method based on multi-dimensional segmentation
CN101799968B (en) Detection method and device for oil well intrusion based on video image intelligent analysis
CN104244113A (en) Method for generating video abstract on basis of deep learning technology
CN103971386A (en) Method for foreground detection in dynamic background scenario
CN104978567B (en) Vehicle checking method based on scene classification
CN105528794A (en) Moving object detection method based on Gaussian mixture model and superpixel segmentation
Szwoch Extraction of stable foreground image regions for unattended luggage detection
WO2023065395A1 (en) Work vehicle detection and tracking method and system
CN104063885A (en) Improved movement target detecting and tracking method
CN105654508A (en) Monitoring video moving target tracking method based on self-adaptive background segmentation and system thereof
CN108647649A (en) The detection method of abnormal behaviour in a kind of video
CN103929685A (en) Video abstract generating and indexing method
CN102034240A (en) Method for detecting and tracking static foreground
CN103577832B (en) A kind of based on the contextual people flow rate statistical method of space-time
CN109447082A (en) A kind of scene motion Target Segmentation method, system, storage medium and equipment
CN105678810A (en) Cell tracking method based on global and local optimum

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160406