CN103888768B - Method and device for condensing a video image frame sequence - Google Patents

Method and device for condensing a video image frame sequence

Info

Publication number
CN103888768B
Application number
CN201210564152.5A
Authority
CN (China)
Prior art keywords
sequence; moving target; image frame; video image; frame sequence
Legal status
Active (granted)
Other languages
Chinese (zh)
Other versions
CN103888768A (application publication)
Inventors
黄虎, 潘晖, 郑韬, 潘石柱, 张兴明, 傅利泉, 朱江明, 吴军, 吴坚
Original and current assignee
Zhejiang Dahua Technology Co Ltd
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201210564152.5A
Publication of application CN103888768A; application granted and published as CN103888768B

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method and device for condensing a video image frame sequence. By comparing the video image frame sequence to be condensed with a preset background model, an image frame sequence consisting only of the image frames that contain each moving target is determined; in addition, the position of each moving target within the image frames it belongs to and the moment of its first appearance in the video image frame sequence are determined. According to the determined positions and first-appearance moments, it is judged whether each moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it. A condensed video image frame sequence is then generated according to these judgment results, the first-appearance moments, the image frame sequences and the background model.

Description

Method and device for condensing a video image frame sequence
Technical field
The present invention relates to the field of video processing, and in particular to a method and device for condensing a video image frame sequence.
Background art
With the spread of digital and intelligent systems, cities deploy more and more video surveillance equipment, covering scenes such as traffic intersections, banks and residential communities and producing huge volumes of video data every day. When a specific target has to be found in this stored data, the recordings of multiple surveillance devices over several consecutive days must be retrieved and screened manually; the wider the coverage and the longer the time span of the query, the more difficult the investigation. To address this, traditional video compression techniques shorten the playback time of the original recording by discarding the image frames that contain no foreground target and reassembling the remaining frames, those in which moving targets appear, into a new video. Here a "foreground target" is defined relative to the background model and usually refers to a moving target appearing against that background. However, such techniques merely delete the frames without foreground targets; because they do not exploit the spatial information of the video scene, the resulting video is still very long. How to shorten the original video substantially while preserving the temporal-logic relations between its moving targets therefore remains an urgent problem.
Summary of the invention
Embodiments of the present invention provide a method for condensing a video image frame sequence, so as to solve the prior-art problem of substantially shortening the original video while preserving the temporal-logic relations between the moving targets in it.
Embodiments of the present invention adopt the following technical solutions.
A method for condensing a video image frame sequence comprises:
determining, according to the video image frame sequence to be condensed and a preset background model, all moving targets that appear in the video image frame sequence, wherein a moving target is either a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a plurality of moving sub-targets that have an overlapping relation;
generating, based on the video image frame sequence, the background model and a moving-target tracking technique, an image frame sequence for each determined moving target;
determining, for each moving target, its position in each image frame of the image frame sequence it belongs to and the moment of its first appearance in the video image frame sequence;
judging, according to the determined positions and first-appearance moments, for each moving target other than the earliest-appearing one, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and
generating a condensed video image frame sequence according to the judgment results obtained for the moving targets other than the earliest-appearing one, the first-appearance moments, the background model and the image frame sequences.
A device for condensing a video image frame sequence comprises:
a moving-target determining unit, configured to determine, according to the video image frame sequence to be condensed and a preset background model, all moving targets that appear in the video image frame sequence, wherein a moving target is either a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a plurality of moving sub-targets that have an overlapping relation;
an image-frame-sequence generating unit, configured to generate, based on the video image frame sequence, the background model and a moving-target tracking technique, an image frame sequence for each determined moving target;
a position-and-moment determining unit, configured to determine, for each moving target, its position in each image frame of the image frame sequence generated for it by the image-frame-sequence generating unit and the moment of its first appearance in the video image frame sequence;
a judging unit, configured to judge, according to the positions and first-appearance moments determined by the position-and-moment determining unit, for each moving target other than the earliest-appearing one, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and
a condensed-video-image-frame-sequence generating unit, configured to generate a condensed video image frame sequence according to the background model, the judgment results obtained by the judging unit for the moving targets other than the earliest-appearing one, the first-appearance moments determined by the position-and-moment determining unit, and the image frame sequences generated by the image-frame-sequence generating unit.
The beneficial effects of the present invention are as follows:
The condensation scheme for a video image frame sequence provided by the embodiments of the present invention takes as its condensation basis both the degree of overlap between different moving targets in the corresponding image frames and the moment at which each moving target first appears in the video image frame sequence. It thus not only uses the spatial information of the video scene but also takes into account the temporal order in which the moving targets appear in the sequence to be condensed, and therefore shortens the original video substantially while preserving the temporal-logic relations between the moving targets in the original video.
Brief description of the drawings
Fig. 1 is a flow chart of a method for condensing a video image frame sequence according to an embodiment of the present invention;
Fig. 2 is a flow chart of another method for condensing a video image frame sequence according to an embodiment of the present invention;
Fig. 3 is a structural diagram of a device for condensing a video image frame sequence according to an embodiment of the present invention.
Detailed description of the embodiments
To solve the prior-art problem that the original video cannot be compressed substantially while the temporal-logic relations between its moving targets are preserved, embodiments of the present invention provide a method and device for condensing a video image frame sequence. Condensing a video image frame sequence can be understood as producing a brief summary of the video content: the moving targets are extracted and different targets are spliced into a common background scene. The method provided by the embodiments takes as its condensation basis both the degree of overlap between different moving targets in the corresponding image frames and the moment at which each moving target first appears in the video image frame sequence; the condensation therefore exploits the spatial information of the video scene while also respecting the temporal order in which the moving targets appear in the sequence to be condensed, and shortens the original video substantially while preserving the temporal-logic relations between the moving targets in the original video.
Embodiments of the invention are described below with reference to the accompanying drawings. It should be understood that the embodiments described here merely illustrate and explain the present invention and do not limit it, and that, where no conflict arises, the embodiments in this description and the features in them may be combined with one another.
Embodiment 1
Based on the above idea, the method for condensing a video image frame sequence provided by this embodiment, whose flow chart is shown in Fig. 1, comprises the following steps.
Step 11: determine, according to the video image frame sequence to be condensed and a preset background model, all moving targets that appear in the video image frame sequence. A moving target is either a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a plurality of moving sub-targets that have an overlapping relation.
To determine the moving targets, every moving object appearing in the video image frame sequence to be condensed is first regarded as a moving sub-target. Each moving sub-target that does not overlap any other moving sub-target is then taken as one moving target on its own, while moving sub-targets that have an overlapping relation with one another are taken together as a single moving target. An overlapping relation between moving sub-targets generally means that they occlude one another in one or more of the video image frames of the sequence, and the relation is transitive: if sub-target A and sub-target B occlude each other in some frame, A and B have an overlapping relation; if B and a further sub-target C occlude each other in some frame, it can further be determined that A, B and C together have an overlapping relation. For example, suppose the sequence to be condensed contains two moving sub-targets, A and B. If A and B occlude each other somewhere in the sequence, e.g. A partly overlaps B in one or several frames, then A and B are regarded together as one moving target. If A and B never occlude each other in the sequence, A is determined to be one moving target and B another, as the grouping sketch below illustrates.
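As an illustration of this grouping rule, the following minimal Python sketch merges moving sub-targets into moving targets with a union-find structure, so that transitively occluding sub-targets end up in one group; the data layout and helper names are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch: group moving sub-targets into moving targets by the
# transitive "overlapping relation" described above. The pair list and
# helper names are illustrative assumptions, not from the patent text.

def group_sub_targets(num_sub_targets, occluding_pairs):
    """occluding_pairs: (i, j) pairs that occlude each other in some frame."""
    parent = list(range(num_sub_targets))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in occluding_pairs:      # union every occluding pair
        parent[find(i)] = find(j)

    groups = {}
    for s in range(num_sub_targets):  # collect sub-targets per root
        groups.setdefault(find(s), []).append(s)
    return list(groups.values())

# A occludes B, B occludes C  =>  {A, B, C} form one moving target;
# D never overlaps anyone     =>  D is a moving target on its own.
print(group_sub_targets(4, [(0, 1), (1, 2)]))  # [[0, 1, 2], [3]]
```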
Step 12: generate, based on the video image frame sequence, the background model and a moving-target tracking technique, an image frame sequence for each determined moving target.
For a moving target composed of several moving sub-targets, performing the difference operation between the video image frame sequence and the background model, together with the tracking of the moving target realized by the tracking technique, preserves the temporal-logic relations between the sub-targets. For example, suppose sub-targets A and B are regarded as one moving target because they have an overlapping relation in the video image frame sequence, and that: A first appears in frame 1 of the sequence; B first appears 50 frames later; A and B overlap from frame 100 to frame 150; from frame 151 they no longer overlap; A disappears in frame 200; and B disappears in frame 220. Then the image frame sequence obtained for this moving target, by differencing the video image frame sequence against the background model and tracking A and B, contains all frames from frame 1 to frame 220, so that both the relative positions of A and B and their temporal-logic relation in the video image frame sequence are preserved.
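Under the assumptions of this example, the bookkeeping for the composite target's frame range is simply the union of its sub-targets' lifetimes; a minimal, illustrative sketch:

```python
# Minimal sketch: the image frame sequence of a composite moving target
# spans from the earliest first appearance to the latest disappearance of
# its sub-targets. The frame numbers reproduce the A/B example above.

def composite_frame_span(sub_target_lifetimes):
    """sub_target_lifetimes: list of (first_frame, last_frame) per sub-target."""
    first = min(start for start, _ in sub_target_lifetimes)
    last = max(end for _, end in sub_target_lifetimes)
    return first, last

print(composite_frame_span([(1, 200), (51, 220)]))  # (1, 220)
```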
It should be noted that moving-target tracking is a mature prior-art technique and is not what the present invention improves upon, so it is not described further in the embodiments of the present invention.
Step 13: determine, for each moving target, its position in each image frame of the image frame sequence it belongs to and the moment of its first appearance in the video image frame sequence.
Specifically, for a moving target composed of several moving sub-targets, the coordinate position of the moving target in each image frame of its image frame sequence is determined from the coordinate positions of its sub-targets in the corresponding video frame images of the video image frame sequence, and its first-appearance moment in the video image frame sequence is determined from the first-appearance moments of its sub-targets; normally, the first-appearance moment of the earliest-appearing sub-target is taken as the first-appearance moment of the moving target.
Optionally, once the moving targets have been determined, each is assigned a distinct target identification according to the order of first appearance in the video image frame sequence: for instance the first moving target to appear is labelled "No. 1 target", the second "No. 2 target", and so on. The identifications usually reflect the order of appearance, as "No. 1 target" and "No. 2 target" do, and the first-appearance moment of each moving target is recorded against its identification. For a moving target composed of several sub-targets, sub-target identifications can also be assigned: if, say, the No. 2 moving target contains two sub-targets, they are labelled sub-target 2.1 and sub-target 2.2 according to the order of their first appearance in the video image frame sequence.
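A minimal sketch of this labelling step, assuming the first-appearance frames are already known from tracking (the names and data layout are illustrative assumptions):

```python
# Minimal sketch: assign target identifications in order of first
# appearance, e.g. "No. 1 target", "No. 2 target"; sub-target labels such
# as 2.1/2.2 follow the same rule inside a composite target.

def assign_ids(first_appearance):            # {target_name: first_frame}
    ordered = sorted(first_appearance, key=first_appearance.get)
    return {name: f"No. {rank} target"
            for rank, name in enumerate(ordered, start=1)}

print(assign_ids({"car": 120, "pedestrian": 3, "bicycle": 47}))
# {'pedestrian': 'No. 1 target', 'bicycle': 'No. 2 target', 'car': 'No. 3 target'}
```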
Step 14: according to the positions and first-appearance moments determined in step 13, judge, for each moving target other than the earliest-appearing one, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it.
Whether a moving target overlaps, in the image frames of the corresponding image frame sequences, with the moving targets that appeared before it can be judged as follows:
based on the first image frame sequence, to which this moving target belongs, and each second image frame sequence, to which one of the earlier-appearing moving targets belongs, compare each image frame of the first image frame sequence in turn with the corresponding image frame of each second image frame sequence, and judge from these comparisons whether the moving target overlaps with any earlier-appearing moving target in the frames concerned.
There are several possible criteria for this judgment. If, for example, each moving target is represented by a rectangular box, the overlap standard may be that two boxes partly overlap, that one box completely covers the other, or merely that two boxes collide (touch or intersect). Different overlap criteria affect the condensation result of the final video image frame sequence.
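The following sketch spells out these candidate standards for axis-aligned (x, y, w, h) rectangles; the predicate names are illustrative assumptions. Which predicate to adopt is a design choice: a loose "collision" test spaces targets further apart in the condensed video, while a stricter test packs them more densely.

```python
# Minimal sketch of the overlap standards mentioned above for axis-aligned
# rectangles given as (x, y, w, h).

def intersects(a, b):        # "collision": any contact between the boxes
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def contains(a, b):          # "complete overlap": b lies entirely inside a
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax <= bx and ay <= by and bx + bw <= ax + aw and by + bh <= ay + ah

def partly_overlaps(a, b):   # intersect, but neither box contains the other
    return intersects(a, b) and not (contains(a, b) or contains(b, a))

print(intersects((0, 0, 10, 10), (5, 5, 10, 10)))       # True
print(contains((0, 0, 10, 10), (2, 2, 3, 3)))           # True
print(partly_overlaps((0, 0, 10, 10), (5, 5, 10, 10)))  # True
```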
Step 15: generate a condensed video image frame sequence according to the judgment results obtained in step 14 for the moving targets other than the earliest-appearing one, the image frame sequences generated in step 12 for the moving targets, the first-appearance moments determined in step 13 and the background model.
The implementation of step 15 may comprise the following sub-steps.
First, for each moving target other than the earliest-appearing one, determine from the corresponding judgment results the overlapped moving targets that overlap with this moving target in the corresponding image frames, and the non-overlapped moving targets that do not.
Second, generate a reference image frame sequence by synthesizing the background model with the image frame sequence generated for the moving target that appears earliest in the video image frame sequence.
Third, determine the order of appearance of the moving targets other than the earliest-appearing one from the first-appearance moments determined in step 13. Optionally, when the target identifications reflect the order of appearance, the order can be read directly from the identifications.
Finally, generate the condensed video image frame sequence based on the reference image frame sequence, the determined order of appearance, a preset image synthesis rule and the image frame sequences other than the image frame sequence of the earliest-appearing moving target.
Optionally, the preset image synthesis rule may be:
for the overlapped moving targets: according to their order of appearance in the video image frame sequence, determine a first appearance moment of each overlapped moving target in the condensed video image frame sequence, and synthesize the image frame sequence of each overlapped moving target with the reference image frame sequence; and
for the non-overlapped moving targets: according to their order of appearance in the video image frame sequence, determine a second appearance moment of each non-overlapped moving target in the condensed video image frame sequence, and synthesize the image frame sequence of each non-overlapped moving target with the reference image frame sequence;
wherein every first and second appearance moment is determined so that different moving targets do not overlap in the condensed video image frame sequence and the order in which they appear in the condensed video image frame sequence is consistent with their order of appearance in the original video image frame sequence.
Specifically, suppose two moving targets have been determined, the No. 1 and No. 2 moving targets, each containing a single moving sub-target, and suppose each moving target is represented by a rectangular box. If the judgment of step 14 finds that the box representing the No. 2 moving target collides with the box representing the No. 1 moving target in the image frames of the corresponding image frame sequences, then the first-appearance moment of the No. 2 moving target in the sequence to be condensed is shifted in time, starting from the first-appearance moment of the No. 1 moving target in the condensed sequence, until the two boxes no longer collide in any corresponding frame; the moment so reached is taken as the first-appearance moment of the No. 2 moving target in the condensed video image frame sequence. If the judgment of step 14 finds no collision between the two boxes, the first-appearance moment of the No. 1 moving target in the sequence to be condensed can be taken directly as the first-appearance moment of the No. 2 moving target in the condensed sequence; the two targets may then be seen appearing simultaneously in the condensed video, but the target identifications (No. 1, No. 2) still show that the No. 1 moving target first appeared earlier than the No. 2 moving target in the sequence to be condensed.
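A minimal sketch of this shift for the two-target case (illustrative names, not the patent's literal implementation): No. 2's start is aligned with No. 1's first appearance in the condensed sequence, then offset in time until the two boxes no longer collide in any co-displayed frame.

```python
# Boxes are (x, y, w, h) rectangles, one per frame of each target's
# image frame sequence; `intersects` is the collision predicate above.

def intersects(a, b):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def condensed_first_appearance(boxes1, boxes2, base_start=0):
    shift = 0
    while any(
        intersects(boxes1[t + shift], box)
        for t, box in enumerate(boxes2)
        if 0 <= t + shift < len(boxes1)
    ):
        shift += 1                        # keep offsetting in time
    return base_start + shift             # No. 2's first-appearance moment

boxes1 = [(5 * f, 0, 10, 10) for f in range(20)]   # No. 1 moves right
boxes2 = [(0, 0, 10, 10)] * 10                     # No. 2 stays at the origin
print(condensed_first_appearance(boxes1, boxes2))  # 2: starts 2 frames later
```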
Further, suppose two moving targets have been determined, where the No. 1 moving target contains a single sub-target and the No. 2 moving target contains sub-targets 2.1 and 2.2, and suppose each moving sub-target is represented by a rectangular box. If the judgment of step 14 finds that the box representing sub-target 2.1 or the box representing sub-target 2.2 collides with the box representing the No. 1 moving target in the image frames of the corresponding image frame sequences, then the first-appearance moment of the No. 2 moving target in the sequence to be condensed is shifted in time, starting from the first-appearance moment of the No. 1 moving target in the condensed sequence, until neither the box of sub-target 2.1 nor the box of sub-target 2.2 collides with the box of the No. 1 moving target in any corresponding frame; the moment so reached is taken as the first-appearance moment of the No. 2 moving target, i.e. of sub-target 2.1, in the condensed video image frame sequence. If the judgment of step 14 finds no collision between either sub-target's box and the box of the No. 1 moving target, the first-appearance moment of the No. 1 moving target in the sequence to be condensed can be taken as the first-appearance moment of the No. 2 moving target in the condensed sequence, so that sub-target 2.1 and the No. 1 moving target may be seen appearing simultaneously in the condensed video; the (sub-)target identifications still show that the No. 1 moving target first appeared earlier than sub-target 2.1 in the sequence to be condensed. It can thus be seen that, while compressing the sequence to be condensed in time, the embodiment of the present invention considers both the temporal order in which the moving targets appear in the sequence and the spatial information of the scene, thereby shortening the original video substantially while preserving the temporal-logic relations between the moving targets in the original video.
Optionally, if the video image frame sequence to be condensed contains video frame images with different background image information, the method further comprises the following steps.
For each video frame image in the sequence to be condensed, in turn, perform a designated operation, until every video frame image in the sequence has been compared with the background model and every background image frame to be saved has been saved; then generate a condensed background image frame sequence from the saved background image frames.
The designated operation comprises: comparing the video frame image with the preset background model and updating the background model according to the comparison result, obtaining an updated background model; when the current time is a background-image-frame saving moment determined by a preset time interval, taking the updated background model as a background image frame to be saved and saving it; and using the updated background model as the preset background model for the next comparison.
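A minimal sketch of the designated operation, assuming a running-average update rule and a fixed frame rate; both are assumptions for illustration, since the patent leaves the update method open.

```python
import numpy as np

# Compare each frame with the current background model, update the model
# (running average here), and snapshot the model as a background image
# frame at every preset time interval. Parameter values are illustrative.

def condense_background(frames, fps=25, save_interval_s=60, alpha=0.05):
    background = frames[0].astype(np.float64)        # preset background model
    saved = []
    for idx, frame in enumerate(frames):
        # update the background model according to the comparison result
        background += alpha * (frame.astype(np.float64) - background)
        if (idx + 1) % (fps * save_interval_s) == 0:   # a saving moment?
            saved.append(background.astype(np.uint8))  # frame to be saved
    return saved                        # the condensed background image frames
```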
Updating the background model in this way avoids generating the condensed video image frame sequence from a single fixed background model that ignores the background changes in the sequence to be condensed, which would otherwise make the background image of the final condensed sequence differ too much from that of the sequence to be condensed and make the condensed video look unreal.
It should be noted that differences in the background image information of different video frame images may be reflected in, but are not limited to, differences in luminance (for example caused by changing light) or differences in the shadow areas cast by objects in the background image.
In summary, the method for condensing a video image frame sequence provided by the embodiment of the present invention compares the video image frame sequence to be condensed with a preset background model to determine, for each moving target, an image frame sequence consisting only of the image frames that contain this moving target, the position of the moving target in those image frames and the moment of its first appearance in the video image frame sequence; judges, according to the determined positions and first-appearance moments, whether each moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and generates a condensed video image frame sequence according to the judgment results obtained for the moving targets other than the earliest-appearing one, the first-appearance moments, the image frame sequences and the background model. While compressing the sequence to be condensed in time, the embodiment thus considers both the temporal order in which the moving targets appear in the sequence and the spatial information of the scene, and thereby shortens the original video substantially while preserving the temporal-logic relations between the moving targets in the original video.
Embodiment 2
The method for condensing a video image frame sequence provided by embodiment 2 of the present invention adds, on the basis of embodiment 1, the updating of the background model, which makes the final condensed video image frame sequence more realistic. The flow chart of the method is shown in Fig. 2.
Step 21: set a background model.
A background model is established from the initial consecutive frames of the video image frame sequence to be condensed; the usual modelling methods are single-Gaussian and mixture-of-Gaussians background modelling. It should be noted that a background model may correspond to a single background image frame or to a background image frame sequence composed of several background image frames; in this embodiment of the present invention, the background model is assumed to correspond to a single background image frame.
Step 22: determine the moving targets.
Specifically, once the background model is set, the background-difference method is used: each video frame image of the sequence to be condensed is compared with the background model, every pixel whose difference result exceeds a given threshold is counted as a foreground pixel, and the image formed by these foreground pixels contains the moving targets. A moving target may be a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a moving target composed of several moving sub-targets that have an overlapping relation; in embodiment 2 the moving targets are assumed to be moving sub-targets that overlap no other moving sub-targets.
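A minimal OpenCV-based sketch of this background-difference step; the threshold value and the use of OpenCV (version 4) are assumptions for illustration.

```python
import cv2

# Pixels whose absolute difference from the background model exceeds a
# threshold count as foreground; connected foreground regions are the
# candidate moving targets, returned as bounding boxes.

def detect_moving_targets(frame_bgr, background_bgr, threshold=30):
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)              # background difference
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]     # (x, y, w, h) boxes
```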
Step 23: track the moving targets.
Once the moving targets are determined, the moving targets that appear in the scene defined by the background model must be tracked. Each moving target entering the scene can be assigned a target identification, e.g. the No. 1 moving target, the No. 2 moving target, the No. 3 moving target and so on, where the No. 2 moving target enters the scene after the No. 1, the No. 3 after the No. 2, and so forth; from entering the scene to leaving it, each moving target thus corresponds to one target identification. If the sequence of a moving target from entering the scene to leaving it is defined as an event, the identification of each event corresponds to the identification of the moving target in that event.
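A minimal sketch of the per-event record implied by this step; the field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

# An "event" runs from the frame where the target enters the scene to the
# frame where it leaves, and carries the per-frame box positions.

@dataclass
class Event:
    target_id: int                 # e.g. 1 for "No. 1 moving target"
    first_frame: int               # first appearance in the original video
    last_frame: int                # last appearance in the original video
    boxes: list = field(default_factory=list)   # (x, y, w, h) per frame

    @property
    def duration(self):
        return self.last_frame - self.first_frame + 1
```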
For each target identification, the tracker records the coordinate position of the moving target in each corresponding video frame image of the sequence to be condensed, the image information of each such frame, and the moments of the target's first and last appearance in the sequence to be condensed.
Step 24: extract the target information.
For each determined moving target, the image frame sequence consisting only of the image frames that contain this moving target is extracted from the sequence to be condensed, so that each image frame sequence corresponds to one event. Through the target identification corresponding to the image frame sequence, the coordinate positions, the video image frame information and the first- and last-appearance moments recorded in step 23 are obtained.
Step 25: determine the occurrence moment of each event in the condensed video image frame sequence.
Let T_i = (t_ori_s(i), t_ori_e(i); t_syp_s(i), t_syp_e(i)) and T_j = (t_ori_s(j), t_ori_e(j); t_syp_s(j), t_syp_e(j)) denote, for events i and j respectively, the start and end moments in the video image frame sequence to be condensed and the start and end moments in the condensed video image frame sequence. Suppose event i occurs before event j. With event i as the reference, event j is shifted so that it starts at the same moment as event i, and it is judged whether the moving target M_i of event i and the moving target M_j of event j occlude each other. If they do not, the start moment of event j in the condensed sequence is t_syp_s(j) = t_syp_s(i), and its end moment is t_syp_e(j) = t_syp_s(j) + (t_ori_e(j) - t_ori_s(j)). If they do, the start moment of event j is offset backwards by shift (shift > 0) until the shifted M_j no longer occludes M_i; the start moment of event j in the condensed sequence is then t_syp_s(j) = t_syp_s(i) + shift, and its end moment is t_syp_e(j) = t_syp_s(j) + (t_ori_e(j) - t_ori_s(j)). This implementation guarantees that event j still occurs no earlier than event i in the condensed video image frame sequence.
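Read literally, these formulas amount to the following scheduling loop; this is an illustrative Python sketch under the stated notation, with each event represented by its per-frame (x, y, w, h) boxes, and not the patent's own code.

```python
def schedule(events_boxes):
    """events_boxes: one list of per-frame (x, y, w, h) boxes per event,
    ordered by first appearance in the original video."""
    def intersects(a, b):
        ax, ay, aw, ah = a; bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    placed = []                                  # (boxes, t_syp_s) per event
    for boxes in events_boxes:
        start = placed[-1][1] if placed else 0   # align with the previous event
        while any(
            0 <= start + t - s < len(other) and intersects(box, other[start + t - s])
            for other, s in placed
            for t, box in enumerate(boxes)
        ):
            start += 1                           # shift > 0: offset backwards
        placed.append((boxes, start))
    # (t_syp_s, t_syp_e), with t_syp_e = t_syp_s + (t_ori_e - t_ori_s)
    return [(s, s + len(boxes) - 1) for boxes, s in placed]

e1 = [(5 * f, 0, 10, 10) for f in range(20)]     # reference event
e2 = [(0, 0, 10, 10)] * 10                       # collides until shifted
print(schedule([e1, e2]))                        # [(0, 19), (2, 11)]
```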
To illustrate, suppose three moving targets appear one after another in the sequence to be condensed, the No. 1, No. 2 and No. 3 moving targets, each represented by a rectangular box of equal size. Event 1 is determined from the coordinate positions of the box corresponding to the No. 1 moving target in the video frame images of the sequence to be condensed and from the moments of its first and last appearance in that sequence; event 2 is determined in the same way from the box corresponding to the No. 2 moving target, and event 3 from the box corresponding to the No. 3 moving target.
Under these assumptions, since the No. 1 moving target first appears in the sequence to be condensed earlier than the No. 2 moving target, event 1 is taken as the reference event: the start moment of event 2 is shifted to the moment identical with the start moment of event 1, and the boxes contained in event 2 are correspondingly compared with those contained in event 1 for collisions. Assuming the comparison result is negative, the start moment of event 1 in the sequence to be condensed is taken as the start moment of event 2 in the condensed video image frame sequence. The start moment of event 3 is then shifted to the moment identical with the start moment of event 2 in the condensed sequence, and the boxes contained in event 3 are compared with the boxes contained in events 1 and 2. Assuming the comparison result is positive, the start moment of event 3 is shifted in time, starting from the start moment of event 2 in the condensed sequence, until the boxes contained in event 3 no longer collide with the boxes contained in events 1 and 2, and the moment reached after the shift is taken as the start moment of event 3 in the condensed video image frame sequence.
From this example, the processing of all the moving targets appearing in the sequence to be condensed can be summarized as follows: the event corresponding to the moving target that appears first in the sequence to be condensed is selected as the reference event of the condensed video, and each subsequently appearing event is shifted in time and judged only against the events before it, which guarantees the original temporal-logic relations between the events while the original video is compressed in both time and space.
Step 26: update the background model, and select background image frames at the preset time interval to form the condensed background image frame sequence.
Specifically, suppose the video image frame sequence to be condensed contains 100 video frame images and the background model set in step 21 is denoted background model 1. The updating of the background model then proceeds as follows: the 1st video frame image of the sequence to be condensed is compared with the background image frame corresponding to background model 1, the background model is updated according to the comparison result, and the updated model is denoted background model 2; the 2nd video frame image of the sequence is then compared with background model 2 and the background model is again updated according to the comparison result, giving background model 3; and so on, until the comparison of the 100th video frame image with background model 100 is completed and background model 101 is obtained, at which point the updating of the background model ends.
Whenever one update of the background model is completed (i.e. an updated background model is obtained), it is judged whether the current time is a background-image-frame saving moment determined by the preset time interval; if so, the background image frame corresponding to the updated background model is saved. After background model 101 has been obtained and the updating has ended, the condensed background image frame sequence is generated from the saved background image frames.
For example, take a video image frame sequence recorded from 7 a.m. to 9 a.m. that is to be condensed, with the preset background model corresponding to the background image of the video frame taken at 7 a.m. To make the final condensed video image frame sequence more realistic, the background image frame corresponding to the currently updated background model can be saved every 2 minutes, or every 1 minute. If it is saved once every minute (and the background image frame of the initial moment is not saved), then 120 background image frames are saved in total for this 7-to-9 sequence, and these 120 background image frames constitute the condensed background image frame sequence.
Step 27: generate the condensed video image frame sequence.
The events are spliced into the condensed background image frame sequence determined in step 26 according to the occurrence moments determined for them in step 25, thereby generating the condensed video image frame sequence.
Specifically, suppose the total duration of all the events to be spliced is 12 minutes. Continuing the example of step 26, when the condensed video image frame sequence is generated, the image corresponding to the 1st background image frame serves as the background of the condensed video frame images from the 1st to the 6th second, the image corresponding to the 2nd background image frame serves as the background from the 7th to the 12th second, and so on, until all events are spliced into the condensed background image frame sequence obtained in step 26 and the condensed video image frame sequence is generated. The condensed background image frame sequence formed in this way reflects the changes of the background image in the sequence to be condensed (for example the changes of light), so that condensing a video of several hours into a few minutes still moderately preserves the realism of the original video.
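For the figures in this example, the mapping from a condensed-video timestamp to the background image frame displayed behind it reduces to integer division; a minimal sketch, where the 12-minute total and 120 saved frames are the example's assumptions, not fixed by the patent:

```python
# 120 background frames spread evenly over a 720 s condensed video give
# each background frame a 6-second slot.

def background_index(t_seconds, total_s=720, num_backgrounds=120):
    slot = total_s / num_backgrounds            # 6 s per background frame
    return min(int(t_seconds // slot), num_backgrounds - 1)

print(background_index(1))    # 0 -> 1st background frame (seconds 0-6)
print(background_index(7))    # 1 -> 2nd background frame (seconds 6-12)
```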
In summary, the method for condensing a video image frame sequence provided by the embodiment of the present invention, while compressing the sequence to be condensed in time, not only uses the spatial information of the scene but also takes into account the temporal order in which the moving targets appear in the sequence to be condensed, thereby shortening the original video while preserving the temporal-logic relations between the moving targets in the original video.
Corresponding to the method for condensing a video image frame sequence provided by the embodiments of the present invention, an embodiment of the present invention also provides a device for condensing a video image frame sequence, shown in Fig. 3, to solve the prior-art problem that the original video cannot be compressed substantially while the temporal-logic relations between its moving targets are preserved. The structure of the device may comprise:
a moving-target determining unit 31, configured to determine, according to the video image frame sequence to be condensed and a preset background model, all moving targets that appear in the sequence to be condensed, wherein a moving target is either a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a plurality of moving sub-targets that have an overlapping relation;
an image-frame-sequence generating unit 32, configured to generate, based on the video image frame sequence, the background model and a moving-target tracking technique, an image frame sequence for each determined moving target;
a position-and-moment determining unit 33, configured to determine, for each moving target, its position in each image frame of the image frame sequence it belongs to and the moment of its first appearance in the video image frame sequence;
a judging unit 34, configured to judge, according to the positions and first-appearance moments determined by the position-and-moment determining unit 33, for each moving target other than the earliest-appearing one among those appearing in the video image frame sequence, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and
a condensed-video-image-frame-sequence generating unit 35, configured to generate a condensed video image frame sequence according to the background model, the judgment results obtained by the judging unit 34 for the moving targets other than the earliest-appearing one, the first-appearance moments determined by the position-and-moment determining unit 33, and the image frame sequences generated by the image-frame-sequence generating unit 32.
Optionally, if the video image frame sequence to be condensed contains video frame images with different background image information, the device for condensing a video image frame sequence may further comprise:
a condensed-background generating unit 36, configured to perform a designated operation for each video frame image in the sequence to be condensed in turn, until every video frame image in the sequence has been compared with the background model and every background image frame to be saved has been saved, and then to generate a condensed background image frame sequence from the saved background image frames;
the designated operation comprising: comparing the video frame image with the preset background model and updating the background model according to the comparison result, obtaining an updated background model; when the current time is a background-image-frame saving moment determined by a preset time interval, taking the updated background model as a background image frame to be saved and saving it; and using the updated background model as the preset background model; in which case
the condensed-video-image-frame-sequence generating unit 35 is specifically configured to generate the condensed video image frame sequence according to the condensed background image frame sequence generated by the condensed-background generating unit 36, the judgment results obtained by the judging unit 34, the first-appearance moments determined by the position-and-moment determining unit 33 and the image frame sequences generated by the image-frame-sequence generating unit 32.
Further, the judging unit 34 is specifically configured to:
judge, according to the positions and first-appearance moments determined by the position-and-moment determining unit 33, for each moving target other than the earliest-appearing one among those appearing in the video image frame sequence, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it, by comparing in turn, based on the first image frame sequence to which this moving target belongs and each second image frame sequence to which one of the earlier-appearing moving targets belongs, each image frame of the first image frame sequence with the corresponding image frame of each second image frame sequence.
The condensed-video-image-frame-sequence generating unit 35 may specifically comprise:
a moving-target distinguishing sub-unit 351, configured to determine, for each moving target other than the earliest-appearing one, from the corresponding judgment result of the judging unit 34, the overlapped moving targets that overlap with this moving target in the corresponding image frames and the non-overlapped moving targets that do not;
a reference-sequence generating sub-unit 352, configured to generate a reference image frame sequence by synthesizing the condensed background image frame sequence with the image frame sequence generated for the moving target that appears earliest in the video image frame sequence;
an order determining sub-unit 353, configured to determine, from the first-appearance moments determined by the position-and-moment determining unit, the order of appearance of the moving targets other than the earliest-appearing one; and
a sequence synthesizing sub-unit 354, configured to generate the condensed video image frame sequence based on the reference image frame sequence generated by the reference-sequence generating sub-unit 352, the order of appearance determined by the order determining sub-unit 353, a preset image synthesis rule and the image frame sequences other than the image frame sequence of the earliest-appearing moving target.
The image synthesis rule may comprise:
for the overlapped moving targets: according to their order of appearance in the video image frame sequence, determining a first appearance moment of each overlapped moving target in the condensed video image frame sequence, and synthesizing the image frame sequence of each overlapped moving target with the reference image frame sequence; and
for the non-overlapped moving targets: according to their order of appearance in the video image frame sequence, determining a second appearance moment of each non-overlapped moving target in the condensed video image frame sequence, and synthesizing the image frame sequence of each non-overlapped moving target with the reference image frame sequence;
wherein every first and second appearance moment is determined so that different moving targets do not overlap in the condensed video image frame sequence and the order in which they appear in the condensed video image frame sequence is consistent with their order of appearance in the video image frame sequence.
In summary, the device for condensing a video image frame sequence provided by the embodiment of the present invention compares the video image frame sequence to be condensed with a preset background model to determine, for each moving target, an image frame sequence consisting only of the image frames that contain this moving target, the position of the moving target in those image frames and the moment of its first appearance in the video image frame sequence; judges, according to the determined positions and first-appearance moments, whether each moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and, making full use of the spatial information of the background model, generates a condensed video image frame sequence according to the judgment results obtained for the moving targets other than the earliest-appearing one, the first-appearance moments and the image frame sequences.
It can thus be seen that, while compressing the sequence to be condensed in time, the embodiment of the present invention considers both the temporal order in which the moving targets appear in the sequence and the spatial information of the scene, thereby shortening the original video substantially while preserving the temporal-logic relations between the moving targets in the original video.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to cover them as well.

Claims (8)

1. A method for condensing a video image frame sequence, characterized by comprising:
determining, according to the video image frame sequence to be condensed and a preset background model, all moving targets that appear in the video image frame sequence, wherein a moving target is either a moving sub-target that overlaps no other moving sub-target in the video image frame sequence, or a plurality of moving sub-targets that have an overlapping relation;
generating, based on the video image frame sequence, the background model and a moving-target tracking technique, an image frame sequence for each determined moving target;
determining, for each moving target, its position in each image frame of the image frame sequence it belongs to and the moment of its first appearance in the video image frame sequence;
judging, according to the determined positions and first-appearance moments, for each moving target other than the earliest-appearing one, whether the moving target overlaps, in the image frames of the corresponding image frame sequences, with any moving target that appeared before it; and
generating a condensed video image frame sequence according to the judgment results obtained for the moving targets other than the earliest-appearing one, the first-appearance moments, the background model and the image frame sequences;
wherein generating the condensed video image frame sequence according to the judgment results obtained for the moving targets other than the earliest-appearing one, the first-appearance moments, the background model and the image frame sequences specifically comprises:
for each moving target other than the earliest-appearing one, determining, according to the corresponding judgment results, the overlapped moving targets that overlap with this moving target in the corresponding image frames and the non-overlapped moving targets that do not;
generating a reference image frame sequence by synthesizing the background model with the image frame sequence generated for the moving target that appears earliest in the video image frame sequence;
determining, according to the first-appearance moments, the order of appearance of the moving targets other than the earliest-appearing one; and
generating the condensed video image frame sequence based on the reference image frame sequence, the order of appearance, a preset image synthesis rule and the image frame sequences other than the image frame sequence of the earliest-appearing moving target.
2. The method of claim 1, characterized in that the video image frame sequence to be concentrated comprises video frame images having different background image information;
the method then further comprises:
performing a specified operation successively for each video frame image in the video image frame sequence to be concentrated, until every video frame image has been compared with the background model and every background image frame to be saved has been determined and saved, and then generating a concentrated background image frame sequence from the saved background image frames;
wherein the specified operation comprises: comparing the video frame image with the preset background model, and updating the background model according to the comparison result to obtain an updated background model; when the current moment is judged to be a background image frame saving moment determined according to a preset time interval, determining the updated background model to be a background image frame to be saved and saving it; and taking the updated background model as the preset background model; and
wherein generating the reference image frame sequence by synthesizing the background model with the image frame sequence generated for the earliest-appearing moving target specifically comprises:
generating the reference image frame sequence by synthesizing the concentrated background image frame sequence with the image frame sequence generated for the earliest-appearing moving target in the video image frame sequence.
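A minimal sketch of the specified operation, assuming a running-average update over numeric frames; the learning rate `alpha`, the saving interval and the function name are illustrative, since the claim leaves the concrete comparison-and-update rule open:

```python
import numpy as np

def build_concentrated_background(frames, initial_background,
                                  save_interval=100, alpha=0.05):
    """Repeat the specified operation for every frame: compare the frame with
    the current background model, update the model accordingly, and at every
    preset saving moment keep the updated model as a background image frame.
    Returns the concentrated background image frame sequence."""
    model = initial_background.astype(np.float64)
    saved = []
    for t, frame in enumerate(frames):
        # One possible compare-and-update rule (a running average); any rule
        # that folds the comparison result into the model fits the claim.
        model = (1.0 - alpha) * model + alpha * frame.astype(np.float64)
        # The updated model replaces the preset model for the next frame;
        # at each preset interval it is also saved as a background frame.
        if (t + 1) % save_interval == 0:
            saved.append(model.astype(frame.dtype))
    return saved
```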
3. The method of claim 1, characterized in that judging whether a moving target overlaps, in the image frames contained in the corresponding image frame sequences, with each moving target that appeared before it specifically comprises:
judging the overlap based on the first image frame sequence in which this moving target is located and each second image frame sequence in which each previously appearing moving target is located, by performing an operation that compares, in corresponding order, each image frame in the first image frame sequence with the corresponding image frame in each second image frame sequence.
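As one reading of this frame-by-frame comparison, the sketch below aligns the two sequences by their moments of first appearance and tests per-frame foreground masks for intersection; the mask representation is an assumption, since the claim does not fix the overlap test itself:

```python
import numpy as np

def sequences_overlap(first_masks, first_start, second_masks, second_start):
    """Compare each image frame of the first sequence, in corresponding order,
    with the image frame of the second sequence taken at the same source-video
    moment, and report whether the two targets ever overlap. Masks are boolean
    arrays of one common shape; *_start are the moments of first appearance."""
    for i, mask in enumerate(first_masks):
        j = first_start + i - second_start        # corresponding frame index
        if 0 <= j < len(second_masks) and np.any(mask & second_masks[j]):
            return True
    return False
```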
4. The method of claim 2, characterized in that the image synthesis rule comprises:
for overlapping moving targets, determining, according to their appearance order in the video image frame sequence, a first appearance moment for each overlapping moving target in the concentrated video image frame sequence, and synthesizing the image frame sequence of each overlapping moving target with the reference image frame sequence; and
for non-overlapping moving targets, determining, according to their appearance order in the video image frame sequence, a second appearance moment for each non-overlapping moving target in the concentrated video image frame sequence, and synthesizing the image frame sequence of each non-overlapping moving target with the reference image frame sequence;
wherein every determined first and second appearance moment satisfies the following: different moving targets do not overlap in the concentrated video image frame sequence, and the appearance order of the different moving targets in the concentrated video image frame sequence is consistent with their appearance order in the original video image frame sequence.
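A greedy scheduler is one way (not necessarily the patented one) to satisfy both constraints at once; the sketch below reuses the box-per-frame track representation from the sketch under claim 1 and folds the first and second appearance moments into a single assignment pass:

```python
def boxes_overlap(a, b):   # same test as in the sketch under claim 1
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def collides(t1, s1, t2, s2):
    """Overlap test on the concentrated timeline, with t1 starting at moment
    s1 and t2 at s2; tracks carry one (x1, y1, x2, y2) box per frame."""
    for i, box in enumerate(t1.boxes):
        j = s1 + i - s2
        if 0 <= j < len(t2.boxes) and boxes_overlap(box, t2.boxes[j]):
            return True
    return False

def schedule(tracks):
    """Greedily pick appearance moments in the concentrated video: targets are
    processed in original appearance order, and each gets the earliest start
    that is no earlier than its predecessor's (order preserved) and collides
    with no target already placed (no overlap in the concentrated video)."""
    placed, floor = [], 0
    for track in sorted(tracks, key=lambda t: t.first_appearance):
        start = floor
        while any(collides(track, start, other, s) for s, other in placed):
            start += 1
        placed.append((start, track))
        floor = start
    return placed
```

Because overlapping targets can never share a moment with the targets they collide with, they are pushed later on the concentrated timeline, while non-overlapping targets pack as early as the order constraint allows.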
5. A concentration device for a video image frame sequence, characterized in that the device comprises:
a moving target determining unit, configured to determine, according to a video image frame sequence to be concentrated and a preset background model, all moving targets appearing in the video image frame sequence, wherein the moving targets comprise motion sub-targets that do not overlap with any other motion sub-target in the video image frame sequence, as well as groups of motion sub-targets that have an overlapping relationship;
an image frame sequence generating unit, configured to generate, based on the video image frame sequence, the background model and a moving target tracking technique, an image frame sequence containing each determined moving target, respectively;
a position and moment determining unit, configured to determine, for each moving target, its position in each image frame of the image frame sequence in which it is located, and its moment of first appearance in the video image frame sequence;
a judging unit, configured to judge, according to the positions and moments of first appearance determined by the position and moment determining unit, for each moving target appearing in the video image frame sequence other than the earliest-appearing moving target, whether that moving target overlaps, in the image frames contained in the corresponding image frame sequences, with each moving target that appeared before it; and
a concentrated video image frame sequence generating unit, configured to generate a concentrated video image frame sequence according to the background model, the judgment results obtained by the judging unit for each moving target other than the earliest-appearing one, the moments of first appearance determined by the position and moment determining unit, and the image frame sequences generated by the image frame sequence generating unit;
wherein the concentrated video image frame sequence generating unit specifically comprises:
a moving target distinguishing subunit, configured to determine, for each moving target other than the earliest-appearing one, according to the corresponding judgment result, the overlapping moving targets that overlap with this moving target in the corresponding image frames and the non-overlapping moving targets that do not;
a reference sequence generating subunit, configured to generate a reference image frame sequence by synthesizing the background model with the image frame sequence generated for the earliest-appearing moving target in the video image frame sequence;
an order determining subunit, configured to determine, according to the moments of first appearance, the appearance order of the moving targets other than the earliest-appearing one; and
a sequence synthesizing subunit, configured to generate the concentrated video image frame sequence based on the reference image frame sequence, the appearance order, a preset image synthesis rule, and the image frame sequences other than that of the earliest-appearing moving target.
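Structurally, the claimed device is a pipeline of units feeding one another. Below is a hypothetical wiring in Python, with each unit injected as a callable; the names and signatures are illustrative, not taken from the patent:

```python
class ConcentrationDevice:
    """Structural sketch of claim 5: each constructor argument stands in for
    one claimed unit."""

    def __init__(self, determine_targets, generate_sequences,
                 locate_targets, judge_overlaps, generate_concentrated):
        self.determine_targets = determine_targets          # moving target determining unit
        self.generate_sequences = generate_sequences        # image frame sequence generating unit
        self.locate_targets = locate_targets                # position and moment determining unit
        self.judge_overlaps = judge_overlaps                # judging unit
        self.generate_concentrated = generate_concentrated  # concentrated sequence generating unit

    def run(self, frames, background_model):
        # Each unit consumes the outputs of the units before it, mirroring
        # the data flow recited in the claim.
        targets = self.determine_targets(frames, background_model)
        sequences = self.generate_sequences(frames, background_model, targets)
        positions, first_moments = self.locate_targets(sequences)
        verdicts = self.judge_overlaps(positions, first_moments, sequences)
        return self.generate_concentrated(
            background_model, verdicts, first_moments, sequences)
```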
6. The device of claim 5, characterized in that the video image frame sequence to be concentrated comprises video frame images having different background image information;
the device then further comprises:
a concentrated background generating unit, configured to perform a specified operation successively for each video frame image in the video image frame sequence to be concentrated until every video frame image has been compared with the background model and every background image frame to be saved has been determined and saved, and then to generate a concentrated background image frame sequence from the saved background image frames;
wherein the specified operation comprises: comparing the video frame image with the preset background model, and updating the background model according to the comparison result to obtain an updated background model; when the current moment is judged to be a background image frame saving moment determined according to a preset time interval, determining the updated background model to be a background image frame to be saved and saving it; and taking the updated background model as the preset background model; and
wherein the reference sequence generating subunit is specifically configured to generate the reference image frame sequence by synthesizing the concentrated background image frame sequence with the image frame sequence generated for the earliest-appearing moving target in the video image frame sequence.
7. The device of claim 5, characterized in that the judging unit is specifically configured to:
judge, according to the positions and moments of first appearance determined by the position and moment determining unit, for each moving target appearing in the video image frame sequence other than the earliest-appearing moving target, whether that moving target overlaps with each moving target that appeared before it, based on the first image frame sequence in which this moving target is located and each second image frame sequence in which each previously appearing moving target is located, by performing an operation that compares, in corresponding order, each image frame in the first image frame sequence with the corresponding image frame in each second image frame sequence.
8. The device of claim 6, characterized in that the image synthesis rule comprises:
for overlapping moving targets, determining, according to their appearance order in the video image frame sequence, a first appearance moment for each overlapping moving target in the concentrated video image frame sequence, and synthesizing the image frame sequence of each overlapping moving target with the reference image frame sequence; and
for non-overlapping moving targets, determining, according to their appearance order in the video image frame sequence, a second appearance moment for each non-overlapping moving target in the concentrated video image frame sequence, and synthesizing the image frame sequence of each non-overlapping moving target with the reference image frame sequence;
wherein every determined first and second appearance moment satisfies the following: different moving targets do not overlap in the concentrated video image frame sequence, and the appearance order of the different moving targets in the concentrated video image frame sequence is consistent with their appearance order in the original video image frame sequence.
CN201210564152.5A 2012-12-21 2012-12-21 A kind of method for concentration of video image frame sequence and device Active CN103888768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210564152.5A CN103888768B (en) 2012-12-21 2012-12-21 A kind of method for concentration of video image frame sequence and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210564152.5A CN103888768B (en) 2012-12-21 2012-12-21 A kind of method for concentration of video image frame sequence and device

Publications (2)

Publication Number Publication Date
CN103888768A CN103888768A (en) 2014-06-25
CN103888768B true CN103888768B (en) 2016-02-10

Family

ID=50957458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210564152.5A Active CN103888768B (en) 2012-12-21 2012-12-21 A kind of method for concentration of video image frame sequence and device

Country Status (1)

Country Link
CN (1) CN103888768B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104284158B (en) * 2014-10-23 2018-09-14 南京信必达智能技术有限公司 Method applied to event-oriented intelligent monitoring camera
CN104394488B (en) * 2014-11-28 2018-08-17 苏州科达科技股份有限公司 A kind of generation method and system of video frequency abstract
CN104539890A (en) * 2014-12-18 2015-04-22 苏州阔地网络科技有限公司 Target tracking method and system
CN104683765B (en) * 2015-02-04 2019-04-12 上海依图网络科技有限公司 A kind of video concentration method based on detecting moving object
CN107018352A (en) * 2016-01-27 2017-08-04 北京中科晶上科技有限公司 A kind of method of video image processing, apparatus and system
CN105898343B (en) * 2016-04-07 2019-03-12 广州盈可视电子科技有限公司 A kind of net cast, terminal net cast method and apparatus
CN109165317B (en) * 2018-10-31 2019-08-06 杭州恒生数字设备科技有限公司 A kind of real time monitoring aspect indexing inquiry system


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101366027A (en) * 2005-11-15 2009-02-11 耶路撒冷希伯来大学伊森姆研究发展公司 Method and system for producing a video synopsis
CN101262568A (en) * 2008-04-21 2008-09-10 中国科学院计算技术研究所 A method and system for generating video outline
JP2010134923A (en) * 2008-11-21 2010-06-17 Yissum Research Development Co Of The Hebrew Univ Of Jerusalem Method and system for producing video synopsis using clustering
CN102222104A (en) * 2011-06-23 2011-10-19 华南理工大学 Method for intelligently extracting video abstract based on time-space fusion

Also Published As

Publication number Publication date
CN103888768A (en) 2014-06-25

Similar Documents

Publication Publication Date Title
CN103888768B (en) A kind of method for concentration of video image frame sequence and device
CN110536151B (en) Virtual gift special effect synthesis method and device and live broadcast system
US20180018944A1 (en) Automated object selection and placement for augmented reality
CN105100688B (en) Image processing method, image processing apparatus and monitoring system
CN103024447B (en) A kind of many videos mobile terminal editing high in the clouds synthetic method shooting in the same time and place and server
CN106648391B (en) Handwriting synchronization method, device and system
US20040015740A1 (en) System and method for asynchronous storage and playback of a system state
CN105306899B (en) Method and device for processing monitoring video
CN107801093B (en) Video rendering method and device, computer equipment and readable storage medium
US8704853B2 (en) Modifying graphical paths
KR20080051015A (en) Depth estimation apparatus for depth consistency between frames and its method
RU2729601C1 (en) Method and system for visualizing object in virtual form
CN111476875B (en) Smart building Internet of things object simulation method and building cloud server
CN113946640A (en) Action track display method, device, terminal and storage medium
US9202286B2 (en) Image processing apparatus, computer-readable medium storing an image processing program, and image processing method
CN108520347A (en) Panoptic visualization grid voltage quality methods of exhibiting, system and computer equipment
CN111654747B (en) Bullet screen display method and device
EP3723365A1 (en) Image processing apparatus, system that generates virtual viewpoint video image, control method of image processing apparatus and storage medium
CN104182959A (en) Target searching method and target searching device
CN109784238A (en) A kind of method and device of determining object to be identified
US8611647B2 (en) Method for generating background image and system for generating background image
CN104754248A (en) Method and device for acquiring target snapshot
CN110324661B (en) Method, device and equipment for generating video abstract under mobile shooting and storage medium
EP3876543A1 (en) Video playback method and apparatus
JP4546810B2 (en) Trajectory-added video generation apparatus and trajectory-added video generation program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant