CN102231820A - Monitoring image processing method, device and system - Google Patents

Monitoring image processing method, device and system

Info

Publication number
CN102231820A
CN102231820A, CN102231820B (application CN201110159214XA)
Authority
CN
China
Prior art keywords
video
background
module
threshold value
predetermined threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110159214XA
Other languages
Chinese (zh)
Other versions
CN102231820B (en)
Inventor
谢佳亮
张丛喆
刘威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Original Assignee
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD filed Critical GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Priority to CN201110159214.XA priority Critical patent/CN102231820B/en
Publication of CN102231820A publication Critical patent/CN102231820A/en
Application granted granted Critical
Publication of CN102231820B publication Critical patent/CN102231820B/en
Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a monitoring image processing method, which comprises the following steps: acquiring an original video; processing the original video, fusing the videos of a preset threshold number of different time periods into the same time period; and outputting the processed video. The invention also discloses a monitoring image processing device, which comprises a segmentation module, a background extraction module, a description module, a data storage module and a fusion module. The invention further provides a monitoring image processing system, which comprises an input unit, an image processing device and an output unit. The monitoring image processing method, device and system provided by the invention can effectively improve video browsing efficiency.

Description

Monitoring image processing method, device and system
Technical field
The present invention relates to the technical field of video surveillance, and in particular to a method, a device and a system for monitoring image retrieval.
Background technology
Video surveillance is the physical basis for real-time monitoring of key departments and important sites in every industry. Through it, administrative departments can obtain valid data, images or audio information, and can monitor and record the course of sudden abnormal events in a timely manner, so as to provide efficient and timely command, deployment of police forces, case solving and so on. With the rapid development and popularization of computer applications, a wave of digitization has swept the world, and digitizing all kinds of equipment has become the primary goal of the security industry. Digital surveillance and alarm systems are characterized by real-time display of monitored images, per-channel adjustment of recording image quality, individually configurable recording speed for each channel, fast retrieval, multiple configurable recording modes, automatic backup, pan/tilt and lens control, network transmission and so on.
The video data recorded by a video surveillance system is of great importance for reviewing or querying certain events afterwards. In practice, however, it is quite common that the camera position and the background scene are fixed and moving targets are relatively rare. Reviewing the recorded video then requires a great deal of time; its long and tedious course is dull and labor-intensive, which makes it difficult to find targets of interest in massive video data.
Summary of the invention
The object of the present invention is to provide a method, a device and a system for monitoring image processing, which solve the prior-art problem that reviewing recorded video requires a great deal of time and effort.
To achieve the object of the invention, a monitoring image processing method is provided, comprising the steps of:
A. obtaining an original video;
B. processing the original video, fusing the videos of a first predetermined threshold number of different time periods into the same time period;
C. outputting the processed video.
Preferably, said step B comprises the steps of:
B1. segmenting the original video in chronological order;
B2. extracting the background of each video segment;
B3. describing the trajectories of the moving objects of each video segment;
B4. establishing metadata for said background and said moving-object trajectories of each video segment;
B5. fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
Preferably, said step B5 comprises:
B51. setting the first predetermined threshold;
B52. superimposing the same background onto each corresponding frame of the moving objects of the first predetermined threshold number of video segments.
Preferably, after said step B2 and before step B3, the method further comprises the step of:
B6. rejecting video segments that contain no moving object.
Preferably, said step B6 comprises:
B61. subtracting the background from each frame to obtain a difference;
B62. rejecting the frame if said difference is less than a second predetermined threshold.
To achieve the object of the invention, a monitoring image processing device is also provided, comprising:
a segmentation module, for segmenting the original video in chronological order;
a background extraction module, for extracting the background of each video segment;
a description module, for describing the trajectories of the moving objects of each video segment;
a data storage module, for establishing metadata for said background and said moving-object trajectories of each video segment;
a fusion module, for fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
To achieve the object of the invention, a monitoring image processing system is also provided, comprising:
an input unit, for obtaining an original video;
an image processing device, for processing the original video and fusing the videos of a first predetermined threshold number of different time periods into the same time period;
an output unit, for outputting the processed video.
Preferably, said image processing device comprises:
a segmentation module, for segmenting the original video in chronological order;
a background extraction module, for extracting the background of each video segment;
a description module, for describing the trajectories of the moving objects of each video segment;
a data storage module, for establishing metadata for said background and said moving-object trajectories of each video segment;
a fusion module, for fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
Preferably, said output unit comprises:
an original video playback window, for controlling the playback speed, with stepping, rewind and frame alignment functions;
a fused video playback window, for controlling the playback speed and for setting the trajectory display of the fused objects, the density of the fused objects and the color marking of the fused objects;
a metadata visualization window, for establishing a tree-like or relational visual representation of the metadata, to facilitate querying and enhanced display.
Preferably, said input unit comprises:
a video camera, for recording images;
transmission equipment, for transmitting images;
an image reading module, for establishing a project file of the original video and reading the original video.
The beneficial effect of the invention is as follows: by fusing the moving targets of multiple time periods onto the same fixed background, the monitoring image processing method, device and system of the present invention make it possible, while keeping the content unchanged, to browse the video and locate targets effectively and quickly, realizing fast browsing of long video segments.
Description of drawings
Fig. 1 is a flowchart of the monitoring image processing method of the present invention;
Fig. 2 is a flowchart of step B of the monitoring image processing method of the present invention;
Fig. 3a is a schematic diagram of video fusion in the monitoring image processing method of the present invention;
Fig. 3b is a schematic diagram of video fusion in the monitoring image processing method of the present invention;
Fig. 3c is a schematic diagram of video fusion in the monitoring image processing method of the present invention;
Fig. 4 is a structural schematic diagram of the monitoring image processing device of the present invention;
Fig. 5 is a structural schematic diagram of the monitoring image processing system of the present invention;
Fig. 6 is a schematic diagram of the output unit of the monitoring image processing system of the present invention;
Fig. 7 is a first application scenario of the monitoring image processing system of the present invention;
Fig. 8 is a second application scenario of the monitoring image processing system of the present invention;
Fig. 9 is a third application scenario of the monitoring image processing system of the present invention.
Embodiment
In order to make the purpose, technical solution and advantages of the present invention clearer, the monitoring image processing method, device and system of the present invention are further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
In practice, it is quite common that the camera position and the background scene are fixed and moving targets are relatively rare. In this case, the corresponding video appears as the combination of a relatively fixed background and sparse moving objects. For this kind of video, one can therefore try to fuse the moving targets of multiple time periods onto the same fixed background in order to shorten the video duration. Because the content the user is interested in is concentrated in specific moving objects within relatively fixed time periods of the scene, this fusion method makes it possible, while keeping the content unchanged, to browse the video and locate targets effectively and quickly.
The present invention provides a monitoring image processing method which, as shown in Fig. 1, comprises the following steps:
A. obtaining an original video.
B. processing the original video, fusing the videos of a first predetermined threshold number of different time periods into the same time period.
C. outputting the processed video.
As shown in Fig. 2, step B comprises:
B1. segmenting the original video in chronological order.
Using shot detection, the original video is cut into several short time segments, allowing overlapping video where necessary. Each segment should cover the process of one moving object from entering the scene to leaving it, and each segment contains a stable background. Shot detection is prior art; see Hong-Jiang Zhang, Atreyi Kankanhalli, Stephen W. Smoliar, "Automatic Partitioning of Full-motion Video", Multimedia Systems, 1(1): 10-28, 1993, which is not repeated here.
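As a rough illustration only (not the shot-detection algorithm of the cited reference), the following Python sketch splits a fixed-camera video into segments by noting when inter-frame motion appears and disappears; the OpenCV-based frame differencing, the function names and the threshold values are assumptions added here for clarity.

import cv2

def motion_present(frame, prev_frame, diff_thresh=25, min_pixels=500):
    # True if enough pixels changed between two consecutive frames.
    g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    changed = cv2.absdiff(g1, g0) > diff_thresh
    return int(changed.sum()) >= min_pixels

def split_into_segments(path):
    # Yield (start_frame, end_frame) pairs, each bracketing one burst of motion,
    # i.e. one moving object entering and then leaving the fixed scene.
    cap = cv2.VideoCapture(path)
    ok, prev = cap.read()
    idx, start = 0, None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1
        if motion_present(frame, prev):
            if start is None:
                start = idx              # a moving object has entered the scene
        elif start is not None:
            yield (start, idx - 1)       # the object has left; close this segment
            start = None
        prev = frame
    if start is not None:
        yield (start, idx)
    cap.release()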
B2. extracting the background of each video segment.
The background extraction process can be described by the following formula:
B_n(i,j) = θ·I_n(i,j) + (1 − θ)·B_(n−1)(i,j),  n > 1
where B_n denotes the reference background; I_n denotes the instant background of the current frame to be processed, which reflects the degree of change of the current background; θ is a coefficient between 0 and 1, which controls the updating speed of the background and the sensitivity of the background update to image changes; n (n > 1) and n−1 denote the frame number of the current frame to be processed and of the preceding adjacent frame, respectively. I_n can be described by the following formula:
I_n(i,j) = f_n(i,j),      if |f_n(i,j) − C_(n−1)(i,j)| ≤ T;
I_n(i,j) = C_(n−1)(i,j),  if |f_n(i,j) − C_(n−1)(i,j)| > T.
According to the formula above, if the difference between the current frame f_n and the reference background C_(n−1) of the preceding adjacent frame is greater than a given threshold T, the corresponding pixel is judged to belong to the active region of the current frame, and the corresponding pixel value of the reference background of the preceding adjacent frame is used as the value of I_n. If the difference is less than or equal to T, the pixel can be considered to belong to a non-moving region, and the pixel at the corresponding position of the current frame is used directly as the value of I_n. In this specification, the given threshold T is defined as the third predetermined threshold.
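The update above can be sketched in a few lines of NumPy, assuming grayscale frames stored as floating-point arrays and letting the previous reference background play the role of C_(n−1); the function name and the default values of θ and T are illustrative assumptions, not values fixed by the specification.

import numpy as np

def update_background(B_prev, f_n, theta=0.05, T=30.0):
    # One step of the update described above, for float grayscale images:
    #   I_n = f_n      where |f_n - reference background| <= T (non-moving pixels)
    #   I_n = B_prev   where |f_n - reference background| >  T (active region)
    #   B_n = theta * I_n + (1 - theta) * B_(n-1)
    # T is the third predetermined threshold; theta lies between 0 and 1.
    moving = np.abs(f_n - B_prev) > T
    I_n = np.where(moving, B_prev, f_n)
    B_n = theta * I_n + (1.0 - theta) * B_prev
    return B_n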
The main purpose of the background extraction technique is that the background extracted by the system can be updated gradually along the time axis of the video sequence, so as to reduce or even eliminate the influence of changes in the external environment (illumination, weather, etc.) on the extraction of foreground moving targets.
Preferably, a stable background should be computed for each video segment. Following the background extraction method, the backgrounds of the video segments are averaged; the next identified background is compared with this mean by differencing, and if the difference stays within a certain range, the background is considered relatively stable and is taken as the stable background of the segment.
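A minimal sketch of this stability check might look as follows, assuming the per-segment backgrounds are floating-point arrays; the tolerance value is an assumption, since the specification only requires the difference to stay within a certain range.

import numpy as np

def is_stable_background(previous_backgrounds, candidate, tolerance=10.0):
    # Average the backgrounds extracted so far, difference the newly identified
    # background against that mean, and accept it as the stable background of
    # the segment if the mean absolute difference stays within the tolerance.
    mean_bg = np.mean(np.stack(previous_backgrounds), axis=0)
    return float(np.mean(np.abs(candidate - mean_bg))) <= tolerance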
Preferably, video segments that contain no moving object should be rejected. Based on the result of background identification, the background is subtracted from each frame to obtain the blocks of pixels that do not belong to the background; these pixel blocks make up the moving target, i.e. the moving object. If fewer pixels than a threshold m are obtained, the frame is considered to contain no moving object and is rejected. In this specification, the threshold m is defined as the second predetermined threshold.
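A hedged sketch of this rejection test, under the assumption that the frame and the background are single-channel arrays; the per-pixel difference threshold and the default value of m are placeholders, with m standing for the second predetermined threshold described above.

import numpy as np

def has_moving_object(frame, background, pixel_diff_thresh=30.0, m=200):
    # Subtract the background from the frame; the differing pixels form the
    # moving object. If fewer than m pixels remain (the second predetermined
    # threshold), the frame contains no moving object and can be rejected.
    foreground = np.abs(frame.astype(float) - background.astype(float)) > pixel_diff_thresh
    return int(foreground.sum()) >= m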
B3. describing the trajectories of the moving objects of each video segment.
The trajectory of a moving object comprises the start and end time of each object's motion, the end points of the motion path and the key frames of the motion process. The method for obtaining a moving object is similar to the method for rejecting frames without moving objects; both are computed by frame subtraction. First, a minimum motion unit is set in the algorithm; for example, 0.02 second (i.e. a frame rate of 50 frames per second) is defined as the minimum motion time unit of a moving object, and the video frames within this time period are counted as one moving object. The algorithm starts detection from frame I: if frame I contains a moving object, frame I is taken as the starting point of the motion path, and frame I+1 is then examined; if frame I+1 also contains a moving target, frame I+2 is examined, and so on. If frame I+n is found to contain no moving object, frame I+n−1 is taken as the end point of this motion trajectory, and frames I through I+n−1 are defined as one motion trajectory.
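The frame-scanning logic of this paragraph can be sketched as follows; the trajectory is represented simply as a pair of frame indices, and the has_motion callback is assumed to be the frame-minus-background test sketched above.

def extract_trajectories(frames, background, has_motion):
    # Scan a segment frame by frame and return (start, end) index pairs, one per
    # motion trajectory: the trajectory starts at the first frame I that contains
    # a moving object and ends at frame I+n-1, the last frame before motion
    # disappears.
    trajectories, start = [], None
    for i, frame in enumerate(frames):
        if has_motion(frame, background):
            if start is None:
                start = i
        elif start is not None:
            trajectories.append((start, i - 1))
            start = None
    if start is not None:
        trajectories.append((start, len(frames) - 1))
    return trajectories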
B4. establishing metadata for said background and said moving-object trajectories of each video segment.
The position of each video segment in the original video, the frame positions of each moving object and other annotation information are recorded, and a metadata table (Metadata) is established.
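As an illustration of what one row of such a metadata table might hold, a simple Python record is sketched below; the field names are assumptions, since the specification only names the segment's position in the original video and the frame positions of each moving object as examples of the annotation information.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SegmentMetadata:
    # One record of the metadata table: where the segment sits in the original
    # video, its stored background, and the frame positions of each moving object.
    segment_id: int
    start_frame_in_source: int
    end_frame_in_source: int
    background_path: str
    object_frame_ranges: List[Tuple[int, int]] = field(default_factory=list)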
Since the surveillance industry adopts the MPEG4/H.264 standard, which supports object-based coding, a video scene can be handled as separate foreground and background objects rather than as a series of complete rectangular frames. Storing the analyzed video according to this principle, in combination with the metadata, facilitates the reuse and fusion of scene objects and large-scale interaction after analysis.
B5. fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
Fusion can be understood intuitively as the superposition of moving objects from different moments into the same frame. As shown in Fig. 3a, Fig. 3b and Fig. 3c, the rectangle represents the fixed background in the video and the arrow represents the motion trajectory of one moving object: Fig. 3a is the period 9:00-9:20, Fig. 3b is the period 10:00-10:15, and Fig. 3c shows the two objects fused into the period 9:00-9:20. In this way, two segments of original video can be browsed in roughly half the time.
The program stores all motion trajectories in the metadata objects, and a segment fusion threshold is defined in the algorithm, referred to herein as the first predetermined threshold. If the first predetermined threshold is set to 3, for example, a fused video composed of 3 motion trajectories is always built: according to the order of occurrence of the motion trajectories, the first to the third motion trajectories are superimposed onto the background simultaneously.
The concrete steps are as follows. First, the video is analyzed, the starting frame numbers of the motion segments are obtained and saved in a file; the motion segments are analyzed and the positions of the moving objects in the images are recorded. Then the superposition is carried out: when the number of fused motion trajectories is 3, three motion trajectories are taken in turn and superimposed, i.e. the first frames of the 1st, 2nd and 3rd motion trajectories are superimposed, then the second frames of the 1st, 2nd and 3rd motion trajectories are superimposed, and so on; when one motion segment has been fully superimposed, the next motion trajectory is taken to replace it.
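The superposition loop described above can be sketched as follows; the representation of a trajectory as a per-frame list of (mask, pixels) pairs is an assumption made for illustration, and the default k = 3 mirrors the example in which the first predetermined threshold is set to 3.

import numpy as np

def fuse_trajectories(background, trajectories, k=3):
    # Superimpose up to k motion trajectories (k = the first predetermined
    # threshold) onto one copy of the background, frame by frame. Each trajectory
    # is a list of (mask, pixels) pairs: a boolean foreground mask and the image
    # holding that object's pixels for one frame. When a trajectory runs out of
    # frames, the next trajectory (in order of occurrence) takes its place.
    pending = list(trajectories)
    active = [pending.pop(0) for _ in range(min(k, len(pending)))]
    cursors = [0] * len(active)
    fused_frames = []
    while any(t is not None for t in active):
        canvas = background.copy()
        for slot in range(len(active)):
            if active[slot] is None:
                continue
            mask, pixels = active[slot][cursors[slot]]
            canvas[mask] = pixels[mask]        # paste this object onto the common background
            cursors[slot] += 1
            if cursors[slot] >= len(active[slot]):
                active[slot] = pending.pop(0) if pending else None
                cursors[slot] = 0
        fused_frames.append(canvas)
    return fused_frames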
The definition of the first predetermined threshold is open and can be decided by those skilled in the art according to the actual situation. For example, if within a period of time the number of motion trajectories is small and the intervals between motion trajectories are long, the motion in the picture is considered relatively sparse, and the first predetermined threshold can be set larger. If within a period of time the number of motion trajectories is large and the time intervals between motion trajectories are short, the first predetermined threshold can be set smaller.
Different motion trajectories may come from different time periods. When their original backgrounds differ, the backgrounds corresponding to the different trajectories are computed during processing and averaged once more across the segments, so as to obtain one background frame that suits the fused time period and suits all the different motion trajectories.
During superposition, the relationship between the density of the picture and the number of fused objects must be handled well: an over-dense picture shortens the browsing time but causes visual confusion, while an over-sparse picture benefits the layout but does not shorten the browsing time effectively. The relationship between the spatially invariant background and dynamic factors must also be handled well: the problem is restricted to a background that is spatially constant, but illumination, weather conditions and the like can still cause differences in the background, so fusion needs to be carried out on a stable background.
The present invention also provides a monitoring image processing device which, as shown in Fig. 4, comprises: a segmentation module, for segmenting the original video in chronological order; a background extraction module, for extracting the background of each video segment; a description module, for describing the trajectories of the moving objects of each video segment; a data storage module, for establishing metadata for said background and said moving-object trajectories of each video segment; and a fusion module, for fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
The present invention also provides a monitoring image processing system which, as shown in Fig. 5, comprises: an input unit, for obtaining an original video; an image processing device, for processing the original video and fusing the videos of a first predetermined threshold number of different time periods into the same time period; and an output unit, for outputting the processed video.
The image processing device comprises the segmentation module, background extraction module, description module, data storage module and fusion module described above.
As shown in Fig. 6, the output unit comprises: an original video playback window, for controlling the playback speed, with stepping, rewind and frame alignment functions; a fused video playback window, for controlling the playback speed and for setting the trajectory display of the fused objects, the density of the fused objects and the color marking of the fused objects; and a metadata visualization window, for establishing a tree-like or relational visual representation of the metadata, to facilitate querying and enhanced display.
Both the original video playback window and the fused video playback window can control the playback speed and provide functions such as stepping, rewind and frame alignment. The fused video playback window can set the trajectory display of the fused objects, the density of the fused objects, the color marking of the fused objects and so on, and supports mouse events on the moving objects; most of the interaction with the user takes place in this window.
The three windows can interact with each other in pairs to realize synchronous content updating, although active interaction in the original video playback window is rare.
Preferably, said output unit further comprises a storage unit, for storing the analysis results of the original video, including the metadata and the results of motion tracking and identification, and for storing the targets of interest found by the user through interactive browsing and querying.
Preferably, the output unit further comprises an editing unit, for editing and printing browsing reports.
The input unit comprises: a video camera, for recording images; transmission equipment, for transmitting images; and an image reading module, for establishing a project file of the original video and reading the original video.
Fig. 7, Fig. 8 and Fig. 9 show three application scenarios of the monitoring image processing system of the present invention.
Fig. 7 shows digital hard-disk video recorder (DVR) video access with the server running independently. The advantage of this scenario is that the intelligent video synopsis server can run on its own; the disadvantage is that it connects to the DVR over the network, the device types are numerous and complicated, and compatibility has to be considered.
Fig. 8 shows digital signal access with the server docking to a video surveillance networking platform. The advantage of this scenario is that the intelligent video synopsis server can interface with Safe City and Global Eye platforms, so the market potential is huge. The disadvantage is that it cannot run independently: the task scheduling of the server is commanded by the platform, the server stays in a listening state, and the results cannot be viewed on the local machine but must be fed back to the platform for viewing.
Fig. 9 shows video analog signal access with the server running independently. The advantage of this scenario is that the intelligent video synopsis server operates in isolation, recording video and producing the video synopsis by itself. The disadvantage is that it cannot meet the requirements of large-scale video surveillance platforms.
Video processed by the method provided by the present invention can be played back in the following way:
1. The device is initialized directly when the main interface starts.
2. After entering the playback interface, the device and the relevant channel to be played back are selected first.
3. After the device and channel are selected, if a corresponding synopsis file exists for a certain day of the month, that day can be clicked in the time control below; if there is no corresponding synopsis file, it cannot be clicked.
4. After clicking, if the file is not missing, playback of the synopsis video file begins.
5. To play back a certain point in time within the synopsis file, the video file is first paused; a circle representing motion in that time period can then be drawn on the video, and the circle is double-clicked after being selected.
6. After double-clicking, the XML (Extensible Markup Language) summary configuration file (e.g. 1011-1059.xml) is used to find which moving object of which frame it is, and the frame number of that object is then obtained. Then, according to the segmented video file, the range number of the frame is obtained, the time offset in the original video corresponding to that range is obtained from the MAP (map) file, and the concrete playback time period is obtained from the time offset and the FILEDATA (file data) file.
7. The relevant device is logged in to.
8. The device SDK (Software Development Kit) is called, and the computed time is passed in to start playback.
9. Playback controls, such as pause, can be used.
10. When it is no longer necessary to watch the playback, clicking stop ends the playback.
The monitoring image processing method, device and system provided by the present invention make it possible to browse more video in a shorter time, on the premise that no video information is lost.
Finally, it should be noted that those skilled in the art can obviously make various changes and modifications to the present invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.

Claims (10)

1. A monitoring image processing method, characterized in that it comprises the steps of:
A. obtaining an original video;
B. processing the original video, fusing the videos of a first predetermined threshold number of different time periods into the same time period;
C. outputting the processed video.
2. The method according to claim 1, characterized in that said step B comprises the steps of:
B1. segmenting the original video in chronological order;
B2. extracting the background of each video segment;
B3. describing the trajectories of the moving objects of each video segment;
B4. establishing metadata for said background and said moving-object trajectories of each video segment;
B5. fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
3. The method according to claim 2, characterized in that said step B5 comprises:
B51. setting the first predetermined threshold;
B52. superimposing the same background onto each corresponding frame of the moving objects of the first predetermined threshold number of video segments.
4. The method according to claim 2, characterized in that, after said step B2 and before step B3, it further comprises the step of:
B6. rejecting video segments that contain no moving object.
5. The method according to claim 4, characterized in that said step B6 comprises:
B61. subtracting the background from each frame to obtain a difference;
B62. rejecting the frame if said difference is less than a second predetermined threshold.
6. A monitoring image processing device, characterized in that it comprises:
a segmentation module, for segmenting the original video in chronological order;
a background extraction module, for extracting the background of each video segment;
a description module, for describing the trajectories of the moving objects of each video segment;
a data storage module, for establishing metadata for said background and said moving-object trajectories of each video segment;
a fusion module, for fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
7. A monitoring image processing system, characterized in that it comprises:
an input unit, for obtaining an original video;
an image processing device, for processing the original video and fusing the videos of a first predetermined threshold number of different time periods into the same time period;
an output unit, for outputting the processed video.
8. The system according to claim 7, characterized in that said image processing device comprises:
a segmentation module, for segmenting the original video in chronological order;
a background extraction module, for extracting the background of each video segment;
a description module, for describing the trajectories of the moving objects of each video segment;
a data storage module, for establishing metadata for said background and said moving-object trajectories of each video segment;
a fusion module, for fusing the motion trajectories of a first predetermined threshold number of different video segments with the same background.
9. The system according to claim 7, characterized in that said output unit comprises:
an original video playback window, for controlling the playback speed, with stepping, rewind and frame alignment functions;
a fused video playback window, for controlling the playback speed and for setting the trajectory display of the fused objects, the density of the fused objects and the color marking of the fused objects;
a metadata visualization window, for establishing a tree-like or relational visual representation of the metadata, to facilitate querying and enhanced display.
10. The system according to claim 7, characterized in that said input unit comprises:
a video camera, for recording images;
transmission equipment, for transmitting images;
an image reading module, for establishing a project file of the original video and reading the original video.
CN201110159214.XA 2011-06-14 2011-06-14 Monitoring image processing method, device and system Expired - Fee Related CN102231820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110159214.XA CN102231820B (en) 2011-06-14 2011-06-14 Monitoring image processing method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110159214.XA CN102231820B (en) 2011-06-14 2011-06-14 Monitoring image processing method, device and system

Publications (2)

Publication Number Publication Date
CN102231820A true CN102231820A (en) 2011-11-02
CN102231820B CN102231820B (en) 2014-08-06

Family

ID=44844347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110159214.XA Expired - Fee Related CN102231820B (en) 2011-06-14 2011-06-14 Monitoring image processing method, device and system

Country Status (1)

Country Link
CN (1) CN102231820B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930541A (en) * 2012-10-29 2013-02-13 深圳市开天源自动化工程有限公司 Background extracting and updating method of video images
CN103187083A (en) * 2011-12-29 2013-07-03 深圳中兴力维技术有限公司 Storage method and system based on time domain video fusion
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing
CN103347167A (en) * 2013-06-20 2013-10-09 上海交通大学 Surveillance video content description method based on fragments
CN104751164A (en) * 2013-12-30 2015-07-01 鸿富锦精密工业(武汉)有限公司 Method and system for capturing movement trajectory of object
CN104811797A (en) * 2015-04-15 2015-07-29 广东欧珀移动通信有限公司 Video processing method and mobile terminal
CN105336347A (en) * 2014-06-26 2016-02-17 杭州海康威视数字技术股份有限公司 Video file browsing method and device
CN105872540A (en) * 2016-04-26 2016-08-17 乐视控股(北京)有限公司 Video processing method and device
CN106504270A (en) * 2016-11-08 2017-03-15 浙江大华技术股份有限公司 The methods of exhibiting and device of target object in a kind of video
CN108012202A (en) * 2017-12-15 2018-05-08 浙江大华技术股份有限公司 Video concentration method, equipment, computer-readable recording medium and computer installation
CN110706193A (en) * 2018-06-21 2020-01-17 北京京东尚科信息技术有限公司 Image processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744695A (en) * 2005-05-13 2006-03-08 圆刚科技股份有限公司 Method for searching picture of monitoring system
CN1761319A (en) * 2004-10-12 2006-04-19 国际商业机器公司 Video analysis, archiving and alerting methods and apparatus for a video surveillance system
CN101500090A (en) * 2008-01-30 2009-08-05 索尼株式会社 Image processing apparatus, image processing method, and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1761319A (en) * 2004-10-12 2006-04-19 国际商业机器公司 Video analysis, archiving and alerting methods and apparatus for a video surveillance system
CN1744695A (en) * 2005-05-13 2006-03-08 圆刚科技股份有限公司 Method for searching picture of monitoring system
CN101500090A (en) * 2008-01-30 2009-08-05 索尼株式会社 Image processing apparatus, image processing method, and program

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ALEX RAV-ACHA ET AL.: "Making a Long Video Short: Dynamic Video Synopsis", 《PROCEEDINGS OF THE 2006 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR’06)》 *
MICHAL IRANI ET AL.: "Efficient representations of video sequences and their applications", 《SIGNAL PROCESSING: IMAGE COMMUNICATION》 *
ZHONG JI ET AL.: "Surveillance video summarization based on moving object detection and trajectory extraction", 《2010 2ND INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING SYSTEMS (ICSPS)》 *
JIANG CHAO: "Research on Moving Object Extraction and Tracking Technology in Video Sequences", Sciencepaper Online, HTTP://WWW.PAPER.EDU.CN/RELEASEPAPER/CONTENT/200911-291 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103187083A (en) * 2011-12-29 2013-07-03 深圳中兴力维技术有限公司 Storage method and system based on time domain video fusion
CN103187083B (en) * 2011-12-29 2016-04-13 深圳中兴力维技术有限公司 A kind of storage means based on time domain video fusion and system thereof
CN102930541B (en) * 2012-10-29 2015-06-24 深圳市开天源自动化工程有限公司 Background extracting and updating method of video images
CN102930541A (en) * 2012-10-29 2013-02-13 深圳市开天源自动化工程有限公司 Background extracting and updating method of video images
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing
CN103347167A (en) * 2013-06-20 2013-10-09 上海交通大学 Surveillance video content description method based on fragments
CN104751164A (en) * 2013-12-30 2015-07-01 鸿富锦精密工业(武汉)有限公司 Method and system for capturing movement trajectory of object
CN105336347B (en) * 2014-06-26 2018-04-20 杭州萤石网络有限公司 A kind of video file browsing method and device
CN105336347A (en) * 2014-06-26 2016-02-17 杭州海康威视数字技术股份有限公司 Video file browsing method and device
CN104811797A (en) * 2015-04-15 2015-07-29 广东欧珀移动通信有限公司 Video processing method and mobile terminal
CN104811797B (en) * 2015-04-15 2017-09-29 广东欧珀移动通信有限公司 The method and mobile terminal of a kind of Video processing
CN105872540A (en) * 2016-04-26 2016-08-17 乐视控股(北京)有限公司 Video processing method and device
CN106504270A (en) * 2016-11-08 2017-03-15 浙江大华技术股份有限公司 The methods of exhibiting and device of target object in a kind of video
CN106504270B (en) * 2016-11-08 2019-12-20 浙江大华技术股份有限公司 Method and device for displaying target object in video
US11049262B2 (en) 2016-11-08 2021-06-29 Zhejiang Dahua Technology Co., Ltd. Methods and systems for data visualization
US11568548B2 (en) 2016-11-08 2023-01-31 Zhejiang Dahua Technology Co., Ltd. Methods and systems for data visualization
CN108012202A (en) * 2017-12-15 2018-05-08 浙江大华技术股份有限公司 Video concentration method, equipment, computer-readable recording medium and computer installation
WO2019114835A1 (en) * 2017-12-15 2019-06-20 Zhejiang Dahua Technology Co., Ltd. Methods and systems for generating video synopsis
US11076132B2 (en) 2017-12-15 2021-07-27 Zhejiang Dahua Technology Co., Ltd. Methods and systems for generating video synopsis
CN110706193A (en) * 2018-06-21 2020-01-17 北京京东尚科信息技术有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN102231820B (en) 2014-08-06

Similar Documents

Publication Publication Date Title
CN102231820B (en) Monitoring image processing method, device and system
CN113709561B (en) Video editing method, device, equipment and storage medium
US10685460B2 (en) Method and apparatus for generating photo-story based on visual context analysis of digital content
EP3253042B1 (en) Intelligent processing method and system for video data
CN101420595B (en) Method and equipment for describing and capturing video object
CN101631237B (en) Video monitoring data storing and managing system
CN110914872A (en) Navigating video scenes with cognitive insights
CN103347167A (en) Surveillance video content description method based on fragments
CN102314916B (en) Video processing method and system
CN104581437A (en) Video abstract generation and video backtracking method and system
US11037604B2 (en) Method for video investigation
CN101689394A (en) The method and system that is used for video index and video summary
KR20090093904A (en) Apparatus and method for scene variation robust multimedia image analysis, and system for multimedia editing based on objects
CN104918060B (en) The selection method and device of point position are inserted in a kind of video ads
EP2735984A1 (en) Video query method, device and system
KR101484844B1 (en) Apparatus and method for privacy masking tool that provides real-time video
CN116188821A (en) Copyright detection method, system, electronic device and storage medium
Höferlin et al. Uncertainty-aware video visual analytics of tracked moving objects
Malon et al. Toulouse campus surveillance dataset: scenarios, soundtracks, synchronized videos with overlapping and disjoint views
KR20180087970A (en) apparatus and method for tracking image content context trend using dynamically generated metadata
CN103167265A (en) Video processing method and video processing system based on intelligent image identification
Ul Haq et al. An effective video summarization framework based on the object of interest using deep learning
CN105100715A (en) Video switching method and apparatus of monitoring device
KR20150026178A (en) Apparatus for Providing Video Synopsis Computer-Readable Recording Medium with Program therefore
CN106162222B (en) A kind of method and device of video lens cutting

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140806

Termination date: 20210614