CN102231820B - Monitoring image processing method, device and system - Google Patents

Monitoring image processing method, device and system

Info

Publication number
CN102231820B
CN102231820B (application CN201110159214.XA; also published as CN102231820A)
Authority
CN
China
Prior art keywords
background
video
frame
section
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110159214.XA
Other languages
Chinese (zh)
Other versions
CN102231820A (en)
Inventor
谢佳亮
张丛喆
刘威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Original Assignee
GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD filed Critical GUANGZHOU JIAQI INTELLIGENT TECHNOLOGY CO LTD
Priority to CN201110159214.XA priority Critical patent/CN102231820B/en
Publication of CN102231820A publication Critical patent/CN102231820A/en
Application granted granted Critical
Publication of CN102231820B publication Critical patent/CN102231820B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a monitoring image processing method comprising the following steps: acquiring an original video; processing the original video so that videos from a preset-threshold number of different time periods are fused into the same time period; and outputting the processed video. The invention also discloses a monitoring image processing device comprising a segmentation module, a background extraction module, a description module, a data storage module and a fusion module, and a monitoring image processing system comprising an input unit, an image processing device and an output unit. The monitoring image processing method, device and system provided by the invention can effectively improve video browsing efficiency.

Description

Monitoring image processing method, device and system
Technical field
The present invention relates to the technical field of video monitoring, and in particular to a method, device and system for monitoring image retrieval.
Background technology
Video surveillance is the physical basis for real-time monitoring of key departments or important sites in every industry. Through it, administrative departments can obtain valid data, images or audio information, and monitor and record the course of sudden abnormal events in a timely manner, so as to provide efficient and timely command, deploy police forces, solve cases, and so on. With the rapid development and popularization of computer applications, a wave of digitization has swept the world, and digitizing all kinds of equipment has become a primary goal of the security industry. Digital monitoring and alarm systems are characterized by real-time display of monitored pictures, per-channel adjustment of recording image quality, independently configurable recording speed for each channel, quick retrieval, multiple recording-mode configuration, automatic backup, pan-tilt/lens control, network transmission, and so on.
The video data recorded by a video surveillance system is of great significance for reviewing or querying certain events afterwards. In practice, however, it is quite common that the spatial position of the camera and the background scene are fixed while moving targets are relatively rare. Searching and playing back such video requires a great deal of time; the tedious process is dull and labor-intensive, and makes it difficult to find targets of interest in massive video data.
Summary of the invention
The object of the present invention is to provide a method, device and system for monitoring image processing, which solve the prior-art problem that video playback requires a great deal of time and effort.
To achieve the object of the invention, a monitoring image processing method is provided, comprising the steps of:
A. obtaining an original video;
B. processing the original video, fusing the video from a first-predetermined-threshold number of different time periods into the same time period;
C. outputting the processed video.
Preferably, step B comprises the steps of:
B1. segmenting the original video in time order;
B2. extracting the background of each video segment;
B3. describing the trajectory of the moving objects in each video segment;
B4. establishing metadata for the background and the moving-object trajectories of each video segment;
B5. fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
Preferably, step B5 comprises:
B51. setting the first predetermined threshold;
B52. superimposing each frame of the moving objects of the first-predetermined-threshold number of video segments onto the same background.
Preferably, after step B2 and before step B3, the method further comprises the step of:
B6. rejecting video segments that contain no moving objects.
Preferably, step B6 comprises:
B61. subtracting the background from each frame to obtain a difference;
B62. rejecting the frame if the difference is less than a second predetermined threshold.
To achieve the object of the invention, a monitoring image processing device is also provided, comprising:
a segmentation module for segmenting the original video in time order;
a background extraction module for extracting the background of each video segment;
a description module for describing the trajectory of the moving objects in each video segment;
a data storage module for establishing metadata for the background and the moving-object trajectories of each video segment;
a fusion module for fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
To achieve the object of the invention, a monitoring image processing system is also provided, comprising:
an input unit for obtaining an original video;
an image processing device for processing the original video, fusing the video from a first-predetermined-threshold number of different time periods into the same time period;
an output unit for outputting the processed video.
Preferably, the image processing device comprises:
a segmentation module for segmenting the original video in time order;
a background extraction module for extracting the background of each video segment;
a description module for describing the trajectory of the moving objects in each video segment;
a data storage module for establishing metadata for the background and the moving-object trajectories of each video segment;
a fusion module for fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
Preferably, the output unit comprises:
an original-video playback window for controlling the playback rate, with stepping, rollback and frame-alignment functions;
a fused-video playback window for controlling the playback rate and for setting the trajectory display, density and color marking of the fused objects;
a metadata visualization window for establishing a tree-like or relational visual representation of the metadata, facilitating query and enhanced display.
Preferably, the input unit comprises:
a camera for recording images;
transmission equipment for transmitting images;
an image reading module for establishing a project file of the original video and reading the original video.
The beneficial effects of the invention are as follows: by fusing the moving targets of multiple time periods into the same fixed background, the method, device and system for monitoring image processing of the present invention make it possible to browse video and locate targets quickly and effectively while keeping the content unchanged, enabling fast browsing of long periods of video.
Brief description of the drawings
Fig. 1 is a flow chart of the monitoring image processing method of the present invention;
Fig. 2 is a flow chart of step B of the method;
Fig. 3a is a schematic diagram of video fusion in the method;
Fig. 3b is a schematic diagram of video fusion in the method;
Fig. 3c is a schematic diagram of video fusion in the method;
Fig. 4 is a structural diagram of the monitoring image processing device of the present invention;
Fig. 5 is a structural diagram of the monitoring image processing system of the present invention;
Fig. 6 is a schematic diagram of the output unit of the system;
Fig. 7 shows a first application scenario of the system;
Fig. 8 shows a second application scenario of the system;
Fig. 9 shows a third application scenario of the system.
Embodiment
To make the object, technical scheme and advantages of the present invention clearer, the method, device and system for monitoring image processing of the present invention are further elaborated below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein only explain the present invention and are not intended to limit it.
In practice, it is quite common that the spatial position of the camera and the background scene are fixed while moving targets are relatively rare. In this case the video amounts to a combination of a relatively fixed background and sparse moving objects. For such video, the moving targets of multiple time periods can therefore be fused into the same fixed background to shorten the video. Because the content of interest to the user is concentrated in specific moving objects within the relatively fixed scene, this fusion allows the video to be browsed and targets to be located quickly and effectively while keeping the content unchanged.
As shown in Fig. 1, the present invention provides a monitoring image processing method comprising the following steps:
A. obtaining an original video;
B. processing the original video, fusing the video from a first-predetermined-threshold number of different time periods into the same time period;
C. outputting the processed video.
As shown in Fig. 2, step B comprises:
B1. segmenting the original video in time order.
Using shot detection, the original video is cut into a number of short time periods; overlapping video is allowed where necessary. Each segment should cover a moving object's entire passage from entering to leaving the scene, and each segment contains a stable background. Shot detection is prior art; see, for example, HongJiang Zhang, Atreyi Kankanhalli, Stephen W. Smoliar, Automatic Partitioning of Full-motion Video, Multimedia Systems, 1(1): 10-28, 1993. It is not repeated here.
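The time-ordered segmentation of step B1 can be sketched as follows. This is a minimal illustration only: it stands in a simple mean-frame-difference cut for the cited shot-detection technique, frames are modeled as flat grayscale pixel lists, and the cut threshold is an arbitrary assumption.

```python
def mean_abs_diff(a, b):
    """Mean absolute pixel difference between two flattened grayscale frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def segment_video(frames, cut_thresh=50.0):
    """Cut a frame sequence into time-ordered segments wherever the
    inter-frame difference jumps above cut_thresh (a crude stand-in for
    the shot-detection technique cited in the text).

    Returns a list of (first_frame, last_frame) index pairs.
    """
    segments, start = [], 0
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > cut_thresh:
            segments.append((start, i - 1))
            start = i
    segments.append((start, len(frames) - 1))
    return segments
```

For example, three dark frames followed by three bright frames split into two segments, (0, 2) and (3, 5).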
B2. extracting the background of each video segment.
The background extraction process can be described by the following formula:
B_n(i,j) = θ·I_n(i,j) + (1-θ)·B_{n-1}(i,j), n > 1
where B_n denotes the reference background; I_n denotes the instant background of the current frame, reflecting how much the current background has changed; θ is a coefficient between 0 and 1 that controls the update speed of the background and the sensitivity of the update to image changes; and n (n > 1) and n-1 are the frame numbers of the current frame and of the preceding adjacent frame, respectively. I_n is given by
I_n(i,j) = f_n(i,j),       if |f_n(i,j) - C_{n-1}(i,j)| ≤ T;
I_n(i,j) = C_{n-1}(i,j),   if |f_n(i,j) - C_{n-1}(i,j)| > T.
According to the formula above, if the difference between the current frame f_n and the reference background C_{n-1} of the preceding adjacent frame is greater than a given threshold T, the pixel is judged to belong to the active region of the current frame, and the pixel value of the preceding frame's reference background is used as the I value. If the difference is less than or equal to T, the pixel is considered to belong to a non-moving region, and the pixel at the corresponding position of the current frame is used directly as the I value. In this specification the given threshold T is defined as the third predetermined threshold.
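One step of this background update can be sketched directly from the formula. Frames are flat grayscale pixel lists; the θ and T values below are arbitrary illustrative choices, not values prescribed by the specification.

```python
def update_background(B_prev, C_prev, f_n, theta=0.05, T=30):
    """One step of the reference-background update
    B_n = theta*I_n + (1-theta)*B_{n-1}.

    The instant background I_n keeps the current pixel f_n where
    |f_n - C_{n-1}| <= T (non-moving region) and falls back to the
    previous reference pixel C_{n-1} otherwise (active region).
    Returns (B_n, I_n).
    """
    I_n = [f if abs(f - c) <= T else c for f, c in zip(f_n, C_prev)]
    B_n = [theta * i + (1 - theta) * b for i, b in zip(I_n, B_prev)]
    return B_n, I_n
```

With theta=0.5 and T=30, a pixel that moved by 10 is folded into the background, while a pixel that jumped by 100 is treated as foreground and the old reference value is kept.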
The main purpose of the background extraction technique is that the extracted background can be updated gradually along the time axis of the video sequence, so as to reduce or even eliminate the influence of changes in the external environment (illumination, weather, etc.) on the extraction of foreground moving targets.
Preferably, a stable background should be computed for each video segment. Following the background extraction method, the backgrounds of the video are collected piece by piece and averaged; the mean is then differenced against the next identified background. If the difference stays within a certain range, the background is considered stable and is used as the stable background of the segment.
Preferably, video segments without moving objects should be rejected. Based on the background recognition result, the background is subtracted from each frame to obtain the non-background pixel blocks; these pixel blocks constitute the moving target, i.e. the moving object. If the number of pixels obtained is less than a threshold m, the frame is considered to contain no moving object and is rejected. In this specification the threshold m is defined as the second predetermined threshold.
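The rejection of object-free frames can be sketched as below. Frames and the background are flat grayscale pixel lists, and the threshold values m and T are illustrative assumptions.

```python
def foreground_pixel_count(frame, background, T=30):
    """Count pixels differing from the background by more than T
    (the third predetermined threshold); these non-background pixels
    form the moving-object mask."""
    return sum(1 for f, b in zip(frame, background) if abs(f - b) > T)

def keep_moving_frames(frames, background, m=5, T=30):
    """Return the indices of frames whose foreground pixel count reaches
    m (the second predetermined threshold); frames below m are deemed
    to contain no moving object and are rejected."""
    return [i for i, fr in enumerate(frames)
            if foreground_pixel_count(fr, background, T) >= m]
```

A frame identical to the background yields a count of zero and is dropped; a frame with six bright pixels against a dark background survives the m=5 cut.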
B3. describing the trajectory of the moving objects in each video segment.
The trajectory of a moving object comprises the start and end times of each object's motion, the endpoints of the motion path, and the key frames in the course of the motion. The method of obtaining moving objects is similar to that of rejecting frames without moving objects: both are computed by frame subtraction. First, a value is set in the algorithm as the minimum motion unit of a moving object; for example, 0.02 second (i.e. a frequency of 50 frames per second) is defined as the minimum motion time unit of a moving object, and the video frames within the period are processed. The algorithm starts detecting from frame I. If frame I contains a moving object, frame I is taken as the start of the motion path; frame I+1 is then examined, and if it also contains a moving target, frame I+2 is examined, and so on. If frame I+n is found to contain no moving object, frame I+n-1 is taken as the end of the trajectory, so frames I to I+n-1 are defined as one motion trajectory.
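The frame-scanning logic for delimiting a trajectory (start at frame I, end at frame I+n-1) can be sketched as follows, with per-frame motion reduced to a boolean flag:

```python
def find_tracks(has_motion):
    """Scan per-frame motion flags in order. A track starts at the first
    frame with motion (frame I) and ends at frame I+n-1, the last
    consecutive moving frame before a motion-free frame I+n.

    Returns a list of (start_frame, end_frame) pairs.
    """
    tracks, start = [], None
    for i, moving in enumerate(has_motion):
        if moving and start is None:
            start = i                      # frame I: trajectory begins
        elif not moving and start is not None:
            tracks.append((start, i - 1))  # frame I+n-1: trajectory ends
            start = None
    if start is not None:                  # motion runs to the last frame
        tracks.append((start, len(has_motion) - 1))
    return tracks
```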
B4. establishing metadata for the background and the moving-object trajectories of each video segment.
Annotation information such as the position of each short video segment within the original video and the frame positions of each moving object is recorded, and a metadata table (Metadata) is established.
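A metadata table of the kind described might be assembled as below. The field names are illustrative assumptions, since the specification does not fix a schema; only the recorded quantities (segment position in the source video, frame spans of each object) follow the text.

```python
def build_metadata(segments, tracks_per_segment):
    """Build a metadata table recording, for each segment, its position
    in the original video and the frame spans of its moving-object
    trajectories. Field names are hypothetical."""
    table = []
    for (seg_start, seg_end), tracks in zip(segments, tracks_per_segment):
        table.append({
            "segment_start": seg_start,   # offset in the original video
            "segment_end": seg_end,
            "tracks": [{"first_frame": a, "last_frame": b}
                       for a, b in tracks],
        })
    return table
```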
Because the monitoring industry adopts the MPEG4/H.264 standard, which supports object-based coding, a video scene can be processed separately as foreground and background objects rather than as a series of complete rectangular frames. Storing the analyzed video together with its metadata on this principle is more conducive to the reuse and fusion of scene objects and to meaningful interaction with them.
B5. fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
Fusion can be understood intuitively as superimposing moving objects from different times into the same frame. As shown in Fig. 3a, Fig. 3b and Fig. 3c, the box in each figure represents the fixed background of the video and the arrow represents the trajectory of one moving object: Fig. 3a covers the period 9:00-9:20, Fig. 3b covers 10:00-10:15, and in Fig. 3c the two objects are fused into the period 9:00-9:20. In this way two segments of original video can be browsed in roughly half the time.
The program stores all motion trajectories in one metadata object. A segment fusion threshold, defined herein as the first predetermined threshold, is set in the algorithm. If, for example, the first predetermined threshold is set to 3, a fused video merged from 3 motion trajectories is maintained at all times: in order of occurrence, the first to third trajectories are superimposed onto the background simultaneously.
The concrete steps are as follows: first analyze the video, obtain the start frame numbers of the motion segments, and save them in a file; then analyze the motion segments and record the positions of the moving objects in the images; then superimpose. With the fused-trajectory count at 3, 3 trajectories are taken and superimposed in turn: the 1st frames of the 1st, 2nd and 3rd trajectories are superimposed together, then the 2nd frames of the 1st, 2nd and 3rd trajectories, and so on; when a motion segment has been fully superimposed, the next trajectory takes its place.
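This round-robin superposition with a fusion threshold can be sketched as follows. To keep the sketch self-contained, a trajectory is reduced to its frame count, and each fused frame is represented as a list of (trajectory id, local frame) pairs rather than a composited image; compositing onto the shared background is left out.

```python
def fuse_tracks(track_lengths, k=3):
    """Round-robin fusion: keep up to k trajectories active at once
    (k is the first predetermined threshold). Fused frame t pairs the
    t-th remaining frame of each active trajectory; when a trajectory
    is exhausted, the next pending one takes its place.

    track_lengths lists trajectory frame counts in order of occurrence.
    Returns, per fused frame, the (track_id, local_frame) pairs overlaid.
    """
    pending = list(range(len(track_lengths)))
    active, fused = [], []
    while active or pending:
        while len(active) < k and pending:      # refill empty slots
            active.append([pending.pop(0), 0])
        fused.append([(tid, pos) for tid, pos in active])
        active = [[tid, pos + 1] for tid, pos in active
                  if pos + 1 < track_lengths[tid]]
    return fused
```

With four trajectories of lengths 2, 1, 1, 1 and k=2, the fused video is three frames long instead of five.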
The definition of the first predetermined threshold is open and can be determined by those skilled in the art according to actual conditions. For example, when the number of trajectories in a period is small and the intervals between trajectories are long, the motion in the picture is considered sparse and the first predetermined threshold can be set larger. If the trajectories in a period are numerous and closely spaced, the first predetermined threshold can be set smaller.
Different trajectories may come from different time periods. When their original backgrounds differ, the backgrounds corresponding to the different trajectories are computed during processing and the segment backgrounds are averaged once more, yielding a background frame suitable for all the different trajectories at that point in time.
During superposition, the relationship between picture density and the number of fused objects must be handled well: too dense a picture shortens the browsing time but becomes chaotic, while too sparse a picture favors the layout but does not effectively shorten the browsing time. The relationship between the background and the dynamic factors of spatial invariance must also be handled: the problem is restricted to spatially constant backgrounds, but illumination, weather conditions and the like can still cause the background to differ. Fusion should therefore be carried out against a stable background.
As shown in Fig. 4, the present invention also provides a monitoring image processing device, comprising: a segmentation module for segmenting the original video in time order; a background extraction module for extracting the background of each video segment; a description module for describing the trajectory of the moving objects in each video segment; a data storage module for establishing metadata for the background and the moving-object trajectories of each video segment; and a fusion module for fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
As shown in Fig. 5, the present invention also provides a monitoring image processing system, comprising: an input unit for obtaining an original video; an image processing device for processing the original video, fusing the video from a first-predetermined-threshold number of different time periods into the same time period; and an output unit for outputting the processed video.
The image processing device is as described above, comprising the segmentation module, background extraction module, description module, data storage module and fusion module.
As shown in Fig. 6, the output unit comprises: an original-video playback window for controlling the playback rate, with stepping, rollback and frame-alignment functions; a fused-video playback window for controlling the playback rate and for setting the trajectory display, density and color marking of the fused objects; and a metadata visualization window for establishing a tree-like or relational visual representation of the metadata, facilitating query and enhanced display.
Both the original-video playback window and the fused-video playback window can control the playback rate and offer functions such as stepping, rollback and frame alignment. The fused-video playback window can additionally set the trajectory display, density and color marking of the fused objects and supports mouse events on moving objects; most user interaction takes place in this window.
The three windows can interact pairwise and update their contents synchronously. The original-video playback window takes little active part in this interaction.
Preferably, the output unit also comprises a storage unit for storing the analysis results of the original video, including the metadata, the results of motion tracking and recognition, and the targets of interest found by the user through interactive browsing.
Preferably, the output unit also comprises an editing unit for browsing, editing and printing reports.
The input unit comprises: a camera for recording images; transmission equipment for transmitting images; and an image reading module for establishing a project file of the original video and reading the original video.
Fig. 7, Fig. 8 and Fig. 9 show three application scenarios of the monitoring image processing system of the present invention.
Fig. 7 shows access to digital hard-disk video recorder (DVR) footage with the server operating independently. The advantage of this scenario is that the intelligent video summary server can run on its own. The disadvantage is that it connects to DVRs over the network, the device types are many and varied, and compatibility must be considered.
Fig. 8 shows digital signal access with the server docked to a networked video monitoring platform. The advantage of this scenario is that the intelligent video summary server can dock with Safe City and Global Eye platforms, whose market potential is huge. The disadvantage is that it cannot run independently: task scheduling is commanded by the platform, the server stays in a listening state, results cannot be viewed on the local machine and must instead be fed back to the platform for viewing.
Fig. 9 shows analog video signal access with the server operating independently. The advantage of this scenario is that the intelligent video summary server can run in isolation, recording video and producing video summaries by itself. The disadvantage is that it does not meet the requirements of large-scale video monitoring platforms.
Video processed by the method provided by the present invention can be played back as follows:
1. The device is initialized directly when the main interface starts.
2. After entering the playback interface, first select the device and channel to be played back.
3. After the device and channel are selected, a day of the current month can be clicked in the time control below if a corresponding summary file exists for that day; days without a summary file cannot be clicked.
4. After clicking, provided the file is not missing, the summary video file starts to play.
5. To jump to a certain time point in the summary video, first pause the video file; circles representing the motion within the time period can then be drawn on the video, and a circle is double-clicked after being selected.
6. After the double click, the XML (Extensible Markup Language) summary configuration file (e.g. 1011-1059.xml) is consulted to determine which moving object of which frame was chosen, and the frame number of that object is obtained. The range number of the frame is then obtained from the segmented video file, the time offset in the original video corresponding to that frame range is obtained from the MAP (map) file, and the concrete playback period is obtained from the time offset together with the FILEDATA (file data) file.
7. Log in to the relevant device.
8. Call the device SDK (Software Development Kit) and pass in the computed time to start playback.
9. Playback controls such as pause and resume can be used.
10. When playback no longer needs to be watched, click stop to end it.
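The lookup in step 6, from a summary frame back to a time offset in the original video, can be sketched as follows. The actual XML, MAP and FILEDATA file formats are not parsed here; the frame-range table, the per-range offsets, and the 25 fps frame rate are all assumptions standing in for what those files would supply.

```python
def locate_playback_time(object_frame, segment_ranges, segment_offsets,
                         fps=25.0):
    """Map a frame number from the summary back to an original-video
    time: find which frame range contains the frame, then add the
    range's time offset (as the MAP file would provide) plus the
    within-range time at the assumed frame rate.

    segment_ranges: list of (first_frame, last_frame) pairs.
    segment_offsets: start time in seconds of each range in the original.
    """
    for (first, last), offset in zip(segment_ranges, segment_offsets):
        if first <= object_frame <= last:
            return offset + (object_frame - first) / fps
    raise ValueError("frame not covered by any segment")
```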
With the method, device and system for monitoring image processing provided by the present invention, more video can be browsed in a shorter time while ensuring that no video information is lost.
Finally, it should be noted that those skilled in the art can obviously make various changes and modifications to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (8)

1. A monitoring image processing method, characterized by comprising the steps of:
A. obtaining an original video;
B. processing the original video, fusing the video from a first-predetermined-threshold number of different time periods into the same time period;
C. outputting the processed video;
wherein step B comprises the steps of:
B1. segmenting the original video in time order, each segment covering a moving object's passage from entering to leaving the scene and each segment containing a stable background;
B2. extracting the background of each video segment;
the background extraction process being described by the formula
B_n(i,j) = θ·I_n(i,j) + (1-θ)·B_{n-1}(i,j), n > 1,
where B_n denotes the reference background; I_n denotes the instant background of the current frame, reflecting how much the current background has changed; θ is a coefficient between 0 and 1 that controls the update speed of the background and the sensitivity of the update to image changes; n and n-1 are the frame numbers of the current frame and of the preceding adjacent frame, respectively; and I_n is given by
I_n(i,j) = f_n(i,j) if |f_n(i,j) - C_{n-1}(i,j)| ≤ T, and I_n(i,j) = C_{n-1}(i,j) if |f_n(i,j) - C_{n-1}(i,j)| > T;
according to the formula above, if the difference between the current frame f_n and the reference background C_{n-1} of the preceding adjacent frame is greater than a given threshold T, the pixel is judged to belong to the active region of the current frame and the pixel value of the preceding frame's reference background is used as the I value; if the difference is less than or equal to T, the pixel is considered to belong to a non-moving region and the pixel at the corresponding position of the current frame is used directly as the I value;
following the background extraction method, the backgrounds of the video being collected piece by piece and averaged, the mean being differenced against the next identified background, and, if the difference stays within a certain range, the background being considered stable and used as the stable background of the segment;
B3. describing the trajectory of the moving objects in each video segment;
B4. establishing metadata for the background and the moving-object trajectories of each video segment;
B5. fusing the motion trajectories of a first-predetermined-threshold number of different video segments with the same background.
2. The method according to claim 1, characterized in that step B5 comprises:
B51. setting the first predetermined threshold;
B52. superimposing each frame of the moving objects of the first-predetermined-threshold number of video segments onto the same background.
3. The method according to claim 1, characterized in that, after step B2 and before step B3, the method further comprises the step of:
B6. rejecting video segments that contain no moving objects.
4. The method according to claim 3, characterized in that step B6 comprises:
B61. subtracting the background from each frame to obtain a difference;
B62. rejecting the frame if the difference is less than a second predetermined threshold.
5. a device for monitoring image processing, is characterized in that, comprising:
Segmentation module, for by original video according to time sequencing segmentation, each section comprises a Moving Objects from entering into the process that shifts out scene, and each section all comprises stable background;
Background extracting module, for extracting the background of every section of video;
Described background extracting module is carried out following functions:
The process of background extracting, available following formula is described
B n(i,j)=θI n(i,j)+(1-θ)B n-1(i,j),n>1,
Bn represents with reference to background; In represents the instant background of current pending frame, and it has reflected the intensity of variation of current background; θ is the coefficient of value between 0 to 1, is used for controlling renewal speed and the sensitivity of context update to image change of background; N and n-1 represent respectively the frame number of current pending frame and the frame number of preorder consecutive frame; Wherein, In can be described by following formula
According to formula above, if the difference of the reference background Cn-1 of present frame fn and preorder consecutive frame is greater than given threshold value T, can judge that respective pixel belongs to the zone of action of present frame, use pixel value corresponding to the reference background of preorder consecutive frame as I value; If difference is less than or equal to T, can think that this pixel belongs to non-moving region, directly use the pixel of present frame corresponding position as I value; According to the method for background extracting, the background of video is piecemeal put together and calculated a mean value, adopting this mean value to do difference with the next background identifying calculates, if difference within the specific limits, think that this background is more stable, and Steady Background Light using this background as this section of video; Describing module, for describing the track of Moving Objects of every section of video;
A data storage module, configured to establish metadata for the background and the moving-object trajectory of each video segment;
A fusion module, configured to fuse the moving trajectories of a first-preset-threshold number of different video segments onto the same background.
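The background update and activity-region test described above can be sketched as a short routine. This is a minimal illustration only, not the patent's implementation; the function name and parameter defaults (`theta=0.1`, `T=25`) are assumptions for the example.

```python
import numpy as np

def update_background(frame, prev_bg, theta=0.1, T=25):
    """One step of the running-average background model
    B_n = theta * I_n + (1 - theta) * B_{n-1}.

    I_n is the "instant background": where |f_n - B_{n-1}| > T the
    pixel is treated as part of the activity (moving) region and the
    previous reference background is kept; elsewhere the current
    frame's pixel is used directly.
    """
    frame = frame.astype(np.float64)
    prev_bg = prev_bg.astype(np.float64)
    moving = np.abs(frame - prev_bg) > T          # activity region of the current frame
    instant = np.where(moving, prev_bg, frame)    # I_n
    return theta * instant + (1.0 - theta) * prev_bg
```

With a zero initial background, a pixel that jumps by more than T is held at the old background value (so transient objects do not pollute the model), while small changes are blended in at rate θ.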
6. A monitoring image processing system, characterized by comprising:
An input unit, configured to acquire an original video;
An image processing apparatus, configured to process the original video and fuse the videos of a first-preset-threshold number of different time periods into the same time period;
An output unit, configured to output the processed video;
wherein the image processing apparatus comprises:
A segmentation module, configured to segment the original video in chronological order, wherein each segment covers the process of a moving object from entering the scene to moving out of it, and each segment contains a stable background;
A background extraction module, configured to extract the background of each video segment;
The background extraction module performs the following functions:
The background extraction process can be described by the following formula:
B_n(i,j) = θ·I_n(i,j) + (1 − θ)·B_{n−1}(i,j), n > 1,
where B_n denotes the reference background; I_n denotes the instant background of the current frame to be processed, which reflects the degree of change of the current background; θ is a coefficient with a value between 0 and 1, used to control the update speed of the background and the sensitivity of the background update to image changes; and n and n−1 denote the frame numbers of the current frame to be processed and of the preceding adjacent frame, respectively. I_n can be described by the following formula:
I_n(i,j) = B_{n−1}(i,j), if |f_n(i,j) − B_{n−1}(i,j)| > T; I_n(i,j) = f_n(i,j), if |f_n(i,j) − B_{n−1}(i,j)| ≤ T.
According to the formula above, if the difference between the current frame f_n and the reference background B_{n−1} of the preceding adjacent frame is greater than a given threshold T, the corresponding pixel can be judged to belong to the activity region of the current frame, and the pixel value at the corresponding position of the reference background of the preceding adjacent frame is used as the I value; if the difference is less than or equal to T, the pixel can be considered to belong to a non-moving region, and the pixel at the corresponding position of the current frame is used directly as the I value. According to this background extraction method, the backgrounds of the video are assembled block by block and a mean value is calculated; the difference between this mean value and the next identified background is then computed, and if the difference lies within a given range, the background is considered stable and is taken as the stable background of this video segment;
A description module, configured to describe the trajectory of the moving object in each video segment;
A data storage module, configured to establish metadata for the background and the moving-object trajectory of each video segment;
A fusion module, configured to fuse the moving trajectories of a first-preset-threshold number of different video segments onto the same background.
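The fusion module's behaviour, overlaying the object trajectories of several time periods onto one shared stable background, can be sketched as follows. This is a minimal illustration under assumed data layouts (the per-segment trajectory format and re-based frame indices are inventions of this example, not the patent's metadata format):

```python
import numpy as np

def fuse_segments(background, trajectories, k):
    """Overlay the moving-object patches of the first k segments'
    trajectories onto copies of one shared (stable) background.

    Each trajectory is a list of (frame_index, (y, x), patch) tuples,
    with frame indices re-based to start at 0 so that objects recorded
    in different time periods play out in the same fused time period.
    """
    chosen = trajectories[:k]
    length = 1 + max(f for traj in chosen for f, _, _ in traj)
    # one copy of the stable background per fused output frame
    fused = np.stack([background.copy() for _ in range(length)])
    for traj in chosen:
        for f, (y, x), patch in traj:
            h, w = patch.shape
            fused[f, y:y+h, x:x+w] = patch  # paste the object onto the shared background
    return fused
```

Because every trajectory starts at fused frame 0, the fused clip is only as long as the longest trajectory, which is the source of the browsing-efficiency gain the abstract claims.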
7. The system according to claim 6, characterized in that the output unit comprises:
An original-video playback window, configured to control the playback rate, with stepping, rewind, and frame-alignment functions;
A fused-video playback window, configured to control the playback rate and to set the trajectory display of fused objects, the density of fused objects, and the color marking of fused objects;
A metadata visualization window, configured to establish a tree-like or relational visual representation of the metadata, facilitating query and enhanced display.
8. The system according to claim 6, characterized in that the input unit comprises:
A camera, configured to record images;
A transmission device, configured to transmit images;
An image reading module, configured to establish a project file for the original video and to read the original video.
CN201110159214.XA 2011-06-14 2011-06-14 Monitoring image processing method, device and system Expired - Fee Related CN102231820B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110159214.XA CN102231820B (en) 2011-06-14 2011-06-14 Monitoring image processing method, device and system


Publications (2)

Publication Number Publication Date
CN102231820A CN102231820A (en) 2011-11-02
CN102231820B true CN102231820B (en) 2014-08-06

Family

ID=44844347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110159214.XA Expired - Fee Related CN102231820B (en) 2011-06-14 2011-06-14 Monitoring image processing method, device and system

Country Status (1)

Country Link
CN (1) CN102231820B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103187083B (en) * 2011-12-29 2016-04-13 深圳中兴力维技术有限公司 A kind of storage means based on time domain video fusion and system thereof
CN102930541B (en) * 2012-10-29 2015-06-24 深圳市开天源自动化工程有限公司 Background extracting and updating method of video images
CN103227963A (en) * 2013-03-20 2013-07-31 西交利物浦大学 Static surveillance video abstraction method based on video moving target detection and tracing
CN103347167B (en) * 2013-06-20 2018-04-17 上海交通大学 A kind of monitor video content based on segmentation describes method
CN104751164A (en) * 2013-12-30 2015-07-01 鸿富锦精密工业(武汉)有限公司 Method and system for capturing movement trajectory of object
CN105336347B (en) * 2014-06-26 2018-04-20 杭州萤石网络有限公司 A kind of video file browsing method and device
CN107333175B (en) * 2015-04-15 2019-06-25 广东欧珀移动通信有限公司 A kind of method and mobile terminal of video processing
CN105872540A (en) * 2016-04-26 2016-08-17 乐视控股(北京)有限公司 Video processing method and device
CN106504270B (en) 2016-11-08 2019-12-20 浙江大华技术股份有限公司 Method and device for displaying target object in video
CN108012202B (en) 2017-12-15 2020-02-14 浙江大华技术股份有限公司 Video concentration method, device, computer readable storage medium and computer device
CN110706193A (en) * 2018-06-21 2020-01-17 北京京东尚科信息技术有限公司 Image processing method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744695A (en) * 2005-05-13 2006-03-08 圆刚科技股份有限公司 Method for searching picture of monitoring system
CN1761319A (en) * 2004-10-12 2006-04-19 国际商业机器公司 Video analysis, archiving and alerting methods and apparatus for a video surveillance system
CN101500090A (en) * 2008-01-30 2009-08-05 索尼株式会社 Image processing apparatus, image processing method, and program


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Alex Rav-Acha et al. "Making a Long Video Short: Dynamic Video Synopsis." Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), 2006; abstract, sections 1-6. *
Michal Irani et al. "Efficient representations of video sequences and their applications." Signal Processing: Image Communication, vol. 8, no. 4, May 1996, pp. 2-37; abstract, sections 2-4. *


Similar Documents

Publication Publication Date Title
CN102231820B (en) Monitoring image processing method, device and system
Miao et al. Neuromorphic vision datasets for pedestrian detection, action recognition, and fall detection
CN101631237B (en) Video monitoring data storing and managing system
US10970334B2 (en) Navigating video scenes using cognitive insights
CN101420595B (en) Method and equipment for describing and capturing video object
Ajmal et al. Video summarization: techniques and classification
US20160350599A1 (en) Video camera scene translation
US20160133297A1 (en) Dynamic Video Summarization
CN103347167A (en) Surveillance video content description method based on fragments
CN104918060B (en) The selection method and device of point position are inserted in a kind of video ads
CN102314916B (en) Video processing method and system
EP2735984A1 (en) Video query method, device and system
KR20090093904A (en) Apparatus and method for scene variation robust multimedia image analysis, and system for multimedia editing based on objects
KR20160097870A (en) System and method for browsing summary image
US11037604B2 (en) Method for video investigation
KR101484844B1 (en) Apparatus and method for privacy masking tool that provides real-time video
Höferlin et al. Uncertainty-aware video visual analytics of tracked moving objects
Malon et al. Toulouse campus surveillance dataset: scenarios, soundtracks, synchronized videos with overlapping and disjoint views
KR102072022B1 (en) Apparatus for Providing Video Synopsis Computer-Readable Recording Medium with Program therefore
CN111031351A (en) Method and device for predicting target object track
JP7111873B2 (en) SIGNAL LAMP IDENTIFICATION METHOD, APPARATUS, DEVICE, STORAGE MEDIUM AND PROGRAM
CN114170556A (en) Target track tracking method and device, storage medium and electronic equipment
CN106162222B (en) A kind of method and device of video lens cutting
CN103986981A (en) Recognition method and device of scenario segments of multimedia files
Chao et al. Augmented 3-D keyframe extraction for surveillance videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140806

Termination date: 20210614