CN103177453B - Event analysis method and device based on moving object edge features - Google Patents

Event analysis method and device based on moving object edge features

Info

Publication number
CN103177453B
CN103177453B CN201110433912.4A
Authority
CN
China
Prior art keywords
edge
video
space
frame
motion edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110433912.4A
Other languages
Chinese (zh)
Other versions
CN103177453A (en)
Inventor
任广杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN201110433912.4A priority Critical patent/CN103177453B/en
Publication of CN103177453A publication Critical patent/CN103177453A/en
Application granted granted Critical
Publication of CN103177453B publication Critical patent/CN103177453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses an event analysis method and system based on moving object edge features. The method comprises: obtaining the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval; calculating the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval; calculating the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate; and determining, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched. By combining these motion features, the resulting feature set has a good correspondence with different events, so that the event content in a video segment can be analyzed accurately.

Description

Event analysis method and device based on moving object edge features
Technical field
The present invention relates to the field of video image analysis, and in particular to an event analysis method and device based on moving object edge features.
Background art
With the development of video surveillance and analysis technology, its range of application has become wider and wider, covering fields such as public security, quality inspection, insurance, environmental protection, meteorology, finance, traffic, water conservancy, municipal administration and small and medium-sized enterprises. Meanwhile, with the development of network technology, each monitored point is no longer watched by a single camera but may be monitored from several directions by multiple cameras. The monitored period has also gradually extended from 8 hours a day to 24 hours. In important regions and time periods in particular, higher requirements for social safety and personal safety are imposed, which further promotes the development of the video surveillance field and a continuously growing number of surveillance cameras, and at the same time poses an ever harder challenge to the analysis of the video content obtained by surveillance.
Among these challenges, a major one for urban safety management and community safety management is how to detect different types of social events quickly and accurately; using video surveillance and analysis technology to identify the occurrence of such events is of great significance for maintaining social stability.
Summary of the invention
The inventor proposes a new technical solution in which moving object edge features are analyzed in the video images captured by video surveillance, so as to determine whether a specific event has occurred.
An object of the present invention is to provide an event analysis method and device based on moving object edge features.
According to a first aspect of the present invention, an event analysis method based on moving object edge features is provided, the method comprising:
obtaining the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval;
calculating the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval,
wherein the plane in which the reference motion edge graph lies is divided equally at a predetermined angle, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, and the bisecting lines are taken as coordinate axes to form a first coordinate system, and the spatial diffusion rate is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph;
calculating, from the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate;
determining, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched.
Preferably, the occupancy ratio of the reference motion edge graph is the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph.
Preferably, the method further comprises:
obtaining the reference motion edge graph of each of the multiple video frames within a second predetermined time interval;
calculating the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
calculating the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first predetermined time interval and the second predetermined time interval, so as to form the spatial feature of the geometric center position;
translating the center of the first coordinate system to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, and calculating the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
calculating the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval as a first mean spatial diffusion rate;
calculating the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval as a second mean spatial diffusion rate;
calculating the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate;
determining, according to the spatial feature of the geometric center position and the spatial feature of the spatial diffusion rate, whether a preset event spatial feature is matched.
Preferably, the reference motion edge graph in the foreground image of the video frame is obtained by the following method:
generating the background of the video frame;
extracting the foreground of the video frame based on the background of the video frame;
extracting the brightness information of each pixel of the foreground of the video frame as the motion feature factor of that pixel;
selecting the point of the video frame with the largest motion feature factor change rate as a first motion edge reference point, the motion feature factor change rate being the variation of the motion feature factor value between the foreground of the current frame and that of the next frame;
comparing the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared, and selecting the pixel with the largest motion feature factor change rate as a second motion edge reference point;
taking the second motion edge reference point as the new first motion edge reference point, and repeating the comparison step until all pixels of the video frame have been compared, each pixel of the video frame being compared only once;
the first and second motion edge reference points forming the reference motion edge graph in the foreground of the video frame.
Preferably, the brightness information is the product of the luma component in the YUV model and the lightness component in the HSL model.
According to a second aspect of the present invention, an event analysis device based on moving object edge features is further provided, the device comprising:
a reference motion edge graph acquiring unit, for obtaining the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval;
a reference motion edge graph calculating unit, for calculating the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval,
wherein the plane in which the reference motion edge graph lies is divided equally at a predetermined angle, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, and the bisecting lines are taken as coordinate axes to form a first coordinate system, and the spatial diffusion rate is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph;
a temporal feature calculating unit, for calculating, from the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate;
an event determining unit, for determining, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched.
Preferably, the occupancy ratio of the reference motion edge graph is the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph.
Preferably, the device further comprises:
a second reference motion edge graph acquiring unit, for obtaining the reference motion edge graph of each of the multiple video frames within a second predetermined time interval;
a second reference motion edge graph calculating unit, for calculating the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
a geometric center position spatial feature calculating unit, for calculating the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first predetermined time interval and the second predetermined time interval, so as to form the spatial feature of the geometric center position;
a spatial diffusion rate spatial feature calculating unit, for translating the center of the first coordinate system to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, calculating the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval,
calculating the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval as a first mean spatial diffusion rate,
calculating the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval as a second mean spatial diffusion rate,
and calculating the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate;
a second event determining unit, for determining, according to the spatial feature of the geometric center position and the spatial feature of the spatial diffusion rate, whether a preset event spatial feature is matched.
Preferably, the reference motion edge graph acquiring unit comprises:
a background generating module, for generating the background of the video frame;
a foreground extracting module, for extracting the foreground of the video frame based on the background of the video frame;
a motion feature factor extracting module, for extracting the brightness information of each pixel of the foreground of the video frame as the motion feature factor of that pixel;
a reference motion edge graph acquiring module, for selecting the point of the video frame with the largest motion feature factor change rate as a first motion edge reference point, the motion feature factor change rate being the variation of the motion feature factor value between the foreground of the current frame and that of the next frame,
comparing the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared, and selecting the pixel with the largest motion feature factor change rate as a second motion edge reference point,
taking the second motion edge reference point as the new first motion edge reference point, and repeating the comparison step until all pixels of the video frame have been compared, each pixel of the video frame being compared only once,
the first and second motion edge reference points forming the reference motion edge graph in the foreground of the video frame.
Preferably, the brightness information is the product of the luma component in the YUV model and the lightness component in the HSL model.
In the technology provided by the present invention, different group events are distinguished and identified through the analysis of moving object edge features in video images. That is, by analyzing the reference motion edge graph of the moving objects in the video images and calculating multiple features of that graph, multiple motion feature factors of the moving objects are extracted. By combining these motion features, a feature vector set of the crowd is obtained. The feature vector set has a good correspondence with different events, such as simple walking, running, wandering, squatting, fighting or gathering. The event content in a video segment over a period of time can therefore be analyzed accurately.
Further, the similarity between the feature vector sets of different events, as well as the change of the feature vector set of an event across video segments at different times, can be used to predict the likely development of the event. With this system, the speed of event-type-based video search and target localization can be improved, and the development of a particular event can be predicted.
Further features of the present invention and the advantages thereof will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief description of the drawings
The accompanying drawings, which constitute a part of the specification, illustrate embodiments of the present invention and, together with the description, serve to explain the principles of the present invention.
The present invention can be understood more clearly from the following detailed description with reference to the accompanying drawings, in which:
Fig. 1 is a schematic flowchart of an embodiment of the event analysis method based on moving object edge features provided by the present invention;
Fig. 2 is a schematic structural diagram of an embodiment of the event analysis device based on moving object edge features provided by the present invention.
Detailed description of the embodiments
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that, unless otherwise specified, the relative arrangement of the components and steps set forth in these embodiments does not limit the scope of the present invention.
Meanwhile, it should be understood that, for convenience of description, the sizes of the various parts shown in the accompanying drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the present invention or its application or use.
Techniques, methods and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail, but, where appropriate, such techniques, methods and apparatus should be regarded as part of the specification.
In all examples shown and discussed herein, any specific value should be interpreted as merely exemplary rather than limiting. Other examples of the exemplary embodiments may therefore have different values.
It should be noted that similar reference numerals and letters denote similar items in the following accompanying drawings; therefore, once an item is defined in one drawing, it need not be further discussed in subsequent drawings.
Event analysis method based on moving object edge features
Fig. 1 is a schematic flowchart of an embodiment of the event analysis method based on moving object edge features provided by the present invention. Referring to Fig. 1, this embodiment is described below.
Step 101: obtain the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval.
The reference motion edge graph in a foreground image can be obtained by various methods known to those skilled in the art. A preferred method extracts the reference motion edge graph from the brightness information in the foreground image. The extraction of the reference motion edge graph from this brightness information is described in detail below.
First, the background of the video frame is generated.
For the multiple video frames within the predetermined time interval, the background can be generated for each frame of video image by methods well known to those skilled in the art. For example, to obtain a higher processing speed, single-frame modeling with fast updating can be adopted, as sketched below.
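The patent does not prescribe a particular background model; the following is a minimal sketch assuming an exponential running-average background with frame differencing, chosen only to illustrate the kind of fast single-frame updating mentioned above (the parameter names and thresholds are illustrative, not from the patent).

import numpy as np

def update_background(background, frame, alpha=0.05):
    # Exponential running-average background (assumed model, not the claimed one).
    # background: previous float32 background or None; frame: grayscale image.
    frame = frame.astype(np.float32)
    if background is None:
        return frame                      # single-frame initialization
    return (1.0 - alpha) * background + alpha * frame

def extract_foreground(background, frame, threshold=25.0):
    # Foreground mask by thresholded absolute difference from the background.
    diff = np.abs(frame.astype(np.float32) - background)
    return diff > threshold               # boolean foreground mask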
Next, the foreground of the video frame is extracted based on the background of the video frame.
On the basis of the background of the video frame, the foreground of the video frame is obtained by extracting, relative to that background, the regions that differ from it.
Next, the brightness information of each pixel of the foreground of the video frame is extracted as the motion feature factor of that pixel.
The foreground image contains the image of the moving crowd, and group motion types are reflected by the edge of the crowd, so the edge of the moving crowd can be extracted from the foreground image of each frame. The method uses the brightness information of the edge image: the brightness information of each pixel of the foreground of the video frame is extracted as the motion feature factor of that pixel.
The brightness information can be the product of the luma Y component of the pixel in the YUV model and the lightness L component in the HSL model. Concretely, the brightness information of each pixel of the foreground of the video frame can be extracted by multiplying the Y component of each pixel by its L component, giving the motion feature factor μ = Y * L of each pixel.
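As an illustration, μ = Y * L can be computed per pixel from an RGB frame using the standard BT.601 luma weights for Y and the HSL lightness L = (max(R,G,B) + min(R,G,B)) / 2; the sketch below follows that reading (scaling the channels to [0, 1] is an assumption of the example).

import numpy as np

def motion_feature_factor(rgb):
    # mu = Y * L per pixel of an H x W x 3 uint8 RGB frame.
    rgb = rgb.astype(np.float32) / 255.0            # scaling to [0, 1] is an assumption
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b           # luma Y (BT.601 weights)
    l = (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2   # lightness L of the HSL model
    return y * l                                    # motion feature factor mu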
Next, the point of the video frame with the largest motion feature factor change rate is selected as a first motion edge reference point, where the motion feature factor change rate is the variation of the motion feature factor value between the foreground of the current frame and that of the next frame.
That is, the motion feature factor change rate of each pixel is calculated, and the point with the largest change rate serves as the starting reference point of the motion edge.
Next, the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared are compared, and the pixel with the largest motion feature factor change rate among them is selected as a second motion edge reference point; the second motion edge reference point is then taken as the new first motion edge reference point, and the comparison step is repeated until all pixels of the video frame have been compared, each pixel of the video frame being compared only once.
Specifically, starting from the point with the largest motion feature factor change rate, the 8 pixels connected to it are examined, expanding outward in turn, and the point with the largest change rate among these 8 points serves as the second motion edge reference point. In this way, all pixels within the range of the potential foreground are compared exactly once, and all motion edge reference points in the foreground of the video frame are obtained. For each frame, a graph composed of these motion reference points is thus obtained and serves as the reference motion edge graph T of that frame.
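A minimal sketch of the tracing step described above: starting from the pixel with the largest motion feature factor change rate, the 8-neighbours not yet visited are compared and the largest becomes the next reference point, until no unvisited candidate remains. The function name and the stopping rule for isolated points are illustrative assumptions, not taken from the patent.

import numpy as np

def trace_motion_edge(change_rate, foreground_mask):
    # change_rate: per-pixel |mu_next - mu_current|; foreground_mask: boolean mask.
    rate = np.where(foreground_mask, change_rate, -np.inf)
    visited = np.zeros(rate.shape, dtype=bool)
    h, w = rate.shape
    current = np.unravel_index(np.argmax(rate), rate.shape)   # first reference point
    visited[current] = True
    edge_points = [current]
    while True:
        r, c = current
        best, best_rate = None, -np.inf
        for dr in (-1, 0, 1):                  # compare the 8 adjacent pixels
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                    visited[nr, nc] = True     # every pixel is compared only once
                    if rate[nr, nc] > best_rate:
                        best, best_rate = (nr, nc), rate[nr, nc]
        if best is None:                       # no unvisited foreground neighbour remains
            break
        edge_points.append(best)
        current = best                         # the winner becomes the new reference point
    return edge_points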
Step 102: calculate the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval.
The edge features calculated for the reference motion edge graph T of each frame comprise the geometric center position cc, the occupancy ratio OBR and the spatial diffusion rate SDR. They can be obtained, for example, by the following methods.
Geometric center position cc
When the reference motion edge graph T is the boundary of a closed region, the geometric center of the reference motion edge graph can be calculated directly. When the reference motion edge graph is not the boundary of a closed region, the unconnected end points of the open graph are joined with straight lines to form a non-expanded closed region, i.e. the circumscribed non-expanded closed region of the reference motion edge graph, after which the geometric center of the reference motion edge graph can be calculated.
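A minimal sketch, assuming the geometric center is taken as the mean coordinate of the edge points (the patent leaves the exact formula open; for an open graph the end points would first be joined as described above).

import numpy as np

def geometric_center(edge_points):
    # cc: mean (row, col) coordinate of the points of the edge graph.
    pts = np.asarray(edge_points, dtype=np.float64)
    return pts.mean(axis=0)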
Occupancy ratio OBR
The occupancy ratio is an index describing how densely the content points of the reference motion edge graph T are gathered. The occupancy ratio of the reference motion edge graph can be expressed as the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph. The pixels of the reference motion edge graph do not necessarily form the boundary of a closed region; in that case, similarly to the method above, the open graph can be closed without expansion to form the circumscribed non-expanded closed region of the reference motion edge graph.
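A sketch of the occupancy ratio under the assumption that the enclosed region is obtained by filling the (closed or artificially closed) edge mask; scipy's binary_fill_holes is used here merely as one convenient way to fill the interior.

import numpy as np
from scipy.ndimage import binary_fill_holes

def occupancy_ratio(edge_mask):
    # OBR = (# pixels of the edge graph) / (# pixels of the region it encloses).
    # edge_mask: boolean H x W array, True at the points of the (closed) edge graph.
    filled = binary_fill_holes(edge_mask)     # edge plus its enclosed interior
    enclosed = int(filled.sum())
    return float(edge_mask.sum()) / enclosed if enclosed else 0.0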
Spatial diffusion rate SDR
To capture the change of the movement direction of the reference motion edge graph more accurately, the plane in which the reference motion edge graph lies is divided equally at a predetermined angle Ω, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, forming multiple sectors of angle Ω, and the bisecting lines are taken as coordinate axes to form a first coordinate system. Next, the spatial diffusion rate, in the first coordinate system, of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval is calculated. The spatial diffusion rate of the reference motion edge graph T is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph. The spatial diffusion rate SDR describes whether the distribution of the content points of the reference motion edge graph T shows an obvious spreading trend in particular directions. In the calculation, the smaller the angle Ω, the denser the division of the plane and the more accurate the result's influence on the final judgement, but the larger the computation load.
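One reading of this construction, as a sketch: the coordinate axes are taken at angles k*Ω around the center of the first frame's edge graph, and each component of the SDR vector is the absolute value of the sum of the projections of the edge points onto that axis. The exact choice of axes (dividing lines versus sector bisectors) is an assumption of the example.

import numpy as np

def spatial_diffusion_rate(edge_points, center, omega_deg=45.0):
    # SDR: one component per axis direction k * Omega around the given center.
    pts = np.asarray(edge_points, dtype=np.float64) - np.asarray(center, dtype=np.float64)
    n_axes = int(round(360.0 / omega_deg))
    sdr = np.empty(n_axes)
    for k in range(n_axes):
        theta = np.deg2rad(k * omega_deg)
        axis = np.array([np.cos(theta), np.sin(theta)])   # unit vector of the k-th axis
        sdr[k] = abs((pts @ axis).sum())                   # |sum of projections| on that axis
    return sdr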
Step 103: according to the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, respectively calculate the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate.
For the video frames within the first predetermined time interval, the temporal change characteristics of the reference motion edge graph are calculated and denoted μt. The temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate of the successive frames within a certain time segment are denoted μt-cc, μt-obr and μt-sdr respectively. For example, the predetermined time segment may be 500 ms, i.e. 500 ms is taken as one standard time segment Du; the μt values corresponding to cc, OBR and SDR are then μt-cc, μt-obr and μt-sdr respectively. When calculating the change rates, the smaller the time interval between successive frames, i.e. the more frames within the predetermined time segment, the more accurate the result, but the larger the corresponding computation load.
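A sketch of the temporal features, assuming the per-frame features have already been computed for the frames of one standard time segment Du; the change rate is aggregated here as the mean absolute frame-to-frame difference, which is an assumption (the patent specifies frame-to-frame change rates but not the exact aggregation).

import numpy as np

def temporal_feature(values):
    # Mean absolute frame-to-frame change of a per-frame feature over one segment Du.
    # values: shape (n_frames,) for OBR, or (n_frames, d) for cc / SDR vectors.
    v = np.asarray(values, dtype=np.float64)
    return np.abs(np.diff(v, axis=0)).mean(axis=0)

# mu_t_cc  = temporal_feature(cc_per_frame)    # temporal feature of the geometric center
# mu_t_obr = temporal_feature(obr_per_frame)   # temporal feature of the occupancy ratio
# mu_t_sdr = temporal_feature(sdr_per_frame)   # temporal feature of the spatial diffusion rate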
In another embodiment, the calculation and analysis of the spatial features μs of the reference motion edge graph T can be further added, comprising the spatial feature μs-cc of the geometric center position and the spatial feature μs-sdr of the spatial diffusion rate. The spatial feature μs-cc of the geometric center position reflects the change of the coordinate origin between two time segments. The methods for calculating the spatial feature μs-cc of the geometric center position and the spatial feature μs-sdr of the spatial diffusion rate are as follows.
Spatial feature of the geometric center position
First, the reference motion edge graph of each of the multiple video frames within a second predetermined time interval is obtained.
Next, the coordinates of the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval are calculated.
Next, the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first and second predetermined time intervals is calculated, so as to form the spatial feature of the geometric center position.
Spatial feature of the spatial diffusion rate
First, the center of the first coordinate system is translated to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, and the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval is calculated.
Next, the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval is calculated as a first mean spatial diffusion rate.
Next, the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval is calculated as a second mean spatial diffusion rate.
Next, the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate is calculated as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate.
Further, from μs-cc and μs-sdr, the spatial discrimination factor μs of the reference motion edge graph can be obtained. μs is a linear function of μs-cc and μs-sdr, μs = A*μs-cc + B*μs-sdr, where A and B are predetermined coefficients; they characterize the degree of attention the operator pays to the different event characteristics and can be adjusted according to the operator's preference.
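The spatial features and the spatial discrimination factor can be combined as sketched below; A and B are the operator-chosen coefficients, and reducing the vector-valued quantities to scalars with a Euclidean norm is an assumption made for the illustration.

import numpy as np

def spatial_discrimination_factor(cc_first_1, cc_first_2, sdr_mean_1, sdr_mean_2, A=1.0, B=1.0):
    # mu_s_cc: change of the coordinate origin between the two time intervals.
    mu_s_cc = np.linalg.norm(np.asarray(cc_first_2, float) - np.asarray(cc_first_1, float))
    # mu_s_sdr: difference between the first and second mean spatial diffusion rates.
    mu_s_sdr = np.linalg.norm(np.asarray(sdr_mean_1, float) - np.asarray(sdr_mean_2, float))
    mu_s = A * mu_s_cc + B * mu_s_sdr          # mu_s = A*mu_s-cc + B*mu_s-sdr
    return mu_s_cc, mu_s_sdr, mu_s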
Step 104: according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, determine whether a preset event temporal feature is matched.
One or more of the temporal feature μt-cc of the geometric center position, the temporal feature μt-obr of the occupancy ratio and the temporal feature μt-sdr of the spatial diffusion rate of the reference motion edge graph form the motion feature factor set β of the video image foreground within the predetermined time interval. β can be regarded as a 3-dimensional space, and different moving crowds have different feature factor sets.
According to another embodiment, the spatial feature μs-cc of the geometric center position and the spatial feature μs-sdr of the spatial diffusion rate can further be added, so that the resulting motion feature factor set β can be regarded as a 5-dimensional space.
According to the characteristics of different events, the motion feature factor set β may comprise a single characterization factor or a combination of several characterization factors, for example as shown in the following table, where the first time interval Du is 500 ms and the angle Ω is 45 degrees.
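The patent's table of event characteristics is not reproduced here. Purely as a hedged illustration of the matching step, the sketch below compares the feature set β against hypothetical per-event threshold ranges; all event names and numeric ranges are placeholders and not taken from the patent.

# Hypothetical event templates: (lower, upper) bounds for each feature of beta.
EVENT_TEMPLATES = {
    "walking":   {"mu_t_cc": (0.5, 3.0),  "mu_t_obr": (0.00, 0.05), "mu_t_sdr": (0.0, 2.0)},
    "running":   {"mu_t_cc": (3.0, 10.0), "mu_t_obr": (0.00, 0.10), "mu_t_sdr": (1.0, 5.0)},
    "gathering": {"mu_t_cc": (0.0, 1.0),  "mu_t_obr": (0.05, 1.00), "mu_t_sdr": (2.0, 10.0)},
}

def match_preset_events(beta, templates=EVENT_TEMPLATES):
    # beta: dict mapping feature names to scalar values of the motion feature factor set.
    matches = []
    for event, ranges in templates.items():
        if all(lo <= beta.get(name, float("nan")) <= hi for name, (lo, hi) in ranges.items()):
            matches.append(event)
    return matches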
According to the calculated temporal features and spatial features of an event, the development of the event can also be predicted. The change of β across surrounding time segments is analyzed in temporal order, from which the likelihood of the event evolving can be derived, and further its probability over the spatial distribution can be obtained.
Event analysis device based on moving object edge features
Fig. 2 is a schematic structural diagram of an embodiment of the event analysis device based on moving object edge features provided by the present invention. Referring to Fig. 2, this embodiment is described below.
The device comprises a reference motion edge graph acquiring unit 201, a reference motion edge graph calculating unit 202, a temporal feature calculating unit 203 and an event determining unit 204.
The reference motion edge graph acquiring unit 201 is configured to obtain the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval.
The reference motion edge graph calculating unit 202 is configured to calculate the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval, wherein the plane in which the reference motion edge graph lies is divided equally at a predetermined angle, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, and the bisecting lines are taken as coordinate axes to form a first coordinate system; the spatial diffusion rate is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph.
The temporal feature calculating unit 203 is configured to calculate, from the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate.
The occupancy ratio of the reference motion edge graph is the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph.
The event determining unit 204 is configured to determine, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched.
In another embodiment, the analysis device may further comprise a second reference motion edge graph acquiring unit, a second reference motion edge graph calculating unit, a geometric center position spatial feature calculating unit, a spatial diffusion rate spatial feature calculating unit and a second event determining unit.
The second reference motion edge graph acquiring unit is configured to obtain the reference motion edge graph of each of the multiple video frames within a second predetermined time interval.
The second reference motion edge graph calculating unit is configured to calculate the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval.
The geometric center position spatial feature calculating unit is configured to calculate the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first and second predetermined time intervals, so as to form the spatial feature of the geometric center position.
The spatial diffusion rate spatial feature calculating unit is configured to translate the center of the first coordinate system to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, calculate the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval, calculate the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval as a first mean spatial diffusion rate, calculate the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval as a second mean spatial diffusion rate, and calculate the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate.
The second event determining unit is configured to determine, according to the spatial feature of the geometric center position and the spatial feature of the spatial diffusion rate, whether a preset event spatial feature is matched.
The reference motion edge graph acquiring unit 201 may specifically comprise a background generating module, a foreground extracting module, a motion feature factor extracting module and a reference motion edge graph acquiring module.
The background generating module is configured to generate the background of the video frame.
The foreground extracting module is configured to extract the foreground of the video frame based on the background of the video frame.
The motion feature factor extracting module is configured to extract the brightness information of each pixel of the foreground of the video frame as the motion feature factor of that pixel. The brightness information can be the product of the luma component in the YUV model and the lightness component in the HSL model.
The reference motion edge graph acquiring module is configured to select the point of the video frame with the largest motion feature factor change rate as a first motion edge reference point, the motion feature factor change rate being the variation of the motion feature factor value between the foreground of the current frame and that of the next frame, to compare the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared and select the pixel with the largest motion feature factor change rate as a second motion edge reference point, and to take the second motion edge reference point as the new first motion edge reference point and repeat the comparison step until all pixels of the video frame have been compared, each pixel of the video frame being compared only once; the first and second motion edge reference points form the reference motion edge graph in the foreground of the video frame.
The event analysis method and device based on moving object edge features according to the present invention have thus been described in detail. In order not to obscure the concept of the present invention, some details known in the art are not described. Those skilled in the art can, based on the above description, fully understand how to implement the technical solutions disclosed herein.
The methods, systems and devices of the present invention may be implemented in many ways. For example, the methods and systems of the present invention may be implemented by software, hardware, firmware, or any combination of software, hardware and firmware. The above order of the steps of the methods is merely for illustration, and the steps of the methods of the present invention are not limited to the order specifically described above, unless otherwise specified. Furthermore, in some embodiments, the present invention may also be embodied as programs recorded in a recording medium, the programs comprising machine-readable instructions for implementing the methods according to the present invention. Thus, the present invention also covers a recording medium storing programs for performing the methods according to the present invention.
Although some specific embodiments of the present invention have been described in detail by way of example, it should be understood by those skilled in the art that the above examples are merely illustrative and are not intended to limit the scope of the present invention. It should also be understood by those skilled in the art that the above embodiments may be modified without departing from the scope and spirit of the present invention. The scope of the present invention is defined by the appended claims.

Claims (10)

1. An event analysis method based on moving object edge features, characterized by comprising:
obtaining the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval;
calculating the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval, wherein the plane in which the reference motion edge graph lies is divided equally at a predetermined angle, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, and the bisecting lines are taken as coordinate axes to form a first coordinate system, and the spatial diffusion rate is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph;
calculating, from the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate;
determining, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched.
2. The method according to claim 1, characterized in that the occupancy ratio of the reference motion edge graph is the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph.
3. The method according to claim 1, characterized in that the method further comprises:
obtaining the reference motion edge graph of each of the multiple video frames within a second predetermined time interval;
calculating the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
calculating the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first predetermined time interval and the second predetermined time interval, so as to form the spatial feature of the geometric center position;
translating the center of the first coordinate system to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, and calculating the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
calculating the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval as a first mean spatial diffusion rate;
calculating the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval as a second mean spatial diffusion rate;
calculating the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate;
determining, according to the spatial feature of the geometric center position and the spatial feature of the spatial diffusion rate, whether a preset event spatial feature is matched.
4. The method according to claim 1, characterized in that the reference motion edge graph in the foreground image of the video frame is obtained by the following method:
generating the background of the video frame;
extracting the foreground of the video frame based on the background of the video frame;
extracting the brightness information of each pixel of the foreground of the video frame as the motion feature factor of that pixel;
selecting the point of the video frame with the largest motion feature factor change rate as a first motion edge reference point, the motion feature factor change rate being the variation of the motion feature factor value between the foreground of the current frame and that of the next frame;
comparing the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared, selecting the pixel with the largest motion feature factor change rate as a second motion edge reference point, taking the second motion edge reference point as the new first motion edge reference point, and repeating the comparison step until all pixels of the video frame have been compared, each pixel of the video frame being compared only once, wherein the first and second motion edge reference points form the reference motion edge graph in the foreground of the video frame.
5. The method according to claim 4, characterized in that the brightness information is the product of the luma component in the YUV model and the lightness component in the HSL model.
6. An event analysis device based on moving object edge features, characterized by comprising:
a reference motion edge graph acquiring unit, for obtaining the reference motion edge graph in the foreground images of multiple video frames within a first predetermined time interval;
a reference motion edge graph calculating unit, for calculating the geometric center position, occupancy ratio and spatial diffusion rate of the reference motion edge graph of each of the multiple video frames within the first predetermined time interval, wherein the plane in which the reference motion edge graph lies is divided equally at a predetermined angle, with the geometric center position of the reference motion edge graph of the first video frame in the first predetermined time interval as the center, and the bisecting lines are taken as coordinate axes to form a first coordinate system, and the spatial diffusion rate is a vector composed of the absolute values of the sums of the projections, onto the corresponding coordinate axes of the first coordinate system, of the points constituting the reference motion edge graph;
a temporal feature calculating unit, for calculating, from the geometric center positions, occupancy ratios and spatial diffusion rates of the reference motion edge graphs of each pair of adjacent frames within the first predetermined time interval, the change rate of the geometric center position, the change rate of the occupancy ratio and the change rate of the spatial diffusion rate of the reference motion edge graphs of the multiple video frames within the first predetermined time interval, so as to form the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate;
an event determining unit, for determining, according to at least one of the temporal feature of the geometric center position, the temporal feature of the occupancy ratio and the temporal feature of the spatial diffusion rate, whether a preset event temporal feature is matched.
7. The device according to claim 6, characterized in that the occupancy ratio of the reference motion edge graph is the ratio of the number of pixels of the reference motion edge graph to the number of pixels in the region enclosed by the reference motion edge graph.
8. The device according to claim 6, characterized in that the device further comprises:
a second reference motion edge graph acquiring unit, for obtaining the reference motion edge graph of each of the multiple video frames within a second predetermined time interval;
a second reference motion edge graph calculating unit, for calculating the geometric center position of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval;
a geometric center position spatial feature calculating unit, for calculating the spatial variation between the geometric center positions of the reference motion edge graphs of the first video frames in the first predetermined time interval and the second predetermined time interval, so as to form the spatial feature of the geometric center position;
a spatial diffusion rate spatial feature calculating unit, for translating the center of the first coordinate system to the geometric center position of the reference motion edge graph of the first video frame in the second predetermined time interval to obtain a second coordinate system, calculating the spatial diffusion rate, in the second coordinate system, of the reference motion edge graph of each of the multiple video frames within the second predetermined time interval, calculating the mean of the spatial diffusion rates, in the first coordinate system, of the reference motion edge graphs of the multiple video frames within the first predetermined time interval as a first mean spatial diffusion rate, calculating the mean of the spatial diffusion rates, in the second coordinate system, of the reference motion edge graphs of the multiple video frames within the second predetermined time interval as a second mean spatial diffusion rate, and calculating the difference between the first mean spatial diffusion rate and the second mean spatial diffusion rate as the spatial variation of the spatial diffusion rate, so as to form the spatial feature of the spatial diffusion rate;
a second event determining unit, for determining, according to the spatial feature of the geometric center position and the spatial feature of the spatial diffusion rate, whether a preset event spatial feature is matched.
9. The device according to claim 6, characterized in that the reference motion edge graph acquiring unit comprises:
a background generating module, for generating the background of the video frame;
a foreground extracting module, for extracting the foreground of the video frame based on the background of the video frame;
a motion feature factor extracting module, for extracting the brightness information of each pixel of the foreground of the video frame as the motion feature factor of that pixel;
a reference motion edge graph acquiring module, for selecting the point of the video frame with the largest motion feature factor change rate as a first motion edge reference point, the motion feature factor change rate being the variation of the motion feature factor value between the foreground of the current frame and that of the next frame, comparing the motion feature factor change rates of those of the 8 pixels adjacent to the first motion edge reference point that have not yet been compared, selecting the pixel with the largest motion feature factor change rate as a second motion edge reference point, taking the second motion edge reference point as the new first motion edge reference point, and repeating the comparison step until all pixels of the video frame have been compared, each pixel of the video frame being compared only once, wherein the first and second motion edge reference points form the reference motion edge graph in the foreground of the video frame.
10. The device according to claim 9, characterized in that the brightness information is the product of the luma component in the YUV model and the lightness component in the HSL model.
CN201110433912.4A 2011-12-22 2011-12-22 Event analysis method and device based on moving object edge features Active CN103177453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110433912.4A CN103177453B (en) 2011-12-22 2011-12-22 Event analysis method and device based on moving object edge features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110433912.4A CN103177453B (en) 2011-12-22 2011-12-22 Event analysis method and device based on moving object edge features

Publications (2)

Publication Number Publication Date
CN103177453A CN103177453A (en) 2013-06-26
CN103177453B true CN103177453B (en) 2016-01-27

Family

ID=48637283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110433912.4A Active CN103177453B (en) 2011-12-22 2011-12-22 Event analysis method and device based on moving object edge features

Country Status (1)

Country Link
CN (1) CN103177453B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07298271A (en) * 1994-04-28 1995-11-10 Matsushita Electric Ind Co Ltd Method and device for detecting motion vector in motion image coding
KR100400375B1 (en) * 2001-06-27 2003-10-08 엘지전자 주식회사 Display Apparatus with the pseudo-contour noise detector using skin-color filter and Method of processing an image Thereof
CN101339688A (en) * 2008-08-27 2009-01-07 北京中星微电子有限公司 Intrusion checking method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07298271A (en) * 1994-04-28 1995-11-10 Matsushita Electric Ind Co Ltd Method and device for detecting motion vector in motion image coding
KR100400375B1 (en) * 2001-06-27 2003-10-08 엘지전자 주식회사 Display Apparatus with the pseudo-contour noise detector using skin-color filter and Method of processing an image Thereof
CN101339688A (en) * 2008-08-27 2009-01-07 北京中星微电子有限公司 Intrusion checking method and system

Also Published As

Publication number Publication date
CN103177453A (en) 2013-06-26

Similar Documents

Publication Publication Date Title
CN108257146B (en) Motion trail display method and device
Cheng et al. Depth enhanced saliency detection method
CN101777129B (en) Image matching method based on feature detection
CN103268480A (en) System and method for visual tracking
CN103679745B (en) A kind of moving target detecting method and device
CN102025981B (en) Method for detecting foreground in monitoring video
CN102760230B (en) Flame detection method based on multi-dimensional time domain characteristics
CN102194443A (en) Display method and system for window of video picture in picture and video processing equipment
CN103735269A (en) Height measurement method based on video multi-target tracking
CN106097366A (en) A kind of image processing method based on the Codebook foreground detection improved
CN103440667A (en) Automatic device for stably tracing moving targets under shielding states
CN104778697A (en) Three-dimensional tracking method and system based on fast positioning of image dimension and area
CN105469427A (en) Target tracking method applied to videos
CN111400423B (en) Smart city CIM three-dimensional vehicle pose modeling system based on multi-view geometry
CN102799646A (en) Multi-view video-oriented semantic object segmentation method
CN104898954B (en) A kind of interactive browsing method based on augmented reality
CN103065312B (en) Foreground extraction method in gesture tracking process
CN101877135B (en) Moving target detecting method based on background reconstruction
CN110322479B (en) Dual-core KCF target tracking method based on space-time significance
CN113034383A (en) Method for obtaining video image based on improved grid motion statistics
CN103177453B (en) A kind of affair analytical method based on moving object edge characteristics and device
CN105138979A (en) Method for detecting the head of moving human body based on stereo visual sense
Zhang et al. An Improved Computational Approach for Salient Region Detection.
CN113139539B (en) Method and device for detecting characters of arbitrary-shaped scene with asymptotic regression boundary
CN108564020A (en) Micro- gesture identification method based on panorama 3D rendering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant