CN104378582B - An intelligent video analysis system and method based on Pan/Tilt/Zoom camera cruising - Google Patents
- Publication number
- CN104378582B (application CN201310359688.8A)
- Authority
- CN
- China
- Prior art keywords
- cruise
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Closed-Circuit Television Systems (AREA)
Abstract
The invention provides an intelligent video analysis system and method based on Pan/Tilt/Zoom (PTZ) camera cruising. The system comprises a front-end PTZ camera and a back-end server, the back-end server comprising: a cruise configuration module, which sets up the system's cruise groups and cruise points and generates a cruise list; a PTZ camera control module, which analyzes the cruise list and automatically generates a cruise execution list; a video analysis configuration module, which configures the relevant intelligent video analysis algorithms for each cruise point and writes them into the cruise list; a system control module, which performs camera parameter calibration for each cruise point and calls a video stitching module that automatically generates a panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points; an intelligent video analysis module, which performs target detection and event analysis according to the relevant settings and raises real-time alarms for detected events; and an alarm management module, which applies the corresponding local management functions to the alarms.
Description
Technical field
The invention belongs to the fields of video surveillance, video analysis and pattern recognition, and more particularly relates to an intelligent video analysis system and method based on Pan/Tilt/Zoom (PTZ) camera cruising.
Background art
Video surveillance is an important component of security systems. With the development of video surveillance technology, cameras have been widely used to monitor various environments, regions and places in real time. Because a PTZ camera offers a variable viewing angle and variable focal length compared with a fixed camera, giving a larger monitored scene and a wider tracking range, it has come into increasingly wide use.
PTZ camera tracking uses image processing to detect targets and control the PTZ camera to locate, track and capture moving targets within a certain scene area. The technique can be applied in many fields, such as traffic monitoring, security monitoring of public places, and forest fire prevention. In current surveillance practice, however, each camera is typically responsible only for its own region; even though a PTZ camera can move, its monitoring range remains very limited. During tracking, the three PTZ variables depend entirely on feedback from the tracking algorithm, which makes accurate control of the PTZ camera difficult; moreover, current automatic PTZ tracking algorithms cope poorly with small targets and varied environments, so they cannot yet be popularized on a large scale.
Target detection and tracking algorithms based on fixed cameras are relatively mature, but the field of view of a single camera is limited, and covering a larger monitored area requires multiple cameras. Practical target tracking systems therefore tend to use multi-camera setups, which inevitably raise the cost of the system or, under a fixed budget, force a reduction in camera quality. In addition, because each camera's scene is fixed and the field of view must be wide enough to provide coverage, intelligent video analysis algorithms generally cannot detect fine details of a target, such as the face of a tracked person or the license plate of a tracked vehicle.
From the foregoing it can be seen that neither the current automatic tracking systems based on PTZ cameras nor the target detection and tracking systems based on fixed cameras are free of significant problems. They cannot meet the demand for video surveillance systems that achieve wider coverage with as few cameras as possible while delivering more accurate intelligent video analysis. How to fully exploit the characteristics of the PTZ camera, and how to architect an intelligent video analysis system around those characteristics, is the key problem the invention sets out to solve.
Summary of the invention
In order to solve the above problems, the invention provides an intelligent video analysis system based on PTZ camera cruising. The system comprises a front-end PTZ camera and a back-end server, the back-end server comprising: a cruise configuration module, which sets up the system's cruise groups and cruise points and generates a cruise list, each cruise point of a cruise group corresponding to one preset position of the PTZ camera, and which configures a cruise mode and cruise time for the cruise points of each cruise group; a PTZ camera control module, which analyzes the cruise list, automatically generates a cruise execution list, and makes the PTZ camera cruise among the preset positions in the preset cruise order; a video analysis configuration module, which configures the relevant intelligent video analysis algorithms for each cruise point and writes them into the cruise list; a system control module, which performs the camera start-up function of the relevant cruise point and the algorithm configuration of each cruise point in the configured list so as to enable that cruise point's video analysis algorithms, performs camera parameter calibration for each cruise point, and calls a video stitching module that automatically generates a panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points; an intelligent video analysis module, which performs target detection and event analysis according to the relevant settings and raises real-time alarms for detected events; and an alarm management module, which applies the corresponding local management functions to the alarms.
Camera parameter calibration means back-calculating all or part of the parameters of the projection matrix P from a given reference object. After calibration, the position of a target in three-dimensional space can be obtained from its two-dimensional coordinates in the image captured by the camera together with the obtained projection matrix P.
The intelligent video analysis module further comprises: an image preprocessing module, which applies an adaptive fast wavelet-transform image denoising algorithm to filter noise and perform gray-scale transformation on the image; a target detection module, which performs moving target detection, target feature extraction, pedestrian/vehicle detection, and face/license-plate detection and localization, and runs a target recognition algorithm on the extracted features; a target tracking module, which tracks targets with a bidirectional optical flow method; a target feature extraction module, which builds, for each target detected in the previous frame, a joint histogram template combining color features and HOG gradient features; a feature detection and matching module, which searches for a match in the current frame by comparing Bhattacharyya distances, i.e. it searches within a certain radius around the target's position in the previous frame and takes the best-matching position as the target's likely position in the current frame; and an event detection module, which judges whether an event has occurred based on changes in the detected target positions.
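The neighborhood search with Bhattacharyya-distance comparison described above can be sketched in Python/NumPy. For brevity the joint color+HOG histogram is replaced here with a plain gray-level histogram; the comparison and search logic are the same, and all function names, window sizes and search radii are illustrative choices, not taken from the patent.

```python
import numpy as np

def joint_histogram(patch, bins=8):
    """Normalized gray-level histogram of a patch (simplified stand-in
    for the color+HOG joint histogram template)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two normalized histograms:
    d = sqrt(1 - sum_i sqrt(p_i * q_i)); 0 means identical."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return np.sqrt(max(1.0 - bc, 0.0))

def search_best_match(frame, template_hist, center, half=8, radius=4, bins=8):
    """Scan a (2*radius+1)^2 neighborhood around the previous target
    position; return the center whose patch histogram is closest."""
    best, best_d = center, np.inf
    cy, cx = center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            patch = frame[y - half:y + half, x - half:x + half]
            d = bhattacharyya_distance(template_hist, joint_histogram(patch, bins))
            if d < best_d:
                best_d, best = d, (y, x)
    return best, best_d
```

With a bright square at a known position, the search recovers the square's center even when started a few pixels away, which is the per-frame re-localization step the module performs.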
The target detection module detects targets by combining three techniques: frame differencing, image transformation, and a mixture-of-Gaussians probabilistic model.
The pedestrian/vehicle detection extracts targets of a specified type with a method that combines relative motion in the optical flow field with HOG+SVM model training.
The video stitching module is further configured to extract and match feature points, specifically: the interest value of a pixel is the minimum gray-level variance along the four principal directions through the pixel, which characterizes the gray-level variation between the pixel and its neighbors; within each local region of the image, the point with the maximum interest value is selected as a feature point. Four regions are chosen in the overlapping part of the reference image, and a feature point is found in each region with the Moravec operator. A region of fixed size centered on each feature point is taken, and the most similar match is found in the search image; the center points of the matched feature regions are substituted into the resulting system of equations, whose solution is the transformation coefficient M between the two images.
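The Moravec interest measure used for feature-point selection can be sketched as follows, assuming the common formulation (minimum, over the four principal directions, of the summed squared gray-level differences along a small window); the window size and function names are illustrative, not from the patent.

```python
import numpy as np

def moravec_interest(img, y, x):
    """Moravec interest value: the minimum sum of squared gray-level
    differences over the four principal directions (horizontal,
    vertical and the two diagonals) through pixel (y, x)."""
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    vals = []
    for dy, dx in dirs:
        s = 0.0
        for k in range(-2, 2):   # four adjacent pixel pairs along the direction
            a = img[y + k * dy, x + k * dx]
            b = img[y + (k + 1) * dy, x + (k + 1) * dx]
            s += (float(a) - float(b)) ** 2
        vals.append(s)
    return min(vals)

def best_feature_point(img, border=3):
    """Return the pixel with the maximum interest value: a candidate
    feature point for matching between overlapping images."""
    h, w = img.shape
    best, best_v = None, -1.0
    for y in range(border, h - border):
        for x in range(border, w - border):
            v = moravec_interest(img, y, x)
            if v > best_v:
                best_v, best = v, (y, x)
    return best, best_v
```

Taking the minimum over directions is what suppresses edges: along an edge one direction shows no variation, so only corner-like points score highly.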
The invention also provides an intelligent video analysis method based on PTZ camera cruising, comprising:
Step (1): first, the system's cruise groups and cruise points are set and a cruise list is generated; each cruise point of a cruise group corresponds to one preset position of the PTZ camera, and a cruise mode and cruise time are configured for the cruise points of each cruise group;
Step (2): by calling the preset positions, the PTZ camera is moved to each cruise point in turn; the system control module performs camera parameter calibration on the current scene for each cruise point, and the relevant intelligent video analysis algorithms configured through the video analysis configuration module are added to the cruise list;
Step (3): after the system is started, the PTZ camera control module automatically generates a cruise execution list by analyzing the cruise list, and makes the PTZ camera cruise among the preset positions in the preset cruise order;
Step (4): the system control module calls the video stitching module, which automatically generates a panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points;
Step (5): the intelligent video analysis module performs target detection and event analysis according to the relevant settings and raises real-time alarms for detected events.
In step (2), camera parameter calibration back-calculates all or part of the parameters of the projection matrix P from a given reference object; after calibration, the position of a target in three-dimensional space can be obtained from its two-dimensional coordinates in the captured image together with the obtained projection matrix P.
Step (5) further comprises: filtering noise and performing gray-scale transformation on the image with an adaptive fast wavelet-transform denoising algorithm; performing moving target detection, target feature extraction, pedestrian/vehicle detection, and face/license-plate detection and localization, and running a target recognition algorithm on the extracted features; tracking targets with a bidirectional optical flow method; building, for each target detected in the previous frame, a joint histogram template combining color features and HOG gradient features; searching for a match in the current frame by comparing Bhattacharyya distances, i.e. searching within a certain radius around the target's position in the previous frame and taking the best-matching position as the target's likely position in the current frame; and judging whether an event has occurred based on changes in the detected target positions.
The target detection combines frame differencing, image transformation and a mixture-of-Gaussians probabilistic model.
The pedestrian/vehicle detection extracts targets of a specified type with a method that combines relative motion in the optical flow field with HOG+SVM model training.
The video stitching specifically comprises: taking the minimum gray-level variance along the four principal directions through a pixel as that pixel's interest value, which characterizes the gray-level variation between the pixel and its neighbors; selecting the point with the maximum interest value in each local region of the image as a feature point; choosing four regions in the overlapping part of the reference image and finding a feature point in each with the Moravec operator; taking a region of fixed size centered on each feature point and finding the most similar match in the search image; and substituting the center points of the matched feature regions into the resulting system of equations, whose solution is the transformation coefficient M between the two images.
Brief description of the drawings
Fig. 1 is a block diagram of the analysis system according to the invention;
Fig. 2 is a functional diagram of the cruise configuration module of the analysis system according to the invention;
Fig. 3 is a structural diagram of the intelligent video analysis module of the analysis system according to the invention;
Fig. 4 is a module diagram describing the calls made by the system control module of the analysis system according to the invention;
Fig. 5 is a schematic diagram of the image coordinate system, camera coordinate system and world coordinate system.
Embodiment
To make the above objects, features and advantages of the invention clearer and easier to understand, the invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
The invention employs a PTZ camera as the system's front-end acquisition device and develops video analysis algorithms around the PTZ camera's 360-degree cruising and preset-position fixed-point cruising functions to perform target detection and tracking, so that a single camera can monitor a larger field of view and, through the automatic target detection and tracking of the video analysis algorithms, monitor several regions simultaneously and detect and recognize targets automatically over a wider area. In practical applications this both saves deployment cost and fulfils the demand for automatic target detection, tracking and recognition; it carries substantial theoretical and application innovation, as well as great social and economic benefit.
The invention provides an intelligent video analysis system based on PTZ camera cruising, consisting mainly of a front-end PTZ camera and a back-end server. The specific configuration proceeds in the following steps:
Step 1: first, the system's cruise groups and cruise points are set, and the system generates the cruise list. Each cruise point of a cruise group corresponds to one preset position of the PTZ camera.
Step 2: a cruise mode and cruise time are configured for the cruise points of each cruise group.
1. If a cruise point is configured in 360-degree automatic cruise mode, the direction of horizontal (P) rotation, the cruise time and the cruise speed level must be set. All preset positions to be monitored are set in turn, and the system automatically writes them into the cruise list according to the configuration information.
2. If a cruise point is configured in fixed-point cruise mode, the cruise time must be set. The PTZ camera control module automatically writes it into the cruise list according to the configuration information.
Step 3: by calling the preset positions, the PTZ camera is moved to each cruise point in turn; the system control module performs camera parameter calibration on the current scene for each cruise point, and the relevant intelligent video analysis algorithms are configured through the video analysis configuration module, including behavior detection, vehicle detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection; the corresponding configuration is added to the cruise list.
Step 4: after the system is started, the PTZ camera control module automatically generates a cruise execution list by analyzing the cruise list, making the PTZ camera cruise among the preset positions in the preset cruise order. The system control module then calls the video stitching module, which automatically generates a panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points: overlapping regions are stitched automatically, and non-overlapping regions are joined in sequential order.
Step 5: the intelligent video analysis module performs target detection and event analysis according to the relevant settings and raises real-time alarms for detected events; the alarm management module applies the corresponding local management functions, such as recording, snapshotting and pop-up display, and uploads the alarm information to the monitoring center over the network so that the center can analyze and handle it.
In step 1, cruise groups and cruise points are divided and configured according to the need to run different intelligent video analysis algorithms in different time periods and different scenes, and the system automatically generates the cruise list.
In step 2, each cruise point can be freely configured in 360-degree automatic cruise mode or fixed-point cruise mode, and the PTZ camera control module automatically writes the configuration into the cruise list.
In step 2, the fixed-point cruise mode requires controlling the PTZ camera to set the cruise points in the scene and setting the cruise order and dwell time of the cruise points; the PTZ camera control module automatically writes the configuration information into the cruise list.
In step 3, the camera parameters of each cruise point are calibrated using people at different positions in the scene.
In step 3, the relevant intelligent video analysis algorithms configured for each cruise point include behavior detection, vehicle detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection.
In step 4, after the cruise parameters and intelligent video analysis algorithms of all cruise points have been configured, the system is started and made to cruise among the cruise points in the preset cruise order. From the configured list, the system generates the PTZ camera control sequence over time together with the calling sequence of the intelligent video analysis algorithms of each cruise point.
In step 4, the cruise mode of each cruise point is controlled by the PTZ camera control module on the server side according to the cruise point configuration; in particular, 360-degree automatic cruising is realized by rotating in a fixed direction at a certain horizontal rotation speed P.
In step 4, the system automatically stitches the video images according to the order of the cruise points to produce a panoramic display of the cruise group. In particular, for 360-degree rotating panoramic mosaics, overlapping regions of the pictures are stitched automatically and non-overlapping regions are joined in sequential order.
In step 5, alarms and the corresponding recordings and snapshots are stored locally and uploaded to the monitoring center so that the center can analyze and handle them.
The invention provides an intelligent video analysis system based on PTZ camera cruising. The preset positions of the PTZ camera are managed and called by the system according to the configured cruise groups and cruise points; 360-degree cruising and fixed-point cruising can be performed at the cruise points, and combined with the deployment of intelligent video analysis algorithms, a single camera achieves automatic target detection and event alarming over a larger area, with wide-range video surveillance and intelligent video analysis capability. In particular, for video analysis under camera motion, pattern recognition algorithms are applied to detect specific targets, and the calibration between the camera and the scene determines the shape, size and speed of the target, greatly increasing the precision of target detection. The 360-degree cruise detection of the PTZ camera also yields a panoramic image that indicates the positions of targets and events in the scene more clearly, which facilitates practical use. By combining the system's cruise functions with the intelligent video analysis algorithms, behavior detection, vehicle detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection can all be realized.
The software and hardware architecture of the system is shown in Fig. 1. It comprises a front-end PTZ camera and a back-end server; deployed on the back-end server are a cruise configuration module, a PTZ camera control module, a video analysis configuration module, a system control module, an intelligent video analysis module and an alarm management module. By combining the cruise function and preset-position function of the PTZ camera, cruise groups and cruise points are configured, and intelligent video analysis algorithms are deployed for each cruise point, giving a single camera wide-range video surveillance and intelligent video analysis capability. This is a significant architectural innovation: the architecture is simpler than multi-camera surveillance, lowers the deployment and maintenance cost of the system, and has great economic and social value.
The implementation steps of the invention are described in further detail below with reference to the accompanying drawings:
Step 1: as shown in Fig. 2, the cruise configuration of the system is performed first. Cruise groups are divided according to the time periods of system monitoring, preset positions are set for the scenes to be monitored, and the preset positions are added to the corresponding cruise groups; these preset positions are called cruise points. For example, the time period 8:00-12:00 is set as cruise group 1 and the time period 12:00-18:00 as cruise group 2. The PTZ camera is moved to positions A, B, C, D, E and F to set preset positions, denoted A, B, C, D, E and F. Through cruise configuration, preset positions A, B, C and D are added to cruise group 1 as four cruise points, and preset positions D, E and F are added to cruise group 2 as three cruise points; the system automatically generates the cruise group list from this configuration.
Step 2: as shown in Fig. 2, the cruise parameters of each cruise group's cruise points are configured in turn, and the configuration information is synchronized into the cruise list; that is, the cruise parameters of the PTZ camera at each cruise point are set, including the cruise mode and cruise time. For example, cruise point A of the first group is set to automatic cruise mode with leftward rotation, a cruise time of 5 minutes and cruise speed level 3; cruise point B is set to fixed-point cruise mode with a cruise time of 5 minutes.
Step 3: as shown in Fig. 2, by calling the preset position corresponding to each cruise point, the PTZ camera is moved to that cruise point; camera parameter calibration is performed for the scene of the current cruise point, and the relevant intelligent video analysis algorithms are configured for it, including behavior detection, vehicle detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection. Finally the scene calibration and the configuration information of the intelligent analysis algorithms are associated with the cruise point configuration and synchronized into the cruise list.
Camera parameter calibration method:
The head and foot point pairs of pedestrians at different positions in the scene to be calibrated are extracted from a segment of video. These point pairs form a group of line segments perpendicular to the scene's ground, from which the vertical vanishing point and the horizon line can be computed. If the lengths of a group of mutually orthogonal line segments on the ground are known, these orthogonal segments can serve as the other two axes of the three-dimensional coordinate system, and the two other vanishing points on the horizon line along these axes can be computed. From the three orthogonal vanishing points and the computed principal point coordinates of the camera, the intrinsic and extrinsic parameters of the camera can be calculated; the projection matrix of the camera is then computed, completing the camera parameter calibration.
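The vertical vanishing point step can be illustrated with homogeneous coordinates: each head-foot pair defines an image line (the cross product of the two points), and the common intersection of all such lines is found by least squares. This is a generic sketch of the computation, not the patent's implementation.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(segments):
    """Least-squares intersection of (head, foot) line segments,
    i.e. the vertical vanishing point. Solves A v = 0 via SVD."""
    A = np.array([line_through(h, f) for h, f in segments], dtype=float)
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]                                   # null vector of A
    return v[:2] / v[2] if abs(v[2]) > 1e-12 else v[:2]  # may lie at infinity
```

With three or more pedestrians the system is overdetermined, so noisy head/foot detections average out in the SVD solution.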
Fig. 5 illustrates the spatial relation of the image coordinate system, the camera coordinate system and the world coordinate system. The image coordinate system (o0uv) and the plane coordinate system (O1XY) are both two-dimensional and lie in approximately the same plane, differing in that the origin of the image coordinate system is at the top-left corner while the origin of the camera coordinate system is at the center of the image. The world coordinate system is three-dimensional: a point P(x, y, z) in real space is imaged to the point p(X, Y) in the camera coordinate system.
The pinhole imaging model is the typical linear model. By similar triangles, a space point P(x, y, z) images to the point p(X, Y) of the camera coordinate system as:

X = f·x/z,  Y = f·y/z

where f is the camera focal length. Combining with the image coordinate system, this can be written in homogeneous coordinates as:

s·[u, v, 1]^T = M1·M2·[x, y, z, 1]^T,
M1 = [αx 0 u0 0; 0 αy v0 0; 0 0 1 0],  M2 = [R t; 0^T 1]

where s is a scale factor; αx = f/dX is the scale factor on the u axis; αy = f/dY is the scale factor on the v axis; (u0, v0) is the position of the camera coordinate system origin in the image coordinate system; and R and t are the rotation matrix and translation vector between the camera coordinate system and the world coordinate system.
The parameters αx, αy, u0 and v0 of matrix M1 depend only on the camera's internal structure, so they are called the camera's intrinsic parameters; the parameters R and t of M2 are determined by the camera's orientation relative to the world coordinate system, so they are called the camera's extrinsic parameters. The camera parameter calibration process can then be converted into solving for these parameters: calibration back-calculates all or part of the parameters of the projection matrix P from a given reference object. After calibration, the position of a target in three-dimensional space can be obtained from its two-dimensional coordinates in the captured image together with the obtained projection matrix P.
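As a worked illustration of using the calibrated projection matrix P: recovering a target's 3-D position from a single image point requires an extra constraint. The sketch below assumes the target's foot point lies on the ground plane z = 0, which reduces P to an invertible 3x3 homography; the matrix entries are arbitrary test numbers, not calibration results from the patent.

```python
import numpy as np

def project(P, X):
    """Project a 3-D world point X = (x, y, z) with the 3x4 matrix P."""
    u = P @ np.append(X, 1.0)
    return u[:2] / u[2]

def backproject_to_ground(P, uv):
    """Recover the world position of an image point assuming it lies on
    the ground plane z = 0 (e.g. a target's foot point): with z = 0 the
    projection reduces to the homography H = [p1 p2 p4]."""
    H = P[:, [0, 1, 3]]                  # drop the z column of P
    w = np.linalg.solve(H, np.array([uv[0], uv[1], 1.0]))
    return w[:2] / w[2]                  # (x, y) on the ground plane
```

This is how calibrated position information lets the system attach real-world coordinates (and hence sizes and speeds) to detected targets.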
The flow of the intelligent video analysis algorithm module is shown in Fig. 3:
Image preprocessing module: the captured real-time video is inevitably affected by light, rain, snow, fog and system interference, so the image suffers a certain amount of blur and noise. Noise filtering and gray-scale transformation must therefore be performed first. The invention preprocesses the image with an adaptive fast wavelet-transform denoising algorithm.
Target detection module: after the image has been denoised, the target detection module is invoked according to the intelligent video analysis algorithm configuration of the preset position; it performs moving target detection, target feature extraction, pedestrian/vehicle detection, and face/license-plate detection and localization, and runs a target recognition algorithm on the extracted features.
For the static background of the fixed-point cruise mode, the invention proposes a background modeling method based on transform-domain images, detecting targets with a fusion of frame differencing, image transformation (the emboss transform) and a mixture-of-Gaussians probabilistic model. Frame differencing, the emboss transform and Gaussian mixture modeling each have a certain degree of illumination adaptability; combining the three for background modeling further strengthens the algorithm's adaptability to complex conditions such as changing light, and extracts the moving targets of the scene more completely.
1. The frame difference can be taken between adjacent frames or between frames several apart. This method adapts well to scene changes and is robust to illumination variation and noise:
F(x, y) = |I_n(x, y) − I_(n−i)(x, y)|

where I_n(x, y) is the gray value at point (x, y) at time n, I_(n−i)(x, y) is the gray value at coordinates (x, y) in frame n−i, and i typically takes 3 to 5. The emboss transform also gives the image a certain robustness to illumination variation; the emboss algorithm applies a convolution to each point of the image, so that for the point at coordinates (i, j) the emboss image is:

Y(i, j) = X(i−1, j−1) − X(i−1, j+1) + 128

where X(i, j) and Y(i, j) are the original pixel value and the transformed pixel value at coordinate point (i, j), respectively.
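The two transforms above translate directly into NumPy. The emboss indexing follows the formula Y(i, j) = X(i−1, j−1) − X(i−1, j+1) + 128; leaving border pixels at the neutral value 128 is an assumed convention, since the text does not define the border behavior.

```python
import numpy as np

def frame_difference(cur, prev):
    """F(x, y) = |I_n(x, y) - I_(n-i)(x, y)| between two frames."""
    return np.abs(cur.astype(int) - prev.astype(int))

def emboss(img):
    """Emboss (relief) transform from the text:
    Y(i, j) = X(i-1, j-1) - X(i-1, j+1) + 128, clipped to [0, 255]."""
    x = img.astype(int)
    y = np.full_like(x, 128)                     # neutral border value
    y[1:, 1:-1] = x[:-1, :-2] - x[:-1, 2:] + 128
    return np.clip(y, 0, 255)
```

A flat region embosses to the neutral gray 128, so only edges survive the transform; that is what makes it useful as an illumination-insensitive input to the background model.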
2. The frame difference image, the emboss-transformed image and the original gray image are combined as input source images for mixture-of-Gaussians background modeling; a probabilistic model is built and foreground targets are detected. The method has a solid theoretical foundation, allows prior knowledge to be incorporated, and gives good detection results.
The basic idea of mixture-of-Gaussians background modeling is to represent the color presented by each pixel with K states, where K is usually taken between 3 and 5. The pixel value of the video image obtained at each moment T is a sample of the random variable X. Each Gaussian model has three parameters: mean μk, variance σk and weight ωk, with 1 ≤ k ≤ K.
The K distribution weights at moment t are updated with:

ω_(k,t) = (1 − α)·ω_(k,t−1) + α·M_k(x, y)

and the matched model is updated with:

μ_t = (1 − ρ)·μ_(t−1) + ρ·X_t
σ_t² = (1 − ρ)·σ_(t−1)² + ρ·(X_t − μ_t)²

where α is the learning rate, 0 < α < 1, 1 ≤ k ≤ K, and ρ is the corresponding parameter update rate; when the first model satisfying the matching condition is k, M_k(x, y) = 1, otherwise M_k(x, y) = 0.
When the model number of a pixel is k, and k>When 1, to this k model, according to priority size is ranked up, excellent
First level calculation formula isIn matching, matched since the maximum model of priority, if first meets matching bar
The model of part is k, then k is the Matching Model that this puts this moment, it is not necessary to the Model Matching small with priority ratio k again.
Context update and study by limited frame, set up a background model.
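The per-pixel weight and parameter updates described above can be sketched as follows (a simplified, hypothetical single-pixel implementation of the standard mixed-Gaussian update, consistent with the learning rate α and indicator M_k(x,y) in the text; the full K-model bookkeeping of a real system is more involved):

```python
import numpy as np

def gmm_update(pixel, mu, sigma2, w, alpha=0.05, match_thresh=2.5):
    """One mixed-Gaussian update step for a single pixel value.

    mu, sigma2, w are length-K arrays (means, variances, weights).
    A model matches when |pixel - mu_k| < match_thresh * sigma_k; models are
    tried in order of the priority w_k / sigma_k described in the text.
    """
    order = np.argsort(-(w / np.sqrt(sigma2)))          # priority = w_k / sigma_k
    matched = next((k for k in order
                    if abs(pixel - mu[k]) < match_thresh * np.sqrt(sigma2[k])), None)
    M = np.zeros_like(w)                                # indicator M_k(x, y)
    if matched is None:
        k = order[-1]                                   # replace lowest-priority model
        mu[k], sigma2[k] = pixel, sigma2.max()
    else:
        M[matched] = 1.0
        mu[matched] = (1 - alpha) * mu[matched] + alpha * pixel
        sigma2[matched] = (1 - alpha) * sigma2[matched] + alpha * (pixel - mu[matched]) ** 2
    w[:] = (1 - alpha) * w + alpha * M                  # weight update formula
    w /= w.sum()                                        # renormalize weights
    return mu, sigma2, w
```

Repeating this update per pixel over a limited number of frames builds up the background model; pixels whose values match no high-weight model are foreground.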
Priority guarding: for the fixed targets at the cruise point locations, the present invention compares the edges and gradient changes between two successive detections to determine the degree of change; an alarm is produced when the degree of change between the two detections exceeds a set threshold.
The present invention applies the Canny convolution operator to the image and filters out most non-edge points using a local-maximum strategy.

The first-order partial derivatives in the x and y directions, the gradient magnitude, and the gradient direction are given by:

P[i,j] = (f[i,j+1] - f[i,j] + f[i+1,j+1] - f[i+1,j]) / 2
Q[i,j] = (f[i,j] - f[i+1,j] + f[i,j+1] - f[i+1,j+1]) / 2
M[i,j] = sqrt(P[i,j]² + Q[i,j]²)
θ[i,j] = arctan(Q[i,j] / P[i,j])

where M[i,j] denotes the gradient magnitude of the image at coordinate [i,j] and θ[i,j] denotes the gradient direction at coordinate [i,j].
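The partial derivatives, gradient magnitude, and gradient direction above can be computed in vectorized form with NumPy (a sketch; the array names are illustrative):

```python
import numpy as np

def canny_gradients(f):
    """P, Q over 2x2 neighborhoods, plus gradient magnitude and direction."""
    f = f.astype(np.float64)
    # P[i,j] = (f[i,j+1] - f[i,j] + f[i+1,j+1] - f[i+1,j]) / 2
    P = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2.0
    # Q[i,j] = (f[i,j] - f[i+1,j] + f[i,j+1] - f[i+1,j+1]) / 2
    Q = (f[:-1, :-1] - f[1:, :-1] + f[:-1, 1:] - f[1:, 1:]) / 2.0
    M = np.hypot(P, Q)            # gradient magnitude
    theta = np.arctan2(Q, P)      # gradient direction (robust form of arctan(Q/P))
    return P, Q, M, theta
```

Non-maximum suppression along θ then keeps only the local maxima of M, discarding most non-edge points.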
Pedestrian/vehicle detection: for the 360-degree cruise mode, the present invention proposes a method combining optical-flow-field relative motion with Hog+SVM model training to extract targets of a specified type. It achieves good detection results for pedestrians and vehicles and has been widely applied in unattended-site projects. HOG is short for Histogram of Oriented Gradients, a feature descriptor used for object detection in computer vision and image processing. It forms features by computing and accumulating gradient orientation histograms over local regions of the image. Its main idea is that the appearance and shape of a local target in an image can be well described by the density distribution of gradients or edge directions. The concrete implementation is: first divide the image into small connected regions, which we call cell units; then accumulate the gradient or edge orientation histogram of each pixel within a cell unit; finally, these histograms together constitute the feature descriptor.
SVM is short for Support Vector Machine, first proposed by Corinna Cortes and Vapnik in 1995. It shows many unique advantages in solving small-sample, nonlinear, and high-dimensional pattern recognition problems, and can be extended to other machine learning problems such as function fitting. The support vector machine method is built on the VC-dimension theory and the structural risk minimization principle of statistical learning theory; based on limited sample information, it seeks the best trade-off between model complexity (i.e., the learning accuracy on the given training samples) and learning ability (the ability to recognize arbitrary samples without error), so as to obtain the best generalization ability.
Hog+SVM training and detection are divided into the following steps:
1) Collect training samples, including a large number of positive and negative samples. Crop the samples manually and uniformly scale them to a fixed size.
2) Extract the Hog features of all positive and negative samples respectively.
3) Assign sample labels to all samples: positive samples are labeled 1, negative samples are labeled -1.
4) Feed the Hog features of the positive and negative samples, together with their labels, into a linear SVM classifier for training.
5) Use the trained classifier to detect targets in the scene.
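The five steps above can be sketched as follows, assuming scikit-learn's LinearSVC for step 4 and a deliberately minimal, hypothetical HOG extractor (a production system would use a full HOG implementation with block normalization and a sliding-window detector for step 5):

```python
import numpy as np
from sklearn.svm import LinearSVC

def hog_features(img, cell=8, bins=9):
    """Minimal HOG: per-cell histogram of gradient orientations, magnitude-weighted."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180            # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i+cell, j:j+cell].ravel()
            m = mag[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))   # normalize per cell
    return np.concatenate(feats)

# Steps 1-3: fixed-size samples (here synthetic: vertical-edge positives, noise
# negatives), feature extraction, labels +1 / -1.
rng = np.random.default_rng(0)
edge = np.outer(np.ones(16), np.r_[np.zeros(8), np.ones(8)]) * 255
pos = [edge + rng.normal(0, 5, (16, 16)) for _ in range(20)]
neg = [rng.normal(128, 5, (16, 16)) for _ in range(20)]
X = np.array([hog_features(s) for s in pos + neg])
y = np.array([1] * 20 + [-1] * 20)

clf = LinearSVC(C=1.0).fit(X, y)    # step 4: linear SVM training
# Step 5: clf.decision_function(hog_features(window)) scores scene windows.
```

The synthetic data is only to make the sketch self-contained; in practice the positives and negatives are cropped pedestrian/vehicle and background patches.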
Two-way optical flow method: the optical flow concept derives from the optical flow field; the motion pattern of a moving object's surface in a video is the so-called optical flow field, a two-dimensional velocity field. Let I(x,y,t) be the pixel value of image point (x,y) at moment t, and let u(x,y) and v(x,y) be the x and y components of the optical flow. Assuming the pixel value stays constant as the point moves to (x+δx, y+δy) at time t+δt, with δx = uδt and δy = vδt, the optical flow equation is:

I(x + uδt, y + vδt, t + δt) = I(x, y, t)

The optical flow method uses the temporal change of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and from it computes the motion information of objects between adjacent frames. In essence, it infers the moving speed and direction of objects from the temporal change of image pixel intensities.

Computing the two-way optical flow field on the candidate targets output by the Hog+SVM classifier allows people, vehicles, and other objects in relative motion in the scene to be detected more accurately, improving detection precision and reducing false alarms.
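A forward-backward ("two-way") consistency check of this kind can be sketched with simple block matching standing in for a full optical-flow solver (a hypothetical illustration of the idea, not the patent's method):

```python
import numpy as np

def match_block(src, dst, cy, cx, half=4, search=5):
    """Displacement of the block around (cy, cx) in src, by exhaustive SSD search in dst."""
    ref = src[cy - half:cy + half, cx - half:cx + half].astype(np.float64)
    best, best_d = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = dst[cy + dy - half:cy + dy + half, cx + dx - half:cx + dx + half]
            ssd = np.sum((ref - cand) ** 2)
            if ssd < best:
                best, best_d = ssd, (dy, dx)
    return best_d

def bidirectional_flow(prev, cur, cy, cx):
    """Forward then backward displacement; a consistent track has fb_error near zero."""
    dy, dx = match_block(prev, cur, cy, cx)               # forward flow
    by, bx = match_block(cur, prev, cy + dy, cx + dx)     # backward flow
    fb_error = abs(dy + by) + abs(dx + bx)                # forward-backward consistency
    return (dy, dx), fb_error
```

Candidates whose forward and backward displacements disagree (large fb_error) are likely false detections and can be discarded, which is the sense in which the two-way computation reduces false alarms.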
Target tracking module: target tracking analyzes the detected targets of interest in the time domain, obtaining the targets' motion state parameters such as position changes, motion trajectories, and spatial features, so that the next stage of processing and analysis, such as behavior analysis, can be carried out. The present invention performs target tracking using the two-way optical flow method described above, which effectively exploits the temporal and spatial information of moving targets, making tracking more accurate and stable and, to a certain extent, solving the problems of collision, separation, and occlusion between targets in the field of view.

By comparison, model-based methods are generally free of occlusion problems, but it is difficult to build a general template (such as a deformable template), and how to define the matching measure so that tracking is more accurate is also a major difficulty.
Target feature extraction module: for the target detected in the previous frame, a joint histogram template based on color and HOG features is built; this joint histogram combines color features with HOG gradient features and can describe the target's characteristic information fairly completely.

Feature detection matching module: the current frame is scanned for a match, compared using the Bhattacharyya distance (a measure of histogram dissimilarity); that is, matching is searched within a certain radius around the target position of the previous frame (in our practice a radius of 20 pixels works relatively well), and the best matching position found is the likely position of the target in the current frame.
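The Bhattacharyya-distance comparison can be sketched as follows (histogram construction is assumed done elsewhere; the function names are illustrative):

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Distance between two histograms: sqrt(1 - sum(sqrt(p*q))) on normalized inputs.
    0 means identical distributions, 1 means no overlap at all."""
    p = h1 / (h1.sum() + 1e-12)
    q = h2 / (h2.sum() + 1e-12)
    bc = np.sum(np.sqrt(p * q))                 # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def best_match(target_hist, candidate_hists):
    """Pick the candidate window whose joint histogram is closest to the target's."""
    dists = [bhattacharyya_distance(target_hist, h) for h in candidate_hists]
    return int(np.argmin(dists)), min(dists)
```

In the matching module, candidate_hists would be the joint color+HOG histograms of windows within the search radius around the previous target position.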
Event detection module: judges whether an event has occurred according to the change of the detected target's position.
Step 4: As shown in Fig. 4, after the cruise parameters and intelligent video analysis algorithms of all cruise points have been configured, the system is started; by analyzing the cruise control list, a cruise execution list is automatically generated, making the PTZ camera carry out cruise detection between the cruise points in the preset cruise order. The system also calls the video stitching module to automatically generate the panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points: regions with overlap are stitched automatically, while non-overlapping regions are stitched in order of acquisition.
Automatic video stitching algorithm: feature-point-based image stitching, as the core technology of video stitching, is broadly divided into two steps: feature point extraction and feature point matching.

The present invention extracts feature points using the Moravec operator. Its basic idea is to represent the interest value of a pixel, i.e., the gray-level variation between the pixel and its neighboring points, by the minimal gray variance over the pixel's four main directions, and then to select as feature points the local points of the image with maximal interest value (points with obvious gray-level change). The variances over the four directions are:

V1 = Σ (g_{c+i,r} - g_{c+i+1,r})²
V2 = Σ (g_{c+i,r+i} - g_{c+i+1,r+i+1})²
V3 = Σ (g_{c,r+i} - g_{c,r+i+1})²
V4 = Σ (g_{c+i,r-i} - g_{c+i+1,r-i-1})²

where g_{c+i,r} represents the gray value of the image at coordinate [c+i, r], and so on. The minimum of these is taken as the interest value IV(c,r) of the pixel:

IV(c,r) = V = min{V1, V2, V3, V4}

According to a given threshold, points whose interest value exceeds the threshold are taken as candidate feature points. Let V_T be the preset threshold; if V > V_T, then the point is a feature candidate point. The local maxima among the candidate points are chosen as the required feature points.
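The four-direction interest value can be sketched directly from the formulas above (the window half-width k is an assumed parameter):

```python
import numpy as np

def moravec_interest(img, c, r, k=2):
    """IV(c,r) = min of summed squared gray differences over the four main directions
    (horizontal, diagonal, vertical, anti-diagonal) through pixel (c, r)."""
    g = img.astype(np.float64)
    dirs = [(0, 1), (1, 1), (1, 0), (1, -1)]   # (row step, col step) per direction
    vs = []
    for dr, dc in dirs:
        v = sum((g[r + i * dr, c + i * dc] - g[r + (i + 1) * dr, c + (i + 1) * dc]) ** 2
                for i in range(-k, k))
        vs.append(v)
    return min(vs)
```

Taking the minimum over the four directions is what makes the operator a corner detector: a point on a straight edge has near-zero variation along the edge direction, so only corners keep a large interest value.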
Based on the feature point extraction above, the main steps of the feature-point matching algorithm are as follows:
(1) Choose 4 regions in the overlapping part of the reference image T, and find the feature points in each region using the Moravec operator.
(2) Choose a region centered on each feature point (the present invention selects a region of size 7 × 7) and find the most similar match in the search image S. Since there are 4 feature points, there are 4 feature regions, and 4 corresponding matched regions are found.
(3) Using the center points of these 4 matched feature regions, i.e., the 4 pairs of matched feature points, substitute into the following formula and solve; the solution gives the transform coefficients between the two images.
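Solving for the eight transform coefficients from the 4 matched point pairs amounts to an 8×8 linear system; a sketch (the layout follows the [x'; y'; 1] = M [x; y; 1] form given in the claims, with the ninth coefficient fixed to 1):

```python
import numpy as np

def solve_transform(src_pts, dst_pts):
    """Solve m0..m7 of the projective transform from 4 matched point pairs.

    Each pair gives two linear equations:
      x' = m0*x + m1*y + m2 - x'*(m6*x + m7*y)
      y' = m3*x + m4*y + m5 - y'*(m6*x + m7*y)
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y]); b.append(yp)
    m = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(m, 1.0).reshape(3, 3)      # 3x3 matrix M with m8 = 1
```

With the coefficients M known, every pixel of one image can be warped into the coordinate frame of the other, which is how the overlapping regions are blended into the panoramic mosaic.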
Step 5:
The system control module performs the camera opening function of the relevant cruise point, and then, according to the algorithm configuration of each cruise point in the algorithm configuration list, enables the video analysis algorithms of the relevant cruise point, i.e., arms each algorithm of the cruise point.
The video analysis algorithms perform target detection and event detection according to the relevant settings, produce real-time alarms for the detected events, carry out alarm handling with the corresponding recording and snapshot capture, store them locally, and upload them to the monitoring center, prompting the monitoring center to analyze and process them. The control platform receives the video analysis results and issues various management and control commands according to the results.
The above is a detailed description of the preferred embodiments of the present invention, but those of ordinary skill in the art should realize that, within the scope and spirit of the present invention, various improvements, additions, and substitutions are possible. These all fall within the protection scope defined by the claims of the present invention.
Claims (10)
1. An intelligent video analysis system based on PTZ camera cruising, the system comprising a front-end PTZ camera and a back-end server, characterized in that the back-end server comprises:
a cruise configuration module, which sets the cruise groups and cruise points of the system and generates a cruise list, each cruise point of each cruise group corresponding to one preset position of the PTZ camera, and which configures the cruise mode and cruise time for the cruise points of each cruise group;
a PTZ camera control module, which analyzes the cruise list and automatically generates a cruise execution list, making the PTZ camera carry out cruise detection between the preset positions in the preset cruise order;
a video analysis configuration module, for configuring the relevant intelligent video analysis algorithms for each cruise point and adding them to the cruise list; the intelligent video analysis algorithms include: behavior detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection;
a system control module, which performs the camera opening function of the relevant cruise point, enables the video analysis algorithms of the relevant cruise point according to the algorithm configuration of each cruise point in the configured list, carries out camera parameter calibration for each cruise point, and calls the video stitching module to automatically generate the panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points;
an intelligent video analysis module, which performs target detection and event analysis according to the relevant settings and produces real-time alarms for the detected events;
an alarm management module, which carries out the corresponding local management functions for the alarms;
the intelligent video analysis module further comprising:
an image preprocessing module, which performs noise filtering on the image using a wavelet-transform adaptive fast image denoising algorithm, and grayscale conversion;
a target detection module, for moving target detection, target feature extraction, pedestrian/vehicle detection, face/license-plate detection and localization, and target recognition according to the target's features;
a target tracking module, which performs target tracking using a two-way optical flow method;
a target feature extraction module, which builds, for the target detected in the previous frame, a joint histogram template based on color and HOG features, the joint histogram combining color features and HOG gradient features;
a feature detection matching module, which searches for a match in the current frame and compares using the Bhattacharyya distance, i.e., searches for a match within a certain radius around the target position of the previous frame, the best matching position found being the likely position of the target in the current frame;
an event detection module, which judges whether an event has occurred based on the change of the detected target's position.
2. The system according to claim 1, characterized in that:
the camera parameter calibration is the backward calculation of all or part of the parameters of the projection matrix P from a given reference object; after calibration, the position information of a target in three dimensions can be obtained from its two-dimensional coordinates in the two-dimensional image captured by the camera, together with the obtained projection matrix P.
3. The system according to claim 1, characterized in that the target detection module detects targets based on the triple combination of frame difference, image transform, and mixed Gaussian probabilistic model.
4. The system according to claim 1, characterized in that the pedestrian/vehicle detection extracts targets of a specified type using a method combining optical-flow-field relative motion with Hog+SVM model training.
5. The system according to claim 1, characterized in that the video stitching module is further configured to:
carry out feature point extraction and feature point matching, specifically: represent the interest value of a pixel, i.e., the gray-level variation between the pixel and its neighboring points, by the minimal gray variance over the pixel's four main directions; then select the local points of the image with maximal interest value as feature points; choose 4 regions in the overlapping part of the reference image and find the feature points in each region using the Moravec operator; choose a region of fixed size centered on each feature point and find the most similar match in the search image; and, using the center points of the matched feature regions, substitute into the following formula and solve, the solution being the transform coefficients M between the two images:
[x']         [x]   [m0  m1  m2] [x]
[y']  =  M · [y] = [m3  m4  m5] [y]
[1 ]         [1]   [m6  m7  1 ] [1].
6. An intelligent video analysis method based on PTZ camera cruising, characterized in that:
step (1): first set the cruise groups and cruise points of the system and generate a cruise list, each cruise point of each cruise group corresponding to one preset position of the PTZ camera; configure the cruise mode and cruise time for the cruise points of each cruise group;
step (2): move the PTZ camera to the corresponding cruise point through a preset-position call; the system control module carries out camera parameter calibration for each cruise point for the current scene, and the relevant intelligent video analysis algorithms configured by the video analysis configuration module are added to the cruise list; the intelligent video analysis algorithms include: behavior detection, abandoned-object detection, object-removal detection, flame and smoke detection, and traffic incident detection;
step (3): after the system is started, the PTZ camera control module automatically generates a cruise execution list by analyzing the cruise list, making the PTZ camera carry out cruise detection between the preset positions in the preset cruise order;
step (4): the system control module calls the video stitching module to automatically generate the panoramic mosaic of the whole cruise cycle according to the execution order of the cruise points;
step (5): the intelligent video analysis module performs target detection and event analysis according to the relevant settings and produces real-time alarms for the detected events; step (5) further comprises:
performing noise filtering on the image using a wavelet-transform adaptive fast image denoising algorithm, and grayscale conversion;
carrying out moving target detection, target feature extraction, pedestrian/vehicle detection, face/license-plate detection and localization, and target recognition according to the target's features;
carrying out target tracking using a two-way optical flow method;
building, for the target detected in the previous frame, a joint histogram template based on color and HOG features, the joint histogram combining color features and HOG gradient features;
searching for a match in the current frame and comparing using the Bhattacharyya distance, i.e., searching for a match within a certain radius around the target position of the previous frame, the best matching position found being the likely position of the target in the current frame;
judging whether an event has occurred based on the change of the detected target's position.
7. The method according to claim 6, characterized in that:
the camera parameter calibration in step (2) is the backward calculation of all or part of the parameters of the projection matrix P from a given reference object; after calibration, the position information of a target in three dimensions can be obtained from its two-dimensional coordinates in the two-dimensional image captured by the camera, together with the obtained projection matrix P.
8. The method according to claim 6, characterized in that the target detection detects targets based on the triple combination of frame difference, image transform, and mixed Gaussian probabilistic model.
9. The method according to claim 6, characterized in that the pedestrian/vehicle detection extracts targets of a specified type using a method combining optical-flow-field relative motion with Hog+SVM model training.
10. The method according to claim 6, characterized in that the video stitching specifically includes: representing the interest value of a pixel, i.e., the gray-level variation between the pixel and its neighboring points, by the minimal gray variance over the pixel's four main directions; then selecting the local points of the image with maximal interest value as feature points; choosing 4 regions in the overlapping part of the reference image and finding the feature points in each region using the Moravec operator; choosing a region of fixed size centered on each feature point and finding the most similar match in the search image; and, using the center points of the matched feature regions, substituting into the following formula and solving, the solution being the transform coefficients M between the two images:
[x']         [x]   [m0  m1  m2] [x]
[y']  =  M · [y] = [m3  m4  m5] [y]
[1 ]         [1]   [m6  m7  1 ] [1].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310359688.8A CN104378582B (en) | 2013-08-16 | 2013-08-16 | A kind of intelligent video analysis system and method cruised based on Pan/Tilt/Zoom camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104378582A CN104378582A (en) | 2015-02-25 |
CN104378582B true CN104378582B (en) | 2017-08-22 |
Family
ID=52557203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310359688.8A Active CN104378582B (en) | 2013-08-16 | 2013-08-16 | A kind of intelligent video analysis system and method cruised based on Pan/Tilt/Zoom camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104378582B (en) |
Families Citing this family (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104754231B (en) * | 2015-03-31 | 2019-02-19 | Oppo广东移动通信有限公司 | Shoot the method and device of personage's video |
CN106162070B (en) * | 2015-04-23 | 2019-07-30 | 神讯电脑(昆山)有限公司 | Safety monitoring system and its method |
CN105282521B (en) * | 2015-11-21 | 2018-09-14 | 浙江宇视科技有限公司 | Method for testing motion and device in a kind of cruise of web camera |
CN105700547B (en) * | 2016-01-16 | 2018-07-27 | 深圳先进技术研究院 | A kind of aerial three-dimensional video-frequency streetscape system and implementation method based on navigation dirigible |
CN105915800B (en) * | 2016-06-03 | 2019-04-02 | 中林信达(北京)科技信息有限责任公司 | Large scene monitors the method for automatic configuration and device of lower all standing cruise parameter |
CN105933678B (en) * | 2016-07-01 | 2019-01-15 | 湖南源信光电科技有限公司 | More focal length lens linkage imaging device based on Multiobjective Intelligent tracking |
CN108234927B (en) * | 2016-12-20 | 2021-02-19 | 腾讯科技(深圳)有限公司 | Video tracking method and system |
CN108230348B (en) * | 2016-12-22 | 2022-01-21 | 杭州海康威视数字技术股份有限公司 | Target tracking method and device |
CN107197278B (en) * | 2017-05-24 | 2019-08-23 | 西安万像电子科技有限公司 | The treating method and apparatus of the global motion vector of screen picture |
CN107360394B (en) * | 2017-06-16 | 2019-09-27 | 河北汉光重工有限责任公司 | More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system |
CN107590834A (en) * | 2017-08-10 | 2018-01-16 | 北京博思廷科技有限公司 | A kind of road traffic accident video detecting method and system |
CN107680103A (en) * | 2017-09-12 | 2018-02-09 | 南方医科大学南方医院 | The method that actual situation for stomach cancer hysteroscope intelligent operation real-time navigation system blocks processing mixed reality automatically |
CN110113560B (en) * | 2018-02-01 | 2021-06-04 | 中兴飞流信息科技有限公司 | Intelligent video linkage method and server |
CN108337486A (en) * | 2018-04-19 | 2018-07-27 | 北京软通智城科技有限公司 | A kind of device and method of the video analysis of the algorithm configuration based on scene |
CN109194927A (en) * | 2018-10-19 | 2019-01-11 | 天津天地基业科技有限公司 | Vehicle-mounted target tracking holder camera apparatus based on deep learning |
CN111343377A (en) * | 2018-12-19 | 2020-06-26 | 杭州海康威视系统技术有限公司 | Camera control method, device, system and storage medium |
CN109982047B (en) * | 2019-04-04 | 2021-02-02 | 郑州和光电子科技有限公司 | Flight monitoring panorama fusion display method |
CN111192426A (en) * | 2020-01-14 | 2020-05-22 | 中兴飞流信息科技有限公司 | Railway perimeter intrusion detection method based on anthropomorphic visual image analysis video cruising |
CN111259825B (en) * | 2020-01-19 | 2023-06-02 | 成都依能科技股份有限公司 | PTZ scanning path generation method based on face recognition |
CN111246097B (en) * | 2020-01-19 | 2021-06-04 | 成都依能科技股份有限公司 | PTZ scanning path generation method based on graph perception |
CN111382697B (en) * | 2020-03-09 | 2023-07-25 | 中国铁塔股份有限公司 | Image data processing method and first electronic equipment |
CN111683229B (en) * | 2020-06-22 | 2021-10-26 | 杭州海康威视系统技术有限公司 | Cruise monitoring method, device, equipment and storage medium |
CN112378385B (en) * | 2020-07-31 | 2022-09-06 | 浙江宇视科技有限公司 | Method, device, medium and electronic equipment for determining position of attention information |
CN112367475B (en) * | 2021-01-15 | 2021-03-30 | 上海闪马智能科技有限公司 | Traffic incident detection method and system and electronic equipment |
CN112885096A (en) * | 2021-02-05 | 2021-06-01 | 同济大学 | Bridge floor traffic flow full-view-field sensing system and method depending on bridge arch ribs |
CN112672064B (en) * | 2021-03-18 | 2021-07-20 | 视云融聚(广州)科技有限公司 | Algorithm scheduling method, system and equipment based on video region label |
CN113542672B (en) * | 2021-05-25 | 2023-08-18 | 浙江大华技术股份有限公司 | Camera cruising method, electronic device and storage medium |
CN113643449B (en) * | 2021-08-11 | 2023-07-18 | 周健龙 | Anti-following device for gateway-free parking lot entrance and exit and processing method |
CN113920194B (en) * | 2021-10-08 | 2023-04-21 | 电子科技大学 | Positioning method of four-rotor aircraft based on visual inertia fusion |
CN115086559B (en) * | 2022-06-14 | 2024-04-05 | 北京宜通科创科技发展有限责任公司 | Intelligent cruising method, system and equipment |
CN115243010A (en) * | 2022-07-15 | 2022-10-25 | 浪潮通信信息系统有限公司 | Bright kitchen scene intelligent detection system and device |
CN116700247B (en) * | 2023-05-30 | 2024-03-19 | 东莞市华复实业有限公司 | Intelligent cruising management method and system for household robot |
CN116958707B (en) * | 2023-08-18 | 2024-04-23 | 武汉市万睿数字运营有限公司 | Image classification method, device and related medium based on spherical machine monitoring equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101719968A (en) * | 2009-12-18 | 2010-06-02 | 中兴通讯股份有限公司 | Schedule management method, terminal and system |
CN102866712A (en) * | 2012-09-07 | 2013-01-09 | 安科智慧城市技术(中国)有限公司 | Method and system for realizing automatic cruise of pan-tilts |
CN202907049U (en) * | 2012-11-06 | 2013-04-24 | 温州金谷丰垣工贸有限公司 | Automatic cruising camera |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9210312B2 (en) * | 2004-06-02 | 2015-12-08 | Bosch Security Systems, Inc. | Virtual mask for use in autotracking video camera images |
CN101119482B (en) * | 2007-09-28 | 2011-07-20 | 北京智安邦科技有限公司 | Overall view monitoring method and apparatus |
CN101616310B (en) * | 2009-07-17 | 2011-05-11 | 清华大学 | Target image stabilizing method of binocular vision system with variable visual angle and resolution ratio |
US20130021433A1 (en) * | 2011-07-21 | 2013-01-24 | Robert Bosch Gmbh | Overview configuration and control method for ptz cameras |
Non-Patent Citations (3)
Title |
---|
Research on detection and tracking methods for multiple moving targets in dynamic scenes; Zeng Pengxin; China Excellent Master's and Doctoral Dissertations Full-text Database, Information Science and Technology Series; 2006-11-15; abstract *
Research on fast image registration and automatic stitching technology; Feng Yuping; China Doctoral Dissertations Full-text Database, Information Science and Technology Series; 2010-10-15; abstract, pages 55 and 66-74 of the main text *
Research on intelligent terminals for high-definition video conferencing; Zhao Yanbin; China Excellent Master's Dissertations Full-text Database, Information Science and Technology Series; 2010-08-15; pages 4-8 and 40-49 of the main text *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||