CN103325121A - Method and system for estimating network topological relations of cameras in monitoring scenes - Google Patents

Method and system for estimating network topological relations of cameras in monitoring scenes

Info

Publication number
CN103325121A
CN103325121A · CN2013102703492A · CN201310270349A
Authority
CN
China
Prior art keywords
grid
monitoring scene
optical flow
camera
color histogram
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013102703492A
Other languages
Chinese (zh)
Other versions
CN103325121B (en)
Inventor
张红广
崔建竹
唐潮
田飞
王鹏
邓娜娜
蒋建彬
马娜
高会武
徐尚鹏
季益华
马铁
宋成国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SMART CITY INFORMATION TECHNOLOGY Co Ltd
Shanghai Advanced Research Institute of CAS
China Security and Surveillance Technology PRC Inc
Original Assignee
SMART CITY INFORMATION TECHNOLOGY Co Ltd
Shanghai Advanced Research Institute of CAS
China Security and Surveillance Technology PRC Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SMART CITY INFORMATION TECHNOLOGY Co Ltd, Shanghai Advanced Research Institute of CAS, and China Security and Surveillance Technology PRC Inc
Priority to CN201310270349.2A (granted as CN103325121B)
Publication of CN103325121A
Application granted
Publication of CN103325121B
Expired - Fee Related
Anticipated expiration

Abstract

The invention belongs to the field of security and surveillance, and provides a method and system for estimating the network topology of cameras in monitoring scenes. The method comprises: decomposing the monitoring scenes in the video streams captured by the cameras of a monitoring network into grids; obtaining the color histogram of the optical flow of each grid in each monitoring scene; clustering the grids of each monitoring scene according to these optical-flow color histograms to obtain a semantic region segmentation of the scene; and determining the network topology among the cameras from the semantic region segmentation results. Prior-art methods infer the topology among cameras from the localization and tracking of specific targets, so their performance degrades sharply when the monitored environment contains occlusions or the monitoring image resolution is low; the proposed method and system solve this problem.

Description

Method and system for estimating the camera network topology in a monitoring scene
Technical field
The invention belongs to the field of security and surveillance, and in particular relates to a method and system for estimating the camera network topology in a monitoring scene.
Background technology
Estimating the topology of a camera network is a key problem in camera network deployment. An accurate topology estimate not only captures the motion patterns of targets such as individuals and crowds within the monitored region, but can also be fed back to further optimize the deployment.
The prior art provides several methods for estimating the topology of a camera network, including:
One, based on person detection and tracking with background subtraction, obtain the correlations of crowd activity across multiple cameras, providing a basis for analyzing and modeling the target activity pattern of the whole scene.
Two, use the gait information of people captured by multiple cameras to derive a general model of personnel activity, and redeploy the cameras according to this model, achieving better viewing angles of the monitored targets with fewer cameras.
Three, using a mixture probability density estimator based on Parzen windows and Gaussian kernels, estimate the probability density function formed by quantities such as the transit time interval, the positions of entry into and exit from the field of view, and the movement speed at entry and exit; the whole estimation procedure is realized by learning from training data.
Four, adopt a fuzzy time interval to represent, as a temporal constraint, the possibility that an observed target appears in the next camera; this possibility is estimated from the equations of motion.
Five, using a large amount of target observation data and unsupervised learning, automatically establish the spatio-temporal topology among the cameras of a multi-camera monitoring network; on this basis, the authors give a method for verifying algorithm performance and realize target tracking within the network.
Six, combine uncertain correspondence with Bayesian methods using a more general information-theoretic notion of statistical belief, reducing the assumptions required and showing good performance.
Seven, assume that every pair of cameras is potentially connected, then remove impossible connections by observation; experiments show this method is efficient and effective for learning large-scale camera network topologies, especially when training samples are scarce.
Eight, a large body of work uses the multi-camera topology for global activity analysis and pedestrian re-identification.
However, the topology inference algorithms above are essentially all based on localizing and tracking specific targets and therefore demand high-quality surveillance video; when the monitored environment contains occlusions or the monitoring image resolution is low, their performance degrades sharply.
Summary of the invention
The purpose of the embodiments of the invention is to provide a method and system for estimating the camera network topology in a monitoring scene, to solve the problem in the prior art that existing topology inference algorithms are essentially all based on localizing and tracking specific targets, demand high-quality surveillance video, and degrade sharply when the monitored environment contains occlusions or the image resolution is low.
The embodiments of the invention are realized as a method for estimating the camera network topology in a monitoring scene, the method comprising the following steps:
decomposing the monitoring scene in the video stream captured by each camera in the monitoring network into grids;
for each monitoring scene, obtaining the color histogram of the optical flow of each grid in the scene;
for each monitoring scene, clustering the grids according to the color histograms of their optical flow, to obtain a semantic region segmentation of the scene;
determining the network topology among the cameras in the monitoring network from the semantic region segmentation results of the monitoring scenes.
The purpose of another embodiment of the invention is to provide a system for estimating the camera network topology in a monitoring scene, the system comprising:
a decomposing unit, configured to decompose the monitoring scene in the video stream captured by each camera in the monitoring network into grids;
an acquiring unit, configured to obtain, for each monitoring scene, the color histogram of the optical flow of each grid in the scene;
a clustering unit, configured to cluster, for each monitoring scene, the grids according to the color histograms of their optical flow, obtaining the semantic region segmentation of the scene;
a determining unit, configured to determine the network topology among the cameras in the monitoring network from the semantic region segmentation results of the monitoring scenes.
The embodiments of the invention compute the color histogram feature of each grid's optical flow with an optical flow algorithm and then calculate the topology among the cameras from it, without needing to explicitly localize or track moving targets. This solves the problem in the prior art that the topology among cameras is always computed from the localization and tracking of specific targets, which demands high-quality surveillance video and degrades sharply when the monitored environment contains occlusions or the monitoring image resolution is low.
Description of drawings
To illustrate the technical solutions in the embodiments of the invention more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for estimating the camera network topology in a monitoring scene provided by one embodiment of the invention;
Fig. 2 shows the camera topology estimation result provided by another embodiment of the invention;
Fig. 3 is a sketch of the floor plan and camera deployment provided by another embodiment of the invention;
Fig. 4 is a block diagram of the system for estimating the camera network topology in a monitoring scene provided by another embodiment of the invention.
Embodiment
To make the purpose, technical solutions and advantages of the invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
One embodiment of the invention provides a method for estimating the camera network topology in a monitoring scene; as shown in Fig. 1, the concrete steps comprise:
In step S101, the monitoring scene in the video stream captured by each camera in the monitoring network is decomposed into grids.
In this embodiment, the monitoring network comprises at least two cameras. Each camera captures a video stream consisting of multiple frames, each frame being an image; among these frames, the images through which a moving target passes constitute the monitoring scene.
Note that the grid size is typically 10×10, and may also be preset; however, the grid size must be consistent across the decomposed monitoring scenes of all cameras.
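As an illustration of step S101, the following is a minimal sketch of decomposing a frame into fixed-size grids. The function name and the policy of discarding partial edge cells are assumptions for illustration; the text only fixes the (typically 10×10) grid size.

```python
import numpy as np

def decompose_into_grids(frame, cell=10):
    """Split a frame into non-overlapping cell x cell grids.

    `frame` is an (H, W) or (H, W, 3) array; edge pixels that do not
    fill a whole cell are discarded so every grid has the same size,
    matching the requirement that all cameras use a consistent grid size.
    """
    h, w = frame.shape[:2]
    grids = {}
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            grids[(x, y)] = frame[y:y + cell, x:x + cell]
    return grids

frame = np.zeros((48, 64), dtype=np.uint8)
grids = decompose_into_grids(frame, cell=10)
# 48 rows give 4 full cells, 64 columns give 6, so 24 grids of (10, 10)
```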
In step S102, for each monitoring scene, the color histogram of the optical flow of each grid in the scene is obtained.
Specifically, obtaining the color histogram of the optical flow of each grid in the monitoring scene is realized as follows:
Define the video stream captured by a camera as $I_n(X)$, where $X=(x, y)^T$ is the coordinate of a grid in the monitoring scene of the stream, $x$ is the horizontal coordinate of the grid, $y$ is its vertical coordinate, $T$ denotes matrix transposition, and $n$ is the index of a video frame of the stream;
Define
$$W(X;p)=\begin{pmatrix}(1+p_1)\,x+p_3\,y+p_5\\ p_2\,x+(1+p_4)\,y+p_6\end{pmatrix};\qquad(1)$$
where $W$ denotes the deformable template, $p=(p_1,p_2,p_3,p_4,p_5,p_6)^T$, $p_1,p_2,p_3,p_4$ are 0, and $p_5,p_6$ are the optical flow of the grid;
Define
$$p=\arg\min_{p}\sum_{x}\bigl[I(W(x;p+\Delta p))-T(x)\bigr]^2;\qquad(2)$$
where $\Delta p$ is the difference of $p$ between two successive iterations, and $T(x)$ denotes a grid decomposed from the first frame of the video stream;
Note that a grid decomposed from the first frame of the video stream refers to a grid obtained by decomposing the first frame image of the stream.
Iterate according to (3), (4), (5) until $\Delta p$ is smaller than a preset threshold $\varepsilon$:
$$\nabla I=W(\Delta x;p)=\begin{pmatrix}I_x+p_5\\ I_y+p_6\end{pmatrix};\qquad(3)$$
where $I_x$ denotes the gradient map of the grid along the x axis, $I_y$ denotes the gradient map of the grid along the y axis, and $\nabla I$ denotes the gradient map of the grid after transformation by the deformable template $W(X;p)$;
$$H=\sum_{x}[\nabla I]^T[\nabla I];\qquad(4)$$
$$\Delta p=H^{-1}\sum_{x}[\nabla I]^T\bigl[T(x)-I(W(x;p))\bigr];\qquad(5)$$
Compute $p_5, p_6$ once $\Delta p$ is smaller than the preset threshold $\varepsilon$.
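As a rough illustration of the iteration in equations (2)-(5), the following sketch estimates the pure-translation parameters $(p_5, p_6)$ of a patch. The function name, the synthetic Gaussian patch, and the integer-shift warp are illustrative assumptions, not the patent's implementation; only small shifts of a smooth patch are recovered here.

```python
import numpy as np

def estimate_flow(template, image, eps=1e-3, max_iter=50):
    """Iterate equations (2)-(5) for the pure-translation case
    (p1..p4 = 0): build the gradient Hessian H once (eq. 4), solve
    for the update dp (eq. 5), and stop when |dp| < eps.  For brevity
    the warp uses integer np.roll shifts instead of interpolation."""
    p = np.zeros(2)                                    # (p5, p6)
    gy, gx = np.gradient(template.astype(float))
    grad = np.stack([gx.ravel(), gy.ravel()], axis=1)  # per-pixel gradient
    H = grad.T @ grad                                  # eq. (4)
    H_inv = np.linalg.inv(H)
    for _ in range(max_iter):
        sx, sy = np.round(p).astype(int)
        warped = np.roll(image, (-sy, -sx), axis=(0, 1))   # I(W(x; p))
        err = (template.astype(float) - warped).ravel()    # T(x) - I(W(x; p))
        dp = H_inv @ (grad.T @ err)                    # eq. (5)
        p += dp
        if np.linalg.norm(dp) < eps:                   # dp below threshold
            break
    return p

# A Gaussian blob shifted by one pixel in +x; the recovered flow
# should be close to (1, 0).
yy, xx = np.mgrid[0:32, 0:32]
blob = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / (2 * 5.0 ** 2))
moved = np.roll(blob, (0, 1), axis=(0, 1))
p5, p6 = estimate_flow(blob, moved)
```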
From the three RGB components of the grid's optical flow, obtain the optical-flow color information of the grid;
According to the optical-flow color information of the grid, compute the histogram of the optical flow over 8 directions; this 8-direction histogram is the color histogram feature of the grid's optical flow, which comprises the horizontal flow $u'_b$ and the vertical flow $v'_b$.
Note that the 8 directions are spaced every 45 degrees.
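The 8-direction histogram above can be sketched as follows: flow vectors are binned by angle into 45-degree sectors. The magnitude weighting is an assumed (common) choice that the text does not specify.

```python
import numpy as np

def flow_orientation_histogram(u, v, bins=8):
    """Accumulate per-pixel flow vectors of one grid into `bins`
    angular sectors of 360/bins degrees each (8 sectors of 45 degrees
    here); each pixel votes into the sector of its flow direction,
    weighted by flow magnitude (an assumed, common weighting)."""
    ang = np.arctan2(v, u) % (2 * np.pi)          # flow direction in [0, 2*pi)
    mag = np.hypot(u, v)                          # flow magnitude
    idx = (ang / (2 * np.pi / bins)).astype(int) % bins
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())     # unbuffered accumulation
    return hist

# A 10x10 grid whose flow points uniformly in the +x direction:
# all the mass lands in the first 45-degree sector.
u = np.ones((10, 10))
v = np.zeros((10, 10))
hist = flow_orientation_histogram(u, v)
```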
In step S103, for each monitoring scene, the grids are clustered according to the color histograms of their optical flow, yielding the semantic region segmentation of the scene.
Specifically, step S103 is realized with
$$u_n=\sum_{b\in r_n}u'_b;\qquad(6)$$
$$v_n=\sum_{b\in r_n}v'_b;\qquad(7)$$
where $u_n$ and $v_n$ aggregate the horizontal and vertical flow of the grids $b$ belonging to region $r_n$.
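A toy sketch of the clustering and region aggregation of step S103, under loud assumptions: the text leaves the clustering algorithm open, so grids are grouped here simply by the dominant bin of their flow histogram, and equations (6)-(7) are then applied per region.

```python
import numpy as np

def segment_semantic_regions(grid_hists):
    """Group grids whose 8-bin flow histograms peak in the same bin.
    This dominant-direction grouping is a deliberately simple stand-in
    for the clustering step; `grid_hists` maps grid coordinates to
    histograms, and the result maps region labels to grid coordinates."""
    regions = {}
    for coord, hist in grid_hists.items():
        label = int(np.argmax(hist))              # dominant flow direction
        regions.setdefault(label, []).append(coord)
    return regions

def region_flow(region_coords, grid_uv):
    """Aggregate a region's motion per equations (6)-(7): sum the
    horizontal and vertical flow of the grids b in region r_n."""
    u_n = sum(grid_uv[c][0] for c in region_coords)   # eq. (6)
    v_n = sum(grid_uv[c][1] for c in region_coords)   # eq. (7)
    return u_n, v_n

# Three grids: two moving right (bin 0), one moving up (bin 2).
grid_hists = {
    (0, 0): np.array([5., 0., 0., 0., 0., 0., 0., 0.]),
    (1, 0): np.array([4., 0., 0., 0., 0., 0., 0., 0.]),
    (2, 0): np.array([0., 0., 3., 0., 0., 0., 0., 0.]),
}
grid_uv = {(0, 0): (1.0, 0.0), (1, 0): (2.0, 0.0), (2, 0): (0.0, 1.0)}
regions = segment_semantic_regions(grid_hists)
```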
In step S104, the network topology among the cameras in the monitoring network is determined from the semantic region segmentation results of the monitoring scenes.
Specifically, take two cameras as an example, called the first camera and the second camera; "first" and "second" imply no order and serve only to distinguish the cameras. The first camera captures a first video stream and the second camera a second video stream; a video stream comprises multiple frames, each frame being an image, so the first stream comprises first images and the second stream second images. Among the first images, those through which a moving target passes form the first monitoring scene, the moving target being a person, an animal or another physical object; among the second images, those through which a moving target passes form the second monitoring scene.
$$\rho_{a_i,a_j}(\tau)=\frac{E[a_i\,c]}{\sqrt{E[a_i^2]\,E[c^2]}};\qquad(8)$$
$$\hat{\tau}_{a_i,a_j}=\arg\max_{\tau}\frac{\sum\rho_{a_i,a_j}(\tau)}{\Gamma};\qquad(9)$$
$$\Psi_{i,j}=\rho_{a_i,a_j}(\tau)\,\bigl(1-\hat{\tau}_{a_i,a_j}\bigr);\qquad(10)$$
where $a_i$ denotes the color histogram feature of the optical flow of a first grid, decomposed from the first monitoring scene captured by the first camera; $a_j$ denotes the color histogram feature of the optical flow of a second grid, decomposed from the second monitoring scene captured by the second camera; the first camera and the second camera are any two cameras in the monitoring network; $c$ denotes the second grid after the time shift $\tau$; $\rho_{a_i,a_j}$ denotes the degree of association between the color histogram features of the optical flow of the first and second grids; $\hat{\tau}_{a_i,a_j}$ denotes the time shift between the first and second grids; and $\Psi_{i,j}$ denotes the topology estimation result for the first camera and the second camera.
Note that when $\Psi_{i,j}$ is greater than 0.5, the first camera and the second camera are topologically related; in step S104, the topology estimation result must be computed between every pair of cameras.
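The pairwise scoring of equations (8)-(10) can be sketched as follows. The lag normalization `tau / max_lag` and the use of a single activity series per camera are assumptions, since the $\Gamma$ term and the exact activity signal are not fully specified in the text.

```python
import numpy as np

def topology_score(a_i, a_j, max_lag=10):
    """Score the topological relatedness of two cameras from one
    activity series per camera: normalized correlation rho(tau) at
    each candidate lag (eq. 8), best lag tau_hat (eq. 9), and the
    score Psi = rho(tau_hat) * (1 - tau_hat/max_lag) (eq. 10, with
    the lag normalized by max_lag as an assumption)."""
    best_rho, best_tau = -np.inf, 0
    for tau in range(max_lag + 1):
        c = a_j[tau:]                             # a_j advanced by tau
        a = a_i[:len(c)]
        denom = np.sqrt(np.mean(a ** 2) * np.mean(c ** 2))
        rho = np.mean(a * c) / denom if denom > 0 else 0.0   # eq. (8)
        if rho > best_rho:
            best_rho, best_tau = rho, tau         # eq. (9)
    return best_rho * (1 - best_tau / max_lag)    # eq. (10)

# The same activity seen 3 frames later in the second camera scores
# well above the 0.5 relatedness threshold mentioned in the text.
x = np.sin(np.linspace(0, 4 * np.pi, 100))
delayed = np.roll(x, 3)
score = topology_score(x, delayed)
```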
Another embodiment of the invention provides the camera topology estimation result shown in Fig. 2, obtained as follows:
The embodiment selects 7 cameras on one floor of the administrative building of the Shanghai Advanced Research Institute of the Chinese Academy of Sciences, and takes as sample an arbitrary video stream from 11 a.m. to 1 p.m. of one day to compute the camera topology estimation result.
In the experiment the 7 cameras are deployed on the same floor: cameras 1 and 3 are deployed at elevator entrances, and the remaining cameras at the 5 entrances and exits of the corridors. The floor plan and camera deployment are sketched in Fig. 3.
The numbers in the circles in Fig. 2 are camera numbers corresponding to those in Fig. 3. In the experimental result, a solid line between two cameras indicates an association between the targets monitored by the two cameras, i.e., the same target appears in both fields of view, reflecting the trend of target activity from the viewpoint of probability and statistics; the absence of a solid line indicates no association, or a very weak one, between the two cameras. For example, there is a strong association among cameras 1, 6 and 7, because camera 7 is located at the main entrance of the whole administrative building: after entering the floor one must either go upstairs by the elevator at camera 1 or enter the first-floor canteen at the rear through the corridor at camera 6. Since the selected period is lunchtime, many people from the upper floors take the elevator at camera 1 down to the first floor and then enter the canteen through the corridor at camera 6, returning the same way after lunch. There is no association (or only a very weak one) between cameras 2 and 3, because the elevator at camera 3 is a freight elevator used only within the canteen, the area between the two cameras is the back kitchen of the canteen, and there is no direct path.
Another embodiment of the invention provides a system for estimating the camera network topology in a monitoring scene; the modular structure of the system, shown in Fig. 4, specifically comprises:
a decomposing unit 41, configured to decompose the monitoring scene in the video stream captured by each camera in the monitoring network into grids;
an acquiring unit 42, configured to obtain, for each monitoring scene, the color histogram of the optical flow of each grid in the scene;
a clustering unit 43, configured to cluster, for each monitoring scene, the grids according to the color histograms of their optical flow, obtaining the semantic region segmentation of the scene;
a determining unit 44, configured to determine the network topology among the cameras in the monitoring network from the semantic region segmentation results of the monitoring scenes.
Optionally, the acquiring unit 42 is specifically configured to:
define the video stream captured by a camera as $I_n(X)$, where $X=(x, y)^T$ is the coordinate of a grid in the monitoring scene of the stream, $x$ is the horizontal coordinate of the grid, $y$ is its vertical coordinate, $T$ denotes matrix transposition, and $n$ is the index of a video frame of the stream;
define
$$W(X;p)=\begin{pmatrix}(1+p_1)\,x+p_3\,y+p_5\\ p_2\,x+(1+p_4)\,y+p_6\end{pmatrix};\qquad(1)$$
where $W$ denotes the deformable template, $p=(p_1,p_2,p_3,p_4,p_5,p_6)^T$, $p_1,p_2,p_3,p_4$ are 0, and $p_5,p_6$ are the optical flow of the grid;
define
$$p=\arg\min_{p}\sum_{x}\bigl[I(W(x;p+\Delta p))-T(x)\bigr]^2;\qquad(2)$$
where $\Delta p$ is the difference of $p$ between two successive iterations, and $T(x)$ denotes a grid decomposed from the first frame of the video stream;
iterate according to (3), (4), (5) until $\Delta p$ is smaller than a preset threshold $\varepsilon$:
$$\nabla I=W(\Delta x;p)=\begin{pmatrix}I_x+p_5\\ I_y+p_6\end{pmatrix};\qquad(3)$$
where $I_x$ denotes the gradient map of the grid along the x axis, $I_y$ denotes the gradient map of the grid along the y axis, and $\nabla I$ denotes the gradient map of the grid after transformation by the deformable template $W(X;p)$;
$$H=\sum_{x}[\nabla I]^T[\nabla I];\qquad(4)$$
$$\Delta p=H^{-1}\sum_{x}[\nabla I]^T\bigl[T(x)-I(W(x;p))\bigr];\qquad(5)$$
compute $p_5, p_6$ once $\Delta p$ is smaller than the preset threshold $\varepsilon$;
obtain the optical-flow color information of the grid from the three RGB components of its optical flow;
compute, according to the optical-flow color information of the grid, the histogram of the optical flow over 8 directions; this 8-direction histogram is the color histogram feature of the grid's optical flow, which comprises the horizontal flow $u'_b$ and the vertical flow $v'_b$.
Optionally, the clustering unit 43 is specifically configured to compute
$$u_n=\sum_{b\in r_n}u'_b;\qquad(6)$$
$$v_n=\sum_{b\in r_n}v'_b.\qquad(7)$$
Optionally, the determining unit 44 is specifically configured to compute
$$\rho_{a_i,a_j}(\tau)=\frac{E[a_i\,c]}{\sqrt{E[a_i^2]\,E[c^2]}};\qquad(8)$$
$$\hat{\tau}_{a_i,a_j}=\arg\max_{\tau}\frac{\sum\rho_{a_i,a_j}(\tau)}{\Gamma};\qquad(9)$$
$$\Psi_{i,j}=\rho_{a_i,a_j}(\tau)\,\bigl(1-\hat{\tau}_{a_i,a_j}\bigr);\qquad(10)$$
where $a_i$ denotes the color histogram feature of the optical flow of a first grid, decomposed from the first monitoring scene captured by the first camera; $a_j$ denotes the color histogram feature of the optical flow of a second grid, decomposed from the second monitoring scene captured by the second camera; the first camera and the second camera are any two cameras in the monitoring network; $c$ denotes the second grid after the time shift $\tau$; $\rho_{a_i,a_j}$ denotes the degree of association between the color histogram features of the optical flow of the first and second grids; $\hat{\tau}_{a_i,a_j}$ denotes the time shift between the first and second grids; and $\Psi_{i,j}$ denotes the topology estimation result for the first camera and the second camera.
Optionally, the 8 directions are spaced every 45 degrees.
Those of ordinary skill in the art will appreciate that the modules included in the above embodiments are divided by functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional modules serve only to distinguish them from one another and do not limit the protection scope of the invention.
Those of ordinary skill in the art will also understand that all or part of the steps of the methods in the above embodiments can be completed by instructing the relevant hardware through a program, which may be stored in a readable storage medium such as ROM/RAM.
The above are only preferred embodiments of the invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included within its protection scope.

Claims (10)

1. A method for estimating the camera network topology in a monitoring network, characterized in that the method comprises:
decomposing the monitoring scene in the video stream captured by each camera in the monitoring network into grids;
for each monitoring scene, obtaining the color histogram of the optical flow of each grid in the scene;
for each monitoring scene, clustering the grids according to the color histograms of their optical flow, to obtain a semantic region segmentation of the scene;
determining the network topology among the cameras in the monitoring network from the semantic region segmentation results of the monitoring scenes.
2. the method for claim 1 is characterized in that, the described color histogram information of obtaining the light stream of each grid in the described monitoring scene is specially:
The video flowing that the definition video camera photographs is I n(X), wherein X is the coordinate of the grid in the monitoring scene of described video flowing, X=(x; Y) T, described x is the horizontal ordinate of described grid, and described y is the ordinate of described grid, and the transposition of described T representing matrix, described n are the numbering of the frame of video that comprises of described video flowing;
Definition W ( X ; p ) = ( 1 + p 1 ) × x + p 3 × y + p 5 p 2 × x + ( 1 + p 4 ) × y + p 6 ; - - - ( 1 )
Wherein, W represents deformable template, p=(p1, p2, p3, p4, p5, p6) T, described p1, p2, p3, p4 are 0, described p5, p6 are the Optic flow information of grid;
Definition p = arg min p Σ x [ I ( W ( x ; p + Δp ) ) - T ( x ) ] 2 ; - - - ( 2 )
Wherein, Δ p represents p poor of twice iteration, T(x) grid that decomposes of expression video flowing the first frame;
Carry out iteration according to (3), (4), (5), until satisfy Δ p less than predetermined threshold value ε;
▿ I = W ( Δx ; p ) = Ix + p 5 Iy + p 6 ; - - - ( 3 )
Wherein, described Ix represents the gradient map of grid on the x direction of principal axis, and described Iy represents the gradient map of grid on the y direction of principal axis, and described ▽ I represents that grid is at process deformable template W (X; P) gradient map after the conversion;
H = Σ x [ ▿ I ] T [ ▿ I ] ; - - - ( 4 )
Δp = H - 1 * Σ x [ ▿ I ] T [ T ( x ) - I ( W ( x ; p ) ) ] ; - - - ( 5 )
Calculate p5, p6 when satisfying Δ p less than predetermined threshold value ε;
Obtain the color of light stream information that light stream obtains grid from three components of RGB of the Optic flow information of grid;
Color of light stream information according to described grid, calculate light stream at the histogram information of 8 directions, described light stream is color histogram features of the light stream of grid at the histogram information of 8 directions, and the color histogram feature of the light stream of described grid comprises the light stream u ' on the horizontal direction bWith the light stream v ' on the vertical direction b
3. The method of claim 2, characterized in that clustering the grids of each monitoring scene according to the color histograms of their optical flow, to obtain the semantic region segmentation of the scene, is specified by:
$$u_n=\sum_{b\in r_n}u'_b;\qquad(6)$$
$$v_n=\sum_{b\in r_n}v'_b.\qquad(7)$$
4. The method of claim 3, characterized in that determining the network topology among the cameras in the monitoring network from the semantic region segmentation results of the monitoring scenes is specified by:
$$\rho_{a_i,a_j}(\tau)=\frac{E[a_i\,c]}{\sqrt{E[a_i^2]\,E[c^2]}};\qquad(8)$$
$$\hat{\tau}_{a_i,a_j}=\arg\max_{\tau}\frac{\sum\rho_{a_i,a_j}(\tau)}{\Gamma};\qquad(9)$$
$$\Psi_{i,j}=\rho_{a_i,a_j}(\tau)\,\bigl(1-\hat{\tau}_{a_i,a_j}\bigr);\qquad(10)$$
where $a_i$ denotes the color histogram feature of the optical flow of a first grid, decomposed from a first monitoring scene captured by a first camera; $a_j$ denotes the color histogram feature of the optical flow of a second grid, decomposed from a second monitoring scene captured by a second camera; the first camera and the second camera are any two cameras in the monitoring network; $c$ denotes the second grid after the time shift $\tau$; $\rho_{a_i,a_j}$ denotes the degree of association between the color histogram features of the optical flow of the first and second grids; $\hat{\tau}_{a_i,a_j}$ denotes the time shift between the first and second grids; and $\Psi_{i,j}$ denotes the topology estimation result for the first camera and the second camera.
5. The method of claim 2, characterized in that the 8 directions are spaced every 45 degrees.
6. A system for estimating the camera network topology in a surveillance network, characterized in that the system comprises:
a decomposition unit, configured to decompose the monitoring scene in the video stream captured by each camera in the surveillance network into grids;
an acquisition unit, configured to obtain, for each monitoring scene, the color histogram information of the optical flow of each grid in the monitoring scene;
a clustering unit, configured to cluster, for each monitoring scene, the grids in the monitoring scene according to the color histogram information of the optical flow of each grid, obtaining a semantic region segmentation result of the monitoring scene;
a determining unit, configured to determine the network topology between the cameras in the surveillance network according to the semantic region segmentation results of the monitoring scenes.
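The clustering unit above groups grid blocks into semantic regions by their optical-flow color-histogram features. The patent does not name a clustering algorithm, so the following is only a plain k-means sketch under that assumption, with hypothetical names (one feature row per grid block):

```python
import numpy as np

def cluster_grids(features, k, n_iter=20, seed=0):
    """Group grid blocks into k semantic regions by k-means over their
    optical-flow color-histogram features. Returns one label per block."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct feature rows.
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    labels = np.zeros(len(features), dtype=int)
    for _ in range(n_iter):
        # Assign each block to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned blocks.
        for j in range(k):
            if np.any(labels == j):          # keep an empty cluster's old center
                centers[j] = features[labels == j].mean(axis=0)
    return labels
```

Blocks whose flow histograms are similar (e.g. pedestrians moving the same way through the same corridor) end up in the same region label.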
7. The system according to claim 6, characterized in that the acquisition unit is specifically configured to:
define the video stream captured by a camera as I_n(X), wherein X is the coordinate of a grid in the monitoring scene of the video stream, X = (x, y)^T, x is the abscissa of the grid, y is the ordinate of the grid, T denotes matrix transposition, and n is the index of a video frame contained in the video stream; define
W(X; p) = ( (1 + p1)·x + p3·y + p5,  p2·x + (1 + p4)·y + p6 )^T;    (1)
wherein W denotes a deformable template, p = (p1, p2, p3, p4, p5, p6)^T, p1, p2, p3 and p4 are 0, and p5, p6 are the optical-flow information of the grid; define
p = argmin_p Σ_x [ I(W(x; p + Δp)) − T(x) ]²;    (2)
wherein Δp denotes the difference of p between two successive iterations, and T(x) denotes the grid decomposed from the first frame of the video stream;
iterate according to (3), (4) and (5) until Δp is smaller than a preset threshold ε:
∇I = W(Δx; p) = ( Ix + p5,  Iy + p6 )^T;    (3)
wherein Ix denotes the gradient map of the grid along the x axis, Iy denotes the gradient map of the grid along the y axis, and ∇I denotes the gradient map of the grid after transformation by the deformable template W(X; p);
H = Σ_x [∇I]^T [∇I];    (4)
Δp = H⁻¹ · Σ_x [∇I]^T [ T(x) − I(W(x; p)) ];    (5)
compute p5 and p6 once Δp is smaller than the preset threshold ε;
obtain the optical-flow color information of the grid from the three RGB components of the optical-flow information of the grid;
according to the optical-flow color information of the grid, compute the histogram of the optical flow over 8 directions; the 8-direction histogram of the optical flow is the optical-flow color-histogram feature of the grid, which comprises the horizontal flow u'_b and the vertical flow v'_b of each grid block b.
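With p1..p4 fixed to 0, the iteration of equations (2)-(5) reduces to a Gauss-Newton (Lucas-Kanade style) estimate of a pure translation (p5, p6) per grid block. A self-contained sketch under that assumption (helper names hypothetical, and bilinear sampling chosen here as one reasonable way to realize the warp):

```python
import numpy as np

def bilinear_sample(img, dx, dy):
    """Sample img at (x + dx, y + dy) with bilinear interpolation,
    returning 0 outside the image: the warp W(x; p) with p1..p4 = 0."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xs, ys = xx + dx, yy + dy
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0

    def at(yi, xi):
        v = np.zeros((h, w))
        ok = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
        v[ok] = img[yi[ok], xi[ok]]
        return v

    return ((1 - fx) * (1 - fy) * at(y0, x0) + fx * (1 - fy) * at(y0, x0 + 1)
            + (1 - fx) * fy * at(y0 + 1, x0) + fx * fy * at(y0 + 1, x0 + 1))

def estimate_translation(template, image, eps=1e-4, max_iter=100):
    """Gauss-Newton estimate of the translational flow p = (p5, p6) of
    one grid block, following equations (2)-(5): build H from the warped
    gradients, solve for Δp, and stop once ||Δp|| < eps."""
    template = template.astype(float)
    image = image.astype(float)
    gy, gx = np.gradient(image)                      # Iy, Ix
    p = np.zeros(2)                                  # (p5, p6)
    for _ in range(max_iter):
        warped = bilinear_sample(image, p[0], p[1])  # I(W(x; p))
        grad = np.stack([bilinear_sample(gx, p[0], p[1]).ravel(),
                         bilinear_sample(gy, p[0], p[1]).ravel()], axis=1)
        hess = grad.T @ grad                         # equation (4)
        err = (template - warped).ravel()            # T(x) - I(W(x; p))
        dp = np.linalg.solve(hess, grad.T @ err)     # equation (5)
        p += dp
        if np.linalg.norm(dp) < eps:
            break
    return p
```

On a block that is simply a shifted copy of the template, the iteration recovers the shift to sub-pixel accuracy within a few steps, provided the displacement stays within the convergence basin of the image gradients.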
8. The system according to claim 7, characterized in that the clustering unit is specifically configured to compute:
u_n = Σ_{b ∈ r_n} u'_b;    (6)
v_n = Σ_{b ∈ r_n} v'_b;    (7)
wherein b ranges over the grid blocks belonging to the region r_n.
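Equations (6) and (7) simply accumulate the per-block flow components over a semantic region; a one-function sketch with hypothetical names:

```python
import numpy as np

def region_flow(u_blocks, v_blocks, labels, region):
    """Equations (6) and (7): sum the horizontal (u'_b) and vertical
    (v'_b) flow components over every grid block b whose label assigns
    it to the semantic region r_n identified by `region`."""
    mask = labels == region
    return float(u_blocks[mask].sum()), float(v_blocks[mask].sum())
```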
9. The system according to claim 8, characterized in that the determining unit is specifically configured to compute:
ρ_{a_i,a_j}(τ) = E[a_i·c] / √( E[a_i²] · E[c²] );    (8)
τ̂_{a_i,a_j} = argmax_τ [ Σ ρ_{a_i,a_j}(τ) / Γ ];    (9)
Ψ_{i,j} = ρ_{a_i,a_j}(τ) · (1 − τ̂_{a_i,a_j});    (10)
wherein a_i denotes the optical-flow color-histogram feature of the first grid, the first grid being obtained by decomposing the first monitoring scene, the first monitoring scene being captured by the first camera; a_j denotes the optical-flow color-histogram feature of the second grid, the second grid being obtained by decomposing the second monitoring scene, the second monitoring scene being captured by the second camera; the first camera and the second camera are any two cameras in the surveillance network; c denotes the feature of the second grid after a time shift of τ; ρ_{a_i,a_j}(τ) denotes the degree of correlation between the optical-flow color-histogram features of the first grid and the second grid; τ̂_{a_i,a_j} denotes the time shift between the first grid and the second grid; and Ψ_{i,j} denotes the estimated topological relation between the first camera and the second camera.
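Equations (8)-(10) can be read as a normalized cross-correlation search over time shifts followed by a shift-discounted score. The sketch below makes two assumptions the patent text does not fix: the shift is applied circularly with np.roll, and τ̂ in (10) is normalized by the search window Γ so the score stays bounded; all names are hypothetical:

```python
import numpy as np

def correlation_at_shift(a, b, tau):
    """Equation (8): normalized correlation between the activity series a
    of the first grid and c, the second grid's series shifted by tau
    frames (circular shift assumed here)."""
    c = np.roll(b, -tau)
    denom = np.sqrt(np.mean(a * a) * np.mean(c * c))
    return float(np.mean(a * c) / denom) if denom > 0 else 0.0

def topology_score(a, b, max_shift):
    """Equations (9) and (10): search the shift window [0, max_shift] for
    the strongest correlation, then discount that correlation by the
    best shift tau_hat (normalized to [0, 1] by the window length)."""
    shifts = np.arange(max_shift + 1)
    rho = np.array([correlation_at_shift(a, b, t) for t in shifts])
    best = int(np.argmax(rho))
    tau_hat = shifts[best] / max(max_shift, 1)   # normalized time shift
    return rho[best] * (1.0 - tau_hat)
```

Two grids with correlated activity at a small lag (e.g. the exit region of one camera and the entry region of an adjacent one) get a high Ψ; uncorrelated or very delayed pairs score low.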
10. The system according to claim 7, characterized in that the 8 directions are spaced 45 degrees apart.
CN201310270349.2A 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes Expired - Fee Related CN103325121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310270349.2A CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310270349.2A CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Publications (2)

Publication Number Publication Date
CN103325121A true CN103325121A (en) 2013-09-25
CN103325121B CN103325121B (en) 2017-05-17

Family

ID=49193844

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310270349.2A Expired - Fee Related CN103325121B (en) 2013-06-28 2013-06-28 Method and system for estimating network topological relations of cameras in monitoring scenes

Country Status (1)

Country Link
CN (1) CN103325121B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886089A (en) * 2014-03-31 2014-06-25 吴怀正 Travelling record video concentrating method based on learning
US20150302655A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
CN107292266A (en) * 2017-06-21 2017-10-24 吉林大学 A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream
CN110798654A (en) * 2018-08-01 2020-02-14 华为技术有限公司 Method and system for defining camera by software and camera
CN113763435A (en) * 2020-06-02 2021-12-07 精标科技集团股份有限公司 Tracking shooting method based on multiple cameras

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101616309A (en) * 2009-07-16 2009-12-30 上海交通大学 Non-overlapping visual field multiple-camera human body target tracking method
KR20110034298A (en) * 2009-09-28 2011-04-05 삼성테크윈 주식회사 Monitoring system of storage area network
CN102436662A (en) * 2011-11-29 2012-05-02 南京信息工程大学 Human body target tracking method in nonoverlapping vision field multi-camera network
CN102724482A (en) * 2012-06-18 2012-10-10 西安电子科技大学 Intelligent visual sensor network moving target relay tracking system based on GPS (global positioning system) and GIS (geographic information system)


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANG Faliang, LI Jiangbao: "Multi-camera relay tracking strategy based on topology model and feature learning", Journal of Jilin University (Engineering and Technology Edition) *
ZHANG Lei, XIANG Xuezhi, ZHAO Chunhui: "Moving object detection based on optical flow field and level set", Journal of Computer Applications *
SHEN Mingjun, OUYANG Ning, MO Jianwen, ZHANG Tong: "Object tracking in a multi-camera environment", Modern Electronics Technique *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103886089B (en) * 2014-03-31 2017-12-15 吴怀正 Driving recording video concentration method based on study
CN103886089A (en) * 2014-03-31 2014-06-25 吴怀正 Travelling record video concentrating method based on learning
US10008038B2 (en) 2014-04-18 2018-06-26 Magic Leap, Inc. Utilizing totems for augmented or virtual reality systems
US10198864B2 (en) 2014-04-18 2019-02-05 Magic Leap, Inc. Running object recognizers in a passable world model for augmented or virtual reality
US9766703B2 (en) 2014-04-18 2017-09-19 Magic Leap, Inc. Triangulation of points using known points in augmented or virtual reality systems
US20150302655A1 (en) * 2014-04-18 2015-10-22 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
US9761055B2 (en) 2014-04-18 2017-09-12 Magic Leap, Inc. Using object recognizers in an augmented or virtual reality system
US9852548B2 (en) 2014-04-18 2017-12-26 Magic Leap, Inc. Systems and methods for generating sound wavefronts in augmented or virtual reality systems
US10043312B2 (en) 2014-04-18 2018-08-07 Magic Leap, Inc. Rendering techniques to find new map points in augmented or virtual reality systems
US9911233B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. Systems and methods for using image based light solutions for augmented or virtual reality
US9911234B2 (en) 2014-04-18 2018-03-06 Magic Leap, Inc. User interface rendering in augmented or virtual reality systems
US9922462B2 (en) 2014-04-18 2018-03-20 Magic Leap, Inc. Interacting with totems in augmented or virtual reality systems
US9928654B2 (en) 2014-04-18 2018-03-27 Magic Leap, Inc. Utilizing pseudo-random patterns for eye tracking in augmented or virtual reality systems
US9972132B2 (en) 2014-04-18 2018-05-15 Magic Leap, Inc. Utilizing image based light solutions for augmented or virtual reality
US9984506B2 (en) 2014-04-18 2018-05-29 Magic Leap, Inc. Stress reduction in geometric maps of passable world model in augmented or virtual reality systems
US9996977B2 (en) 2014-04-18 2018-06-12 Magic Leap, Inc. Compensating for ambient light in augmented or virtual reality systems
US11205304B2 (en) 2014-04-18 2021-12-21 Magic Leap, Inc. Systems and methods for rendering user interfaces for augmented or virtual reality
US10013806B2 (en) 2014-04-18 2018-07-03 Magic Leap, Inc. Ambient light compensation for augmented or virtual reality
US9881420B2 (en) 2014-04-18 2018-01-30 Magic Leap, Inc. Inferential avatar rendering techniques in augmented or virtual reality systems
US10109108B2 (en) 2014-04-18 2018-10-23 Magic Leap, Inc. Finding new points by render rather than search in augmented or virtual reality systems
US10115233B2 (en) 2014-04-18 2018-10-30 Magic Leap, Inc. Methods and systems for mapping virtual objects in an augmented or virtual reality system
US10115232B2 (en) * 2014-04-18 2018-10-30 Magic Leap, Inc. Using a map of the world for augmented or virtual reality systems
US10127723B2 (en) 2014-04-18 2018-11-13 Magic Leap, Inc. Room based sensors in an augmented reality system
US10186085B2 (en) 2014-04-18 2019-01-22 Magic Leap, Inc. Generating a sound wavefront in augmented or virtual reality systems
US9767616B2 (en) 2014-04-18 2017-09-19 Magic Leap, Inc. Recognizing objects in a passable world model in an augmented or virtual reality system
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
US10909760B2 (en) 2014-04-18 2021-02-02 Magic Leap, Inc. Creating a topological map for localization in augmented or virtual reality systems
US10665018B2 (en) 2014-04-18 2020-05-26 Magic Leap, Inc. Reducing stresses in the passable world model in augmented or virtual reality systems
US10825248B2 (en) 2014-04-18 2020-11-03 Magic Leap, Inc. Eye tracking systems and method for augmented or virtual reality
US10846930B2 (en) 2014-04-18 2020-11-24 Magic Leap, Inc. Using passable world model for augmented or virtual reality
CN107292266A (en) * 2017-06-21 2017-10-24 吉林大学 A kind of vehicle-mounted pedestrian area estimation method clustered based on light stream
CN110798654A (en) * 2018-08-01 2020-02-14 华为技术有限公司 Method and system for defining camera by software and camera
CN110798654B (en) * 2018-08-01 2021-12-10 华为技术有限公司 Method and system for defining camera by software and camera
CN113763435A (en) * 2020-06-02 2021-12-07 精标科技集团股份有限公司 Tracking shooting method based on multiple cameras

Also Published As

Publication number Publication date
CN103325121B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
Sheikh et al. Bayesian modeling of dynamic scenes for object detection
CN105844234B (en) Method and equipment for counting people based on head and shoulder detection
Stauffer Estimating tracking sources and sinks
CN102436662B (en) Human body target tracking method in nonoverlapping vision field multi-camera network
CN105447458B (en) A kind of large-scale crowd video analytic system and method
Pellegrini et al. Improving data association by joint modeling of pedestrian trajectories and groupings
Hu et al. A system for learning statistical motion patterns
Yu et al. Multiple target tracking using spatio-temporal markov chain monte carlo data association
EP1854083B1 (en) Object tracking camera
WO2019031083A1 (en) Method and system for detecting action
CN103325121A (en) Method and system for estimating network topological relations of cameras in monitoring scenes
CN103426179B (en) A kind of method for tracking target based on mean shift multiple features fusion and device
CN107977646B (en) Partition delivery detection method
CN102447835A (en) Non-blind-area multi-target cooperative tracking method and system
Abdelkader et al. Integrated motion detection and tracking for visual surveillance
Choeychuen Automatic parking lot mapping for available parking space detection
CN103971384B (en) Node cooperation target tracking method of wireless video sensor
Ullah et al. Structured learning for crowd motion segmentation
CN113743260A (en) Pedestrian tracking method under dense pedestrian flow condition of subway platform
Fehr et al. Counting people in groups
CN109977796A (en) Trail current detection method and device
Andrade et al. Characterisation of optical flow anomalies in pedestrian traffic
Yugendar et al. Analysis of crowd flow parameters using artificial neural network
Aycard et al. Grid based fusion & tracking
Sebe et al. Globally optimum multiple object tracking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 518000 Guangdong Province, Shenzhen, Futian District, Shennan Road, Press Plaza, Room 1306

Applicant after: Bianco robot Co Ltd

Applicant after: Shanghai Zhongke Institute for Advanced Study

Applicant after: Smart City Information Technology Co., Ltd.

Address before: 518000 Guangdong Province, Shenzhen, Futian District, Shennan Road, Press Plaza, Room 1306

Applicant before: Anke Smart Cities Technolongy (PRC) Co., Ltd.

Applicant before: Shanghai Zhongke Institute for Advanced Study

Applicant before: Smart City Information Technology Co., Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170517

Termination date: 20180628

CF01 Termination of patent right due to non-payment of annual fee