CN113192125B - Multi-camera video concentration method and system in virtual viewpoint-optimized geographic scene - Google Patents

Multi-camera video concentration method and system in virtual viewpoint-optimized geographic scene

Info

Publication number
CN113192125B
CN113192125B (application number CN202110327605.1A)
Authority
CN
China
Prior art keywords
camera
video
virtual
scene
geographic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110327605.1A
Other languages
Chinese (zh)
Other versions
CN113192125A (en)
Inventor
解愉嘉
毛波
王崴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Finance and Economics
Original Assignee
Nanjing University of Finance and Economics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Finance and Economics filed Critical Nanjing University of Finance and Economics
Priority to CN202110327605.1A priority Critical patent/CN113192125B/en
Publication of CN113192125A publication Critical patent/CN113192125A/en
Application granted granted Critical
Publication of CN113192125B publication Critical patent/CN113192125B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Abstract

A multi-camera video concentration method and system in a virtual viewpoint-optimized geographic scene acquires video sequence image information, collects homonymous point pairs in the video image and a three-dimensional geographic scene model, and obtains the coordinate data of the homonymous point pairs; establishes a mapping relation between the video image and geographic space according to the coordinate data of the homonymous point pairs and locates the camera field of view; constructs a camera observation domain model by analyzing the observable distance and the line-of-sight deflection angle and generates an observable set of the camera group; optimizes the observable set by constructing an evaluation model to obtain a virtual viewpoint group; and presets display parameters of moving objects and concentrates the multi-camera video according to the display parameters. The notable effects are that the mapping relation between video objects and the geographic scene is established, the fused expression of surveillance video in the geographic scene is enhanced, and great convenience is provided for integrated rapid retrieval and efficient understanding of video and geographic scene information.

Description

Multi-camera video concentration method and system in virtual viewpoint-optimized geographic scene
Technical Field
The invention relates to the technical field of real-time fusion of video streams and three-dimensional models, and in particular to a multi-camera video concentration method and system in a virtual viewpoint-optimized geographic scene.
Background
As the accuracy and real-time requirements of Virtual Geographic Environments (VGEs) for scene simulation increase, multi-source heterogeneous data are introduced to enhance the visual expression and analysis functions of VGEs. Video data can not only provide a real-scene presentation of the geographic environment but also describe the spatio-temporal movement of moving objects (pedestrians, vehicles, etc.) in the geographic scene. When a user views a video in a VGE, the virtual viewpoint is typically placed at a virtual position close to the camera's original geographic location.
However, the conventional virtual viewpoint selection manner often has the following difficulties and problems in actual deployment and use:
First, this approach is convenient for viewing a single camera's short video. However, if a scene contains multiple video streams whose camera sight directions differ and whose fields of view do not overlap and are discretely distributed, it is difficult for the user to view all videos from a single virtual viewpoint. If a separate virtual viewpoint is set for each video stream and the streams are viewed one by one, the viewing time is greatly prolonged, which prevents the user from browsing the video content quickly.
Second, a video moving object usually appears successively in cameras covering different areas; viewing each video stream independently cannot express the global motion of the moving object across the scene.
Therefore, how to effectively select a small number of virtual viewpoints in a virtual scene to quickly view a multi-camera video object and show the inter-camera motion condition of the video object becomes a technical problem to be solved.
Disclosure of Invention
Therefore, the invention provides a multi-camera video concentration method and system in a virtual viewpoint-optimized geographic scene, aiming to solve the following problems of the prior art: when a user views videos in a VGE, the virtual viewpoint is generally placed at a virtual position close to the camera's original geographic location, which is convenient for viewing a single camera's short video; but when a scene contains multiple video streams whose camera sight directions differ and whose fields of view do not overlap and are discretely distributed, the user can hardly view all videos from a single virtual viewpoint, and setting a separate virtual viewpoint for each video stream for one-by-one viewing greatly prolongs the viewing time and prevents quick browsing of the video content; moreover, a video moving object usually appears successively in cameras covering different areas, and viewing each video stream independently cannot express the global cross-camera motion of the moving object in the scene.
In order to achieve the above object, the present invention provides the following technical solutions: in a first aspect, a method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene is provided, including the steps of:
acquiring video sequence image information, acquiring homonymous point pairs in the video image and the three-dimensional geographic scene model, and acquiring the coordinate data of the homonymous point pairs, wherein the coordinate data comprises image coordinates and geographic coordinates;
establishing a mapping relation between the video image and geographic space according to the coordinate data of the homonymous point pairs, and locating the camera field of view;
constructing a camera observation domain model by analyzing the observable distance and the sight deflection angle, and generating an observable set of a camera group;
optimizing the observable collection by constructing an evaluation model to obtain a virtual viewpoint group;
presetting display parameters of a moving target, and concentrating the multi-camera video according to the display parameters.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, the video image is the first frame image captured from the surveillance video.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, the three-dimensional geographic scene model is a three-dimensional scene model constructed from real geographic scene measurement information; the number of homonymous point pairs collected on the video image and the virtual geographic scene is not less than three, and the three homonymous point pairs are not all collinear.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, the establishing of the mapping relation between the video image and geographic space comprises the following steps:
a1) Preset the object point geospatial coordinate Q corresponding to a given image point image space coordinate q, and express q and Q as homogeneous coordinates:
q = [x y 1]^T
Q = [X Y Z 1]^T
Record the homography matrix M; the relation between q and Q is as follows:
q = MQ;
the homography matrix M has the expression:
a2 Solving geospatial coordinates of object points corresponding to image points in each image:
a3) Assuming there are L cameras in the current camera network, then for the k-th camera (k = 1, 2, ..., L), its mapping matrix is labeled M_k; each camera position in geographic space and the geospatial position of each camera view polygon are then defined;
wherein the camera position is regarded as one point in geographic space, and the camera view polygon is recorded as the polygon formed by sequentially connecting o boundary points P_{k,num}.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, two factors, the virtual line-of-sight distance and the camera-virtual viewpoint angle, are selected as constraint conditions in the process of locating the camera field of view;
the virtual line-of-sight distance refers to the geospatial distance between the virtual viewpoint and a given point in the viewing area; the camera-virtual viewpoint angle refers to the angle formed, with a given point in the field of view as the corner point, by the projections of that point, the virtual viewpoint and the camera position point onto the horizontal plane;
a distance threshold T_dis and an angle threshold T_ang are defined as constraints, and assuming that T_dis and T_ang have been given, a region satisfying the constraints is sought in the scene model as the virtual viewpoint range.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, constructing the camera observation domain model by analyzing the observable distance and the line-of-sight deflection angle and generating the camera group observable set comprises the following steps:
b1) Record the camera position and the camera view polygon; among the edges of the view polygon, the line segment nearest to the camera position is P_{k,n1}P_{k,n2} and the farthest line segment is P_{k,n3}P_{k,n4};
b2) With points P_{k,n3} and P_{k,n4} as centers and the distance threshold T_dis as radius, semicircles are drawn on the side of segment P_{k,n3}P_{k,n4} facing the camera position; the intersection area of the semicircles lying on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint distance region A_{k,dis};
b3) With points P_{k,n1} and P_{k,n2} as corner points and T_ang as the deflection angle, four rays are drawn by deflecting clockwise and counterclockwise by T_ang respectively; their intersection area on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint angle region A_{k,ang};
b4) The virtual viewpoint range A_k of the camera is the intersection of A_{k,dis} and A_{k,ang};
b5) Record Obj as the total set of all video moving objects in all cameras; suppose there are N_k video moving objects in the k-th camera, and the trajectory of each video moving object is recorded as C_{k,i}, whose expression is as follows:
Obj = {C_{k,i}, (k = 1, 2, ..., L)}
C_{k,i} = {I_{k,i,j}, P_{k,i,j}, (i = 1, 2, ..., N_k)(j = 1, 2, ..., n)};
where L represents the number of cameras, and I_{k,i,j} and P_{k,i,j} represent, for the i-th video moving object in the k-th camera, its sub-image in the j-th video frame and the geospatial position of that sub-image; through cross-camera association analysis of the video moving objects, the single-camera video moving object trajectories are merged to obtain the multi-camera video moving object trajectories, realizing the multi-camera association organization of video moving objects:
Cube_io = {C_{k1,i1}, C_{k2,i2}, ..., C_{ko,iL}}, (k1, k2, ..., ko) ∈ (1, 2, ..., L);
where L_o represents the total number of video moving objects after the cross-camera homonymous video moving objects in the surveillance video network are merged, Cube_io represents the global trajectory of the video moving object with sequence number io in the surveillance video network, and C_{ko,iL} represents the sub-trajectory of the video moving object with sequence number io in the ko-th camera.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, the observable set is optimized by constructing an evaluation model to obtain the virtual viewpoint group, which specifically comprises the following steps:
c1) Let L be the number of cameras and M be the set of all camera combination modes:
m_i = {n_{i,j}}
where m_i refers to the i-th camera combination mode and contains all camera groups in that combination mode; n_{i,j} refers to the j-th camera group under combination mode m_i and contains all cameras of that group, each element of which refers to an individual camera in the j-th camera group of the i-th combination mode;
c2) Given the distance threshold T_dis and the angle threshold T_ang, for each camera combination mode m_i, the observable domain of every camera in each camera group n_{i,j} is solved and their intersection is taken; if, for a camera combination mode m_i, the observation domain intersection of the cameras in every camera group n_{i,j} is non-empty, the combination mode m_i is recorded as an observable combination; otherwise, the combination mode m_i is recorded as a non-observable combination;
c3) Based on the multi-camera video object trajectory data, the following video concentration optimization targets are specified to realize the preference of the camera sets:
(1) consistency of the cross-camera expression of homonymous objects, i.e., the cameras in which a single object appears are expressed jointly with as few virtual viewpoints as possible;
(2) the total number of virtual viewpoints used for expressing all video objects is as small as possible;
c4) The multi-camera video object expression effect of the camera combination corresponding to a virtual viewpoint group is comprehensively evaluated by a value score:
where n_c represents the total number of cameras, n_v represents the number of virtual viewpoints, N represents the total number of video moving objects, m_i represents the number of virtual viewpoints used for the associated expression of each video moving object, and μ is a weight parameter;
c5) When the distance threshold T_dis and the angle threshold T_ang take given values, the value of every current observable camera set is calculated by defining the parameter α, the set with the maximum value is taken as the camera combination selection result, and multi-camera video concentration is performed in the virtual scene.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, presetting the display parameters of the moving object and concentrating the multi-camera video according to the display parameters comprises the following steps:
d1) Under the current camera combination, the video moving objects of all L cameras require in total W virtual viewpoints (W ≤ L); meanwhile, the frame rate fps at which video moving object sub-images are displayed in the three-dimensional scene is set, i.e., the number of sub-images displayed per second for a single video moving object; the object display interval t_0 is set as the time interval for adding and displaying a new video moving object;
d2) For a certain virtual viewpoint w (w ≤ W), the geospatial trajectory T_0 of the first-appearing moving object O_0 is displayed first, and the order in which the video object appears across the different cameras is identified;
the video object sub-images are screened according to the frame rate fps, the plane coordinates corresponding to the screened sub-images are converted into geographic coordinates, and the video object sub-images are scaled by the scale coefficients P_w and P_h, which are calculated as follows:
where the average width and height of a suitable number of sub-images randomly selected from the video object sub-image library are used; the coordinates of the upper-left, lower-left and upper-right points of each selected sub-image in the original video frame are mapped to the corresponding geographic positions in the virtual scene to obtain the length and height of the video object sub-image in three-dimensional space, whose averages give the average length and height of the video object sub-images displayed in the virtual scene;
d3) During dynamic display, the video object sub-image of O_0 for the current frame is displayed at the corresponding geographic position within the camera view in the virtual scene according to the frame rate fps, and the old sub-images are no longer displayed;
at times t_0, 2t_0, ..., nt_0, video objects O_1, O_2, ..., O_n are respectively added and dynamically expressed in the three-dimensional scene model to realize multi-camera video object concentration.
As a preferable scheme of the multi-camera video concentration method in the virtual viewpoint-optimized geographic scene, for the case in which the same section of object trajectory produced in an overlapping camera shooting area is acquired by multiple cameras, the camera from which the object sub-image is taken is determined by comparing the angles formed at the object trajectory point between the virtual viewpoint and each of the two camera positions:
camera a and camera b have a view overlap region C; for a video object passing through region C, the angles formed at the trajectory point between each camera position and the virtual viewpoint V, namely α and β, are compared; if α ≤ β, the video object sub-image acquired by camera a is used, otherwise the sub-image acquired by camera b is used.
In a second aspect, there is provided a multi-camera video concentration system in a virtual-viewpoint-preferred geographical scene, employing the method of multi-camera video concentration in a virtual-viewpoint-preferred geographical scene of the first aspect or any possible implementation thereof, the concentration system comprising:
the homonymy point acquisition module: the method comprises the steps of acquiring video sequence image information, acquiring homonymous point pairs in a video image and a three-dimensional geographic scene model, and acquiring coordinate data of the homonymous point pairs, wherein the coordinate data comprises image coordinates and geographic coordinates;
the mapping relation construction module: the method comprises the steps of establishing a mapping relation between a video image and a geographic space according to coordinate data of a homonymy point pair, and positioning a camera viewing area;
the camera group observable collection generating module: the method comprises the steps of constructing a camera observation domain model by analyzing an observable distance and a sight deflection angle, and generating a camera group observable set;
virtual viewpoint group generation module: the method comprises the steps of optimizing the observable collection by constructing an evaluation model to obtain a virtual viewpoint group;
video object space-time motion expression module: and presetting display parameters of a moving object, and concentrating the multi-camera video according to the display parameters.
The invention has the following advantages: video sequence image information is acquired, homonymous point pairs in the video image and the three-dimensional geographic scene model are collected, and the coordinate data of the homonymous point pairs, comprising image coordinates and geographic coordinates, are obtained; a mapping relation between the video image and geographic space is established according to the coordinate data of the homonymous point pairs, and the camera field of view is located; a camera observation domain model is constructed by analyzing the observable distance and the line-of-sight deflection angle, and the observable set of the camera group is generated; the observable set is optimized by constructing an evaluation model to obtain the virtual viewpoint group; and the display parameters of the moving objects are preset, and the multi-camera video is concentrated according to the display parameters. The notable effects are that the mapping relation between video objects and the geographic scene is established, the fused expression of the surveillance video in the geographic scene is enhanced, and great convenience is provided for the integrated rapid retrieval and efficient understanding of video and geographic scene information.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It will be apparent to those of ordinary skill in the art that the drawings in the following description are exemplary only and that other implementations can be obtained from the extensions of the drawings provided without inventive effort.
The structures, proportions, sizes, etc. shown in the present specification are shown only for the purposes of illustration and description, and are not intended to limit the scope of the invention, which is defined by the claims, so that any structural modifications, changes in proportions, or adjustments of sizes, which do not affect the efficacy or the achievement of the present invention, should fall within the scope of the invention.
Fig. 1 is a schematic diagram of integrated representation of multiple videos in a VGE according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a multi-camera video concentration method in a geographic scene with preferable virtual viewpoint provided in the embodiment of the invention;
FIG. 3 is a schematic diagram of a camera and a geospatial coordinate system and an image space coordinate system provided in an embodiment of the present invention;
FIG. 4 (a) is a schematic view of a virtual line of sight distance provided in an embodiment of the present invention;
fig. 4 (b) is a schematic diagram of a camera-virtual viewpoint angle provided in an embodiment of the present invention;
FIG. 5 (a) is a schematic diagram of the nearest/farthest line segment from the camera to the video field of view according to an embodiment of the present invention;
fig. 5 (b) is a schematic diagram of a virtual viewpoint distance reasonable area provided in an embodiment of the present invention;
fig. 5 (c) is a schematic diagram of a reasonable virtual viewpoint angle area provided in the embodiment of the present invention;
FIG. 5 (d) is a schematic view of a virtual viewpoint angle and distance reasonable region provided in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an observable camera set provided in an embodiment of the present invention;
FIG. 7 is a schematic diagram of multi-camera video object concentration in a geographic scene provided in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a camera view overlapping process according to an embodiment of the present invention;
fig. 9 is a schematic diagram of a multi-camera video concentration system in a geographic scene with preferred virtual viewpoint according to an embodiment of the present invention.
Detailed Description
Other aspects and advantages of the present invention will become readily apparent to those skilled in the art from the following disclosure, which describes certain specific embodiments, but not all embodiments, by way of illustration. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
Referring to fig. 1, by introducing video into VGE, video intelligent analysis is supported by geospatial information, and related functions such as video data organization management, spatial mapping, video-scene fusion expression and the like in VGE can be realized.
Referring to fig. 2, a method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene is provided, which includes the following steps:
s1, acquiring video sequence image information, acquiring a homonymous point pair in a video image and a three-dimensional geographic scene model, and acquiring coordinate data of the homonymous point pair, wherein the coordinate data comprises image coordinates and geographic coordinates;
s2, establishing a mapping relation between the video image and the geographic space according to the coordinate data of the same name point pair, and positioning the camera viewing area;
s3, constructing a camera observation domain model by analyzing the observable distance and the sight deflection angle, and generating an observable set of a camera set;
s4, optimizing the observable collection by constructing an evaluation model to obtain a virtual viewpoint group;
s5, presetting display parameters of a moving object, and concentrating the multi-camera video according to the display parameters.
Specifically, in step S1, the video image is the first frame image captured from the surveillance video. In step S1, the three-dimensional geographic scene model is a three-dimensional scene model constructed from real geographic scene measurement information; the number of homonymous point pairs collected on the video image and the virtual geographic scene is not less than three, and the three homonymous point pairs are not all collinear.
Specifically, referring to fig. 3, the relationship between the camera, the image space coordinate system and the geospatial coordinate system is illustrated. The photographing station center is marked as C, the image space coordinate system is marked as O_i X_i Y_i, and the geospatial coordinate system is denoted as O_g X_g Y_g Z_g. In step S2, the establishing of the mapping relation between the video image and geographic space comprises the following steps:
a1) Preset the object point geospatial coordinate Q corresponding to a given image point image space coordinate q, and express q and Q as homogeneous coordinates:
q = [x y 1]^T
Q = [X Y Z 1]^T
Record the homography matrix M; the relation between q and Q is as follows:
q = MQ;
the homography matrix M has the expression:
a2) Since M has 6 unknowns, at least 3 sets of known image point image space coordinates and object point geospatial coordinates are needed to solve for M. After M is determined, the geospatial coordinates of the object point corresponding to each image point are solved:
a3) Assuming there are L cameras in the current camera network, then for the k-th camera (k = 1, 2, ..., L), its mapping matrix is labeled M_k. On this basis, each camera position in geographic space and the geospatial position of each camera view polygon are defined;
wherein the camera position is regarded as one point in geographic space, and the camera view polygon is recorded as the polygon formed by sequentially connecting o boundary points P_{k,num}.
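As an illustration of steps a1)-a3), the following Python sketch estimates the image-to-ground mapping from no fewer than three non-collinear homonymous point pairs and applies it to map image points to geospatial coordinates, which is the direction actually used in step a2). Because the exact form of the matrix M is given only as a figure in the original specification, the sketch assumes a planar mapping with six unknowns solved by least squares; the function names, the sample point pairs and the use of numpy are illustrative assumptions, not part of the patent.

```python
import numpy as np

def solve_mapping(image_pts, ground_pts):
    """Estimate the 6-unknown image-to-ground mapping from >= 3 homonymous
    point pairs by least squares. image_pts: [(x, y)], ground_pts: [(X, Y)].
    All ground points are assumed to lie on one plane, so Z is handled separately."""
    if len(image_pts) < 3:
        raise ValueError("at least three non-collinear homonymous point pairs are required")
    A, b = [], []
    for (x, y), (X, Y) in zip(image_pts, ground_pts):
        # Each homonymous point pair contributes two linear equations in the six unknowns.
        A.append([x, y, 1, 0, 0, 0]); b.append(X)
        A.append([0, 0, 0, x, y, 1]); b.append(Y)
    m, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return m.reshape(2, 3)  # rows give X and Y as functions of (x, y, 1)

def image_to_ground(M, x, y, z_plane=0.0):
    """Map an image point (x, y) to geospatial coordinates (X, Y, Z)."""
    X, Y = M @ np.array([x, y, 1.0])
    return X, Y, z_plane

# Hypothetical homonymous point pairs picked on the first video frame and in the 3D scene model.
img = [(120, 340), (860, 355), (500, 900)]
gnd = [(10.0, 52.0), (48.5, 51.2), (30.1, 18.7)]
M_k = solve_mapping(img, gnd)
print(image_to_ground(M_k, 640, 480))
```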
In this embodiment, in step S2, two factors, the virtual line-of-sight distance and the camera-virtual viewpoint angle, are selected as constraint conditions in the process of locating the camera field of view;
referring specifically to fig. 4, the virtual line-of-sight distance refers to the geospatial distance between the virtual viewpoint and a given point in the viewing area; the camera-virtual viewpoint angle refers to the angle formed, with a given point in the field of view as the corner point, by the projections of that point, the virtual viewpoint and the camera position point onto the horizontal plane;
a distance threshold T_dis and an angle threshold T_ang are defined as constraints, and assuming that T_dis and T_ang have been given, on this basis a region satisfying the constraints is sought in the scene model as the virtual viewpoint range.
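A minimal sketch of the two constraints follows, assuming all points are already projected onto the horizontal plane and given as 2D coordinates; the helper name and the example values of T_dis and T_ang are illustrative assumptions only.

```python
import math

def satisfies_constraints(viewpoint, camera_pos, view_point, T_dis, T_ang):
    """Check whether a candidate virtual viewpoint satisfies the two constraints
    for one given point of the viewing area:
      - virtual line-of-sight distance: |viewpoint - view_point| <= T_dis
      - camera-virtual viewpoint angle at the view_point corner: <= T_ang (degrees)"""
    vx, vy = viewpoint[0] - view_point[0], viewpoint[1] - view_point[1]
    cx, cy = camera_pos[0] - view_point[0], camera_pos[1] - view_point[1]
    dist = math.hypot(vx, vy)
    # Angle between the rays view_point->viewpoint and view_point->camera_pos.
    dot = vx * cx + vy * cy
    norm = dist * math.hypot(cx, cy)
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))) if norm > 0 else 0.0
    return dist <= T_dis and ang <= T_ang

# Example: a candidate viewpoint roughly 20 m from a field-of-view point, checked against
# an assumed 25 m distance threshold and a 30 degree angle threshold.
print(satisfies_constraints((15.0, 12.0), (0.0, 0.0), (10.0, 30.0), T_dis=25.0, T_ang=30.0))
```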
Specifically, in step S3, constructing the camera observation domain model by analyzing the observable distance and the line-of-sight deflection angle and generating the camera group observable set comprises the following steps:
b1) Referring specifically to fig. 5 (a), record the camera position and the camera view polygon; among the edges of the view polygon, the line segment nearest to the camera position is P_{k,n1}P_{k,n2} and the farthest line segment is P_{k,n3}P_{k,n4};
b2) Referring specifically to fig. 5 (b), with points P_{k,n3} and P_{k,n4} as centers and the distance threshold T_dis as radius, semicircles are drawn on the side of segment P_{k,n3}P_{k,n4} facing the camera position; the intersection area of the semicircles lying on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint distance region A_{k,dis};
b3) Referring specifically to fig. 5 (c), with points P_{k,n1} and P_{k,n2} as corner points and T_ang as the deflection angle, four rays are drawn by deflecting clockwise and counterclockwise by T_ang respectively; their intersection area on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint angle region A_{k,ang};
b4) Referring specifically to fig. 5 (d), the virtual viewpoint range A_k of the camera is the intersection of A_{k,dis} and A_{k,ang};
b5) Record Obj as the total set of all video moving objects in all cameras; suppose there are N_k video moving objects in the k-th camera, and the trajectory of each video moving object is recorded as C_{k,i}, whose expression is as follows:
Obj = {C_{k,i}, (k = 1, 2, ..., L)}
C_{k,i} = {I_{k,i,j}, P_{k,i,j}, (i = 1, 2, ..., N_k)(j = 1, 2, ..., n)};
where L represents the number of cameras, and I_{k,i,j} and P_{k,i,j} represent, for the i-th video moving object in the k-th camera, its sub-image in the j-th video frame and the geospatial position of that sub-image; through cross-camera association analysis of the video moving objects, the single-camera video moving object trajectories are merged to obtain the multi-camera video moving object trajectories, realizing the multi-camera association organization of video moving objects:
Cube_io = {C_{k1,i1}, C_{k2,i2}, ..., C_{ko,iL}}, (k1, k2, ..., ko) ∈ (1, 2, ..., L);
where L_o represents the total number of video moving objects after the cross-camera homonymous video moving objects in the surveillance video network are merged, Cube_io represents the global trajectory of the video moving object with sequence number io in the surveillance video network, and C_{ko,iL} represents the sub-trajectory of the video moving object with sequence number io in the ko-th camera.
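A sketch of the data organization described in step b5) is shown below, using hypothetical Python field names; the cross-camera association itself (deciding which single-camera tracks belong to the same homonymous object) is taken as given here through a shared object identifier.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class TrackPoint:
    sub_image: object                # I_{k,i,j}: sub-image of the object in video frame j
    geo_pos: Tuple[float, float]     # P_{k,i,j}: geospatial position of that sub-image

@dataclass
class SingleCameraTrack:             # C_{k,i}
    camera_id: int                   # k
    object_id: int                   # identifier shared by homonymous objects across cameras
    points: List[TrackPoint] = field(default_factory=list)

def merge_tracks(tracks: List[SingleCameraTrack]) -> Dict[int, List[SingleCameraTrack]]:
    """Merge single-camera tracks into multi-camera trajectories (Cube_io),
    grouping them by the cross-camera object identifier."""
    cubes: Dict[int, List[SingleCameraTrack]] = {}
    for t in tracks:
        cubes.setdefault(t.object_id, []).append(t)
    return cubes
```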
Specifically, in step S4, the observable collection is optimized by constructing an evaluation model, and a virtual viewpoint group is obtained, which specifically includes the following steps:
c1) Let L be the number of cameras and M be the set of all camera combination modes:
m_i = {n_{i,j}}
Referring specifically to fig. 6, m_i refers to the i-th camera combination mode and contains all camera groups in that combination mode; n_{i,j} refers to the j-th camera group under combination mode m_i and contains all cameras of that group, each element of which refers to an individual camera in the j-th camera group of the i-th combination mode;
c2) Given the distance threshold T_dis and the angle threshold T_ang, for each camera combination mode m_i, the observable domain of every camera in each camera group n_{i,j} is solved and their intersection is taken; if, for a camera combination mode m_i, the observation domain intersection of the cameras in every camera group n_{i,j} is non-empty, the combination mode m_i is recorded as an observable combination; otherwise, the combination mode m_i is recorded as a non-observable combination;
c3) Based on the multi-camera video object trajectory data, the following video concentration optimization targets are specified to realize the preference of the camera sets:
(1) consistency of the cross-camera expression of homonymous objects, i.e., the cameras in which a single object appears are expressed jointly with as few virtual viewpoints as possible;
(2) the total number of virtual viewpoints used for expressing all video objects is as small as possible;
c4) The multi-camera video object expression effect of the camera combination corresponding to a virtual viewpoint group is comprehensively evaluated by a value score:
where n_c represents the total number of cameras, n_v represents the number of virtual viewpoints, N represents the total number of video moving objects, m_i represents the number of virtual viewpoints used for the associated expression of each video moving object, and μ is a weight parameter;
c5) When the distance threshold T_dis and the angle threshold T_ang take given values, the value of every current observable camera set is calculated by defining the parameter α, the set with the maximum value is taken as the camera combination selection result, and multi-camera video concentration is performed in the virtual scene.
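The following sketch outlines steps c1)-c5): camera combination modes are enumerated as partitions of the camera indices, a mode is kept only if the observable domains of the cameras in every group intersect, and the best mode is selected by a score. The observable domains are represented as shapely polygons for illustration, and the scoring expression is an assumption standing in for the value formula of the original specification (which is given there only as a figure); it merely reflects the two stated optimization targets.

```python
from shapely.geometry import Polygon

def partitions(items):
    """Enumerate all partitions of a list (camera combination modes m_i)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for smaller in partitions(rest):
        for i, group in enumerate(smaller):
            yield smaller[:i] + [[first] + group] + smaller[i + 1:]
        yield [[first]] + smaller

def observable(partition, domains):
    """A combination mode is observable only if, in every group n_{i,j}, the
    observable domains of all its cameras have a non-empty common intersection."""
    for group in partition:
        inter = domains[group[0]]
        for cam in group[1:]:
            inter = inter.intersection(domains[cam])
        if inter.is_empty:
            return False
    return True

def value(partition, objects, mu=0.5):
    """Assumed surrogate for the value score of step c4): it rewards using few
    virtual viewpoints overall and few viewpoints per video moving object."""
    n_v = len(partition)                       # one virtual viewpoint per camera group
    n_c = sum(len(g) for g in partition)
    per_object = [sum(1 for g in partition if set(g) & set(cams)) for cams in objects]
    m_avg = sum(per_object) / max(len(per_object), 1)
    return mu * (n_c / n_v) + (1 - mu) / max(m_avg, 1e-9)  # illustrative only

# Hypothetical observable domains of three cameras and the cameras each object crosses.
domains = {0: Polygon([(0, 0), (4, 0), (4, 4), (0, 4)]),
           1: Polygon([(2, 2), (6, 2), (6, 6), (2, 6)]),
           2: Polygon([(10, 10), (12, 10), (12, 12), (10, 12)])}
objects = [[0, 1], [2]]
best = max((p for p in partitions([0, 1, 2]) if observable(p, domains)),
           key=lambda p: value(p, objects))
print(best)
```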
Referring specifically to fig. 7, based on the observable set preference result of step S4 and under the selected camera display combination, the center point of each camera group's observable domain is taken as the virtual viewpoint, the moving object display parameters are set, and multi-camera video concentration is performed.
In step S5, presetting the display parameters of the moving object and concentrating the multi-camera video according to the display parameters comprises the following steps:
d1) Under the current camera combination, the video moving objects of all L cameras require in total W virtual viewpoints (W ≤ L); meanwhile, the frame rate fps at which video moving object sub-images are displayed in the three-dimensional scene is set, i.e., the number of sub-images displayed per second for a single video moving object; the object display interval t_0 is set as the time interval for adding and displaying a new video moving object;
d2) For a certain virtual viewpoint w (w ≤ W), the geospatial trajectory T_0 of the first-appearing moving object O_0 is displayed first, and the order in which the video object appears across the different cameras is identified;
the video object sub-images are screened according to the frame rate fps, the plane coordinates corresponding to the screened sub-images are converted into geographic coordinates, and the video object sub-images are scaled by the scale coefficients P_w and P_h, which are calculated as follows:
where the average width and height of a suitable number of sub-images randomly selected from the video object sub-image library are used; the coordinates of the upper-left, lower-left and upper-right points of each selected sub-image in the original video frame are mapped to the corresponding geographic positions in the virtual scene to obtain the length and height of the video object sub-image in three-dimensional space, whose averages give the average length and height of the video object sub-images displayed in the virtual scene;
d3) During dynamic display, the video object sub-image of O_0 for the current frame is displayed at the corresponding geographic position within the camera view in the virtual scene according to the frame rate fps, and the old sub-images are no longer displayed;
on the other hand, at times t_0, 2t_0, ..., nt_0, video objects O_1, O_2, ..., O_n are respectively added and dynamically expressed in the three-dimensional scene model to realize multi-camera video object concentration.
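A sketch of the display scheduling of steps d1)-d3) follows, assuming hypothetical renderer hooks (show_sub_image, hide_sub_image), an assumed source video frame rate, and tracks already converted to geographic coordinates; the frame-rate screening and the t_0 staggering follow the text, while the renderer and the P_w/P_h scaling (whose formula is given only as a figure in the original) are outside the scope of the sketch.

```python
def concentrate_objects(objects, fps, t0, show_sub_image, hide_sub_image):
    """Dynamically express video moving objects in the 3D scene.
    objects: list of tracks, each a list of (sub_image, geo_pos) sampled at the
             source video frame rate; object i starts being displayed at i * t0 seconds.
    fps:     number of sub-images displayed per second for a single object."""
    source_fps = 25                            # assumed frame rate of the surveillance video
    step = max(int(round(source_fps / fps)), 1)
    # Screen the sub-images of every object according to the display frame rate.
    screened = [track[::step] for track in objects]
    # Build a display timeline: at t0, 2*t0, ... a new object is added.
    events = []                                # (time in seconds, object index, frame index)
    for i, track in enumerate(screened):
        start = i * t0
        for j in range(len(track)):
            events.append((start + j / fps, i, j))
    for t, i, j in sorted(events):
        if j > 0:
            hide_sub_image(i, screened[i][j - 1])      # old sub-image is no longer shown
        show_sub_image(i, screened[i][j], at_time=t)   # current frame at its geographic position

# Example with two hypothetical objects and no-op "renderer" hooks.
objs = [[(f"O0_f{j}", (float(j), 0.0)) for j in range(50)],
        [(f"O1_f{j}", (0.0, float(j))) for j in range(50)]]
concentrate_objects(objs, fps=5, t0=2.0,
                    show_sub_image=lambda i, s, at_time: None,
                    hide_sub_image=lambda i, s: None)
```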
Specifically, in step S5, for the case in which the same section of object trajectory produced in an overlapping camera shooting area is acquired by multiple cameras, the camera from which the object sub-image is taken is determined by comparing the angles formed at the object trajectory point between the virtual viewpoint and each of the two camera positions:
referring specifically to fig. 8, camera a and camera b have a view overlap region C; for a video object passing through region C, the angles formed at the trajectory point between each camera position and the virtual viewpoint V, namely α and β, are compared; if α ≤ β, the video object sub-image acquired by camera a is used, otherwise the sub-image acquired by camera b is used.
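A minimal sketch of this overlap handling, assuming 2D horizontal-plane coordinates: the angles α and β are measured at the trajectory point between each camera position and the virtual viewpoint V, and the camera with the smaller angle supplies the sub-image. The function names and sample coordinates are illustrative only.

```python
import math

def angle_at(corner, p1, p2):
    """Angle (degrees) at `corner` between the rays corner->p1 and corner->p2."""
    v1 = (p1[0] - corner[0], p1[1] - corner[1])
    v2 = (p2[0] - corner[0], p2[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))) if norm > 0 else 0.0

def pick_camera(track_point, cam_a, cam_b, viewpoint):
    """Return which camera's sub-image to use for a trajectory point inside the
    overlap region C: camera a if alpha <= beta, otherwise camera b."""
    alpha = angle_at(track_point, cam_a, viewpoint)
    beta = angle_at(track_point, cam_b, viewpoint)
    return "a" if alpha <= beta else "b"

print(pick_camera((5.0, 5.0), (0.0, 0.0), (10.0, 0.0), (5.0, 12.0)))
```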
Example 2
Referring to fig. 9, the present invention further provides a multi-camera video concentration system in a virtual-viewpoint-preferred geographic scene, using the multi-camera video concentration method in a virtual-viewpoint-preferred geographic scene of embodiment 1 or any possible implementation thereof, the concentration system comprising:
homonymy point acquisition module 1: the method comprises the steps of acquiring video sequence image information, acquiring homonymous point pairs in a video image and a three-dimensional geographic scene model, and acquiring coordinate data of the homonymous point pairs, wherein the coordinate data comprises image coordinates and geographic coordinates;
the mapping relation construction module 2: the method comprises the steps of establishing a mapping relation between a video image and a geographic space according to coordinate data of a homonymy point pair, and positioning a camera viewing area;
camera group observable collection generating module 3: the method comprises the steps of constructing a camera observation domain model by analyzing an observable distance and a sight deflection angle, and generating a camera group observable set;
virtual viewpoint group generation module 4: the method comprises the steps of optimizing the observable collection by constructing an evaluation model to obtain a virtual viewpoint group;
specifically, a video image field-of-view model is constructed to describe the range in the virtual scene that each camera can effectively observe; virtual viewpoints are then generated so that the global motion of video moving objects in the multi-camera geographic scene can be viewed; based on the camera observation domain model, the camera observable sets are exhaustively enumerated, and the combination with the best video object information expression effect is selected as the virtual viewpoint generation area;
video object spatiotemporal motion expression module 5: and presetting display parameters of a moving object, and concentrating the multi-camera video according to the display parameters.
Specifically, based on the observable set preference result and under the selected camera display combination, the center point of each camera group's observable domain is taken as the virtual viewpoint, the moving object display parameters are set, and multi-camera video concentration is performed.
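As a sketch of placing the virtual viewpoint at the center point of a camera group's observable domain, assuming that domain is available as a shapely polygon (its construction follows steps b1)-b4)); the viewing height is an illustrative parameter not specified by the original text.

```python
from shapely.geometry import Polygon

def virtual_viewpoint(observable_domain: Polygon, height: float = 15.0):
    """Place the virtual viewpoint at the centroid of the camera group's
    observable domain, at an assumed viewing height above the ground plane."""
    c = observable_domain.centroid
    return (c.x, c.y, height)

# Hypothetical observable domain shared by one camera group.
domain = Polygon([(0, 0), (8, 0), (8, 6), (0, 6)])
print(virtual_viewpoint(domain))
```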
It should be noted that the information interaction and execution processes between the modules/units of the multi-camera video concentration system in the virtual viewpoint-optimized geographic scene are based on the same concept as the method embodiment of the present application, so their technical effects are the same as those of the method embodiment; for specific details, reference may be made to the description of the method embodiment above.
Video sequence image information is acquired, homonymous point pairs in the video image and the three-dimensional geographic scene model are collected, and the coordinate data of the homonymous point pairs, comprising image coordinates and geographic coordinates, are obtained; a mapping relation between the video image and geographic space is established according to the coordinate data of the homonymous point pairs, and the camera field of view is located; a camera observation domain model is constructed by analyzing the observable distance and the line-of-sight deflection angle, and the observable set of the camera group is generated; the observable set is optimized by constructing an evaluation model to obtain the virtual viewpoint group; and the display parameters of the moving objects are preset, and the multi-camera video is concentrated according to the display parameters. The notable effects are that the mapping relation between video objects and the geographic scene is established, the fused expression of the surveillance video in the geographic scene is enhanced, and great convenience is provided for the integrated rapid retrieval and efficient understanding of video and geographic scene information.
Those skilled in the art will appreciate that the various aspects of the invention may be implemented as a system, method or program product. Accordingly, aspects of the invention may be embodied in the following forms: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "module" or "platform".
While the invention has been described in detail in the foregoing general description and specific examples, it will be apparent to those skilled in the art that modifications and improvements can be made thereto. Accordingly, such modifications or improvements may be made without departing from the spirit of the invention and are intended to be within the scope of the invention as claimed.

Claims (9)

1. A method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene, comprising the steps of:
acquiring video sequence image information, acquiring homonymous point pairs in the video image and the three-dimensional geographic scene model, and acquiring the coordinate data of the homonymous point pairs, wherein the coordinate data comprises image coordinates and geographic coordinates;
establishing a mapping relation between the video image and geographic space according to the coordinate data of the homonymous point pairs, and locating the camera field of view;
constructing a camera observation domain model by analyzing the observable distance and the sight deflection angle, and generating an observable set of a camera group;
optimizing the observable collection by constructing an evaluation model to obtain a virtual viewpoint group;
the observable collection is optimized by constructing an evaluation model, and a virtual viewpoint group is obtained, which comprises the following steps:
c1) Let L be the number of cameras and M be the set of all camera combination modes:
m_i = {n_{i,j}}
where m_i refers to the i-th camera combination mode and contains all camera groups in that combination mode; n_{i,j} refers to the j-th camera group under combination mode m_i and contains all cameras of that group, each element of which refers to an individual camera in the j-th camera group of the i-th combination mode;
c2) Given the distance threshold T_dis and the angle threshold T_ang, for each camera combination mode m_i, the observable domain of every camera in each camera group n_{i,j} is solved and their intersection is taken; if, for a camera combination mode m_i, the observation domain intersection of the cameras in every camera group n_{i,j} is non-empty, the combination mode m_i is recorded as an observable combination; otherwise, the combination mode m_i is recorded as a non-observable combination;
c3) Based on the multi-camera video object trajectory data, the following video concentration optimization targets are specified to realize the preference of the camera sets:
(1) consistency of the cross-camera expression of homonymous objects, i.e., the cameras in which a single object appears are expressed jointly with as few virtual viewpoints as possible;
(2) the total number of virtual viewpoints used for expressing all video objects is as small as possible;
c4) The multi-camera video object expression effect of the camera combination corresponding to a virtual viewpoint group is comprehensively evaluated by a value score:
where n_c represents the total number of cameras, n_v represents the number of virtual viewpoints, N represents the total number of video moving objects, m_i represents the number of virtual viewpoints used for the associated expression of each video moving object, and μ is a weight parameter;
c5) When the distance threshold T_dis and the angle threshold T_ang take given values, the value of every current observable camera set is calculated by defining the parameter α, the set with the maximum value is taken as the camera combination selection result, and multi-camera video concentration is performed in the virtual scene;
presetting display parameters of a moving target, and concentrating the multi-camera video according to the display parameters.
2. The method of claim 1, wherein the video image is the first frame image captured from the surveillance video.
3. The method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene according to claim 2, wherein the three-dimensional geographic scene model is a three-dimensional scene model constructed from real geographic scene measurement information, the number of homonymous point pairs collected on the video image and the virtual geographic scene is not less than three, and the three homonymous point pairs are not all collinear.
4. A method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene as claimed in claim 3 wherein said establishing a mapping relationship between video images and geographic space, locating camera views comprises the steps of:
a1) The object point geospatial coordinate Q corresponding to a given image point image space coordinate q is preset, and q and Q are expressed as homogeneous coordinates:
q = [x y 1]^T
Q = [X Y Z 1]^T
a homography matrix M is recorded, and the relation between q and Q is as follows:
q = MQ;
the homography matrix M has the expression:
a2 Solving geospatial coordinates of object points corresponding to image points in each image:
a3) Assuming there are L cameras in the current camera network, then for the k-th camera (k = 1, 2, ..., L), its mapping matrix is labeled M_k; each camera position in geographic space and the geospatial position of each camera view polygon are then defined;
wherein the camera position is regarded as one point in geographic space, and the camera view polygon is recorded as the polygon formed by sequentially connecting o boundary points P_{k,num}.
5. The method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene as recited in claim 4, wherein two factors, the virtual line-of-sight distance and the camera-virtual viewpoint angle, are selected as constraint conditions in locating the camera field of view;
the virtual line-of-sight distance refers to the geospatial distance between the virtual viewpoint and a given point in the viewing area; the camera-virtual viewpoint angle refers to the angle formed, with a given point in the field of view as the corner point, by the projections of that point, the virtual viewpoint and the camera position point onto the horizontal plane;
a distance threshold T_dis and an angle threshold T_ang are defined as constraints, and assuming that T_dis and T_ang have been given, a region satisfying the constraints is sought in the scene model as the virtual viewpoint range.
6. The method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene as recited in claim 5, wherein said constructing a camera observation domain model by analyzing observable distance and line-of-sight deflection, generating a camera group observable collection comprises the steps of:
b1) Record the camera position and the camera view polygon; among the edges of the view polygon, the line segment nearest to the camera position is P_{k,n1}P_{k,n2} and the farthest line segment is P_{k,n3}P_{k,n4};
b2) With points P_{k,n3} and P_{k,n4} as centers and the distance threshold T_dis as radius, semicircles are drawn on the side of segment P_{k,n3}P_{k,n4} facing the camera position; the intersection area of the semicircles lying on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint distance region A_{k,dis};
b3) With points P_{k,n1} and P_{k,n2} as corner points and T_ang as the deflection angle, four rays are drawn by deflecting clockwise and counterclockwise by T_ang respectively; their intersection area on the side of segment P_{k,n1}P_{k,n2} near the camera position is taken as the reasonable virtual viewpoint angle region A_{k,ang};
b4) The virtual viewpoint range A_k of the camera is the intersection of A_{k,dis} and A_{k,ang};
b5) Record Obj as the total set of all video moving objects in all cameras; suppose there are N_k video moving objects in the k-th camera, and the trajectory of each video moving object is recorded as C_{k,i}, whose expression is as follows:
Obj = {C_{k,i}, (k = 1, 2, ..., L)}
C_{k,i} = {I_{k,i,j}, P_{k,i,j}, (i = 1, 2, ..., N_k)(j = 1, 2, ..., n)};
where L represents the number of cameras, and I_{k,i,j} and P_{k,i,j} represent, for the i-th video moving object in the k-th camera, its sub-image in the j-th video frame and the geospatial position of that sub-image; through cross-camera association analysis of the video moving objects, the single-camera video moving object trajectories are merged to obtain the multi-camera video moving object trajectories, realizing the multi-camera association organization of video moving objects:
Cube_io = {C_{k1,i1}, C_{k2,i2}, ..., C_{ko,iL}}, (k1, k2, ..., ko) ∈ (1, 2, ..., L);
where L_o represents the total number of video moving objects after the cross-camera homonymous video moving objects in the surveillance video network are merged, Cube_io represents the global trajectory of the video moving object with sequence number io in the surveillance video network, and C_{ko,iL} represents the sub-trajectory of the video moving object with sequence number io in the ko-th camera.
7. The method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene according to claim 6, wherein presetting the display parameters of the moving object and concentrating the multi-camera video according to the display parameters comprises the following steps:
d1) Under the current camera combination, the video moving objects of all L cameras require in total W virtual viewpoints, wherein W ≤ L; meanwhile, the frame rate fps at which video moving object sub-images are displayed in the three-dimensional scene is set, i.e., the number of sub-images displayed per second for a single video moving object; the object display interval t_0 is set as the time interval for adding and displaying a new video moving object;
d2) For a certain virtual viewpoint w, w ≤ W, the geospatial trajectory T_0 of the first-appearing moving object O_0 is displayed first, and the order in which the video object appears across the different cameras is identified;
the video object sub-images are screened according to the frame rate fps, the plane coordinates corresponding to the screened sub-images are converted into geographic coordinates, and the video object sub-images are scaled by the scale coefficients P_w and P_h, which are calculated as follows:
where the average width and height of a suitable number of sub-images randomly selected from the video object sub-image library are used; the coordinates of the upper-left, lower-left and upper-right points of each selected sub-image in the original video frame are mapped to the corresponding geographic positions in the virtual scene to obtain the length and height of the video object sub-image in three-dimensional space, whose averages give the average length and height of the video object sub-images displayed in the virtual scene;
d3) During dynamic display, the video object sub-image of O_0 for the current frame is displayed at the corresponding geographic position within the camera view in the virtual scene according to the frame rate fps, and the old sub-images are no longer displayed;
at times t_0, 2t_0, ..., nt_0, video objects O_1, O_2, ..., O_n are respectively added and dynamically expressed in the three-dimensional scene model to realize multi-camera video object concentration.
8. The method for multi-camera video concentration in a virtual viewpoint-preferred geographic scene according to claim 7, wherein, for the case in which the same section of object trajectory produced in an overlapping camera shooting area is acquired by multiple cameras, the camera from which the object sub-image is taken is determined by comparing the angles formed at the object trajectory point between the virtual viewpoint and each of the two camera positions:
camera a and camera b have a view overlap region C; for a video object passing through region C, the angles formed at the trajectory point between each camera position and the virtual viewpoint V, namely α and β, are compared; if α ≤ β, the video object sub-image acquired by camera a is used, otherwise the sub-image acquired by camera b is used.
9. A multi-camera video concentration system in a virtual-viewpoint-preferred geographical scene employing the method of multi-camera video concentration in a virtual-viewpoint-preferred geographical scene as claimed in any one of claims 1 to 8, the concentration system comprising:
the homonymous point acquisition module: used for acquiring video sequence image information, acquiring homonymous point pairs in the video image and the three-dimensional geographic scene model, and obtaining the coordinate data of the homonymous point pairs, the coordinate data comprising image coordinates and geographic coordinates;
the mapping relation construction module: used for establishing the mapping relation between the video image and the geographic space according to the coordinate data of the homonymous point pairs, and positioning the camera viewing area;
the camera group observable set generation module: used for constructing the camera observation domain model by analyzing the observable distance and the sight deflection angle, and generating the observable set of the camera group;
the virtual viewpoint group generation module: used for optimizing the observable set by constructing an evaluation model to obtain the virtual viewpoint group;
the video object space-time motion expression module: used for presetting the display parameters of moving objects and concentrating the multi-camera video according to the display parameters.
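For orientation only, a skeleton showing how the five modules of claim 9 above could be wired together in sequence; all class, method, and type names here are assumptions for illustration, and the method bodies are placeholders rather than the patented implementation.

```python
from typing import Any, Dict, List

class VideoConcentrationSystem:
    """Hypothetical wiring of the five modules listed in claim 9."""

    def acquire_homonymous_points(self, videos: List[Any], scene_model: Any) -> List[Dict]:
        return []          # pairs of (image coordinate, geographic coordinate)

    def build_mapping(self, point_pairs: List[Dict]) -> Dict:
        return {}          # image-to-geography mapping and camera viewing areas

    def generate_observable_set(self, mapping: Dict) -> List[Dict]:
        return []          # camera observation domain model -> observable set

    def optimize_viewpoints(self, observable_set: List[Dict]) -> List[Dict]:
        return []          # evaluation model -> virtual viewpoint group

    def express_motion(self, viewpoints: List[Dict], videos: List[Any]) -> Any:
        return None        # staggered display -> concentrated multi-camera video

    def run(self, videos: List[Any], scene_model: Any) -> Any:
        pairs = self.acquire_homonymous_points(videos, scene_model)
        mapping = self.build_mapping(pairs)
        observable = self.generate_observable_set(mapping)
        viewpoints = self.optimize_viewpoints(observable)
        return self.express_motion(viewpoints, videos)
```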
CN202110327605.1A 2021-03-26 2021-03-26 Multi-camera video concentration method and system in virtual viewpoint-optimized geographic scene Active CN113192125B (en)

Publications (2)

Publication Number Publication Date
CN113192125A (en) 2021-07-30
CN113192125B (en) 2024-02-20

Family

ID=76974146

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114067071B (en) * 2021-11-26 2022-08-30 湖南汽车工程职业学院 High-precision map making system based on big data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2471231C1 (en) * 2011-09-30 2012-12-27 Общество с ограниченной ответственностью "Ай Ти Ви групп" Method to search for objects in sequence of images produced from stationary video camera

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013033442A1 (en) * 2011-08-30 2013-03-07 Digimarc Corporation Methods and arrangements for identifying objects
CN110516014A (en) * 2019-01-18 2019-11-29 南京泛在地理信息产业研究院有限公司 A method of two-dimensional map is mapped to towards urban road monitor video
CN110009561A (en) * 2019-04-10 2019-07-12 南京财经大学 A kind of monitor video target is mapped to the method and system of three-dimensional geographical model of place
CN110148223A (en) * 2019-06-03 2019-08-20 南京财经大学 Monitor video target concentration expression and system in three-dimensional geography model of place
CN111161130A (en) * 2019-11-25 2020-05-15 北京智汇云舟科技有限公司 Video correction method based on three-dimensional geographic information
CN111582022A (en) * 2020-03-26 2020-08-25 深圳大学 Fusion method and system of mobile video and geographic scene and electronic equipment
CN112381935A (en) * 2020-09-29 2021-02-19 西安应用光学研究所 Synthetic vision generation and multi-element fusion device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Jianqing Zhu; Shengcai Liao; Stan Z. Li. Multicamera Joint Video Synopsis. IEEE Transactions on Circuits and Systems for Video Technology, 2015, full text. *
解愉嘉. Research on Surveillance Video Concentration Methods in Geographic Scenes. China Doctoral Dissertations Full-text Database, Information Science and Technology Series, full text. *

Similar Documents

Publication Publication Date Title
CN109348119B (en) Panoramic monitoring system
US7522186B2 (en) Method and apparatus for providing immersive surveillance
CN102148965B (en) Video monitoring system for multi-target tracking close-up shooting
EP2553924B1 (en) Effortless navigation across cameras and cooperative control of cameras
AU2019281667A1 (en) Data collection and model generation method for house
CN112053446A (en) Real-time monitoring video and three-dimensional scene fusion method based on three-dimensional GIS
WO2017038160A1 (en) Monitoring information generation device, imaging direction estimation device, monitoring information generation method, imaging direction estimation method, and program
CN109691084A (en) Information processing unit and method and program
US20100103173A1 (en) Real time object tagging for interactive image display applications
JP2015521419A (en) A system for mixing or synthesizing computer generated 3D objects and video feeds from film cameras in real time
CN103795976A (en) Full space-time three-dimensional visualization method
KR20200013585A (en) Method and camera system combining views from plurality of cameras
CN101021669A (en) Whole-view field imaging and displaying method and system
Jian et al. Augmented virtual environment: fusion of real-time video and 3D models in the digital earth system
CN113192125B (en) Multi-camera video concentration method and system in virtual viewpoint-optimized geographic scene
US20200183958A1 (en) User interaction event data capturing system for use with aerial spherical imagery
CN114387679A (en) System and method for realizing sight line estimation and attention analysis based on recursive convolutional neural network
Cui et al. Fusing surveillance videos and three‐dimensional scene: A mixed reality system
de Haan et al. Spatial navigation for context-aware video surveillance
KR101686797B1 (en) Method for analyzing a visible area of a closed circuit television considering the three dimensional features
JP5213883B2 (en) Composite display device
Rieffel et al. Geometric tools for multicamera surveillance systems
CN108510433B (en) Space display method and device and terminal
KR101051355B1 (en) 3D coordinate acquisition method of camera image using 3D spatial data and camera linkage control method using same
Kollert et al. Mapping of 3D eye-tracking in urban outdoor environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant