CN115294207A - Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Info

Publication number
CN115294207A
Related application: CN202210758533.0A
Authority
CN
China
Prior art keywords: dimensional, cameras, model, scheduling, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210758533.0A
Other languages
Chinese (zh)
Inventor
李定成
林泽
唐云霄
吴昱东
宋攀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Nanyou Institute Of Information Technovation Co ltd
Original Assignee
Nanjing Nanyou Institute Of Information Technovation Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Nanyou Institute Of Information Technovation Co ltd filed Critical Nanjing Nanyou Institute Of Information Technovation Co ltd
Priority to CN202210758533.0A
Publication of CN115294207A
Legal status: Pending

Classifications

    • G: PHYSICS
      • G06: COMPUTING; CALCULATING OR COUNTING
        • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 7/00: Image analysis
            • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
            • G06T 7/70: Determining position or orientation of objects or cameras
              • G06T 7/73: using feature-based methods
                • G06T 7/74: involving reference images or patches
          • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
            • G06T 17/05: Geographic models
        • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/00: Arrangements for image or video recognition or understanding
            • G06V 10/70: using pattern recognition or machine learning
              • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA], independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
                • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fusion scheduling system for smart campus surveillance video and a three-dimensional GIS (geographic information system) model. The system comprises a scheduling server and a three-dimensional model generator. The scheduling server calibrates the three-dimensional coordinates of all cameras in a three-dimensional rectangular coordinate system and generates scheduling rules for the calibrated cameras. The three-dimensional model generator receives a group of real-time camera images, computes an accurate three-dimensional fusion image from them, and transmits the result to a three-dimensional model storage terminal and a three-dimensional model presentation terminal. By extracting each camera's attitude at the moment an image is shot, the system and method improve the accuracy of fusing video with the three-dimensional GIS model, map the cameras' two-dimensional images onto the three-dimensional model more precisely, provide regional images with sufficient viewing angles together with key-angle images, and improve the display quality of the fused result.

Description

Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model
Technical Field
The invention relates to the technical field of video surveillance and three-dimensional geographic information systems, and in particular to a system and method for fusion scheduling of smart campus surveillance video with a three-dimensional GIS model.
Background
Existing techniques for fusing three-dimensional models with surveillance video typically extract frames from a surveillance video stream and project them into a three-dimensional scene, achieving full-space-time stereoscopic fusion of real-time video data with three-dimensional model data. However, these techniques do not adequately account for matching a camera's attitude and focal length to the specific scene being shot, nor for coordinating the parameters of multiple cameras shooting that scene.
In addition, image-rendering-based fusion techniques render video images onto the three-dimensional model across each camera's visible area to fuse model and video. Owing to the complexity of monitoring scenes, however, the three-dimensional map models built this way are often unsatisfactory. In campus video surveillance in particular, video blur at varying distances is common and prevents clear judgement of campus activity details, and because the three-dimensional model is generated after the fact, poor attitude and orientation targeting, blurred occlusion of the view, and missing key-angle images frequently result.
Chinese patent application CN108154553A discloses a method and apparatus for seamless fusion of a three-dimensional model with surveillance video. It constructs a depth map of the three-dimensional scene corresponding to a target video image to generate a video texture, computes each target micro-patch's coordinates in the normalized device coordinate system and its texture coordinates in the video-frame texture, determines the view boundary from the normalized device coordinates, and uses the micro-patch's depth in the depth map to decide whether it is occluded: an occluded micro-patch inside the view is rendered with the model's original texture, improving the fused display. In essence, the method checks occlusion in the depth map and, where a patch is unoccluded, renders the video texture over the background colour, achieving a clear rendering. It can display the surveillance-video boundary of a three-dimensional GIS system clearly in use, but problems remain in local observation of the video and in deeper processing of the surveillance stream.
In addition, Chinese patent CN206322218U discloses a GIS-based 3D command-and-dispatch system that relies on an LTE private network to merge voice and video subsystems and present multimedia dispatching that restores the real scene to the user, but it does not address fusing multiple systems within a three-dimensional GIS.
The technical problems this application therefore addresses are: how to improve the speed and accuracy of fusing smart campus surveillance video with a three-dimensional GIS model, and how to resolve the unreasonable coordination of shooting angles, the occlusion of the field of view, and the loss of key-angle images that arise when source images from multiple cameras are fused in three dimensions.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a system comprising a scheduling server, a series of cameras, a three-dimensional model generator, a three-dimensional model storage terminal, and a three-dimensional live-action presentation terminal. When the fusion scheduling system runs, the three-dimensional coordinates of all cameras are calibrated first; the scheduling server then generates scheduling rules, each specifying the three-dimensional coordinate range of an area to be monitored together with the group of cameras to use, their attitude parameters, and a dwell time. To execute a rule, the scheduling server sets the scheduled group of cameras to the specified attitude parameters (each camera's azimuth, pitch, roll, and focal length), has the scheduled cameras shoot continuously for the specified duration, and uploads the images and camera attitudes to the three-dimensional model generator. The generator computes an accurate three-dimensional fusion image from the received series of real-time camera images and transmits it to the three-dimensional model storage terminal and the three-dimensional model presentation terminal. These measures improve fusion clarity during the fusion scheduling of smart campus surveillance video with the three-dimensional GIS model and resolve the unreasonable matching of shooting angles, the view occlusion, and the loss of depth and texture information that occur when multi-camera source images are fused in three dimensions.
The technical scheme adopted by the invention is as follows: a fusion scheduling system for smart campus surveillance video and a three-dimensional GIS model, having a series of cameras arranged in each campus area, the fusion scheduling system comprising:
the scheduling server, which calibrates the three-dimensional coordinates of all the cameras in a three-dimensional rectangular coordinate system and generates scheduling rules for the calibrated cameras;
and a three-dimensional model generator, which receives a group of real-time camera images, computes an accurate three-dimensional fusion image, and transmits it to a three-dimensional model storage terminal and a three-dimensional model presentation terminal, wherein: the three-dimensional model storage terminal persistently stores the three-dimensional fusion image, and the three-dimensional model presentation terminal displays the three-dimensional image fused with the three-dimensional GIS model in real time.
Preferably, each scheduling rule generated by the scheduling server at least specifies:
a three-dimensional coordinate range of an area to be monitored;
and the group of cameras, attitude parameters, and dwell time to use;
wherein: the attitude parameters comprise the azimuth, pitch, roll, and focal length of the camera, as sketched below.
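The patent does not prescribe a concrete representation for a scheduling rule; the following is a minimal Python sketch of one possible layout, in which every field name and the axis-aligned range encoding are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CameraAttitude:
    azimuth_deg: float      # pan angle of the pan-tilt head
    pitch_deg: float        # tilt angle
    roll_deg: float
    focal_length_mm: float

@dataclass
class SchedulingRule:
    # Axis-aligned 3D coordinate range of the area to be monitored:
    # ((x_min, y_min, z_min), (x_max, y_max, z_max))
    monitored_range: Tuple[Tuple[float, float, float],
                           Tuple[float, float, float]]
    camera_ids: List[str]             # the group of cameras to schedule
    attitudes: List[CameraAttitude]   # one attitude per scheduled camera
    dwell_time_s: float               # how long the cameras keep shooting
```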
Preferably, the scheduling server executing the scheduling rule includes:
setting a scheduled group of cameras to specified attitude parameters, the attitude parameters including azimuth, pitch, roll and focal length of the cameras;
and continuously shooting the scheduled camera for a specified time, and uploading the image and the camera pose to the three-dimensional model generator.
Preferably, the three-dimensional model generator selects real-time images from cameras whose positions are related and calculates the correlation of multiple regions across the real-time images of different cameras;
and when the correlation of two region images exceeds a given threshold, they are judged to correspond to the same real space and the region images are extracted.
Preferably, when extracting the region images, the three-dimensional model generator:
calculates, by triangulation, the real-space three-dimensional coordinates of all points of the two region images;
and then splices the series of region images bearing three-dimensional coordinates into a three-dimensional fusion image seamlessly fused with the three-dimensional GIS model.
Preferably, in the fusion scheduling method for smart campus surveillance video and a three-dimensional GIS model,
the campus surveillance video is fused with the three-dimensional GIS model using the above fusion scheduling system, the specific fusion steps at least comprising:
S1, calibrating the three-dimensional coordinates of all cameras, the coordinates adopting a three-dimensional rectangular coordinate system;
S2, the scheduling server setting scheduling rules that specify the three-dimensional coordinate range of each area to be monitored;
S3, the scheduling server executing the scheduling rules, the specific steps comprising:
S301, setting a scheduled group of cameras to the specified attitude parameters, the attitude parameters comprising each camera's azimuth, pitch, roll, and focal length;
S302, the scheduled cameras shooting continuously for the specified duration and uploading real-time images and camera attitudes to the three-dimensional model generator;
and S4, the three-dimensional model generator receiving the series of real-time camera images and computing from them an accurate three-dimensional fusion image.
Preferably, the fusion scheduling method further includes:
and S5, the three-dimensional model generator transmits the obtained three-dimensional fusion image to the three-dimensional model storage terminal and the three-dimensional model presentation terminal.
Preferably, the three-dimensional coordinates in step S1 are obtained through the graphical human-computer interface of the three-dimensional live-action presentation terminal: the position of each camera in the three-dimensional GIS model is specified via keyboard, mouse, or touch-screen input, and the cameras' three-dimensional coordinates are then obtained by conversion through the three-dimensional GIS model.
Preferably, the three-dimensional coordinate range of the monitored area in step S2 is specified at the three-dimensional live-action presentation terminal through the graphical human-computer interface: the spatial extent of the area to be monitored is entered in the three-dimensional GIS model via keyboard, mouse, or touch screen, and the three-dimensional coordinate range is then obtained by conversion through the three-dimensional GIS model.
Compared with the prior art, the invention has the beneficial effects that:
1. The fusion scheduling system of the invention adds a scheduling server. Scheduling rules set in advance on the server drive the cameras to adjust attitude, shoot, and upload images, and the three-dimensional GIS model of a given object is synthesized from the images of multiple cameras. This avoids the poor attitude and orientation targeting, blurred view occlusion, and missing key-angle images caused by generating the three-dimensional GIS model after the fact, and achieves high equipment utilization through multi-camera cooperation.
2. The fusion scheduling system also adds a three-dimensional model generator. When synthesizing the three-dimensional model from multi-camera images, the generator simultaneously obtains the cameras' attitude information, so the pixel information of any three-dimensional coordinate within a camera's field of view can be computed. The mapping from each camera's two-dimensional image to the three-dimensional model is thus realized accurately, avoiding the depth-information loss of traditional three-dimensional live-action modelling as well as the occlusion and view blurring introduced by after-the-fact processing of the three-dimensional model.
In conclusion, the system and method of the invention improve the accuracy of fusing the three-dimensional GIS model with video: they extract each camera's attitude at the moment an image is shot, map the cameras' two-dimensional images onto the three-dimensional model more precisely, provide regional images with sufficient viewing angles together with key-angle images, and improve the display quality after the campus surveillance video and the three-dimensional GIS model are fused.
Drawings
FIG. 1 is a schematic block diagram of a fusion scheduling system of smart campus surveillance video and a three-dimensional GIS model;
FIG. 2 is a flow chart of a fusion scheduling method of smart campus surveillance video and a three-dimensional GIS model;
FIG. 3 is a flow chart of the substep of step S4 of the fusion scheduling system of the smart campus surveillance video and the three-dimensional GIS model;
FIG. 4 is a schematic view of the method for calculating the camera pan-tilt azimuth and pitch in the fusion scheduling system of the smart campus surveillance video and the three-dimensional GIS model;
FIG. 5 is a schematic diagram of the implementation principle of step S401;
FIG. 6 is a schematic diagram of the implementation principle by which points P1/P2 in step S402 correspond to real-space three-dimensional coordinates.
Wherein: 1 - scheduling server; 2 - camera; 3 - three-dimensional model generator; 4 - three-dimensional model storage terminal; 5 - three-dimensional live-action presentation terminal.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings only for the convenience of description and simplification of description, but do not indicate or imply that the combination or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. In addition, in the description process of the embodiment of the present invention, the positional relationships of the devices such as "upper", "lower", "front", "rear", "left", "right", and the like in all the drawings are based on fig. 1.
As shown in FIG. 1, the fusion scheduling system for smart campus surveillance video and a three-dimensional GIS model comprises a scheduling server 1, a series of cameras 2, a three-dimensional model generator 3, a three-dimensional model storage terminal 4, and a three-dimensional live-action presentation terminal 5. The scheduling server 1 is connected to the series of cameras 2 over a network; the three-dimensional model generator 3 is likewise connected to the cameras over a network, as well as to the three-dimensional model storage terminal 4 and the three-dimensional live-action presentation terminal 5. With multiple cameras arranged in each campus area, the fusion scheduling system comprises:
the scheduling server 1, which calibrates the three-dimensional coordinates of all cameras 2 in a three-dimensional rectangular coordinate system and generates scheduling rules for the calibrated cameras;
and the three-dimensional model generator 3, which receives a group of real-time camera images, computes an accurate three-dimensional fusion image, and sends it to the three-dimensional model storage terminal 4 and the three-dimensional live-action presentation terminal 5, wherein: the three-dimensional model storage terminal persistently stores the three-dimensional fusion image, and the presentation terminal displays the three-dimensional image fused with the three-dimensional GIS model in real time.
More preferably, each scheduling rule generated by the scheduling server at least specifies:
a three-dimensional coordinate range of an area to be monitored;
and the group of cameras, attitude parameters, and dwell time to use;
wherein: the attitude parameters comprise the azimuth, pitch, roll, and focal length of the camera.
In a more preferred embodiment, the scheduling server executing the scheduling rule includes:
setting a scheduled group of cameras to specified attitude parameters, the attitude parameters including azimuth, pitch, roll and focal length of the cameras;
and continuously shooting the scheduled camera for a specified time, and uploading the image and the camera pose to a three-dimensional model generator.
In a more preferable embodiment, the three-dimensional model generator selects real-time images from cameras whose positions are related and calculates the correlation of multiple regions across the real-time images of different cameras;
and when the correlation of two region images exceeds a given threshold, they are judged to correspond to the same real space and the region images are extracted.
More preferably, when extracting the region images, the three-dimensional model generator:
calculating three-dimensional coordinates of all points of the two area images in a real space according to a triangulation method;
and then splicing and synthesizing a series of regional images with three-dimensional coordinates into a three-dimensional fusion image seamlessly fused with the three-dimensional GIS model.
A fusion scheduling method for smart campus surveillance video and a three-dimensional GIS model
fuses the campus surveillance video with the three-dimensional GIS model using the above fusion scheduling system, the specific fusion steps at least comprising:
S1, calibrating the three-dimensional coordinates of all cameras, the coordinates adopting a three-dimensional rectangular coordinate system;
S2, the scheduling server setting scheduling rules that specify the three-dimensional coordinate range of each area to be monitored;
S3, the scheduling server executing the scheduling rules, the specific steps comprising:
S301, setting a scheduled group of cameras to the specified attitude parameters, the attitude parameters comprising each camera's azimuth, pitch, roll, and focal length;
S302, the scheduled cameras shooting continuously for the specified duration and uploading real-time images and camera attitudes to the three-dimensional model generator;
and S4, the three-dimensional model generator receiving the series of real-time camera images and computing from them an accurate three-dimensional fusion image.
In a more preferred embodiment, the fusion scheduling method further includes:
and S5, the three-dimensional model generator transmits the obtained three-dimensional fusion image to the three-dimensional model storage terminal and the three-dimensional model presentation terminal.
In a more preferable embodiment, the three-dimensional coordinates in step S1 are obtained through the graphical human-computer interface of the three-dimensional live-action presentation terminal: the position of each camera in the three-dimensional GIS model is specified via keyboard, mouse, or touch-screen input, and the cameras' three-dimensional coordinates are then obtained by conversion through the three-dimensional GIS model.
In a more preferable embodiment, the three-dimensional coordinate range of the monitored area in step S2 is specified at the three-dimensional live-action presentation terminal through the graphical human-computer interface: the spatial extent of the area to be monitored is entered in the three-dimensional GIS model via keyboard, mouse, or touch screen, and the three-dimensional coordinate range is then obtained by conversion through the three-dimensional GIS model.
Example 1
The implementation steps of the fusion scheduling system for smart campus surveillance video and a three-dimensional GIS model are shown in FIG. 2 and comprise:
S1, calibrating the three-dimensional coordinates of all cameras, the coordinates adopting a three-dimensional rectangular coordinate system;
The x, y, and z axes are usually chosen with the positive z-axis pointing vertically upward from the ground and the positive x-axis pointing due north. A geodetic coordinate system may also be used.
S2, the scheduling server generates scheduling rules; each rule specifies the three-dimensional coordinate range of an area to be monitored and the group of cameras, attitude parameters, and dwell time to use, the attitude parameters comprising each camera's azimuth, pitch, roll, and focal length;
S3, the scheduling server executes the scheduling rules; the specific steps, shown in the right part of FIG. 2, comprise:
S301, setting the scheduled group of cameras to the specified attitude parameters, the attitude parameters comprising each camera's azimuth, pitch, roll, and focal length;
S302, the scheduled cameras shoot continuously for the specified duration and upload the images and camera attitudes to the three-dimensional model generator, the camera attitude comprising the camera's azimuth, pitch, and roll data;
S4, the three-dimensional model generator computes an accurate three-dimensional fusion image from the received series of real-time camera images;
S5, the three-dimensional model generator transmits the obtained three-dimensional fusion image to the three-dimensional model storage terminal and the three-dimensional model presentation terminal, wherein: the storage terminal persistently stores the three-dimensional fusion image, and the presentation terminal displays the three-dimensional image fused with the three-dimensional GIS model in real time;
Specifically, the process of generating the accurate three-dimensional fusion image in S4 comprises the following sub-steps, shown in FIG. 3:
S401, the three-dimensional model generator selects real-time images from cameras whose positions are related and computes the correlation of multiple regions across the real-time images of different cameras; when the correlation of two region images exceeds a given threshold, the two images are considered to correspond to the same real space and the two region images are extracted.
The practical implementation principle is illustrated in FIG. 5. For two images A and B from different cameras, the correlation (or similarity) of all sub-regions of A and B is computed; when the similarity of two regions exceeds a given threshold, the two regions are considered to correspond to the same real space. Common methods for measuring image similarity/correlation include convolutional neural networks (CNNs), wavelet analysis, histogram comparison, and perceptual hashing.
Once two region images are judged similar, both are extracted, as in the sketch below.
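Step S401 names several interchangeable similarity measures; the minimal Python sketch below uses the histogram-comparison option via OpenCV. The candidate boxes, the 64-bin histogram, and the 0.8 threshold are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def region_similarity(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """Correlation of normalized grey-level histograms, in [-1, 1]."""
    hists = []
    for region in (region_a, region_b):
        grey = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([grey], [0], None, [64], [0, 256])
        hists.append(cv2.normalize(hist, None).flatten())
    return float(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL))

def extract_matching_regions(img_a, img_b, boxes_a, boxes_b, threshold=0.8):
    """Return pairs of sub-images judged to show the same real-world area.
    boxes_* are (x, y, w, h) candidate regions in each source image."""
    matches = []
    for (xa, ya, wa, ha) in boxes_a:
        for (xb, yb, wb, hb) in boxes_b:
            ra = img_a[ya:ya + ha, xa:xa + wa]
            rb = img_b[yb:yb + hb, xb:xb + wb]
            if region_similarity(ra, rb) > threshold:
                matches.append((ra, rb))   # the extraction ("matting") step
    return matches
```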
S402, the three-dimensional model generator calculates, by triangulation, the real-space three-dimensional coordinates of all points of the two region images;
the calculation method is exemplified below.
First, the precise camera azimuth and elevation corresponding to each pixel of the region image are calculated.
In image A of FIG. 5, the image is associated with the camera attitude information captured at S302 (including the azimuth ψ and the pitch θ), from which it can be seen that:
the centre point of image A corresponds to the camera azimuth ψ0 and elevation θ0;
for the other points of image A, averaging over the azimuth coverage W1, the elevation coverage W2, the horizontal pixel count N1, and the vertical pixel count N2 of the camera field of view gives an azimuth of W1/N1 per horizontal pixel and a pitch of W2/N2 per vertical pixel; the precise azimuth and pitch of every pixel of image A then follow by linear calculation, as sketched below.
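A minimal sketch of this per-pixel linear mapping. The sign conventions (u grows rightward so azimuth grows; v grows downward so pitch falls) and the 0-indexed pixel centre are assumptions added for the sketch.

```python
def pixel_to_az_el(u, v, psi0, theta0, W1, W2, N1, N2):
    """Map pixel (u, v) to camera azimuth/elevation: the image centre maps
    to (psi0, theta0), each horizontal pixel spans W1/N1 degrees of azimuth,
    and each vertical pixel spans W2/N2 degrees of pitch."""
    azimuth = (psi0 + (u - (N1 - 1) / 2.0) * (W1 / N1)) % 360.0
    pitch = theta0 - (v - (N2 - 1) / 2.0) * (W2 / N2)
    return azimuth, pitch

# Example: a 1920x1080 frame with a 60 x 34 degree field of view, camera at
# azimuth 120 and pitch -10; the top-left pixel looks up and to the left.
print(pixel_to_az_el(0, 0, psi0=120.0, theta0=-10.0,
                     W1=60.0, W2=34.0, N1=1920, N2=1080))
```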
Then, using the similarity relation between the two region images, matching points inside images A and B are found as shown in FIG. 5; suppose they are pixels P1 and P2. From the camera azimuth ψ1 and pitch θ1 mapped to P1, the camera azimuth ψ2 and pitch θ2 mapped to P2, and the three-dimensional coordinates (x1, y1, z1) of camera C1 and (x2, y2, z2) of camera C2, the real-space three-dimensional coordinates (xn, yn, zn) of any matched pair P1/P2 can be calculated; the principle is illustrated in FIG. 6 and sketched in code below. The result is a three-dimensional fusion image overlaid with the three-dimensional coordinates of real space.
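A minimal numpy sketch of the FIG. 6 triangulation, under the S1 axis convention (x north, z up) plus the extra assumption that +y is east. Since two measured rays rarely intersect exactly, the midpoint of their common perpendicular is returned as the least-squares point (xn, yn, zn).

```python
import numpy as np

def ray_direction(psi_deg, theta_deg):
    """Unit ray for azimuth psi (from +x = north toward +y) and pitch theta."""
    psi, theta = np.radians(psi_deg), np.radians(theta_deg)
    return np.array([np.cos(theta) * np.cos(psi),
                     np.cos(theta) * np.sin(psi),
                     np.sin(theta)])

def triangulate(c1, psi1, th1, c2, psi2, th2):
    """Midpoint of the common perpendicular of the two pixel rays."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    d1, d2 = ray_direction(psi1, th1), ray_direction(psi2, th2)
    w0 = c1 - c2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # near zero when the rays are parallel
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return ((c1 + t1 * d1) + (c2 + t2 * d2)) / 2.0

# Example: two cameras 100 m apart along the (assumed) east axis, both
# pitched down 10 degrees and converging on the same ground point.
print(triangulate((0, 0, 10), 45.0, -10.0, (0, 100, 10), 315.0, -10.0))
```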
Computing the three-dimensional coordinates of all points may additionally require operations such as image rectification and rotation; these do not affect the completeness of the scheme. Rectification and rotation use the boundary coordinate range of the region image within the source image.
And S403, the three-dimensional model generator splices the series of region images bearing three-dimensional coordinates into a three-dimensional fusion image seamlessly fused with the three-dimensional GIS model.
In this process, the fusion images overlaid with real-space three-dimensional coordinates are spliced end to end in order of their three-dimensional coordinates, synthesizing a three-dimensional fusion image seamlessly fused with the required three-dimensional GIS model; a minimal sketch follows.
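The patent describes this splicing only as ordering by three-dimensional coordinates; the sketch below shows one possible realization under that description, merging per-region coloured point sets on a voxel grid. The grid size and data layout are assumptions.

```python
import numpy as np

def merge_region_clouds(regions, grid=0.05):
    """Merge per-region (points Nx3, colors Nx3) arrays into one coloured
    cloud, deduplicating points that land in the same grid cell."""
    seen, pts, cols = set(), [], []
    for points, colors in regions:
        keys = np.round(np.asarray(points) / grid).astype(np.int64)
        for key, p, c in zip(map(tuple, keys), points, colors):
            if key not in seen:        # first camera to cover a cell wins
                seen.add(key)
                pts.append(p)
                cols.append(c)
    return np.asarray(pts), np.asarray(cols)
```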
In summary:
In this process, the cameras, attitudes, and execution durations required for each target area are specified in advance, and a group of cameras is set to the specified attitude parameters when a scheduling rule executes. This avoids the blurred views and unclear occlusions that result from extracting textures from historical video streams, inferring depth information, and then rendering. A well-arranged scheduling plan also reduces invalid camera cruising and patrolling and improves camera utilization;
the correlation/similarity of two region images is judged from the real-time images of multiple cameras; qualifying region images are extracted, and for each pixel, combining the cameras' installation positions and attitudes, the real-space three-dimensional coordinates of all points of the region images are calculated precisely by triangulation; the results are spliced into fusion images of the three-dimensional GIS model and then presented and stored. The method achieves, at relatively low cost, high-definition fusion of multi-camera real-time images with the three-dimensional GIS model, real-time resolution of the video images' three-dimensional coordinates, and seamless fusion.
Example 2
S1, calibrating the three-dimensional coordinates of all cameras is achieved as follows: at the three-dimensional live-action presentation terminal, the user specifies each camera's installation position in the three-dimensional model through the graphical human-computer interface, operating a keyboard, mouse, or touch screen; the three-dimensional coordinates are then obtained by conversion through the three-dimensional model.
The three-dimensional GIS map engine supports computing the real-space three-dimensional coordinates of any point in the electronic map.
Example 3
S2, the three-dimensional coordinate range of the monitored area is specified by the user at the three-dimensional live-action presentation terminal through the graphical human-computer interface, operating a keyboard, mouse, or touch screen, and is obtained by conversion through the three-dimensional model.
Example 4
S2, the camera attitude parameters are specified by manually rotating the camera pan-tilt to the required position and then recording the pan-tilt's real-time azimuth, pitch, roll, and focal length as the attitude parameters to be specified.
When the camera image is observed manually and the captured area meets the requirements, the camera attitude is considered satisfactory; at that moment a program in the scheduling server records the pan-tilt's real-time azimuth, pitch, roll, and focal length, which can serve as the camera attitude parameters to be specified.
Example 5
S2, the camera attitude parameters may instead be specified by calculation: from the camera's three-dimensional coordinates and the three-dimensional coordinate range of the area to be monitored, the required pan-tilt azimuth, pitch, roll, and focal length are calculated and used as the attitude parameters to be specified.
The specific calculation method comprises the following steps; a code sketch follows step 2):
1) The required pan-tilt azimuth, pitch, and roll are calculated from the camera's three-dimensional coordinates, the three-dimensional coordinates of the centre point of the area to be monitored, and the camera's installation angle.
As shown in FIG. 4, point B is the camera position, with three-dimensional coordinates (x2, y2, z2), and point A is the centre of the area to be monitored, with coordinates (x1, y1, z1). The azimuth and pitch of the line B -> A are the azimuth and pitch angle the pan-tilt must reach; combined with the azimuth, pitch, and roll of the camera's initial installation, the relative azimuth, pitch, and roll to which the pan-tilt must be set for shooting the given monitored area can be calculated.
2) The required camera focal length is calculated from the distance between the camera and the monitored area.
In FIG. 4, the length of the line B -> A equals sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2), from which the corresponding camera focal length can be calculated.
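A minimal sketch of both steps, again under the S1 axis convention (x north, z up) with +y taken as east. The focal-length function is a thin-lens heuristic added for illustration; the patent states only that the focal length follows from the distance.

```python
import math

def pan_tilt_to_target(camera_xyz, target_xyz):
    """Azimuth/pitch of the line B -> A in FIG. 4, plus the B -> A distance."""
    dx = target_xyz[0] - camera_xyz[0]          # north component
    dy = target_xyz[1] - camera_xyz[1]          # east component (assumed axis)
    dz = target_xyz[2] - camera_xyz[2]          # vertical component
    azimuth = math.degrees(math.atan2(dy, dx)) % 360.0
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    distance = math.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
    return azimuth, pitch, distance

def focal_length_mm(distance_m, area_width_m, sensor_width_mm=6.4):
    """Thin-lens heuristic (an assumption): choose f so the monitored
    area just fills the sensor width."""
    return sensor_width_mm * distance_m / area_width_m

# Example: camera B at (0, 0, 10), area centre A at (40, 30, 0), area 25 m wide.
az, pitch, dist = pan_tilt_to_target((0.0, 0.0, 10.0), (40.0, 30.0, 0.0))
f = focal_length_mm(dist, area_width_m=25.0)
# The pan-tilt setpoint is this absolute pointing minus the mounting offsets
# (the installation azimuth/pitch/roll mentioned in step 1).
```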
The embodiments disclosed above are preferred embodiments of the invention but do not limit it; those skilled in the art can readily grasp the essence of the invention and make various extensions and changes without departing from its spirit.

Claims (9)

1. A fusion scheduling system for smart campus surveillance video and a three-dimensional GIS model, having a series of cameras arranged in each campus area, characterized in that the fusion scheduling system further comprises:
the scheduling server, which calibrates the three-dimensional coordinates of all the cameras in a three-dimensional rectangular coordinate system and generates scheduling rules for the calibrated cameras;
and a three-dimensional model generator, which receives a group of real-time camera images, computes an accurate three-dimensional fusion image, and transmits it to a three-dimensional model storage terminal and a three-dimensional model presentation terminal, wherein: the three-dimensional model storage terminal persistently stores the three-dimensional fusion image, and the three-dimensional model presentation terminal displays the three-dimensional image fused with the three-dimensional GIS model in real time.
2. The fusion scheduling system according to claim 1, wherein:
each scheduling rule generated by the scheduling server at least specifies:
a three-dimensional coordinate range of an area to be monitored;
and the set of cameras, pose parameters, and dwell time used;
wherein: the attitude parameters comprise the azimuth, pitch, roll, and focal length of the camera.
3. The fusion scheduling system according to claim 2, wherein:
the scheduling server executing the scheduling rule comprises:
setting a scheduled group of cameras to specified attitude parameters, the attitude parameters including azimuth, pitch, roll and focal length of the cameras;
and continuously shooting the scheduled camera for a specified time, and uploading the image and the camera pose to the three-dimensional model generator.
4. The fusion scheduling system according to any one of claims 1 to 3, wherein:
the three-dimensional model generator selects real-time images from cameras whose positions are related and calculates the correlation of multiple regions across the real-time images of different cameras;
and when the correlation of two region images exceeds a given threshold, they are judged to correspond to the same real space and the region images are extracted.
5. The fusion scheduling system according to claim 4, wherein:
when extracting the region images, the three-dimensional model generator:
calculating three-dimensional coordinates of all points of the two area images in a real space according to a triangulation method;
and then splicing and synthesizing a series of regional images with three-dimensional coordinates into a three-dimensional fusion image seamlessly fused with the three-dimensional GIS model.
6. A fusion scheduling method for smart campus surveillance video and a three-dimensional GIS model, characterized in that:
the campus surveillance video is fused with the three-dimensional GIS model using the fusion scheduling system of any one of claims 1 to 5, the specific fusion steps at least comprising:
S1, calibrating the three-dimensional coordinates of all cameras, the coordinates adopting a three-dimensional rectangular coordinate system;
S2, the scheduling server setting scheduling rules that specify the three-dimensional coordinate range of each area to be monitored;
S3, the scheduling server executing the scheduling rules, the specific steps comprising:
S301, setting a scheduled group of cameras to the specified attitude parameters, the attitude parameters comprising each camera's azimuth, pitch, roll, and focal length;
S302, the scheduled cameras shooting continuously for the specified duration and uploading real-time images and camera attitudes to the three-dimensional model generator;
and S4, the three-dimensional model generator receiving the series of real-time camera images and computing from them an accurate three-dimensional fusion image.
7. The fusion scheduling method according to claim 6, wherein:
the fusion scheduling method further comprises the following steps:
and S5, the three-dimensional model generator transmits the obtained three-dimensional fusion image to a three-dimensional model storage terminal and a three-dimensional model presentation terminal.
8. The fusion scheduling method according to claim 7, wherein:
and the three-dimensional coordinates in S1 are obtained through the graphical human-computer interface of the three-dimensional live-action presentation terminal: the position of each camera in the three-dimensional GIS model is specified via keyboard, mouse, or touch-screen input, and the cameras' three-dimensional coordinates are then obtained by conversion through the three-dimensional GIS model.
9. The fusion scheduling method according to claim 6, 7, or 8, wherein:
the three-dimensional coordinate range of the monitored area in S2 is specified at the three-dimensional live-action presentation terminal through the graphical human-computer interface: the spatial extent of the area to be monitored is entered in the three-dimensional GIS model via keyboard, mouse, or touch screen, and the three-dimensional coordinate range is then obtained by conversion through the three-dimensional GIS model.
CN202210758533.0A 2022-06-30 2022-06-30 Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model Pending CN115294207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210758533.0A CN115294207A (en) 2022-06-30 2022-06-30 Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210758533.0A CN115294207A (en) 2022-06-30 2022-06-30 Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Publications (1)

Publication Number Publication Date
CN115294207A 2022-11-04

Family

ID=83822280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210758533.0A Pending CN115294207A (en) 2022-06-30 2022-06-30 Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model

Country Status (1)

Country Link
CN (1) CN115294207A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012564A (en) * 2023-01-17 2023-04-25 宁波艾腾湃智能科技有限公司 Equipment and method for intelligent fusion of three-dimensional model and live-action photo
CN116012564B (en) * 2023-01-17 2023-10-20 宁波艾腾湃智能科技有限公司 Equipment and method for intelligent fusion of three-dimensional model and live-action photo
CN116543322A (en) * 2023-05-17 2023-08-04 深圳市保臻社区服务科技有限公司 Intelligent property routing inspection method based on community potential safety hazards

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination