WO2017114508A1 - Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system - Google Patents

Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system

Info

Publication number
WO2017114508A1
Authority
WO
WIPO (PCT)
Prior art keywords
current frame
dimensional
background model
point cloud
scene
Prior art date
Application number
PCT/CN2016/113805
Other languages
English (en)
French (fr)
Inventor
周杰
邓磊
赖伟良
Original Assignee
清华大学 (Tsinghua University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University (清华大学)
Priority to US16/066,191 (granted as US10607369B2)
Publication of WO2017114508A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20108 Interactive selection of 2D slice in a 3D data set
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose

Definitions

  • The present invention relates to the technical fields of computer vision and image processing, and more particularly to an interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system.
  • Three-dimensional surveillance systems are a frontier research direction in intelligent surveillance.
  • A 3D surveillance system embeds the video feeds of a large number of surveillance devices into a unified reference background model in real time, integrating all surveillance footage to form an overall awareness of the monitored situation with free-viewpoint observation.
  • Surveillance personnel can quickly determine each camera's exact position and monitored content, and relate them to the on-site environment, without facing dozens or even hundreds of monitors.
  • A 3D surveillance system can support high-level intelligent analysis with multi-camera collaboration, such as target detection and tracking and abnormal-event detection, and has broad prospects in intelligent transportation, intelligent security, and smart communities.
  • Calibrating the position and pose of each camera within the 3D reference background model is the core step in building such a system.
  • One class of approaches to this calibration problem is sensor-based (e.g., GPS, inertial navigation, attitude sensors); it relies on specialized equipment and offers limited accuracy.
  • The other is automatic calibration based on computer vision. Such methods usually require sufficient overlapping fields of view between the surveillance images, and calibrate the relative poses between cameras using motion matching or feature matching. When applied directly to match camera images against the reference background model image, they often fail because the two differ too much or lack corresponding target motion information.
  • Existing 3D surveillance systems therefore mostly adopt interactive calibration, establishing the correspondence between each camera and the reference background model one by one and obtaining camera poses through geometric computation.
  • This approach is labor-intensive (the workload is proportional to the number of cameras), is suitable only for static cameras, and cannot handle camera disturbance or Pan-Tilt-Zoom (PTZ) motion.
  • the object of the present invention is to solve at least one of the above technical problems to some extent.
  • a first object of the present invention is to propose an interactive calibration method based on three-dimensional reconstruction in a three-dimensional monitoring system.
  • By collecting scene images and performing 3D reconstruction offline in a single pass, and then adding a small number of manually calibrated points (e.g., ≥4 points), the method can automatically calibrate multiple target cameras online and embed their surveillance footage into the reference background model, enabling intuitive, convenient, and unified 3D video surveillance.
  • a second object of the present invention is to provide an interactive calibration device based on three-dimensional reconstruction in a three-dimensional monitoring system.
  • a third object of the present invention is to provide a storage medium.
  • According to a first aspect of the present invention, an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system includes: acquiring a reference background model and surveillance video, capturing a plurality of scene images, and connecting the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video; performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embedding the point cloud into the reference background model; estimating the current frame pose of the surveillance video and automatically calibrating the surveillance cameras; and computing the homography transformation from the current frame pose to the reference background model and embedding the image projection into the reference background model.
  • The interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera).
  • Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, with the 3D feature point cloud introduced as an intermediate layer, the geometric transformation between the 3D point cloud and the reference background model needs to be established only once.
  • Afterwards, each target camera can be calibrated automatically, significantly reducing the workload.
  • Camera motion can also be handled automatically.
  • According to a second aspect of the present invention, an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system includes: an acquisition module for acquiring a reference background model and surveillance video; a capture module for capturing a plurality of scene images, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video; a connection module for connecting the plurality of surveillance cameras corresponding to the surveillance video; a generation module for performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene; an embedding module for embedding the point cloud into the reference background model; an estimation module for estimating the current frame pose of the surveillance video and automatically calibrating the surveillance cameras; and a computation module for computing the homography transformation from the current frame pose to the reference background model and embedding the image projection into the reference background model.
  • The interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera).
  • Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, with the 3D feature point cloud introduced as an intermediate layer, the geometric transformation between the 3D point cloud and the reference background model needs to be established only once.
  • Afterwards, each target camera can be calibrated automatically, significantly reducing the workload.
  • Camera motion can also be handled automatically.
  • According to a third aspect of the present invention, a storage medium is configured to store an application program, where the program is used to perform the interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to the first aspect of the present invention.
  • FIG. 1 is a flow chart of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional monitoring system according to an embodiment of the present invention
  • FIG. 2 is a flow chart of estimating a current frame pose of a surveillance video, in accordance with an embodiment of the present invention
  • FIG. 3 is a flow chart of the calculation of a homography transformation and the embedding of an image projection into a reference background model, in accordance with an embodiment of the present invention
  • FIG. 4 is a diagram showing an example of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional monitoring system according to an embodiment of the present invention
  • FIG. 5 is a structural block diagram of an interactive calibration apparatus based on three-dimensional reconstruction in a three-dimensional monitoring system according to an embodiment of the present invention.
  • FIG. 1 is a flow chart of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional monitoring system in accordance with one embodiment of the present invention.
  • the interactive calibration method based on three-dimensional reconstruction in the three-dimensional monitoring system may include:
  • S101: Acquire a reference background model and surveillance video, capture a plurality of scene images, and connect the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video.
  • The reference background model may be a three-dimensional model of the scene or a particular view of that model; specifically, it may be a satellite map or a particular viewpoint of a three-dimensional map.
  • The capture devices used to acquire the surveillance video and the scene images may be cameras, mobile phones, PTZ lenses, panoramic capture equipment, and the like.
  • The term "plurality" should be understood broadly, i.e., as a sufficiently large number.
  • S102: Perform three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embed the point cloud into the reference background model.
  • A general structure-from-motion (SFM) method can be used for the three-dimensional reconstruction, yielding the three-dimensional feature point cloud of the scene and the camera matrix corresponding to each scene image.
  • Specifically, SIFT feature points may be extracted from the scene images; image feature matching is then performed on the SIFT features, and the fundamental matrix is estimated within a RANSAC framework, where the fundamental matrix serves to remove outlier matches (denoising); finally, three-dimensional reconstruction is performed with a general reconstruction algorithm to obtain the scene's three-dimensional feature point cloud (see the sketch below).
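A minimal sketch of this front end, pairing SIFT matching with RANSAC fundamental-matrix filtering via OpenCV; the 0.7 ratio-test value and the 3-pixel inlier threshold are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def match_pair(img1, img2):
    """SIFT matching between two scene images, denoised by a RANSAC
    fundamental-matrix fit, as in the SFM front end described above."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Lowe ratio test on 2-NN matches to discard ambiguous descriptors.
    matcher = cv2.FlannBasedMatcher()
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.7 * n.distance]

    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # The fundamental matrix is used for denoising: matches violating
    # epipolar geometry are dropped by the RANSAC inlier mask.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    if F is None:
        return None, None, None
    inl = mask.ravel() == 1
    return F, pts1[inl], pts2[inl]
```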
  • Estimating the geometric relationship relies on a number of input correspondences between the three-dimensional feature point cloud and the reference background model; at least four sets of corresponding points are needed (e.g., three sets for solving plus one set for verification), combined with a RANSAC robust estimation framework to obtain the best solution.
  • The ground plane equation L of the scene is estimated by inputting a number of 3D points belonging to the plane (the number of points may be greater than or equal to 3) and fitting, within a RANSAC framework, the plane equation of these points in the point-cloud coordinate system; a sketch follows.
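A minimal RANSAC plane-fit sketch for the ground plane equation L (written here as n·x + d = 0), assuming the input is an (N, 3) array of user-provided 3D points in the point-cloud coordinate system; the iteration count and inlier tolerance are our assumptions.

```python
import numpy as np

def fit_ground_plane(pts, iters=500, tol=0.05):
    """RANSAC fit of a plane n.x + d = 0 to >= 3 points; returns (nx, ny, nz, d)."""
    rng = np.random.default_rng(0)
    best_count, best_plane = 0, None
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:            # degenerate (collinear) sample
            continue
        n /= np.linalg.norm(n)
        d = -n.dot(p0)
        count = int((np.abs(pts @ n + d) < tol).sum())   # inliers by distance
        if count > best_count:
            best_count, best_plane = count, np.append(n, d)
    return best_plane
```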
  • The above geometric relationship is represented by treating a view of the reference background model from a particular viewpoint as a virtual-view camera in the feature point cloud coordinate system; estimating the geometric relationship between the 3D feature point cloud and the reference background model then amounts to solving for the pose of this virtual-view camera, specifically:
  • When the reference background model is a two-dimensional view of a three-dimensional model, the camera may be modeled as an affine camera or a camera at infinity, the purpose of which is to obtain its projection matrix P∞, which has only 6 degrees of freedom.
  • Given a number of 3D→2D correspondences between the three-dimensional feature point cloud and the reference background view, all parameters of the projection matrix are obtained via the projection relationship; that is, a two-dimensional view of a three-dimensional model can be understood as the image obtained by some camera observing the model.
  • For example, a satellite map can be regarded as the image obtained by a camera on a satellite photographing the three-dimensional buildings on the earth's surface.
  • The camera-at-infinity model is adopted to eliminate perspective effects. When the reference background model is itself a three-dimensional model, a number of 3D correspondences between the feature point cloud and the three-dimensional background model are input, and the similarity transformation T from the point cloud to the background model is computed (see the sketch below).
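For the 3D-model case, the similarity transformation T can be recovered in closed form from three or more 3D-3D correspondences. The sketch below uses the Umeyama method as the solver; the patent only calls for a similarity transformation, so this particular algorithm is our assumption.

```python
import numpy as np

def similarity_transform(src, dst):
    """Solve s, R, t with dst ~= s * R @ src + t from (N,3) correspondences,
    N >= 3, via the Umeyama closed form."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)                  # dst-src covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                          # guard against reflections
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)          # variance of the source points
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```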
  • To accelerate the localization of subsequent online images, the interactive calibration method may further include building an index tree over the features of the three-dimensional feature point cloud; a sketch follows.
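A sketch of such an index, using a KD-tree over the point cloud's SIFT descriptors. scikit-learn's KDTree stands in here for whatever index an implementation would actually adopt (FLANN is a common alternative), and the ratio-test threshold is an assumption.

```python
import numpy as np
from sklearn.neighbors import KDTree

def build_index(cloud_descriptors):
    """cloud_descriptors: (N, 128) array, one SIFT descriptor per 3D point."""
    return KDTree(cloud_descriptors)

def match_2d3d(tree, frame_descriptors, ratio=0.7):
    """Return (frame feature index, cloud point index) pairs that pass
    Lowe's ratio test against the two nearest cloud descriptors."""
    dist, idx = tree.query(frame_descriptors, k=2)
    return [(q, int(idx[q, 0])) for q in range(len(frame_descriptors))
            if dist[q, 0] < ratio * dist[q, 1]]
```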
  • The specific implementation of estimating the current frame pose of the surveillance video and performing automatic calibration of the surveillance cameras may include the following steps:
  • S201: Extract the image features of the current frame of the surveillance video, and determine from the current frame whether a corresponding reference frame with known pose exists.
  • S202: If it exists, perform 2D-2D feature matching between the current frame and the reference frame.
  • The computation of relative motion mainly considers two situations: first, camera disturbance caused by natural factors (e.g., wind, collisions), where the motion is approximately a pure rotation.
  • Second, the motion of an active PTZ (pan-tilt-zoom) lens, which can rotate and zoom and can therefore be regarded as a rotation with scale; a sketch covering both cases follows.
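Both situations are captured by a single interframe homography: for a camera that only rotates (and possibly zooms), the image-to-image mapping is H = K' R K⁻¹. The sketch below estimates H with RANSAC and derives a rough motion magnitude for the reference-frame update; the displacement-based motion proxy is our assumption.

```python
import cv2
import numpy as np

def relative_motion(pts_ref, pts_cur):
    """pts_ref, pts_cur: matched (N,2) float32 points in the reference and
    current frames. Returns the interframe homography and a motion magnitude."""
    H, mask = cv2.findHomography(pts_ref, pts_cur, cv2.RANSAC, 3.0)
    if H is None:
        return None, 0.0
    inl = mask.ravel() == 1
    # Mean pixel displacement of inlier points: a crude measure of how far
    # the camera has rotated/zoomed since the reference frame was created.
    motion = float(np.linalg.norm(pts_cur[inl] - pts_ref[inl], axis=1).mean())
    return H, motion
```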
  • Estimating the current frame pose of the surveillance video mainly involves two processes: 2D-3D image localization and 2D-2D pose transfer.
  • The current frame pose is generally a relative pose, so a reference frame is needed.
  • When a reference frame exists, 2D-2D pose transfer can be performed.
  • The 2D-2D pose transfer mainly comprises feature extraction and matching, relative motion estimation, and reference frame updating.
  • To accelerate feature extraction, SiftGPU may be employed, and the relative motion is estimated within a RANSAC framework.
  • When no reference frame exists or the 2D-2D matching fails, 2D-3D image localization can be performed, and the pose of the current image is estimated with the PnP camera pose estimation algorithm; if the pose estimation succeeds, a reference frame is created with this pose (see the sketch below).
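A sketch of this 2D-3D localization step using OpenCV's RANSAC PnP solver; the intrinsic matrix K is assumed known, and the reprojection-error and confidence parameters are illustrative.

```python
import cv2
import numpy as np

def localize(pts3d, pts2d, K):
    """pts3d: (N,3) matched cloud points; pts2d: (N,2) image points; N >= 4.
    Returns the 3x4 projection matrix of the frame in cloud coordinates,
    or None if localization fails (so no reference frame is created)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, None,
        reprojectionError=4.0, confidence=0.99)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)        # rotation vector -> rotation matrix
    return K @ np.hstack([R, tvec])   # P_K = K [R | t]
```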
  • Step S104 may include the following steps:
  • The region that needs to be projected can be understood as the part of the current frame image below the ground vanishing line.
  • The homography transformation between the current frame pose P_K and the virtual-view camera P∞, induced by the ground plane equation L, is then computed; a sketch follows.
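The sketch below recovers this plane-induced homography by sampling four points of the plane L: n·x + d = 0, projecting them through both 3×4 camera matrices (P_K and the virtual-view camera, written P_inf here), and fitting H to the resulting correspondences; this point-sampling route is our choice and is equivalent to the closed-form expression.

```python
import cv2
import numpy as np

def plane_homography(P_K, P_inf, n, d):
    """Homography induced by the plane n.x + d = 0 (unit n), mapping the
    current frame (camera P_K) into the reference background view (P_inf)."""
    u = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:              # n parallel to x-axis: re-pick
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    x0 = -d * np.asarray(n, dtype=float)      # one point on the plane
    pts = [x0 + a * u + b * v for a, b in [(0, 0), (1, 0), (0, 1), (1, 1)]]

    def proj(P, X):                            # pinhole projection of a 3D point
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    src = np.float32([proj(P_K, X) for X in pts])
    dst = np.float32([proj(P_inf, X) for X in pts])
    H, _ = cv2.findHomography(src, dst)        # exact fit from 4 correspondences
    return H
```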
  • Steps S101 and S102 may be performed as offline analysis; that is, the geometric transformation between the three-dimensional point cloud and the reference background model may be established in advance through S101 and S102 and stored for use in the subsequent online calibration phase.
  • Steps S103 and S104 may be performed as online analysis; that is, each target camera can be calibrated online automatically with the help of the pre-established geometric transformation between the three-dimensional point cloud and the reference background model.
  • In an example, offline analysis is performed first: the reference background model is acquired and a sufficient number of scene images are actively captured; features are then extracted from the scene images and matched across images using the extracted feature points.
  • In the online phase, the surveillance video is acquired and the image of its current frame is extracted; the current frame is then matched against an existing reference frame with 2D-2D feature matching. If the matching fails or there is no reference frame, 2D-3D feature matching is performed between the current frame and the pre-generated 3D feature point cloud, the camera pose is estimated from the matches, and the reference frame is updated with the result. If the 2D-2D matching succeeds, the relative motion between the current frame and the reference frame is computed within a RANSAC framework and the current frame pose is estimated from it; when the relative motion is sufficiently large (indicating a large camera rotation), the reference frame is replaced with the current frame. Finally, the homography transformation from the current frame pose to the reference background model is computed and the image projection is embedded into the reference background model, continually refining it.
  • The interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera).
  • Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, with the 3D feature point cloud introduced as an intermediate layer, the geometric transformation between the 3D point cloud and the reference background model needs to be established only once.
  • Afterwards, each target camera can be calibrated automatically, significantly reducing the workload.
  • Camera motion can also be handled automatically.
  • The present invention mainly has the following advantages: first, only a small number (e.g., ≥4 sets) of 2D-3D corresponding points need to be calibrated manually; second, newly added cameras can essentially all be calibrated automatically; third, all images can be calibrated as a whole, reducing the workload.
  • the present invention also proposes an interactive calibration device based on three-dimensional reconstruction in a three-dimensional monitoring system.
  • The interactive calibration device may include: an acquisition module 100, a capture module 200, a connection module 300, a generation module 400, an embedding module 500, an estimation module 600, and a computation module 700.
  • The acquisition module 100 may be configured to acquire the reference background model and the surveillance video.
  • The reference background model may be a three-dimensional model of the scene or a particular view of that model; specifically, it may be a satellite map or a particular viewpoint of a three-dimensional map.
  • The capture module 200 may be configured to capture a plurality of scene images, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video.
  • The capture devices used by the acquisition module 100 to acquire the surveillance video and by the capture module 200 to capture the scene images may be cameras, mobile phones, PTZ lenses, panoramic capture equipment, and the like.
  • The term "plurality" should be understood broadly, i.e., as a sufficiently large number.
  • the connection module 300 can be used to connect multiple surveillance cameras corresponding to the surveillance video.
  • The generation module 400 is configured to perform three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene. More specifically, the generation module 400 may use a general structure-from-motion (SFM) method for the reconstruction, obtaining the three-dimensional feature point cloud of the scene and the camera matrix corresponding to each scene image. In embodiments of the present invention, the generation module 400 may first extract SIFT feature points from the scene images, then perform image feature matching on the SIFT features and estimate the fundamental matrix within a RANSAC framework, where the fundamental matrix serves to remove outlier matches (denoising); finally, three-dimensional reconstruction is performed with a general reconstruction algorithm to obtain the scene's three-dimensional feature point cloud.
  • the generating module 400 is further configured to: build an index tree on features of the three-dimensional feature point cloud.
  • the embedding module 500 can be used to embed a three-dimensional feature point cloud into a reference background model.
  • Specifically, the embedding module 500 may estimate the geometric relationship between the three-dimensional feature point cloud and the reference background model, so as to embed the calibrated cameras into the reference background model.
  • In embedding the three-dimensional feature point cloud into the reference background model, the embedding module 500 mainly solves two problems: estimating the geometric relationship between the point cloud and the reference background model, so that the calibrated cameras can be embedded into the model; and estimating the ground plane equation L of the scene, so that the visible region of a camera's surveillance footage projected onto the scene's ground plane can be computed.
  • Estimating the geometric relationship relies on a number of input correspondences between the three-dimensional feature point cloud and the reference background model; at least four sets of corresponding points are needed (e.g., three sets for solving plus one set for verification), combined with a RANSAC robust estimation framework to obtain the best solution.
  • The ground plane equation L of the scene is estimated by inputting a number of 3D points belonging to the plane (the number of points may be greater than or equal to 3) and fitting, within a RANSAC framework, the plane equation of these points in the point-cloud coordinate system.
  • The above geometric relationship is represented by treating a view of the reference background model from a particular viewpoint as a virtual-view camera in the feature point cloud coordinate system; estimating the geometric relationship between the 3D feature point cloud and the reference background model then amounts to solving for the pose of this virtual-view camera, specifically:
  • When the reference background model is a two-dimensional view of a three-dimensional model, the camera is modeled as an affine camera or a camera at infinity.
  • The purpose is to obtain its projection matrix P∞, which has only 6 degrees of freedom.
  • Given a number of 3D→2D correspondences between the three-dimensional feature point cloud and the reference background view, all parameters of the projection matrix are obtained via the projection relationship; that is, the two-dimensional view of the three-dimensional model can be understood as the image obtained by some camera observing the model.
  • For example, a satellite map can be regarded as the image obtained by a camera on a satellite photographing the three-dimensional buildings on the earth's surface.
  • The camera-at-infinity model is adopted to eliminate perspective effects. When the reference background model is itself a three-dimensional model, a number of 3D correspondences between the feature point cloud and the three-dimensional background model are input, and the similarity transformation T from the point cloud to the background model is computed.
  • The estimation module 600 may be used to estimate the current frame pose of the surveillance video and perform automatic calibration of the surveillance cameras. Specifically, the estimation module 600 may first extract the image features of the current frame of the surveillance video and determine from the current frame whether a corresponding reference frame with known pose exists; if it exists, 2D-2D feature matching is performed between the current frame and the reference frame. If the 2D-2D feature matching fails or no reference frame exists, 2D-3D feature matching is performed between the surveillance video and the 3D feature point cloud, the pose, in the point-cloud coordinate system, of the camera corresponding to the current frame is estimated from the matches, and the reference frame is updated. If the 2D-2D matching succeeds, the relative motion between the current frame and the reference frame is computed within a RANSAC framework and the current frame pose P_K is estimated from it; the relative motion of the current frame with respect to the reference frame is also computed, and when it exceeds a preset threshold, the reference frame is updated with the current frame.
  • The computation of relative motion mainly considers two situations: first, camera disturbance caused by natural factors (e.g., wind, collisions), where the motion is approximately a pure rotation.
  • Second, the motion of an active PTZ (pan-tilt-zoom) lens, which can rotate and zoom and can therefore be regarded as a rotation with scale.
  • Estimating the current frame pose of the surveillance video mainly involves two processes: 2D-3D image localization and 2D-2D pose transfer.
  • The current frame pose is generally a relative pose, so a reference frame is needed.
  • When a reference frame exists, 2D-2D pose transfer can be performed.
  • The 2D-2D pose transfer mainly comprises feature extraction and matching, relative motion estimation, and reference frame updating.
  • To accelerate feature extraction, SiftGPU may be employed, and the relative motion is estimated within a RANSAC framework.
  • When no reference frame exists or the 2D-2D matching fails, 2D-3D image localization can be performed, and the pose of the current image is estimated with the PnP camera pose estimation algorithm; if the pose estimation succeeds, a reference frame is created with this pose.
  • The computation module 700 may be configured to compute the homography transformation from the current frame pose to the reference background model and embed the image projection into the reference background model. Specifically, the computation module 700 may first compute the ground vanishing line from the ground plane equation L and the current frame pose P_K of the surveillance video, and cut the current frame image plane along the vanishing line to obtain the region to be projected, which can be understood as the part of the current frame image below the vanishing line; the homography transformation between the current frame pose P_K and the virtual-view camera P∞, induced by the ground plane equation L, is then computed; finally, the region to be projected is embedded into the reference background model according to this homography, and the projected region is updated in real time.
  • The interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera).
  • Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, with the 3D feature point cloud introduced as an intermediate layer, the geometric transformation between the 3D point cloud and the reference background model needs to be established only once.
  • Afterwards, each target camera can be calibrated automatically, significantly reducing the workload.
  • Camera motion can also be handled automatically.
  • The present invention also provides a storage medium for storing an application program used to perform the interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to any of the above embodiments of the present invention.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • More specific examples of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM).
  • The computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
  • Portions of the invention may be implemented in hardware, software, firmware, or a combination thereof.
  • In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, they may be implemented with any one of the following techniques well known in the art, or a combination thereof: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system. The method comprises: acquiring a reference background model and surveillance video, capturing a plurality of scene images, and connecting the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video (S101); performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embedding the point cloud into the reference background model (S102); estimating the current frame pose of the surveillance video and automatically calibrating the surveillance cameras (S103); and computing the homography transformation from the current frame pose to the reference background model and embedding the image projection into the reference background model (S104). The method achieves intuitive, convenient, and unified three-dimensional video surveillance.

Description

Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system

CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to Chinese Patent Application No. 201511024306.1, entitled "Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system", filed by Tsinghua University on December 30, 2015.

TECHNICAL FIELD

The present invention relates to the technical fields of computer vision and image processing, and in particular to an interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system.

BACKGROUND

At present, three-dimensional surveillance systems are a frontier research direction in intelligent surveillance. A three-dimensional surveillance system embeds the video feeds of a large number of surveillance devices into a unified reference background model in real time, integrating all surveillance footage to form an overall awareness of the monitored situation with free-viewpoint observation. Compared with traditional two-dimensional surveillance systems, operators can quickly determine each camera's exact position and monitored content, and relate them to the on-site environment, without facing dozens or even hundreds of monitors. Three-dimensional surveillance systems can support high-level intelligent analysis with multi-camera collaboration, such as target detection and tracking and abnormal-event detection, and have broad prospects in intelligent transportation, intelligent security, smart communities, and related fields. In building a three-dimensional surveillance system, calibrating the positions and poses of the cameras within the three-dimensional reference background model is the core step.

In the related art, one class of approaches to the calibration problem is sensor-based (e.g., GPS, inertial navigation, attitude sensors); such methods rely on specialized equipment and offer limited accuracy. Another class is automatic calibration based on computer vision, which usually requires sufficient overlapping fields of view between the surveillance images and calibrates the relative poses between cameras using motion matching or feature matching. When such methods are applied directly to match camera images against the reference background model image, they often fail because the two differ too much or lack corresponding target motion information.

Existing three-dimensional surveillance systems therefore mostly adopt interactive calibration, establishing the correspondence between each camera and the reference background model one by one and obtaining each camera's pose through geometric computation. However, this approach is labor-intensive (the workload is proportional to the number of cameras), is suitable only for static cameras, and cannot handle camera disturbance or Pan-Tilt-Zoom (PTZ) motion.

SUMMARY

The object of the present invention is to solve at least one of the above technical problems to some extent.

To this end, a first object of the present invention is to propose an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system. By collecting scene images and performing three-dimensional reconstruction offline in a single pass, and then adding a small number of manually calibrated points (e.g., ≥4 points), the method can automatically calibrate multiple target cameras online and embed their surveillance footage into the reference background model, achieving an intuitive, convenient, and unified three-dimensional video surveillance effect.

A second object of the present invention is to propose an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system.

A third object of the present invention is to propose a storage medium.

To achieve the above objects, an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the first aspect of the present invention includes: acquiring a reference background model and surveillance video, capturing a plurality of scene images, and connecting the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video; performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embedding the three-dimensional feature point cloud into the reference background model; estimating the current frame pose of the surveillance video, and performing automatic calibration of the surveillance cameras; and computing the homography transformation from the current frame pose to the reference background model, and embedding the image projection into the reference background model.

The interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the present invention can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera). Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, by introducing the three-dimensional feature point cloud as an intermediate layer, the geometric transformation between the three-dimensional point cloud and the reference background model needs to be established only once; afterwards, every target camera can be calibrated automatically with the help of the point cloud, significantly reducing the workload. In addition, besides static cameras, camera motion can also be handled automatically.

To achieve the above objects, an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the second aspect of the present invention includes: an acquisition module for acquiring a reference background model and surveillance video; a capture module for capturing a plurality of scene images, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video; a connection module for connecting the plurality of surveillance cameras corresponding to the surveillance video; a generation module for performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene; an embedding module for embedding the three-dimensional feature point cloud into the reference background model; an estimation module for estimating the current frame pose of the surveillance video and performing automatic calibration of the surveillance cameras; and a computation module for computing the homography transformation from the current frame pose to the reference background model and embedding the image projection into the reference background model.

The interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the present invention can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera). Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, by introducing the three-dimensional feature point cloud as an intermediate layer, the geometric transformation between the three-dimensional point cloud and the reference background model needs to be established only once; afterwards, every target camera can be calibrated automatically with the help of the point cloud, significantly reducing the workload. In addition, besides static cameras, camera motion can also be handled automatically.

To achieve the above objects, a storage medium according to embodiments of the third aspect of the present invention is configured to store an application program, the application program being used to execute the interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the first aspect of the present invention.

Additional aspects and advantages of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS

The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a flowchart of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to an embodiment of the present invention;

FIG. 2 is a flowchart of estimating the current frame pose of the surveillance video according to an embodiment of the present invention;

FIG. 3 is a flowchart of computing the homography transformation and embedding the image projection into the reference background model according to an embodiment of the present invention;

FIG. 4 is an illustration of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to a specific embodiment of the present invention; and

FIG. 5 is a structural block diagram of an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system according to an embodiment of the present invention.

DETAILED DESCRIPTION

Embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and should not be construed as limiting the present invention.

The interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the present invention are described below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to an embodiment of the present invention. As shown in FIG. 1, the method may include:

S101: Acquire a reference background model and surveillance video, capture a plurality of scene images, and connect the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video.

In embodiments of the present invention, the reference background model may be a three-dimensional model of the scene or a particular view of that model; specifically, it may be a satellite map or a particular viewpoint of a three-dimensional map.

In addition, in embodiments of the present invention, the capture devices used to acquire the surveillance video and the scene images may be cameras, mobile phones, PTZ lenses, panoramic capture equipment, and the like.

It should be understood that at least some of the scene images have a certain overlap with the field of view of the surveillance video so that the subsequent image localization can proceed smoothly. In addition, in embodiments of the present invention, the term "plurality" should be understood broadly, i.e., as a sufficiently large number.

S102: Perform three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embed the three-dimensional feature point cloud into the reference background model.

Specifically, a general structure-from-motion (SFM) method may be used for the three-dimensional reconstruction, yielding the three-dimensional feature point cloud of the scene and the camera matrix corresponding to each scene image. In embodiments of the present invention, SIFT feature points may first be extracted from the scene images; image feature matching may then be performed on the SIFT feature points, and the fundamental matrix estimated within a RANSAC framework, where the fundamental matrix is used for denoising; finally, three-dimensional reconstruction is performed with a general three-dimensional reconstruction algorithm to obtain the scene's three-dimensional feature point cloud.

It should be understood that, in embodiments of the present invention, embedding the three-dimensional feature point cloud into the reference background model mainly involves solving two problems: estimating the geometric relationship between the point cloud and the reference background model, so that the calibrated cameras can be embedded into the model; and estimating the ground plane equation L of the scene, so that the visible region of a camera's surveillance footage projected onto the scene's ground plane can be computed.

Estimating the geometric relationship relies on a number of input correspondences between the three-dimensional feature point cloud and the reference background model; at least four sets of corresponding points are needed (e.g., three sets for solving plus one set for verification), combined with a RANSAC robust estimation framework to obtain the best solution. The ground plane equation L of the scene is estimated by inputting a number of 3D points belonging to the plane (the number of 3D points may be greater than or equal to 3) and using a RANSAC framework to fit the plane equation of these points in the point-cloud coordinate system.

It should be noted that, in embodiments of the present invention, the above geometric relationship is represented by treating a view of the reference background model from a particular viewpoint as a virtual-view camera in the feature point cloud coordinate system. Estimating the geometric relationship between the three-dimensional feature point cloud and the reference background model then amounts to solving for the pose of this virtual-view camera, specifically:

When the reference background model is a two-dimensional view of a three-dimensional model, the camera may be modeled as an affine camera or a camera at infinity, and the goal is to solve for its projection matrix P∞, which has only 6 degrees of freedom. Given a number of 3D→2D correspondences between the three-dimensional feature point cloud and the reference background view, all parameters of the projection matrix are solved via the projection relationship; that is, a two-dimensional view of a three-dimensional model can be understood as the image obtained by some camera observing the model; for example, a satellite map can be regarded as the image obtained by a camera on a satellite photographing the three-dimensional buildings on the earth's surface. The camera-at-infinity model is adopted to eliminate perspective effects. When the reference background model is itself a three-dimensional model, a number of 3D correspondences between the feature point cloud and the three-dimensional background model are input, and the similarity transformation T from the point cloud to the background model is computed.

To accelerate the localization of subsequent online images, in an embodiment of the present invention, the interactive calibration method may further include: building an index tree over the features of the three-dimensional feature point cloud.

S103: Estimate the current frame pose of the surveillance video, and perform automatic calibration of the surveillance cameras.

Specifically, in embodiments of the present invention, as shown in FIG. 2, estimating the current frame pose of the surveillance video and performing automatic calibration of the surveillance cameras may include the following steps:

S201: Extract the image features of the current frame of the surveillance video, and determine from the current frame whether a corresponding reference frame with known pose exists.

S202: If it exists, perform 2D-2D feature matching between the current frame and the reference frame.

S203: If the 2D-2D feature matching fails or no reference frame exists, perform 2D-3D feature matching between the surveillance video and the three-dimensional feature point cloud, estimate from the matching relationship the pose, in the point-cloud coordinate system, of the camera corresponding to the current frame, and update the reference frame.

S204: If the 2D-2D matching succeeds, compute the relative motion between the current frame and the reference frame within a RANSAC framework, and estimate the current frame pose P_K from the relative motion between the current frame and the reference frame.

It should be noted that, in embodiments of the present invention, the computation of relative motion mainly considers the following two situations: first, camera disturbance caused by natural factors (e.g., wind, collisions), where the motion is approximately a pure rotation; second, the motion of an active PTZ (pan-tilt-zoom) lens, which can rotate and zoom and can therefore be regarded as a rotation with scale.

S205: Compute the relative motion of the current frame with respect to the reference frame, and when the relative motion of the current frame with respect to the reference frame exceeds a preset threshold, update the reference frame with the current frame.

In other words, estimating the current frame pose of the surveillance video mainly involves two processes: 2D-3D image localization and 2D-2D pose transfer. The current frame pose is generally a relative pose, so a reference frame is required. When a reference frame exists, 2D-2D pose transfer can be performed; it mainly comprises feature extraction and matching, relative motion estimation, and reference frame updating. In embodiments of the present invention, SiftGPU may be adopted to accelerate feature extraction, and the relative motion is estimated within a RANSAC framework. When there is no reference frame or the 2D-2D matching fails, 2D-3D image localization can be performed, and the pose of the current image is estimated with the PnP camera pose estimation algorithm; if the pose estimation succeeds, a reference frame is created with this pose.
S104: Compute the homography transformation from the current frame pose to the reference background model, and embed the image projection into the reference background model.

Specifically, in embodiments of the present invention, as shown in FIG. 3, the implementation of step S104 may include the following steps:

S301: Compute the ground vanishing line from the ground plane equation L and the current frame pose P_K of the surveillance video, and cut the current frame image plane along the vanishing line to obtain the region to be projected.

In embodiments of the present invention, the region to be projected can be understood as the part of the current frame image below the vanishing line.

S302: Compute the homography transformation, induced by the ground plane equation L, between the current frame pose P_K and the virtual-view camera P∞.

S303: Embed the region to be projected into the reference background model according to the homography transformation, and update the projected region in real time.

It should be noted that, in embodiments of the present invention, steps S101 and S102 may constitute offline analysis; that is, the geometric transformation between the three-dimensional point cloud and the reference background model may be established in advance through S101 and S102 and stored for use in the subsequent online calibration phase. Steps S103 and S104 may constitute online analysis; that is, each target camera can be calibrated online automatically with the help of the pre-established geometric transformation between the three-dimensional point cloud and the reference background model.

The interactive calibration method of embodiments of the present invention is further described below with reference to FIG. 4.

For example, as shown in FIG. 4, offline analysis may be performed first: the reference background model is acquired and a sufficient number of scene images are actively captured; features are then extracted from the scene images and matched using the extracted feature points, the fundamental matrix is estimated within a RANSAC framework, and three-dimensional reconstruction is then performed with a general three-dimensional reconstruction algorithm to obtain the scene's three-dimensional feature point cloud; the point cloud can then be embedded into the reference background model to continually refine it.

In the subsequent online analysis phase, the surveillance video can be acquired and features extracted from the image of its current frame; the current frame can then be matched against an existing reference frame by 2D-2D feature matching. If the matching fails or there is no reference frame, 2D-3D feature matching is performed between the surveillance video and the pre-generated three-dimensional feature point cloud, the camera pose is estimated from the matching relationship, and the reference frame is updated with the estimation result. If the 2D-2D feature matching succeeds, the relative motion between the current frame and the reference frame is computed within a RANSAC framework and the current frame pose is estimated from it; the relative motion of the current frame with respect to the reference frame is also computed, and when it is sufficiently large (indicating that the camera rotation is large), the reference frame is replaced with the current frame. Finally, the homography transformation from the current frame pose to the reference background model can be computed and the image projection embedded into the reference background model, continually refining it.

The interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the present invention can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera). Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, by introducing the three-dimensional feature point cloud as an intermediate layer, the geometric transformation between the three-dimensional point cloud and the reference background model needs to be established only once; afterwards, every target camera can be calibrated automatically with the help of the point cloud, significantly reducing the workload. In addition, besides static cameras, camera motion can also be handled automatically.

In summary, compared with traditional camera pose calibration methods, the present invention mainly has the following advantages: first, only a small number (e.g., ≥4 sets) of 2D-3D corresponding points need to be calibrated manually; second, newly added cameras can essentially all be calibrated automatically; third, all images can be calibrated as a whole, reducing the workload.
To implement the above embodiments, the present invention also proposes an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system.

FIG. 5 is a structural block diagram of an interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system according to an embodiment of the present invention. As shown in FIG. 5, the interactive calibration device may include: an acquisition module 100, a capture module 200, a connection module 300, a generation module 400, an embedding module 500, an estimation module 600, and a computation module 700.

Specifically, the acquisition module 100 may be used to acquire the reference background model and the surveillance video. In embodiments of the present invention, the reference background model may be a three-dimensional model of the scene or a particular view of that model; specifically, it may be a satellite map or a particular viewpoint of a three-dimensional map.

The capture module 200 may be used to capture a plurality of scene images, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video. In embodiments of the present invention, the capture devices used by the acquisition module 100 to acquire the surveillance video and by the capture module 200 to capture the scene images may be cameras, mobile phones, PTZ lenses, panoramic capture equipment, and the like.

It should be understood that at least some of the scene images have a certain overlap with the field of view of the surveillance video so that the subsequent image localization can proceed smoothly. In addition, in embodiments of the present invention, the term "plurality" should be understood broadly, i.e., as a sufficiently large number.

The connection module 300 may be used to connect the plurality of surveillance cameras corresponding to the surveillance video.

The generation module 400 may be used to perform three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene. More specifically, the generation module 400 may use a general structure-from-motion (SFM) method for the reconstruction, obtaining the three-dimensional feature point cloud of the scene and the camera matrix corresponding to each scene image. In embodiments of the present invention, the generation module 400 may first extract SIFT feature points from the scene images, then perform image feature matching on the SIFT feature points and estimate the fundamental matrix within a RANSAC framework, where the fundamental matrix is used for denoising; finally, three-dimensional reconstruction is performed with a general three-dimensional reconstruction algorithm to obtain the scene's three-dimensional feature point cloud.

To accelerate the localization of subsequent online images, in an embodiment of the present invention, the generation module 400 may further be used to build an index tree over the features of the three-dimensional feature point cloud.

The embedding module 500 may be used to embed the three-dimensional feature point cloud into the reference background model. Specifically, in embodiments of the present invention, the embedding module 500 may estimate the geometric relationship between the three-dimensional feature point cloud and the reference background model, so as to embed the calibrated cameras into the reference background model.

It should be understood that, in embedding the three-dimensional feature point cloud into the reference background model, the embedding module 500 mainly solves two problems: estimating the geometric relationship between the point cloud and the reference background model, so that the calibrated cameras can be embedded into the model; and estimating the ground plane equation L of the scene, so that the visible region of a camera's surveillance footage projected onto the scene's ground plane can be computed.

Estimating the geometric relationship relies on a number of input correspondences between the three-dimensional feature point cloud and the reference background model; at least four sets of corresponding points are needed (e.g., three sets for solving plus one set for verification), combined with a RANSAC robust estimation framework to obtain the best solution. The ground plane equation L of the scene is estimated by inputting a number of 3D points belonging to the plane (the number of 3D points may be greater than or equal to 3) and using a RANSAC framework to fit the plane equation of these points in the point-cloud coordinate system.

It should be noted that, in embodiments of the present invention, the above geometric relationship is represented by treating a view of the reference background model from a particular viewpoint as a virtual-view camera in the feature point cloud coordinate system. Estimating the geometric relationship between the three-dimensional feature point cloud and the reference background model then amounts to solving for the pose of this virtual-view camera, specifically:

When the reference background model is a two-dimensional view of a three-dimensional model, the camera is modeled as an affine camera or a camera at infinity, and the goal is to solve for its projection matrix P∞, which has only 6 degrees of freedom. Given a number of 3D→2D correspondences between the three-dimensional feature point cloud and the reference background view, all parameters of the projection matrix are solved via the projection relationship; that is, the two-dimensional view of the three-dimensional model can be understood as the image obtained by some camera observing the model; for example, a satellite map can be regarded as the image obtained by a camera on a satellite photographing the three-dimensional buildings on the earth's surface. The camera-at-infinity model is adopted to eliminate perspective effects. When the reference background model is itself a three-dimensional model, a number of 3D correspondences between the feature point cloud and the three-dimensional background model are input, and the similarity transformation T from the point cloud to the background model is computed.

The estimation module 600 may be used to estimate the current frame pose of the surveillance video and perform automatic calibration of the surveillance cameras. Specifically, in embodiments of the present invention, the estimation module 600 may first extract the image features of the current frame of the surveillance video and determine from the current frame whether a corresponding reference frame with known pose exists; if it exists, 2D-2D feature matching is performed between the current frame and the reference frame. If the 2D-2D feature matching fails or no reference frame exists, 2D-3D feature matching is performed between the surveillance video and the three-dimensional feature point cloud, the pose, in the point-cloud coordinate system, of the camera corresponding to the current frame is estimated from the matching relationship, and the reference frame is updated. If the 2D-2D matching succeeds, the relative motion between the current frame and the reference frame is computed within a RANSAC framework and the current frame pose P_K is estimated from it; the relative motion of the current frame with respect to the reference frame is also computed, and when it exceeds a preset threshold, the reference frame is updated with the current frame.

It should be noted that, in embodiments of the present invention, the computation of relative motion mainly considers the following two situations: first, camera disturbance caused by natural factors (e.g., wind, collisions), where the motion is approximately a pure rotation; second, the motion of an active PTZ (pan-tilt-zoom) lens, which can rotate and zoom and can therefore be regarded as a rotation with scale.

In other words, estimating the current frame pose of the surveillance video mainly involves two processes: 2D-3D image localization and 2D-2D pose transfer. The current frame pose is generally a relative pose, so a reference frame is required. When a reference frame exists, 2D-2D pose transfer can be performed; it mainly comprises feature extraction and matching, relative motion estimation, and reference frame updating. In embodiments of the present invention, SiftGPU may be adopted to accelerate feature extraction, and the relative motion is estimated within a RANSAC framework. When there is no reference frame or the 2D-2D matching fails, 2D-3D image localization can be performed, and the pose of the current image is estimated with the PnP camera pose estimation algorithm; if the pose estimation succeeds, a reference frame is created with this pose.

The computation module 700 may be used to compute the homography transformation from the current frame pose to the reference background model and embed the image projection into the reference background model. Specifically, in embodiments of the present invention, the computation module 700 may first compute the ground vanishing line from the ground plane equation L and the current frame pose P_K of the surveillance video, and cut the current frame image plane along the vanishing line to obtain the region to be projected, which can be understood as the part of the current frame image below the vanishing line; the homography transformation, induced by the ground plane equation L, between the current frame pose P_K and the virtual-view camera P∞ is then computed; finally, the region to be projected is embedded into the reference background model according to the homography transformation, and the projected region is updated in real time.

The interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system according to embodiments of the present invention can quickly and automatically estimate the poses of multiple cameras in the reference background model and can overcome the effects of image motion (e.g., disturbance or an active camera). Unlike the traditional complex interactive approach of manually calibrating target cameras one by one, by introducing the three-dimensional feature point cloud as an intermediate layer, the geometric transformation between the three-dimensional point cloud and the reference background model needs to be established only once; afterwards, every target camera can be calibrated automatically with the help of the point cloud, significantly reducing the workload. In addition, besides static cameras, camera motion can also be handled automatically.

To implement the above embodiments, the present invention also proposes a storage medium for storing an application program, the application program being used to execute the interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to any of the above embodiments of the present invention.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided they are not mutually contradictory, those skilled in the art may combine the different embodiments or examples described in this specification, as well as features of those different embodiments or examples.

Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which embodiments of the present invention belong.

The logic and/or steps represented in a flowchart or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.

It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented with any one of the following techniques well known in the art, or a combination thereof: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.

Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program, which may be stored in a computer-readable storage medium; when executed, the program performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The above integrated module may be implemented either in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer-readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.

Claims (13)

  1. An interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system, characterized by comprising the following steps:
    acquiring a reference background model and surveillance video, capturing a plurality of scene images, and connecting the plurality of surveillance cameras corresponding to the surveillance video, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video;
    performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene, and embedding the three-dimensional feature point cloud into the reference background model;
    estimating a current frame pose of the surveillance video, and performing automatic calibration of the surveillance cameras; and
    computing a homography transformation from the current frame pose to the reference background model, and embedding an image projection into the reference background model.
  2. The method according to claim 1, wherein performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene specifically comprises:
    extracting corresponding SIFT feature points from the plurality of scene images;
    performing image feature matching on the SIFT feature points, and estimating a fundamental matrix within a RANSAC framework, wherein the fundamental matrix is used for denoising; and
    performing three-dimensional reconstruction based on the fundamental matrix with a general three-dimensional reconstruction algorithm to obtain the three-dimensional feature point cloud of the scene.
  3. The method according to claim 2, further comprising:
    building an index tree over features of the three-dimensional feature point cloud; and
    estimating a ground plane equation L of the scene, so as to compute the visible region of the camera surveillance footage projected onto the ground plane of the scene.
  4. The method according to claim 1, wherein embedding the three-dimensional feature point cloud into the reference background model specifically comprises:
    estimating a geometric relationship between the three-dimensional feature point cloud and the reference background model, so as to embed the calibrated cameras into the reference background model.
  5. The method according to claim 1, wherein estimating the current frame pose of the surveillance video and performing automatic calibration of the surveillance cameras specifically comprises:
    extracting image features of a current frame of the surveillance video, and determining from the current frame whether a corresponding reference frame with known pose exists;
    if it exists, performing 2D-2D feature matching between the current frame and the reference frame;
    if the 2D-2D feature matching fails or the reference frame does not exist, performing 2D-3D feature matching between the surveillance video and the three-dimensional feature point cloud, estimating from the matching relationship the pose, in the point-cloud coordinate system, of the camera corresponding to the current frame, and updating the reference frame;
    if the 2D-2D matching succeeds, computing the relative motion between the current frame and the reference frame within a RANSAC framework, and estimating the current frame pose P_K from the relative motion between the current frame and the reference frame; and
    computing the relative motion of the current frame with respect to the reference frame, and updating the reference frame with the current frame when the relative motion of the current frame with respect to the reference frame exceeds a preset threshold.
  6. The method according to claim 3, wherein computing the homography transformation from the current frame pose to the reference background model and embedding the image projection into the reference background model specifically comprises:
    computing a ground vanishing line from the ground plane equation L and the current frame pose P_K of the surveillance video, and cutting the current frame image plane along the vanishing line to obtain a region to be projected;
    computing the homography transformation, induced by the ground plane equation L, between the current frame pose P_K and a virtual-view camera P∞; and
    embedding the region to be projected into the reference background model according to the homography transformation, and updating the projected region in real time.
  7. An interactive calibration device based on three-dimensional reconstruction in a three-dimensional surveillance system, characterized by comprising:
    an acquisition module for acquiring a reference background model and surveillance video;
    a capture module for capturing a plurality of scene images, wherein at least one of the plurality of scene images overlaps the site covered by the surveillance video;
    a connection module for connecting the plurality of surveillance cameras corresponding to the surveillance video;
    a generation module for performing three-dimensional reconstruction from the plurality of scene images to generate a three-dimensional feature point cloud of the scene;
    an embedding module for embedding the three-dimensional feature point cloud into the reference background model;
    an estimation module for estimating a current frame pose of the surveillance video and performing automatic calibration of the surveillance cameras; and
    a computation module for computing a homography transformation from the current frame pose to the reference background model and embedding an image projection into the reference background model.
  8. The device according to claim 7, wherein the generation module is specifically configured to:
    extract corresponding SIFT feature points from the plurality of scene images;
    perform image feature matching on the SIFT feature points, and estimate a fundamental matrix within a RANSAC framework, wherein the fundamental matrix is used for denoising; and
    perform three-dimensional reconstruction based on the fundamental matrix with a general three-dimensional reconstruction algorithm to obtain the three-dimensional feature point cloud of the scene.
  9. The device according to claim 8, wherein the generation module is further configured to:
    build an index tree over features of the three-dimensional feature point cloud; and
    estimate a ground plane equation L of the scene, so as to compute the visible region of the camera surveillance footage projected onto the ground plane of the scene.
  10. The device according to claim 7, wherein the embedding module is specifically configured to:
    estimate a geometric relationship between the three-dimensional feature point cloud and the reference background model, so as to embed the calibrated cameras into the reference background model.
  11. The device according to claim 7, wherein the estimation module is specifically configured to:
    extract image features of a current frame of the surveillance video, and determine from the current frame whether a corresponding reference frame with known pose exists;
    if it exists, perform 2D-2D feature matching between the current frame and the reference frame;
    if the 2D-2D feature matching fails or the reference frame does not exist, perform 2D-3D feature matching between the surveillance video and the three-dimensional feature point cloud, estimate from the matching relationship the pose, in the point-cloud coordinate system, of the camera corresponding to the current frame, and update the reference frame;
    if the 2D-2D matching succeeds, compute the relative motion between the current frame and the reference frame within a RANSAC framework, and estimate the current frame pose P_K from the relative motion between the current frame and the reference frame; and
    compute the relative motion of the current frame with respect to the reference frame, and update the reference frame with the current frame when the relative motion of the current frame with respect to the reference frame exceeds a preset threshold.
  12. The device according to claim 9, wherein the computation module is specifically configured to:
    compute a ground vanishing line from the ground plane equation L and the current frame pose P_K of the surveillance video, and cut the current frame image plane along the vanishing line to obtain a region to be projected;
    compute the homography transformation, induced by the ground plane equation L, between the current frame pose P_K and a virtual-view camera P∞; and
    embed the region to be projected into the reference background model according to the homography transformation, and update the projected region in real time.
  13. A storage medium for storing an application program, wherein the application program is configured to perform the interactive calibration method based on three-dimensional reconstruction in a three-dimensional surveillance system according to any one of claims 1 to 6.
PCT/CN2016/113805 2015-12-30 2016-12-30 Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system WO2017114508A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/066,191 US10607369B2 (en) 2015-12-30 2016-12-30 Method and device for interactive calibration based on 3D reconstruction in 3D surveillance system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201511024306.1 2015-12-30
CN201511024306.1A CN105678748B (zh) 2015-12-30 2015-12-30 Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system

Publications (1)

Publication Number Publication Date
WO2017114508A1 (zh) 2017-07-06

Family

ID=56189831

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/113805 WO2017114508A1 (zh) Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system 2015-12-30 2016-12-30

Country Status (3)

Country Link
US (1) US10607369B2 (zh)
CN (1) CN105678748B (zh)
WO (1) WO2017114508A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899174A (zh) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 Single-camera rotational stitching method based on deep learning

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678748B (zh) * 2015-12-30 2019-01-15 Tsinghua University Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system
CN106204595B (zh) * 2016-07-13 2019-05-10 Sichuan University Airport scene three-dimensional panoramic surveillance method based on binocular cameras
CN106131535B (zh) * 2016-07-29 2018-03-02 传线网络科技(上海)有限公司 Video capture method and device, and video generation method and device
US10742940B2 (en) 2017-05-05 2020-08-11 VergeSense, Inc. Method for monitoring occupancy in a work area
US11044445B2 (en) 2017-05-05 2021-06-22 VergeSense, Inc. Method for monitoring occupancy in a work area
CN107564062B (zh) * 2017-08-16 2020-06-19 Tsinghua University Pose anomaly detection method and device
US11039084B2 (en) 2017-11-14 2021-06-15 VergeSense, Inc. Method for commissioning a network of optical sensors across a floor space
WO2019196475A1 (zh) * 2018-04-09 2019-10-17 Huawei Technologies Co., Ltd. Method and device for acquiring globally matched patches
CN108921907B (zh) * 2018-07-26 2022-03-08 上海慧子视听科技有限公司 Method, device, equipment, and storage medium for scoring motion tests
CN111383340B (zh) * 2018-12-28 2023-10-17 成都皓图智能科技有限责任公司 Background filtering method, device, and system based on 3D images
AU2020241843B2 (en) 2019-03-15 2023-09-28 VergeSense, Inc. Arrival detection for battery-powered optical sensors
CN110120090B (zh) * 2019-04-01 2020-09-25 贝壳找房(北京)科技有限公司 Three-dimensional panoramic model construction method, device, and readable storage medium
US11620808B2 (en) 2019-09-25 2023-04-04 VergeSense, Inc. Method for detecting human occupancy and activity in a work area
CN110751719B (zh) * 2019-10-22 2023-09-12 深圳瀚维智能医疗科技有限公司 Breast three-dimensional point cloud reconstruction method, device, storage medium, and computer equipment
CN110956219B (zh) * 2019-12-09 2023-11-14 爱芯元智半导体(宁波)有限公司 Video data processing method, device, and electronic system
CN111445574B (zh) * 2020-03-26 2023-07-07 众趣(北京)科技有限公司 Video surveillance equipment deployment method, device, and system
CN111640181A (zh) * 2020-05-14 2020-09-08 佳都新太科技股份有限公司 Interactive video projection method, device, equipment, and storage medium
CN111464795B (zh) * 2020-05-22 2022-07-26 Lenovo (Beijing) Co., Ltd. Method, device, and electronic equipment for implementing surveillance-equipment configuration
CN111860493B (zh) * 2020-06-12 2024-02-09 北京图森智途科技有限公司 Target detection method and device based on point cloud data
CN111986086B (zh) * 2020-08-27 2021-11-09 贝壳找房(北京)科技有限公司 Three-dimensional image optimized generation method and system
CN112040128A (zh) * 2020-09-03 2020-12-04 Zhejiang Dahua Technology Co., Ltd. Method and device for determining working parameters, storage medium, and electronic device
US11232595B1 (en) 2020-09-08 2022-01-25 Weta Digital Limited Three-dimensional assembly for motion capture calibration
US11282233B1 (en) * 2020-09-08 2022-03-22 Weta Digital Limited Motion capture calibration
CN112150558B (zh) * 2020-09-15 2024-04-12 阿波罗智联(北京)科技有限公司 Method and device for acquiring the three-dimensional position of an obstacle for a roadside computing device
CN112288816B (zh) * 2020-11-16 2024-05-17 Guangdong OPPO Mobile Telecommunications Co., Ltd. Pose optimization method, pose optimization device, storage medium, and electronic equipment
CN112753047B (zh) * 2020-12-30 2022-08-26 Huawei Technologies Co., Ltd. Hardware-in-the-loop calibration and target-point setting methods and systems for cameras, and related equipment
CN112950667B (zh) * 2021-02-10 2023-12-22 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Video annotation method, device, equipment, and computer-readable storage medium
CN113124883B (zh) * 2021-03-01 2023-03-28 浙江国自机器人技术股份有限公司 Offline marking method based on a 3D panoramic camera
CN112907736B (zh) * 2021-03-11 2022-07-15 Tsinghua University Implicit-field-based three-dimensional crowd reconstruction method and device for gigapixel scenes
CN113160053B (zh) * 2021-04-01 2022-06-14 South China University of Technology Underwater video image restoration and stitching method based on pose information
CN113205579B (zh) * 2021-04-28 2023-04-18 Huazhong University of Science and Technology Three-dimensional reconstruction method, device, equipment, and storage medium
CN113724379B (zh) * 2021-07-08 2022-06-17 Aerospace Information Research Institute, Chinese Academy of Sciences Three-dimensional reconstruction method and device fusing images and laser point clouds
CN113793383A (zh) * 2021-08-24 2021-12-14 江西省智能产业技术创新研究院 3D vision recognition pick-and-place system and method
WO2023091129A1 (en) * 2021-11-17 2023-05-25 Innopeak Technology, Inc. Plane-based camera localization
CN116962649B (zh) * 2023-09-19 2024-01-09 安徽送变电工程有限公司 Image surveillance adjustment system and line-construction model
KR102648852B1 (ko) * 2024-01-17 2024-03-18 아이티인 주식회사 Traffic enforcement system capable of switching the enforcement direction to the front or the rear

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198488A (zh) * 2013-04-16 2013-07-10 北京天睿空间科技有限公司 Real-time fast pose estimation method for PTZ surveillance cameras
CN103400409A (zh) * 2013-08-27 2013-11-20 Central China Normal University Coverage 3D visualization method based on fast camera pose estimation
CN103824278A (zh) * 2013-12-10 2014-05-28 Tsinghua University Calibration method and system for surveillance cameras
CN104050712A (zh) * 2013-03-15 2014-09-17 Sony Corporation Method and device for building three-dimensional models
CN105678748A (zh) * 2015-12-30 2016-06-15 Tsinghua University Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860301B2 (en) * 2005-02-11 2010-12-28 Macdonald Dettwiler And Associates Inc. 3D imaging system
US7944454B2 (en) * 2005-09-07 2011-05-17 Fuji Xerox Co., Ltd. System and method for user monitoring interface of 3-D video streams from multiple cameras
US9305401B1 (en) * 2007-06-06 2016-04-05 Cognitech, Inc. Real-time 3-D video-security
CN101398937B (zh) * 2008-10-29 2011-05-18 Beihang University Three-dimensional reconstruction method based on a set of unordered photographs of the same scene
CN102103747B (zh) * 2009-12-16 2012-09-05 Institute of Electronics, Chinese Academy of Sciences Surveillance camera extrinsic parameter calibration method using the height of a reference object
EP2375376B1 (en) * 2010-03-26 2013-09-11 Alcatel Lucent Method and arrangement for multi-camera calibration
US9036001B2 (en) * 2010-12-16 2015-05-19 Massachusetts Institute Of Technology Imaging system for immersive surveillance
US9191650B2 (en) * 2011-06-20 2015-11-17 National Chiao Tung University Video object localization method using multiple cameras
CA2819956C (en) * 2013-07-02 2022-07-12 Guy Martin High accuracy camera modelling and calibration method
CN103646391B (zh) * 2013-09-30 2016-09-28 Zhejiang University Real-time camera tracking method for dynamically changing scenes
CN103834278B (zh) * 2014-03-04 2016-05-25 芜湖市艾德森自动化设备有限公司 Safe, environmentally friendly, wear-resistant UV-curable coating and preparation method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050712A (zh) * 2013-03-15 2014-09-17 Sony Corporation Method and device for building three-dimensional models
CN103198488A (zh) * 2013-04-16 2013-07-10 北京天睿空间科技有限公司 Real-time fast pose estimation method for PTZ surveillance cameras
CN103400409A (zh) * 2013-08-27 2013-11-20 Central China Normal University Coverage 3D visualization method based on fast camera pose estimation
CN103824278A (zh) * 2013-12-10 2014-05-28 Tsinghua University Calibration method and system for surveillance cameras
CN105678748A (zh) * 2015-12-30 2016-06-15 Tsinghua University Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899174A (zh) * 2020-07-29 2020-11-06 北京天睿空间科技股份有限公司 Single-camera rotational stitching method based on deep learning

Also Published As

Publication number Publication date
CN105678748B (zh) 2019-01-15
US10607369B2 (en) 2020-03-31
CN105678748A (zh) 2016-06-15
US20190221003A1 (en) 2019-07-18

Similar Documents

Publication Publication Date Title
WO2017114508A1 (zh) Interactive calibration method and device based on three-dimensional reconstruction in a three-dimensional surveillance system
US11165959B2 (en) Connecting and using building data acquired from mobile devices
US20170078593A1 (en) 3d spherical image system
Zollmann et al. Augmented reality for construction site monitoring and documentation
EP3825954A1 (en) Photographing method and device and unmanned aerial vehicle
US9578310B2 (en) Automatic scene calibration
WO2022120567A1 (zh) Automated calibration system based on visual guidance
CN110458897B (zh) Multi-camera automatic calibration method and system, and surveillance method and system
US11557083B2 (en) Photography-based 3D modeling system and method, and automatic 3D modeling apparatus and method
US20170305546A1 (en) Autonomous navigation method and system, and map modeling method and system
CN113196208A (zh) Automated control of image acquisition by using acquisition-device sensors
US9361731B2 (en) Method and apparatus for displaying video on 3D map
CN104715479A (zh) Scene reproduction detection method based on augmented virtuality
WO2023093217A1 (zh) Data annotation method and device, computer equipment, storage medium, and program
Côté et al. Live mobile panoramic high accuracy augmented reality for engineering and construction
WO2022088881A1 (en) Method, apparatus and system for generating a three-dimensional model of a scene
CA3069813C (en) Capturing, connecting and using building interior data from mobile devices
CN113496503B (zh) Point cloud data generation and real-time display method, device, equipment, and medium
US20180350216A1 (en) Generating Representations of Interior Space
TWI502271B (zh) Control method and electronic device
Li et al. Fish-eye distortion correction based on midpoint circle algorithm
EP3882846B1 (en) Method and device for collecting images of a scene for generating virtual reality data
Jung et al. Human height analysis using multiple uncalibrated cameras
CA3102860C (en) Photography-based 3d modeling system and method, and automatic 3d modeling apparatus and method
WO2022141721A1 (zh) Multimodal unsupervised pedestrian pixel-level semantic annotation method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16881297

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16881297

Country of ref document: EP

Kind code of ref document: A1