CN106780312B - Image space and geographic scene automatic mapping method based on SIFT matching - Google Patents

Image space and geographic scene automatic mapping method based on SIFT matching

Info

Publication number
CN106780312B
Authority
CN
China
Prior art keywords
image
point
geographic
scene
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611234924.3A
Other languages
Chinese (zh)
Other versions
CN106780312A (en)
Inventor
林冰仙
石林
徐长禄
周良辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Normal University
Original Assignee
Nanjing Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Normal University
Priority to CN201611234924.3A
Publication of CN106780312A
Application granted
Publication of CN106780312B
Legal status: Active

Classifications

    • G06T3/08
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The invention discloses an automatic mapping method between an image space and a geographic scene based on SIFT matching, which comprises the following steps: collecting the data needed to calculate the homography matrix between the camera's initial-position image space and the geographic scene, calculating the homography matrix H1 between the initial-position image space and the geographic scene, performing SIFT matching, projecting the matching points into the geographic scene, and calculating the homography matrix H2 between the next image and the geographic scene. The method only needs the control point coordinates in the geographic scene corresponding to the camera's initial position and the homography matrix computed for that initial position, and is therefore simple and convenient.

Description

Image space and geographic scene automatic mapping method based on SIFT matching
Technical Field
The invention relates to video GIS, video surveillance, computer graphics and related technologies, and in particular to an automatic mapping method, based on SIFT matching, between the image space and the geographic scene while a pan-tilt surveillance camera rotates.
Background
With the development of the economy, science and technology, requirements for security and technical prevention have kept rising in recent years. With the introduction of various new security concepts, digital video surveillance systems have been widely deployed in the security work of many industries and departments, and since the late 1980s video surveillance technology has played an important role in the public security field (The current situation and development of video monitoring technology [J]. China Market, 2015, 11: 201-202).
In existing video surveillance systems the cameras are independent of one another and point in different directions, so the monitoring range of a single camera is small. Moreover, the tic-tac-toe ("井"-shaped) grid of windows commonly used in video surveillance systems cannot provide a wide-area video view, so no unified macroscopic field of view can be presented to the user; this makes the monitored scene harder to understand and limits the analysis and application of surveillance video.
Video GIS combines video surveillance data with geographic data and brings the cameras of a video surveillance system into a map for unified management and display. It can provide users with a unified macroscopic view and broadens the application of surveillance video. A video GIS must first solve the problem of projecting video images onto the geographic scene.
The projection problem between a video image and a geographic scene can be solved by calibrating the camera and solving for its intrinsic and extrinsic parameters. Current camera calibration techniques mainly comprise traditional calibration methods and self-calibration methods (Buckingqian, Huzang. Research and development of camera self-calibration methods [J]. Acta Automatica Sinica, 2003, 01: 110-). Traditional camera calibration requires a specific calibration block and often has to be repeated in actual use, which is uneconomical and time-consuming. Self-calibration needs no calibration block, only a series of images from different viewing angles; however, it requires the same feature points to appear across the image series and their correspondences to be established between images, so it demands extremely good image recognition and is difficult to realize.
A homography matrix can represent the mutual mapping relationship between two planes (Liu Jiu, Wang Meizhen, Zuiyan, Luyue. Research progress in single-image geometric measurement [J]. Geomatics and Information Science of Wuhan University, 2011, 08: 941-). The areas monitored by video surveillance systems are generally public places such as airports, customs checkpoints, squares and railway stations, which can be treated as planar geographic scenes, so the projection between a video image and a planar geographic scene can be realized with a homography matrix.
In summary, while video surveillance systems play an ever more important role in public security, current systems cannot provide users with a unified macroscopic view, and the associated use and unified scheduling of large numbers of surveillance cameras are difficult to realize, which limits the application of surveillance video. Video GIS breaks through the limitation of purely video-based surveillance by bringing surveillance video into a large-scale geographic scene, enabling associated use and unified scheduling. How to project video images onto the geographic scene efficiently is therefore the first problem any video GIS system must solve.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to overcome the defects of the prior art and provides an automatic mapping method between an image space and a geographic scene based on SIFT (Scale-Invariant Feature Transform) matching. It involves SIFT matching, perspective geometry, video GIS and related content.
The technical scheme is as follows: the invention relates to an automatic mapping method of an image space and a geographic scene based on SIFT matching, which specifically comprises the following steps:
(1) data for calculating a homography matrix for an initial position of a camera is acquired:
collecting control point coordinates (X, Y) in a geographic scene corresponding to the initial position of a camera, and collecting at least 4 points; measuring image coordinates (r, c) corresponding to each control point on the camera initial position image, wherein r is the row number of the image point, and c is the column number of the image point;
(2) substituting the data acquired in step (1) into the homography calculation formula (1), and calculating the homography matrix H1 between the video image and the geographic scene when the pan-tilt camera is at its initial position:
(3) SIFT matching:
finding the corresponding points p1 and p2 between the image shot by the pan-tilt camera at its initial position and the next frame image using the SIFT matching algorithm, together with their image coordinates p1(r1, c1) and p2(r2, c2); the corresponding point p1 lies on the first image and p2 on the second image;
(4) the matching points are projected to the geographical scene:
using the homography matrix H1 and the SIFT matching result, the coordinates p1(r1, c1) of the matching points in the first image are projected onto the geographic scene, yielding the geographic coordinates P(X, Y) of the ground point corresponding to the image points p1 and p2;
(5) calculating a homography matrix H2 between the next image and the geographic scene:
using the image coordinates p2(r2, c2) of the matching points on the second image and their corresponding geographic coordinates P(X, Y), the homography matrix H2 between the second image and the geographic scene is calculated; once H2 is obtained, any image point p(r, c) on the second image can be projected into geographic space with the following projection formula:
$$X = \frac{H2_{11}\, r + H2_{12}\, c + H2_{13}}{H2_{31}\, r + H2_{32}\, c + H2_{33}}$$
$$Y = \frac{H2_{21}\, r + H2_{22}\, c + H2_{23}}{H2_{31}\, r + H2_{32}\, c + H2_{33}}$$
where H2_ij denotes the element in row i and column j of the H2 matrix.
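For readers who want to experiment with the five steps above, the following sketch uses OpenCV and NumPy. It is an illustration of the described workflow under the editor's assumptions, not the patented implementation; all function names, variable names and parameter choices (for example the 0.75 ratio threshold and the use of RANSAC) are the editor's own.

```python
import cv2
import numpy as np


def homography_from_control_points(pts_img, pts_geo):
    """Steps (1)-(2): estimate H1 from at least 4 image (r, c) <-> geographic (X, Y) pairs."""
    src = np.asarray(pts_img, dtype=np.float64).reshape(-1, 1, 2)
    dst = np.asarray(pts_geo, dtype=np.float64).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst)   # 3x3 matrix, defined up to scale
    return H


def sift_matches(img1, img2, ratio=0.75):
    """Step (3): SIFT keypoints and descriptor matching with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    p1, p2 = [], []
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:          # keep distinctive matches only
            x1, y1 = k1[m.queryIdx].pt               # OpenCV gives (x, y) = (column, row)
            x2, y2 = k2[m.trainIdx].pt
            p1.append((y1, x1))                      # convert to the patent's (r, c) order
            p2.append((y2, x2))
    return np.asarray(p1, dtype=np.float64), np.asarray(p2, dtype=np.float64)


def next_homography(H1, p1, p2):
    """Steps (4)-(5): project first-image matches through H1, then fit H2 for the second image."""
    geo = cv2.perspectiveTransform(p1.reshape(-1, 1, 2), H1).reshape(-1, 2)
    H2, _ = cv2.findHomography(p2.reshape(-1, 1, 2), geo.reshape(-1, 1, 2), cv2.RANSAC)
    return H2


def project_point(H, r, c):
    """Projection formula: map one image point (r, c) to geographic (X, Y) through H."""
    x, y, w = H @ np.array([r, c, 1.0])
    return x / w, y / w
```

With H1 estimated once from the initial-position control points, next_homography yields an H2 for each new frame as the camera rotates, and project_point implements the projection formula above. The RANSAC option in the H2 fit is an added robustness measure against SIFT mismatches and is not prescribed by the patent.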
Beneficial effects: the invention only needs to collect the ground control point coordinates in the geographic scene corresponding to the surveillance camera at its initial position and to calculate the homography matrix for that initial position; the homography matrix between the video image and the geographic scene can then be calculated automatically while the pan-tilt surveillance camera rotates, realizing the projection of video images onto the geographic scene.
The method is simple, convenient and automatic in calculation, and realizes the projection of the video image to the geographic scene.
Drawings
FIG. 1 is a flow chart in an embodiment of the present invention;
fig. 2 is a SIFT matching result diagram in the embodiment of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below, but the scope of the present invention is not limited to the embodiments.
To facilitate a clearer understanding of the present invention, the following explanations are given:
Homography matrix: in computer vision, the homography of a plane is defined as the projective mapping from one plane to another; this mapping can be represented by a 3 × 3 matrix, which is called the homography matrix of the two planes. Once a homography H has been calculated between two images, applying this relationship transfers all points of one view into the other.
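As a standard formulation (the editor's notation, not quoted from the patent), this plane-to-plane mapping can be written in homogeneous coordinates as

$$
s\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
=
\begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}
\begin{bmatrix} r \\ c \\ 1 \end{bmatrix},
\qquad s \neq 0,
$$

where (r, c) is a point in the first plane and (X, Y) its image in the second plane. Because H is only defined up to the scale factor s, it has 8 degrees of freedom, which is why at least 4 point correspondences (no three collinear) are needed to determine it; this is consistent with the requirement of at least 4 control points in step (1).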
SIFT (Scale-Invariant Feature Transform) matching refers to a computer vision algorithm proposed by David Lowe in 1999 and refined in 2004. The SIFT feature matching algorithm can handle matching between two images under translation, rotation and affine transformation and has strong matching capability. The algorithm mainly comprises two stages: the first stage is the generation of SIFT features, which are based on local interest points on the object and are independent of the scale and rotation of the image; the second stage is the matching of SIFT feature vectors, in which SIFT features with similar feature vectors are matched together.
Corresponding points: because the attitude of the pan-tilt camera may change, the image points of the same spatial target point in two captured images have different positions (image coordinates), yet they correspond to the same spatial target; such points are called corresponding points.
The automatic mapping method of the image space and the geographic scene in the rotating process of the pan-tilt monitoring camera based on SIFT matching comprises the following steps:
the method comprises the steps of collecting control point coordinates of a monitoring camera corresponding to a geographical scene at an initial position, automatically calculating homography matrixes between images shot by a holder monitoring camera at different moments and the geographical scene in the rotation process based on an SIFT matching algorithm, realizing projection of video images to the geographical scene, and improving the projection efficiency of the video images to the geographical scene in a video GIS system.
Example 1:
As shown in Fig. 1, the automatic image space and geographic scene mapping method based on SIFT matching specifically includes the following steps:
Step 1: collect the data for calculating the homography matrix between the camera's initial-position image space and the geographic scene. A Trimble VX Spatial Station surveying instrument is used to measure the coordinates (X, Y) of the geographic scene control points corresponding to the initial position of the pan-tilt surveillance camera used in this example; the image coordinates (r, c) of the image points corresponding to these geographic control points in the image shot by the pan-tilt camera at its initial position are measured with the image processing software Photoshop CS6, where r is the image row number and c is the image column number. Ten sets of corresponding points were collected in this Embodiment 1; the specific data are listed in Table 1.
table 1 ten sets of corresponding points in example 1
Step 2: compute the homography matrix H1 between the camera's initial-position image space and the geographic scene. The data collected in Step 1 are substituted into the homography calculation formula, yielding the homography matrix H1 between the image shot by the pan-tilt camera at its initial position and the geographic scene.
Step 3: SIFT matching. Using the SIFT matching algorithm from computer vision, the corresponding point pairs between the image shot by the pan-tilt camera at its initial position and the next image are found, together with the image coordinates (r, c) of the corresponding points, where r is the row number of the image point and c is the column number. Taking P1 and P2 as one corresponding point pair, P1(211, 423) and P2(217, 276) are obtained from the matching results. The matching result is shown in Fig. 2.
Step 4: project the matching points onto the geographic scene. Using the homography matrix H1 and the SIFT matching result, all matching points P1(r1, c1) on the first image are projected onto the geographic scene, yielding the geographic coordinates P(X, Y) of the corresponding ground points.
Step 5: using the image coordinates P2(r2, c2) of the matching points on the second image and their corresponding geographic coordinates P(X, Y) (obtained from the projection onto the geographic scene in Step 4), the homography matrix H2 between the second image and the geographic scene is calculated.
After H2 is obtained, the image point p(189, 321) on the second image is substituted into the projection formula given above, and its coordinates projected into geographic space are P(3554386.7, 679673.417).

Claims (1)

1. An image space and geographic scene automatic mapping method based on SIFT matching is characterized in that: the method specifically comprises the following steps:
(1) data for calculating a homography matrix for an initial position of a camera is acquired:
collecting control point coordinates (X, Y) in a geographic scene corresponding to the initial position of a camera, and collecting at least 4 points; measuring image coordinates (r, c) corresponding to each control point on the camera initial position image, wherein r is the row number of the image point, and c is the column number of the image point;
(2) substituting the data acquired in the step (1) into a calculation formula (1) of a homography matrix, and calculating a homography matrix H1 between the video image and the geographic scene when the pan-tilt camera is at the initial position:
$$s\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H1\begin{bmatrix} r \\ c \\ 1 \end{bmatrix}, \qquad s \neq 0 \qquad (1)$$
(3) SIFT matching:
finding the corresponding points p1 and p2 between the image shot by the pan-tilt camera at its initial position and the next frame image using the SIFT matching algorithm, together with their image coordinates p1(r1, c1) and p2(r2, c2); the corresponding point p1 lies on the first image and p2 on the second image;
(4) the matching points are projected to the geographical scene:
using the homography matrix H1 and the SIFT matching result, the coordinates p1(r1, c1) of the matching points in the first image are projected onto the geographic scene, yielding the geographic coordinates P(X, Y) of the ground point corresponding to the image points p1 and p2;
(5) calculating a homography matrix H2 between the next image and the geographic scene:
using the image coordinates p2(r2, c2) of the matching points on the second image and their corresponding geographic coordinates P(X, Y), the homography matrix H2 between the second image and the geographic scene is calculated; once H2 is obtained, any image point p(k, t) on the second image can be projected into geographic space with the following projection formula:
$$X = \frac{H2_{11}\, k + H2_{12}\, t + H2_{13}}{H2_{31}\, k + H2_{32}\, t + H2_{33}}$$
$$Y = \frac{H2_{21}\, k + H2_{22}\, t + H2_{23}}{H2_{31}\, k + H2_{32}\, t + H2_{33}}$$
where H2_ij denotes the element in row i and column j of the H2 matrix.
CN201611234924.3A 2016-12-28 2016-12-28 Image space and geographic scene automatic mapping method based on SIFT matching Active CN106780312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611234924.3A CN106780312B (en) 2016-12-28 2016-12-28 Image space and geographic scene automatic mapping method based on SIFT matching


Publications (2)

Publication Number Publication Date
CN106780312A CN106780312A (en) 2017-05-31
CN106780312B (en) 2020-02-18

Family

ID=58924497

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611234924.3A Active CN106780312B (en) 2016-12-28 2016-12-28 Image space and geographic scene automatic mapping method based on SIFT matching

Country Status (1)

Country Link
CN (1) CN106780312B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120301014A1 (en) * 2011-05-27 2012-11-29 Microsoft Corporation Learning to rank local interest points

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231750A (en) * 2008-02-21 2008-07-30 南京航空航天大学 Calibrating method of binocular three-dimensional measuring system
CN101998136A (en) * 2009-08-18 2011-03-30 华为技术有限公司 Homography matrix acquisition method as well as image pickup equipment calibrating method and device
CN101839722A (en) * 2010-05-06 2010-09-22 南京航空航天大学 Method for automatically recognizing target at medium and low altitudes and positioning carrier with high accuracy
CN104103051A (en) * 2013-04-03 2014-10-15 华为技术有限公司 Image splicing method and device
CN105931186A (en) * 2016-04-26 2016-09-07 电子科技大学 Panoramic video mosaicing system and method based on camera automatic calibration and color correction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Augmented Reality Camera Tracking with Homographies; Simon J.D. Prince et al.; IEEE Computer Graphics and Applications; 2002-12-31; pp. 39-45 *
Research on multi-camera object tracking coordinated with geographic scenes; Zhang Xingguo; China Doctoral Dissertations Full-text Database, Basic Sciences; 2014-11-15; Vol. 2014, No. 11; p. A008-5 *
Automatic mosaicking of surveillance images based on SIFT feature matching; Zhang Chaowei et al.; Journal of Computer Applications; 2008-01-31; Vol. 28, No. 1; pp. 191-194 *

Also Published As

Publication number Publication date
CN106780312A (en) 2017-05-31

Similar Documents

Publication Publication Date Title
CN109978755B (en) Panoramic image synthesis method, device, equipment and storage medium
WO2021196294A1 (en) Cross-video person location tracking method and system, and device
CN111080724A (en) Infrared and visible light fusion method
WO2019042232A1 (en) Fast and robust multimodal remote sensing image matching method and system
Chen et al. Indoor camera pose estimation via style‐transfer 3D models
WO2021017882A1 (en) Image coordinate system conversion method and apparatus, device and storage medium
CN110009561A (en) A kind of monitor video target is mapped to the method and system of three-dimensional geographical model of place
CN113192646B (en) Target detection model construction method and device for monitoring distance between different targets
Wang et al. Single view metrology from scene constraints
WO2021136386A1 (en) Data processing method, terminal, and server
CN110400315A (en) A kind of defect inspection method, apparatus and system
CN112489099B (en) Point cloud registration method and device, storage medium and electronic equipment
CN110782498B (en) Rapid universal calibration method for visual sensing network
WO2020211427A1 (en) Segmentation and recognition method, system, and storage medium based on scanning point cloud data
CN103353941B (en) Natural marker registration method based on viewpoint classification
CN112946679B (en) Unmanned aerial vehicle mapping jelly effect detection method and system based on artificial intelligence
Andaló et al. Efficient height measurements in single images based on the detection of vanishing points
Xue et al. A fast visual map building method using video stream for visual-based indoor localization
CN115830135A (en) Image processing method and device and electronic equipment
CN111612895A (en) Leaf-shielding-resistant CIM real-time imaging method for detecting abnormal parking of shared bicycle
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN103914818A (en) Omni-directional image sparse reconstruction method based on omni-directional total variation
CN114693782A (en) Method and device for determining conversion relation between three-dimensional scene model coordinate system and physical coordinate system
CN106780312B (en) Image space and geographic scene automatic mapping method based on SIFT matching
CN116823966A (en) Internal reference calibration method and device for camera, computer equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant