CN110245199A - A fusion method for high-tilt-angle video and 2D maps - Google Patents


Publication number
CN110245199A
Authority
CN
China
Prior art keywords
video
background image
dynamic object
map
static
Prior art date
Legal status (assumption, not a legal conclusion)
Granted
Application number
CN201910350808.5A
Other languages
Chinese (zh)
Other versions
CN110245199B (en)
Inventor
Zhu Xuejian
Liu Xuejun
Ye Yuanzhi
Liu Yang
Wang Xiaohui
Current Assignee (the listed assignees may be inaccurate)
Zhejiang Natural Resources Monitoring Center
Nanjing Normal University
Original Assignee
Zhejiang Natural Resources Monitoring Center
Nanjing Normal University
Priority date (assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Zhejiang Natural Resources Monitoring Center and Nanjing Normal University
Priority to CN201910350808.5A
Publication of CN110245199A
Application granted
Publication of CN110245199B
Legal status: Expired - Fee Related

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29: Geographical information databases
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00: Maps; Plans; Charts; Diagrams, e.g. route diagrams
    • G09B29/003: Maps
    • G09B29/006: Representation of non-cartographic information on maps, e.g. population distribution, wind direction, radiation levels, air and sea routes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Remote Sensing (AREA)
  • Ecology (AREA)
  • Mathematical Physics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a fusion method for high-tilt-angle video and a 2D map. A static video background image is obtained by video background modeling (S1). Foreground dynamic targets are extracted by combining the background subtraction method with the three-frame difference method (S2). The static video background image obtained in S1 is segmented to obtain the road-surface region of the image, and a homography-based geometric correction algorithm for high-tilt-angle video images is applied to the video background image, giving the corrected background image (S3). Based on the interior and exterior camera parameters, a mutual mapping model between the surveillance video and 2D geospatial data is established (S4). Based on the mutual mapping model established in S4, the corrected static video background image and the extracted foreground dynamic targets are mapped onto the two-dimensional map, completing the integrated presentation of the surveillance video and the two-dimensional map (S5).

Description

A fusion method for high-tilt-angle video and 2D maps
Technical field
The invention belongs to the field of integrating surveillance video with 2D maps, and in particular relates to a fusion method for high-tilt-angle video and 2D maps.
Background art
The integration of video and GIS is a new paradigm for expressing geographic scenes. Regarding the integration of video with 2D GIS: in 1978, Lippman of the Massachusetts Institute of Technology (Lippman A. Movie maps: an application of the optical videodisc to computer graphics [J]. SIGGRAPH '80, 1980, 14(3): 32-42.) integrated video with GIS for the first time and developed a dynamic, user-interactive hypermedia map. Subsequently, research on integrating video with GIS gradually deepened, drew increasing attention from researchers, and produced a large body of related work. Berry et al. (Berry J K. Capture "Where" and "When" on video-based GIS [J]. GEO WORLD, 2000, 13: 26-27.) proposed the framework of the video map and designed concepts and application schemes for field data acquisition and processing. Lewis et al. (Lewis P, Fotheringham S, Winstanley A. Spatial video and GIS. International Journal of Geographical Information Science, 2011, 25(5): 697-716.) defined a pyramid data structure for a GIS-constrained geospatial video data model, applicable to 2D GIS analysis and visualization, and verified its feasibility experimentally. Han Zhigang et al. (Han Zhigang, Zeng Ming, Kong Yunfeng. Design of a campus geographic video surveillance WebGIS system [J]. Surveying and Mapping, 2012, 37(1): 195-197.) and Zhang Di et al. (Zhang Di. Design of a GIS-based video surveillance and response system for Kaifeng public security [D]. Henan University, 2011.) each realized a loose integration of a video surveillance system with GIS, representing a camera as a point on a two-dimensional map or representing the camera's field of view as a sector, and statically invoking video files for playback via hyperlinks. Kong Yunfeng et al. (Kong Yunfeng. Geographic video data model design and network video GIS implementation [J]. Geomatics and Information Science of Wuhan University, 2010, 35(2): 133-137.) established mapping relations among geographic location (XY), highway mileage (M), and video time or frame (T/F). Zhang Xingguo et al. (Zhang Xingguo, Liu Xuejun, Wang Sining, et al. Mutual mapping between surveillance video and 2D geospatial data [J]. Geomatics and Information Science of Wuhan University, 2015, 40(8): 1130-1136.) studied the mutual mapping model between surveillance video and geospatial data and proposed a semi-automatic mutual mapping method based on feature matching.
Geometric correction of images is widely applied in remote sensing; it is one of the important means of reducing the difference between a remote sensing image and the true shape of the ground. In photogrammetry, usually only the geometric correction of slightly tilted photographs (tilt angle within 2°) is discussed. For slightly tilted images, existing algorithms mainly include methods that solve a certain mathematical model from control points, such as polynomial methods, and collinearity-equation methods based on a digital elevation model and the imaging equations (Sun Jiabing. Principles, Methods and Applications of Remote Sensing [M]. Surveying and Mapping Press, 1997.). For the geometric correction of oblique images (tilt angle between 2° and 90°), some scholars have also carried out research. Cheng Xia et al. (Cheng Xia, Zhu Fangwen, Yuan Zhengpeng. Geometric correction of oblique images based on homography [J]. Journal of Shanghai University: Natural Science Edition, 2005, 11(5): 481-484.) proposed, for oblique images taken by handheld cameras, a geometric correction method based on the homography principle of computer vision. Xu Qingyang (Xu Qingyang. Research on geometric correction of near-space high-tilt-angle remote sensing images [D]. Harbin Institute of Technology, 2009.) studied in depth the difficult problem of correcting near-space high-tilt-angle remote sensing images, proposed a piecewise polynomial correction model, removed erroneous control points by iteration combined with a uniform-distribution algorithm, optimized the process by which the SIFT interest operator automatically selects control points, and realized fully automatic geometric correction of images. Zhu Tiewen et al. (Zhu Tiewen, Wang Yong. A method of adding geographic coordinates to large-tilt aerial photographs [J]. Hydrographic Surveying and Charting, 2010, 30(3): 23-26.) proposed, for large-tilt aerial photographs, a geometric correction method based on a reference image and an improved six-parameter affine transformation model.
As can be seen from the above analysis, the integration of surveillance video and two-dimensional maps has drawn the attention of scholars; related theoretical and methodological research has gradually become a hot spot in the relevant academic fields and has produced corresponding results. The main remaining problems are as follows:
(1) In the integration of video and GIS, existing research either treats video data as an attribute of spatial data and statically invokes video files via hyperlinks, lacking spatial analysis of the video, or simply overlays the video on the map, without making full use of the rich information the video contains. Moreover, existing research only integrates the surveillance video of each camera's coverage area with the map, pays little attention to inferring what happens in monitoring blind zones, and thus cannot perceive the spatial patterns of dynamic targets in those blind zones.
(2) In image geometric correction, photogrammetry generally discusses only the correction of slightly tilted images, whereas current surveillance cameras, to cover a large monitoring range, generally have large tilt angles. Geometric correction methods adapted to slightly tilted images therefore cannot simply be applied to highly tilted surveillance video images. In the results of existing large-tilt geometric correction methods, dynamic targets suffer large geometric deformation and distortion, and the methods struggle to meet the demands of real-time geometric correction of surveillance video images.
Summary of the invention
To address the large geometric deformation and distortion of dynamic targets in the correction results of existing large-tilt image geometric correction methods, as well as the low efficiency of those correction algorithms, the invention proposes a fusion method for high-tilt-angle video and 2D maps.
The invention discloses a fusion method for high-tilt-angle video and a 2D map, comprising the following steps:
S1: establishing a mutual mapping model between the surveillance video and the 2D map according to the camera parameters;
S2: according to the mutual mapping model, mapping the orthographic static video background image of the surveillance video and the foreground dynamic targets of the surveillance video onto the 2D map, completing the integrated presentation of the surveillance video and the 2D map.
Further, obtaining the orthographic static video background image of the surveillance video comprises the following steps:
obtaining the static video background image of the surveillance video by video background modeling;
performing geometric correction on the static video background image to obtain its corresponding orthographic image.
Further, extracting the foreground dynamic targets of the surveillance video comprises:
performing an AND operation between the foreground dynamic-target binary map obtained by the three-frame difference method and the foreground dynamic-target binary map obtained by the background subtraction method, to obtain the final foreground dynamic targets;
obtaining the positions of the foreground dynamic targets by connected-component analysis of the foreground dynamic targets.
Further, obtaining the foreground dynamic-target binary map comprises:
obtaining the static video background image of the surveillance video by video background modeling;
extracting preliminary foreground dynamic targets using the three-frame difference method and the background subtraction method respectively;
differencing each frame of the surveillance video against the static background and the adjacent frames to obtain, for each pixel, the difference values g1 and g2;
if g1 > k1 or g2 > k2, where k1 and k2 are adaptive thresholds derived from the average gray value of the static video background image, labeling the pixel 1, and labeling other pixels 0, thereby obtaining the video foreground dynamic-target binary map.
Further, before performing geometric correction on the static video background image, the method comprises:
performing superpixel segmentation on the static video background image;
constructing, from prior knowledge of ground versus non-ground, a decision tree whose classification basis is the image features extracted from the segmented static video background image;
classifying the segmented static video background image into level ground and non-ground using the decision tree, obtaining the ground portion and the non-ground portion of the static video background image.
Performing geometric correction on the static video background image comprises: correcting the ground portion of the static video background image into an orthographic image using a homography matrix;
if the orthographic image contains hole points, finding each hole point's corresponding point on the static video background image, computing the gray value of that corresponding point by bilinear interpolation, assigning it to the hole point, and finally obtaining the corrected static video background image.
Further, mapping the orthographic static video background image of the surveillance video onto the 2D map comprises:
according to the mutual mapping model established in S1, obtaining the viewshed trapezoid of the surveillance video in geographic space;
according to the coordinates (X_i, Y_i), i ∈ [1,4], of the four corner points of the viewshed trapezoid, calculating the side length L_i, i ∈ [1,4], of each side of the trapezoid and the corresponding distance l_i, i ∈ [1,4], on the map: l_i = s × L_i, where s is the scale of the 2D map;
according to the length PL_i of each side of the orthographic static video background image and its map length l_i, calculating the scale factor ε_i = l_i / PL_i;
based on the scale factor, scaling the orthographic static video background image, then rotating and translating it according to the camera center point coordinates and the camera's rotation and pitch angles, thereby mapping the static video background image onto the correct position of the 2D map.
Mapping the foreground dynamic targets of the surveillance video onto the 2D map comprises:
calculating the center coordinate Centre of each foreground dynamic target in the surveillance video:
Centre = (1/M) Σ_{i=1}^{M} (x_i, y_i),
where M is the number of pixels of the foreground dynamic target and (x_i, y_i) are its pixel coordinates;
scaling each foreground dynamic target in equal proportion according to the scale factor ε_i;
converting the center coordinate of each foreground dynamic target into 2D geographic coordinates according to the mutual mapping model between the surveillance video and the 2D map;
mapping the foreground dynamic targets of the surveillance video onto the 2D map according to their center coordinates and directions of motion, and updating the positions of the foreground dynamic targets in real time.
Further, the direction of motion of a foreground dynamic target is determined by the rotation angle of the camera.
Further, the mutual mapping model between the surveillance video and the 2D map comprises a mapping model from video image space to geographic space and a mapping model from geographic space to video image space.
The mapping model from video image space to geographic space is expressed as:
(X_G, Y_G, Z_G)^T = (X_C, Y_C, Z_C)^T + λ · P · T · (f, x, y)^T,
where (X_G, Y_G, Z_G) is the spatial coordinate of the target, (X_C, Y_C, Z_C) is the coordinate of the camera's optical center, (f, x, y) is the sight-line vector, P and T are the rotation matrices of the camera, and λ is the ray-extension parameter.
The mapping model from geographic space to video image space is expressed as its inverse:
(f, x, y)^T = (1/λ) · T^{-1} · P^{-1} · ((X_G, Y_G, Z_G)^T − (X_C, Y_C, Z_C)^T).
Further, the video background modeling uses the ViBe algorithm.
Further, the superpixel segmentation of the static video background image uses the SLIC superpixel segmentation algorithm.
Beneficial effects: based on the shortcomings of current fusion of surveillance video with 2D maps, the invention proposes a method for integrating high-tilt-angle surveillance video with a 2D map, which mainly solves the problem of deformation and distortion of dynamic targets after geometric correction of oblique images. At the same time, since the background of video shot by a fixed (box) camera remains unchanged, geometric correction need not be performed on every frame of the video; this greatly reduces the amount of computation and effectively improves the efficiency of geometric correction of surveillance video images. The method completes the integration of a single surveillance video with a two-dimensional map and enhances the two-dimensional map's expression of dynamic targets.
Brief description of the drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 shows the results of the video foreground/background separation experiment of the invention;
Fig. 3 shows the results of segmenting the level road area, wherein 3-1 is the original video background image, 3-2 is the color-coded result after classification, and 3-3 is the level road after segmentation;
Fig. 4 shows the video background image correction results of the invention, wherein 4-1 is the video background image to be corrected and 4-2 is the corrected result;
Fig. 5 is a schematic diagram of the overall effect of the invention, wherein 5-1 is the original image of video frame 359 and 5-2 is the integrated result of video frame 359 and the two-dimensional map.
Specific embodiment
The present invention is further explained with reference to the accompanying drawings and examples.
The basic idea of the invention: first, video background modeling is performed on the surveillance video, the flat road surface is automatically segmented from the obtained static video background image using a decision tree, and high-tilt-angle geometric correction is then applied to it using a homography matrix; the video foreground dynamic targets are extracted by combining the background subtraction method with the three-frame difference method; finally, the corrected static video background image and the extracted foreground dynamic targets are each mapped onto the 2D map, realizing the integration of the surveillance video and the two-dimensional map and enhancing the two-dimensional map's expression of dynamic targets.
Embodiment 1:
As shown in Fig. 1, the fusion method of high-tilt-angle video and a 2D map of this embodiment comprises the following steps:
Step 1: video background modeling.
The static video background image, i.e., the points in the video that are static or move very slowly, is obtained by video background modeling. The ViBe algorithm is used for background modeling: it is a pixel-level video background modeling algorithm with high computational efficiency, a degree of robustness to noise, suitability for complex scenes such as camera shake and illumination changes, and good real-time performance, which guarantees both the quality and the efficiency of video background modeling.
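ViBe itself maintains per-pixel sample sets and is not sketched here; as a minimal stand-in that captures the same idea (static pixels outvote transient moving objects), a per-pixel temporal median over a short frame buffer can serve as the static background. The synthetic frames below are assumptions for illustration, not the patent's data:

```python
import numpy as np

def median_background(frames):
    """Estimate a static background as the per-pixel temporal median.

    A simple stand-in for ViBe: a pixel occupied by a moving object in only
    a few frames is outvoted by the static background gray values.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return np.median(stack, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 40, 60
    background = np.full((h, w), 120.0)
    frames = []
    for t in range(9):
        frame = background + rng.normal(0, 1.0, (h, w))  # sensor noise
        frame[10:15, 5 * t:5 * t + 4] = 250.0            # moving bright object
        frames.append(frame)
    bg = median_background(frames)
    # The object never covers a pixel in a majority of frames, so the
    # recovered background stays close to the true value of 120 everywhere.
    print(float(np.abs(bg - 120.0).max()))
```

A real implementation would update the model online; the batch median is only meant to show why a temporal statistic suppresses foreground motion.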
Step 2: foreground dynamic-target extraction.
The foreground dynamic targets are the clearly moving points in the surveillance video, typically moving vehicles and walking pedestrians. They are extracted by combining the background subtraction method with the three-frame difference method, with the following specific steps:
(1) Background subtraction is performed using the static video background image obtained in Step 1 to extract preliminary foreground dynamic targets, and two adaptive thresholds k1 and k2 are set according to the average gray value of the static video background image.
(2) Each frame of the surveillance video is read and differenced to obtain, for each pixel, the difference values g1 and g2.
(3) If g1 > k1 or g2 > k2, the pixel is labeled 1 in the foreground binary image; other pixels are labeled 0, yielding a preliminary video foreground dynamic-target binary map.
(4) Since noise is quite visible in the preliminary binary map, the foreground obtained by the three-frame difference method and the foreground obtained by the background subtraction method are combined with an AND operation to eliminate the noise, giving the final foreground dynamic-target result.
(5) The positions of the foreground dynamic targets are determined by connected-component analysis, and the center coordinate Centre of each moving target is solved as
Centre = (1/M) Σ_{i=1}^{M} (x_i, y_i),
where M is the number of pixels of the foreground dynamic target and (x_i, y_i) are its pixel coordinates.
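The patent's exact difference expressions are given as images in the original; the sketch below therefore makes assumptions: g1 is taken as the background-subtraction difference, g2 as a three-frame difference, and the thresholds as fractions (c1, c2, hypothetical parameters) of the mean background gray value. The AND combination and the centroid formula follow the text:

```python
import numpy as np

def foreground_mask(prev_f, cur_f, next_f, background, c1=0.3, c2=0.3):
    """Combine background subtraction with a three-frame difference.

    Assumed forms (the original formulas are images):
      g1 = |frame - background|           (background subtraction)
      g2 = min(|cur - prev|, |next - cur|) (three-frame difference)
    with thresholds k1, k2 proportional to the mean background gray value.
    The AND of the two cues suppresses noise that fires in only one of them.
    """
    mean_bg = background.mean()
    k1, k2 = c1 * mean_bg, c2 * mean_bg
    g1 = np.abs(cur_f - background)
    g2 = np.minimum(np.abs(cur_f - prev_f), np.abs(next_f - cur_f))
    return (g1 > k1) & (g2 > k2)

def centroid(mask):
    """Center coordinate Centre: mean of the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

if __name__ == "__main__":
    bg = np.full((30, 30), 100.0)
    prev_f, cur_f, next_f = bg.copy(), bg.copy(), bg.copy()
    prev_f[5:9, 4:8] = 250.0    # object in the previous frame
    cur_f[5:9, 10:14] = 250.0   # object in the current frame
    next_f[5:9, 16:20] = 250.0  # object in the next frame
    mask = foreground_mask(prev_f, cur_f, next_f, bg)
    print(centroid(mask))  # → (11.5, 6.5)
```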
Step 3: video background geometric correction.
The video background geometric correction is based on a decision tree: the static video background image obtained in Step 1 is segmented to obtain the flat road-surface region of the image, and a homography-based geometric correction algorithm for high-tilt-angle video images is applied to the road-surface region of the static video background image. The specific steps are:
(1) Superpixel segmentation is performed on the static video background image obtained in Step 1, using the SLIC superpixel segmentation algorithm, which offers efficient processing speed, low algorithmic complexity, and good segmentation boundaries.
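SLIC clusters pixels around grid-seeded centers using a combined intensity-and-position distance restricted to a local window. The following is a simplified grayscale sketch of that idea, not a production SLIC implementation (parameter names are illustrative):

```python
import numpy as np

def slic_lite(img, n_segments=16, n_iter=5, compactness=10.0):
    """Simplified SLIC superpixels on a grayscale image.

    Cluster centers start on a regular grid with spacing S; each pixel is
    assigned to the best center within a 2S window under the distance
    d = d_gray^2 + (m/S)^2 * d_spatial^2, then centers move to cluster means.
    """
    h, w = img.shape
    S = max(1, int(np.sqrt(h * w / n_segments)))
    ys = np.arange(S // 2, h, S)
    xs = np.arange(S // 2, w, S)
    centers = np.array([[y, x, img[y, x]] for y in ys for x in xs], dtype=float)
    yy, xx = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), dtype=int)
    dist = np.empty((h, w))
    for _ in range(n_iter):
        dist[:] = np.inf
        for k, (cy, cx, ci) in enumerate(centers):
            y0, y1 = max(int(cy) - S, 0), min(int(cy) + S + 1, h)
            x0, x1 = max(int(cx) - S, 0), min(int(cx) + S + 1, w)
            d_gray = (img[y0:y1, x0:x1] - ci) ** 2
            d_sp = (yy[y0:y1, x0:x1] - cy) ** 2 + (xx[y0:y1, x0:x1] - cx) ** 2
            d = d_gray + (compactness / S) ** 2 * d_sp
            better = d < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = d[better]
            labels[y0:y1, x0:x1][better] = k
        for k in range(len(centers)):  # move centers to cluster means
            m = labels == k
            if m.any():
                centers[k] = (yy[m].mean(), xx[m].mean(), img[m].mean())
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.uniform(0, 255, (32, 32))
    labels = slic_lite(img, n_segments=16)
    print(labels.shape, labels.max() + 1)
```

The real SLIC additionally enforces connectivity as a post-processing step; libraries such as scikit-image provide a full implementation.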
(2) Prior knowledge of ground versus non-ground is obtained by machine learning, and a decision tree is constructed. The classification basis of the decision tree is the image features extracted from the segmented static video background image; this embodiment selects 10 kinds of image features, 55 pixel features in total, as the basis for decision tree classification.
Table 1: Image features used for decision tree classification
(3) The segmented video background image is classified into level ground and non-ground, obtaining the ground portion and the non-ground portion.
(4) The ground portion of the static video background image is corrected into an orthographic image by a homography matrix. Suppose the images before and after correction are I1 and I2 respectively. For any point (x1, y1) on the video background image I1 to be corrected, the corresponding point (x2, y2) can be found on the image I2. Corresponding points in the two images satisfy the homography
x2 = H x1   (6)
where H is a 3 × 3 matrix:
H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]   (7)
The coordinates of the corrected image point are then
x2 = (h11 x1 + h12 y1 + h13) / (h31 x1 + h32 y1 + h33),
y2 = (h21 x1 + h22 y1 + h23) / (h31 x1 + h32 y1 + h33)   (8)
(5) In general, x2 and y2 are not integers and must be rounded, with the gray value at the corresponding point (x2, y2) determined by the gray value at (x1, y1). The image obtained this way contains holes. To eliminate them, each hole point (x2', y2') is detected, its corresponding point (x1', y1') on image I1 is computed, and its gray value is calculated by bilinear interpolation, finally yielding the corrected orthographic static video background image.
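The hole problem described above disappears if the warp is computed by inverse mapping: for every target pixel, sample the source image through H^-1 with bilinear interpolation. A minimal sketch of that standard technique (not the patent's exact implementation) on a grayscale image:

```python
import numpy as np

def warp_homography(src, H, out_shape):
    """Warp a grayscale image by homography H using inverse mapping.

    For each target pixel (x2, y2), the source point (x1, y1) = H^-1 (x2, y2)
    is computed and its gray value is bilinearly interpolated, so no holes
    appear by construction.
    """
    h_out, w_out = out_shape
    Hinv = np.linalg.inv(H)
    yy, xx = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xx.ravel(), yy.ravel(),
                    np.ones(h_out * w_out)]).astype(float)
    sx, sy, sw = Hinv @ pts
    sx, sy = sx / sw, sy / sw            # back to inhomogeneous coordinates
    h, w = src.shape
    valid = (sx >= 0) & (sx <= w - 1) & (sy >= 0) & (sy <= h - 1)
    x0 = np.clip(np.floor(sx[valid]).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(sy[valid]).astype(int), 0, h - 2)
    fx, fy = sx[valid] - x0, sy[valid] - y0
    # bilinear interpolation from the four neighbouring source pixels
    val = (src[y0, x0] * (1 - fx) * (1 - fy)
           + src[y0, x0 + 1] * fx * (1 - fy)
           + src[y0 + 1, x0] * (1 - fx) * fy
           + src[y0 + 1, x0 + 1] * fx * fy)
    out = np.zeros(out_shape)
    flat = out.ravel()                   # view into out
    flat[valid] = val
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.uniform(0, 255, (20, 20))
    out = warp_homography(img, np.eye(3), (20, 20))
    # the identity homography must reproduce the image exactly
    print(float(np.abs(out - img).max()))
```

With a real correction homography solved from the camera's interior and exterior parameters, the same routine produces the orthographic ground image directly.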
Step 4: mutual mapping model between the video objects and the 2D map.
The mutual mapping model realizes the mutual mapping between image space and geographic space. The specific steps are:
(1) Mapping from image space to geographic space.
Video image space is two-dimensional and geographic space is three-dimensional; a surveillance camera projects the objects in geographic space into image space. The surveillance camera images by perspective projection, modeled as:
(X_G, Y_G, Z_G)^T = (X_C, Y_C, Z_C)^T + λ · P · T · (f, x, y)^T   (10)
The meaning of the formula is as follows: in the camera's initial state the camera position is (0, 0, 0), the horizontal angle and pitch angle are both 0°, and the sight-line vector CP is (f, x, y). As the camera rotates, the rotation is expressed by the rotation matrices P and T; the sight-line vector multiplied by these rotation matrices is the rotated viewing direction. Extending the ray, i.e., multiplying by λ, gives the vector from the camera's optical center to the object point. Knowing the optical center coordinates (X_C, Y_C, Z_C) of the camera and the vector from the optical center to the object point, the object point coordinates (X_G, Y_G, Z_G) can be obtained. The left-hand side, the spatial position of the target, is the unknown; the right-hand side contains the image pixel position, the attitude and position of the video sensor, and λ, which are known.
Image space is a two-dimensional space, and a point in it corresponds to the infinitely many points on a straight line in three-dimensional space; the formula above is the expression of this straight line. Once λ is determined, a unique point in three-dimensional space is determined.
Let the distance from the camera's optical center to the image point be f_D = ‖(f, x, y)‖ and the distance from the optical center to the object point be D; since rotation preserves length, the geometric relationship D = λ · f_D holds. The invention takes level ground as the reference, so H_G equals the elevation of the ground. When H_G is known, λ follows from the elevation component of formula 10:
λ = (H_G − Z_C) / [P · T · (f, x, y)^T]_z
Substituting λ into formula 10 yields X_G and Y_G, thereby determining the spatial position of an object appearing on the image.
(2) Mapping from geographic space to image space.
The mapping from geographic space to image space is the inverse of the mapping from image space to geographic space. Inverting and rearranging formula 10 gives:
(f, x, y)^T = (1/λ) · T^{-1} · P^{-1} · ((X_G, Y_G, Z_G)^T − (X_C, Y_C, Z_C)^T)   (14)
The right-hand side contains known quantities: the geospatial coordinates, the camera attitude, and the camera position. On the left-hand side, when the focal length f is known, λ can be obtained from the first component of formula 14:
λ = [T^{-1} · P^{-1} · ((X_G, Y_G, Z_G)^T − (X_C, Y_C, Z_C)^T)]_1 / f   (15)
Substituting formula 15 into formula 14, the coordinates (x, y) of the spatial point in image space can be calculated.
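Both directions of the mapping can be sketched numerically. The rotation conventions below (pan about z, tilt about y) and all camera parameters are assumptions for illustration, since the patent gives P and T only as images; the point is the round trip: cast the ray of formula 10 onto the ground plane, then recover the image coordinates via formulas 14 and 15:

```python
import numpy as np

def rot_z(a):  # pan (horizontal) rotation, assumed convention
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):  # tilt (pitch) rotation, assumed convention
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def image_to_geo(xy, f, C, P, T, ground_z=0.0):
    """Formula 10: G = C + lam * P T (f, x, y); lam from the ground plane."""
    x, y = xy
    d = P @ T @ np.array([f, x, y], dtype=float)
    lam = (ground_z - C[2]) / d[2]      # elevation component fixes lam
    return C + lam * d

def geo_to_image(G, f, C, P, T):
    """Formulas 14/15: v = (1/lam) T^-1 P^-1 (G - C), lam from v[0] = f."""
    v = np.linalg.inv(T) @ np.linalg.inv(P) @ (np.asarray(G, float) - C)
    lam = v[0] / f
    v = v / lam
    return v[1], v[2]                   # image coordinates (x, y)

if __name__ == "__main__":
    f = 800.0
    C = np.array([10.0, 20.0, 15.0])    # camera optical center, 15 m high
    P, T = rot_z(0.3), rot_y(0.9)       # high tilt angle, looking down
    G = image_to_geo((35.0, -60.0), f, C, P, T)
    x, y = geo_to_image(G, f, C, P, T)
    print(round(x, 6), round(y, 6))     # → 35.0 -60.0
```

The recovered (x, y) equals the starting pixel, confirming that the two mapping models are mutual inverses under these assumptions.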
Step 5: integration of the video objects and the 2D map.
Integrating the video objects with the 2D map means establishing the mutual mapping model between the surveillance video and the 2D map, mapping the corrected orthographic static video background image and the extracted foreground dynamic targets onto the two-dimensional map, and realizing the integrated presentation of the surveillance video and the two-dimensional map.
(1) Mapping the static video background of the surveillance video to the two-dimensional map. The specific steps are:
1. According to the interior and exterior parameters of the camera, establish the mapping model from the video image to the 2D geospatial data.
2. According to the established mapping model, calculate the viewshed trapezoid of the surveillance video image in geographic space.
3. From the coordinates (X_i, Y_i), i ∈ [1,4], of the four corner points of the viewshed trapezoid, calculate the side length L_i, i ∈ [1,4], of each side of the trapezoid.
4. Using the scale s of the current 2D map, calculate the distance l_i, i ∈ [1,4], on the map corresponding to each side of the trapezoid:
l_i = s × L_i, i ∈ [1,4]   (16)
5. Given the length PL_i, i ∈ [1,4], of each side of the corrected orthographic static video background image and its map length l_i, there is a scaling relationship between the images before and after mapping, with scale factor
ε_i = l_i / PL_i   (17)
Scaling is performed so that the surveillance video background image is brought to a suitable size before being added to the map. After the video background image is scaled, it is rotated and translated according to the camera center point coordinates and the camera's rotation and pitch angles, which maps the surveillance video background image onto the correct position of the two-dimensional map.
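Steps 3 to 5 above reduce to elementary geometry; a minimal sketch with a hypothetical viewshed trapezoid (all numbers assumed, not from the patent's experiment) computes the side lengths L_i, the map distances l_i of formula 16, the scale factors ε_i of formula 17, and the scale-rotate-translate placement of an image point:

```python
import math

def side_lengths(corners):
    """Side lengths L_i of the viewshed trapezoid from its corner points."""
    n = len(corners)
    return [math.dist(corners[i], corners[(i + 1) % n]) for i in range(n)]

def scale_factors(map_lengths, image_lengths):
    """Formula 17: eps_i = l_i / PL_i for each corresponding pair of sides."""
    return [l / pl for l, pl in zip(map_lengths, image_lengths)]

def place_on_map(pt, eps, rot, tx, ty):
    """Scale, rotate, then translate one image point onto the map."""
    x, y = pt[0] * eps, pt[1] * eps
    c, s = math.cos(rot), math.sin(rot)
    return (c * x - s * y + tx, s * x + c * y + ty)

if __name__ == "__main__":
    # hypothetical viewshed trapezoid in geographic space (metres)
    corners = [(0.0, 0.0), (40.0, 0.0), (30.0, 20.0), (10.0, 20.0)]
    L = side_lengths(corners)
    s = 0.5                          # map scale: 1 m ground -> 0.5 map units
    l = [s * Li for Li in L]         # formula (16)
    PL = [80.0, 44.72, 40.0, 44.72]  # image side lengths in pixels (assumed)
    eps = scale_factors(l, PL)
    print([round(v, 3) for v in L], [round(e, 3) for e in eps])
```

Since the orthographic image is similar in shape to the viewshed trapezoid, the four ε_i come out (nearly) equal, which is why a single scale factor suffices in practice.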
(2) Mapping the foreground dynamic targets of the surveillance video to the two-dimensional map. The specific steps are:
1. Extract the foreground dynamic targets in the surveillance video according to Step 2, and calculate the center coordinate of each foreground dynamic target.
2. According to the scale factor ε_i, scale each dynamic target in equal proportion.
3. According to the interior and exterior parameters of the surveillance camera, construct the mapping model between the surveillance video and the 2D geospatial data, and based on this model convert the center coordinate of each moving target into 2D geographic coordinates.
4. Map the dynamic foreground targets of the surveillance video onto the two-dimensional geospatial data. The direction of motion of a dynamic target, such as the heading of a moving vehicle, is determined according to the rotation angle of the camera; by continuously updating the real-time position of each dynamic target, the targets move on the two-dimensional map.
(3) The mapping results of (1) and (2) are then combined, finally completing the integration of the surveillance video and the two-dimensional map.
Embodiment 2:
Step 1, equipment preparation: prepare a portable notebook computer and one high-definition surveillance camera.
Step 2, separation of the static video background image and the foreground dynamic targets: the video background is established using the ViBe algorithm, and the foreground dynamic targets are then extracted according to the established video background image. The results are shown in Fig. 2: in order, the original image, the video background image, and the video dynamic targets; the first row of Fig. 2 is the processing result for video frame 75, and the second row for video frame 359.
Step 3, geometric correction of the static video background image:
(1) Urban road-traffic images provided by the LabelMe dataset are used as the training set to construct a decision tree. Based on the constructed decision tree, the flat road surface of the experimental video data is segmented; the segmentation result is shown in Fig. 3.
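A minimal sketch of training such a ground/non-ground classifier: a depth-1 decision tree (a stump) over a single assumed feature, the mean gray value of a region. The toy feature values and labels below are invented for illustration and are far simpler than the LabelMe-trained tree of the embodiment.

```python
import numpy as np

def train_stump(features, labels):
    """Fit a depth-1 decision tree on one scalar feature: choose the
    threshold that best separates road (1) from non-road (0) samples."""
    best_t, best_acc = None, -1.0
    for t in np.unique(features):
        acc = ((features >= t).astype(int) == labels).mean()
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Toy data: mean gray value per region; road surfaces are brighter here.
gray = np.array([30, 40, 35, 120, 130, 125], dtype=float)
is_road = np.array([0, 0, 0, 1, 1, 1])
threshold = train_stump(gray, is_road)
pred = (gray >= threshold).astype(int)
```

A real decision tree would branch on several such features (color, texture, position) rather than one threshold, but the split-selection principle is the same.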
(2) Geometric correction is performed on the segmented video background image. Fig. 3-3 is the image to be corrected; the rectifying homography matrix H is solved from the intrinsic and extrinsic parameters of the monitoring camera and applied to Fig. 3-3, with the result shown in Fig. 4.
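The homography H can also be estimated from four ground-point correspondences with the direct linear transform (DLT), rather than derived from camera parameters as in the embodiment; the trapezoid-to-rectangle coordinates below are assumed values.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 homography H by DLT from four point
    correspondences; the null vector of the stacked constraint
    matrix (via SVD) gives the entries of H."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Rectify a trapezoidal road region to a rectangle (front-view-like).
src = [(100, 300), (540, 300), (640, 480), (0, 480)]
dst = [(0, 0), (200, 0), (200, 100), (0, 100)]
H = homography(src, dst)
```

Applying `warp_point` to every pixel (or warping the image with it) performs the same rectification that the embodiment applies to the segmented road surface.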
Step 4, integration of the video dynamic and static targets with the two-dimensional map:
The corrected static video background image and the extracted foreground dynamic targets are mapped into the two-dimensional map.
The present invention uses OpenLayers to load Google Maps tiles as the base map, determines the mapping model between the video image and the 2D geographic data from the camera's intrinsic and extrinsic parameters, and thereby maps the video dynamic and static targets onto the 2D geographic data; the ol.layer.Image() and ol.style.Icon() classes of OpenLayers are called to add the video background image and the dynamic targets onto Google Maps. The result of the video/two-dimensional-map integration experiment is shown in Fig. 5.
Embodiment 3:
This embodiment discloses a system for fusing high-inclination-angle video with a 2D map, comprising a network interface, a memory and a processor, wherein:
the network interface is used to send and receive signals while information is exchanged with other external network elements;
the memory is used to store computer program instructions executable on the processor;
the processor is used to perform, when running the computer program instructions, the steps of the method for fusing high-inclination-angle video with a 2D map of Embodiment 1.
Embodiment 4:
This embodiment discloses a computer storage medium storing a program of the method for fusing high-inclination-angle video with a 2D map; when executed by at least one processor, the program implements the steps of the method for fusing high-inclination-angle video with a 2D map of Embodiment 1.
The present application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems) and computer program products of the embodiments of the application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be realized by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data-processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data-processing device produce a device for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to work in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, the instruction device realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thereby provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Finally, it should be noted that the above embodiments merely illustrate the technical scheme of the present invention and are not intended to limit it. Those of ordinary skill in the art may still modify the specific embodiments of the invention with reference to the above embodiments, or make equivalent replacements for some features; any such modification or equivalent replacement that does not depart from the spirit and scope of the invention falls within the claims of the present invention.

Claims (10)

1. A method for fusing high-inclination-angle video with a 2D map, characterized by comprising the following steps:
S1: establishing, according to the camera parameters, a mutual mapping model between the monitoring video and the 2D geospatial data;
S2: according to the mutual mapping model, mapping the front-view static video background image and the foreground dynamic targets of the monitoring video onto the 2D map, completing the integrated presentation of the monitoring video and the 2D map.
2. The method for fusing high-inclination-angle video with a 2D map according to claim 1, characterized in that the acquisition of the front-view static video background image of the monitoring video comprises the following steps:
obtaining the static video background image of the monitoring video by a video background modelling technique;
performing geometric correction on the static video background image to obtain its corresponding front-view image.
3. The method for fusing high-inclination-angle video with a 2D map according to claim 1, characterized in that the extraction of the foreground dynamic targets in the monitoring video comprises:
performing an AND operation on the foreground dynamic-target binary map obtained by the three-frame difference method and the foreground dynamic-target binary map obtained by the background subtraction method, to obtain the final foreground dynamic targets;
obtaining the positions of the foreground dynamic targets by performing connected-domain analysis on the foreground dynamic targets.
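A sketch of this claim: the two binary maps are combined with a logical AND and the result is labelled by connected-domain analysis. The flood-fill labelling below is a simple stand-in for library routines, and the toy masks are assumed values.

```python
import numpy as np

def connected_components(mask):
    """4-connected component labelling of a binary mask by flood fill
    (OpenCV/scipy provide faster equivalents of this analysis)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not labels[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < h and 0 <= b < w and mask[a, b] and not labels[a, b]:
                        labels[a, b] = count
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return labels, count

# Final foreground = AND of the three-frame-difference mask and the
# background-subtraction mask, followed by connected-domain analysis.
three_frame = np.zeros((8, 8), dtype=bool)
backsub = np.zeros((8, 8), dtype=bool)
three_frame[1:4, 1:4] = True
three_frame[6, 6] = True
backsub[2:5, 2:5] = True
backsub[6, 6] = True
final = three_frame & backsub
labels, n = connected_components(final)
```

Each labelled component corresponds to one foreground dynamic target, and its pixel coordinates give the target's position.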
4. The method for fusing high-inclination-angle video with a 2D map according to claim 3, characterized in that the step of obtaining a foreground dynamic-target binary map comprises:
obtaining the static video background image of the monitoring video by a video background modelling technique;
extracting foreground dynamic targets with the three-frame difference method and the background subtraction method respectively, relative to the static video background image, to obtain preliminary foreground dynamic targets from each;
differencing each frame of the monitoring video against the preliminary foreground dynamic targets to obtain the differences g1 and g2 of each pixel;
if g1 > k1 or g2 > k2, where k1 and k2 are the corresponding adaptive thresholds computed from the average gray value of the static video background image, the pixel is labelled 1 and all other pixels are labelled 0, yielding the video foreground dynamic-target binary map.
5. The method for fusing high-inclination-angle video with a 2D map according to claim 2, characterized in that, before the step of performing geometric correction on the static video background image, the method comprises:
performing superpixel segmentation on the static video background image;
constructing, based on prior knowledge of ground versus non-ground, a decision tree whose classification criteria are image features extracted from the segmented static video background image;
classifying the segmented static video background image into horizontal ground and non-ground with the decision tree, obtaining the ground portion and the non-ground portion of the static video background image;
the step of performing geometric correction on the static video background image comprises: correcting the ground portion of the static video background image into a front-view image with a homography matrix;
if the front-view image contains hole points, obtaining the corresponding point of each hole point on the static video background image, computing the gray value of that corresponding point by bilinear interpolation, and thereby obtaining the gray value of the hole point, finally yielding the corrected static video background image.
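The bilinear interpolation used for hole filling can be sketched as follows; the background image values and the back-projected hole position (1.5, 1.5) are assumptions for illustration.

```python
import numpy as np

def bilinear(img, x, y):
    """Gray value at a fractional position (x, y) in `img`, computed by
    bilinear interpolation of the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

# A hole pixel in the front-view image maps back to position (1.5, 1.5)
# in the static background image; its gray value is taken from there.
bg = np.array([[10, 20, 30],
               [40, 50, 60],
               [70, 80, 90]], dtype=float)
fill_value = bilinear(bg, 1.5, 1.5)
```

Repeating this for every hole point fills the gaps that the homography warp leaves in the front-view image.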
6. The method for fusing high-inclination-angle video with a 2D map according to claim 1, characterized in that the step of mapping the front-view static video background image of the monitoring video onto the 2D map comprises:
obtaining the view trapezoid of the monitoring video in geographic space according to the mutual mapping model between the monitoring video and the 2D geospatial data established in S1;
computing, from the coordinates (Xi, Yi), i ∈ [1,4], of the four corner points of the view trapezoid, the side lengths Li, i ∈ [1,4], of the trapezoid, and the corresponding distances li, i ∈ [1,4], of those sides on the map: li = s × Li, i ∈ [1,4], where s is the scale of the 2D map;
computing the scale factor εi from the pixel length PLi of each side of the front-view static video background image and its actual length li, as εi = li / PLi;
scaling the front-view static video background image based on the scale factor, then rotating and translating it according to the camera centre-point coordinates and the camera's rotation and pitch angles, mapping the static video background image onto the correct position of the 2D map;
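A worked sketch of the side-length and scale-factor computation of this claim. The trapezoid corner coordinates, the map scale s, and the pixel side lengths PL_i are all assumed values, and eps_i = l_i / PL_i follows the definitions above.

```python
import numpy as np

# Geographic corner coordinates of the camera's view trapezoid (assumed).
corners = np.array([[0.0, 0.0], [40.0, 0.0], [30.0, 20.0], [10.0, 20.0]])

# Side lengths L_i of the trapezoid, i in [1, 4].
L = np.array([np.linalg.norm(corners[(i + 1) % 4] - corners[i])
              for i in range(4)])

s = 0.5                 # map scale: map distance per unit of ground distance
l = s * L               # corresponding distances l_i on the 2D map

# Pixel lengths PL_i of the matching sides of the front-view background
# image (assumed), giving the per-side scale factors eps_i = l_i / PL_i.
PL = np.array([400.0, 220.0, 200.0, 220.0])
eps = l / PL
```

The factors `eps` are then used to scale the background image (and, in later claims, the foreground targets) before the rotation and translation onto the map.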
the step of mapping the foreground dynamic targets of the monitoring video onto the 2D map comprises:
computing the centre coordinate Centre of each foreground dynamic target in the monitoring video as Centre = ((1/M) Σ xi, (1/M) Σ yi),
where M is the number of pixels of the foreground dynamic target and (xi, yi) are the pixel coordinates of the foreground dynamic target;
scaling each foreground dynamic target proportionally according to the scale factor εi;
converting the centre coordinate of each foreground dynamic target into a 2D geographic coordinate according to the mutual mapping model between the monitoring video and the 2D geospatial data;
mapping the foreground dynamic targets of the monitoring video onto the 2D map according to their centre coordinates and directions of motion, and updating the positions of the foreground dynamic targets in real time.
7. The method for fusing high-inclination-angle video with a 2D map according to claim 6, characterized in that the direction of motion of a foreground dynamic target is determined by the rotation angle of the camera.
8. The method for fusing high-inclination-angle video with a 2D map according to claim 1, characterized in that the mutual mapping model between the monitoring video and the 2D geospatial data comprises a mapping model from the video image space to the geospatial data and a model from the geospatial data to the video image space;
the mapping model from the video image space to the geospatial data is expressed as:
(XG, YG, ZG)^T = (Xc, Yc, Zc)^T + λ · P · T · (x, y, −f)^T
where (XG, YG, ZG) is the spatial coordinate of the target, (Xc, Yc, Zc) is the coordinate of the optical centre of the camera, (f, x, y) form the sight-line vector, P and T are the rotation matrices of the camera, and λ is the ray extension parameter;
the mapping model from the geospatial data to the video image space is the corresponding inverse transform:
(x, y, −f)^T = (1/λ) · T⁻¹ · P⁻¹ · ((XG, YG, ZG)^T − (Xc, Yc, Zc)^T).
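The forward mapping model can be sketched as a ray cast from the optical centre: the sight-line vector is rotated by the pan and tilt matrices and extended by λ until it meets the ground plane Z = 0. The matrix conventions here (P as a rotation about Z, T about X) and all numeric values are assumptions, not the patent's exact definitions.

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def image_to_ground(x, y, f, cam_pos, pan, tilt):
    """Rotate the sight-line vector (x, y, -f) by the camera's pan and
    tilt matrices and extend it until it intersects the ground Z = 0."""
    d = rot_z(pan) @ rot_x(tilt) @ np.array([x, y, -f])
    lam = -cam_pos[2] / d[2]          # solve Zc + lam * dz = 0
    return cam_pos + lam * d

cam = np.array([0.0, 0.0, 10.0])      # optical centre 10 m above ground
ground = image_to_ground(0.0, 0.0, 1.0, cam, pan=0.0, tilt=np.deg2rad(45))
```

Intersecting the extended ray with the ground plane is what fixes the value of λ, so the same function also illustrates why the inverse model divides by λ.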
9. The method for fusing high-inclination-angle video with a 2D map according to claim 2 or 4, characterized in that the video background modelling technique uses the ViBe algorithm.
10. The method for fusing high-inclination-angle video with a 2D map according to claim 5, characterized in that the superpixel segmentation of the static video background image uses the SLIC superpixel segmentation algorithm.
CN201910350808.5A 2019-04-28 2019-04-28 Method for fusing large-dip-angle video and 2D map Expired - Fee Related CN110245199B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910350808.5A CN110245199B (en) 2019-04-28 2019-04-28 Method for fusing large-dip-angle video and 2D map


Publications (2)

Publication Number Publication Date
CN110245199A true CN110245199A (en) 2019-09-17
CN110245199B CN110245199B (en) 2021-10-08

Family

ID=67883630

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910350808.5A Expired - Fee Related CN110245199B (en) 2019-04-28 2019-04-28 Method for fusing large-dip-angle video and 2D map

Country Status (1)

Country Link
CN (1) CN110245199B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060083440A1 (en) * 2004-10-20 2006-04-20 Hewlett-Packard Development Company, L.P. System and method
CN101277429A (en) * 2007-03-27 2008-10-01 中国科学院自动化研究所 Method and system for amalgamation process and display of multipath video information when monitoring
WO2014170886A1 (en) * 2013-04-17 2014-10-23 Digital Makeup Ltd System and method for online processing of video images in real time
CN104581018A (en) * 2013-10-21 2015-04-29 北京航天长峰科技工业集团有限公司 Video monitoring method for realizing two-dimensional map and satellite image interaction
CN106780541A (en) * 2016-12-28 2017-05-31 南京师范大学 A kind of improved background subtraction method
CN107197200A (en) * 2017-05-22 2017-09-22 北斗羲和城市空间科技(北京)有限公司 It is a kind of to realize the method and device that monitor video is shown
CN108389396A (en) * 2018-02-28 2018-08-10 北京精英智通科技股份有限公司 A kind of vehicle matching process, device and charge system based on video
CN108960566A (en) * 2018-05-29 2018-12-07 高新兴科技集团股份有限公司 A kind of traffic Visualized Monitoring System


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘洋: "Research on the Integration of Surveillance Video and Two-dimensional Maps", China Master's Theses Full-text Database, Basic Sciences *
张兴国 et al.: "Mutual Mapping between Surveillance Video and 2D Geospatial Data", Geomatics and Information Science of Wuhan University *
莫林 et al.: "A Moving-object Detection Algorithm Based on Background Subtraction and Three-frame Differencing", Microcomputer Information *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764110A (en) * 2019-11-12 2020-02-07 深圳创维数字技术有限公司 Path navigation method, device and computer readable storage medium
CN110764110B (en) * 2019-11-12 2022-04-08 深圳创维数字技术有限公司 Path navigation method, device and computer readable storage medium
CN112040265A (en) * 2020-09-09 2020-12-04 河南省科学院地理研究所 Multi-camera collaborative geographic video live broadcast stream generation method
CN112967214A (en) * 2021-02-18 2021-06-15 深圳市慧鲤科技有限公司 Image display method, device, equipment and storage medium
CN113033348A (en) * 2021-03-11 2021-06-25 北京文安智能技术股份有限公司 Overlook image correction method for pedestrian re-recognition, storage medium, and electronic device
CN113297950A (en) * 2021-05-20 2021-08-24 首都师范大学 Dynamic target detection method
CN113297950B (en) * 2021-05-20 2023-02-17 首都师范大学 Dynamic target detection method

Also Published As

Publication number Publication date
CN110245199B (en) 2021-10-08

Similar Documents

Publication Publication Date Title
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN109387204B (en) Mobile robot synchronous positioning and composition method facing indoor dynamic environment
CN110245199A (en) A kind of fusion method of high inclination-angle video and 2D map
CN103247075B (en) Based on the indoor environment three-dimensional rebuilding method of variation mechanism
CN110288657B (en) Augmented reality three-dimensional registration method based on Kinect
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
CN107833270A (en) Real-time object dimensional method for reconstructing based on depth camera
CN108537876A (en) Three-dimensional rebuilding method, device, equipment based on depth camera and storage medium
CN115082639B (en) Image generation method, device, electronic equipment and storage medium
CN103198488B (en) PTZ surveillance camera realtime posture rapid estimation
CN109314753A (en) Medial view is generated using light stream
CN113313828B (en) Three-dimensional reconstruction method and system based on single-picture intrinsic image decomposition
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN110211169B (en) Reconstruction method of narrow baseline parallax based on multi-scale super-pixel and phase correlation
CN115272591B (en) Geographic entity polymorphic expression method based on three-dimensional semantic model
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
CN114677479A (en) Natural landscape multi-view three-dimensional reconstruction method based on deep learning
CN112822479A (en) Depth map generation method and device for 2D-3D video conversion
CN112862736A (en) Real-time three-dimensional reconstruction and optimization method based on points
CN107767393B (en) Scene flow estimation method for mobile hardware
CN108629742B (en) True ortho image shadow detection and compensation method, device and storage medium
CN113436130A (en) Intelligent sensing system and device for unstructured light field
CN116152442B (en) Three-dimensional point cloud model generation method and device
CN116704112A (en) 3D scanning system for object reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20211008