CN102402855A - Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic - Google Patents
- Publication number
- CN102402855A CN102402855A CN2011102504432A CN201110250443A CN102402855A CN 102402855 A CN102402855 A CN 102402855A CN 2011102504432 A CN2011102504432 A CN 2011102504432A CN 201110250443 A CN201110250443 A CN 201110250443A CN 102402855 A CN102402855 A CN 102402855A
- Authority
- CN
- China
- Prior art keywords
- video
- video image
- real time
- panoramic video
- projective transformation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
The invention provides a method and system for fusing real-time panoramic videos from two cameras for intelligent traffic. The method comprises the following steps: a, inputting video images, then extracting and matching features of the video images; b, applying RANSAC purification to the extracted matched feature point pairs and computing a projective transformation matrix; c, transforming the fields of view into a reference coordinate system with the projective transformation matrix; and d, computing and generating a panoramic video image to realize panoramic video fusion. The panoramic video image fusion technique yields a wide-angle, high-resolution panoramic video image, so real-time monitoring of multi-lane roads can be implemented more conveniently and quickly; the inability of current equipment's viewing angle to meet practical needs is remedied, while equipment cost is greatly reduced.
Description
Technical field
The present invention relates to the field of intelligent transportation, and in particular to a dual-camera real-time panoramic video fusion method and system for intelligent transportation.
Background technology
In the field of intelligent transportation, images must cover a wide viewing angle while retaining high resolution. Existing methods rely on wide-angle optics, such as fisheye or wide-angle lenses, to obtain high-resolution images of a large field of view.
However, such equipment is relatively expensive, and the captured images exhibit significant distortion, so many applications must apply transformation and correction. At present, panoramic techniques mainly address the stitching of still pictures; a high-resolution panoramic video fusion technique for video has not yet appeared.
Summary of the invention
The object of the present invention is to provide a dual-camera real-time panoramic video fusion method and system for intelligent transportation, so as to generate wide-angle, high-resolution panoramic video with ordinary cameras while satisfying the performance requirements of real-time monitoring.
The object of the invention is realized through the following technical scheme.
A dual-camera real-time panoramic video fusion method for intelligent transportation comprises the following steps:
a: inputting video images, and performing feature extraction and matching on the video images;
b: applying RANSAC purification to the extracted matched feature point pairs, and calculating a projective transformation matrix;
c: projectively transforming the fields of view into a reference coordinate system through the projective transformation matrix;
d: calculating and generating a panoramic video image to realize panoramic video fusion.
Preferably, step b also comprises judging whether the current two video streams share a common region; if so, proceeding to step c; otherwise, adjusting the camera positions so that the two video streams overlap.
Preferably, step d specifically comprises:
for each pixel in the common region, calculating its weight coefficient in each field of view according to the weight coefficient formula;
then taking the weighted sum to obtain the true pixel value in the panoramic image according to the panorama fusion formula:
I(x,y) = ω_A(x,y)*I_A(x,y) + ω_B(x,y)*I_B(x,y).
Preferably, step a specifically comprises:
extracting feature vectors invariant to rotation, scale, and brightness changes from the video images;
matching according to the feature vectors.
Preferably, the feature vector extraction comprises the following steps:
S1: detecting scale-space extrema;
S2: precisely locating feature point positions and feature point descriptors;
S3: generating the feature vectors.
A dual-camera real-time panoramic video fusion system for intelligent transportation comprises:
an input unit for inputting video images;
a feature processing unit for performing feature extraction and matching on the video images;
a computational analysis unit for applying RANSAC purification to the extracted matched feature points and calculating a projective transformation matrix;
a projective transformation unit for projectively transforming the fields of view into a reference coordinate system through the projective transformation matrix;
a video fusion unit for calculating and generating a panoramic video image to realize panoramic video fusion.
Preferably, the computational analysis unit is also used to judge whether the current two video streams share a common region; if there is no common region, the camera positions are adjusted so that the two video streams overlap.
Preferably, the feature processing unit specifically comprises:
an extraction unit for extracting feature vectors invariant to rotation, scale, and brightness changes from the video images;
a matching unit for matching according to the feature vectors.
Compared with the prior art, the embodiment of the invention obtains a wide-angle, high-resolution panoramic video image through the video panoramic image fusion technique, so that real-time monitoring of multi-lane roads can be implemented more conveniently and quickly; it remedies the inability of existing equipment's field of view to meet practical needs, and at the same time greatly reduces equipment cost.
Description of drawings
Fig. 1 is a schematic diagram of the panorama fusion coefficients of the present invention;
Fig. 2 is a flow chart of the panoramic video fusion method of the present invention;
Fig. 3 is a schematic diagram of the system of the present invention.
Embodiment
In order to make the object, technical scheme, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein serve only to explain the present invention and are not intended to limit it.
Video panoramic image fusion removes the redundancy between video frames through image processing, converting a segment of video captured by a camera into a single panoramic image that contains all the information of the sequence.
The concrete scheme of the panoramic video fusion method of the present invention is as follows:
One, video feature extraction and matching: the present invention adopts the SIFT algorithm for feature point extraction and matching. The SIFT algorithm can be divided into two stages. First, feature vectors invariant to rotation, scale, and brightness changes are extracted from the images to be matched; the extraction mainly involves the following steps. Step 1: detect scale-space extrema. Step 2: precisely locate the feature point positions and compute the feature point descriptors. Step 3: generate the SIFT feature vectors. Second, matching is performed on the SIFT feature vectors. After the SIFT feature vectors are obtained, the Euclidean distance is adopted to measure the similarity between the two images, and a best-bin-first k-d tree search finds the two approximate nearest-neighbor feature points of each feature point. Among these two feature points, if the nearest-neighbor distance divided by the second-nearest-neighbor distance is less than a certain ratio threshold, the pair of matched points is accepted. A large number of experiments show that the judgment accuracy is 98% when the threshold is set to 0.49; therefore, the value 0.49 is used in practical applications.
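The ratio test described above can be sketched in a few lines. The following is a minimal NumPy illustration, using brute-force nearest-neighbor search in place of the k-d tree; the function name and toy descriptors are our assumptions, while the 0.49 threshold follows the text:

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.49):
    """Match rows of desc_a against rows of desc_b: for each descriptor,
    find its two nearest neighbours by Euclidean distance and accept the
    match only if nearest < ratio * second-nearest (0.49 per the text)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]       # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:    # unambiguous match only
            matches.append((i, j1))
    return matches
```

Descriptors that sit roughly midway between two candidates fail the ratio test and are discarded, which is exactly why the test suppresses erroneous matches before RANSAC.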
Two, projective transformation matrix calculation: RANSAC purification is applied to the matched feature point pairs extracted by the SIFT algorithm to remove erroneous matched pairs, the parameters of the projective transformation model are solved, and the final projective transformation matrix is obtained.
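RANSAC purification followed by solving the projective transformation model can be sketched as follows. This is a self-contained NumPy illustration: the Direct Linear Transform is a standard way to solve the 3x3 projective matrix from point pairs, and the function names, inlier threshold, and iteration count are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: solve the 3x3 projective matrix H
    (up to scale) from >= 4 point correspondences via SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null-space vector of A
    return H / H[2, 2]

def project(H, pts):
    """Apply H to 2-D points and normalize by the homogeneous coordinate."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
    """RANSAC purification: repeatedly fit H on 4 random pairs and keep
    the model with the most inliers (reprojection error < thresh),
    then refit on all inliers for the final matrix."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers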
Three, panorama fusion: for each pixel in the common region, its weight coefficient in each field of view is calculated; the weight coefficient computing formula is shown as formula (1). Let O be any point in the common region, ω_A(O) the weight coefficient of O in field of view A, and ω_B(O) the weight coefficient of O in field of view B; ω_A(O) and ω_B(O) are measured through the shortest distances d_A and d_B from O to the two borders of the common region, where d_A is the distance to the border of field of view A and d_B the distance to the border of field of view B:
ω_A(O) = d_A / (d_A + d_B), ω_B(O) = d_B / (d_A + d_B) (1)
The weighted sum is then taken, finally giving the true pixel value in the panoramic image; the panorama fusion is shown as formula (2):
I(x,y) = ω_A(x,y)*I_A(x,y) + ω_B(x,y)*I_B(x,y) (2)
where I(x,y) denotes the pixel value at point (x,y) in the panoramic image, I_A(x,y) and I_B(x,y) are the pixel values of images A and B at point (x,y), and ω_A(x,y) and ω_B(x,y) denote the weight coefficients of point (x,y) in images A and B respectively.
As shown in Figure 1, the fusion coefficient of a pixel in the common region is measured by that pixel's distances to the region's borders. From Fig. 1 it can be seen that border bad lies entirely within field of view A, so pixels on this border should have weight coefficient 1 in A and 0 in B. As a pixel moves from border bad toward border bcd, its weight coefficient in field of view A gradually decreases while its weight coefficient in field of view B gradually increases; upon reaching border bcd, its weight coefficient in A is 0 and in B is 1.
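For a simple horizontal overlap, formulas (1) and (2) reduce to column-wise linear feathering. The sketch below (NumPy; the function name and image layout are our assumptions) blends two equal-height grayscale views whose fields of view share a band of `overlap` columns:

```python
import numpy as np

def feather_blend(img_a, img_b, overlap):
    """Blend two equal-height grayscale images whose last/first `overlap`
    columns cover the same scene band, per formulas (1) and (2):
    w_A = d_A/(d_A+d_B), I = w_A*I_A + w_B*I_B. Requires overlap >= 2."""
    h, wa = img_a.shape
    wb = img_b.shape[1]
    out = np.zeros((h, wa + wb - overlap))
    out[:, :wa - overlap] = img_a[:, :wa - overlap]   # covered by A only
    out[:, wa:] = img_b[:, overlap:]                  # covered by B only
    k = np.arange(overlap)
    d_a = (overlap - 1) - k        # distance to the border of field A
    d_b = k                        # distance to the border of field B
    w_a = d_a / (d_a + d_b)        # formula (1)
    out[:, wa - overlap:wa] = (w_a * img_a[:, wa - overlap:]
                               + (1.0 - w_a) * img_b[:, :overlap])  # formula (2)
    return out
```

At the first overlap column the weight of A is 1 and of B is 0; the weights cross linearly until the roles reverse at the last overlap column, matching the boundary behavior described for borders bad and bcd.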
Embodiment:
Please refer to the flow chart of the panoramic video fusion method shown in Figure 2; the concrete operation steps are as follows:
Step 101: input the video images;
Step 102: perform feature extraction on the input video images with the SIFT algorithm, and carry out feature matching;
Step 103: apply RANSAC purification to the extracted matched feature point pairs to reduce erroneous matched pairs;
Step 104: judge whether the current two video streams share a common region, and thereby whether the two streams can undergo panorama fusion; if there is no common region, the camera positions need to be adjusted so that the two streams overlap;
Step 105: when the two video streams share a common region, calculate the projective transformation matrix;
Step 106: projectively transform the two video streams into the reference coordinate system through the projective transformation matrix;
Step 107: for the common region of the two video streams under the reference coordinate system, calculate and generate the panoramic video image through formulas (1) and (2).
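Step 106 above, warping a view into the reference coordinate system with the projective transformation matrix, can be sketched as an inverse-mapping warp. The following minimal NumPy illustration uses nearest-neighbor sampling; the function name and conventions are ours, not the patent's:

```python
import numpy as np

def warp_to_reference(img, H, out_shape):
    """Warp `img` into the reference frame via the homography H, which maps
    image coordinates (x, y, 1) to reference coordinates. Inverse mapping:
    every output pixel looks up its source pixel through H^-1, so the
    output has no holes."""
    H_inv = np.linalg.inv(H)
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx, sy, sw = H_inv @ coords
    sx = np.round(sx / sw).astype(int)    # nearest-neighbour sampling
    sy = np.round(sy / sw).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    flat = np.zeros(h_out * w_out, dtype=img.dtype)
    flat[valid] = img[sy[valid], sx[valid]]
    return flat.reshape(out_shape)
```

With both views warped onto one reference canvas this way, their overlap is exactly the common region to which formulas (1) and (2) are applied in step 107.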
Please refer to Figure 3, the schematic diagram of the system of the present invention, which comprises an input unit, a feature processing unit, a computational analysis unit, a projective transformation unit, and a video fusion unit connected in sequence:
the input unit is used for inputting video images;
the feature processing unit is used for performing feature extraction and matching on the video images;
the computational analysis unit is used for applying RANSAC purification to the extracted matched feature points and calculating the projective transformation matrix, and for judging whether the current two video streams share a common region; if there is no common region, the camera positions are adjusted so that the two streams overlap;
the projective transformation unit is used for projectively transforming the current video images into the reference coordinate system through the projective transformation matrix;
the video fusion unit is used for calculating and generating the panoramic video image to realize panoramic video fusion.
The feature processing unit specifically comprises: an extraction unit for extracting feature vectors invariant to rotation, scale, and brightness changes from the video images;
and a matching unit for matching according to the feature vectors.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (8)
1. A dual-camera real-time panoramic video fusion method for intelligent transportation, characterized by comprising the following steps:
a: inputting video images, and performing feature extraction and matching on the video images;
b: applying RANSAC purification to the extracted matched feature point pairs, and calculating a projective transformation matrix;
c: projectively transforming the fields of view into a reference coordinate system through the projective transformation matrix;
d: calculating and generating a panoramic video image to realize panoramic video fusion.
2. The dual-camera real-time panoramic video fusion method for intelligent transportation of claim 1, characterized in that step b also comprises judging whether the current two video streams share a common region; if so, proceeding to step c; otherwise, adjusting the camera positions so that the two video streams overlap.
3. The dual-camera real-time panoramic video fusion method for intelligent transportation of claim 2, characterized in that step d specifically comprises:
for each pixel in the common region, calculating its weight coefficient in each field of view according to the weight coefficient formula;
then taking the weighted sum to obtain the true pixel value in the panoramic image according to the panorama fusion formula:
I(x,y) = ω_A(x,y)*I_A(x,y) + ω_B(x,y)*I_B(x,y).
4. The dual-camera real-time panoramic video fusion method for intelligent transportation of claim 1, characterized in that step a specifically comprises:
extracting feature vectors invariant to rotation, scale, and brightness changes from the video images;
matching according to the feature vectors.
5. The dual-camera real-time panoramic video fusion method for intelligent transportation of claim 4, characterized in that the feature vector extraction comprises the following steps:
S1: detecting scale-space extrema;
S2: precisely locating feature point positions and feature point descriptors;
S3: generating the feature vectors.
6. A dual-camera real-time panoramic video fusion system for intelligent transportation, characterized by comprising:
an input unit for inputting video images;
a feature processing unit for performing feature extraction and matching on the video images;
a computational analysis unit for applying RANSAC purification to the extracted matched feature points and calculating a projective transformation matrix;
a projective transformation unit for projectively transforming the fields of view into a reference coordinate system through the projective transformation matrix;
a video fusion unit for calculating and generating a panoramic video image to realize panoramic video fusion.
7. The dual-camera real-time panoramic video fusion system for intelligent transportation of claim 6, characterized in that the computational analysis unit is also used to judge whether the current two video streams share a common region; if there is no common region, the camera positions are adjusted so that the two video streams overlap.
8. The dual-camera real-time panoramic video fusion system for intelligent transportation of claim 6, characterized in that the feature processing unit specifically comprises:
an extraction unit for extracting feature vectors invariant to rotation, scale, and brightness changes from the video images;
a matching unit for matching according to the feature vectors.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011102504432A CN102402855A (en) | 2011-08-29 | 2011-08-29 | Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102402855A true CN102402855A (en) | 2012-04-04 |
Family
ID=45885025
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011102504432A Pending CN102402855A (en) | 2011-08-29 | 2011-08-29 | Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102402855A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101442619A (en) * | 2008-12-25 | 2009-05-27 | 武汉大学 | Method for splicing non-control point image |
CN101556692A (en) * | 2008-04-09 | 2009-10-14 | 西安盛泽电子有限公司 | Image mosaic method based on neighborhood Zernike pseudo-matrix of characteristic points |
KR100951309B1 (en) * | 2008-07-14 | 2010-04-05 | 성균관대학교산학협력단 | New Calibration Method of Multi-view Camera for a Optical Motion Capture System |
CN101877140A (en) * | 2009-12-18 | 2010-11-03 | 北京邮电大学 | Panorama-based panoramic virtual tour method |
CN101950426A (en) * | 2010-09-29 | 2011-01-19 | 北京航空航天大学 | Vehicle relay tracking method in multi-camera scene |
US8018999B2 (en) * | 2005-12-05 | 2011-09-13 | Arcsoft, Inc. | Algorithm description on non-motion blur image generation project |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101984463A (en) * | 2010-11-02 | 2011-03-09 | 中兴通讯股份有限公司 | Method and device for synthesizing panoramic image |
CN102778980A (en) * | 2012-07-05 | 2012-11-14 | 中国电子科技集团公司第二十八研究所 | Fusion and interaction system for extra-large-breadth display contact |
CN102799375A (en) * | 2012-07-05 | 2012-11-28 | 中国电子科技集团公司第二十八研究所 | Image processing method for extra-large-format displayed contact fusion interaction system |
CN102778980B (en) * | 2012-07-05 | 2015-07-08 | 中国电子科技集团公司第二十八研究所 | Fusion and interaction system for extra-large-breadth display contact |
CN102799375B (en) * | 2012-07-05 | 2015-08-19 | 中国电子科技集团公司第二十八研究所 | A kind of extra-large-breadth display contact merges the image processing method of interactive system |
CN103150715A (en) * | 2013-03-13 | 2013-06-12 | 腾讯科技(深圳)有限公司 | Image stitching processing method and device |
CN104077913A (en) * | 2013-03-27 | 2014-10-01 | 上海市城市建设设计研究总院 | Multi-view image information-fused traffic accident monitoring method and device |
CN103632626A (en) * | 2013-12-03 | 2014-03-12 | 四川省计算机研究院 | Intelligent tour guide realizing method and intelligent tour guide device based on mobile network and mobile client |
CN103632626B (en) * | 2013-12-03 | 2016-06-29 | 四川省计算机研究院 | A kind of intelligent guide implementation method based on mobile Internet, device and mobile client |
CN105243655B (en) * | 2014-05-16 | 2018-09-14 | 通用汽车环球科技运作有限责任公司 | The dynamic system and method for vehicle are estimated using the characteristic point in image |
CN105243655A (en) * | 2014-05-16 | 2016-01-13 | 通用汽车环球科技运作有限责任公司 | System and method for estimating vehicle dynamics using feature points in images from multiple cameras |
CN106204464A (en) * | 2015-05-28 | 2016-12-07 | 卡西欧计算机株式会社 | Image processing apparatus and image processing method |
CN106204464B (en) * | 2015-05-28 | 2019-05-03 | 卡西欧计算机株式会社 | Image processing apparatus, image processing method and recording medium |
CN106851130A (en) * | 2016-12-13 | 2017-06-13 | 北京搜狐新媒体信息技术有限公司 | A kind of video-splicing method and device |
CN108734655A (en) * | 2017-04-14 | 2018-11-02 | 中国科学院苏州纳米技术与纳米仿生研究所 | The method and system that aerial multinode is investigated in real time |
CN108734655B (en) * | 2017-04-14 | 2021-11-30 | 中国科学院苏州纳米技术与纳米仿生研究所 | Method and system for detecting multiple nodes in air in real time |
CN110719405A (en) * | 2019-10-15 | 2020-01-21 | 成都大学 | Multi-camera panoramic image stitching method based on binocular ranging, storage medium and terminal |
CN110719405B (en) * | 2019-10-15 | 2021-02-26 | 成都大学 | Multi-camera panoramic image stitching method based on binocular ranging, storage medium and terminal |
CN111105351A (en) * | 2019-12-13 | 2020-05-05 | 华中科技大学鄂州工业技术研究院 | Video sequence image splicing method and device |
CN111105351B (en) * | 2019-12-13 | 2023-04-18 | 华中科技大学鄂州工业技术研究院 | Video sequence image splicing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102402855A (en) | Method and system of fusing real-time panoramic videos of double cameras for intelligent traffic | |
CN111968129B (en) | Instant positioning and map construction system and method with semantic perception | |
US11468585B2 (en) | Pseudo RGB-D for self-improving monocular slam and depth prediction | |
JP6031554B2 (en) | Obstacle detection method and apparatus based on monocular camera | |
CN103325112B (en) | Moving target method for quick in dynamic scene | |
WO2019029099A1 (en) | Image gradient combined optimization-based binocular visual sense mileage calculating method | |
US20150220791A1 (en) | Automatic training of a parked vehicle detector for large deployment | |
Liao et al. | Model-free distortion rectification framework bridged by distortion distribution map | |
CN105005964A (en) | Video sequence image based method for rapidly generating panorama of geographic scene | |
CN105894443A (en) | Method for splicing videos in real time based on SURF (Speeded UP Robust Features) algorithm | |
CN103903222A (en) | Three-dimensional sensing method and three-dimensional sensing device | |
CN104506800A (en) | Scene synthesis and comprehensive monitoring method and device for electronic police cameras in multiple directions | |
CN111008660A (en) | Semantic map generation method, device and system, storage medium and electronic equipment | |
CN105488777A (en) | System and method for generating panoramic picture in real time based on moving foreground | |
CN103700082B (en) | Image split-joint method based on dual quaterion relative orientation | |
Yang et al. | Unsupervised fisheye image correction through bidirectional loss with geometric prior | |
CN105335988A (en) | Hierarchical processing based sub-pixel center extraction method | |
Liu et al. | Uniseg: A unified multi-modal lidar segmentation network and the openpcseg codebase | |
Gao et al. | Sparse dense fusion for 3d object detection | |
Wong et al. | Vision-based vehicle localization using a visual street map with embedded SURF scale | |
CN103903269B (en) | The description method and system of ball machine monitor video | |
Mao et al. | Can we cover navigational perception needs of the visually impaired by panoptic segmentation? | |
Unger et al. | Multi-camera bird’s eye view perception for autonomous driving | |
CN102131078B (en) | Video image correcting method and system | |
CN114554158A (en) | Panoramic video stitching method and system based on road traffic scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20120404 |