CN105303518A - Region feature based video inter-frame splicing method - Google Patents

Region feature based video inter-frame splicing method

Info

Publication number
CN105303518A
CN105303518A (application CN201410261394.6A)
Authority
CN
China
Prior art keywords
image
pixel
formula
sin
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410261394.6A
Other languages
Chinese (zh)
Inventor
顾国华
韩鲁
刘恒建
余明
孙爱娟
任侃
钱惟贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201410261394.6A priority Critical patent/CN105303518A/en
Publication of CN105303518A publication Critical patent/CN105303518A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention discloses a region-feature-based video inter-frame splicing method. The method uses a camera and an inertial sensor to acquire the three-dimensional information of the target scene image and the motion parameters of the camera, from which the global optical flow of the image is estimated; local fitting of the global optical flow is then used to extract the local feature regions that conform to the global motion of the image; within these local feature regions, a modified FAST feature extraction operator is used to accurately estimate the global motion transformation matrix of the image, and an image fusion technique completes the inter-frame splicing. On the premise of ensuring precision, the method reduces the amount of computation, improves the real-time performance of the algorithm, and at the same time reduces the effect of environmental change on the image splicing.

Description

Region-feature-based video inter-frame splicing method
Technical field
The invention belongs to the technical field of image processing, and specifically relates to a region-feature-based video inter-frame splicing method.
Background art
With the widespread use of unmanned aerial vehicles (UAVs) in fields such as military reconnaissance, disaster relief, and remote sensing telemetry, the image mosaic technology for aerial video has attracted the attention of scholars and experts at home and abroad. Using image mosaic technology to stitch the video sequence acquired by a UAV into a single panorama covering the full field of view is conducive to accurate localization, detection, and tracking of targets. The core and focus of image mosaic is image registration: the precision and computational load of image registration directly determine the precision and real-time performance of the mosaic. In aerial video, the global motion of the image usually refers to the motion of the image background caused by the aircraft and the airborne camera; moving targets are usually small, so the global motion of the image dominates.
Existing aerial-video splicing methods fall mainly into two classes: direct methods and feature-based methods. A representative direct method is that of Attiato, which realizes image registration by estimating the global motion of the image with least squares on top of local block motion estimation; the method achieves good results, but it easily fails when the local differences between images are small, and it is strongly affected by the environment. By contrast, feature-based methods are more widely used. For example, Brown and Lowe adopt SIFT to register different images and apply it to image mosaic in the offline state; the method has high precision, but the algorithm is time-consuming and its real-time performance is poor. Steedly adopts MOPS (Multi-Scale Oriented Patches) to improve the computational efficiency of inter-frame registration in video splicing, but its real-time performance still cannot meet the requirement of fast splicing of UAV aerial video.
Summary of the invention
The present invention proposes a region-feature-based video inter-frame splicing method which, on the premise of ensuring precision, reduces the amount of computation, improves the real-time performance of the algorithm, and at the same time reduces the effect of environmental change on image splicing.
In order to solve the above technical problem, the invention provides a region-feature-based video inter-frame splicing method comprising the following steps:
Step 1: synchronously control the camera and the inertial sensor, use the camera to obtain the depth information of each pixel in the image to be spliced, use the inertial sensor to obtain the motion parameters of the camera, and compute the global optical flow of the image to be spliced;
Step 2: in the image to be spliced, use local optical-flow fitting to extract the local feature regions that conform to the global motion of the image;
Step 3: in the local feature regions, use the FAST feature extraction operator to extract the FAST feature points of the image;
Step 4: use the SAD operator to register the FAST feature points of the reference image and of the image to be spliced, and compute the global motion projective transformation matrix of the image to be spliced from the FAST feature points in combination with the RANSAC algorithm;
Step 5: use the fade-in fade-out weighted average algorithm to complete the seamless splicing of the image to be spliced and the reference image.
Compared with the prior art, the remarkable advantages of the present invention are: (1) in practical applications the aerial scene is relatively complex, and the uncertainty of regional motion estimation and the interference of moving targets reduce the estimation precision of the global motion; the present invention proposes a fast registration algorithm in which the image global motion is estimated from region features, improving precision; (2) the present invention computes the image global optical flow on a hardware platform composed of an inertial sensor and a Kinect camera, which not only reduces the amount of computation and improves real-time performance, but also eliminates the influence of the environment on the optical-flow computation, giving higher environmental robustness; (3) by extracting local feature regions from the whole image and performing image registration only within them, the present invention significantly reduces the amount of computation and the computing time of image mosaic; (4) the present invention adopts an improved FAST extraction operator in place of the classical SIFT operator for feature computation, which further reduces the computational complexity of the algorithm.
Brief description of the drawings
Fig. 1 is a flow chart of the region-feature-based video inter-frame splicing method of the present invention;
Fig. 2 is a schematic diagram of the FAST feature extraction operator used by the present invention;
Fig. 3 is a schematic diagram of the fade-in fade-out algorithm used by the present invention.
Embodiment
As shown in Fig. 1, the region-feature-based video inter-frame splicing method of the present invention is implemented as follows:
Step 1: synchronously control the camera and the inertial sensor, use the camera to obtain the depth information of each pixel in the image to be spliced, use the inertial sensor to obtain the motion parameters of the camera, and compute the global optical flow of the image to be spliced;
Step 2: in the image to be spliced, use local optical-flow fitting to extract the local feature regions that conform to the global motion of the image;
Step 3: in the local feature regions, use the FAST feature extraction operator to extract the FAST feature points of the image;
Step 4: use the SAD operator to register the FAST feature points of the reference image and of the image to be spliced, and compute the global motion projective transformation matrix of the image to be spliced from the FAST feature points in combination with the RANSAC algorithm;
Step 5: use the fade-in fade-out weighted average algorithm to complete the seamless splicing of the image to be spliced and the reference image.
Step 1 is implemented as follows:
Step 1.1: synchronously control the inertial sensor and the Kinect camera. The camera may be a Kinect camera, which contains both a depth-image CCD and a color-image CCD and can therefore acquire the color image and the depth image simultaneously; the inertial sensor comprises a gyroscope and an accelerometer.
The inertial sensor is used to obtain the motion Euler angles [α, β, γ] and the three-axis displacement [t_x, t_y, t_z] of the camera, and the motion extrinsic matrix [R, T] of the camera is computed from the camera's initial parameters, where α is the nutation angle, β is the precession angle, γ is the rotation angle, and t_x, t_y, t_z are the displacements in the x, y and z directions; R is the rotation matrix of the motion extrinsic matrix and T is the translation matrix of the motion extrinsic matrix. The rotation matrix R and the translation matrix T are computed as shown in formulas (1) and (2):
R = \begin{bmatrix} \cos\gamma\cos\beta & \cos\gamma\sin\beta\sin\alpha - \sin\gamma\cos\alpha & \cos\gamma\sin\beta\cos\alpha + \sin\gamma\sin\alpha \\ \sin\gamma\cos\beta & \sin\gamma\sin\beta\sin\alpha + \cos\gamma\cos\alpha & \sin\gamma\sin\beta\cos\alpha - \cos\gamma\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix} \quad (1)
T = [t_x, t_y, t_z]' \quad (2)
In formula (2), T is the transpose of the vector [t_x, t_y, t_z].
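For readers, a minimal numpy sketch (not part of the patent) of how formulas (1) and (2) can be assembled from the measured Euler angles and displacements is given below; the function name, argument order, and the Z-Y-X composition of the elementary rotations are illustrative assumptions.

```python
import numpy as np

def extrinsics_from_imu(alpha, beta, gamma, t):
    """Build the rotation matrix R of formula (1) and the translation T of formula (2)
    from the Euler angles [alpha, beta, gamma] and the displacement t = [tx, ty, tz].
    A sketch assuming a Z-Y-X (gamma, beta, alpha) composition of elementary rotations."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    R = np.array([
        [cg * cb, cg * sb * sa - sg * ca, cg * sb * ca + sg * sa],
        [sg * cb, sg * sb * sa + cg * ca, sg * sb * ca - cg * sa],
        [-sb,     cb * sa,                cb * ca],
    ])
    T = np.asarray(t, dtype=float).reshape(3, 1)  # column vector [tx, ty, tz]'
    return R, T
```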
The camera is used to obtain the depth information u of each pixel in the image to be spliced.
Step 1.2: use the optical-flow formulas to compute the global optical flow d(u, v) of the image to be spliced, where the optical-flow formulas are shown in formulas (3) and (4):
\frac{u_2}{u_1} m_2 = K R K^{-1} m_1 + \frac{1}{u_1} K T \quad (3)
d(u, v) = m_2 - m_1 \quad (4)
Here formula (3) computes, from the pixel coordinate m_1 in the reference image, the corresponding pixel coordinate m_2 in the image to be spliced, and formula (4) computes the global optical flow d(u, v) of pixel coordinate m_1, where u is the horizontal component of the global optical flow of m_1, v is the vertical component, u_1 and u_2 are the depth values of m_1 and m_2 respectively, and K is the intrinsic matrix of the camera.
The specific implementation of step 1 may refer to the patent application with application number 201410027349.4, entitled "Image registration method based on inertial sensor and Kinect camera".
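As a reading aid (not part of the patent), formulas (3) and (4) can be sketched in a few lines of numpy; the function name and the exact shapes of the arguments are illustrative assumptions, with m1 a pixel coordinate, u1 its depth, K the intrinsic matrix, and R, T the extrinsics of formulas (1) and (2).

```python
import numpy as np

def global_optical_flow(m1, u1, K, R, T):
    """Sketch of formulas (3)-(4): warp a reference pixel m1 = (x, y) with depth u1
    into the image to be spliced and return the global optical flow d = m2 - m1."""
    m1_h = np.array([m1[0], m1[1], 1.0])                          # homogeneous pixel coordinate
    rhs = K @ R @ np.linalg.inv(K) @ m1_h + (K @ T).ravel() / u1  # right-hand side of (3)
    m2 = rhs[:2] / rhs[2]                                         # divide out the scale u2/u1
    return m2 - np.asarray(m1, dtype=float)                       # d(u, v) per formula (4)
```

Applying this to every pixel of the reference image yields the dense global optical flow used in step 2.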
Step 2 is implemented as follows:
Step 2.1: divide the image to be spliced into multiple block regions of size w × w, where w is the side length of a block region in pixels; in each block region, use the global optical flow d(u, v) together with the four-point method to estimate the local affine transformation of the block region, the local affine transformation being as shown in formula (5):
\begin{cases} u = a_1 x + a_2 y + a_3 \\ v = a_4 x + a_5 y + a_6 \end{cases} \quad (5)
In formula (5), (x, y) is a pixel coordinate of the reference image; the parameters a_1, a_2, a_3, a_4, a_5 and a_6 are used as the parameter factors to construct the local projective transformation matrix H_block of the block region, as shown in formula (6):
H_{block} = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix} \quad (6)
Step 2.2: substitute all the pixels of the block region into the local projective transformation matrix H_block to compute the estimated global optical flow d'(u', v'), as shown in formula (7):
\begin{cases} u' = a_1 x + a_2 y + a_3 \\ v' = a_4 x + a_5 y + a_6 \end{cases} \quad (7)
Step 2.3: compute the fitting error σ from the estimated global optical flow d'(u', v') and the measured global optical flow d(u, v), as shown in formula (8):
\sigma = \frac{1}{w \times w} \sum_{[u, v] \in w \times w} \left| d(u, v) - d'(u', v') \right|^2 \quad (8)
Step 2.4: when the fitting error σ of a block region is less than the preset threshold δ, keep the block region; otherwise, discard it. Finally, take all the retained block regions as the local feature regions for image mosaic.
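The block-wise fitting of steps 2.1-2.4 can be sketched as follows (not part of the patent); for simplicity the affine parameters are fitted by least squares over the whole block instead of the four-point method, and the function and variable names are illustrative assumptions.

```python
import numpy as np

def select_local_feature_regions(flow, w, delta):
    """Split a dense global optical flow field (H x W x 2) into w x w block regions,
    fit u = a1*x + a2*y + a3, v = a4*x + a5*y + a6 per block (formula (5)), evaluate the
    fitting error sigma (formula (8)), and keep the blocks with sigma below delta."""
    H, W, _ = flow.shape
    regions = []
    for by in range(0, H - w + 1, w):
        for bx in range(0, W - w + 1, w):
            ys, xs = np.mgrid[by:by + w, bx:bx + w]
            A = np.stack([xs.ravel(), ys.ravel(), np.ones(w * w)], axis=1)
            d = flow[by:by + w, bx:bx + w].reshape(-1, 2)        # measured d(u, v)
            coeffs, *_ = np.linalg.lstsq(A, d, rcond=None)       # a1..a6 of formula (5)
            d_est = A @ coeffs                                   # d'(u', v') of formula (7)
            sigma = np.mean(np.sum((d - d_est) ** 2, axis=1))    # formula (8)
            if sigma < delta:                                    # step 2.4: keep or discard
                regions.append((bx, by, w, w))
    return regions
```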
The present invention improves the FAST feature extraction operator used in step 3, implemented as follows:
In the local feature regions, in order to obtain relatively uniformly distributed FAST feature points, the present invention scans the local feature regions with step size D, i.e. in the row and column directions a pixel is chosen as the center pixel every D pixels, and the judgement and extraction of FAST feature points is carried out there. The judgement and extraction of a FAST feature point is shown in Fig. 2: with the scanned pixel as the center pixel, the 16 surrounding pixels are selected as the pixels to be compared, where the radius in the row and column directions is set to two pixels and the radius along the diagonals is set to one pixel; the gray differences between the center pixel and the 16 surrounding pixels are computed, and if among these 16 differences the number of positive differences or the number of negative differences is greater than the preset threshold T (usually 12), the center pixel is considered a FAST feature point.
In order to exclude the influence of edge points and further reduce the amount of computation, the present invention first computes the gray differences between the center pixel and point 1, point 9, point 5 and point 13 of Fig. 2, i.e. the four vertex pixels in the row and column directions. If among these four differences neither the number of positive differences nor the number of negative differences reaches 3, the center pixel is judged not to be a FAST feature point, and the gray differences between the center pixel and the other pixels need not be computed; if the number of positive differences or the number of negative differences is greater than or equal to 3, the gray differences between the center pixel and the other 12 pixels are computed and compared against the preset threshold T, thereby confirming whether the point is a FAST feature point.
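A minimal sketch of this modified FAST test (not part of the patent) is given below. The exact layout of the 16 comparison pixels comes from Fig. 2, which is not reproduced in this text, so the ring offsets used here, with radius 2 on the row/column axes and radius 1 toward the diagonals, are an assumption, as are the function and variable names.

```python
import numpy as np

# Assumed 16-pixel ring around the center (dy, dx offsets), following the description:
# two-pixel radius in the row/column directions, one-pixel radius along the diagonals.
RING = [(0, 2), (1, 2), (1, 1), (2, 1), (2, 0), (2, -1), (1, -1), (1, -2),
        (0, -2), (-1, -2), (-1, -1), (-2, -1), (-2, 0), (-2, 1), (-1, 1), (-1, 2)]
VERTICES = [RING[0], RING[4], RING[8], RING[12]]   # points 1, 5, 9, 13 on the axes

def is_fast_feature(img, cy, cx, T=12):
    """Sign-count FAST test of step 3: pre-test on the four vertex pixels, then the full
    comparison of the center pixel against its 16 neighbours with count threshold T."""
    center = int(img[cy, cx])
    diffs4 = [int(img[cy + dy, cx + dx]) - center for dy, dx in VERTICES]
    if sum(d > 0 for d in diffs4) < 3 and sum(d < 0 for d in diffs4) < 3:
        return False                                   # pre-test: reject early, skip the rest
    diffs16 = [int(img[cy + dy, cx + dx]) - center for dy, dx in RING]
    pos = sum(d > 0 for d in diffs16)
    neg = sum(d < 0 for d in diffs16)
    return pos > T or neg > T                          # count threshold T, usually 12
```

Scanning the local feature regions with step size D and calling this test at each chosen center pixel yields the FAST feature points used in step 4.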
Step 4 is implemented as follows:
Step 4.1: use the SAD operator to complete the registration of the FAST feature points between the two adjacent frames, with Euclidean distance as the criterion of image registration;
Step 4.2: compute the image global projection matrix H from the FAST feature point pairs in combination with the RANSAC algorithm, the RANSAC procedure being as follows:
From the FAST feature point pairs, arbitrarily extract four pairs of feature points and solve for the global projection matrix H with the four-point method; then substitute the remaining feature point pairs into formula (9) to obtain the feature points [x'_q, y'_q] of the following frame estimated through the global projection matrix H. If an estimated feature point [x'_q, y'_q] satisfies formula (10), the feature point pair is set as an inlier, i.e. a point pair consistent with the global projection matrix H; if it does not satisfy formula (10), the feature point pair is set as an outlier, i.e. a point pair inconsistent with the global projection matrix H.
[x'_q, y'_q] = H \cdot [x_p, y_p] \quad (9)
In formula (9), [x_p, y_p] and [x_q, y_q] are the feature point coordinates in the reference image and in the image to be registered respectively.
\left| [x'_q, y'_q] - [x_q, y_q] \right| \le T_{ran} \quad (10)
In formula (10), [x'_q, y'_q] is the estimated value of the feature point of the image to be registered, and T_ran is the threshold used to measure the deviation between the estimate [x'_q, y'_q] and the actual coordinate [x_q, y_q] of the feature point of the image to be registered; it is generally set to [0.1, 0.1].
After the computation is finished, count the number Num of inlier feature point pairs and decide whether to output the optimal model of the global affine matrix H, specifically: if Num ≥ M, where M is the inlier-count threshold, terminate the RANSAC algorithm and take the current global affine matrix H as the optimal model; if Num < M and the number of iterations of the RANSAC algorithm is still within the preset threshold N_RAN, select four point pairs from the inliers again and repeat step 4 to compute a new global affine matrix H satisfying formula (10); otherwise, take the current global affine matrix H as the optimal model.
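Step 4.2 amounts to a RANSAC loop over the matched FAST feature points; the sketch below (not part of the patent) uses OpenCV's four-point solver, and the default values of t_ran, M and n_ran, which stand in for T_ran, M and N_RAN, are illustrative assumptions.

```python
import numpy as np
import cv2

def apply_homography(H, pts):
    """Apply a 3x3 projective matrix to Nx2 points (formula (9) with perspective divide)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]

def ransac_projection(pts_p, pts_q, t_ran=0.1, M=None, n_ran=500):
    """Estimate the global projection matrix H from matched points pts_p (reference image)
    and pts_q (image to be spliced), keeping the model with the most inliers."""
    pts_p = np.asarray(pts_p, dtype=np.float32)
    pts_q = np.asarray(pts_q, dtype=np.float32)
    n = len(pts_p)
    M = M if M is not None else int(0.6 * n)                      # assumed inlier threshold
    best_H, best_count = None, -1
    for _ in range(n_ran):                                        # at most N_RAN iterations
        idx = np.random.choice(n, 4, replace=False)
        H = cv2.getPerspectiveTransform(pts_p[idx], pts_q[idx])   # four-point method
        proj = apply_homography(H, pts_p)                         # formula (9)
        inliers = np.all(np.abs(proj - pts_q) <= t_ran, axis=1)   # formula (10)
        count = int(inliers.sum())
        if count > best_count:
            best_H, best_count = H, count
        if count >= M:                                            # Num >= M: accept the model
            break
    return best_H
```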
The present invention makes a partial improvement to the fade-in fade-out weighted average algorithm of step 5, implemented as follows:
Suppose m_1(x, y) and m_2(x, y) are the pixel gray values of the preceding reference frame and of the current frame to be spliced respectively, and m(x, y) is the pixel gray value after fusion; the fade-in fade-out weighted average algorithm is then computed as shown in formula (11):
m(x, y) = \begin{cases} m_1(x, y) & (x, y) \in m_1 \\ k_1 m_1(x, y) + k_2 m_2(x, y) & (x, y) \in (m_1 \cap m_2) \\ m_2(x, y) & (x, y) \in m_2 \end{cases} \quad (11)
where k_1 and k_2 are weights satisfying k_1 + k_2 = 1. In order to make the overlapping region of the images transition more smoothly, the present invention improves the values of k_1 and k_2, as shown in formula (12):
k_1 = \frac{m_2}{m_1 + m_2}, \quad k_2 = \frac{m_1}{m_1 + m_2} \quad (12)
Combining formula (12), and as shown in Fig. 3, pixel values are assigned by proportional normalization according to the distance of each pixel in the overlapping region from the reference image and from the image to be spliced, so that the pixel values of the overlapping region transition smoothly and the seamless splicing of the images is completed.
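A minimal sketch of this fade-in fade-out blending (not part of the patent) follows; it assumes grayscale frames already warped into a common mosaic coordinate system and an overlap that is simply the vertical band of columns [x_left, x_right), which is a simplification of the distance-based normalization described above.

```python
import numpy as np

def feather_blend(ref, cur, x_left, x_right):
    """Blend two aligned grayscale frames per formula (11): reference pixels to the left of
    the overlap, current pixels to the right, and a distance-proportional weighted average
    (k1 + k2 = 1, k1 falling from 1 to 0 across the band) inside the overlap."""
    ref = ref.astype(np.float32)
    cur = cur.astype(np.float32)
    out = np.empty_like(ref)
    out[:, :x_left] = ref[:, :x_left]                    # (x, y) in m1 only
    out[:, x_right:] = cur[:, x_right:]                  # (x, y) in m2 only
    width = max(x_right - x_left, 1)
    k1 = 1.0 - (np.arange(width, dtype=np.float32) + 0.5) / width   # weight of the reference frame
    k2 = 1.0 - k1                                        # weight of the current frame
    out[:, x_left:x_right] = k1 * ref[:, x_left:x_right] + k2 * cur[:, x_left:x_right]
    return np.clip(out, 0, 255).astype(np.uint8)
```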

Claims (7)

1. A region-feature-based video inter-frame splicing method, characterized in that it comprises the following steps:
Step 1: synchronously control the camera and the inertial sensor, use the camera to obtain the depth information of each pixel in the image to be spliced, use the inertial sensor to obtain the motion parameters of the camera, and compute the global optical flow of the image to be spliced;
Step 2: in the image to be spliced, use local optical-flow fitting to extract the local feature regions that conform to the global motion of the image;
Step 3: in the local feature regions, use the FAST feature extraction operator to extract the FAST feature points of the image;
Step 4: use the SAD operator to register the FAST feature points of the reference image and of the image to be spliced, and compute the global motion projective transformation matrix of the image to be spliced from the FAST feature points in combination with the RANSAC algorithm;
Step 5: use the fade-in fade-out weighted average algorithm to complete the seamless splicing of the image to be spliced and the reference image.
2. The region-feature-based video inter-frame splicing method as claimed in claim 1, characterized in that, in step 1,
the inertial sensor is used to obtain the motion Euler angles [α, β, γ] and the three-axis displacement [t_x, t_y, t_z] of the camera, and the motion extrinsic matrix [R, T] of the camera is computed from the camera's initial parameters, where α is the nutation angle, β is the precession angle, γ is the rotation angle, and t_x, t_y, t_z are the displacements in the x, y and z directions; R is the rotation matrix of the motion extrinsic matrix and T is the translation matrix of the motion extrinsic matrix, the rotation matrix R and the translation matrix T being computed as shown in formulas (1) and (2):
R = \begin{bmatrix} \cos\gamma\cos\beta & \cos\gamma\sin\beta\sin\alpha - \sin\gamma\cos\alpha & \cos\gamma\sin\beta\cos\alpha + \sin\gamma\sin\alpha \\ \sin\gamma\cos\beta & \sin\gamma\sin\beta\sin\alpha + \cos\gamma\cos\alpha & \sin\gamma\sin\beta\cos\alpha - \cos\gamma\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix} \quad (1)
T = [t_x, t_y, t_z]' \quad (2)
In formula (2), T is the transpose of the vector [t_x, t_y, t_z].
The global optical flow of the image to be spliced is computed as shown in formulas (3) and (4):
\frac{u_2}{u_1} m_2 = K R K^{-1} m_1 + \frac{1}{u_1} K T \quad (3)
d(u, v) = m_2 - m_1 \quad (4)
Here formula (3) computes, from the pixel coordinate m_1 in the reference image, the corresponding pixel coordinate m_2 in the image to be spliced, and formula (4) computes the global optical flow d(u, v) of pixel coordinate m_1, where u is the horizontal component of the global optical flow of m_1, v is the vertical component, u_1 and u_2 are the depth values of m_1 and m_2 respectively, and K is the intrinsic matrix of the camera.
3. The region-feature-based video inter-frame splicing method as claimed in claim 1, characterized in that step 2 is implemented as follows:
Step 2.1: divide the image to be spliced into multiple block regions of size w × w, where w is the side length of a block region in pixels; in each block region, use the global optical flow d(u, v) together with the four-point method to estimate the local affine transformation of the block region, the local affine transformation being as shown in formula (5):
\begin{cases} u = a_1 x + a_2 y + a_3 \\ v = a_4 x + a_5 y + a_6 \end{cases} \quad (5)
In formula (5), (x, y) is a pixel coordinate of the reference image; the parameters a_1, a_2, a_3, a_4, a_5 and a_6 are used as the parameter factors to construct the local projective transformation matrix H_block of the block region, as shown in formula (6):
H_{block} = \begin{bmatrix} a_1 & a_2 & a_3 \\ a_4 & a_5 & a_6 \\ 0 & 0 & 1 \end{bmatrix} \quad (6)
Step 2.2: substitute all the pixels of the block region into the local projective transformation matrix H_block to compute the estimated global optical flow d'(u', v'), as shown in formula (7):
\begin{cases} u' = a_1 x + a_2 y + a_3 \\ v' = a_4 x + a_5 y + a_6 \end{cases} \quad (7)
Step 2.3: compute the fitting error σ from the estimated global optical flow d'(u', v') and the measured global optical flow d(u, v), as shown in formula (8):
\sigma = \frac{1}{w \times w} \sum_{[u, v] \in w \times w} \left| d(u, v) - d'(u', v') \right|^2 \quad (8)
Step 2.4: when the fitting error σ of a block region is less than the preset threshold δ, keep the block region; otherwise, discard it; finally, take all the retained block regions as the local feature regions for image mosaic.
4. The region-feature-based video inter-frame splicing method as claimed in claim 1, characterized in that step 3 is implemented as follows:
choosing a pixel every D pixels as the center pixel, scan within the local feature regions; with the scanned pixel as the center pixel, select the 16 surrounding pixels as the pixels to be compared, where the radius in the row and column directions is set to two pixels and the radius along the diagonals is set to one pixel; compute the gray differences between the center pixel and its 16 surrounding pixels, and if either the number of positive differences or the number of negative differences is greater than the preset threshold T, take the center pixel as a FAST feature point.
5. The region-feature-based video inter-frame splicing method as claimed in claim 4, characterized in that
the gray differences between the center pixel and the four vertex pixels in the row and column directions are computed first; if among these four differences neither the number of positive differences nor the number of negative differences reaches 3, the center pixel is judged not to be a FAST feature point, and the gray differences between the center pixel and the remaining 12 surrounding pixels are neither computed nor compared; if among these four differences the number of positive differences or the number of negative differences is greater than or equal to 3, the computation of the gray differences between the center pixel and the remaining 12 surrounding pixels proceeds.
6. The region-feature-based video inter-frame splicing method as claimed in claim 1, characterized in that step 4 is implemented as follows:
Step 4.1: use the SAD operator to complete the registration of the FAST feature points of the reference image and of the image to be registered, with Euclidean distance as the criterion of image registration;
Step 4.2: compute the image global projection matrix H from the FAST feature point pairs in combination with the RANSAC algorithm, the RANSAC procedure being as follows:
from the FAST feature point pairs, arbitrarily extract four pairs of feature points and solve for the global projection matrix H with the four-point method; then substitute the remaining feature point pairs into formula (9) to obtain the feature-point estimates [x'_q, y'_q] of the image to be spliced through the global projection matrix H; if an estimated feature point [x'_q, y'_q] satisfies formula (10), set the feature point pair as an inlier; if it does not satisfy formula (10), set the feature point pair as an outlier,
[x'_q, y'_q] = H \cdot [x_p, y_p] \quad (9)
In formula (9), [x_p, y_p] and [x_q, y_q] are the feature point coordinates in the reference image and in the image to be registered respectively,
\left| [x'_q, y'_q] - [x_q, y_q] \right| \le T_{ran} \quad (10)
In formula (10), [x'_q, y'_q] is the estimated value of the feature point of the image to be registered, and T_ran is the threshold used to measure the deviation between the estimate [x'_q, y'_q] and the actual coordinate [x_q, y_q] of the feature point of the image to be registered; it is generally set to [0.1, 0.1],
After the computation is finished, count the number Num of inlier feature point pairs; if Num ≥ M, where M is the inlier-count threshold, terminate the RANSAC algorithm and output the current global affine matrix H as the optimal model; if Num < M, further judge whether the number of iterations of the RANSAC algorithm is within the preset threshold N_RAN; if it is, select four point pairs from the inliers again and repeat step 4 to obtain a new global affine matrix H; otherwise, output the current global affine matrix H as the optimal model.
7. The region-feature-based video inter-frame splicing method as claimed in claim 1, characterized in that the fade-in fade-out weighted average algorithm in step 5 is computed as shown in formula (11):
m(x, y) = \begin{cases} m_1(x, y) & (x, y) \in m_1 \\ k_1 m_1(x, y) + k_2 m_2(x, y) & (x, y) \in (m_1 \cap m_2) \\ m_2(x, y) & (x, y) \in m_2 \end{cases} \quad (11)
where m_1(x, y) and m_2(x, y) are the pixel gray values of the preceding reference frame and of the current frame to be spliced respectively, m(x, y) is the pixel gray value after fusion, and k_1, k_2 are weights satisfying k_1 + k_2 = 1, the values of k_1 and k_2 being as shown in formula (12):
k_1 = \frac{m_2}{m_1 + m_2}, \quad k_2 = \frac{m_1}{m_1 + m_2} \quad (12)
CN201410261394.6A 2014-06-12 2014-06-12 Region feature based video inter-frame splicing method Pending CN105303518A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410261394.6A CN105303518A (en) 2014-06-12 2014-06-12 Region feature based video inter-frame splicing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410261394.6A CN105303518A (en) 2014-06-12 2014-06-12 Region feature based video inter-frame splicing method

Publications (1)

Publication Number Publication Date
CN105303518A true CN105303518A (en) 2016-02-03

Family

ID=55200745

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410261394.6A Pending CN105303518A (en) 2014-06-12 2014-06-12 Region feature based video inter-frame splicing method

Country Status (1)

Country Link
CN (1) CN105303518A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809640A (en) * 2016-03-09 2016-07-27 长春理工大学 Multi-sensor fusion low-illumination video image enhancement method
CN106204418A (en) * 2016-06-24 2016-12-07 南京睿悦信息技术有限公司 Image warping method based on matrix inversion operation in a kind of virtual reality mobile terminal
CN106534616A (en) * 2016-10-17 2017-03-22 北京理工大学珠海学院 Video image stabilization method and system based on feature matching and motion compensation
CN106991645A (en) * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image split-joint method and device
CN111583118A (en) * 2020-05-13 2020-08-25 创新奇智(北京)科技有限公司 Image splicing method and device, storage medium and electronic equipment
CN111639658A (en) * 2020-06-03 2020-09-08 北京维盛泰科科技有限公司 Method and device for detecting and eliminating dynamic characteristic points in image matching
CN113228102A (en) * 2019-01-09 2021-08-06 奥林巴斯株式会社 Image processing apparatus, image processing method, and image processing program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130286012A1 (en) * 2012-04-25 2013-10-31 University Of Southern California 3d body modeling from one or more depth cameras in the presence of articulated motion
CN103745449A (en) * 2013-12-24 2014-04-23 南京理工大学 Rapid and automatic mosaic technology of aerial video in search and tracking system
CN103745474A (en) * 2014-01-21 2014-04-23 南京理工大学 Image registration method based on inertial sensor and camera

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130286012A1 (en) * 2012-04-25 2013-10-31 University Of Southern California 3d body modeling from one or more depth cameras in the presence of articulated motion
CN103745449A (en) * 2013-12-24 2014-04-23 南京理工大学 Rapid and automatic mosaic technology of aerial video in search and tracking system
CN103745474A (en) * 2014-01-21 2014-04-23 南京理工大学 Image registration method based on inertial sensor and camera

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
席军强: "《车辆信息技术》", 31 December 2013 *
杨帆: "《数字图像处理及应用(MATLAB版)》", 30 September 2013 *
申浩等: "航拍视频帧间快速配准算法", 《航空学报》 *
袁梦笛: "基于特征点的红外图像拼接研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *
许东等: "一种基于光流拟和的航拍视频图像全局运动估算方法", 《航空学报》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809640A (en) * 2016-03-09 2016-07-27 长春理工大学 Multi-sensor fusion low-illumination video image enhancement method
CN105809640B (en) * 2016-03-09 2019-01-22 长春理工大学 Low illumination level video image enhancement based on Multi-sensor Fusion
CN106204418B (en) * 2016-06-24 2019-02-12 南京睿悦信息技术有限公司 Image warping method based on matrix inversion operation in a kind of virtual reality mobile terminal
CN106204418A (en) * 2016-06-24 2016-12-07 南京睿悦信息技术有限公司 Image warping method based on matrix inversion operation in a kind of virtual reality mobile terminal
CN106534616A (en) * 2016-10-17 2017-03-22 北京理工大学珠海学院 Video image stabilization method and system based on feature matching and motion compensation
CN106534616B (en) * 2016-10-17 2019-05-28 北京理工大学珠海学院 A kind of video image stabilization method and system based on characteristic matching and motion compensation
CN106991645A (en) * 2017-03-22 2017-07-28 腾讯科技(深圳)有限公司 Image split-joint method and device
CN106991645B (en) * 2017-03-22 2018-09-28 腾讯科技(深圳)有限公司 Image split-joint method and device
US10878537B2 (en) 2017-03-22 2020-12-29 Tencent Technology (Shenzhen) Company Limited Image splicing method, apparatus, terminal, and storage medium
CN113228102A (en) * 2019-01-09 2021-08-06 奥林巴斯株式会社 Image processing apparatus, image processing method, and image processing program
CN111583118A (en) * 2020-05-13 2020-08-25 创新奇智(北京)科技有限公司 Image splicing method and device, storage medium and electronic equipment
CN111583118B (en) * 2020-05-13 2023-09-29 创新奇智(北京)科技有限公司 Image stitching method and device, storage medium and electronic equipment
CN111639658A (en) * 2020-06-03 2020-09-08 北京维盛泰科科技有限公司 Method and device for detecting and eliminating dynamic characteristic points in image matching

Similar Documents

Publication Publication Date Title
Casser et al. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos
CN110349250B (en) RGBD camera-based three-dimensional reconstruction method for indoor dynamic scene
CN105303518A (en) Region feature based video inter-frame splicing method
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN106204595B (en) A kind of airdrome scene three-dimensional panorama monitoring method based on binocular camera
CN104463778B (en) A kind of Panoramagram generation method
CN111045017A (en) Method for constructing transformer substation map of inspection robot by fusing laser and vision
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN110689008A (en) Monocular image-oriented three-dimensional object detection method based on three-dimensional reconstruction
CN108242079A (en) A kind of VSLAM methods based on multiple features visual odometry and figure Optimized model
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN112801074B (en) Depth map estimation method based on traffic camera
Jia et al. Sensor fusion-based visual target tracking for autonomous vehicles with the out-of-sequence measurements solution
Wang et al. Unsupervised learning of monocular depth and ego-motion using multiple masks
CN103745474A (en) Image registration method based on inertial sensor and camera
CN108416798B (en) A kind of vehicle distances estimation method based on light stream
CN105160703A (en) Optical flow computation method using time domain visual sensor
Gurram et al. Monocular depth estimation by learning from heterogeneous datasets
CN114964276B (en) Dynamic vision SLAM method integrating inertial navigation
CN112907573B (en) Depth completion method based on 3D convolution
CN102609945A (en) Automatic registration method of visible light and thermal infrared image sequences
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN114677531B (en) Multi-mode information fusion method for detecting and positioning targets of unmanned surface vehicle
CN106780309A (en) A kind of diameter radar image joining method
Herau et al. MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal Calibration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160203