CN101626513A - Method and system for generating panoramic video - Google Patents

Method and system for generating panoramic video

Info

Publication number
CN101626513A
Authority
CN
Legal status
Pending
Application number
CN200910109043A
Other languages
Chinese (zh)
Inventor
Pei Jihong (裴继红)
Xie Weixin (谢维信)
He Qiaozhen (何巧珍)
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN200910109043A
Publication of CN101626513A
Legal status: Pending

Classifications

  • Image Processing (AREA)

Abstract

The invention belongs to the field of video image processing technology and provides a method and a system for generating panoramic video. The method comprises the following steps: capturing multiple channels of video from different viewpoints with multiple cameras; separating the background and the moving foreground of each video channel to obtain multiple channels of background video and multiple channels of moving-foreground video; computing a projective transformation matrix from the background video channels; generating a background panoramic video from the projective transformation matrix and the background video channels, and generating a foreground panoramic video from the projective transformation matrix and the moving-foreground video channels; and fusing the background panoramic video with the foreground panoramic video to generate the panoramic video. The invention automatically generates a panoramic dynamic video from the videos of multiple cameras with partially overlapping fields of view, solves the ghosting and double-image problems of moving targets in the overlap region of the panoramic field of view, and solves the problem of poor stability in the automatic computation of the cameras' projective transformation matrix.

Description

Method and system for generating panoramic video
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a method and system for generating panoramic video.
Background technology
Multiple-camera panoramic video refers to a large-field-of-view video, containing the content of every viewing angle, formed by stitching and fusing the video images obtained by two or more cameras configured at different viewing angles.
At present, multiple-camera panoramic video is generally generated in one of two ways:
In the first way, the cameras for the different viewing angles are placed at a single spatial viewpoint, and the videos of the cameras are stitched and fused into one panoramic video with a wide-angle, annular, or hemispherical field of view. Under the single-viewpoint mode, the scene and the moving targets in the fields of view of the different cameras differ very little in their spatial distances to each camera; that is, the same object in the overlapping fields of view of two cameras has a very small depth-of-field difference between the cameras, so the fields of view of the different cameras essentially satisfy the condition of one common affine transformation. In single-viewpoint multi-view technology, because the depth-of-field differences of the moving targets have little influence on the panoramic field of view, the technical difficulty of implementation is small, and fairly mature products already exist. The panoramic vision system Ladybug of the Canadian company Point Grey is a single-viewpoint panoramic vision product with six cameras. The Shenzhen research institute of Peking University and a security technology corporation have jointly developed a demonstration system similar to the panoramic vision product Ladybug of Point Grey.
In the second way, the camera positions differ: each viewing angle is at a different spatial viewpoint, and the videos of the cameras are stitched and fused into one large-field-of-view panoramic video. Because the spatial positions of the cameras differ, the fields of view of the different cameras do not satisfy a simple affine transformation condition. In the overlapping field of view of two cameras, the depths of field of the scenery and of the moving targets differ between the cameras; for moving targets in particular, whose form of motion is unconstrained, the depth of field changes ceaselessly with the motion. Under this condition, the ghosting and double-image problems of moving targets in the overlap region of the panoramic field of view are generally rather serious, and their solution is technically difficult. Moreover, in this class of panoramic video system, the implementation technique generally differs with the way the multiple cameras are laid out in space.
According to the literature, Texas A&M University has used multiple cameras to develop a real-time panoramic vision system for autonomous navigation; the National University of Singapore has developed a multiple-camera real-time panoramic video conferencing system; and Carnegie Mellon University has developed a panoramic conference video system consisting of four cameras. These systems and products are essentially presented in reports, function introductions, or demonstrations, without disclosing the concrete implementation technique.
All in all, in panoramic video technology based on multiple viewpoints and multiple viewing angles, the implementation of the panoramic video varies with the spatial layout of the cameras. In particular, in the overlap region of the fields of view of different cameras, the depth-of-field differences of the moving targets strongly influence the panoramic field of view, and implementation is technically difficult; many technological gaps remain in present multi-viewpoint multi-view panoramic video technology. When the prior art generates panoramic video from multiple cameras at multiple viewpoints, the moving targets in the overlap region of the panoramic field of view suffer from ghosting and double images.
Summary of the invention
The purpose of the embodiments of the invention is to provide a panoramic video generation method, intended to solve the ghosting and double-image problems of moving targets in the overlap region of the field of view of the panoramic video in existing panoramic video generation technology.
An embodiment of the invention is achieved as a panoramic video generation method comprising the following steps:
capturing multiple channels of video from different viewpoints with multiple cameras;
separating the background and the moving foreground in each video channel, obtaining multiple channels of background video and multiple channels of moving-foreground video;
computing a projective transformation matrix from the background video channels;
generating a background panoramic video from the projective transformation matrix and the background video channels, and generating a foreground panoramic video from the projective transformation matrix and the moving-foreground video channels;
fusing the background panoramic video and the foreground panoramic video, generating the panoramic video.
Another purpose of the embodiments of the invention is to provide a panoramic video generation system, comprising:
a multi-channel video capture unit, for capturing multiple channels of video from different viewpoints with multiple cameras;
a separation unit, for separating the background and the moving foreground in each video channel captured by the multi-channel video capture unit, obtaining multiple channels of background video and multiple channels of moving-foreground video;
a projective transformation matrix computation unit, for computing a projective transformation matrix from the background video channels obtained by the separation unit;
a background panoramic video generation unit, for generating a background panoramic video from the projective transformation matrix computed by the projective transformation matrix computation unit and the background video channels obtained by the separation unit;
a foreground panoramic video generation unit, for generating a foreground panoramic video from the projective transformation matrix computed by the projective transformation matrix computation unit and the moving-foreground video channels obtained by the separation unit;
a panoramic video generation unit, for fusing the background panoramic video generated by the background panoramic video generation unit and the foreground panoramic video generated by the foreground panoramic video generation unit, generating the panoramic video.
The embodiments of the invention decompose the video of each camera into background video data and moving-foreground video data, and use the two channels of background video data to automatically compute the projective transformation matrix between the fields of view of two cameras. The background video is projectively transformed, the transformed video is embedded into the panoramic field of view, and seamless fusion is applied in the overlap region of the fields of view; the moving-foreground video data is projectively transformed, the foreground moving targets in the overlap region are detected, and parallax correction and panoramic foreground fusion are applied to the targets; finally, the background panorama and the foreground panorama are fused. This realizes the automatic generation of a panoramic dynamic video from multiple cameras with overlapping fields of view, largely solves the ghosting and double-image problems of moving targets in the overlap region of the panoramic field of view, and solves the problem of poor stability in the automatic computation of the cameras' projective transformation matrix.
Description of drawings
Fig. 1 is a flow chart of the panoramic video generation method provided by an embodiment of the invention;
Fig. 2 is a flow chart of the computation of the projective transformation matrix provided by an embodiment of the invention;
Fig. 3 is a flow chart of the algorithm for computing the feature point sets provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of the 128-dimensional feature vector provided by an embodiment of the invention;
Fig. 5 is a schematic diagram of the triangle-area-weighted fusion coefficient computation provided by an embodiment of the invention;
Fig. 6 is a schematic diagram of the imaging parallax in the overlap region of the fields of view provided by an embodiment of the invention;
Fig. 7 is a structure chart of the panoramic video generation system provided by an embodiment of the invention.
Embodiment
To make the purpose, technical scheme, and advantages of the invention clearer, the invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described here serve only to explain the invention, not to limit it.
The embodiments of the invention decompose the multiple channels of video captured by multiple cameras into multiple channels of background video and multiple channels of moving-foreground video; automatically compute the projective transformation matrices between the fields of view of the video channels from the background video channels; projectively transform the background video channels according to the projective transformation matrices, embed the transformed background video channels into the panoramic field of view, and apply seamless fusion; projectively transform the moving-foreground video according to the projective transformation matrices, apply parallax correction to the moving-foreground video detected in the overlap region of the fields of view, and complete the fusion of the moving-foreground panoramic video within the panoramic field of view; finally, the background panoramic video and the foreground panoramic video are fused, realizing the automatic generation of the panoramic dynamic video.
Fig. 1 shows the flow chart of the panoramic video generation method provided by an embodiment of the invention, detailed as follows.
In step S101, multiple channels of video from different viewpoints are captured by multiple cameras.
In step S102, the background and the moving foreground in each video channel are separated, obtaining multiple channels of background video and multiple channels of moving-foreground video.
Background estimation and motion detection are applied to each of the video channels captured by the multiple cameras, obtaining multiple channels of background video and multiple channels of moving-foreground video. The prior art offers a variety of video background estimation and moving-foreground detection methods, among which background estimation and motion detection based on a mixture-of-Gaussians model is a preferable choice; they are not enumerated one by one here.
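For illustration only, the separation of step S102 can be sketched in Python with a per-pixel running Gaussian model — a simplified stand-in for the mixture-of-Gaussians background estimation mentioned above, not the patent's prescribed implementation; the frame format and parameter values are assumptions:

```python
import numpy as np

def separate_background(frames, alpha=0.05, k=2.5):
    """Split frames into background and moving-foreground channels using a
    per-pixel running Gaussian model (a simplified single-Gaussian stand-in
    for the mixture-of-Gaussians estimation the text names as one option)."""
    mean = frames[0].astype(np.float64)
    var = np.full_like(mean, 25.0)          # assumed initial per-pixel variance
    backgrounds, foregrounds = [], []
    for f in frames:
        f = f.astype(np.float64)
        d2 = (f - mean) ** 2
        fg_mask = d2 > (k ** 2) * var       # pixels far from the model are foreground
        mean = (1 - alpha) * mean + alpha * f
        var = (1 - alpha) * var + alpha * d2
        backgrounds.append(mean.copy())
        foregrounds.append(np.where(fg_mask, f, 0.0))
    return backgrounds, foregrounds
```

A mixture model would keep several (mean, variance, weight) triples per pixel; the update structure stays the same.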
In step S103, the projective transformation matrix is computed from the background video channels.
Computing the projective transformation matrix from the background video channels specifically comprises: computing the feature points of each background video channel; obtaining the set of candidate matching points from the feature points of each background video channel and a nearest-neighbor/second-nearest-neighbor distance decision function; purifying the set of candidate matching points; and computing the projective transformation matrix from the purified set of matching points.
To better explain the invention, please refer to Fig. 2; taking two video channels as an example, the process of computing the projective transformation matrix from the two background video channels is detailed as follows:
In step S21, the feature point sets D1 and D2 of the two background video channels are computed, taking a 128-dimensional feature vector for each feature point in D1 and D2 as an example; the computation flow is shown in Fig. 3 and detailed as follows.
In step S211, for each background video channel, a Gaussian scale pyramid image and a Gaussian scale difference pyramid image are computed and constructed, as follows:
Suppose the function corresponding to the background video is f_B(x, y), and the Gaussian kernel function G(x, y, σ) is given by formula (1):

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    (1)

In formula (1), σ is the variance, generally taken at the empirical value σ = 1.5, and exp(·) denotes the exponential function. The Gaussian scale pyramid image f_G(x, y, k) is then given by formula (2):

f_G(x, y, k) = f_B(x, y) * G(x, y, 2^k σ),  k = 0, 1, 2, ...    (2)

In formula (2), * is the convolution operation. The Gaussian scale difference pyramid image f_D(x, y, k) is given by formula (3):

f_D(x, y, k) = f_G(x, y, k) − f_G(x, y, k − 1),  k = 1, 2, ...    (3)
In step S212, the set of local extremum points in the difference-of-Gaussians pyramid image is computed. Suppose the difference pyramid has s layers, s ≥ 3. The local extremum points are determined as follows:
Let (x, y) be the spatial position of a pixel of the difference-of-Gaussians pyramid image, and k ∈ {1, 2, ..., s} its layer in the difference pyramid. Let

F_min(x, y, k) = 1 if f_D(x, y, k) < f_D(x + m, y + n, k + l) for all m, n, l ∈ {−1, 0, 1} with |m| + |n| + |l| ≠ 0, and 0 otherwise;

F_max(x, y, k) = 1 if f_D(x, y, k) > f_D(x + m, y + n, k + l) for all m, n, l ∈ {−1, 0, 1} with |m| + |n| + |l| ≠ 0, and 0 otherwise.

Then the local extremum point set D1 is given by formula (4):

D1 = {P = (x, y, k) | F_min(x, y, k) + F_max(x, y, k) ≠ 0, (x, y) ∈ Z², k = 2, 3, ..., s − 1}    (4)
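The extremum test of step S212 — a point strictly smaller or strictly larger than all 26 neighbours in position and scale — can be sketched over a stacked DoG volume; the array layout dog[k, x, y] is an assumption:

```python
import numpy as np

def local_extrema(dog):
    """Find points that are strict minima or maxima over their 26 neighbours
    in a stacked DoG volume dog[k, x, y] — the set D1 of formula (4).
    Only interior layers are scanned, matching k = 2 .. s-1 in the text."""
    s, h, w = dog.shape
    points = []
    for k in range(1, s - 1):
        for x in range(1, h - 1):
            for y in range(1, w - 1):
                cube = dog[k - 1:k + 2, x - 1:x + 2, y - 1:y + 2]
                c = dog[k, x, y]
                others = np.delete(cube.ravel(), 13)   # drop the centre value
                if (c < others).all() or (c > others).all():
                    points.append((x, y, k))
    return points
```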
In step S213, a 128-dimensional feature vector is computed for each point P = (x, y, k) in the local extremum point set D1, as follows: for the extremum point P = (x, y, k), take a 16 × 16 window W16 centered at (x, y) in the original background video function f_B(x, y), and compute the gradient amplitude and direction of f_B(x, y) at each pixel in the window W16. Cut W16 into subwindows of size 4 × 4, of which there are 4 × 4 = 16, as shown in Fig. 4. In each subwindow, compute the accumulated gradient value in each direction, forming an 8-dimensional subvector from 8 direction statistics; since there are 4 × 4 = 16 such subwindows, a feature vector of 16 × 8 = 128 dimensions is produced at the feature point P.
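A plain sketch of the 128-dimensional descriptor of step S213, without the rotation and scale normalization a full SIFT implementation would add; the gradient operator and window indexing are assumptions:

```python
import numpy as np

def descriptor_128(img, x, y):
    """128-dimensional descriptor at (x, y): a 16x16 window split into 4x4
    subwindows, with an 8-bin gradient-orientation histogram of accumulated
    gradient magnitude per subwindow, as described in step S213."""
    w = img[x - 8:x + 8, y - 8:y + 8].astype(np.float64)
    gx, gy = np.gradient(w)                 # assumed gradient operator
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % (2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi / 8)).astype(int), 7)  # 8 directions
    vec = np.zeros(128)
    for i in range(4):
        for j in range(4):
            sub = slice(4 * i, 4 * i + 4), slice(4 * j, 4 * j + 4)
            for b in range(8):
                vec[(i * 4 + j) * 8 + b] = mag[sub][bins[sub] == b].sum()
    return vec
```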
In step S22, for each point in feature point set D1 the matching point in feature point set D2 is computed, obtaining the candidate matching point set D of D1 and D2. Specifically: take a feature point P_1i in D1, and find in D2 the point with the nearest feature distance and the point with the second-nearest feature distance, P_2n1 and P_2n2; the distances between their feature vectors are given by formula (5):

d_1 = d(P_1i, P_2n1),  d_2 = d(P_1i, P_2n2)    (5)

If d_1 / d_2 < δ, then (P_1i, P_2n1) is a pair of candidate matching points; the threshold δ is generally best taken between 0.5 and 0.7.
Applying this test to each point of D1 and D2 yields the candidate matching point set D.
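The nearest/second-nearest ratio test of step S22 can be sketched as follows, with δ = 0.6 chosen from the 0.5–0.7 range given above; the brute-force distance computation is an illustrative simplification:

```python
import numpy as np

def ratio_match(D1, D2, delta=0.6):
    """Candidate matches by the nearest/second-nearest distance ratio test of
    formula (5): keep (i, j) when d1/d2 < delta. D1, D2 are arrays holding one
    feature vector per row."""
    matches = []
    for i, p in enumerate(D1):
        d = np.linalg.norm(D2 - p, axis=1)   # distance to every point of D2
        n1, n2 = np.argsort(d)[:2]           # nearest and second nearest
        if d[n1] / d[n2] < delta:
            matches.append((i, n1))
    return matches
```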
In step S23, the candidate matching point set D is purified with the RANSAC algorithm, obtaining the purified matching point set Dc.
The steps of the RANSAC purification algorithm are as follows:
Step 1: randomly draw 4 pairs of matching points from D, with no 3 of them collinear; otherwise draw the sample again.
Step 2: compute the projective transformation matrix M from the 4 extracted pairs of matching points.
Step 3: using the projective transformation matrix M, compute for each matching pair in D its distance under the projective transformation; if the distance is less than a given threshold, the matching pair is called an inlier under M. The set formed by the inliers of D is Di, with inlier count Ni.
Step 4: carry out the random sampling test of Steps 1–3 m times, and choose the sampling test with the most inliers, as in formula (6):

c = arg max_i {N_i, i = 1, 2, ..., m}    (6)

Then Dc, the inlier set of that sampling test, is the matching point pair set purified by the RANSAC algorithm.
The method of computing the projective transformation matrix M from 4 pairs of matching points in Step 2 is a mature technique in multiple view geometry and is not repeated here.
After the step of computing the projective transformation matrix from the purified matching point set, the panoramic video generation method further comprises: obtaining the optimum projective transformation matrix according to a preset error function, i.e. step S24.
In step S24, using the purified matching point pair set Dc, the projective transformation matrix M is computed with an optimization algorithm over the symmetric mutual projected position error of the matching points. The concrete method is:
Let the projective transformation matrix between fields of view A and B be M, and let the matching point pair set Dc contain n matching pairs in total. Take any matching pair (P_A(k), P_B(k)) ∈ Dc, with P_A(k) ∈ A, P_B(k) ∈ B, k = 1, ..., n. Suppose that under the action of the matrix M the projection of P_A(k) into field of view B is Q_B(k), and the projection of P_B(k) into field of view A is Q_A(k), as in formula (7):

P_A(k) maps to Q_B(k) under M,  P_B(k) maps to Q_A(k) under M⁻¹    (7)

Then, under the action of the projective transformation matrix M, the symmetric mutual projected position error function of the matching point pair set Dc is defined by formula (8):

E(M, Dc) = Σ_{k=1..n} ( ||P_A(k) − Q_A(k)||² + ||P_B(k) − Q_B(k)||² )    (8)

The optimum projective transformation matrix M* can then be obtained by optimizing E(M, Dc), as in formula (9):

M* = arg min_M E(M, Dc)    (9)
In a specific implementation, there is a variety of objective-function optimization methods; least-squares iteration, genetic algorithms, and the like are all feasible and are not enumerated one by one here.
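The symmetric mutual projected position error of formula (8) can be sketched as follows; an optimizer of the kind mentioned above would minimize this value over M:

```python
import numpy as np

def symmetric_error(M, pa, pb):
    """Symmetric mutual projected position error E(M, Dc) of formula (8):
    project pa into B with M and pb back into A with M^-1, and sum the two
    squared deviations. pa, pb are (n, 2) arrays of matched points."""
    def proj(H, pts):
        p = np.c_[pts, np.ones(len(pts))] @ H.T
        return p[:, :2] / p[:, 2:3]
    qb = proj(M, pa)                     # P_A(k) -> Q_B(k) under M
    qa = proj(np.linalg.inv(M), pb)      # P_B(k) -> Q_A(k) under M^-1
    return np.sum((pa - qa) ** 2) + np.sum((pb - qb) ** 2)
```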
It should be appreciated that when the multi-channel video comprises at least three channels, the realization principle is the same as with two channels, and various algorithms can be invoked flexibly; as the process is relatively complex, it is not detailed here.
In step S104, the background panoramic video is generated from the projective transformation matrix and the background video channels, and the foreground panoramic video is generated from the projective transformation matrix and the moving-foreground video channels.
The step of generating the background panoramic video from the projective transformation matrix and the background video channels is specifically: projecting the background video channels into a unified field of view according to the projective transformation matrix; obtaining the overlap region of the fields of view from the background video channels in the unified field of view; and applying seamless fusion to the background video channels in the unified field of view according to the overlap region.
In the embodiments of the invention, the overlap region between every two video channels is a convex quadrilateral region, and the step of seamlessly fusing the background video channels in the unified field of view according to the overlap region is specifically: dividing the overlap region into four triangles according to an arbitrary point in the overlap region; determining the fusion weights from the areas of the triangles; and fusing the background video channels of the overlap region in the unified field of view according to the position of the arbitrary point and the fusion weights.
In a specific implementation, again taking two video channels as an example, let M be the projective transformation matrix between the two video channels captured by two cameras, let C be the unified field of view after projection, and let the background video functions of the two cameras be f_GA(x, y, t) and f_GB(x, y, t). The background panoramic video of the two cameras is generated as follows:
Step 1: apply the projective transformation with matrix M to f_GA(x, y, t) and f_GB(x, y, t), projecting them into the unified field of view C; the transformed image functions are f_MA(x, y, t) and f_MB(x, y, t), and the corresponding fields of view of the two cameras after transformation are A and B. Compute the overlap region abcd = A ∩ B of A and B, as shown in Fig. 5.
Step 2: compute the pixel fusion coefficients w1 and w2 in the overlap region abcd, as follows.
As shown in Fig. 5, let P = (x, y) ∈ A ∩ B be a point in the overlap region abcd of A and B. P and the four borders of the overlap region form four triangles abP, acP, cdP, bdP, with areas S1, S2, S3, S4 respectively. Let S_m12 = min(S1, S2) be the minimum of S1 and S2, and S_m34 = min(S3, S4) the minimum of S3 and S4; the fusion coefficients are then given by formula (10):

w1 = S_m34 / (S_m12 + S_m34),  w2 = S_m12 / (S_m12 + S_m34) = 1 − w1    (10)
Step 3: let P = (x, y, t) ∈ A ∪ B be a point in the panoramic field of view; the fusion of the panoramic background f_C(x, y, t) is then given by formula (11):

f_C(P) = f_MA(P) for P ∈ A − B;  w1 · f_MA(P) + w2 · f_MB(P) for P ∈ A ∩ B;  f_MB(P) for P ∈ B − A    (11)

In formula (11), A − B denotes the difference of sets A and B, and B − A the difference of sets B and A. Once the projective transformation matrix is determined, the overlap region of the panoramic field of view is determined, and so are the fusion coefficients w1 and w2. The fusion coefficients w1 and w2 therefore need to be computed only once, after the projective transformation matrix is computed; they can be stored in the form of a look-up table and obtained by table look-up during subsequent fusion.
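The triangle-area fusion coefficients of formula (10) can be sketched as follows, with the quadrilateral corners named a, b, c, d as in Fig. 5; the corner-to-triangle pairing is read off the triangle names abP, acP, cdP, bdP:

```python
def tri_area(p, q, r):
    # area of triangle pqr via the cross product
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

def fusion_weights(a, b, c, d, p):
    """Fusion coefficients w1, w2 of formula (10) for a point p inside the
    convex quadrilateral overlap region with corners a, b, c, d."""
    s1, s2 = tri_area(a, b, p), tri_area(a, c, p)
    s3, s4 = tri_area(c, d, p), tri_area(b, d, p)
    sm12, sm34 = min(s1, s2), min(s3, s4)
    w1 = sm34 / (sm12 + sm34)
    return w1, 1.0 - w1
```

As the text notes, these weights depend only on the overlap geometry, so they can be precomputed once per pixel into a look-up table.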
The generation of the foreground panoramic video from the projective transformation matrix and the moving-foreground video channels is specifically: projecting the moving-foreground video channels into the unified field of view according to the projective transformation matrix; and fusing the moving-foreground video channels in the unified field of view.
As with the projective transformation of the background data, the moving-foreground data is first projectively transformed according to the projective transformation matrix M, projecting the moving-foreground channels into the unified field of view.
Because the two cameras are at different viewpoints, the depth of field of the same target generally differs between the cameras, and the same target in the overlapping field-of-view region of the different transformed fields of view (obtained in step S104) generally exhibits a certain parallax. As shown in Fig. 6, Object_A and Object_B are the positions, in the overlap region of the panoramic field of view, of the transformed moving foregrounds corresponding to the same target in the two fields of view, and Δd is their displacement difference.
In the embodiments of the invention, before the foreground panoramic video is fused and generated, it must be detected whether the moving foregrounds of fields of view A and B in the overlap region are the same target; when they are the same target, parallax correction must first be applied to fields of view A and B in the overlap region, otherwise no parallax correction is needed.
In the embodiments of the invention, the same-target decision method is specifically: judge whether the centroids of the simply connected regions of the separated moving-foreground video in the two fields of view lie in the overlap region, and match and associate the moving-foreground videos that lie in the overlap region. Specifically:
Let the moving targets in the overlapping field-of-view region of field of view A be O_A(i), i = 1, 2, ..., m, and the moving targets in the overlapping field-of-view region of field of view B be O_B(j), j = 1, 2, ..., n. In the embodiments of the invention, a moving target is generally a simply connected region, not a single point.
Compute the area S_A(i) of O_A(i) (the total pixel count), i = 1, 2, ..., m; compute the area S_B(j) of O_B(j), j = 1, 2, ..., n.
Compute the length-to-width ratio L_A(i) of the bounding rectangle of O_A(i), i = 1, 2, ..., m; compute the length-to-width ratio L_B(j) of the bounding rectangle of O_B(j), j = 1, 2, ..., n.
Compute the RGB color histogram vector H_A(i) of O_A(i), i = 1, 2, ..., m; compute the RGB color histogram vector H_B(j) of O_B(j), j = 1, 2, ..., n. H_A(i) and H_B(j) are 3 × 256 = 768-dimensional vectors.
Set the weights w_S, w_L, w_H, and compute the matching distance of O_A(i) and O_B(j) as in formula (12):

d(O_A(i), O_B(j)) = w_S · ||S_A(i) − S_B(j)|| + w_L · ||L_A(i) − L_B(j)|| + w_H · ||H_A(i) − H_B(j)||    (12)
Compute the association distance between the moving targets O_A(i), i = 1, 2, ..., m, of the overlap region of A and the moving targets O_B(j), j = 1, 2, ..., n, of the overlap region of B, as in formula (13):

T_ik = d(O_A(i), O_B(k)) = min{ d(O_A(i), O_B(j)), j = 1, 2, ..., n }    (13)

where k is given by formula (14):

k = arg min_j { d(O_A(i), O_B(j)), j = 1, 2, ..., n }    (14)

That is, O_B(k) is the moving target in B with the minimum matching distance to O_A(i).
The matched and associated moving-target pairs are computed by the rule: if T_ik < δ_0, i = 1, 2, ..., m, then O_A(i) and O_B(k) match and are the same target in the two fields of view A and B; otherwise, there is no target in field of view B matching O_A(i).
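The matching distance of formula (12) and the association rule of formulas (13)–(14) can be sketched as follows; the per-target feature tuples and the weight and threshold values are illustrative assumptions:

```python
import numpy as np

def match_targets(feats_a, feats_b, w=(1.0, 1.0, 1.0), delta0=1.0):
    """Match targets across the overlap region with the weighted distance of
    formula (12) and the nearest-association rule of formulas (13)-(14).
    feats_a / feats_b are lists of (area, aspect_ratio, histogram) per target;
    returns the pairs (i, k) with T_ik < delta0."""
    ws, wl, wh = w
    pairs = []
    for i, (sa, la, ha) in enumerate(feats_a):
        dists = [ws * abs(sa - sb) + wl * abs(la - lb)
                 + wh * np.linalg.norm(np.asarray(ha) - np.asarray(hb))
                 for (sb, lb, hb) in feats_b]
        k = int(np.argmin(dists))
        if dists[k] < delta0:            # T_ik < delta_0: same target in A and B
            pairs.append((i, k))
    return pairs
```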
After the matched same targets are determined, the parallax correction in the overlapping field-of-view region is as follows.
Fields of view A and B are projected into a common coordinate system C; the coordinate system of one of the fields of view can be chosen to coincide with the common coordinate system C. Suppose field of view B is taken as the reference field of view of the projection, so that the projection of a B-field target into C gives the reference position in the fused field of view; the problem of determining the positions of the different targets in the panoramic field of view then becomes correcting the positions of the targets of field of view A with respect to the matching targets in B. When a target enters the overlap region of A and B from the non-overlapping part of field of view B, Δd is subtracted from the position of the A-field target projected into the common coordinate system C, i.e. y_CA = y_A − Δd; conversely, when a target enters the overlap region of A and B from the non-overlapping part of field of view A, Δd is added, i.e. y_CA = y_A + Δd. Here y_A is the horizontal coordinate of the centroid of the target in field of view A relative to the geometric center of field of view A, and Δd is the horizontal centroid parallax of the matched same target of fields of view A and B; y_B is likewise the horizontal coordinate of the centroid of the target in field of view B relative to the geometric center of field of view B, and is not detailed further. See Fig. 6. Vertical parallax is corrected similarly to horizontal parallax.
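The horizontal correction y_CA = y_A ∓ Δd can be sketched as a small helper; the entry-side flag is an assumed encoding of the two cases above:

```python
def correct_parallax(y_a, delta_d, entered_from='B'):
    """Horizontal parallax correction of the A-field target position:
    subtract Δd when the target entered the overlap region from the
    non-overlapping part of field of view B, add it when it entered from
    the non-overlapping part of field of view A."""
    return y_a - delta_d if entered_from == 'B' else y_a + delta_d
```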
From the foregoing it can be learned that the condition for the different moving-foreground videos corresponding to the overlap region to be the same target is that the moving-target centroids of the different moving-foreground videos lie in the overlap region. When the motion centroid of the moving-foreground video of field of view A appears in the overlap region of the unified field of view but the moving-foreground target centroid of field of view B does not, the same target cannot undergo the matching association computation, and direct fusion would then produce a same-target occlusion phenomenon. In that case, the matched moving-foreground video in the overlap region must be matched against the moving-foreground video of the other field of view in the overlap region; if the match succeeds, the two are considered the same target, otherwise they are considered different targets.
When the same target has been matched successfully and its position in the merged visual field has been determined, the matched target contours often differ in size, so the merged target may exhibit contour ghosting. Likewise, when the same target is complete in one of the two visual fields but incomplete in the other, the difference in shape can degrade the result when the multichannel moving-foreground videos are fused. In this case a preset foreground fusion template is needed to obtain the foreground panoramic video. The foreground fusion template can be obtained as follows:
In the non-overlapping regions of visual fields A and B, the foreground fusion template region is simply the foreground region itself, and the template position within each visual field remains unchanged;
In the overlapping region of visual fields A and B, the moving-foreground videos of the same target matched between A and B are shifted according to the computed parallax correction so that the centroids of the two regions coincide, and the union of the two regions is taken as the foreground fusion template M_AB.
After the foreground fusion template is obtained, the foreground panoramic video can be fused in the following way:
In the non-overlapping regions of visual fields A and B, the panoramic regions obtained from A and B respectively are used directly as the foreground panoramic video;
In the overlapping region of visual fields A and B, the centroid of the foreground fusion template M_AB is placed at the centroid position of the uncorrected moving target in visual field A and in visual field B respectively, and the intersections of the template with A and B are computed: M_A = M_AB ∩ A and M_B = M_AB ∩ B. If the area of M_A is greater than the area of M_B, the template M_AB is placed at the centroid position of the corresponding target in the original video of visual field A (the video before background/foreground separation), and the original video region covered by the template is taken as the panoramic foreground target; otherwise, if the area of M_A is less than the area of M_B, the same operation is carried out in the original video of visual field B. The foreground of the video overlapping region obtained by this operation is then placed at the appropriate position of the panoramic visual field according to the parallax correction. Through this fusion, the larger target region with the more complete contour is embedded into the foreground panorama, which not only solves the ghosting problem but also fuses well when the body of the same target differs somewhat between the two visual fields.
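The area comparison that selects which visual field supplies the panoramic foreground target can be sketched with pixel-coordinate sets (a sketch only; the regions are assumed already centroid-aligned by the parallax correction, and the function name is illustrative):

```python
def choose_source_field(m_ab, region_a, region_b):
    """Given the fusion template M_AB and the centroid-aligned foreground
    regions of visual fields A and B, each as a set of (x, y) pixel
    coordinates, pick the field whose intersection with the template has
    the larger area; the template is then placed over that field's
    original (unseparated) video."""
    m_a = m_ab & region_a          # M_A = M_AB ∩ A
    m_b = m_ab & region_b          # M_B = M_AB ∩ B
    return 'A' if len(m_a) > len(m_b) else 'B'
```

With the template taken as the union M_AB = A ∪ B of the aligned regions, the field with the larger, more complete contour is selected.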
In the prior art, because the depth of field and the viewing angle differ between a moving target and the background, a single unified projective transformation cannot register the target and the background in an image simultaneously, so the fused panoramic image is prone to ghosting and double-image artifacts; the embodiment of the invention effectively overcomes these problems.
In step S105, the background panoramic video and the foreground panoramic video are merged to generate the panoramic video.
Merging the background panoramic video and the foreground panoramic video yields the complete panoramic video. Specifically, let f_B(x, y, t) be the computed background panorama and f_F(x, y, t) the computed foreground panorama; the complete panoramic video f_T(x, y, t) is obtained by formula (15).
f_T(x, y, t) = f_F(x, y, t) if |f_F(x, y, t)| ≠ 0, and f_T(x, y, t) = f_B(x, y, t) if |f_F(x, y, t)| = 0, for (x, y) ∈ A ∪ B.    (15)
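Formula (15) is a pixel-wise selection: the foreground panorama wins wherever it is non-zero. A minimal sketch for a single frame, with frames represented as nested lists (an assumed representation):

```python
def fuse_panorama(f_f, f_b):
    """Pixel-wise fusion per formula (15): take the foreground panorama
    value where it is non-zero, otherwise the background panorama value.
    f_f, f_b -- same-sized 2-D lists holding one frame each."""
    return [[ff if ff != 0 else fb
             for ff, fb in zip(row_f, row_b)]
            for row_f, row_b in zip(f_f, f_b)]
```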
The structure of the panoramic video generation system provided by the embodiment of the invention is shown in Fig. 7; for convenience of explanation, only the parts relevant to the embodiment are shown. The system may be built into a mobile terminal or other terminal equipment as a software unit, a hardware unit, or a unit combining software and hardware.
In the embodiment of the invention, the system comprises a multichannel video acquisition unit 71, a separation unit 72, a projective transformation matrix computing unit 73, a background panoramic video generation unit 74, a foreground panoramic video generation unit 75, and a panoramic video generation unit 76.
The multichannel video acquisition unit 71 acquires multichannel video of different viewpoints through multiple cameras; the separation unit 72 separates the background and the moving foreground in each channel of video acquired by the multichannel video acquisition unit 71, obtaining multichannel background video and multichannel moving-foreground video; the projective transformation matrix computing unit 73 obtains the projective transformation matrix from the multichannel background video obtained by the separation unit 72; the background panoramic video generation unit 74 generates the background panoramic video from the projective transformation matrix computed by unit 73 and the multichannel background video obtained by the separation unit 72; the foreground panoramic video generation unit 75 generates the foreground panoramic video from the projective transformation matrix computed by unit 73 and the multichannel moving-foreground video obtained by the separation unit 72; the panoramic video generation unit 76 merges the background panoramic video generated by unit 74 and the foreground panoramic video generated by unit 75, generating the panoramic video.
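The data flow among units 71 to 76 can be sketched as a simple pipeline. All function names below are hypothetical stand-ins; the actual separation, matrix estimation, and fusion steps are as described in the embodiments above:

```python
def generate_panorama(videos, separate, estimate_matrix,
                      fuse_background, fuse_foreground, merge):
    """Orchestrate units 72-76 over already-acquired videos (unit 71).
    The five callables stand in for the separation unit, the projective
    transformation matrix computing unit, and the three fusion units."""
    backgrounds, foregrounds = zip(*(separate(v) for v in videos))  # unit 72
    matrix = estimate_matrix(backgrounds)                           # unit 73
    bg_pano = fuse_background(matrix, backgrounds)                  # unit 74
    fg_pano = fuse_foreground(matrix, foregrounds)                  # unit 75
    return merge(bg_pano, fg_pano)                                  # unit 76
```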
The projective transformation matrix computing unit 73 comprises:
a feature point acquisition module, used to obtain the feature points of each channel of background video obtained by the separation unit 72;
a candidate matching point set acquisition module, used to obtain the candidate matching point set from the feature points of each channel of background video obtained by the feature point acquisition module and a preset nearest-neighbor / second-nearest-neighbor distance decision function;
a purification module, used to purify the candidate matching point set obtained by the candidate matching point set acquisition module;
a projective transformation matrix acquisition module, used to obtain the projective transformation matrix from the matching point set purified by the purification module.
The embodiments are as described above and are not repeated here.
The present invention automatically extracts and matches feature points in the multichannel background videos, and then computes the projective transformation matrix between the multichannel visual fields from the matched feature points. Compared with methods that extract feature points and compute the projective transformation matrix directly on the original video, this method achieves higher computational accuracy and better stability.
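For context, a 3×3 projective transformation matrix maps points between visual fields via homogeneous coordinates, and candidate matrices can be ranked by the symmetric transfer error of the matched point pairs (the error function also appears in claim 3). A minimal sketch, assuming the matrix and its inverse are both available:

```python
def project(h, pt):
    """Apply a 3x3 projective transformation matrix h (row-major nested
    lists) to a 2-D point, using homogeneous coordinates."""
    x, y = pt
    xh = h[0][0] * x + h[0][1] * y + h[0][2]
    yh = h[1][0] * x + h[1][1] * y + h[1][2]
    w  = h[2][0] * x + h[2][1] * y + h[2][2]
    return (xh / w, yh / w)

def symmetric_error(h, h_inv, pairs):
    """Symmetric transfer error over n matched pairs (P_A(k), P_B(k)):
    sum of ||P_A(k) - Q_A(k)||^2 + ||P_B(k) - Q_B(k)||^2, where
    Q_B(k) = H(P_A(k)) and Q_A(k) = H^-1(P_B(k))."""
    err = 0.0
    for p_a, p_b in pairs:
        q_b = project(h, p_a)      # P_A projected into visual field B
        q_a = project(h_inv, p_b)  # P_B projected into visual field A
        err += (p_a[0] - q_a[0]) ** 2 + (p_a[1] - q_a[1]) ** 2
        err += (p_b[0] - q_b[0]) ** 2 + (p_b[1] - q_b[1]) ** 2
    return err
```

The matrix minimizing this error over the purified matching point set is taken as the optimal projective transformation matrix.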
To obtain the optimal projective transformation matrix, the projective transformation matrix computing unit 73 further comprises:
an optimal projective transformation matrix acquisition module, used to obtain the optimal projective transformation matrix from at least one projective transformation matrix obtained by the projective transformation matrix acquisition module and a preset error function.
The background panoramic video generation unit 74 comprises:
a background video projection module, which projects the multichannel background video obtained by the separation unit 72 into the unified visual field according to the projective transformation matrix computed by the projective transformation matrix computing unit 73;
a visual field overlapping region acquisition module, used to obtain the visual field overlapping region from the multichannel background video projected into the unified visual field by the background video projection module;
a background video fusion module, used to perform seamless fusion processing, according to the visual field overlapping region obtained by the visual field overlapping region acquisition module, on the multichannel background video projected into the unified visual field by the background video projection module.
During background panoramic video fusion, the triangle area ratio method is adopted to determine the fusion weights, which effectively eliminates the stitching traces in the transition region of the common visual field and achieves fast seamless fusion.
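Per claim 5, the overlapping region is a convex quadrilateral split into four triangles by an arbitrary interior point, with fusion weights determined from the triangle areas. The patent does not spell out the weight formula here, so the following is a plausible sketch only: the four area ratios (which sum to 1) are computed, and how they map onto the two source videos is an assumption not fixed by the text.

```python
def tri_area(p, q, r):
    """Unsigned area of triangle pqr via the cross product."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) / 2.0

def blend_weights(pt, quad):
    """For a point pt inside a convex quadrilateral overlap region
    (vertices given in order), split the quadrilateral into the four
    triangles formed by pt and each edge, and return the four area
    ratios; they sum to 1."""
    n = len(quad)
    areas = [tri_area(pt, quad[i], quad[(i + 1) % n]) for i in range(n)]
    total = sum(areas)
    return [a / total for a in areas]
```

Because the ratios vary smoothly with the position of the point, blending with such weights removes the visible seam across the transition region.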
The embodiments are as described above and are not repeated here.
Meanwhile, the foreground panoramic video generation unit 75 comprises:
a moving-foreground video projection module, which projects the multichannel moving-foreground video obtained by the separation unit 72 into the unified visual field according to the projective transformation matrix computed by the projective transformation matrix computing unit 73;
a moving-foreground video fusion module, used to fuse the multichannel moving-foreground video projected into the unified visual field by the moving-foreground video projection module.
The moving-foreground video fusion module further comprises:
a same-target judging module, used, when the centroids of the simply connected regions of the multichannel moving-foreground videos in the unified visual field all lie inside the visual field overlapping region obtained by the visual field overlapping region acquisition module, to perform the association computation on the visual field overlapping region of the multichannel moving-foreground videos in the unified visual field, and to judge from the association result whether the visual field overlapping regions of the multichannel moving-foreground videos in the unified visual field contain the same target;
a parallax correction module, used, when the same-target judging module judges that they contain the same target, to perform parallax correction on the visual field overlapping region of the multichannel moving-foreground videos in the unified visual field, and to fuse the parallax-corrected multichannel moving-foreground videos in the unified visual field according to the preset foreground fusion template, generating the foreground panoramic video.
The embodiments are as described above and are not repeated here. The embodiment of the invention acquires multichannel video of different viewpoints through multiple cameras; decomposes each channel into a background video and a moving-foreground video; automatically computes the projective transformation matrix from the multichannel background video and obtains the visual field overlapping region; projects the background videos with the projective transformation matrix, embeds the transformed background videos into the panoramic visual field, and performs seamless fusion on the overlapping region; projects the moving-foreground videos with the projective transformation matrix and performs parallax correction and foreground panoramic fusion on the moving-foreground video data detected in the overlapping region; and finally merges the background panoramic video and the foreground panoramic video into the panoramic video. The invention thus automatically generates a panoramic dynamic video from multichannel dynamic videos with partially overlapping visual fields, largely solves the ghosting and double-image problems of moving targets in the overlapping region of the panoramic visual field, and, because the projective transformation matrix is computed with the interference of the moving foreground excluded, greatly improves the accuracy and stability of the automatic computation of the inter-camera projective transformation matrix. The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A panoramic video generation method, characterized in that the method comprises the following steps:
acquiring multichannel video of different viewpoints through multiple cameras;
separating the background and the moving foreground in each channel of video, obtaining multichannel background video and multichannel moving-foreground video;
obtaining a projective transformation matrix from the multichannel background video;
generating a background panoramic video from the projective transformation matrix and the multichannel background video, and generating a foreground panoramic video from the projective transformation matrix and the multichannel moving-foreground video;
merging the background panoramic video and the foreground panoramic video, generating the panoramic video.
2. The method of claim 1, characterized in that the step of obtaining a projective transformation matrix from the multichannel background video specifically comprises:
obtaining the feature points of each channel of background video;
obtaining a candidate matching point set from the feature points of each channel of background video and a preset nearest-neighbor / second-nearest-neighbor distance decision function;
purifying the candidate matching point set;
obtaining the projective transformation matrix from the purified matching point set.
3. The method of claim 2, characterized in that, after the step of obtaining the projective transformation matrix from the purified matching point set, the method further comprises:
obtaining an optimal projective transformation matrix according to a preset error function;
wherein the multichannel video is two-channel video corresponding to visual fields A and B respectively, and under the action of the projective transformation matrix M the candidate matching point set of the multichannel background video comprises n matching point pairs (P_A(k), P_B(k)), with P_A(k) ∈ A, P_B(k) ∈ B, k = 1, …, n, n an integer, where the projection of P_A(k) into visual field B is Q_B(k) and the projection of P_B(k) into visual field A is Q_A(k); the error function is then Σ_{k=1}^{n} (||P_A(k) − Q_A(k)||² + ||P_B(k) − Q_B(k)||²), and the projective transformation matrix minimizing the value of the error function is the optimal projective transformation matrix.
4, the method for claim 1 is characterized in that, described step according to described projective transformation matrix and multichannel background video generation background panoramic video is specially:
According to described projective transformation matrix the multichannel background video is projected to unified visual field;
Obtain the overlapping region, visual field according to the multichannel background video in the described unified visual field;
According to overlapping region, described visual field the multichannel background video in the described unified visual field is carried out seamless fusion treatment.
5. The method of claim 4, characterized in that the visual field overlapping region is a convex quadrilateral region, and the step of performing seamless fusion processing on the multichannel background video in the unified visual field according to the visual field overlapping region specifically comprises:
dividing the visual field overlapping region into four triangles according to an arbitrary point in the visual field overlapping region;
determining fusion weights according to the areas of the triangles;
fusing the multichannel background video in the unified visual field according to the position of the arbitrary point and the fusion weights.
6, the method for claim 1 is characterized in that, described step according to described projective transformation matrix and multichannel sport foreground video generation prospect panoramic video is specially:
According to described projective transformation matrix multichannel sport foreground video-projection is arrived described unified visual field;
Multichannel sport foreground video in the described unified visual field is merged;
The described step that multichannel sport foreground video in the described unified visual field is merged comprises:
When the barycenter of the simply connected region of the multichannel sport foreground video in the described unified visual field all is in overlapping region, described visual field, related computing is carried out in overlapping region, described visual field to the multichannel sport foreground video in the described unified visual field, judges according to related operation result whether the overlapping region, described visual field of the multichannel sport foreground video in the described unified visual field is same target;
When the overlapping region, described visual field of the multichannel sport foreground video in the described unified visual field is same target, parallax correction is carried out in the overlapping region, described visual field of the multichannel sport foreground video in the described unified visual field; And merge the multichannel sport foreground video of template after to parallax correction in the described unified visual field according to default prospect and merge, generate described prospect panoramic video.
7. A panoramic video generation system, characterized in that the system comprises:
a multichannel video acquisition unit, used to acquire multichannel video of different viewpoints through multiple cameras;
a separation unit, used to separate the background and the moving foreground in each channel of video acquired by the multichannel video acquisition unit, obtaining multichannel background video and multichannel moving-foreground video;
a projective transformation matrix computing unit, used to obtain a projective transformation matrix from the multichannel background video obtained by the separation unit;
a background panoramic video generation unit, used to generate a background panoramic video from the projective transformation matrix computed by the projective transformation matrix computing unit and the multichannel background video obtained by the separation unit;
a foreground panoramic video generation unit, used to generate a foreground panoramic video from the projective transformation matrix computed by the projective transformation matrix computing unit and the multichannel moving-foreground video obtained by the separation unit;
a panoramic video generation unit, used to merge the background panoramic video generated by the background panoramic video generation unit and the foreground panoramic video generated by the foreground panoramic video generation unit, generating the panoramic video.
8. The system of claim 7, characterized in that the projective transformation matrix computing unit comprises:
a feature point acquisition module, used to obtain the feature points of each channel of background video obtained by the separation unit;
a candidate matching point set acquisition module, used to obtain a candidate matching point set from the feature points of each channel of background video obtained by the feature point acquisition module and a preset nearest-neighbor / second-nearest-neighbor distance decision function;
a purification module, used to purify the candidate matching point set obtained by the candidate matching point set acquisition module;
a projective transformation matrix acquisition module, used to obtain the projective transformation matrix from the matching point set purified by the purification module;
the projective transformation matrix computing unit further comprising:
an optimal projective transformation matrix acquisition module, used to obtain an optimal projective transformation matrix from at least one projective transformation matrix obtained by the projective transformation matrix acquisition module and a preset error function.
9. The system of claim 7, characterized in that the background panoramic video generation unit comprises:
a background video projection module, which projects the multichannel background video obtained by the separation unit into a unified visual field according to the projective transformation matrix computed by the projective transformation matrix computing unit;
a visual field overlapping region acquisition module, used to obtain the visual field overlapping region from the multichannel background video projected into the unified visual field by the background video projection module;
a background video fusion module, used to perform seamless fusion processing, according to the visual field overlapping region obtained by the visual field overlapping region acquisition module, on the multichannel background video projected into the unified visual field by the background video projection module.
10. The system of claim 9, characterized in that the foreground panoramic video generation unit comprises:
a moving-foreground video projection module, which projects the multichannel moving-foreground video obtained by the separation unit into the unified visual field according to the projective transformation matrix computed by the projective transformation matrix computing unit;
a moving-foreground video fusion module, used to fuse the multichannel moving-foreground video projected into the unified visual field by the moving-foreground video projection module;
wherein the moving-foreground video fusion module further comprises:
a same-target judging module, used, when the centroids of the simply connected regions of the multichannel moving-foreground videos projected into the unified visual field by the moving-foreground video projection module all lie inside the visual field overlapping region obtained by the visual field overlapping region acquisition module, to perform an association computation on the visual field overlapping region of the multichannel moving-foreground videos in the unified visual field, and to judge from the association result whether the visual field overlapping regions of the multichannel moving-foreground videos in the unified visual field contain the same target;
a parallax correction module, used, when the same-target judging module judges that the visual field overlapping regions of the multichannel moving-foreground videos in the unified visual field contain the same target, to perform parallax correction on the visual field overlapping region of the multichannel moving-foreground videos in the unified visual field, and to fuse the parallax-corrected multichannel moving-foreground videos in the unified visual field according to the preset foreground fusion template, generating the foreground panoramic video.
CN200910109043A 2009-07-23 2009-07-23 Method and system for generating panoramic video Pending CN101626513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910109043A CN101626513A (en) 2009-07-23 2009-07-23 Method and system for generating panoramic video


Publications (1)

Publication Number Publication Date
CN101626513A true CN101626513A (en) 2010-01-13

Family

ID=41522151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910109043A Pending CN101626513A (en) 2009-07-23 2009-07-23 Method and system for generating panoramic video

Country Status (1)

Country Link
CN (1) CN101626513A (en)


Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101931772A (en) * 2010-08-19 2010-12-29 深圳大学 Panoramic video fusion method, system and video processing device
CN101931772B (en) * 2010-08-19 2012-02-29 深圳大学 Panoramic video fusion method, system and video processing device
CN102012213A (en) * 2010-08-31 2011-04-13 吉林大学 Method for measuring foreground height through single image
CN102314686A (en) * 2011-08-03 2012-01-11 深圳大学 Reference view field determination method, system and device of splicing type panoramic video
CN102314686B (en) * 2011-08-03 2013-07-17 深圳大学 Reference view field determination method, system and device of splicing type panoramic video
CN102999891A (en) * 2011-09-09 2013-03-27 中国航天科工集团第三研究院第八三五八研究所 Binding parameter based panoramic image mosaic method
CN103294024A (en) * 2013-04-09 2013-09-11 宁波杜亚机电技术有限公司 Intelligent home system control method
CN103294024B (en) * 2013-04-09 2015-07-08 宁波杜亚机电技术有限公司 Intelligent home system control method
US10210597B2 (en) 2013-12-19 2019-02-19 Intel Corporation Bowl-shaped imaging system
US10692173B2 (en) 2013-12-19 2020-06-23 Intel Corporation Bowl-shaped imaging system
CN105765966A (en) * 2013-12-19 2016-07-13 英特尔公司 Bowl-shaped imaging system
CN103795978A (en) * 2014-01-15 2014-05-14 浙江宇视科技有限公司 Multi-image intelligent identification method and device
CN105376504A (en) * 2014-08-27 2016-03-02 北京顶亮科技有限公司 High-speed swing mirror-based infrared imaging system and infrared imaging method
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
WO2016086754A1 (en) * 2014-12-03 2016-06-09 中国矿业大学 Large-scale scene video image stitching method
CN104519340B (en) * 2014-12-30 2016-08-17 余俊池 Panoramic video joining method based on many depth images transformation matrix
CN104519340A (en) * 2014-12-30 2015-04-15 余俊池 Panoramic video stitching method based on multi-depth image transformation matrix
CN105812649A (en) * 2014-12-31 2016-07-27 联想(北京)有限公司 Photographing method and device
CN105812649B (en) * 2014-12-31 2019-03-29 联想(北京)有限公司 A kind of image capture method and device
CN108352057B (en) * 2015-11-12 2021-12-31 罗伯特·博世有限公司 Vehicle camera system with multi-camera alignment
CN108352057A (en) * 2015-11-12 2018-07-31 罗伯特·博世有限公司 Vehicle camera system with polyphaser alignment
WO2017080206A1 (en) * 2015-11-13 2017-05-18 深圳大学 Video panorama generation method and parallel computing system
CN106851045A (en) * 2015-12-07 2017-06-13 北京航天长峰科技工业集团有限公司 A kind of image mosaic overlapping region moving target processing method
CN106504306B (en) * 2016-09-14 2019-09-24 厦门黑镜科技有限公司 A kind of animation segment joining method, method for sending information and device
CN106504306A (en) * 2016-09-14 2017-03-15 厦门幻世网络科技有限公司 A kind of animation fragment joining method, method for sending information and device
CN112738531A (en) * 2016-11-17 2021-04-30 英特尔公司 Suggested viewport indication for panoramic video
CN112738531B (en) * 2016-11-17 2024-02-23 英特尔公司 Suggested viewport indication for panoramic video
CN109769103A (en) * 2017-11-09 2019-05-17 株式会社日立大厦系统 Image monitoring system and image monitoring device
CN113557465A (en) * 2019-03-05 2021-10-26 脸谱科技有限责任公司 Apparatus, system, and method for wearable head-mounted display
CN116996695A (en) * 2023-09-27 2023-11-03 深圳大学 Panoramic image compression method, device, equipment and medium
CN116996695B (en) * 2023-09-27 2024-04-05 深圳大学 Panoramic image compression method, device, equipment and medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20100113