CN102521816A - Real-time wide-scene monitoring synthesis method for cloud data center room - Google Patents
- Publication number: CN102521816A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention provides a real-time wide-scene monitoring synthesis method for a cloud data center machine room. Two ordinary cameras are installed in the machine room, and their monitoring videos are synthesized in real time into a single wide-scene, wide-angle video in three steps: (1) a camera calibration and correction method; (2) key-frame matching; and (3) a fusion method and a real-time wide-scene video synthesis method. In the camera calibration and correction method (1), the intrinsic parameters, extrinsic parameters, and distortion parameters of each camera are calibrated and corrected with the checkerboard method commonly used in computer vision, which corrects the cameras' distortion well and yields results that are more scientific, objective, and faithful. This first step of the method adopts the multiple-free-plane calibration algorithm of Open Computer Vision (OpenCV): a hand-held 7x7 checkerboard, each square 2 cm on a side, is translated and rotated in front of the camera to capture images from different orientations. Once enough images (at least 10) have been collected, the OpenCV camera-calibration function computes the intrinsic, extrinsic, and distortion parameters of the camera, which are then used to correct each video frame.
Description
Technical field
The present invention relates to the field of computer applications, and in particular to a real-time wide-scene monitoring synthesis method for a cloud data center machine room.
Background art
With the development of information technology, cloud computing has gradually become a focus of industry development, and the cloud computing service platforms of major enterprises at home and abroad are being put into use one after another in fields such as science, education, culture, health, government, high-performance computing, e-commerce, and the Internet of Things.
To ensure the safety of machines and equipment, monitoring systems have been installed in the machine rooms of most cloud computing data centers. However, owing to hardware and other limitations, each camera in such a system has a limited viewing angle: it can photograph only a small part of the machine room and cannot obtain video of a larger area, which leaves blind spots.
At present, wide-scene synthesis for still images is a mature technology, but the real-time requirement of video constrains the time complexity of the algorithms, and video itself is complex, so wide-scene synthesis of dynamic video remains a difficult problem.
To address this problem, the present invention proposes a real-time wide-scene monitoring synthesis method that needs only two cameras to generate a wide-scene, wide-angle monitoring video quickly, accurately, and in real time, and that is convenient to use in a cloud data center machine room.
Summary of the invention
Aiming at the blind-spot shortcoming of existing cloud data center machine-room monitoring systems, the present invention proposes a method that obtains a wide-scene monitoring video in real time by programmatic means.
The object of the invention is achieved as follows: two ordinary cameras are installed in the cloud data center machine room, and their monitoring videos are synthesized in real time into a wide-scene, wide-angle video through 1) a camera calibration and correction method, 2) a key-frame matching and fusion method, and 3) a real-time wide-scene image synthesis method, wherein:
1) The camera calibration and correction method calibrates and corrects the intrinsic parameters, extrinsic parameters, and distortion parameters of the cameras with the checkerboard method commonly used in computer vision, correcting the cameras' distortion well so that the results are more scientific, objective, and faithful. Camera calibration and correction is the first step of the method. It adopts the multiple-free-plane calibration algorithm of OpenCV: a hand-held 7x7 checkerboard, each square 2 cm on a side, is translated and rotated in front of the camera to capture images from different orientations. When enough images (at least 10) have been collected, the OpenCV camera-calibration function computes the intrinsic, extrinsic, and distortion parameters of the camera, which are then used to correct each video frame;
2) The key-frame matching and fusion method first preprocesses the images to improve matching accuracy and eliminate the angular difference between key-frame images, namely by projecting each planar image onto a cylinder. The SIFT algorithm is then used to extract from the two images feature points that are scale invariant and insensitive to noise and luminance differences, yielding 128-dimensional SIFT feature descriptors. The currently most widely used nearest-neighbor search algorithm then finds the matching feature points and records the overlapping region between the two images. Finally, the fade-in/fade-out method is used to fuse and stitch the overlapping regions of the two images, obtaining the wide-scene key-frame image;
3) The real-time wide-scene image synthesis method converts still images into dynamic video. The video frames of the two cameras are acquired in real time, the overlapping regions of corresponding frames are fused and stitched, and the result is played continuously to obtain a wide-scene monitoring video. After the key frames have been stitched, the parameters of the two cameras, including focal length and pixel size, are essentially unchanged, so the image positions, the positions of the feature points within the field of view, and therefore the position of the overlapping region between images are all essentially unchanged. Consequently, it suffices to apply the fade-in/fade-out fusion to all subsequent frames, synthesize the wide-scene image, and play it, realizing real-time wide-scene synthesis of the monitoring video.
The beneficial effects of the invention are as follows. Its innovation lies in improving the existing wide-scene image composition algorithm, reducing its time complexity, and transplanting it well into a real-time monitoring system. Experiments verify that the method is real-time, accurate, and efficient, with good visual quality and no obvious lag.
Description of drawings
Fig. 1 is a schematic diagram of the video synthesis flow;
Fig. 2 is a diagram of the pinhole camera imaging model;
Fig. 3 is a diagram of the Euclidean transformation between world coordinates and camera coordinates;
Fig. 4 is a flow chart of the SIFT feature point extraction algorithm;
Fig. 5 is a schematic diagram of the difference-of-Gaussian (DOG) space;
Fig. 6 is a gradient orientation histogram;
Fig. 7 is a schematic diagram of generating a feature point descriptor from feature point neighborhood gradient information;
Fig. 8 is a schematic diagram of image fusion;
Fig. 9 shows video screenshots comparing the effect before and after synthesis.
Embodiment
The method of the present invention is explained in detail below with reference to the accompanying drawings.
The method of the present invention comprises: 1) a camera calibration and correction method; 2) a key-frame matching and fusion method; and 3) a real-time wide-scene image synthesis method.
1) Camera calibration and correction is the first step of the method. It adopts the multiple-free-plane calibration algorithm of OpenCV: a hand-held 7x7 checkerboard, each square 2 cm on a side, is translated and rotated in front of the camera to capture images from different orientations. When enough images (more than 10) have been collected, the OpenCV camera-calibration function computes the intrinsic, extrinsic, and distortion parameters of the camera, which are then used to correct each video frame;
The intrinsic and extrinsic parameters and the distortion parameters are as follows.
The four distortion coefficients are: {0.359114, 0.129823, -0.00112584, 0.00435681}
The concrete derivation is as follows:
A video camera performs a mapping between the 3D world and a 2D image. The projection of an object in three-dimensional space onto the image plane is described by an imaging model; the ideal projection imaging model is the optical central projection, i.e. the pinhole model. As shown in Fig. 2, let f be the focal length of the camera, Z the distance from the camera to the object, X the length of the object along the horizontal X axis, and x the abscissa of the object's image on the image plane. Then:

x = f X / Z (3-1)

Similarly, with Y the length of the object along the vertical Y axis and y the ordinate of the object's image on the image plane:

y = f Y / Z (3-2)

which gives the coordinate expressions:

(x, y) = (f X / Z, f Y / Z) (3-3)

The image physical coordinate system is converted into the image pixel coordinate system by:

u = x / dx + u0, v = y / dy + v0 (3-4)

where u and v are the pixel coordinates of the image point along the horizontal and vertical axes, (u0, v0) is the image center coordinate, dx and dy are the physical sizes of a single pixel along the horizontal and vertical axes, and 1/dx and 1/dy are the numbers of pixels per unit length.

The homogeneous-coordinate expression of formula (3-4) is:

[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]] [x, y, 1]^T (3-5)

Combining formulas (3-3) and (3-5) gives:

Z [u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] [X, Y, Z]^T (3-6)

where fx = f / dx and fy = f / dy are the equivalent focal lengths in the X and Y directions respectively, and the matrix

K = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]

is the intrinsic parameter matrix of the camera.
The Euclidean transformation between world coordinates and camera coordinates is shown in Fig. 3. C is the origin of the camera coordinate system and (Xc, Yc, Zc) the camera coordinate system; O is the origin of the world coordinate system and (Xo, Yo, Zo) the world coordinate system. A point in the world coordinate system can be transformed into the camera coordinate system through a rotation matrix R and a translation vector T.

Denote the angles of rotation about the X, Y, and Z axes by ψ, φ, and θ respectively. The rotation matrix R is then the product of the three matrices Rx(ψ), Ry(φ), and Rz(θ), i.e. R = Rx(ψ) Ry(φ) Rz(θ), where:

Rx(ψ) = [[1, 0, 0], [0, cos ψ, sin ψ], [0, -sin ψ, cos ψ]]
Ry(φ) = [[cos φ, 0, -sin φ], [0, 1, 0], [sin φ, 0, cos φ]]
Rz(θ) = [[cos θ, sin θ, 0], [-sin θ, cos θ, 0], [0, 0, 1]]

From the above it can be seen that the rotation matrix R contains only 3 independent variables, namely the rotation parameters (ψ, φ, θ). Together with the 3 elements (tx, ty, tz) of the translation vector T, these 6 parameters are called the extrinsic parameters of the camera;
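A minimal numeric sketch of this projection chain follows. The function names are mine, and the Euler-angle sign conventions are one common choice that may differ from the figures; this is an illustration under those assumptions, not the patent's implementation.

```python
import numpy as np

def rotation_matrix(psi, phi, theta):
    """R = Rx(psi) @ Ry(phi) @ Rz(theta): rotations about X, Y, Z in turn."""
    c, s = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, c, s], [0, -s, c]])
    c, s = np.cos(phi), np.sin(phi)
    Ry = np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])
    c, s = np.cos(theta), np.sin(theta)
    Rz = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(K, R, T, Pw):
    """Project a world point Pw through extrinsics (R, T) and intrinsics K."""
    Pc = R @ Pw + T      # world -> camera coordinates
    p = K @ Pc           # camera -> homogeneous pixel coordinates
    return p[:2] / p[2]  # (u, v)
```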
2) Key-frame matching and fusion is the second step of the method. To improve matching accuracy and eliminate the angular difference between key-frame images, the images are first preprocessed, namely by projecting each planar image onto a cylinder. The SIFT algorithm is then used to extract from the two images feature points that are scale invariant and insensitive to noise and luminance differences, yielding 128-dimensional SIFT feature descriptors.
The concrete implementation of the SIFT feature point extraction algorithm is as follows:
1) Scale-space extremum detection
(1) Building the Gaussian scale space
The main idea of scale-space theory is to perform scale transformations of the image with a Gaussian kernel, obtaining a multi-scale representation sequence of the image, and then to extract feature points from these sequences. The two-dimensional Gaussian kernel is defined as:

G(x, y, δ) = (1 / (2π δ^2)) exp(-(x^2 + y^2) / (2 δ^2)) (4-1)

Convolving the original image I(x, y) with Gaussian kernel functions G(x, y, δ) of different scale factors yields the scale-space function L(x, y, δ) of the two-dimensional image at different scales, i.e. L(x, y, δ) = I(x, y) * G(x, y, δ), where (*) denotes the convolution operation. Here δ is the scale factor: the smaller its value, the narrower the Gaussian function and the less the image is smoothed; the larger its value, the greater the smoothing. The resulting image is also downsampled by a factor of 2, and the convolution is repeated with the scale factor enlarged by a factor of k, which produces the Gaussian pyramid of the image at different scales and resolutions;
(2) Building the difference-of-Gaussian pyramid (DOG)
Subtracting each pair of adjacent layers yields the difference-of-Gaussian space, i.e. the DOG (Difference-of-Gaussian) image D(x, y, δ); the concrete formula is:
D(x, y, δ) = L(x, y, kδ) - L(x, y, δ) = (G(x, y, kδ) - G(x, y, δ)) * I(x, y) (4-2)
In 2002, Mikolajczyk verified experimentally that, compared with other feature detectors such as the gradient, Hessian, and Harris, the peak points of D(x, y, δ) provide stable features. If k is fixed, the influence of k-1 can be eliminated, and the peak points of the DOG images are then exactly the feature points we want to detect. To eliminate the influence of noise, several Gaussian images are filtered on each octave with the scale factor successively enlarged by a factor of k, adjacent Gaussian images on each octave are subtracted to obtain the DOG images, and then all pixels that are peaks within their neighborhoods in the DOG images are found; these points are the candidate points;
(3) Extremum detection
In the DOG scale-space pyramid thus built, to detect the extreme points (maxima and minima) of the difference-of-Gaussian images, each pixel of the middle layers (the bottom and top layers excepted) must be compared with 26 neighboring pixels in all: the 8 adjacent pixels in its own layer and the 9 pixels in each of the layers directly above and below, to guarantee that the point is a local extremum in both scale space and the two-dimensional image space.
The difference-of-Gaussian (DOG) space is illustrated in Fig. 5. The "black dot" is the sample point to be compared; it is compared with the 8 adjacent pixels in the same layer and the 9 pixels in each of the two adjacent layers above and below. If the sample point is an extremum (maximum or minimum) among these points, it is extracted and its position and scale are recorded; otherwise the comparison moves on to other pixels by the same rule. Note that the first and last layers do not take part in the extremum extraction.
2) Locating the feature points
Because the DOG values are sensitive to noise and edges, the extreme points obtained in the step above may well be noise points or edge points, which would degrade the final matching. These local extreme points must therefore pass further tests before they can finally be confirmed as feature points.
Next a three-dimensional quadratic function is fitted to each local extremum, to filter out the feature points and determine their scale and position. Let the local extremum be X0 = (x0, y0, δ0)^T. The Taylor expansion of the difference scale-space function D(x, y, δ) at this point is:

D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂^2 D/∂X^2) X (4-3)

where X = (x, y, δ)^T is the offset from the sample point. With the three adjacent layers of the DOG scale space, each derivative in the formula is obtained by finite differences between neighboring pixels and layers. Taking the derivative of formula (4-3) with respect to X and setting it to 0 gives the extreme point

X̂ = -(∂^2 D/∂X^2)^(-1) (∂D/∂X)

and the corresponding extreme value

D(X̂) = D + (1/2) (∂D/∂X)^T X̂.

In addition, feature points of low contrast must be removed: only when |D(X̂)| >= 0.03 is a point regarded as a strong feature point and retained; otherwise it is rejected. The feature points that remain after this processing are highly robust.
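Given finite-difference estimates of the gradient and Hessian of D at a candidate point, the refinement above reduces to one linear solve. A minimal sketch (the function and argument names are mine):

```python
import numpy as np

def refine_extremum(D0, grad, hess):
    """Solve for the sub-pixel offset X_hat = -H^{-1} g and the refined
    value D(X_hat) = D0 + 0.5 * g^T X_hat, per the Taylor fit above."""
    x_hat = -np.linalg.solve(hess, grad)
    d_hat = D0 + 0.5 * grad @ x_hat
    return x_hat, d_hat

def is_strong(d_hat, threshold=0.03):
    """Contrast test from the text: keep only points with |D(X_hat)| >= 0.03."""
    return abs(d_hat) >= threshold
```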
3) Determining the feature point orientation
Rotating an image only rotates the directions of its features. To make the feature points rotationally invariant, a principal orientation must be assigned to each of them. Here the gradient directions of the pixels in a feature point's neighborhood are accumulated statistically, and the dominant gradient direction in the neighborhood is taken as the principal orientation of the feature point descriptor. The gradient magnitude and gradient direction are:

m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))

where m(x, y) is the gradient magnitude at (x, y) and θ(x, y) the gradient direction; the scale used for L is the scale of the DOG image on which each feature point lies.
In actual computation, the gradients are generally sampled in a region centered on the feature point (such as inside the circle of Fig. 6), and their distribution is accumulated in a histogram. Typically each histogram bin covers 10 degrees, giving 36 bins; the combined contribution of the gradients in each of these 36 directions is accumulated, and the peak of the histogram is taken as the principal orientation of the feature point, as shown in Fig. 6:
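The 36-bin orientation histogram can be sketched as follows. The function name is mine, and I use `np.gradient` in place of the exact pixel differences of the formulas above, so this is a simplification under those assumptions.

```python
import numpy as np

def dominant_orientation(patch):
    """Return the principal orientation (in degrees, quantized to 10-degree
    bins) of a patch, from a magnitude-weighted 36-bin histogram of
    gradient directions."""
    dy, dx = np.gradient(patch.astype(float))
    mag = np.hypot(dx, dy)                              # gradient magnitude
    ang = (np.degrees(np.arctan2(dy, dx)) + 360.0) % 360.0
    hist, _ = np.histogram(ang, bins=36, range=(0.0, 360.0), weights=mag)
    return int(np.argmax(hist)) * 10                    # peak bin -> orientation
```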
4) Extracting the feature descriptor
Next the feature descriptor vector is extracted. To guarantee the rotational invariance of the image, the coordinate axes are first rotated to the orientation of the feature point. Then 8 x 8 = 64 pixels are taken symmetrically in the feature point's neighborhood (its own row and column excepted). In Fig. 7, the intersection of the two central red lines in the left figure is the feature point; each small window around it represents a pixel of the scale space in which the feature point lies; the length of each arrow represents the gradient magnitude of that pixel, and its direction the gradient direction. A Gaussian weighting window is applied (the circle in the figure: the closer a pixel is to the feature point, the larger its weight and the larger its gradient contribution). Then, in each 4 x 4 sub-window (bounded by the red lines in the left part of Fig. 7, i.e. divided into 4 groups), the gradient orientation histogram over 8 directions (up, down, left, right, upper-left, lower-left, upper-right, lower-right) is computed and the accumulated value of each gradient direction calculated, forming one seed point. Thus, for the 8 x 8 = 64 pixels, a feature point is composed of 4 seed points, each carrying gradient information in 8 directions, which forms a SIFT feature descriptor of 2 x 2 x 8 = 32 dimensions. This idea of pooling neighborhood orientation information gives good fault tolerance to feature matching with localization errors, and also strengthens the algorithm's resistance to noise.
Lowe suggests that in actual computation 4 x 4 seed regions be divided around each feature point, forming a SIFT feature descriptor of 128 dimensions (each seed point carries gradient information in 8 directions, 4 x 4 x 8 = 128 values in all), which strengthens the robustness of the matching.
The SIFT descriptors obtained by the above method are both scale invariant and rotation invariant. Finally, the length of the descriptor vector is normalized to remove the influence of illumination changes.
At this point the full information of each feature point has been obtained: (x, y, δ, θ, FV), where (x, y) is the spatial position of the feature point, δ its scale factor, θ its principal orientation, and FV its 128-dimensional descriptor.
The generation of a feature point descriptor from feature point neighborhood gradient information is shown in Fig. 7.
The currently most widely used nearest-neighbor search algorithm is then adopted to find the matching feature points, and the overlapping region between the two images is recorded.
Nearest-neighbor search is one of the most widely used methods of finding matching feature points. It first computes the Euclidean distance between each sample point of the image to be matched and every feature point of the reference image, and then decides whether two feature points match by the size of the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance (the nearest-neighbor feature point is the one closest to the sample point, i.e. with the smallest Euclidean distance; the second-nearest-neighbor feature point is the second closest).
The Euclidean distance is computed as follows (FV denotes the 128-dimensional descriptor of a feature point):

d = sqrt(Σ_{i=1}^{128} (FV1_i - FV2_i)^2)

In this system, a threshold on the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance (set here to 0.4) decides whether a match succeeds; this exploits the constraint between match points and yields fairly stable matches.
To further improve matching precision, the program also performs a reverse matching pass: the other image (the one not used as the image to be matched in the previous computation) is taken as the image to be matched, the ratio of nearest-neighbor distance to second-nearest-neighbor distance is computed again, and the intersection of the two sets of matches is then taken (so that the ratios found in both passes satisfy the threshold and the two nearest-neighbor distances are identical).
Finally, the fade-in/fade-out method is used to fuse and stitch the overlapping regions of the two images, yielding the wide-scene key-frame image.
The fade-in/fade-out method uses gradually changing weights. A pixel value in the overlapping region is

V = (1 - a) V_left + a V_right

where the weight a varies with the pixel's distance from the image boundary (here a = (L - x)/L, with x the distance of the pixel from the image boundary and L the width of the overlapping region). For color images, the three channels can be blended separately in the same fade-in/fade-out way. With this method the pixel transitions are very smooth, and the result is much better than simple averaging. The image fusion is shown in Fig. 8.
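The weighted blend above, applied to a single pair of overlapping strips, can be sketched as follows. This is an illustration: the function name is mine, and I express the a = (L - x)/L weighting as an equivalent linear ramp across the overlap.

```python
import numpy as np

def feather_blend(left, right):
    """Blend two equally sized overlap regions with linearly fading weights:
    V = (1 - a) * V_left + a * V_right, a running from 0 at the left image's
    side of the overlap to 1 at the right image's side."""
    left = left.astype(np.float64)
    right = right.astype(np.float64)
    L = left.shape[1]
    a = np.linspace(0.0, 1.0, L)   # per-column weight across the overlap
    if left.ndim == 3:             # color: blend each channel the same way
        a = a[None, :, None]
    return (1.0 - a) * left + a * right
```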
3) Real-time wide-scene image synthesis is the final step of the method. After the key frames have been stitched, the parameters of the two cameras (focal length, pixel size, and so on) are essentially unchanged, the image positions obtained are essentially unchanged, and the positions of the feature points within the field of view are essentially unchanged, so the position of the overlapping region between images is also essentially unchanged. Consequently, it suffices to apply the fade-in/fade-out fusion to all subsequent frames, synthesize the wide-scene images, and play them, realizing real-time wide-scene synthesis of the monitoring video; screenshots comparing the effect before and after synthesis are shown in Fig. 9.
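Once the overlap width has been fixed by the key-frame stitch, each subsequent frame pair can be composed cheaply, which is what makes the per-frame step real-time. A sketch for grayscale frames (the function name and the fixed-overlap simplification are mine; in the running system the frames would come from the two cameras, e.g. via cv2.VideoCapture, and the result would be displayed continuously):

```python
import numpy as np

def compose_wide_frame(f1, f2, overlap):
    """Stitch one grayscale frame pair along a known overlap of `overlap`
    columns, feathering the shared strip with linearly fading weights."""
    h, w1 = f1.shape[:2]
    w2 = f2.shape[1]
    out = np.zeros((h, w1 + w2 - overlap), np.float64)
    out[:, :w1 - overlap] = f1[:, :w1 - overlap]   # left-only region
    out[:, w1:] = f2[:, overlap:]                  # right-only region
    a = np.linspace(0.0, 1.0, overlap)             # fade-in/fade-out weights
    out[:, w1 - overlap:w1] = ((1.0 - a) * f1[:, w1 - overlap:]
                               + a * f2[:, :overlap])
    return out
```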
The method of the present invention can also be used to synthesize monitoring camera video images in any environment.
Technical features not described in this specification are known to those skilled in the art.
Claims (1)
1. A real-time wide-scene monitoring synthesis method for a cloud data center machine room, characterized in that two ordinary cameras are installed in the cloud data center machine room, and their monitoring videos are synthesized in real time into a wide-scene, wide-angle video through 1) a camera calibration and correction method, 2) key-frame matching, and 3) a fusion method and a real-time wide-scene image synthesis method, wherein:
1) the camera calibration and correction method calibrates and corrects the intrinsic parameters, extrinsic parameters, and distortion parameters of the cameras with the checkerboard method commonly used in computer vision, correcting the cameras' distortion well so that the results are more scientific, objective, and faithful; camera calibration and correction is the first step of the method; it adopts the multiple-free-plane calibration algorithm of OpenCV, i.e. a hand-held 7x7 checkerboard, each square 2 cm on a side, is translated and rotated in front of the camera to capture images from different orientations; when enough images (at least 10) have been collected, the OpenCV camera-calibration function computes the intrinsic, extrinsic, and distortion parameters of the camera, which are then used to correct each video frame;
2) the key-frame matching and fusion method first preprocesses the images to improve matching accuracy and eliminate the angular difference between key-frame images, namely by projecting each planar image onto a cylinder; the SIFT algorithm is then used to extract from the two images feature points that are scale invariant and insensitive to noise and luminance differences, yielding 128-dimensional SIFT feature descriptors; the currently most widely used nearest-neighbor search algorithm then finds the matching feature points and records the overlapping region between the two images; finally, the fade-in/fade-out method is used to fuse and stitch the overlapping regions of the two images, obtaining the wide-scene key-frame image;
3) the real-time wide-scene image synthesis method converts still images into dynamic video: the video frames of the two cameras are acquired in real time, the overlapping regions of corresponding frames are fused and stitched, and the result is played continuously to obtain a wide-scene monitoring video; after the key frames have been stitched, the parameters of the two cameras, including focal length and pixel size, are essentially unchanged, the image positions obtained are essentially unchanged, and the positions of the feature points within the field of view are essentially unchanged, so the position of the overlapping region between images is also essentially unchanged; consequently, it suffices to apply the fade-in/fade-out fusion to all subsequent frames, synthesize the wide-scene image, and play it, realizing real-time wide-scene synthesis of the monitoring video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103801506A CN102521816A (en) | 2011-11-25 | 2011-11-25 | Real-time wide-scene monitoring synthesis method for cloud data center room |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102521816A true CN102521816A (en) | 2012-06-27 |
Family
ID=46292720
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103801506A Pending CN102521816A (en) | 2011-11-25 | 2011-11-25 | Real-time wide-scene monitoring synthesis method for cloud data center room |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102521816A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103399652A (en) * | 2013-07-19 | 2013-11-20 | 哈尔滨工程大学 | 3D (three-dimensional) input method on basis of OpenCV (open source computer vision library) camera calibration |
CN103607542A (en) * | 2013-11-30 | 2014-02-26 | 深圳市金立通信设备有限公司 | Picture processing method and device and photographic equipment |
CN103945103A (en) * | 2013-01-17 | 2014-07-23 | 成都国腾电子技术股份有限公司 | Multi-plane secondary projection panoramic camera image distortion elimination method based on cylinder |
CN104240216A (en) * | 2013-06-07 | 2014-12-24 | 光宝电子(广州)有限公司 | Image correcting method, module and electronic device thereof |
CN104506840A (en) * | 2014-12-25 | 2015-04-08 | 桂林远望智能通信科技有限公司 | Real-time stereoscopic video stitching device and real-time stereoscopic video feature method |
CN104574401A (en) * | 2015-01-09 | 2015-04-29 | 北京环境特性研究所 | Image registration method based on parallel line matching |
CN106954044A (en) * | 2017-03-22 | 2017-07-14 | 山东瀚岳智能科技股份有限公司 | A kind of method and system of video panoramaization processing |
CN107133580A (en) * | 2017-04-24 | 2017-09-05 | 杭州空灵智能科技有限公司 | A kind of synthetic method of 3D printing monitor video |
CN107644394A (en) * | 2016-07-21 | 2018-01-30 | 完美幻境(北京)科技有限公司 | A kind of processing method and processing device of 3D rendering |
CN108683565A (en) * | 2018-05-22 | 2018-10-19 | 珠海爱付科技有限公司 | A kind of data processing system and method based on narrowband Internet of Things |
CN109615659A (en) * | 2018-11-05 | 2019-04-12 | 成都西纬科技有限公司 | A kind of the camera parameters preparation method and device of vehicle-mounted multiple-camera viewing system |
CN110050243A (en) * | 2016-12-21 | 2019-07-23 | 英特尔公司 | It is returned by using the enhancing nerve of the middle layer feature in autonomous machine and carries out camera repositioning |
CN110120012A (en) * | 2019-05-13 | 2019-08-13 | 广西师范大学 | The video-splicing method that sync key frame based on binocular camera extracts |
CN112837225A (en) * | 2021-04-15 | 2021-05-25 | 浙江卡易智慧医疗科技有限公司 | Method and device for automatically and seamlessly splicing vertical full-spine images |
CN112927128A (en) * | 2019-12-05 | 2021-06-08 | 晶睿通讯股份有限公司 | Image splicing method and related monitoring camera equipment |
CN114449130A (en) * | 2022-03-07 | 2022-05-06 | 北京拙河科技有限公司 | Multi-camera video fusion method and system |
CN114612613A (en) * | 2022-03-07 | 2022-06-10 | 北京拙河科技有限公司 | Dynamic light field reconstruction method and system |
CN116866522A (en) * | 2023-07-11 | 2023-10-10 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101146231A (en) * | 2007-07-03 | 2008-03-19 | 浙江大学 | Method for generating panoramic video according to multi-visual angle video stream |
CN101520897A (en) * | 2009-02-27 | 2009-09-02 | 北京机械工业学院 | Video camera calibration method |
JP2010103730A (en) * | 2008-10-23 | 2010-05-06 | Clarion Co Ltd | Calibration device and calibration method of car-mounted camera |
2011-11-25 CN CN2011103801506A patent/CN102521816A/en active Pending
Non-Patent Citations (1)
Title |
---|
Wang Kai: "GPU-based multi-camera panoramic field-of-view stitching", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103945103A (en) * | 2013-01-17 | 2014-07-23 | 成都国腾电子技术股份有限公司 | Cylinder-based multi-plane secondary projection method for eliminating panoramic camera image distortion |
CN104240216A (en) * | 2013-06-07 | 2014-12-24 | 光宝电子(广州)有限公司 | Image correcting method, module and electronic device thereof |
CN103399652B (en) * | 2013-07-19 | 2017-02-22 | 哈尔滨工程大学 | 3D (three-dimensional) input method on basis of OpenCV (open source computer vision library) camera calibration |
CN103399652A (en) * | 2013-07-19 | 2013-11-20 | 哈尔滨工程大学 | 3D (three-dimensional) input method on basis of OpenCV (open source computer vision library) camera calibration |
CN103607542A (en) * | 2013-11-30 | 2014-02-26 | 深圳市金立通信设备有限公司 | Picture processing method and device and photographic equipment |
CN104506840A (en) * | 2014-12-25 | 2015-04-08 | 桂林远望智能通信科技有限公司 | Real-time stereoscopic video stitching device and real-time stereoscopic video stitching method |
CN104574401A (en) * | 2015-01-09 | 2015-04-29 | 北京环境特性研究所 | Image registration method based on parallel line matching |
CN107644394A (en) * | 2016-07-21 | 2018-01-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device |
CN107644394B (en) * | 2016-07-21 | 2021-03-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device |
CN110050243A (en) * | 2016-12-21 | 2019-07-23 | 英特尔公司 | Camera repositioning by enhanced neural regression using mid-layer features in autonomous machines |
CN110050243B (en) * | 2016-12-21 | 2022-09-20 | 英特尔公司 | Camera repositioning by enhanced neural regression using mid-layer features in autonomous machines |
CN106954044A (en) * | 2017-03-22 | 2017-07-14 | 山东瀚岳智能科技股份有限公司 | Video panorama processing method and system |
CN107133580A (en) * | 2017-04-24 | 2017-09-05 | 杭州空灵智能科技有限公司 | Synthesis method for 3D-printing monitoring video |
CN108683565B (en) * | 2018-05-22 | 2021-11-16 | 珠海爱付科技有限公司 | Data processing system based on narrowband Internet of things |
CN108683565A (en) * | 2018-05-22 | 2018-10-19 | 珠海爱付科技有限公司 | Data processing system and method based on narrowband Internet of Things |
CN109615659A (en) * | 2018-11-05 | 2019-04-12 | 成都西纬科技有限公司 | Camera parameter acquisition method and device for a vehicle-mounted multi-camera surround-view system |
CN110120012A (en) * | 2019-05-13 | 2019-08-13 | 广西师范大学 | Video stitching method with synchronous key frame extraction based on a binocular camera |
CN110120012B (en) * | 2019-05-13 | 2022-07-08 | 广西师范大学 | Video stitching method for synchronous key frame extraction based on binocular camera |
CN112927128B (en) * | 2019-12-05 | 2023-11-24 | 晶睿通讯股份有限公司 | Image stitching method and related monitoring camera equipment thereof |
CN112927128A (en) * | 2019-12-05 | 2021-06-08 | 晶睿通讯股份有限公司 | Image stitching method and related monitoring camera equipment |
CN112837225A (en) * | 2021-04-15 | 2021-05-25 | 浙江卡易智慧医疗科技有限公司 | Method and device for automatic seamless stitching of standing full-spine images |
CN112837225B (en) * | 2021-04-15 | 2024-01-23 | 浙江卡易智慧医疗科技有限公司 | Automatic seamless splicing method and device for standing full-spine images |
CN114449130A (en) * | 2022-03-07 | 2022-05-06 | 北京拙河科技有限公司 | Multi-camera video fusion method and system |
CN114612613B (en) * | 2022-03-07 | 2022-11-29 | 北京拙河科技有限公司 | Dynamic light field reconstruction method and system |
CN114612613A (en) * | 2022-03-07 | 2022-06-10 | 北京拙河科技有限公司 | Dynamic light field reconstruction method and system |
CN116866522A (en) * | 2023-07-11 | 2023-10-10 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102521816A (en) | Real-time wide-scene monitoring synthesis method for cloud data center room | |
CN104867126B (en) | Synthetic aperture radar image registration method based on point-pair constraints and change regions of a triangulation network | |
Wang et al. | Digital image correlation in experimental mechanics and image registration in computer vision: Similarities, differences and complements | |
CN104574347B (en) | On-orbit satellite image geometric positioning accuracy evaluation method based on multi-source remote sensing data | |
CN103822616B (en) | Remote sensing image matching method combining graph-segmentation constraints with topographic relief | |
CN104867135B (en) | High-precision stereo matching method guided by a guide image | |
CN111080529A (en) | Unmanned aerial vehicle aerial image stitching method with enhanced robustness | |
CN106485690A (en) | Automatic registration and fusion method for point cloud data and optical images based on point features | |
CN104599258B (en) | Image stitching method based on an anisotropic feature descriptor | |
CN104134200B (en) | Mobile scene image stitching method based on improved weighted fusion | |
Yang et al. | Polarimetric dense monocular slam | |
CN106485740A (en) | Multi-temporal SAR image registration method combining safe points and feature points | |
CN109993800A (en) | Workpiece size detection method, device and storage medium | |
CN107292925A (en) | Measurement method based on the Kinect depth camera | |
CN107657644B (en) | Sparse scene flow detection method and device in a mobile environment | |
CN103400388A (en) | Method for eliminating BRISK (binary robust invariant scalable keypoints) mismatched point pairs using RANSAC (random sample consensus) | |
Urban et al. | Finding a good feature detector-descriptor combination for the 2D keypoint-based registration of TLS point clouds | |
CN106023183B (en) | Real-time line segment matching method | |
AliAkbarpour et al. | Fast structure from motion for sequential and wide area motion imagery | |
CN109272577B (en) | Kinect-based visual SLAM method | |
CN113674400A (en) | Spectrum three-dimensional reconstruction method and system based on repositioning technology and storage medium | |
Stentoumis et al. | A local adaptive approach for dense stereo matching in architectural scene reconstruction | |
CN108010075B (en) | Local stereo matching method based on multi-feature combination | |
CN112614167A (en) | Rock slice image alignment method combining single-polarization and orthogonal-polarization images | |
Xie et al. | Fine registration of 3D point clouds with iterative closest point using an RGB-D camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2012-06-27 |