CN102521816A - Real-time wide-scene monitoring synthesis method for cloud data center room - Google Patents
- Publication number
- CN102521816A CN102521816A CN2011103801506A CN201110380150A CN102521816A CN 102521816 A CN102521816 A CN 102521816A CN 2011103801506 A CN2011103801506 A CN 2011103801506A CN 201110380150 A CN201110380150 A CN 201110380150A CN 102521816 A CN102521816 A CN 102521816A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention provides a real-time wide-scene monitoring synthesis method for a cloud data center machine room. Two ordinary cameras are installed in the machine room and their monitoring video is synthesized into a wide-scene, wide-angle video in real time through the following steps: (1) camera calibration and correction; (2) key frame matching and fusion; and (3) real-time wide-scene video synthesis. In the camera calibration and correction step, the intrinsic, extrinsic and distortion parameters of the camera are calibrated and corrected with the checkerboard method commonly used in the field of computer vision, so that camera distortion is corrected well and the result is more scientific, objective and realistic. This first step adopts the multiple-free-plane calibration algorithm of Open Computer Vision (OpenCV): a hand-held 7*7 checkerboard, each square 2 cm long and wide, is translated and rotated in front of the camera to obtain images from different orientations. When enough images have been collected (at least 10), the intrinsic, extrinsic and distortion parameters of the camera are solved with the OpenCV camera calibration function and used to correct each frame image.
Description
Technical Field
The invention relates to the field of computer applications, and in particular to a real-time wide-scene monitoring and synthesis method for a cloud data center machine room.
Background
With the development of information technology, cloud computing is gradually becoming an industry hotspot, and cloud computing service platforms from domestic and foreign vendors are being put into use in many fields such as science, education, culture, health care, government, high-performance computing, electronic commerce and the Internet of Things.
To guarantee the safety of the equipment, a monitoring system is installed in the machine room of most cloud computing data centers. However, owing to hardware and other limitations, the viewing angle of each camera in the monitoring system is limited: each camera can only cover a small area of the room and cannot capture video over a larger area, which leaves visual blind spots.
At present, wide-scene synthesis of static images is a mature technology, but wide-scene synthesis of dynamic video remains difficult, because the real-time requirement of video constrains the time complexity of the algorithms and because video content itself is complex.
To address this problem, the invention provides a real-time wide-scene monitoring synthesis method that generates a wide-scene, wide-angle monitoring video quickly, accurately and in real time with only two cameras, and that can be conveniently applied in a cloud data center machine room.
Disclosure of Invention
To overcome the monitoring blind spots of existing cloud data center machine room monitoring systems, the invention provides a method for obtaining a wide-scene monitoring video in real time by software.
The invention is realized as follows. Two ordinary cameras are installed in the cloud data center machine room, and their monitoring video is synthesized into a wide-scene, wide-angle video in real time through the following steps: 1) a camera calibration and correction method; 2) a key frame matching and fusion method; and 3) a real-time wide-scene video synthesis method, wherein:
1) In the camera calibration and correction method, the intrinsic, extrinsic and distortion parameters of the camera are calibrated and corrected with the checkerboard method commonly used in the field of computer vision, so that camera distortion is corrected well and the result is more scientific, objective and realistic. This is the first step of the method. A multiple-free-plane calibration algorithm based on OpenCV is adopted: a hand-held 7 x 7 checkerboard, each square 2 cm long and wide, is translated and rotated in front of the camera to obtain images from different orientations. When enough images have been collected (at least 10), the intrinsic, extrinsic and distortion parameters of the camera are solved with the OpenCV camera calibration function and used to correct each frame image;
2) In the key frame matching and fusion method, the images are first preprocessed to improve matching accuracy and eliminate the angular difference between key frame images, i.e. the planar images are projected onto a cylinder. The SIFT algorithm is then used to extract, from each of the two images, feature points that are scale-invariant and insensitive to noise and brightness differences, yielding 128-dimensional SIFT feature point descriptors. Feature matching points are then found with the nearest neighbor search algorithm, currently the most widely used, and the overlap region between the two images is recorded. Finally, the overlap regions of the two images are fused and stitched with a progressive fade-in/fade-out method to obtain the wide-scene image of the key frame;
3) The real-time wide-scene video synthesis method is the process of turning static images into dynamic video. Video frames are acquired from the two cameras in real time, the overlap regions between corresponding frames are fused and stitched, and the result is played back continuously to obtain the wide-scene monitoring video. After the key frame stitching of the previous step, the parameters of the two cameras (focal length, pixels, etc.) are essentially unchanged, so the positions of the captured images, of the image feature points within the field of view, and therefore of the overlap regions between the images, are all essentially unchanged. Real-time wide-scene synthesis of the monitoring video is therefore achieved simply by applying the progressive fading image fusion operation to every subsequent frame pair to compose the wide-scene image and play it back.
The beneficial effects of the invention are as follows: the existing image wide-scene synthesis algorithm is improved and its time complexity is reduced, so that the method transfers well to a real-time monitoring system. Experiments show that the method is real-time, accurate and efficient, with good visual quality and no obvious lag.
Drawings
FIG. 1 is a schematic view of a video composition flow;
FIG. 2 is a diagram of a pinhole camera imaging model;
FIG. 3 is a graph of the Euclidean transformation between world coordinates and camera coordinates;
FIG. 4 is a flow chart of the SIFT feature point extraction algorithm;
FIG. 5 is a schematic of Gaussian difference space (DOG);
FIG. 6 is a gradient direction histogram;
FIG. 7 is a diagram of feature point descriptor generation from feature point neighborhood gradient information;
FIG. 8 is a schematic view of image fusion;
FIG. 9 is a video screenshot comparing the effect before and after synthesis.
Detailed Description
The method of the present invention is described in detail below with reference to the accompanying drawings.
The method of the invention comprises the following steps: 1) a camera calibration and correction method, 2) a key frame matching and fusing method and 3) a real-time wide-scene video synthesis method.
1) The camera calibration and correction method is the first step of the method. A multiple-free-plane calibration algorithm based on OpenCV is adopted: a hand-held 7 x 7 checkerboard, each square 2 cm long and wide, is translated and rotated in front of the camera to obtain images from different orientations. When enough images have been collected (more than 10), the intrinsic, extrinsic and distortion parameters of the camera are solved with the OpenCV camera calibration function and used to correct each frame image.
The calibration yields the intrinsic, extrinsic and distortion parameters of the camera; the four distortion coefficients obtained are: {-0.359114, 0.129823, -0.00112584, 0.00435681}.
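A minimal sketch of this calibration and correction step, assuming OpenCV's Python bindings; the image file names, the number of board views and the use of 6 x 6 inner corners for a 7 x 7-square board are illustrative assumptions, not values fixed by the method:

```python
import glob
import cv2
import numpy as np

# Chessboard geometry: a 7x7-square board has 6x6 inner corners, each square 2 cm.
pattern = (6, 6)
square = 0.02  # metres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):          # >= 10 views of the board (illustrative names)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K, distortion coefficients and per-view extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Correct (undistort) a video frame with the calibrated parameters.
frame = cv2.imread("frame.png")                # illustrative
undistorted = cv2.undistort(frame, K, dist)
```

Here `dist` holds distortion coefficients of the kind listed above, and the per-view rotation and translation vectors correspond to the extrinsic parameters derived below.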
The specific derivation process is as follows:
The camera is a mapping between the 3D world and the 2D image. The projection of an object in three-dimensional space onto the image plane is described by an imaging model; the ideal projection imaging model is central projection through the optical centre, i.e. the pinhole model. As shown in FIG. 2, let f be the camera focal length, Z the distance from the camera to the object, X the length of the object along the horizontal axis, and x the abscissa of the object's image in the image plane; then:
x = fX/Z (3-1)
Similarly, let Y be the length of the object along the vertical axis and y the ordinate of the object's image in the image plane; then:
y = fY/Z (3-2)
In homogeneous coordinates this projection can be written as:
Z·[x, y, 1]^T = [[f, 0, 0], [0, f, 0], [0, 0, 1]]·[X, Y, Z]^T (3-3)
Converting the image physical coordinate system into the image pixel coordinate system:
u = x/dx + u0, v = y/dy + v0 (3-4)
where u and v are the pixel coordinates of the image point on the horizontal and vertical axes respectively; (u0, v0) is the image centre; dx and dy are the physical sizes of a single pixel along the horizontal and vertical axes; and 1/dx, 1/dy are the numbers of pixels per unit length.
The homogeneous coordinate expression of formula (3-4) is:
[u, v, 1]^T = [[1/dx, 0, u0], [0, 1/dy, v0], [0, 0, 1]]·[x, y, 1]^T (3-5)
Combining formulas (3-3) and (3-5) gives:
Z·[u, v, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]]·[X, Y, Z]^T
where fx = f/dx and fy = f/dy are the equivalent focal lengths in the x and y directions; fx, fy, u0 and v0 are the intrinsic parameters of the camera.
The Euclidean transformation between world coordinates and camera coordinates is shown in FIG. 3: C is the origin of the camera coordinate system (Xc, Yc, Zc), and O is the origin of the world coordinate system (Xo, Yo, Zo). A point in the world coordinate system can be transformed into the camera coordinate system by a rotation matrix R and a translation matrix T.
Denoting the rotation angles around the X, Y and Z axes by ψ, φ and θ respectively, the rotation matrix R is the product of the three matrices Rx(ψ), Ry(φ) and Rz(θ), i.e. R = Rx(ψ)·Ry(φ)·Rz(θ), where:
Rx(ψ) = [[1, 0, 0], [0, cos ψ, -sin ψ], [0, sin ψ, cos ψ]]
Ry(φ) = [[cos φ, 0, sin φ], [0, 1, 0], [-sin φ, 0, cos φ]]
Rz(θ) = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]
Multiplying these out gives R. As the product shows, the rotation matrix R contains only 3 independent variables, the rotation parameters (ψ, φ, θ). Together with the 3 elements (tx, ty, tz) of the translation matrix T, these 6 parameters are called the camera extrinsic parameters.
2) The key frame matching and fusion method is the second step of the method. To improve matching accuracy and eliminate the angular difference between the key frame images, the images are first preprocessed, i.e. the planar images are projected onto a cylinder, as sketched below. The SIFT algorithm is then used to extract, from each of the two images, feature points that are scale-invariant and insensitive to noise and brightness differences, yielding 128-dimensional SIFT feature point descriptors.
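A sketch of this cylindrical pre-projection using the usual backward-mapping formulation; the focal length in pixels and the file name are assumptions (in practice f would come from the calibration step):

```python
import cv2
import numpy as np

def cylindrical_warp(img, f):
    """Backward-map each pixel of the cylindrical image to the planar image and remap."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    theta = (xs - cx) / f                              # angle on the cylinder
    x_plane = (f * np.tan(theta) + cx).astype(np.float32)
    y_plane = ((ys - cy) / np.cos(theta) + cy).astype(np.float32)
    return cv2.remap(img, x_plane, y_plane, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

frame = cv2.imread("keyframe_left.png")                # illustrative file name
warped = cylindrical_warp(frame, f=800.0)              # f: focal length in pixels (assumed)
```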
The specific implementation process of the SIFT feature point extraction algorithm is as follows:
1) scale space extremum detection
(1) Establishing a Gaussian scale space
The main idea of scale space theory is to transform the image with a Gaussian kernel at multiple scales to obtain a multi-scale expression sequence of the image, and then to extract feature points from this sequence. The two-dimensional Gaussian kernel is defined as:
G(x, y, δ) = (1/(2πδ²))·exp(-(x² + y²)/(2δ²)) (4-1)
The scale space function L(x, y, δ) is obtained by convolving the original image I(x, y) with the Gaussian kernel G(x, y, δ), i.e. L(x, y, δ) = I(x, y) * G(x, y, δ), where * denotes convolution. Here δ is the scale factor: the smaller its value, the less the image is smoothed and the more fine detail is preserved; the larger its value, the smoother the image. The resulting image is also downsampled by a factor of 2, and the convolution is repeated with the scale factor enlarged by a factor of k each time, yielding Gaussian pyramid images of the original image at different scales and resolutions;
(2) establishing a Gaussian difference pyramid (DOG)
Two adjacent layers of Gaussian images are subtracted to obtain the difference-of-Gaussian space, i.e. the DOG (Difference-Of-Gaussian) image D(x, y, δ); the specific calculation formula is:
D(x,y,δ)=L(x,y,kδ)-L(x,y,δ)=(G(x,y,kδ)-G(x,y,δ))*I(x,y) (4-2)
In 2002, Mikolajczyk verified experimentally that the extreme points of D(x, y, δ) provide the most stable features compared with other candidates such as gradient, Hessian or Harris points. Because k is a fixed constant, the constant factor it introduces does not affect the location of the extrema, so the peak points on the DOG images are exactly the feature points to be detected. To suppress the influence of noise, several Gaussian images are generated in each octave (i.e. at each frequency doubling) with the scale factor successively enlarged by a factor of k; adjacent Gaussian images within each octave are subtracted to obtain the DOG images, and all pixels that are peaks within their DOG neighbourhood are then found as candidate points;
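The octave construction just described can be sketched as follows; the base scale of 1.6, three intervals per octave and four octaves are conventional SIFT choices used here as assumptions, not values fixed by the method:

```python
import cv2
import numpy as np

def dog_octave(gray, sigma0=1.6, s=3):
    """One octave: blur with delta, k*delta, k^2*delta, ... and subtract adjacent layers."""
    k = 2 ** (1.0 / s)
    gaussians = [cv2.GaussianBlur(gray, (0, 0), sigma0 * (k ** i)) for i in range(s + 3)]
    dogs = [cv2.subtract(gaussians[i + 1], gaussians[i]) for i in range(len(gaussians) - 1)]
    return gaussians, dogs

gray = cv2.imread("keyframe_left.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
octaves = []
for _ in range(4):                                     # 4 octaves, illustrative
    gaussians, dogs = dog_octave(gray)
    octaves.append(dogs)
    # Downsample by 2 for the next octave.
    gray = cv2.resize(gaussians[-3], None, fx=0.5, fy=0.5,
                      interpolation=cv2.INTER_NEAREST)
```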
(3) extreme point detection
In the established DOG scale space pyramid, to detect the extreme points (maxima and minima) of the Gaussian difference images, each pixel of every middle layer of the DOG scale space (i.e. excluding the bottom and top layers) is compared with its 26 neighbours: the 8 adjacent pixels in the same layer and the 9 pixels in each of the layers directly above and below. This ensures that the point is a local extremum both in scale space and in the two-dimensional image space.
A schematic diagram of the Gaussian difference space (DOG) is shown in FIG. 5, where the "black dot" is the sample point to be compared. The sample point is compared with the 8 adjacent pixels in the same layer and the 9 pixels in each of the layers above and below; if it is an extremum (maximum or minimum) among these points, it is extracted and its position and scale are recorded, otherwise the next pixel is examined according to the same rule. Note that the first and last layers do not take part in the extremum extraction.
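A naive sketch of this 26-neighbour comparison (8 neighbours in the same layer plus 9 in each adjacent layer); `layer` must index a middle DOG layer, and a practical implementation would vectorise the search rather than test pixels one at a time:

```python
import numpy as np

def is_extremum(dogs, layer, y, x):
    """True if dogs[layer][y, x] is a maximum or minimum over its 3x3x3 neighbourhood."""
    cube = np.stack([dogs[layer - 1][y - 1:y + 2, x - 1:x + 2],
                     dogs[layer][y - 1:y + 2, x - 1:x + 2],
                     dogs[layer + 1][y - 1:y + 2, x - 1:x + 2]])
    centre = dogs[layer][y, x]
    return centre >= cube.max() or centre <= cube.min()
```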
2) Feature point localization
Since the DOG value is sensitive to noise and edges, an extreme point obtained by the above steps may well be a noise point or an edge point, which would degrade the final matching. These local extreme points are therefore examined further before they are finally accepted as feature points.
The local extreme points are fitted with a three-dimensional quadratic function to screen out the feature points and determine their scale and position. Let the local extreme point be X0 = (x, y, δ)^T; the Taylor expansion of the difference scale space function D(x, y, δ) at this point is given by equation (4-3):
D(X) = D + (∂D/∂X)^T·X + (1/2)·X^T·(∂²D/∂X²)·X (4-3)
where X = (x, y, δ)^T is the offset from the sample point. The first- and second-order derivatives of D are computed by finite differences of neighbouring pixels in the three adjacent layers of the DOG scale space.
Differentiating formula (4-3) with respect to X and setting the result to 0 gives the extreme point offset X̂ = -(∂²D/∂X²)^(-1)·(∂D/∂X) and the corresponding extreme value D(X̂) = D + (1/2)·(∂D/∂X)^T·X̂.
In addition, feature points with low contrast must be removed: only points with |D(X̂)| ≥ 0.03 are regarded as strong feature points and retained, otherwise they are removed. The feature points retained in this way are highly robust.
3) Determining the feature point direction
Rotating the image only rotates the directions of the image features. To give the feature points rotational invariance, a principal direction must be assigned to each feature point. This is done by computing the gradient direction distribution of the pixels in the neighbourhood of the feature point and taking the dominant gradient direction as the principal direction of the feature point descriptor. The gradient magnitude and gradient direction are:
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²)
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)))
where m(x, y) is the gradient magnitude at (x, y), θ(x, y) is the gradient direction at (x, y), and L is taken at the scale of the DOG image in which the feature point lies.
In practical calculation, a region centred on the feature point (e.g. the interior of the circle in the figure) is sampled and the gradient distribution is accumulated into a histogram. Typically each 10 degrees of direction forms one bin, giving 36 bins; the gradients in the 36 directions are accumulated and the peak of the histogram is taken as the principal direction of the feature point, as shown in FIG. 6.
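A simplified sketch of the 36-bin orientation histogram; the window radius and the omission of Gaussian weighting are simplifying assumptions, not part of the method as described:

```python
import numpy as np

def main_orientation(L, y, x, radius=8):
    """Peak of the 36-bin gradient-direction histogram in a square window around (x, y)."""
    patch = L[y - radius:y + radius + 1, x - radius:x + radius + 1].astype(np.float32)
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]            # horizontal central differences
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]            # vertical central differences
    mag = np.sqrt(dx ** 2 + dy ** 2)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(ang, bins=36, range=(0.0, 360.0), weights=mag)
    return np.argmax(hist) * 10.0                      # principal direction in degrees
```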
4) extracting feature descriptors
Next, the feature descriptor vectors are extracted. To ensure rotational invariance, the coordinate axes are first rotated to the direction of the feature point. Then 64 pixels in an 8 x 8 window are taken symmetrically around the feature point (excluding the row and column in which it lies). As shown in FIG. 7, the intersection of the two red lines at the centre of the left image is the feature point; each small cell around it represents a pixel of the scale space in which the feature point lies, the length of each arrow represents the gradient magnitude of that pixel, and its direction represents the gradient direction. A Gaussian weighting window is applied (the circle in the figure): the closer a pixel is to the feature point, the larger its weight and its contribution to the gradient. Gradient direction histograms over 8 directions (up, down, left, right and the four diagonal directions) are then computed. Combining neighbourhood direction information in this way tolerates localization errors in feature matching and strengthens the noise robustness of the algorithm.
Lowe suggests that in actual calculation a 4 x 4 array of seed regions is formed around each feature point, so that a 128-dimensional SIFT feature point descriptor is obtained (each seed point contains gradient information in 8 directions, and 4 x 4 x 8 = 128 vector elements); this enhances the robustness of matching.
The SIFT feature point descriptors obtained in this way are scale- and rotation-invariant. Finally, the descriptor vector is normalized to unit length to remove the influence of illumination changes.
To this end, all information (x, y, δ, θ, FV) of each feature point is obtained, where (x, y) is the spatial position of the feature point, δ is the scale factor of the feature point, θ is the principal direction of the feature point, and FV is a 128-dimensional feature point descriptor.
FIG. 7 is a schematic diagram of feature point descriptor generation from feature point neighbourhood gradient information.
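In practice the whole SIFT pipeline above is available directly in OpenCV (cv2.SIFT_create in recent releases, cv2.xfeatures2d.SIFT_create in older contrib builds); a sketch for the two cylindrically projected key frames, with illustrative file names:

```python
import cv2

img_left = cv2.imread("keyframe_left_warped.png", cv2.IMREAD_GRAYSCALE)    # illustrative
img_right = cv2.imread("keyframe_right_warped.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_left, desc_left = sift.detectAndCompute(img_left, None)      # 128-dimensional descriptors
kp_right, desc_right = sift.detectAndCompute(img_right, None)
```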
Next, the Nearest Neighbor search algorithm, currently the most widely used, is adopted to find the feature matching points, and the overlap region between the two images is recorded.
The Nearest Neighbor search algorithm is one of the most widely used methods for finding feature matching points. It first computes the Euclidean distance between a sample point of the image to be matched and every feature point of the reference image, and then decides whether two feature points match by the ratio of the distance to the nearest neighbour feature point to the distance to the second-nearest neighbour (the nearest neighbour being the feature point with the smallest Euclidean distance to the sample point, and the second-nearest neighbour the next closest).
The Euclidean distance between two feature points is computed from their 128-dimensional descriptors FV1 and FV2 as d = sqrt(Σ_{i=1..128} (FV1_i - FV2_i)²).
in the system, whether the matching is successful or not is judged by setting a threshold (set as 0.4 in the text) for the ratio of the distance between the nearest neighbor feature point and the distance between the next neighbor feature point, so that the constraint information between the matching points is utilized to obtain more stable feature matching points.
To further improve matching precision, the program also performs a reverse match: the other image (the one not used as the image to be matched in the previous calculation) is taken as the image to be matched, the ratio of nearest-neighbour to second-nearest-neighbour distance is computed again, and the intersection of the two sets of matching points is kept, so that a pair of feature points is accepted only if the ratio satisfies the threshold in both directions and the two points are each other's nearest neighbours.
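A sketch of the nearest-neighbour ratio test with the 0.4 threshold and the reverse-matching intersection described above, reusing desc_left and desc_right from the SIFT sketch; the brute-force matcher is one possible choice of nearest-neighbour search, used here as an assumption:

```python
import cv2

def ratio_matches(desc_a, desc_b, ratio=0.4):
    """Keep matches whose nearest / second-nearest Euclidean distance ratio is below the threshold."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = {}
    for m, n in bf.knnMatch(desc_a, desc_b, k=2):
        if m.distance < ratio * n.distance:
            good[m.queryIdx] = m.trainIdx
    return good

fwd = ratio_matches(desc_left, desc_right)          # left -> right
bwd = ratio_matches(desc_right, desc_left)          # right -> left (reverse matching)
# Intersection: keep only pairs that match each other in both directions.
matches = [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
```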
Finally, the overlap regions of the two images are fused and stitched with the progressive fading method to obtain the wide-scene image of the key frame.
The progressive fading (gradual weight change) method controls the pixel value of the overlap region as V = (1 - a)·V_left + a·V_right, where the weight a is a function of the distance from the point to the image boundary (here a = (L - x)/L, with x the distance from the point to the image boundary and L the width of the overlap region). For colour images, the three colour components are blended in the same gradual fade-in/fade-out manner. With this method the pixel transition is very smooth, and the result is much better than simple averaging. Image fusion is illustrated in FIG. 8.
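A sketch of this progressive fading over a horizontal overlap region with a linear weight ramp; the assumption that the overlap is a fixed-width band of columns, and the function and variable names, are illustrative:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Feather the last `overlap` columns of `left` into the first `overlap` columns of `right`."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w, 3), np.float32)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]
    out[:, left.shape[1]:] = right[:, overlap:]
    # Weight a runs from 0 at the left edge of the overlap to 1 at its right edge.
    a = np.linspace(0.0, 1.0, overlap, dtype=np.float32)[None, :, None]
    out[:, left.shape[1] - overlap:left.shape[1]] = \
        (1.0 - a) * left[:, -overlap:] + a * right[:, :overlap]
    return out.astype(np.uint8)
```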
3) The real-time wide-scene video synthesis method is the last step of the method. After the key frame stitching of the previous step, the parameters of the two cameras (focal length, pixels, etc.) are essentially unchanged, so the positions of the captured images, of the feature points within the field of view, and of the overlap regions between the images are all essentially unchanged. Real-time wide-scene synthesis of the monitoring video is therefore achieved simply by applying the progressive fading image fusion operation to every subsequent frame pair to compose the wide-scene image and play it back; screenshots of the video before and after synthesis are shown in FIG. 9.
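Because everything except the per-frame blend is fixed after the key-frame registration, the playback loop stays light. A sketch reusing K, dist, cylindrical_warp and blend_overlap from the earlier sketches; the camera indices, focal length, overlap width and window name are assumptions:

```python
import cv2

cap_left, cap_right = cv2.VideoCapture(0), cv2.VideoCapture(1)   # two room cameras (indices assumed)

while True:
    ok_l, frame_l = cap_left.read()
    ok_r, frame_r = cap_right.read()
    if not (ok_l and ok_r):
        break
    # Reuse the fixed calibration (K, dist), cylindrical warp and overlap width
    # obtained once from the key frames; only the blend runs per frame.
    frame_l = cylindrical_warp(cv2.undistort(frame_l, K, dist), f=800.0)
    frame_r = cylindrical_warp(cv2.undistort(frame_r, K, dist), f=800.0)
    panorama = blend_overlap(frame_l, frame_r, overlap=120)       # overlap width in pixels (assumed)
    cv2.imshow("wide-scene monitor", panorama)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap_left.release()
cap_right.release()
cv2.destroyAllWindows()
```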
The method of the invention can also be used to stitch surveillance camera video in any other environment.
Technical features not described in detail in this specification are known to those skilled in the art.
Claims (1)
1. A real-time wide-scene monitoring and synthesis method for a cloud data center machine room, characterized in that two ordinary cameras are installed in the cloud data center machine room and their monitoring video is synthesized into a wide-scene wide-angle video in real time through: 1) a camera calibration and correction method; 2) a key frame matching and fusion method; and 3) a real-time wide-scene video synthesis method, wherein:
1) in the camera calibration and correction method, the intrinsic, extrinsic and distortion parameters of the camera are calibrated and corrected with the checkerboard method commonly used in the field of computer vision, so that camera distortion is well corrected and the result is more scientific, objective and realistic; this camera calibration and correction method is the first step of the method; a multiple-free-plane calibration algorithm based on OpenCV is adopted, i.e. a hand-held 7 x 7 checkerboard, each square 2 cm long and wide, is translated and rotated in front of the camera to obtain images from different orientations; when enough images have been collected (at least 10), the intrinsic, extrinsic and distortion parameters of the camera are solved with the OpenCV camera calibration function and used to correct each frame image;
2) in the key frame matching and fusion method, in order to improve matching accuracy and eliminate the angular difference between key frame images, the images are first preprocessed, i.e. the planar images are projected onto a cylinder; the SIFT algorithm is then used to extract, from each of the two images, feature points that are scale-invariant and insensitive to noise and brightness differences, yielding 128-dimensional SIFT feature point descriptors; feature matching points are found with the nearest neighbor search algorithm, currently the most widely used, and the overlap region between the two images is recorded; finally, the overlap regions of the two images are fused and stitched with the progressive fading method to obtain the wide-scene image of the key frame;
3) the real-time wide-scene video synthesis method is the process of turning static images into dynamic video; video frames are acquired from the two cameras in real time, the overlap regions between corresponding frames are fused and stitched, and the result is played back continuously to obtain the wide-scene monitoring video; after the key frame stitching of the previous step, the parameters of the two cameras, including focal length and pixels, are essentially unchanged, the positions of the captured images are essentially unchanged, and the positions of the image feature points within the field of view are essentially unchanged, so that the positions of the overlap regions between the images are essentially unchanged; real-time wide-scene synthesis of the monitoring video is therefore achieved simply by applying the progressive fading image fusion operation to every subsequent frame pair to compose the wide-scene image and play it back.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103801506A CN102521816A (en) | 2011-11-25 | 2011-11-25 | Real-time wide-scene monitoring synthesis method for cloud data center room |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102521816A true CN102521816A (en) | 2012-06-27 |
Family
ID=46292720
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103801506A Pending CN102521816A (en) | 2011-11-25 | 2011-11-25 | Real-time wide-scene monitoring synthesis method for cloud data center room |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102521816A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101146231A (en) * | 2007-07-03 | 2008-03-19 | 浙江大学 | Method for generating panoramic video according to multi-visual angle video stream |
JP2010103730A (en) * | 2008-10-23 | 2010-05-06 | Clarion Co Ltd | Calibration device and calibration method of car-mounted camera |
CN101520897A (en) * | 2009-02-27 | 2009-09-02 | 北京机械工业学院 | Video camera calibration method |
Non-Patent Citations (1)
Title |
---|
王恺: "基于GPU的多摄像机全景视场拼接", 《中国优秀硕士学位论文全文数据库 信息科技辑》 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103945103A (en) * | 2013-01-17 | 2014-07-23 | 成都国腾电子技术股份有限公司 | Multi-plane secondary projection panoramic camera image distortion elimination method based on cylinder |
CN104240216A (en) * | 2013-06-07 | 2014-12-24 | 光宝电子(广州)有限公司 | Image correcting method, module and electronic device thereof |
CN103399652B (en) * | 2013-07-19 | 2017-02-22 | 哈尔滨工程大学 | 3D (three-dimensional) input method on basis of OpenCV (open source computer vision library) camera calibration |
CN103399652A (en) * | 2013-07-19 | 2013-11-20 | 哈尔滨工程大学 | 3D (three-dimensional) input method on basis of OpenCV (open source computer vision library) camera calibration |
CN103607542A (en) * | 2013-11-30 | 2014-02-26 | 深圳市金立通信设备有限公司 | Picture processing method and device and photographic equipment |
CN104506840A (en) * | 2014-12-25 | 2015-04-08 | 桂林远望智能通信科技有限公司 | Real-time stereoscopic video stitching device and real-time stereoscopic video feature method |
CN104574401A (en) * | 2015-01-09 | 2015-04-29 | 北京环境特性研究所 | Image registration method based on parallel line matching |
CN107644394A (en) * | 2016-07-21 | 2018-01-30 | 完美幻境(北京)科技有限公司 | A kind of processing method and processing device of 3D rendering |
CN107644394B (en) * | 2016-07-21 | 2021-03-30 | 完美幻境(北京)科技有限公司 | 3D image processing method and device |
CN110050243A (en) * | 2016-12-21 | 2019-07-23 | 英特尔公司 | It is returned by using the enhancing nerve of the middle layer feature in autonomous machine and carries out camera repositioning |
CN110050243B (en) * | 2016-12-21 | 2022-09-20 | 英特尔公司 | Camera repositioning by enhanced neural regression using mid-layer features in autonomous machines |
CN106954044A (en) * | 2017-03-22 | 2017-07-14 | 山东瀚岳智能科技股份有限公司 | A kind of method and system of video panoramaization processing |
CN107133580A (en) * | 2017-04-24 | 2017-09-05 | 杭州空灵智能科技有限公司 | A kind of synthetic method of 3D printing monitor video |
CN108683565A (en) * | 2018-05-22 | 2018-10-19 | 珠海爱付科技有限公司 | A kind of data processing system and method based on narrowband Internet of Things |
CN108683565B (en) * | 2018-05-22 | 2021-11-16 | 珠海爱付科技有限公司 | Data processing system based on narrowband Internet of things |
CN109615659A (en) * | 2018-11-05 | 2019-04-12 | 成都西纬科技有限公司 | A kind of the camera parameters preparation method and device of vehicle-mounted multiple-camera viewing system |
CN110120012B (en) * | 2019-05-13 | 2022-07-08 | 广西师范大学 | Video stitching method for synchronous key frame extraction based on binocular camera |
CN110120012A (en) * | 2019-05-13 | 2019-08-13 | 广西师范大学 | The video-splicing method that sync key frame based on binocular camera extracts |
CN112927128A (en) * | 2019-12-05 | 2021-06-08 | 晶睿通讯股份有限公司 | Image splicing method and related monitoring camera equipment |
CN112927128B (en) * | 2019-12-05 | 2023-11-24 | 晶睿通讯股份有限公司 | Image stitching method and related monitoring camera equipment thereof |
CN112215886A (en) * | 2020-10-10 | 2021-01-12 | 深圳道可视科技有限公司 | Panoramic parking calibration method and system |
CN112837225A (en) * | 2021-04-15 | 2021-05-25 | 浙江卡易智慧医疗科技有限公司 | Method and device for automatically and seamlessly splicing vertical full-spine images |
CN112837225B (en) * | 2021-04-15 | 2024-01-23 | 浙江卡易智慧医疗科技有限公司 | Automatic seamless splicing method and device for standing full-spine images |
CN114449130A (en) * | 2022-03-07 | 2022-05-06 | 北京拙河科技有限公司 | Multi-camera video fusion method and system |
CN114612613A (en) * | 2022-03-07 | 2022-06-10 | 北京拙河科技有限公司 | Dynamic light field reconstruction method and system |
CN114612613B (en) * | 2022-03-07 | 2022-11-29 | 北京拙河科技有限公司 | Dynamic light field reconstruction method and system |
CN116866522A (en) * | 2023-07-11 | 2023-10-10 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
CN116866522B (en) * | 2023-07-11 | 2024-05-17 | 广州市图威信息技术服务有限公司 | Remote monitoring method |
CN117934574A (en) * | 2024-03-22 | 2024-04-26 | 深圳市智兴盛电子有限公司 | Method, device, equipment and storage medium for optimizing image of automobile data recorder |
Legal Events
- C06 / PB01: Publication (application publication date: 2012-06-27)
- C10 / SE01: Entry into substantive examination / entry into force of request for substantive examination
- C02 / WD01: Invention patent application deemed withdrawn after publication (Patent Law 2001)