CN111080719A - Video registration method and device - Google Patents

Video registration method and device

Info

Publication number
CN111080719A
CN111080719A
Authority
CN
China
Prior art keywords
dimensional
camera
matrix
orientation element
calculating
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911367602.XA
Other languages
Chinese (zh)
Inventor
韩宇韬
吕琪菲
张至怡
陈银
王逸涛
曹粕佳
党建波
阳松江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Aerospace Shenkun Technology Co ltd
Original Assignee
Sichuan Aerospace Shenkun Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Aerospace Shenkun Technology Co ltd filed Critical Sichuan Aerospace Shenkun Technology Co ltd
Priority to CN201911367602.XA
Publication of CN111080719A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20164 Salient point detection; Corner detection
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Abstract

The application discloses a video registration method and device. Two-dimensional feature elements are extracted from a two-dimensional video image of a camera by a computer-vision feature extraction algorithm and used as two-dimensional control conditions, and the corresponding feature elements are selected from a three-dimensional virtual scene as three-dimensional control conditions. The inner orientation element matrix of the camera is calculated with a multi-image one-dimensional calibration method, and its precision is refined by the beam method (bundle adjustment) to obtain the inner orientation elements of the camera. The exterior orientation elements of the camera are then calculated with a multi-feature space resection algorithm using the two-dimensional and three-dimensional control conditions. The resulting camera calibration requires simple conditions, is simple and convenient to operate, has high precision, and allows video registration to be performed conveniently and quickly.

Description

Video registration method and device
Technical Field
The present application relates to the field of video registration technologies, and in particular, to a video registration method and apparatus.
Background
Registering a video image taken by a real camera into a virtual three-dimensional scene to enhance the realism of that scene is called video registration. During video registration, a virtual camera corresponding to the real camera is added to the virtual three-dimensional scene, so the camera parameters of the real camera, namely its inner and exterior orientation elements, must be obtained. The inner orientation elements describe the relative position between the photographing centre and the photograph; the exterior orientation elements determine the spatial position and attitude of the photographing beam at the moment of exposure. In computer vision and photogrammetry, solving these camera parameters for video registration is called camera calibration: computing the inner orientation elements is called internal calibration, and computing the exterior orientation elements is called external calibration, also known as space resection (rear intersection).
Currently, internal calibration methods can be divided into three-dimensional calibration, two-dimensional calibration, one-dimensional calibration and self-calibration, according to the geometric dimension of the calibration object used. External calibration methods mainly include the collinearity equation method, the pyramid method, the direct linear transformation method and the quaternion method.
However, for internal calibration, although three-dimensional and two-dimensional calibration methods can achieve high precision, the manufacturing cost of the calibration objects and the constraints of the calibration environment often make them impractical in video surveillance scenes. Self-calibration is flexible and convenient, but is difficult to carry out when scene geometry or camera motion information is insufficient. One-dimensional calibration objects are simple to construct and easy to operate, but because they provide few control points and weak geometric constraints, their calibration precision and noise suppression capability are relatively poor.
Moreover, for external calibration, the collinearity equation method is solved iteratively and needs good initial values; the pyramid method solves the exterior orientation line elements and angle elements separately, but still needs initial values for the line elements; the direct linear transformation method needs no initial values, but its required control conditions are demanding; quaternion methods are largely free of initial-value dependence, but are more complex to use.
Meanwhile, during calibration, elements such as points, lines and circles are usually selected manually in the two-dimensional video. The selected features may therefore not be the salient features of the scene, and manual selection is tedious.
In summary, a camera calibration method with simple conditions, simple operation and high precision is currently lacking, so video registration cannot be performed conveniently and quickly.
Disclosure of Invention
In view of these problems, the present invention provides a camera calibration method that requires simple conditions, is simple and convenient to operate, and has high precision, so that video registration can be carried out conveniently and quickly.
Based on the purpose, the technical scheme provided by the application is as follows:
a video registration method, comprising:
extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting the beam method (bundle adjustment) to obtain inner orientation elements of the camera;
and calculating the exterior orientation element of the camera by adopting a multi-feature rear intersection algorithm, the two-dimensional control condition and the three-dimensional control condition.
Preferably, the two-dimensional feature elements include corner points, horizontal lines, vertical lines, and horizontal circles;
the method for extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm as two-dimensional control conditions comprises the following steps:
extracting the corners in the two-dimensional video image by adopting a Harris corner detection algorithm;
performing edge detection processing on the two-dimensional video image by adopting a Canny operator, acquiring an edge binary image of the two-dimensional video image as input of Hough transformation, and determining the horizontal line, the vertical line and the horizontal circle in the two-dimensional video image;
extracting a predetermined number of the corner points, the horizontal lines, the vertical lines, and the horizontal circles as the two-dimensional control conditions.
Preferably, the calculating the inner orientation element matrix of the camera by using the multi-image one-dimensional calibration method includes:
calculating an image pair basic matrix by adopting a multi-image one-dimensional calibration method based on the basic matrix, and acquiring an initial value of a camera matrix;
and calculating a three-dimensional conversion matrix of a projection space and an Euclidean space by using the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix, and solving the internal orientation element matrix.
Preferably, the calculating the external orientation element of the camera by using the multi-feature back intersection algorithm and the two-dimensional control condition and the three-dimensional control condition includes:
respectively analyzing geometric constraints of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image on exterior orientation elements of the camera;
and constructing a linearization error equation according to the set constraint formula, and calculating the exterior orientation element according to the unified adjustment of the linearization error equation.
Preferably, the method further comprises the following steps:
and calculating a perspective projection matrix and an observation matrix of the camera in the three-dimensional virtual scene according to the inner orientation element and the outer orientation element, and registering the camera in the three-dimensional virtual scene.
A video registration apparatus, comprising:
the control condition selection module is used for extracting two-dimensional feature elements from a two-dimensional video image of the camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
the inner orientation element calculation module is used for calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting the beam method (bundle adjustment) to obtain inner orientation elements of the camera;
and the external orientation element calculation module is used for calculating the external orientation element of the camera by adopting a multi-feature back intersection algorithm, the two-dimensional control condition and the three-dimensional control condition.
Preferably, the two-dimensional feature elements include corner points, horizontal lines, vertical lines, and horizontal circles;
the control condition selection module comprises:
the angular point extracting unit is used for extracting the angular points in the two-dimensional video image by adopting a Harris angular point detection algorithm;
a determining unit, configured to perform edge detection processing on the two-dimensional video image by using a Canny operator, acquire an edge binary image of the two-dimensional video image as an input of Hough transformation, and determine the horizontal line, the vertical line, and the horizontal circle in the two-dimensional video image;
a control condition determining unit configured to extract a predetermined number of the corner points, the horizontal lines, the vertical lines, and the horizontal circles as the two-dimensional control conditions.
Preferably, the inner orientation element calculation module includes:
the acquiring unit is used for calculating an image pair basic matrix by adopting a multi-image one-dimensional calibration method based on the basic matrix and acquiring an initial value of a camera matrix;
and the solving unit is used for calculating a three-dimensional conversion matrix of a projection space and an Euclidean space by utilizing the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix and solving the internal orientation element matrix.
Preferably, the external orientation element calculation module includes:
the analysis unit is used for respectively analyzing the geometric constraint formulas of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image to the exterior orientation elements of the camera;
and the calculation unit is used for constructing a linearization error equation according to the set constraint formula, and calculating the external orientation element according to the unified adjustment of the linearization error equation.
Preferably, the method further comprises the following steps:
and the registration module is used for calculating a perspective projection matrix and an observation matrix of the camera in the three-dimensional virtual scene according to the inner orientation element and the outer orientation element and registering the camera in the three-dimensional virtual scene.
By applying the above technical solution, the video registration method and device provided by the application extract two-dimensional feature elements from a two-dimensional video image of a camera as two-dimensional control conditions using a computer-vision feature extraction algorithm, and select the corresponding feature elements from a three-dimensional virtual scene as three-dimensional control conditions; calculate the inner orientation element matrix of the camera with a multi-image one-dimensional calibration method and refine its precision with the beam method (bundle adjustment) to obtain the inner orientation elements; and calculate the exterior orientation elements of the camera with a multi-feature resection algorithm and the two-dimensional and three-dimensional control conditions. Because the multi-image one-dimensional calibration method and the beam method require only simple conditions, the inner orientation elements can be calibrated quickly and accurately. Because the exterior orientation elements are calculated by multi-feature resection, the resection problem is solved even when feature points are insufficient, and the precision and reliability of the result are ensured. Because the two-dimensional feature elements of the video image are extracted automatically with computer vision techniques, the calibration process is more convenient and the calibration features are more salient. The camera calibration method therefore requires simple conditions, is simple and convenient to operate, and has high precision, so video registration can be carried out conveniently and rapidly.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a video registration method provided in the present application;
fig. 2 is a schematic flowchart of another video registration method provided in the present application;
fig. 3 is a schematic flowchart of another video registration method provided in the present application;
fig. 4 is a schematic flowchart of another video registration method provided in the present application;
fig. 5 is a schematic structural diagram of a video registration apparatus provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The scheme of the present application will now be described in detail through specific embodiments:
fig. 1 is a schematic flowchart of a video registration method provided in the present application.
Referring to fig. 1, a video registration method provided in an embodiment of the present application includes:
s100: extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
the characteristics of angular points, horizontal lines, vertical lines, horizontal circles and the like of the video image are automatically extracted by adopting a computer vision technology, so that the calibration process is more convenient, and the calibration characteristics are more obvious.
Referring to the description and the attached fig. 2, in the embodiment of the present application, the two-dimensional feature elements include corner points, horizontal lines, vertical lines, and horizontal circles;
extracting two-dimensional feature elements in a two-dimensional video image of a camera as a two-dimensional control condition based on a computer visual feature extraction algorithm, which may include:
s101: extracting angular points in the two-dimensional video image by adopting a Harris angular point detection algorithm;
the basic principle of extracting the corners in the video image by using Harris corner detection is as follows: the corner points are the most obvious and important features on an image, and for the first derivative, the change of the corner points in each direction is the largest, while the edge regions have obvious change in only one direction.
Harris corner formula:

E(u, v) = Σ w(x, y) [I(x + u, y + v) - I(x, y)]^2

where w(x, y) represents the moving window and I(x, y) represents the pixel gray value, in the range 0-255. Expanding to first order with the Taylor series gives the Harris matrix formula:

M = Σ w(x, y) [Ix^2, IxIy; IxIy, Iy^2]

The value of the window function w(x, y) is 1 inside the window; M is the matrix of summed derivative products, and Ix and Iy are the intensities of change of the image pixels in the x and y directions.

The matrix eigenvalues are calculated from the Harris matrix M, and then the Harris corner response value:

R = det M - k (trace M)^2
det M = λ1 λ2
trace M = λ1 + λ2

where λ1 and λ2 are the eigenvalues of M, and k is a coefficient value, usually ranging from 0.04 to 0.06.
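As an illustration of the response computation above, the following is a minimal NumPy sketch of the Harris response R = det M - k(trace M)^2 with a uniform window; the synthetic test image, the 3x3 window size and the use of np.gradient for the derivatives are illustrative assumptions, not details from the patent:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def harris_response(img, k=0.04, win=3):
    """Harris corner response R = det(M) - k*(trace M)^2 per pixel,
    with a uniform window (w(x, y) = 1 inside the window)."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)          # first-order partial derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                        # sum over the win x win window
        pad = win // 2
        return sliding_window_view(np.pad(a, pad), (win, win)).sum(axis=(2, 3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det_m = Sxx * Syy - Sxy ** 2       # det M = lambda1 * lambda2
    trace_m = Sxx + Syy                # trace M = lambda1 + lambda2
    return det_m - k * trace_m ** 2

# A white square on a black background: responses are strongly positive
# at the four corners and negative along the edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
R = harris_response(img)
```

At an edge only one eigenvalue of M is large, so R is negative there; at a corner both eigenvalues are large and R is positive, which is exactly the criterion described above.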
S102: performing edge detection processing on the two-dimensional video image by adopting a Canny operator, acquiring an edge binary image of the two-dimensional video image as input of Hough transformation, and determining a horizontal line, a vertical line and a horizontal circle in the two-dimensional video image;
and (3) carrying out edge detection processing on the image by adopting a Canny operator, acquiring an image edge binary image as input of Hough transformation, searching specific positions of straight lines, circles or ellipses and the like in the image by clicking a manual mouse, and selecting a proper amount of angular points, horizontal lines, vertical lines and horizontal circles as two-dimensional control conditions.
Steps of the Canny operator:
a. smooth the image with a Gaussian filter to remove noise;
b. compute the gradient magnitude and direction at each pixel of the image;
c. apply non-maximum suppression to eliminate spurious responses from edge detection;
d. apply double-threshold detection to determine true and potential edges;
e. finally complete edge detection by suppressing isolated weak edges.
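The edge map produced by the Canny operator then feeds the Hough transform. The voting scheme behind Hough line detection can be sketched in a few lines of NumPy (a toy implementation on a synthetic edge image; the accumulator resolution and the test image are illustrative assumptions):

```python
import numpy as np

def hough_strongest_line(edges, n_theta=180):
    """Vote every edge pixel into a (rho, theta) accumulator and return
    the strongest straight line, with rho = x*cos(theta) + y*sin(theta)."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))          # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1    # one vote per theta
    rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
    return rho_i - diag, np.degrees(thetas[theta_i])

# Synthetic edge binary image containing one vertical line at x = 7.
edges = np.zeros((30, 30), dtype=bool)
edges[:, 7] = True
rho, theta_deg = hough_strongest_line(edges)
```

In this parameterisation a vertical image line peaks at theta = 0 and a horizontal one at theta = 90 degrees, which is how the horizontal and vertical control lines can be separated.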
S103: a predetermined number of corner points, horizontal lines, vertical lines, and horizontal circles are extracted as two-dimensional control conditions.
The predetermined number here may be set as appropriate, such as 6 or more.
In the three-dimensional virtual scene, the corresponding corner points, horizontal lines, vertical lines, line segments and horizontal circles can be extracted manually by mouse click to serve as the three-dimensional control conditions.
S200: calculating an inner orientation element matrix of the camera by utilizing a multi-item one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting a light beam method to obtain inner orientation elements of the camera;
Referring to fig. 3, in the embodiment of the present application, calculating the inner orientation element matrix of the camera with the multi-image one-dimensional calibration method may include:
s201: calculating an image pair basic matrix by adopting a multi-image one-dimensional calibration method based on the basic matrix, and acquiring an initial value of a camera matrix;
s202: and calculating a three-dimensional conversion matrix of a projection space and an Euclidean space by using the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix, and solving an internal orientation element matrix.
A multi-image one-dimensional calibration method based on the basic (fundamental) matrix is adopted. The method first computes the basic matrix of each image pair to obtain, at the start of calibration, a reasonably accurate initial value of the camera matrix in the projective sense; it then makes full use of the geometric constraints contained in the one-dimensional calibration object to compute the three-dimensional transformation matrix H between projective space and Euclidean space, and finally solves the inner orientation element matrix of the camera.
The multi-image one-dimensional calibration method based on the basic matrix has the advantages of simple equipment required in the calibration process, convenience in operation, high calculation speed and no need of any priori knowledge, and the one-dimensional calibration object can make any rigid motion, so that a better calibration result can be obtained. The method is suitable for quick calibration of the camera in a non-laboratory environment.
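For the first step above (computing the image-pair basic matrix), a common choice is the normalized eight-point algorithm. The following NumPy sketch estimates a fundamental matrix from synthetic correspondences; the camera intrinsics and geometry are invented for the example, and nothing here is specific to the patent's procedure:

```python
import numpy as np

def eight_point_F(x1, x2):
    """Normalized 8-point estimate of F with x2h^T F x1h = 0
    for corresponding image points x1, x2 (Nx2 arrays, N >= 8)."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of A f = 0 per correspondence (f = F flattened row-major).
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1                  # undo the normalization

# Synthetic two-view data: project random 3-D points with two cameras.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + [0.0, 0.0, 5.0]
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])  # translated camera

def project(P, X):
    xh = (P @ np.column_stack([X, np.ones(len(X))]).T).T
    return xh[:, :2] / xh[:, 2:]

x1, x2 = project(P1, X), project(P2, X)
F = eight_point_F(x1, x2)
```

With exact correspondences the epipolar residuals x2h^T F x1h vanish, and the rank-2 step guarantees F has the degenerate structure a fundamental matrix must have.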
In the beam method (bundle adjustment), the image points of the undetermined points and control points on one photograph, the photographing centre, and the corresponding ground points form a bundle of rays. Beam-method block aerial triangulation takes the bundle of rays composed by one image as the basic unit of adjustment and the collinearity equations of central projection as the basic equations. Through rotation and translation of each bundle in space, the rays of points shared between models are made to intersect optimally and the whole block is fitted into the ground coordinate system of the known control points; the rotation corresponds to the exterior orientation angle elements of the bundle, and the translation corresponds to the space coordinates of the exposure station. A unified error equation is thus established over the whole block, and the six exterior orientation elements of every photograph and the ground coordinates of all points to be determined are solved as a whole. With redundant observations, measurement errors in the image point coordinates mean that corresponding rays of adjacent images should intersect as closely as possible, and the densified coordinates of the control points should agree with the ground survey coordinates.
Although the inner orientation element matrix can be solved as above, the matrix computed from the basic matrix is not very accurate once the influence of noise and the disturbance of radial distortion on collinearity are considered. The beam-method computation is a nonlinear iterative process that takes the camera matrices and space points as optimization variables and minimizes the reprojection error; when the image-plane measurement errors are independent and follow an isotropic zero-mean Gaussian distribution, the beam method yields the optimal result in the maximum-likelihood sense. Therefore, after the initial value of the inner orientation element matrix is computed, the beam method is used to compute the inner orientation element matrix accurately.
The optimization steps of the beam method are mainly:
(1) obtain approximate values of the exterior orientation elements of each photograph and of the coordinates of the undetermined points;
(2) starting from the image coordinates of the control points and undetermined points on each image, list the error equations according to the collinearity equations;
(3) establish the reduced normal equations point by point and solve one class of unknowns by the cyclic partitioning method, usually the exterior orientation elements of each photograph first;
(4) solve the ground coordinates of the undetermined points by forward space intersection, taking the mean of the values from points shared by adjacent photographs as the final result.
The precision can be further improved by beam-method adjustment, that is, nonlinear optimization of the reprojection error of the multi-feature resection.
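The refinement can be illustrated with a tiny Gauss-Newton loop that minimises the reprojection error, in the spirit of the beam method. For brevity only the focal length is treated as unknown here (a real beam-method adjustment also refines the pose, the remaining inner orientation elements and the 3-D points), and all data are synthetic:

```python
import numpy as np

def reproj_residuals(f, X, x_obs, R, t, c=(320.0, 240.0)):
    """Reprojection residuals for focal length f with fixed pose (R, t)."""
    K = np.array([[f, 0, c[0]], [0, f, c[1]], [0, 0, 1.0]])
    xh = (K @ (R @ X.T + t)).T
    return (xh[:, :2] / xh[:, 2:] - x_obs).ravel()

def refine_focal(f0, X, x_obs, R, t, iters=10):
    """Gauss-Newton on a single parameter: solve (J^T J) df = -J^T r."""
    f = f0
    for _ in range(iters):
        r = reproj_residuals(f, X, x_obs, R, t)
        # Numerical Jacobian dr/df by forward difference.
        J = (reproj_residuals(f + 1e-4, X, x_obs, R, t) - r) / 1e-4
        f -= (J @ r) / (J @ J)
    return f

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (30, 3)) + [0.0, 0.0, 6.0]
R, t = np.eye(3), np.zeros((3, 1))
# Observations generated with the true focal length f = 850.
x_obs = reproj_residuals(850.0, X, np.zeros((30, 2)), R, t).reshape(-1, 2)
f_est = refine_focal(700.0, X, x_obs, R, t)   # start from a poor guess
```

Because the residuals here are linear in f, the Gauss-Newton step recovers the true focal length essentially in one iteration; the full multi-parameter problem is nonlinear and needs the iterative scheme described above.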
S300: and calculating the exterior orientation element of the camera by adopting a multi-feature rear intersection algorithm, a two-dimensional control condition and a three-dimensional control condition.
Referring to fig. 4, in the embodiment of the present application, calculating the exterior orientation elements of the camera with the multi-feature resection algorithm and the two-dimensional and three-dimensional control conditions may include:
s301: respectively analyzing geometric constraint formulas of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image to exterior orientation elements of the camera;
From the straight-line equation of the horizontal line and the ground survey coordinates (U1, V1, W1), (U2, V2, W2), the constraint condition of the horizontal line on the exterior orientation elements of the camera is obtained:

[Equation (1) is rendered as an image in the original and is not reproduced here.]

where A, B, C are the coefficients of the straight-line equation and (U1, V1, W1), (U2, V2, W2) are the ground survey coordinates.
"nadir" (x) on the image according to the vertical lined,yd) And obtaining the constraint condition of the vertical line to the exterior orientation element of the camera:
Figure BDA0002338843790000101
in the formula (x)d,yd) Is the image bottom point.
From a line segment with measured distance, the vanishing point (xv, yv) in the direction of the segment can be calculated, and the constraint condition of the line segment on the exterior orientation elements of the camera is obtained:

[Equation (3) is rendered as an image in the original and is not reproduced here.]

where (xv, yv) is the vanishing point.
From four points on a horizontal circle on the ground, combined with Cramer's rule, the condition for four coplanar points to be concyclic is:

[Equation (4) is rendered as an image in the original and is not reproduced here.]
s302: and constructing a linearization error equation according to the set constraint formula, uniformly balancing according to the linearization error equation, and calculating the exterior orientation element.
Combining equations (1), (2), (3) and (4), the linearized error equations of the horizontal lines, vertical lines, line segments and horizontal circles can be listed:
[Equations (5) and (6) are rendered as images in the original and are not reproduced here.]

In the formulas, F0 is the value of F obtained by substituting the approximate values of the exterior orientation elements into equation (5), and dXs, dYs, dZs, dφ, dω and dκ are the corrections to the approximate values of the exterior orientation elements.
The geometric constraints of the horizontal lines, vertical lines, line segments and horizontal circles on the exterior orientation elements of the camera are analysed, these features are resolved into point observations, substituted into the collinearity equations, and adjusted in a unified way. With a given set of features, this introduces more observations, which improves the accuracy and stability of the result to a certain extent.
The multi-feature back intersection method greatly reduces the dependence of traditional back intersection on the number and distribution of feature points and solves the back intersection problem when feature points are insufficient. It gives collinearity-equation-based back intersection a wider range of application and, by introducing more observations, ensures the accuracy and reliability of the result, a clear advantage over the traditional method.
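In least-squares terms, the unified adjustment solves the overdetermined linearized system A·dx ≈ L through the normal equations AᵀA·dx = AᵀL, where each feature contributes rows to the design matrix A. The pure-Python sketch below shows only that generic solve on a toy system; the design matrix and values are illustrative and do not reproduce the patent's Jacobian for the four feature types.

```python
def normal_equations_solve(A, L):
    """Least-squares solution dx of A.dx ~ L via the normal equations
    (A^T A) dx = A^T L, solved by Gaussian elimination with pivoting."""
    m, n = len(A), len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atl = [sum(A[k][i] * L[k] for k in range(m)) for i in range(n)]
    M = [ata[i] + [atl[i]] for i in range(n)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

# Toy adjustment: recover parameters (a, b) from observations y = a*x + b.
A = [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0], [3.0, 1.0]]
L = [1.0, 3.0, 5.0, 7.0]
dx = normal_equations_solve(A, L)
```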
Further, after the inner orientation elements and exterior orientation elements for video registration are obtained, the perspective projection matrix and the observation matrix of the camera in the three-dimensional virtual scene can be calculated from them, and the camera is thereby registered in the three-dimensional virtual scene.
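As a hedged illustration of that final step, a pinhole model gives the camera's projection matrix as P = K[R | t], where K carries the inner orientation elements (principal distance and principal point) and [R | t] the exterior orientation elements. The helper functions and numeric values below are illustrative assumptions, not taken from the patent:

```python
def matmul(A, B):
    # Generic dense matrix product (used here for 3x3 times 3x4).
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def projection_matrix(f, cx, cy, R, t):
    """P = K [R | t]: inner orientation (f, cx, cy) composed with the
    exterior orientation (rotation R, translation t)."""
    K = [[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]]
    Rt = [R[i] + [t[i]] for i in range(3)]   # 3x4 matrix [R | t]
    return matmul(K, Rt)

def project(P, X):
    """Project world point X = (x, y, z) to pixel coordinates (u, v)."""
    Xh = list(X) + [1.0]
    xh = [sum(P[i][j] * Xh[j] for j in range(4)) for i in range(3)]
    return xh[0] / xh[2], xh[1] / xh[2]

# Illustrative camera: identity rotation, zero translation,
# principal distance 100 px, principal point (50, 50).
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
P = projection_matrix(100.0, 50.0, 50.0, I3, [0.0, 0.0, 0.0])
```

A point on the optical axis lands on the principal point, and off-axis points are displaced in proportion to X/Z and Y/Z, which is exactly the behaviour the registered virtual camera must reproduce.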
In the video registration method provided by the present application, two-dimensional feature elements are extracted from a two-dimensional video image of a camera by a computer-vision feature extraction algorithm and used as two-dimensional control conditions, and the corresponding feature elements are selected from a three-dimensional virtual scene as three-dimensional control conditions; an inner orientation element matrix of the camera is calculated by a multi-image one-dimensional calibration method and its precision is optimized by the light beam (bundle adjustment) method to obtain the inner orientation elements of the camera; and the exterior orientation elements of the camera are calculated by a multi-feature back intersection algorithm using the two-dimensional and three-dimensional control conditions. When the multi-image one-dimensional calibration method and the light beam method are used to compute the inner orientation elements, the required conditions are simple and calibration is fast and accurate. Computing the exterior orientation elements by multi-feature back intersection solves the back intersection problem when the number of feature points is insufficient and ensures the accuracy and reliability of the result. Because computer vision techniques extract the two-dimensional feature elements automatically, the calibration process is more convenient and the calibration features are more distinct; the method requires only simple conditions, is easy to operate, achieves high precision, and allows video registration to be carried out conveniently and rapidly.
This technique handles video scenes whose environment is complex and where precise control points cannot easily be placed, and is well suited to registering video images in a virtual three-dimensional scene.
Fig. 5 is a schematic structural diagram of a video registration apparatus provided in the present application.
Referring to fig. 5, a video registration apparatus according to an embodiment of the present application includes:
the control condition selection module 1 is used for extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
preferably, the two-dimensional feature elements include corner points, horizontal lines, vertical lines, and horizontal circles;
the control condition selection module 1 may include:
the corner extraction unit is used for extracting corner points in the two-dimensional video image by adopting a Harris corner detection algorithm;
the determining unit is used for carrying out edge detection processing on the two-dimensional video image by adopting a Canny operator, acquiring an edge binary image of the two-dimensional video image as the input of Hough transformation, and determining a horizontal line, a vertical line and a horizontal circle in the two-dimensional video image;
a control condition determining unit for extracting a predetermined number of corner points, horizontal lines, vertical lines, and horizontal circles as two-dimensional control conditions.
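A concrete, simplified picture of what the corner extraction unit computes: the Harris response R = det(M) − k·trace(M)², where M is the windowed structure tensor of image gradients. A production unit would call a library routine; this pure-Python sketch, with an illustrative 12x16 synthetic image, 3x3 window, and k = 0.04, only demonstrates why the response peaks at corners, goes negative on edges, and stays near zero on flat regions.

```python
def harris_response(img, k=0.04):
    """Map of the Harris response R = det(M) - k*trace(M)^2, where M is
    the 3x3-window structure tensor of central-difference gradients."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    resp = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):            # 3x3 structure-tensor window
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            resp[y][x] = (sxx * syy - sxy * sxy) - k * (sxx + syy) ** 2
    return resp

# Synthetic 12x16 image: a bright rectangle whose top-left corner sits at
# pixel (row 4, col 4); (row 4, col 8) lies on its top edge.
img = [[1.0 if (4 <= y < 8 and 4 <= x < 12) else 0.0 for x in range(16)]
       for y in range(12)]
R = harris_response(img)
```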
The inner orientation element calculation module 2 is used for calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting a light beam method to obtain inner orientation elements of the camera;
preferably, the inner orientation element calculation module 2 may include:
the acquiring unit is used for calculating the fundamental matrix of an image pair by adopting a multi-image one-dimensional calibration method based on the fundamental matrix, and acquiring an initial value of a camera matrix;
and the solving unit is used for calculating a three-dimensional transformation matrix between projective space and Euclidean space by utilizing the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix, and solving the inner orientation element matrix.
And the external orientation element calculation module 3 is used for calculating the external orientation element of the camera by adopting a multi-feature back intersection algorithm, a two-dimensional control condition and a three-dimensional control condition.
Preferably, the external orientation element calculation module 3 may include:
the analysis unit is used for respectively analyzing the geometric constraint formulas of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image to the exterior orientation elements of the camera;
and the calculation unit is used for constructing a linearization error equation according to the set constraint formula, uniformly balancing according to the linearization error equation and calculating the exterior orientation element.
Further, the video registration apparatus may further include:
and the registration module is used for calculating a perspective projection matrix and an observation matrix of the camera in the three-dimensional virtual scene according to the internal orientation element and the external orientation element, and registering the camera in the three-dimensional virtual scene.
The video registration apparatus provided in the embodiment of the present application corresponds to the video registration method in the above method embodiment, and may refer to each other, which is not described in detail herein.
The present application provides a video registration apparatus in which two-dimensional feature elements are extracted from a two-dimensional video image of a camera by a computer-vision feature extraction algorithm and used as two-dimensional control conditions, and the corresponding feature elements are selected from a three-dimensional virtual scene as three-dimensional control conditions; an inner orientation element matrix of the camera is calculated by a multi-image one-dimensional calibration method and its precision is optimized by the light beam (bundle adjustment) method to obtain the inner orientation elements of the camera; and the exterior orientation elements of the camera are calculated by a multi-feature back intersection algorithm using the two-dimensional and three-dimensional control conditions. When the multi-image one-dimensional calibration method and the light beam method are used to compute the inner orientation elements, the required conditions are simple and calibration is fast and accurate. Computing the exterior orientation elements by multi-feature back intersection solves the back intersection problem when the number of feature points is insufficient and ensures the accuracy and reliability of the result. Because computer vision techniques extract the two-dimensional feature elements automatically, the calibration process is more convenient and the calibration features are more distinct; the apparatus requires only simple conditions, is easy to operate, achieves high precision, and allows video registration to be carried out conveniently and rapidly.
This technique handles video scenes whose environment is complex and where precise control points cannot easily be placed, and is well suited to registering video images in a virtual three-dimensional scene.
It should be noted that the embodiments in this specification are described in a progressive manner: each embodiment focuses on its differences from the other embodiments, and the same or similar parts of the embodiments may be cross-referenced. Since the apparatus embodiment is substantially similar to the method embodiment, its description is kept brief; for relevant details, refer to the description of the method embodiment.
Finally, it should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The video registration method and apparatus provided by the present invention have been described in detail above. Specific examples are used herein to explain the principle and implementation of the invention, and the description of the embodiments is intended only to help in understanding the method and its core idea. A person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A video registration method, comprising:
extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting a light beam method to obtain inner orientation elements of the camera;
and calculating the exterior orientation elements of the camera by adopting a multi-feature back intersection algorithm, the two-dimensional control condition and the three-dimensional control condition.
2. The video registration method of claim 1, wherein the two-dimensional feature elements comprise corner points, horizontal lines, vertical lines, and horizontal circles;
the method for extracting two-dimensional feature elements from a two-dimensional video image of a camera based on a computer visual feature extraction algorithm as two-dimensional control conditions comprises the following steps:
extracting the corners in the two-dimensional video image by adopting a Harris corner detection algorithm;
performing edge detection processing on the two-dimensional video image by adopting a Canny operator, acquiring an edge binary image of the two-dimensional video image as input of Hough transformation, and determining the horizontal line, the vertical line and the horizontal circle in the two-dimensional video image;
extracting a predetermined number of the corner points, the horizontal lines, the vertical lines, and the horizontal circles as the two-dimensional control conditions.
3. The video registration method of claim 1, wherein the calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method comprises:
calculating the fundamental matrix of an image pair by adopting a multi-image one-dimensional calibration method based on the fundamental matrix, and acquiring an initial value of a camera matrix;
and calculating a three-dimensional transformation matrix between projective space and Euclidean space by using the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix, and solving the inner orientation element matrix.
4. The video registration method of claim 1, wherein the calculating the exterior orientation elements of the camera by adopting the multi-feature back intersection algorithm, the two-dimensional control condition and the three-dimensional control condition comprises:
respectively analyzing geometric constraints of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image on exterior orientation elements of the camera;
and constructing a linearization error equation according to the set constraint formula, and calculating the exterior orientation element according to the unified adjustment of the linearization error equation.
5. The video registration method of claim 1, further comprising:
and calculating a perspective projection matrix and an observation matrix of the camera in the three-dimensional virtual scene according to the inner orientation element and the outer orientation element, and registering the camera in the three-dimensional virtual scene.
6. A video registration apparatus, comprising:
the control condition selection module is used for extracting two-dimensional feature elements from a two-dimensional video image of the camera based on a computer visual feature extraction algorithm to serve as two-dimensional control conditions, and selecting feature elements corresponding to the two-dimensional feature elements from a three-dimensional virtual scene to serve as three-dimensional control conditions;
the inner orientation element calculation module is used for calculating an inner orientation element matrix of the camera by utilizing a multi-image one-dimensional calibration method, and optimizing the precision of the inner orientation element matrix by adopting a light beam method to obtain inner orientation elements of the camera;
and the external orientation element calculation module is used for calculating the external orientation element of the camera by adopting a multi-feature back intersection algorithm, the two-dimensional control condition and the three-dimensional control condition.
7. The video registration apparatus of claim 6, wherein the two-dimensional feature elements comprise corner points, horizontal lines, vertical lines, and horizontal circles;
the control condition selection module comprises:
the corner extracting unit is used for extracting the corner points in the two-dimensional video image by adopting a Harris corner detection algorithm;
a determining unit, configured to perform edge detection processing on the two-dimensional video image by using a Canny operator, acquire an edge binary image of the two-dimensional video image as an input of Hough transformation, and determine the horizontal line, the vertical line, and the horizontal circle in the two-dimensional video image;
a control condition determining unit configured to extract a predetermined number of the corner points, the horizontal lines, the vertical lines, and the horizontal circles as the two-dimensional control conditions.
8. The video registration apparatus of claim 6, wherein the inner orientation element calculation module comprises:
the acquiring unit is used for calculating the fundamental matrix of an image pair by adopting a multi-image one-dimensional calibration method based on the fundamental matrix, and acquiring an initial value of a camera matrix;
and the solving unit is used for calculating a three-dimensional transformation matrix between projective space and Euclidean space by utilizing the geometric constraint of the one-dimensional calibration object and the initial value of the camera matrix, and solving the inner orientation element matrix.
9. The video registration apparatus of claim 6, wherein the external orientation element calculation module comprises:
the analysis unit is used for respectively analyzing the geometric constraint formulas of horizontal lines, vertical lines, line segments and horizontal circles in the two-dimensional video image to the exterior orientation elements of the camera;
and the calculation unit is used for constructing a linearization error equation according to the set constraint formula, and calculating the external orientation element according to the unified adjustment of the linearization error equation.
10. The video registration apparatus of claim 6, further comprising:
and the registration module is used for calculating a perspective projection matrix and an observation matrix of the camera in the three-dimensional virtual scene according to the inner orientation element and the outer orientation element and registering the camera in the three-dimensional virtual scene.
CN201911367602.XA 2019-12-26 2019-12-26 Video registration method and device Pending CN111080719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911367602.XA CN111080719A (en) 2019-12-26 2019-12-26 Video registration method and device

Publications (1)

Publication Number Publication Date
CN111080719A true CN111080719A (en) 2020-04-28

Family

ID=70318370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911367602.XA Pending CN111080719A (en) 2019-12-26 2019-12-26 Video registration method and device

Country Status (1)

Country Link
CN (1) CN111080719A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1176559A2 (en) * 2000-07-21 2002-01-30 Sony Computer Entertainment America Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program
CN107370780A (en) * 2016-05-12 2017-11-21 腾讯科技(北京)有限公司 Media push methods, devices and systems based on internet

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HOU SHOUMING et al.: "A Survey of Vision-Based 3D Registration Technology for Augmented Reality", Journal of System Simulation, vol. 31, no. 11, pages 2206 - 2215 *
ZHOU FAN: "Research on Registration and Rendering Methods for Video Images Augmenting Virtual 3D Scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 2014, pages 138 - 16 *

Similar Documents

Publication Publication Date Title
US9965870B2 (en) Camera calibration method using a calibration target
Abraham et al. Fish-eye-stereo calibration and epipolar rectification
CN107767440B (en) Cultural relic sequence image fine three-dimensional reconstruction method based on triangulation network interpolation and constraint
Wöhler 3D computer vision: efficient methods and applications
Herráez et al. 3D modeling by means of videogrammetry and laser scanners for reverse engineering
CN107155341B (en) Three-dimensional scanning system and frame
CN105379264A (en) System and method for imaging device modelling and calibration
CN109272574B (en) Construction method and calibration method of linear array rotary scanning camera imaging model based on projection transformation
CN111735439B (en) Map construction method, map construction device and computer-readable storage medium
WO2012048304A1 (en) Rapid 3d modeling
EP2022007A2 (en) System and architecture for automatic image registration
Hoegner et al. Mobile thermal mapping for matching of infrared images with 3D building models and 3D point clouds
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
Song et al. Modeling deviations of rgb-d cameras for accurate depth map and color image registration
Galego et al. Uncertainty analysis of the DLT-Lines calibration algorithm for cameras with radial distortion
Verykokou et al. Exterior orientation estimation of oblique aerial images using SfM-based robust bundle adjustment
CN113763478B (en) Unmanned vehicle camera calibration method, device, equipment, storage medium and system
CN112017259B (en) Indoor positioning and image building method based on depth camera and thermal imager
Deng et al. Automatic true orthophoto generation based on three-dimensional building model using multiview urban aerial images
RU2692970C2 (en) Method of calibration of video sensors of the multispectral system of technical vision
CN112819900B (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography
CN111080719A (en) Video registration method and device
Tagoe et al. Determination of the Interior Orientation Parameters of a Non-metric Digital Camera for Terrestrial Photogrammetric Applications
CN114972536B (en) Positioning and calibrating method for aviation area array swing scanning type camera
Wu et al. Spatio-temporal fish-eye image processing based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination