CN111127564B - Video image correction method based on geometric positioning model - Google Patents


Info

Publication number
CN111127564B
Authority
CN
China
Prior art keywords
camera
coordinates
image
model
ground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911338034.0A
Other languages
Chinese (zh)
Other versions
CN111127564A (en)
Inventor
崔子豪
邢力
肖骥
赵李明
李小俊
刘驰
Current Assignee
Smart City Research Institute Of China Electronics Technology Group Corp
Original Assignee
Smart City Research Institute Of China Electronics Technology Group Corp
Priority date
Filing date
Publication date
Application filed by Smart City Research Institute Of China Electronics Technology Group Corp
Priority to CN201911338034.0A
Publication of CN111127564A
Application granted
Publication of CN111127564B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

A video image correction method based on a geometric positioning model comprises the following steps: S1: constructing an initial geometric positioning model of the camera; S2: improving the positioning accuracy of the geometric positioning model through calibration; S3: constructing a geometric positioning model of a virtual camera; S4: resampling the real image with the virtual camera model to generate a virtual image. When the video image is projected onto a map for display, the invention eliminates the projection errors caused by camera imaging parameter errors and by elevation errors of the projection surface, so that the projected image and the map base map can be completely superposed.

Description

Video image correction method based on geometric positioning model
Technical Field
The invention relates to the technical field of video image correction, in particular to a video image correction method based on a geometric positioning model.
Background
In the geometric positioning model of a camera, the measured internal and external orientation elements contain errors, so positions computed with the model carry a certain deviation, and the deviation may differ across different parts of the image. Imaging parameter errors divide into external orientation element errors and internal orientation element errors: the former comprise the mounting-position error and shooting-angle error of the video camera; the latter comprise the focal-length error of the camera, the size error of the imaging detector elements, and lens distortion.
The current practice is to compute these imaging parameter errors by calibrating the camera and then recover an accurate geometric positioning model. However, current three-dimensional GIS platforms such as Skyline support only the camera position, shooting angle and horizontal view angle in their video-projection configuration, and cannot accept a complete set of geometric positioning model parameters. Skyline's configuration is therefore valid only for an ideal pinhole camera with no internal orientation element errors, which does not match the actual imaging situation; because of imaging parameter errors, the projected image shows positional deviation and deformation. In addition, when the video image is projected onto a reference plane, or onto uneven terrain whose elevation data are inaccurate, the elevation error displaces the projected image.
In summary, camera imaging parameter errors, the GIS platform's limited camera projection settings, and elevation errors together prevent the image projected through the GIS platform from coinciding exactly with the map base map. Even if the imaging parameters are adjusted manually to remove the errors caused by the external orientation elements, the projected image still shows local deformation, which hinders stitching of multiple video channels and extraction of moving-target trajectories.
Disclosure of Invention
(I) Objects of the invention
In order to solve the technical problems in the background art, the invention provides a video image correction method based on a geometric positioning model which, when a video image is projected onto a map for display, eliminates the projection errors caused by camera imaging parameter errors and projection-surface elevation errors; within a certain precision range, the projected image and the map base map can be completely superposed.
(II) technical scheme
In order to solve the above problems, the present invention provides a video image correction method based on a geometric positioning model, comprising the following steps:
s1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the initial external orientation element parameters and the internal orientation element parameters of the camera; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
s102: constructing a geometric positioning model by using the initial internal and external orientation elements of the camera as shown in formula A:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\,R_U\begin{bmatrix}x-x_0+\Delta x\\ y-y_0+\Delta y\\ -f\end{bmatrix}\qquad\text{(A)}$$
in formula A, x and y are the image-point coordinates; Δx and Δy are the distortion errors of the internal orientation elements; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; R_U is an offset matrix that absorbs the installation error of the camera load; m is a scale factor;
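As an illustration only (not part of the patent), formula A can be evaluated numerically. The Python sketch below assumes a ZYX Euler convention for R_CW (the patent does not state one) and eliminates the scale factor m by intersecting the image ray with a horizontal plane of known elevation; all function names are hypothetical.

```python
import numpy as np

def rotation_zyx(yaw, pitch, roll):
    """Camera-to-geodetic rotation R_CW from attitude angles.
    The ZYX Euler order is an assumption, not stated in the patent."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def ground_from_image(x, y, f, x0, y0, dx, dy, cam_center, R_cw, R_u, plane_z):
    """Formula A with the projection plane Z = plane_z: the ray direction in
    the geodetic frame is R_CW R_U [x - x0 + dx, y - y0 + dy, -f]^T, and the
    scale factor m follows from the known plane elevation."""
    ray = R_cw @ R_u @ np.array([x - x0 + dx, y - y0 + dy, -f])
    m = (plane_z - cam_center[2]) / ray[2]   # solve m from the Z component
    return cam_center + m * ray
```

For a nadir-looking camera 100 units above the origin, the image point (10, 0) with f = 100 maps to the ground point (10, 0, 0), which matches the expected similar-triangle geometry.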
s2: the positioning accuracy of the geometric positioning model is improved through calibration;
S201: calculating the compensation parameters of the camera from ground control point data and recovering the correct imaging parameters; the control points are several points with known ground coordinates, for which the ground coordinates (X, Y, Z) and the image-point coordinates (x, y) on the image are obtained;
S202: solving the position compensation parameters, attitude compensation parameters, lens distortion parameters and imaging detector-element size of the camera by the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve the positioning accuracy of the model;
s3: constructing a geometric positioning model of a virtual camera, and eliminating camera imaging parameter errors by constructing a virtual camera; the specific form of the geometric imaging model of the virtual camera is shown as formula B:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\begin{bmatrix}x-x_0\\ y-y_0\\ -f\end{bmatrix}\qquad\text{(B)}$$
in formula B, x and y are the image-point coordinates; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor;
S4: resampling the real image by using a virtual camera model to generate a virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (x_P, y_P) of the virtual image and converting them into the image-point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model of the virtual camera's geometric positioning model, which may be a chosen reference plane or the real terrain surface; substituting the image-point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image-point coordinates (x′, y′) on the real image, and converting them into the pixel coordinates (x_P′, y_P′) of the real image;
S404: obtaining by bilinear interpolation the gray value at the real-image pixel coordinates (x_P′, y_P′) and assigning it to the virtual-image pixel (x_P, y_P);
s405: and traversing all pixels of the virtual image, and repeating S401-S404 to generate a final virtual camera image.
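The resampling loop S401-S405 can be sketched as follows. This is an illustrative Python sketch: the two positioning models (formula B combined with the projection elevation model, and formula A combined with the true elevation model) are passed in as placeholder callables, and pixels that fall outside the real image are filled with zero, an assumption the patent does not specify.

```python
import numpy as np

def bilinear(img, xp, yp):
    """Bilinear interpolation of a grayscale image at a fractional pixel
    position; returns 0.0 outside the image (assumed fill value)."""
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    if x0 < 0 or y0 < 0 or x0 + 1 >= img.shape[1] or y0 + 1 >= img.shape[0]:
        return 0.0
    ax, ay = xp - x0, yp - y0
    return ((1 - ax) * (1 - ay) * img[y0, x0] + ax * (1 - ay) * img[y0, x0 + 1] +
            (1 - ax) * ay * img[y0 + 1, x0] + ax * ay * img[y0 + 1, x0 + 1])

def resample(real_img, shape, virt_to_ground, ground_to_real):
    """Build the virtual image pixel by pixel. virt_to_ground plays the role
    of formula B plus the projection elevation model; ground_to_real plays the
    role of formula A plus the true elevation model."""
    h, w = shape
    out = np.zeros((h, w))
    for yp in range(h):                                # S405: traverse all pixels
        for xp in range(w):
            gx, gy, gz = virt_to_ground(xp, yp)        # S401-S402
            rx, ry = ground_to_real(gx, gy, gz)        # S403
            out[yp, xp] = bilinear(real_img, rx, ry)   # S404
    return out
```

With identity placeholder models the virtual image simply reproduces the interior of the real image, which is a convenient end-to-end test of the loop.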
Preferably, in S1, the camera imaging geometric model is used to establish a one-to-one correspondence relationship between coordinates of the ground object point in the object space and coordinates of the ground object point in the image space, and includes two calculation processes: a model forward calculation process and a model backward calculation process;
a model forward calculation process computes object-space coordinates from image coordinates; formula A expresses this forward calculation, in which the scale factor m is unknown, so the ground coordinates (X, Y, Z) are solved iteratively as follows:
S1: calculating the initial geodetic coordinates of the ground point: assuming the initial elevation of the ground point is zero, i.e. h_0 = 0, the initial geodetic coordinates (X_0, Y_0, Z_0) corresponding to the image-point coordinates (x, y) are computed from formula A combined with the Earth ellipsoid formula;
S2: converting the geodetic coordinates (X_0, Y_0, Z_0) into the longitude-latitude representation (lon_0, lat_0, h_0); reading the actual elevation value h′ from the ground elevation data, the difference from the initial elevation h_0 being dh = (h′ − h_0);
S3: setting the elevation of the ground point to h = h′ and recalculating the geodetic coordinates (X, Y, Z) corresponding to the image point (x, y);
S4: repeating steps S1-S3 iteratively until dh is smaller than a preset tolerance, then stopping the iteration to obtain the final geodetic coordinates (X, Y, Z);
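The iterative forward calculation S1-S4 can be sketched as follows; the intersection of the image ray with a constant-elevation plane (formula A plus the Earth ellipsoid formula) and the terrain elevation lookup are placeholder callables, and the tolerance value is illustrative.

```python
def iterate_ground(image_ray_to_ground, elevation_at, tol=0.01, max_iter=20):
    """Iterative forward calculation: intersect an image ray with the terrain.
    image_ray_to_ground(h) returns the (X, Y, Z) intersection of the ray with
    the plane of elevation h; elevation_at(X, Y) reads the terrain model.
    Both callables stand in for the patent's formula A and DEM lookup."""
    h = 0.0                                   # S1: start on the zero-elevation plane
    X, Y, Z = image_ray_to_ground(h)
    for _ in range(max_iter):
        X, Y, Z = image_ray_to_ground(h)      # intersect with plane Z = h
        h_true = elevation_at(X, Y)           # S2: read the real elevation
        dh = h_true - h
        if abs(dh) < tol:                     # S4: stop within tolerance
            break
        h = h_true                            # S3: re-intersect at the new height
    return X, Y, Z
```

For a linear toy terrain (elevation 0.1·X) and an oblique ray from 100 units height, the iteration converges in a few steps to the fixed point h ≈ 4.76.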
a model backward calculation process computes, from the geodetic coordinates of a given point, the coordinates of the corresponding image point, i.e. from object space to image space; the steps are as follows:
s21: establishing an initial mapping relation between the ground object point coordinates and the image point coordinates, and performing initial prediction on the image point coordinates corresponding to the ground object point coordinates;
S22: using the predicted image-point coordinates, computing the corresponding ground-point coordinates through the model forward calculation and comparing them with the actual ground-point coordinates; updating the parameters of the prediction model accordingly, and iterating until the deviation between the ground coordinates computed from the predicted image point and the actual ground-point coordinates is smaller than a preset tolerance, which yields the final image-point coordinates for the ground point; the prediction itself is performed by solving a homography transformation matrix.
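A minimal sketch of the backward calculation S21-S22: the initial predictor here is an affine fit rather than the patent's homography (a simplification), and the model forward calculation is a placeholder callable.

```python
import numpy as np

def fit_affine(ground_pts, image_pts):
    """Fit an affine ground-to-image predictor from sample correspondences
    (a simplified stand-in for the homography prediction of S22)."""
    A = np.array([[gx, gy, 1.0] for gx, gy, _ in ground_pts])
    coef, *_ = np.linalg.lstsq(A, np.asarray(image_pts), rcond=None)
    return coef   # 3x2 matrix: rows for gx, gy and the constant term

def back_project(forward, affine, target, tol=1e-3, max_iter=50):
    """S21: predict the image point for a target ground point; S22: refine the
    prediction by forward-computing the ground point and correcting by the
    residual until the ground deviation is below the tolerance."""
    gx, gy = target[0], target[1]
    p = np.array([gx, gy, 1.0]) @ affine     # S21: initial prediction
    M = affine[:2].T                         # local d(image)/d(ground) of the predictor
    for _ in range(max_iter):
        g = np.asarray(forward(p[0], p[1]))[:2]
        dev = np.array([gx, gy]) - g
        if np.linalg.norm(dev) < tol:
            break
        p = p + M @ dev                      # correct the prediction
    return p
```

When the forward model happens to be affine itself, the predictor is exact and the loop terminates on the first deviation check.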
Preferably, in S3, the geometric imaging model of the virtual camera is an ideal pinhole model without lens distortion, and the imaging range and resolution of the virtual camera are essentially the same as those of the real camera. Because the GIS platform can only configure the horizontal view angle of the camera, the imaging detector elements of the virtual camera are square (length equal to width). The external orientation elements of the virtual camera equal those of the real camera, i.e. the projection-center coordinates and attitude angles of the virtual camera are consistent with those of the real camera; the focal length of the virtual camera equals that of the real camera; the focal-plane image sensor of the virtual camera has the same size as that of the real camera, i.e. the same total area of the detector array; and the detector-element size of the virtual camera equals the mean of the length and width of the real camera's detector elements.
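The virtual camera parameters described above can be derived mechanically; the sketch below uses an assumed dict layout for the camera description (not a structure defined by the patent).

```python
def virtual_camera(real):
    """Build the virtual camera per S3: same external orientation, focal
    length and sensor size as the real camera, but an ideal pinhole with
    square detector elements whose size is the mean of the real element's
    length and width. `real` uses an assumed dict layout."""
    pixel = (real["pixel_w"] + real["pixel_h"]) / 2.0   # mean of length and width
    return {
        "position": real["position"],    # projection centre unchanged
        "attitude": real["attitude"],    # attitude angles unchanged
        "f": real["f"],                  # focal length unchanged
        "sensor_w": real["sensor_w"],    # same total sensor area
        "sensor_h": real["sensor_h"],
        "pixel": pixel,                  # square detector element
        "distortion": None,              # ideal pinhole: no lens distortion
    }
```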
In the invention, constructing the virtual camera simulates the image formed under ideal pinhole imaging, eliminating the image deformation caused by camera imaging parameter errors as well as the image-point offset caused by the elevation error of the projection datum plane.
The invention improves the positioning accuracy of the geometric positioning model, facilitating accurate extraction of the motion trajectory of a moving target.
The invention also makes it convenient to feed the video image into a three-dimensional GIS platform for projected display: the projected image and the map base map coincide closely, facilitating seamless stitched display of multi-channel surveillance video data.
Drawings
Fig. 1 is a schematic flow chart of a video image correction method based on a geometric positioning model according to the present invention.
Fig. 2 is a schematic diagram illustrating a corresponding relationship between virtual image coordinates and real image coordinates in the geometric orientation model-based video image correction method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
As shown in fig. 1-2, the present invention provides a method for correcting a video image based on a geometric orientation model, which comprises the following steps:
s1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the initial external orientation element parameters and the internal orientation element parameters of the camera; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
s102: constructing a geometric positioning model by using the initial internal and external orientation elements of the camera as shown in formula A:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\,R_U\begin{bmatrix}x-x_0+\Delta x\\ y-y_0+\Delta y\\ -f\end{bmatrix}\qquad\text{(A)}$$
in formula A, x and y are the image-point coordinates; Δx and Δy are the distortion errors of the internal orientation elements; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; R_U is an offset matrix that absorbs the installation error of the camera load; m is a scale factor;
s2: the positioning accuracy of the geometric positioning model is improved through calibration;
S201: calculating the compensation parameters of the camera from ground control point data and recovering the correct imaging parameters; the control points are several points with known ground coordinates, for which the ground coordinates (X, Y, Z) and the image-point coordinates (x, y) on the image are obtained;
S202: solving the position compensation parameters, attitude compensation parameters, lens distortion parameters and imaging detector-element size of the camera by the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve the positioning accuracy of the model;
s3: constructing a geometric positioning model of a virtual camera, and eliminating camera imaging parameter errors by constructing a virtual camera; the geometric imaging model of the virtual camera is in a specific form as shown in formula B:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\begin{bmatrix}x-x_0\\ y-y_0\\ -f\end{bmatrix}\qquad\text{(B)}$$
in formula B, x and y are the image-point coordinates; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor;
S4: resampling the real image by using a virtual camera model to generate a virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (x_P, y_P) of the virtual image and converting them into the image-point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model of the virtual camera's geometric positioning model, which may be a chosen reference plane or the real terrain surface; substituting the image-point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image-point coordinates (x′, y′) on the real image, and converting them into the pixel coordinates (x_P′, y_P′) of the real image;
S404: obtaining by bilinear interpolation the gray value at the real-image pixel coordinates (x_P′, y_P′) and assigning it to the virtual-image pixel (x_P, y_P);
s405: and traversing all pixels of the virtual image, and repeating S401-S404 to generate a final virtual camera image.
In the invention, constructing the virtual camera simulates the image formed under ideal pinhole imaging, eliminating the image deformation caused by camera imaging parameter errors as well as the image-point offset caused by the elevation error of the projection datum plane.
The invention improves the positioning accuracy of the geometric positioning model, facilitating accurate extraction of the motion trajectory of a moving target.
The invention also makes it convenient to feed the video image into a three-dimensional GIS platform for projected display: the projected image and the map base map coincide closely, facilitating seamless stitched display of multi-channel surveillance video data.
In an alternative embodiment, in S1, the camera imaging geometric model is used to establish a one-to-one correspondence relationship between coordinates of a ground object point in an object space and coordinates of the ground object point in an image space, and includes two calculation processes: a model forward calculation process and a model backward calculation process;
a model forward calculation process computes object-space coordinates from image coordinates; formula A expresses this forward calculation, in which the scale factor m is unknown, so the ground coordinates (X, Y, Z) are solved iteratively as follows:
S1: calculating the initial geodetic coordinates of the ground point: assuming the initial elevation of the ground point is zero, i.e. h_0 = 0, the initial geodetic coordinates (X_0, Y_0, Z_0) corresponding to the image-point coordinates (x, y) are computed from formula A combined with the Earth ellipsoid formula;
S2: converting the geodetic coordinates (X_0, Y_0, Z_0) into the longitude-latitude representation (lon_0, lat_0, h_0); reading the actual elevation value h′ from the ground elevation data, the difference from the initial elevation h_0 being dh = (h′ − h_0);
S3: setting the elevation of the ground point to h = h′ and recalculating the geodetic coordinates (X, Y, Z) corresponding to the image point (x, y);
S4: repeating steps S1-S3 iteratively until dh is smaller than a preset tolerance, then stopping the iteration to obtain the final geodetic coordinates (X, Y, Z);
a model back calculation process for calculating the coordinates of the corresponding image points from the geodetic coordinates of the given points, namely from the object space to the image space; the method comprises the following steps:
s21: establishing an initial mapping relation between the ground object point coordinates and the image point coordinates, and performing initial prediction on the image point coordinates corresponding to the ground object point coordinates;
S22: using the predicted image-point coordinates, computing the corresponding ground-point coordinates through the model forward calculation and comparing them with the actual ground-point coordinates; updating the parameters of the prediction model accordingly, and iterating until the deviation between the ground coordinates computed from the predicted image point and the actual ground-point coordinates is smaller than a preset tolerance, which yields the final image-point coordinates for the ground point; the prediction itself is performed by solving a homography transformation matrix.
In an alternative embodiment, the number of pixels of the virtual camera image is determined by the virtual camera model constructed in S3. The pixel count in the width direction of the virtual image is the image-sensor width divided by the virtual detector-element size, rounded to an integer; the pixel count in the height direction is the sensor height divided by the element size, likewise rounded. The gray value of each pixel of the virtual image is obtained by resampling the gray value of the corresponding pixel of the real image, and the coordinate correspondence between the virtual image and the real image is computed from the positioning models of the two images;
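The pixel-count rule above, together with the pixel-to-image-point conversion used in S401, can be sketched as follows; the centring convention for the conversion is an assumption, since the patent does not spell it out.

```python
def virtual_image_size(sensor_w, sensor_h, pixel):
    """Virtual image dimensions: sensor extent divided by the virtual
    detector-element size, rounded to an integer."""
    return round(sensor_w / pixel), round(sensor_h / pixel)

def pixel_to_image_point(xp, yp, cols, rows, pixel):
    """Pixel indices -> image-plane coordinates centred on the principal
    point (centring convention assumed: image y up, pixel row down)."""
    x = (xp - (cols - 1) / 2.0) * pixel
    y = ((rows - 1) / 2.0 - yp) * pixel
    return x, y
```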
in S3, the geometric imaging model of the virtual camera is a pinhole imaging model in an ideal state, lens distortion does not exist, and the imaging range and the resolution of the virtual camera are basically consistent with those of the real camera; because the GIS platform can only configure the transverse visual angle of the camera, the length and the width of the imaging probe element of the virtual camera are the same, and the external orientation element of the virtual camera is equal to the external orientation element of the real camera, namely the projection center coordinate and the attitude angle of the virtual camera are consistent with those of the real camera; the focal length of the virtual camera is equal to the focal length of the real camera; the size of an image sensor of a focal plane of the virtual camera is the same as that of an image sensor of a real camera, namely the total area of the area array imaging probe element is the same; the size of the imaging probe of the virtual camera is equal to the average value of the length and the width of the imaging probe of the real camera.
It is to be understood that the above-described embodiments of the present invention are merely illustrative of or explaining the principles of the invention and are not to be construed as limiting the invention. Therefore, any modification, equivalent replacement, improvement and the like made without departing from the spirit and scope of the present invention should be included in the protection scope of the present invention. Further, it is intended that the appended claims cover all such variations and modifications as fall within the scope and boundaries of the appended claims or the equivalents of such scope and boundaries.

Claims (3)

1. A video image correction method based on a geometric positioning model is characterized by comprising the following steps:
s1: constructing an initial geometric positioning model of the camera;
S101: acquiring the imaging parameters of the camera, which comprise the initial external orientation element parameters and the internal orientation element parameters of the camera; the initial external orientation element parameters are obtained by measuring the spatial position and attitude of the camera, and the internal orientation element parameters are computed from the camera's factory parameters;
s102: constructing a geometric positioning model by using the initial internal and external orientation elements of the camera as shown in formula A:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\,R_U\begin{bmatrix}x-x_0+\Delta x\\ y-y_0+\Delta y\\ -f\end{bmatrix}\qquad\text{(A)}$$
in formula A, x and y are the image-point coordinates; Δx and Δy are the distortion errors of the internal orientation elements; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; R_U is an offset matrix that absorbs the installation error of the camera load; m is a scale factor;
s2: the positioning accuracy of the geometric positioning model is improved through calibration;
S201: calculating the compensation parameters of the camera from ground control point data and recovering the correct imaging parameters; the control points are several points with known ground coordinates, for which the ground coordinates (X, Y, Z) and the image-point coordinates (x, y) on the image are obtained;
S202: solving the position compensation parameters, attitude compensation parameters, lens distortion parameters and imaging detector-element size of the camera by the indirect adjustment method of photogrammetry, so as to update the geometric positioning model of S1 and improve the positioning accuracy of the model;
s3: constructing a geometric positioning model of a virtual camera, and eliminating camera imaging parameter errors by constructing a virtual camera; the geometric imaging model of the virtual camera is in a specific form as shown in formula B:
$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix}+m\,R_{CW}\begin{bmatrix}x-x_0\\ y-y_0\\ -f\end{bmatrix}\qquad\text{(B)}$$
in formula B, x and y are the image-point coordinates; f is the principal distance of the camera; x_0, y_0 are the principal-point coordinates; X, Y, Z are the geodetic coordinates of the ground point; X_C, Y_C, Z_C are the geodetic coordinates of the camera's perspective center; R_CW is the rotation matrix from the camera coordinate system to the geodetic coordinate system; m is a scale factor;
S4: resampling the real image by using a virtual camera model to generate a virtual image; the specific steps for generating the virtual camera image are as follows:
S401: obtaining the pixel coordinates (x_P, y_P) of the virtual image and converting them into the image-point coordinates (x, y) of the virtual image;
S402: selecting the projection elevation model of the virtual camera's geometric positioning model, which may be a chosen reference plane or the real terrain surface; substituting the image-point coordinates (x, y) of the virtual image into formula B and calculating the ground coordinates (X, Y, Z) of the virtual image's projection point;
S403: selecting the real ground elevation model as the projection elevation model of the real camera's geometric positioning model; substituting the ground coordinates (X, Y, Z) into formula A to obtain the image-point coordinates (x′, y′) on the real image, and converting them into the pixel coordinates (x_P′, y_P′) of the real image;
S404: obtaining by bilinear interpolation the gray value at the real-image pixel coordinates (x_P′, y_P′) and assigning it to the virtual-image pixel (x_P, y_P);
s405: and traversing all pixels of the virtual image, and repeating S401-S404 to generate a final virtual camera image.
2. The method according to claim 1, wherein in S1, the camera imaging geometric model establishes a one-to-one correspondence between the coordinates of a ground point in object space and its coordinates in image space, and comprises two calculation processes: a model forward calculation process and a model backward calculation process;
a model forward calculation process for calculating object coordinates from image coordinates; formula A is the positive calculation process of the imaging geometric model, the scale factor m is an unknown quantity, and the coordinates (X, Y, Z) of the ground points are solved in an iterative mode, and the method comprises the following steps:
s1: calculating initial geodetic coordinates of the earth points, assuming that the initial elevation of the earth points is zero, i.e. h 0 =0, calculating initial geodetic coordinates (X) corresponding to image point coordinates (X, y) by combining formula A and earth ellipsoid formula 0 ,Y 0 ,Z 0 );
S2: will coordinate the earth (X) 0 ,Y 0 ,Z 0 ) Conversion into latitude and longitude representations (lon) 0 ,lat 0 ,h 0 ) (ii) a Reading an actual elevation value h ' from the ground elevation data, wherein the difference between the initial elevation value h0 and the initial elevation value h ' is dh = (h ' -h) 0 );
S3: the elevation of the ground object point is taken as h = h', and geodetic coordinates (X, Y, Z) corresponding to the image point (X, Y) are recalculated;
s4: repeating the steps S1-S3, performing iterative calculation, and stopping iteration until dh is less than a preset limit difference to obtain final geodetic coordinates (X, Y, Z);
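The elevation iteration in the forward calculation can be sketched as follows; `formula_a` and `dem_lookup` are hypothetical stand-ins for the patent's formula A (image point plus elevation to geodetic coordinates and longitude/latitude) and the ground elevation data:

```python
def image_to_ground(x, y, formula_a, dem_lookup, tol=0.1, max_iter=20):
    """Iterative forward calculation (claim 2, steps S1-S4): the unknown
    scale factor is eliminated by iterating on the ground-point elevation
    until it agrees with the elevation data within a preset tolerance."""
    h = 0.0  # S1: start from zero elevation
    X = Y = Z = 0.0
    for _ in range(max_iter):
        # ground point corresponding to image point (x, y) at elevation h
        X, Y, Z, lon, lat = formula_a(x, y, h)
        h_true = dem_lookup(lon, lat)  # S2: read the actual elevation h'
        dh = h_true - h
        if abs(dh) < tol:              # stop when dh is below the tolerance
            break
        h = h_true                     # S3: recompute with h = h'
    return X, Y, Z
```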
the model backward calculation process calculates the corresponding image point coordinates from the geodetic coordinates of a given point, i.e. from object space to image space, comprising the following steps:
S21: establishing an initial mapping relation between ground object point coordinates and image point coordinates, and making an initial prediction of the image point coordinates corresponding to the ground object point coordinates;
S22: calculating the corresponding ground object point coordinates from the predicted image point coordinates through the model forward calculation, comparing them with the actual ground object coordinates, and updating the parameters of the prediction model accordingly; the iteration stops when the deviation between the ground object point coordinates calculated from the predicted image point coordinates and the actual ground object point coordinates is less than a preset tolerance, yielding the image point coordinates corresponding to the final ground object point coordinates; the image point coordinates are predicted by solving a homography transformation matrix.
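The predict-and-refine loop of S21-S22 can be illustrated as follows. Note the hedge: the patent predicts the image point with a homography transformation matrix, while this sketch substitutes a simpler Newton-style update with a numeric Jacobian, so it shows the iteration structure rather than the patent's exact prediction model; `forward_calc` stands in for the model forward calculation:

```python
import numpy as np

def ground_to_image(X, Y, forward_calc, tol=1e-4, max_iter=50):
    """Backward calculation sketch (claim 2, S21-S22): predict image
    point coordinates, forward-calculate the ground coordinates they
    map to, and refine the prediction until the deviation from the
    target ground coordinates is below the tolerance."""
    x, y = 0.0, 0.0                       # S21: initial prediction
    eps = 1e-6
    for _ in range(max_iter):
        Xc, Yc = forward_calc(x, y)       # S22: forward-calculated ground coords
        dX, dY = X - Xc, Y - Yc
        if abs(dX) < tol and abs(dY) < tol:
            break                         # deviation below the preset tolerance
        # numeric Jacobian of the forward calculation at (x, y)
        Xx, Yx = forward_calc(x + eps, y)
        Xy, Yy = forward_calc(x, y + eps)
        J = np.array([[(Xx - Xc) / eps, (Xy - Xc) / eps],
                      [(Yx - Yc) / eps, (Yy - Yc) / eps]])
        dx, dy = np.linalg.solve(J, np.array([dX, dY]))
        x, y = x + dx, y + dy             # update the prediction
    return x, y
```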
3. The video image correction method based on a geometric positioning model according to claim 1, wherein in S3, the imaging geometric model of the virtual camera is an ideal pinhole imaging model without lens distortion, and the imaging range and resolution of the virtual camera are substantially consistent with those of the real camera; because the GIS platform can only configure the horizontal view angle of the camera, the length and width of the imaging detector element of the virtual camera are the same; the exterior orientation elements of the virtual camera are equal to those of the real camera, i.e. the projection center coordinates and attitude angles of the virtual camera are consistent with those of the real camera; the focal length of the virtual camera is equal to the focal length of the real camera; the focal-plane image sensor of the virtual camera has the same size as that of the real camera, i.e. the total area of the area-array imaging detector is the same; and the imaging detector element size of the virtual camera is equal to the average of the length and width of the imaging detector element of the real camera.
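The virtual-camera parameter choices in claim 3 can be sketched as a small helper; the function name and the unit convention (e.g. millimetres throughout) are assumptions for illustration:

```python
from math import atan, degrees

def virtual_camera_params(f, pixel_w, pixel_h, n_cols, n_rows):
    """Derive virtual camera parameters per claim 3 (sketch): same focal
    length and focal-plane size as the real camera, square detector
    elements whose pitch is the mean of the real pixel width and height."""
    pitch = (pixel_w + pixel_h) / 2.0        # square virtual detector element
    sensor_w = pixel_w * n_cols              # real focal-plane size, kept equal
    sensor_h = pixel_h * n_rows
    v_cols = round(sensor_w / pitch)         # virtual image dimensions at that pitch
    v_rows = round(sensor_h / pitch)
    # horizontal view angle: the only angle a GIS platform can configure
    hfov = degrees(2 * atan(sensor_w / (2.0 * f)))
    return {"focal_length": f, "pitch": pitch,
            "cols": v_cols, "rows": v_rows, "hfov_deg": hfov}
```

For a real camera with square pixels the virtual camera is identical; the averaging only matters when the real detector elements are rectangular.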
CN201911338034.0A 2019-12-23 2019-12-23 Video image correction method based on geometric positioning model Active CN111127564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911338034.0A CN111127564B (en) 2019-12-23 2019-12-23 Video image correction method based on geometric positioning model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911338034.0A CN111127564B (en) 2019-12-23 2019-12-23 Video image correction method based on geometric positioning model

Publications (2)

Publication Number Publication Date
CN111127564A CN111127564A (en) 2020-05-08
CN111127564B true CN111127564B (en) 2023-02-28

Family

ID=70501295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911338034.0A Active CN111127564B (en) 2019-12-23 2019-12-23 Video image correction method based on geometric positioning model

Country Status (1)

Country Link
CN (1) CN111127564B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113947653B (en) * 2021-09-27 2023-04-07 四川大学 Simulation method of real texture hair


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345737A (en) * 2013-06-04 2013-10-09 北京航空航天大学 UAV high resolution image geometric correction method based on error compensation
EP2966863A1 (en) * 2014-07-10 2016-01-13 Seiko Epson Corporation Hmd calibration with direct geometric modeling
CN107144293A (en) * 2017-04-07 2017-09-08 武汉大学 A kind of geometric calibration method of video satellite area array cameras
CN110211054A (en) * 2019-04-28 2019-09-06 张过 A kind of undistorted making video method of spaceborne push-broom type optical sensor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
On-orbit geometric calibration of the "Tianhui-1" satellite based on a digital calibration field; Hu Zhenlong et al.; Spacecraft Recovery &amp; Remote Sensing; 2017-10-15 (No. 05); full text *

Also Published As

Publication number Publication date
CN111127564A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
CN106403902B (en) A kind of optical satellite in-orbit real-time geometry location method and system cooperateed with to star
CN110057295B (en) Monocular vision plane distance measuring method without image control
CN109115186B (en) 360-degree measurable panoramic image generation method for vehicle-mounted mobile measurement system
CN111007530B (en) Laser point cloud data processing method, device and system
KR101346323B1 (en) Method for self-calibration of non-metric digital camera using ground control point and additional parameter
CN107014399B (en) Combined calibration method for satellite-borne optical camera-laser range finder combined system
KR101965965B1 (en) A method of automatic geometric correction of digital elevation model made from satellite images and provided rpc
JP2010287156A (en) Model generation apparatus, model generation method, model generation program, point group image generation method and point group image generation program
CN106709944B (en) Satellite remote sensing image registration method
CN102661717A (en) Monocular vision measuring method for iron tower
CN110736447B (en) Vertical-direction horizontal position calibration method for integrated image acquisition equipment
CN107040695B (en) satellite-borne video image stabilization method and system based on RPC positioning model
CN110555813B (en) Rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
JP2002156229A (en) Mobile displacement measuring method and device for structure
CN107967700A (en) The in-orbit geometric correction of the wide working distance binocular camera of big visual field and precision test method
CN101655378A (en) Interior orientation parameter measurement method of digital photogrammetric camera
CN112270698A (en) Non-rigid geometric registration method based on nearest curved surface
KR101346192B1 (en) Aviation surveying system for correction realtime of aviation image
CN111127564B (en) Video image correction method based on geometric positioning model
CN110853140A (en) DEM (digital elevation model) -assisted optical video satellite image stabilization method
CN113763479B (en) Calibration method of refraction and reflection panoramic camera and IMU sensor
CN112985259B (en) Target positioning method and system based on multi-view vision
KR101183866B1 (en) Apparatus and method for real-time position and attitude determination based on integration of gps, ins and image at
KR101346206B1 (en) Aviation surveying system for processing the aviation image in gps
CN111402315A (en) Three-dimensional distance measuring method for adaptively adjusting base line of binocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant