CN102110300A - Three-dimensional model projecting method and device for imitating lens distortion - Google Patents

Three-dimensional model projecting method and device for imitating lens distortion

Info

Publication number
CN102110300A
Authority
CN
China
Prior art keywords
image coordinate
coordinate
distortion
camera
image
Prior art date
Legal status
Pending
Application number
CN2009102439313A
Other languages
Chinese (zh)
Inventor
谢宏
Current Assignee
China Digital Video Beijing Ltd
Original Assignee
China Digital Video Beijing Ltd
Priority date
2009-12-25
Filing date
2009-12-25
Publication date
2011-06-29
Application filed by China Digital Video Beijing Ltd
Priority to CN2009102439313A
Publication of CN102110300A

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a three-dimensional model projection method that simulates lens distortion. The method comprises the following steps: projecting the three-dimensional model onto the image plane according to the pinhole imaging model and obtaining the image coordinates (u, v) in the image coordinate system; obtaining the transformation relation of the image coordinates and computing, according to that relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to (u, v); computing the distorted normalized image coordinates from (x, y) and projecting them onto the image plane to obtain the distorted image coordinates; and, after the three-dimensional model has been projected through the distorted image coordinates, superimposing it on the real two-dimensional image. The invention matches the three-dimensional model to the real scene, achieving a more realistic and natural combination of the virtual and the real.

Description

Three-dimensional model projection method and device for simulating lens distortion
Technical field
The present invention relates to the technical field of television program production, and in particular to a three-dimensional model projection method for simulating lens distortion and a three-dimensional model projection device for simulating lens distortion.
Background art
The virtual studio is a unique television program production technology developed in recent years. Its essence is the real-time digital compositing of virtual three-dimensional models created by computer with the moving images of subjects captured by a camera, so that the subject image and the virtual background change synchronously and merge into a single, seamless composite picture. Virtual studio technology builds on traditional chroma key matting and makes full use of computer three-dimensional graphics and video compositing: according to the position and parameters of the camera, the perspective relation of the three-dimensional virtual scene is kept consistent with the foreground, so that after chroma key compositing the subject in the foreground appears to be placed entirely within the computer-generated three-dimensional virtual scene and can move within it, creating a lifelike studio effect with a strong sense of depth.
Correspondingly, in a virtual studio system the virtual three-dimensional model must be projected into the two-dimensional image according to the attitude of the current camera, thereby combining the virtual with the real. The camera attitude is generally determined on the basis of the perspective projection imaging principle of the pinhole imaging model used in machine vision, which, as is well known, maps straight lines to straight lines.
In reality, however, a real lens is not an ideal pinhole imaging model and usually exhibits a small amount of distortion. As shown in Fig. 1, a real lens tends to project a straight line in the real scene as a slightly bowed arc in the image. If the virtual three-dimensional model is projected directly onto the image, its straight lines remain straight and will look inconsistent with the scene in the image, lacking realism.
Summary of the invention
The technical problem to be solved by the present invention is to provide a three-dimensional model projection method that simulates lens distortion, so that the three-dimensional model matches the real scene in the image and a more realistic and natural virtual-real combination is achieved.
In order to solve the above technical problem, an embodiment of the invention discloses a three-dimensional model projection method simulating lens distortion, comprising: projecting the three-dimensional model onto the image plane according to the pinhole imaging model, and obtaining the image coordinates (u, v) in the image coordinate system;
obtaining the transformation relation of the image coordinates, and computing, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
computing the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y);
projecting the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d);
and superimposing the three-dimensional model, after it has been projected through the distorted image coordinates (u_d, v_d), on the real two-dimensional image.
Preferably, the step of obtaining the transformation relation of the image coordinates comprises:
generating the camera intrinsic parameter projection matrix:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

the normalized coordinates being transformed into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T

where f_x and f_y are the focal length parameters and (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane;
and obtaining the following transformation relation from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y.
Preferably, the distorted normalized image coordinates (x_d, y_d) are calculated by the following formulas:

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 are the radial distortion coefficients.
Preferably, the distorted image coordinates (u_d, v_d) are calculated from the camera intrinsic parameter projection matrix as follows:

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T.
Preferably, k_1, k_2, f_x, f_y, u_0 and v_0 are all obtained by Zhang Zhengyou's camera calibration method.
An embodiment of the invention also discloses a three-dimensional model projection device simulating lens distortion, comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit, configured to obtain the transformation relation of the image coordinates and compute, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion coordinate pre-calculation unit, configured to compute the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y);
a distortion coordinate projection unit, configured to project the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d);
and a virtual-real superposition unit, configured to superimpose the three-dimensional model, after it has been projected through the distorted image coordinates (u_d, v_d), on the real two-dimensional image.
Preferably, the coordinate transformation unit comprises:
a matrix generation sub-unit, configured to generate the camera intrinsic parameter projection matrix:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

the normalized coordinates being transformed into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T

where f_x and f_y are the focal length parameters, (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane, and f_x, f_y, u_0 and v_0 are all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition sub-unit, configured to derive the following formulas from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y;

and a substitution calculation sub-unit, configured to substitute the image coordinates (u, v) into the above formulas and compute the corresponding normalized image coordinates (x, y) in the camera coordinate system.
Preferably, the distortion coordinate pre-calculation unit comprises a formula invoking sub-unit, configured to invoke the following formulas to compute the distorted normalized image coordinates (x_d, y_d):

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 are the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method.
Preferably, the distortion coordinate projection unit comprises a matrix invoking sub-unit, configured to invoke the camera intrinsic parameter projection matrix and obtain the distorted image coordinates (u_d, v_d) as follows:

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T.

Preferably, the device is arranged in a GPU.
Compared with the prior art, the present invention has the following advantages:
the present invention describes the distortion effect of the lens by introducing distortion coefficients and applies these coefficients in the projection process, so that the three-dimensional model is also distorted and matches the scene in the image. After this processing, the virtual three-dimensional model blends better with the real environment and looks more real and natural.
Brief description of the drawings
Fig. 1 is a schematic diagram of a straight line being projected as a curve because of lens distortion;
Fig. 2 is a flow chart of the steps of an embodiment of a three-dimensional model projection method simulating lens distortion according to the present invention;
Fig. 3 is a structural block diagram of an embodiment of a three-dimensional model projection device simulating lens distortion according to the present invention.
Detailed description of the embodiments
In order that the above objects, features and advantages of the present invention may become more apparent, the present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core ideas of the embodiments of the invention is the following. Because of lens distortion, a virtual three-dimensional model projected into a two-dimensional image often looks inconsistent with the scene in the image and lacks realism. The inventors therefore introduce distortion coefficients to describe the distortion effect of the lens and apply these coefficients in the projection process, so that the three-dimensional model is also distorted and matches the scene in the image.
It is well known that lens distortion is mainly of two kinds: radial distortion and tangential distortion. Tangential distortion is very small, so only the influence of radial distortion is considered in the present invention. Radial distortion has the characteristic that the farther a point lies from the image center, the more obvious the distortion, and the more strongly a straight line is bent into a curve when projected; radial distortion is therefore a function of the distance of an image point from the center.
To enable those skilled in the art to understand the present invention better, the image coordinate system and the camera coordinate system involved in the embodiments of the invention are briefly introduced below.
The image captured by the camera is converted into a digital image in the form of a standard television signal by a high-speed image sampling system and input into a computer. In the computer, each digital image is an M×N array; the value of each element (called a pixel) of the M-row, N-column image is the brightness (gray level) of the corresponding image point. A rectangular coordinate system (u, v) is defined on the image, and the coordinates (u, v) of each pixel are the image coordinates in units of pixels. Because (u, v) only indicates the column and row of a pixel in the array and does not express the position of the pixel in the image in physical units, an image coordinate system expressed in physical units (millimeters) must also be established. The origin of this coordinate system is the intersection of the camera's optical axis with the image plane, and its X and Y axes are parallel to the u and v axes respectively. If the physical size of each pixel in the X and Y directions is dX and dY, the relation between the two coordinate systems can be established.
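As a small illustration of that relation, the following sketch converts between the two systems; here (u0, v0) stands for the pixel position of the optical-axis intersection, consistent with the intrinsic parameters introduced below, and the function name is illustrative rather than taken from the patent.

```python
def pixel_from_physical(X, Y, dX, dY, u0, v0):
    """Convert physical image coordinates (X, Y), in millimeters, to pixel
    coordinates (u, v). dX, dY are the physical sizes of one pixel along
    the X and Y axes; (u0, v0) is the pixel position of the origin of the
    physical image coordinate system (the principal point)."""
    return X / dX + u0, Y / dY + v0
```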
The origin of the camera coordinate system is the camera's optical center; its x and y axes are parallel to the X and Y axes of the image, and its z axis is the camera's optical axis, perpendicular to the image plane. The intersection of the optical axis with the image plane is the origin of the image coordinate system. The rectangular coordinate system formed in this way is called the camera coordinate system.
Referring to Fig. 2, which shows a flow chart of the steps of an embodiment of a three-dimensional model projection method simulating lens distortion according to the present invention, the method may comprise the following steps.
Step 101: project the three-dimensional model onto the image plane according to the pinhole imaging model, and obtain the image coordinates (u, v) in the image coordinate system.
It is well known that the pinhole imaging model is a geometric model describing the correspondence between an arbitrary point in space and its imaging point on the image. The parameters of this geometric model are the camera calibration parameters. Specifically, the camera calibration parameters include extrinsic parameters and intrinsic parameters: the extrinsic parameters comprise geometric information such as the position and attitude of the camera, while the intrinsic parameters comprise optical parameters such as the camera focal length, the principal point and the lens distortion coefficients.
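A minimal sketch of step 101, under the usual convention that the extrinsic parameters are given as a rotation matrix R and translation vector t (the patent only names the camera's position and attitude, so this convention is an assumption):

```python
import numpy as np

def project_pinhole(points_world, R, t, K):
    """Project 3D world points onto the image plane with the pinhole model.

    points_world: (N, 3) array of model vertices in world coordinates.
    R, t: assumed extrinsic parameters (camera attitude and position).
    K: 3x3 camera intrinsic parameter projection matrix.
    Returns an (N, 2) array of pixel coordinates (u, v).
    """
    points_cam = points_world @ R.T + t      # world -> camera coordinates
    uvw = points_cam @ K.T                   # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]          # perspective division -> (u, v)
```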
Step 102: obtain the transformation relation of the image coordinates, and compute, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v).
In a preferred embodiment of the present invention, this step may comprise the following sub-steps.
Sub-step S1: generate the camera intrinsic parameter projection matrix, which can be expressed as:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

Sub-step S2: transform the normalized coordinates into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T     (1)

where f_x and f_y are the focal length parameters, and (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane.
The parameters f_x, f_y, u_0 and v_0 above can easily be calculated by Zhang Zhengyou's camera calibration method. In 1998 Zhang Zhengyou took radial distortion into account and proposed the plane calibration method, which can calibrate all the intrinsic and extrinsic parameters of a camera from several views of a planar template; the method calibrates the camera accurately and is simple to carry out. In formula (1), f_x, f_y, u_0 and v_0 depend only on the camera's intrinsic parameters, so the above matrix is referred to in the prior art as the camera intrinsic parameter projection matrix.
Specifically, f_x = f/dX and f_y = f/dY are called the normalized focal lengths along the x and y axes, where f is the focal length of the camera and dX and dY are the sizes of a unit pixel along the u and v axes respectively. The point (u_0, v_0) is the optical center, i.e. the intersection of the camera's optical axis with the image plane; it is usually located at the image center, so its value is often taken as half the resolution.
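A minimal sketch of assembling K from these quantities; the function and variable names are illustrative, and placing (u_0, v_0) at the image center is only the common default mentioned above, not a requirement:

```python
import numpy as np

def intrinsic_matrix(f, dX, dY, width, height):
    """Build the camera intrinsic parameter projection matrix K.

    f: camera focal length; dX, dY: physical size of one pixel along u and v;
    width, height: image resolution, used here to place the optical center
    (u0, v0) at the image center by default.
    """
    fx, fy = f / dX, f / dY              # normalized focal lengths
    u0, v0 = width / 2.0, height / 2.0   # common default for the optical center
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])
```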
Sub-step S3: derive the following transformation formulas from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y;

Sub-step S4: compute, from these formulas, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v).
Through the above steps, the coordinates in the image coordinate system can be transformed simply into coordinates (of a finite number of points) in the camera coordinate system, without traversing the three-dimensional model point by point and transforming every point of the model from the world coordinate system down into the camera coordinate system. This effectively saves system resources and improves processing efficiency.
In the embodiments of the present invention, the normalized image coordinates (x, y) refer to the distortion-free normalized image coordinates of the ideal pinhole camera model.
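A minimal sketch of sub-steps S3 and S4, i.e. the normalization of step 102 (the function name is illustrative):

```python
def normalize_image_coords(u, v, fx, fy, u0, v0):
    """Map pixel coordinates (u, v) to the normalized image coordinates
    (x, y) in the camera coordinate system, i.e. the ideal, distortion-free
    pinhole coordinates used as input to the distortion model."""
    x = (u - u0) / fx
    y = (v - v0) / fy
    return x, y
```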
Step 103: compute the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y).
As a preferred embodiment, the distorted normalized image coordinates (x_d, y_d) can be calculated by the following formulas:

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 can be the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method.
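A minimal sketch of step 103 with the two-coefficient radial distortion model given above:

```python
def apply_radial_distortion(x, y, k1, k2):
    """Distort ideal normalized coordinates (x, y) with radial distortion
    coefficients k1, k2 (e.g. obtained from Zhang's calibration)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```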
Step 104: project the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d).
In a specific implementation, the distorted image coordinates (u_d, v_d) can be calculated from the camera intrinsic parameter projection matrix as follows:

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T,

that is, u_d = f_x·x_d + u_0 and v_d = f_y·y_d + v_0.
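A minimal sketch of step 104:

```python
def distorted_pixel_coords(xd, yd, fx, fy, u0, v0):
    """Re-project distorted normalized coordinates (xd, yd) onto the image
    plane through the intrinsic matrix K, giving pixel coordinates (ud, vd)."""
    ud = fx * xd + u0
    vd = fy * yd + v0
    return ud, vd
```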
Step 105: after the three-dimensional model has been projected through the distorted image coordinates (u_d, v_d), superimpose it on the real two-dimensional image.
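Putting steps 101 to 104 together, a sketch that reuses the helper functions sketched above; R and t again stand for the assumed extrinsic parameters, and the chroma-key overlay of step 105 is not shown because the patent does not specify its details here.

```python
import numpy as np

def project_model_with_distortion(points_world, R, t, K, k1, k2):
    """Steps 101-104: pinhole projection, normalization, radial distortion,
    and re-projection of the distorted coordinates onto the image plane."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    uv = project_pinhole(points_world, R, t, K)                        # step 101
    x, y = normalize_image_coords(uv[:, 0], uv[:, 1], fx, fy, u0, v0)  # step 102
    xd, yd = apply_radial_distortion(x, y, k1, k2)                     # step 103
    ud, vd = distorted_pixel_coords(xd, yd, fx, fy, u0, v0)            # step 104
    # Distorted pixel positions, ready for the overlay of step 105.
    return np.stack([ud, vd], axis=1)
```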
For a real three-dimensional model of arbitrary complexity, an embodiment of the invention needs only a single projective transformation; all subsequent processing is carried out at the level of image pixels. Since the size of an image is generally fixed and the number of pixels is limited, the processing efficiency is very high. In a specific implementation, the present invention can be carried out on a GPU (Graphics Processing Unit).
Through the above steps, the virtual three-dimensional model blends better with the real environment, achieving a more realistic and natural virtual-real combination.
The present invention is mainly applied in virtual studio systems and in the virtual sports systems further developed on the basis of virtual studio systems. Specifically, the virtual studio system is a new television program production system that has appeared in recent years along with the rapid development of computer technology and the continuous improvement of chroma key technology. In a virtual studio system, the working state information of the camera is transmitted to a graphics workstation; from it the computer obtains the distance and relative position between the foreground object and the camera, can calculate the optimal size and position of the virtual scene, and computes and generates the three-dimensional model of the virtual scene as required. The virtual-real combination is then realized by projecting the virtual three-dimensional model into the two-dimensional image according to the attitude of the current camera.
It should be noted that, for simplicity of description, the foregoing method embodiment is expressed as a series of combined actions; those skilled in the art should know, however, that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously.
Referring to Fig. 3, which shows a structural block diagram of an embodiment of a three-dimensional model projection device simulating lens distortion according to the present invention, the device may comprise the following units:
a pinhole projection unit 201, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit 202, configured to obtain the transformation relation of the image coordinates and compute, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion coordinate pre-calculation unit 203, configured to compute the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y);
a distortion coordinate projection unit 204, configured to project the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d);
and a virtual-real superposition unit 205, configured to superimpose the three-dimensional model, after it has been projected through the distorted image coordinates (u_d, v_d), on the real two-dimensional image.
In a preferred embodiment of the present invention, the coordinate transformation unit may comprise the following sub-units:
a matrix generation sub-unit, configured to generate the camera intrinsic parameter projection matrix:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

an image coordinate transformation sub-unit, configured to transform the normalized coordinates into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T

where f_x and f_y are the focal length parameters, (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane, and f_x, f_y, u_0 and v_0 are all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition sub-unit, configured to derive the following formulas from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y;

and a substitution calculation sub-unit, configured to substitute the image coordinates (u, v) into the above formulas and compute the corresponding normalized image coordinates (x, y) in the camera coordinate system.
Preferably in the present invention, the distortion coordinate pre-calculation unit comprises a formula invoking sub-unit, configured to invoke the following formulas to compute the distorted normalized image coordinates (x_d, y_d):

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 are the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method.
In a specific implementation, the distortion coordinate projection unit may comprise a matrix invoking sub-unit, configured to invoke the camera intrinsic parameter projection matrix and calculate

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T

to obtain the distorted image coordinates (u_d, v_d).
Preferably, the device may be arranged in a GPU.
Since the device embodiment is basically similar to the method embodiment, its description is relatively simple; for the relevant parts, reference may be made to the corresponding description of the method embodiment.
The three-dimensional model projection method simulating lens distortion and the three-dimensional model projection device simulating lens distortion provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core idea. At the same time, those of ordinary skill in the art may, in accordance with the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. A three-dimensional model projection method simulating lens distortion, characterized by comprising:
projecting the three-dimensional model onto the image plane according to the pinhole imaging model, and obtaining the image coordinates (u, v) in the image coordinate system;
obtaining the transformation relation of the image coordinates, and computing, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
computing the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y);
projecting the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d);
and superimposing the three-dimensional model, after it has been projected through the distorted image coordinates (u_d, v_d), on the real two-dimensional image.
2. The method as claimed in claim 1, characterized in that the step of obtaining the transformation relation of the image coordinates comprises:
generating the camera intrinsic parameter projection matrix:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

the normalized coordinates being transformed into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T

where f_x and f_y are the focal length parameters and (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane;
and obtaining the following transformation relation from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y.
3. The method as claimed in claim 2, characterized in that the distorted normalized image coordinates (x_d, y_d) are calculated by the following formulas:

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 are the radial distortion coefficients.
4. The method as claimed in claim 3, characterized in that the distorted image coordinates (u_d, v_d) are calculated from the camera intrinsic parameter projection matrix as follows:

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T.

5. The method as claimed in claim 3 or 4, characterized in that k_1, k_2, f_x, f_y, u_0 and v_0 are all obtained by Zhang Zhengyou's camera calibration method.
6. A three-dimensional model projection device simulating lens distortion, characterized by comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit, configured to obtain the transformation relation of the image coordinates and compute, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion coordinate pre-calculation unit, configured to compute the distorted normalized image coordinates (x_d, y_d) from the normalized image coordinates (x, y);
a distortion coordinate projection unit, configured to project the distorted normalized image coordinates (x_d, y_d) onto the image plane to obtain the distorted image coordinates (u_d, v_d);
and a virtual-real superposition unit, configured to superimpose the three-dimensional model, after it has been projected through the distorted image coordinates (u_d, v_d), on the real two-dimensional image.
7. The device as claimed in claim 6, characterized in that the coordinate transformation unit comprises:
a matrix generation sub-unit, configured to generate the camera intrinsic parameter projection matrix:

    K = | f_x   0    u_0 |
        |  0    f_y  v_0 |
        |  0    0    1   |

the normalized coordinates being transformed into image coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T

where f_x and f_y are the focal length parameters, (u_0, v_0) are the pixel coordinates of the camera's optical center on the image plane, and f_x, f_y, u_0 and v_0 are all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition sub-unit, configured to derive the following formulas from the camera intrinsic parameter projection matrix:

    x = (u - u_0) / f_x,    y = (v - v_0) / f_y;

and a substitution calculation sub-unit, configured to substitute the image coordinates (u, v) into the above formulas and compute the corresponding normalized image coordinates (x, y) in the camera coordinate system.
8. The device as claimed in claim 7, characterized in that the distortion coordinate pre-calculation unit comprises a formula invoking sub-unit, configured to invoke the following formulas to compute the distorted normalized image coordinates (x_d, y_d):

    x_d = x · (1 + k_1·r^2 + k_2·r^4)
    y_d = y · (1 + k_1·r^2 + k_2·r^4)

where r^2 = x^2 + y^2, and k_1 and k_2 are the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method.
9. The device as claimed in claim 8, characterized in that the distortion coordinate projection unit comprises a matrix invoking sub-unit, configured to invoke the camera intrinsic parameter projection matrix and obtain the distorted image coordinates (u_d, v_d) as follows:

    [u_d, v_d, 1]^T = K · [x_d, y_d, 1]^T.

10. The device as claimed in claim 9, characterized in that the device is arranged in a GPU.
CN2009102439313A 2009-12-25 2009-12-25 Three-dimensional model projecting method and device for imitating lens distortion Pending CN102110300A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009102439313A CN102110300A (en) 2009-12-25 2009-12-25 Three-dimensional model projecting method and device for imitating lens distortion

Publications (1)

Publication Number Publication Date
CN102110300A true CN102110300A (en) 2011-06-29

Family

ID=44174447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009102439313A Pending CN102110300A (en) 2009-12-25 2009-12-25 Three-dimensional model projecting method and device for imitating lens distortion

Country Status (1)

Country Link
CN (1) CN102110300A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102929089A (en) * 2012-11-19 2013-02-13 北京理工大学 Lighting device capable of improving resolution of multi-projector true three-dimensional display
CN102929089B (en) * 2012-11-19 2015-08-19 北京理工大学 A kind of lighting device improving multi-projector formula true three-dimansioual display resolution
CN108121941A (en) * 2016-11-30 2018-06-05 上海联合道路交通安全科学研究中心 A kind of object speed calculation method based on monitoring device
CN108227185A (en) * 2017-12-28 2018-06-29 深圳市泛海三江科技发展有限公司 A kind of optical lens image-forming correction method
CN109636874A (en) * 2018-12-17 2019-04-16 浙江科澜信息技术有限公司 A kind of threedimensional model perspective projection method, system and relevant apparatus
CN109636874B (en) * 2018-12-17 2023-05-26 浙江科澜信息技术有限公司 Perspective projection method, system and related device for three-dimensional model
CN111415296A (en) * 2020-03-17 2020-07-14 东南数字经济发展研究院 Ground resolution calculation method for unmanned aerial vehicle oblique photography
CN111415296B (en) * 2020-03-17 2024-01-19 东南数字经济发展研究院 Ground resolution computing method for unmanned aerial vehicle oblique photography

Similar Documents

Publication Publication Date Title
Yang et al. Object detection in equirectangular panorama
CN102111561A (en) Three-dimensional model projection method for simulating real scenes and device adopting same
CN103081476B (en) The method and apparatus utilizing depth map information conversion 3-D view
CN113052835B (en) Medicine box detection method and system based on three-dimensional point cloud and image data fusion
CN104155765B (en) The method and apparatus of revision for 3-D image in spliced integration imaging display
CN108475327A (en) three-dimensional acquisition and rendering
CN104935909B (en) Multi-image super-resolution method based on depth information
CN110728671B (en) Dense reconstruction method of texture-free scene based on vision
CN107341832B (en) Multi-view switching shooting system and method based on infrared positioning system
CN106228507A (en) A kind of depth image processing method based on light field
CN110070598B (en) Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof
US9230330B2 (en) Three dimensional sensing method and three dimensional sensing apparatus
CN101631257A (en) Method and device for realizing three-dimensional playing of two-dimensional video code stream
CN110648274B (en) Method and device for generating fisheye image
CN105184857A (en) Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN102111562A (en) Projection conversion method for three-dimensional model and device adopting same
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN105046649A (en) Panorama stitching method for removing moving object in moving video
CN102110300A (en) Three-dimensional model projecting method and device for imitating lens distortion
WO2023093739A1 (en) Multi-view three-dimensional reconstruction method
US11380063B2 (en) Three-dimensional distortion display method, terminal device, and storage medium
CN102110298A (en) Method and device for projecting three-dimensional model in virtual studio system
US10909752B2 (en) All-around spherical light field rendering method
KR20180000696A (en) A method and apparatus for creating a pair of stereoscopic images using least one lightfield camera
CN112529006B (en) Panoramic picture detection method, device, terminal and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20110629