CN102111562A - Projection conversion method for three-dimensional model and device adopting same - Google Patents
Projection conversion method for three-dimensional model and device adopting same
- Publication number
- CN102111562A CN102111562A CN2009102439296A CN200910243929A CN102111562A CN 102111562 A CN102111562 A CN 102111562A CN 2009102439296 A CN2009102439296 A CN 2009102439296A CN 200910243929 A CN200910243929 A CN 200910243929A CN 102111562 A CN102111562 A CN 102111562A
- Authority
- CN (China)
- Prior art keywords
- image coordinate
- distortion
- coordinate
- camera
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
The invention discloses a projection conversion method for a three-dimensional model. The method comprises four steps. In step one, the three-dimensional model is projected onto the image plane according to a pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system. In step two, the transformation relation of the image coordinates is obtained, and the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v) are calculated according to the transformation relation. In step three, the distorted image coordinates are calculated from the normalized image coordinates (x, y). In step four, the three-dimensional model, after being projected to the distorted image coordinates, is superimposed onto the real two-dimensional image. By adopting the invention, the three-dimensional model can be matched with the real scene in the image, achieving a more realistic and natural combination of the virtual and the real.
Description
Technical field
The present invention relates to the technical field of television program production, and in particular to a projective transformation method for a three-dimensional model and a projective transformation device for a three-dimensional model.
Background technology
A virtual studio is a unique television program production technology developed in recent years. Its essence is to digitally composite, in real time, a virtual three-dimensional scene produced by computer with the moving image of a subject captured by a camera, so that the subject image and the virtual background change synchronously and are fused together to obtain a perfect composite picture. Building on traditional chroma-key matting, virtual studio technology makes full use of computer three-dimensional graphics and video compositing: according to the position and parameters of the camera, the perspective relation of the three-dimensional virtual scene is kept consistent with the foreground, so that after chroma-key compositing the person in the foreground appears to be entirely inside the computer-generated three-dimensional virtual scene and can move within it, creating a lifelike studio effect with a strong sense of depth.
Correspondingly, in a virtual studio system the virtual three-dimensional model needs to be projected into the two-dimensional image according to the current pose of the camera, so that the virtual and the real are combined. The camera pose is generally determined on the basis of the perspective projection principle of the pinhole imaging model used in machine vision, which is well known to map straight lines to straight lines.
In practice, however, a real camera lens is not an ideal pinhole imaging model and often exhibits a small amount of distortion. As shown in Figure 1, a real lens tends to project a straight line in the real scene onto the image as a slightly curved arc. If the virtual three-dimensional model is projected directly onto the image, its straight lines remain straight and appear inconsistent with the scene in the image, lacking realism.
Summary of the invention
The technical problem to be solved by the present invention is to provide a projective transformation method for a three-dimensional model, so that the three-dimensional model matches the real scene in the image and a more realistic and natural combination of the virtual and the real is achieved.
In order to solve the above technical problem, an embodiment of the invention discloses a projective transformation method for a three-dimensional model, comprising:
Step S1: projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
Step S2: obtaining the transformation relation of the image coordinates, and calculating, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
Step S3: calculating the distorted image coordinates from the normalized image coordinates (x, y);
Step S4: superimposing the three-dimensional model onto the real two-dimensional image after projecting it to the distorted image coordinates.
Preferably, step S3 comprises the following substeps:
Substep S311: constructing a radial distortion model for the current image;
Substep S312: substituting the normalized image coordinates (x, y) into the radial distortion model to obtain the distorted image coordinates.
Preferably, step S3 comprises the following substeps:
Substep S321: calculating the distorted normalized image coordinates from the normalized image coordinates (x, y);
Substep S322: projecting the distorted normalized image coordinates onto the image plane to obtain the distorted image coordinates.
Preferably, step S2 comprises the following substeps:
Substep S21: generating the camera intrinsic parameter projection matrix

    K = | fx  0   u0 |
        | 0   fy  v0 |
        | 0   0   1  |

and relating image coordinates to normalized coordinates by the formula [u, v, 1]^T = K · [x, y, 1]^T, where fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane;
Substep S22: obtaining, according to the camera intrinsic parameter projection matrix, the following transformation relation:

    x = (u - u0) / fx,  y = (v - v0) / fy;

Substep S23: substituting the image coordinates (u, v) into the above formula to calculate the corresponding normalized image coordinates (x, y) in the camera coordinate system.
Preferably, the radial distortion model is:

    u_d = u0 + fx·x·(1 + k1·r^2 + k2·r^4),
    v_d = v0 + fy·y·(1 + k1·r^2 + k2·r^4),

where k1 and k2 are the radial distortion coefficients, fx and fy are the focal length parameters, (u0, v0) are the pixel coordinates of the camera optical center on the image plane, and r^2 = x^2 + y^2. The parameters k1, k2, fx, fy, u0 and v0 are obtained by Zhang Zhengyou's camera calibration method.
Preferably, the distorted normalized image coordinates are calculated by the following formula:

    x_d = x·(1 + k1·r^2 + k2·r^4),  y_d = y·(1 + k1·r^2 + k2·r^4),

where r^2 = x^2 + y^2, and k1 and k2 are the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method; the distorted image coordinates are then obtained through the camera intrinsic parameter projection matrix as:

    u_d = fx·x_d + u0,  v_d = fy·y_d + v0.
The embodiment of the invention also discloses a projective transformation device for a three-dimensional model, comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit, configured to obtain the transformation relation of the image coordinates and to calculate, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion calculation unit, configured to calculate the distorted image coordinates from the normalized image coordinates (x, y);
a virtual-real superposition unit, configured to superimpose the three-dimensional model onto the real two-dimensional image after it has been projected to the distorted image coordinates.
Preferably, the distortion calculation unit comprises:
a distortion model construction subunit, configured to construct a radial distortion model for the current image;
a distortion coordinate calculation subunit, configured to substitute the normalized image coordinates (x, y) into the radial distortion model to obtain the distorted image coordinates.
Preferably, the distortion calculation unit comprises:
a distortion coordinate pre-calculation subunit, configured to calculate the distorted normalized image coordinates from the normalized image coordinates (x, y);
a distortion coordinate projection subunit, configured to project the distorted normalized image coordinates onto the image plane to obtain the distorted image coordinates.
Preferably, the coordinate transformation unit comprises:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix

    K = | fx  0   u0 |
        | 0   fy  v0 |
        | 0   0   1  |

an image coordinate transformation subunit, configured to transform image coordinates into normalized coordinates according to the formula [u, v, 1]^T = K · [x, y, 1]^T, where fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane, all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition subunit, configured to obtain the following calculation formula from the camera intrinsic parameter projection matrix:

    x = (u - u0) / fx,  y = (v - v0) / fy;

a substitution calculation subunit, configured to substitute the image coordinates (u, v) into the above formula to calculate the corresponding normalized image coordinates (x, y) in the camera coordinate system.
Compared with the prior art, the present invention has the following advantages:
The present invention describes the distortion effect of the lens by introducing distortion coefficients and applies these coefficients in the projection process, so that the three-dimensional model is distorted as well and matches the scene in the image. After this processing, the virtual three-dimensional model blends better with the real environment and looks more realistic and natural.
Description of drawings
Fig. 1 is a schematic diagram showing how lens distortion causes a straight line to be projected as a curve;
Fig. 2 is a flow chart of the steps of Embodiment 1 of the projective transformation method for a three-dimensional model of the present invention;
Fig. 3 is a flow chart of the steps of Embodiment 2 of the projective transformation method for a three-dimensional model of the present invention;
Fig. 4 is a flow chart of the steps of Embodiment 3 of the projective transformation method for a three-dimensional model of the present invention;
Fig. 5 is a structural block diagram of an embodiment of the projective transformation device for a three-dimensional model of the present invention.
Embodiment
In order to make the above objects, features and advantages of the present invention more apparent and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
To help those skilled in the art better understand the present invention, the image coordinate system and the camera coordinate system involved in the embodiments of the invention are briefly introduced below.
The image captured by the camera is converted, through a high-speed image sampling system, from a standard television signal into a digital image and input into the computer. In the computer, each digital image is an M*N array; each element of the M-row, N-column image (called a pixel) is the brightness (or gray level) of the corresponding image point. A rectangular coordinate system u, v is defined on the image, and the coordinates (u, v) of each pixel are its column and row numbers in the array, i.e. coordinates in the image coordinate system in units of pixels. Because (u, v) only indicate the column and row of a pixel in the array, they do not express the position of the pixel in the image in physical units. Therefore, an image coordinate system expressed in physical units (millimeters) also needs to be established. Its origin is a certain point in the image (the intersection of the camera optical axis and the image plane), its X and Y axes are parallel to the u and v axes respectively, and the physical size of each pixel along the X and Y directions is dX and dY; the relation between the two coordinate systems can then be established.
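As an illustration of the relation between the two image coordinate systems just described, the following minimal sketch converts pixel coordinates (u, v) into physical image-plane coordinates (X, Y) in millimeters; the pixel sizes dX, dY and the principal point (u0, v0) are hypothetical example values, not values from the original disclosure.

```python
# Minimal sketch: pixel coordinates (u, v) -> physical image coordinates (X, Y) in mm.
# dX, dY (pixel size in mm) and the principal point (u0, v0) are example values.
def pixel_to_physical(u, v, dX=0.005, dY=0.005, u0=960.0, v0=540.0):
    X = (u - u0) * dX  # X axis parallel to the u axis
    Y = (v - v0) * dY  # Y axis parallel to the v axis
    return X, Y

print(pixel_to_physical(1000.0, 600.0))  # -> (0.2, 0.3)
```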
The origin of the camera coordinate system is the camera optical center; its x and y axes are parallel to the X and Y axes of the image, and its z axis is the camera optical axis, which is perpendicular to the image plane. The intersection of the optical axis and the image plane is the origin of the image coordinate system; the rectangular coordinate system formed in this way is called the camera coordinate system.
One of the core ideas of the embodiments of the invention is that, because of lens distortion, a virtual three-dimensional model projected into a two-dimensional image tends to look inconsistent with the scene in the image and to lack realism. The inventors therefore introduce distortion coefficients to describe the distortion effect of the lens and apply these coefficients in the projection process, so that the three-dimensional model is distorted as well and matches the scene in the image.
It is well known that lens distortion is mainly of two kinds: radial distortion and tangential distortion. Tangential distortion is very small, so in the present invention only the influence of radial distortion is considered. Radial distortion is characterized by becoming more obvious the farther a point is from the image center, with straight lines being projected into more strongly curved arcs. Radial distortion should therefore be a function of the distance of the image point from the center.
Referring to Fig. 2, a flow chart of the steps of Embodiment 1 of the projective transformation method for a three-dimensional model of the present invention is shown. Specifically, the method may comprise the following steps:
Step S1: projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
Step S2: obtaining the transformation relation of the image coordinates, and calculating, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
Step S3: calculating the distorted image coordinates from the normalized image coordinates (x, y);
Step S4: superimposing the three-dimensional model onto the real two-dimensional image after projecting it to the distorted image coordinates.
It is well known that the pinhole imaging model is a geometric model that describes the correspondence between an arbitrary point in space and its imaging point on the image. The parameters of this geometric model are the camera calibration parameters. Specifically, the camera calibration parameters comprise extrinsic parameters and intrinsic parameters: the extrinsic parameters include geometric information such as the position and attitude of the camera, and the intrinsic parameters include optical parameters such as the camera focal length, the principal point and the lens distortion coefficients.
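As a non-authoritative sketch of the pinhole imaging model just described, the following code projects a world point into pixel coordinates using extrinsic parameters (rotation R, translation t) and an intrinsic matrix K; all numeric values for R, t and K are hypothetical and serve only to illustrate the model.

```python
import numpy as np

# Sketch of pinhole projection: world point -> camera coordinates -> pixel coordinates.
# R, t (extrinsics) and K (intrinsics) are hypothetical example values.
def project_pinhole(X_world, R, t, K):
    X_cam = R @ X_world + t                           # world -> camera coordinate system
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]   # normalized image coordinates
    u = K[0, 0] * x + K[0, 2]                         # u = fx * x + u0
    v = K[1, 1] * y + K[1, 2]                         # v = fy * y + v0
    return u, v

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
print(project_pinhole(np.array([0.5, -0.2, 0.0]), R, t, K))  # -> (1060.0, 500.0)
```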
In a preferred embodiment of the present invention, step S2 may comprise the following substeps:
Substep S21: generating the camera intrinsic parameter projection matrix, which can be expressed as

    K = | fx  0   u0 |
        | 0   fy  v0 |        (1)
        | 0   0   1  |

Substep S22: relating image coordinates to normalized coordinates by the following formula:

    [u, v, 1]^T = K · [x, y, 1]^T,

where fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane.
The parameters fx, fy, u0 and v0 above can easily be calculated by Zhang Zhengyou's camera calibration method. In 1998 Zhang Zhengyou, taking radial distortion into account, proposed a method that can calibrate all the intrinsic and extrinsic parameters of a camera from several views of a planar template, the plane-based calibration method, which calibrates the camera accurately and is simple to carry out. In formula (1), fx, fy, u0 and v0 depend only on the intrinsic parameters of the camera, so the above matrix is known in the prior art as the camera intrinsic parameter projection matrix.
Specifically, fx = f/dX and fy = f/dY are called the normalized focal lengths along the x axis and the y axis, where f is the focal length of the camera, and dX and dY denote the size of a unit pixel along the u axis and the v axis respectively. u0 and v0 represent the optical center, i.e. the intersection of the camera optical axis and the image plane, which is usually located at the image center, so their values are often taken as half the resolution.
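A minimal sketch of obtaining fx, fy, u0, v0 and the radial distortion coefficients k1, k2 with Zhang's plane-based calibration, here using OpenCV's implementation as a stand-in; the chessboard pattern size, square size and image file names are assumptions for illustration only.

```python
import cv2
import numpy as np

# Sketch of Zhang's plane-based calibration with OpenCV; board size, square size
# and image file names are hypothetical.
pattern = (9, 6)                      # inner corners of the planar chessboard template
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

obj_points, img_points, image_size = [], [], None
for name in ["calib_01.png", "calib_02.png", "calib_03.png"]:
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
fx, fy, u0, v0 = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
k1, k2 = dist[0][0], dist[0][1]       # first two distortion entries are the radial coefficients
```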
Substep S23: obtaining, according to the camera intrinsic parameter projection matrix, the following transformation relation:

    x = (u - u0) / fx,  y = (v - v0) / fy;

Substep S24: calculating, according to this formula, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v).
Through the above steps, coordinates in the image coordinate system can be transformed simply into coordinates in the camera coordinate system (a finite set of points), without traversing the three-dimensional model point by point and transforming every point of the model from the world coordinate system into the camera coordinate system. This effectively saves system resources and improves processing efficiency.
In the embodiments of the invention, the normalized image coordinates (x, y) refer to the distortion-free normalized image coordinates under an ideal pinhole camera model.
For a real three-dimensional model of arbitrary complexity, the embodiments of the invention require only one projective transformation; the subsequent processing is carried out on the image pixels. Since the size of an ordinary image is fixed and the number of pixels is limited, the processing efficiency is very high. In a specific implementation, the present invention can be carried out on a GPU (Graphics Processing Unit).
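The per-pixel nature of this processing can be sketched as follows with NumPy as a stand-in for a GPU implementation; the image size, focal lengths, principal point and distortion coefficients are hypothetical. For every pixel of the fixed-size image, the radial distortion mapping is evaluated once, independent of the complexity of the three-dimensional model.

```python
import numpy as np

# Vectorized sketch: evaluate the radial distortion mapping once per pixel of a
# fixed-size image; all parameter values are hypothetical examples.
H, W = 1080, 1920
fx = fy = 1000.0
u0, v0 = W / 2.0, H / 2.0
k1, k2 = -0.12, 0.03

v_grid, u_grid = np.mgrid[0:H, 0:W].astype(np.float64)
x = (u_grid - u0) / fx                      # normalized coordinates per pixel
y = (v_grid - v0) / fy
r2 = x * x + y * y
factor = 1.0 + k1 * r2 + k2 * r2 * r2
u_dist = fx * x * factor + u0               # distorted pixel coordinates
v_dist = fy * y * factor + v0
print(u_dist.shape, v_dist.shape)           # (1080, 1920) each, one pass over the image
```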
Through the above steps, the virtual three-dimensional model blends better with the real environment, achieving a more realistic and natural combination of the virtual and the real.
The method of transforming the normalized image coordinates (x, y) into the distorted image coordinates is described in detail below by means of two specific embodiments.
Referring to Fig. 3, a flow chart of the steps of Embodiment 2 of the projective transformation method for a three-dimensional model of the present invention is shown. Specifically, the method may comprise the following steps:
Step 203: constructing a radial distortion model for the current image;
In a preferred embodiment of the present invention, the radial distortion model is:

    u_d = u0 + fx·x·(1 + k1·r^2 + k2·r^4),
    v_d = v0 + fy·y·(1 + k1·r^2 + k2·r^4),

where r^2 = x^2 + y^2.
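A minimal sketch of substeps S311/S312 under the radial distortion model above: the normalized coordinates (x, y) are substituted into the model to obtain the distorted pixel coordinates directly. The calibration values fx, fy, u0, v0, k1, k2 used here are hypothetical.

```python
# Sketch of Embodiment 2: normalized coordinates (x, y) -> distorted pixel coordinates.
# fx, fy, u0, v0, k1, k2 are hypothetical values standing in for calibration results.
def distort_to_pixels(x, y, fx=1000.0, fy=1000.0, u0=960.0, v0=540.0, k1=-0.12, k2=0.03):
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    u_d = u0 + fx * x * factor
    v_d = v0 + fy * y * factor
    return u_d, v_d

print(distort_to_pixels(0.1, -0.04))
```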
Referring to Fig. 4, a flow chart of the steps of Embodiment 3 of the projective transformation method for a three-dimensional model of the present invention is shown. Specifically, the method may comprise the following steps:
As a preferred embodiment, the distorted normalized image coordinates can be calculated by the following formula:

    x_d = x·(1 + k1·r^2 + k2·r^4),  y_d = y·(1 + k1·r^2 + k2·r^4),

where r^2 = x^2 + y^2, and k1 and k2 can be the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method.
In a specific implementation, the distorted image coordinates can then be obtained from the distorted normalized coordinates through the above camera intrinsic parameter projection matrix as:

    u_d = fx·x_d + u0,  v_d = fy·y_d + v0.
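A minimal sketch of Embodiment 3 under the formulas above: the distorted normalized coordinates are computed first, then projected onto the image plane through the intrinsic parameter projection matrix. The values of k1, k2 and K are hypothetical.

```python
import numpy as np

# Sketch of Embodiment 3: distort in normalized coordinates, then project through K.
# k1, k2 and K are hypothetical calibration values.
def distort_normalized(x, y, k1=-0.12, k2=0.03):
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def to_pixels(x_d, y_d, K):
    uvw = K @ np.array([x_d, y_d, 1.0])    # [u_d, v_d, 1]^T = K [x_d, y_d, 1]^T
    return uvw[0], uvw[1]

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
x_d, y_d = distort_normalized(0.1, -0.04)
print(to_pixels(x_d, y_d, K))
```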
It should be noted that, for simplicity of description, each of the foregoing method embodiments is expressed as a series of combined actions. However, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps can be carried out in other orders or simultaneously.
The present invention is mainly applied in virtual studio systems and, further, in virtual sports systems developed on the basis of virtual studio systems. Specifically, a virtual studio system is a new television program production system that has appeared in recent years with the rapid development of computer technology and the continuous improvement of chroma-key technology. In a virtual studio system, the working state information of the camera is sent to a graphics workstation; from this the computer obtains the distance and relative position between the foreground object and the camera, can calculate the optimal size and position of the virtual scene, and generates the three-dimensional model of the virtual scene as required. The combination of the virtual and the real is achieved by projecting the virtual three-dimensional model into the two-dimensional image according to the current pose of the camera.
Referring to Fig. 5, a structural block diagram of an embodiment of the projective transformation device for a three-dimensional model of the present invention is shown. Specifically, the device may comprise the following units:
a pinhole projection unit 501, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit 502, configured to obtain the transformation relation of the image coordinates and to calculate, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion calculation unit 503, configured to calculate the distorted image coordinates from the normalized image coordinates (x, y);
a virtual-real superposition unit 504, configured to superimpose the three-dimensional model onto the real two-dimensional image after it has been projected to the distorted image coordinates.
In a preferred embodiment of the present invention, the coordinate transformation unit 502 may comprise the following subunits:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix

    K = | fx  0   u0 |
        | 0   fy  v0 |
        | 0   0   1  |

an image coordinate transformation subunit, configured to transform image coordinates into normalized coordinates according to the formula [u, v, 1]^T = K · [x, y, 1]^T, where fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane, all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition subunit, configured to obtain the following calculation formula from the camera intrinsic parameter projection matrix:

    x = (u - u0) / fx,  y = (v - v0) / fy;

a substitution calculation subunit, configured to substitute the image coordinates (u, v) into the above formula to calculate the corresponding normalized image coordinates (x, y) in the camera coordinate system.
As a preferred embodiment of the present invention, the distortion calculation unit 503 may comprise the following subunits:
a distortion model construction subunit, configured to construct a radial distortion model for the current image;
a distortion coordinate calculation subunit, configured to substitute the normalized image coordinates (x, y) into the radial distortion model to obtain the distorted image coordinates.
As another preferred embodiment of the present invention, the distortion calculation unit 503 may comprise the following subunits:
a distortion coordinate pre-calculation subunit, configured to calculate the distorted normalized image coordinates from the normalized image coordinates (x, y);
a distortion coordinate projection subunit, configured to project the distorted normalized image coordinates onto the image plane to obtain the distorted image coordinates.
Preferably, the device can be arranged in a GPU.
As for the device embodiment, since it is substantially similar to the method embodiments, its description is relatively simple; for relevant details, reference may be made to the description of the method embodiments.
The projective transformation method for a three-dimensional model and the projective transformation device for a three-dimensional model provided by the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. At the same time, a person of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.
Claims (10)
1. A projective transformation method for a three-dimensional model, characterized by comprising:
step S1: projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
step S2: obtaining the transformation relation of the image coordinates, and calculating, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
step S3: calculating the distorted image coordinates from the normalized image coordinates (x, y); and
step S4: superimposing the three-dimensional model onto the real two-dimensional image after projecting it to the distorted image coordinates.
2. The method according to claim 1, characterized in that step S3 comprises the following substeps:
substep S311: constructing a radial distortion model for the current image; and
substep S312: substituting the normalized image coordinates (x, y) into the radial distortion model to obtain the distorted image coordinates.
3. The method according to claim 1, characterized in that step S3 comprises the following substeps:
substep S321: calculating the distorted normalized image coordinates from the normalized image coordinates (x, y); and
substep S322: projecting the distorted normalized image coordinates onto the image plane to obtain the distorted image coordinates.
4. The method according to claim 1, characterized in that step S2 comprises the following substeps:
substep S21: generating the camera intrinsic parameter projection matrix

    K = | fx  0   u0 |
        | 0   fy  v0 |
        | 0   0   1  |

and relating image coordinates to normalized coordinates by the formula [u, v, 1]^T = K · [x, y, 1]^T, wherein fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane;
substep S22: obtaining, according to the camera intrinsic parameter projection matrix, the following transformation relation:

    x = (u - u0) / fx,  y = (v - v0) / fy;

substep S23: substituting the image coordinates (u, v) into the above formula to calculate the corresponding normalized image coordinates (x, y) in the camera coordinate system.
5. The method according to claim 2, characterized in that the radial distortion model is:

    u_d = u0 + fx·x·(1 + k1·r^2 + k2·r^4),
    v_d = v0 + fy·y·(1 + k1·r^2 + k2·r^4),

wherein k1 and k2 are the radial distortion coefficients, fx and fy are the focal length parameters, (u0, v0) are the pixel coordinates of the camera optical center on the image plane, and r^2 = x^2 + y^2; k1, k2, fx, fy, u0 and v0 are obtained by Zhang Zhengyou's camera calibration method.
6. The method according to claim 3, characterized in that the distorted normalized image coordinates are calculated by the following formula:

    x_d = x·(1 + k1·r^2 + k2·r^4),  y_d = y·(1 + k1·r^2 + k2·r^4),

wherein r^2 = x^2 + y^2, and k1 and k2 are the radial distortion coefficients obtained by Zhang Zhengyou's camera calibration method; and the distorted image coordinates are obtained through the camera intrinsic parameter projection matrix as:

    u_d = fx·x_d + u0,  v_d = fy·y_d + v0.
7. A projective transformation device for a three-dimensional model, characterized by comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model to obtain image coordinates (u, v) in the image coordinate system;
a coordinate transformation unit, configured to obtain the transformation relation of the image coordinates and to calculate, according to the transformation relation, the normalized image coordinates (x, y) in the camera coordinate system corresponding to the image coordinates (u, v);
a distortion calculation unit, configured to calculate the distorted image coordinates from the normalized image coordinates (x, y); and
a virtual-real superposition unit, configured to superimpose the three-dimensional model onto the real two-dimensional image after it has been projected to the distorted image coordinates.
8. The device according to claim 7, characterized in that the distortion calculation unit comprises:
a distortion model construction subunit, configured to construct a radial distortion model for the current image; and
a distortion coordinate calculation subunit, configured to substitute the normalized image coordinates (x, y) into the radial distortion model to obtain the distorted image coordinates.
9. The device according to claim 7, characterized in that the distortion calculation unit comprises:
a distortion coordinate pre-calculation subunit, configured to calculate the distorted normalized image coordinates from the normalized image coordinates (x, y); and
a distortion coordinate projection subunit, configured to project the distorted normalized image coordinates onto the image plane to obtain the distorted image coordinates.
10. The device according to claim 7, characterized in that the coordinate transformation unit comprises:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix

    K = | fx  0   u0 |
        | 0   fy  v0 |
        | 0   0   1  |

an image coordinate transformation subunit, configured to transform image coordinates into normalized coordinates according to the formula [u, v, 1]^T = K · [x, y, 1]^T, wherein fx and fy are the focal length parameters and (u0, v0) are the pixel coordinates of the camera optical center on the image plane, all obtained by Zhang Zhengyou's camera calibration method;
a transformation relation acquisition subunit, configured to obtain the following calculation formula from the camera intrinsic parameter projection matrix:

    x = (u - u0) / fx,  y = (v - v0) / fy;

a substitution calculation subunit, configured to substitute the image coordinates (u, v) into the above formula to calculate the corresponding normalized image coordinates (x, y) in the camera coordinate system.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102439296A CN102111562A (en) | 2009-12-25 | 2009-12-25 | Projection conversion method for three-dimensional model and device adopting same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009102439296A CN102111562A (en) | 2009-12-25 | 2009-12-25 | Projection conversion method for three-dimensional model and device adopting same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102111562A (en) | 2011-06-29 |
Family
ID=44175569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2009102439296A Pending CN102111562A (en) | 2009-12-25 | 2009-12-25 | Projection conversion method for three-dimensional model and device adopting same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102111562A (en) |
- 2009-12-25: Application CN2009102439296A filed in China (CN); published as CN102111562A, status pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1719477A (en) * | 2005-05-19 | 2006-01-11 | 上海交通大学 | Calibration method of pick up camera or photographic camera geographic distortion |
CN301038163S (en) * | 2008-09-17 | 2009-10-14 | 上海五洲药业股份有限公司 | Medicine packaging box (dihydroergot mesylate tablets 2) |
CN101520897A (en) * | 2009-02-27 | 2009-09-02 | 北京机械工业学院 | Video camera calibration method |
Non-Patent Citations (2)
Title |
---|
Luo Yulin (罗瑜林): "Research on Camera Calibration Based on Data Fitting and Its Application", China Master's Theses Full-text Database, Information Science and Technology * |
Yuan Ye (袁野) et al.: "An Active Vision Self-Calibration Algorithm Considering Second-Order Radial Distortion", Journal of Image and Graphics (中国图象图形学报) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102745198A (en) * | 2012-07-23 | 2012-10-24 | 北京智华驭新汽车电子技术开发有限公司 | Auxiliary forward track device for vehicle |
CN102745138A (en) * | 2012-07-23 | 2012-10-24 | 北京智华驭新汽车电子技术开发有限公司 | Dual view-field dynamic-trajectory reverse image system |
CN102745198B (en) * | 2012-07-23 | 2014-12-03 | 北京智华驭新汽车电子技术开发有限公司 | Auxiliary forward track device for vehicle |
CN108307179A (en) * | 2016-08-30 | 2018-07-20 | 姜汉龙 | A kind of method of 3D three-dimensional imagings |
CN107134000A (en) * | 2017-05-23 | 2017-09-05 | 张照亮 | A kind of three-dimensional dynamic images generation method and system for merging reality |
CN107134000B (en) * | 2017-05-23 | 2020-10-23 | 张照亮 | Reality-fused three-dimensional dynamic image generation method and system |
CN110175009A (en) * | 2019-04-19 | 2019-08-27 | 北京戴纳实验科技有限公司 | A kind of display methods of smart electronics menu |
CN110175009B (en) * | 2019-04-19 | 2022-04-12 | 北京戴纳实验科技有限公司 | Display method of intelligent electronic menu |
CN113380088A (en) * | 2021-04-07 | 2021-09-10 | 上海中船船舶设计技术国家工程研究中心有限公司 | Interactive simulation training support system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yang et al. | Object detection in equirectangular panorama | |
CN102111561A (en) | Three-dimensional model projection method for simulating real scenes and device adopting same | |
CN110728671B (en) | Dense reconstruction method of texture-free scene based on vision | |
CN101356546B (en) | Image high-resolution upgrading device, image high-resolution upgrading method image high-resolution upgrading system | |
CN108475327A (en) | three-dimensional acquisition and rendering | |
CN110070598B (en) | Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof | |
CN112525107B (en) | Structured light three-dimensional measurement method based on event camera | |
CN110648274B (en) | Method and device for generating fisheye image | |
CN113643414B (en) | Three-dimensional image generation method and device, electronic equipment and storage medium | |
CN113362457B (en) | Stereoscopic vision measurement method and system based on speckle structured light | |
CN102111562A (en) | Projection conversion method for three-dimensional model and device adopting same | |
JP2008016918A (en) | Image processor, image processing system, and image processing method | |
CN103065318B (en) | The curved surface projection method and device of multiple-camera panorama system | |
CN105046743A (en) | Super-high-resolution three dimensional reconstruction method based on global variation technology | |
CN107146287B (en) | Two-dimensional projection image to threedimensional model mapping method | |
CN105046649A (en) | Panorama stitching method for removing moving object in moving video | |
WO2023093739A1 (en) | Multi-view three-dimensional reconstruction method | |
CN102110300A (en) | Three-dimensional model projecting method and device for imitating lens distortion | |
CN102110298A (en) | Method and device for projecting three-dimensional model in virtual studio system | |
US20200380770A1 (en) | All-around spherical light field rendering method | |
Neumann et al. | Eyes from eyes: New cameras for structure from motion | |
CN102110299A (en) | Method and device for projecting application distortion in three-dimensional model | |
Neumann et al. | Eyes from eyes: analysis of camera design using plenoptic video geometry | |
CN107274449B (en) | Space positioning system and method for object by optical photo | |
CN111462199B (en) | Rapid speckle image matching method based on GPU |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20110629 |