CN102110298A - Method and device for projecting three-dimensional model in virtual studio system - Google Patents


Info

Publication number
CN102110298A
CN102110298A (application CN200910243928.1A; granted as CN102110298B)
Authority
CN
China
Prior art keywords
image coordinate
coordinate
image
model
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2009102439281A
Other languages
Chinese (zh)
Other versions
CN102110298B (en)
Inventor
Xie Hong (谢宏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Digital Video Beijing Ltd
Original Assignee
China Digital Video Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Digital Video Beijing Ltd filed Critical China Digital Video Beijing Ltd
Priority to CN200910243928.1A priority Critical patent/CN102110298B/en
Publication of CN102110298A publication Critical patent/CN102110298A/en
Application granted granted Critical
Publication of CN102110298B publication Critical patent/CN102110298B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Studio Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a method for projecting a three-dimensional model in a virtual studio system, which comprises the following steps: projecting the three-dimensional model onto an image plane according to a pinhole imaging model to obtain an image coordinate (u, v) in an image coordinate system; obtaining a transformation relation of the image coordinate, and calculating from it the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v); constructing a radial distortion model of the current image; substituting the normalized image coordinate (x, y) into the radial distortion model to obtain a distorted image coordinate; projecting the three-dimensional model to the distorted image coordinate; and superimposing the three-dimensional model onto the real two-dimensional image. The method makes the three-dimensional model match the real scene in the image, achieving a more real and natural combination of the virtual and the real.

Description

Method and device for projecting a three-dimensional model in a virtual studio system
Technical field
The present invention relates to the technical field of television program production, and in particular to a method and a device for projecting a three-dimensional model in a virtual studio system.
Background technology
A virtual studio is a unique television program production technology developed in recent years. Its essence is the real-time digital compositing of a virtual three-dimensional model produced by a computer with the moving image of a subject captured by a camera, so that the subject image and the virtual background change synchronously and fuse into a single, seamless composite picture. Building on traditional chroma-key matting, virtual studio technology makes full use of computer three-dimensional graphics and video compositing: according to the position and parameters of the camera, the perspective of the three-dimensional virtual scene is kept consistent with the foreground, so that after chroma-key compositing the person in the foreground appears to be entirely inside the computer-generated three-dimensional virtual scene and can move within it, creating a lifelike studio effect with a strong sense of depth.
Accordingly, in a virtual studio system the virtual three-dimensional model must be projected into the two-dimensional image according to the pose of the current camera, thereby combining the virtual with the real. The camera pose is generally determined from the perspective projection of the pinhole imaging model used in machine vision, which is well known to map straight lines to straight lines.
In practice, however, a real lens is not an ideal pinhole model and often exhibits slight distortion. As shown in Figure 1, a real lens tends to project a straight line in the real scene as a slightly curved arc in the image. If the virtual three-dimensional model is projected directly onto the image, its straight lines remain straight and therefore look inconsistent with the scene in the image, lacking realism.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for projecting a three-dimensional model in a virtual studio system, so that the three-dimensional model matches the real scene in the image and a more real and natural combination of the virtual and the real is achieved.
To solve the above technical problem, an embodiment of the invention discloses a method for projecting a three-dimensional model in a virtual studio system, comprising:
projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain the image coordinate (u, v) in the image coordinate system;
obtaining the transformation relation of the image coordinate, and calculating, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v);
constructing a radial distortion model of the current image;
substituting the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d); and
projecting the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimposing it onto the real two-dimensional image.
Preferably, the step of obtaining the transformation relation of the image coordinate comprises:
generating the camera intrinsic parameter projection matrix:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

transforming between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and
obtaining from the camera intrinsic parameter projection matrix the transformation relation:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y.
Preferably, the radial distortion model is:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where k_1 and k_2 are the radial distortion coefficients; f_x and f_y are the focal-length parameters; (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and r² = x² + y².
Preferably, k_1, k_2, f_x, f_y, u_0 and v_0 are obtained by Zhang Zhengyou's camera calibration method.
An embodiment of the invention also discloses a device for projecting a three-dimensional model in a virtual studio system, comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinate (u, v) in the image coordinate system;
a coordinate system transformation unit, configured to obtain the transformation relation of the image coordinate and calculate, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v);
a distortion model construction unit, configured to construct the radial distortion model of the current image;
a distortion coordinate calculation unit, configured to substitute the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d); and
a virtual-real superposition unit, configured to project the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimpose it onto the real two-dimensional image.
Preferably, the coordinate system transformation unit comprises:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

and to transform between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane;
a transformation relation obtaining subunit, configured to obtain from the camera intrinsic parameter projection matrix the computing formula:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y

and a substitution calculation subunit, configured to substitute the image coordinate (u, v) into the above formula and calculate the normalized image coordinate (x, y) in the corresponding camera coordinate system.
Preferably, the radial distortion model is:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where k_1 and k_2 are the radial distortion coefficients; f_x and f_y are the focal-length parameters; (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and r² = x² + y².
Preferably, k_1, k_2, f_x, f_y, u_0 and v_0 are obtained by Zhang Zhengyou's camera calibration method.
Preferably, the device is arranged in a GPU.
Compared with the prior art, the present invention has the following advantages:
The present invention describes the distortion effect of the lens by introducing distortion coefficients and applies these coefficients in the projection process, so that the three-dimensional model is also distorted and thus matches the scene in the image. After this processing, the virtual three-dimensional model blends better with the real environment and looks more real and natural.
Description of drawings
Fig. 1 is a schematic diagram of a straight line being projected as a curve because of lens distortion;
Fig. 2 is a flow chart of the steps of an embodiment of the method for projecting a three-dimensional model in a virtual studio system of the present invention;
Fig. 3 is a structural block diagram of an embodiment of the device for projecting a three-dimensional model in a virtual studio system of the present invention.
Detailed description
To make the above objects, features and advantages of the present invention clearer and easier to understand, the present invention is described in further detail below with reference to the drawings and specific embodiments.
The present invention is mainly applied in virtual studio systems and in virtual sports systems further developed from them. Specifically, a virtual studio system is a new television program production system that has emerged in recent years with the rapid development of computer technology and the continuous improvement of chroma-key technology. In a virtual studio system, the working state information of the camera is sent to a graphics workstation; from this the computer obtains the distance and relative position between the foreground object and the camera, calculates the optimum size and position of the virtual scene, and generates the three-dimensional model of the virtual scene as required. The combination of the virtual and the real is then achieved by projecting the virtual three-dimensional model into the two-dimensional image according to the pose of the current camera.
However, because of lens distortion, the virtual three-dimensional model, once projected into the two-dimensional image, often looks inconsistent with the scene in the image and lacks realism. To address this problem, the present invention introduces distortion coefficients to describe the distortion effect of the lens and applies these coefficients in the projection process, so that the three-dimensional model is also distorted and matches the scene in the image. It is well known that lens distortion is mainly of two kinds: radial distortion and tangential distortion. Since tangential distortion is very small, the present invention considers only the influence of radial distortion. Radial distortion is characterized in that the farther a point is from the image center, the more obvious the distortion and the greater the curvature with which a straight line is projected as a curve. Radial distortion should therefore be a function of the distance of the image point from the center.
To help those skilled in the art understand the present invention better, the image coordinate system and the camera coordinate system involved in the embodiments of the invention are briefly introduced below.
The image acquired by the camera is converted from a standard television signal into a digital image by a high-speed image sampling system and input to the computer. In the computer, each digital image is an M×N array; the value of each element (called a pixel) in the M-row, N-column image is the brightness (or gray level) of the corresponding image point. A rectangular coordinate system (u, v) is defined on the image, and the coordinate (u, v) of each pixel is its image coordinate in units of pixels. Since (u, v) only indicates the column and row of the pixel in the array, it does not express the position of the pixel in the image in physical units. An image coordinate system expressed in physical units (millimeters) therefore also has to be established. Its origin is a particular point in the image (the intersection of the camera optical axis and the image plane), and its X and Y axes are parallel to the u and v axes respectively. If the physical size of each pixel in the X and Y directions is dX and dY, the relation between the two coordinate systems can be established.
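The pixel-to-physical relation described above amounts to scaling the pixel offset from the principal point by the pixel size. A minimal Python sketch, where all numeric values are illustrative assumptions rather than figures from the patent:

```python
# Illustrative values (assumptions, not taken from the patent):
dX = dY = 0.005            # physical size of one pixel, in mm
u0, v0 = 960, 540          # principal point, approximated by the image center

def pixel_to_physical(u, v):
    # Position (in mm) of pixel (u, v) relative to the principal point,
    # i.e. its coordinate in the physically measured image coordinate system.
    return (u - u0) * dX, (v - v0) * dY
```

For example, a pixel 200 columns to the right of the principal point lies 1 mm away on the X axis.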
The origin of the camera coordinate system is the optical center of the camera; its x and y axes are parallel to the X and Y axes of the image, and its z axis is the camera optical axis, perpendicular to the image plane. The intersection of the optical axis and the image plane is the origin of the image coordinate system; the rectangular coordinate system thus formed is called the camera coordinate system.
Referring to Figure 2, a flow chart of the steps of an embodiment of the method for projecting a three-dimensional model in a virtual studio system of the present invention is shown; the method may specifically comprise the following steps.
Step 101: projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain the image coordinate (u, v) in the image coordinate system.
It is well known that the pinhole imaging model is a geometric model describing the correspondence between an arbitrary point in space and its imaging point on the image. The parameters of this geometric model are the camera calibration parameters. Specifically, the camera calibration parameters comprise extrinsic and intrinsic parameters: the extrinsic parameters include geometric information such as the position and attitude of the camera, while the intrinsic parameters include optical parameters such as the camera focal length, the principal point and the lens distortion coefficients.
Step 102: obtaining the transformation relation of the image coordinate, and calculating, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v).
In a preferred embodiment of the present invention, this step may comprise the following substeps.
Substep S1: generating the camera intrinsic parameter projection matrix, which can be expressed as:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

Substep S2: transforming between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]     (1)
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane.
The above f_x, f_y, u_0 and v_0 can easily be computed by Zhang Zhengyou's camera calibration method. In 1998, taking radial distortion into account, Zhang Zhengyou proposed the plane calibration method, which can calibrate all the intrinsic and extrinsic camera parameters from several views of a planar template; the method calibrates the camera accurately and is simple to carry out. In the above formula (1), f_x, f_y, u_0 and v_0 depend only on the camera's intrinsic parameters, so the above matrix is known in the prior art as the camera intrinsic parameter projection matrix.
Specifically, f_x = f/dX and f_y = f/dY are called the normalized focal lengths on the x and y axes, where f is the focal length of the camera and dX and dY are the sizes of a unit pixel along the u and v axes respectively. (u_0, v_0) is the optical center, that is, the intersection of the camera optical axis and the image plane; it is usually located at the image center, so its value is often taken as half the resolution.
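Assembling K from these quantities is straightforward. A minimal Python sketch; the focal length, pixel size and resolution below are illustrative assumptions, not values from the patent:

```python
# Illustrative camera values (assumptions, not from the patent):
f = 8.0                      # lens focal length, in mm
dX = dY = 0.005              # size of a unit pixel on the u and v axes, in mm
width, height = 1920, 1080   # image resolution

fx = f / dX                  # normalized focal length on the x axis
fy = f / dY                  # normalized focal length on the y axis
u0, v0 = width / 2, height / 2  # optical center taken as the image center

# Camera intrinsic parameter projection matrix K
K = [[fx, 0.0, u0],
     [0.0, fy, v0],
     [0.0, 0.0, 1.0]]
```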
Substep S3: obtaining from the camera intrinsic parameter projection matrix the following transformation relation:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y

Substep S4: substituting the image coordinate (u, v) into the above formula to calculate the normalized image coordinate (x, y) in the corresponding camera coordinate system.
Through the above steps, the coordinates in the image coordinate system can simply be transformed into coordinates in the camera coordinate system (a limited number of points), without having to traverse the three-dimensional model point by point and transform every model point from the world coordinate system into the camera coordinate system; this effectively saves system resources and improves processing efficiency.
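The normalization in substeps S3 and S4 reduces to two divisions per point. A minimal Python sketch (the function name and the test values are illustrative assumptions):

```python
def normalize(u, v, fx, fy, u0, v0):
    """Map a pixel coordinate (u, v) to the normalized image coordinate
    (x, y) in the camera coordinate system: x = (u - u0)/fx, y = (v - v0)/fy."""
    return (u - u0) / fx, (v - v0) / fy
```

A point at the principal point maps to (0, 0); a point fx pixels to its right maps to x = 1.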
In the embodiments of the present invention, the normalized image coordinate (x, y) refers to the distortion-free normalized image coordinate under the ideal pinhole camera model.
Step 103: constructing the radial distortion model of the current image.
In a preferred embodiment of the present invention, the radial distortion model is:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where r² = x² + y².
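Applying the two-coefficient radial model is a per-point scaling of the normalized coordinate followed by a return to pixel units. A minimal Python sketch under that assumed form (the parameter values in the example are illustrative, not from the patent):

```python
def distort(x, y, fx, fy, u0, v0, k1, k2):
    # Squared distance of the normalized point from the optical axis
    r2 = x * x + y * y
    # Two-coefficient radial scaling factor: 1 + k1*r^2 + k2*r^4
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    # Distorted pixel coordinate (u_d, v_d)
    return u0 + fx * x * s, v0 + fy * y * s
```

The optical axis (x = y = 0) is a fixed point of the model, and with k1 = k2 = 0 the model degenerates to the ideal pinhole projection.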
Step 104: substituting the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d).
Step 105: projecting the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimposing it onto the real two-dimensional image.
For a real three-dimensional model of arbitrary complexity, the embodiment of the invention needs only one projective transformation, after which all processing is done on image pixels. Since the size of an image is generally fixed and the number of pixels is limited, processing efficiency is very high. In a specific implementation, the present invention can be carried out on a GPU (Graphics Processing Unit).
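As a CPU-side reference for that per-point transformation, steps 101 through 104 can be chained into a single function. A minimal Python sketch; the point takes camera-space coordinates (X, Y, Z), and every numeric value in the example is an illustrative assumption, not a figure from the patent:

```python
def project_point(X, Y, Z, fx, fy, u0, v0, k1, k2):
    # Step 101: pinhole projection of a camera-space point to pixel (u, v)
    u = fx * (X / Z) + u0
    v = fy * (Y / Z) + v0
    # Step 102: normalized image coordinate (x, y) in the camera coordinate system
    x = (u - u0) / fx
    y = (v - v0) / fy
    # Steps 103-104: apply the assumed two-coefficient radial distortion model
    r2 = x * x + y * y
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    # Step 105 superimposes the model at the distorted coordinate (u_d, v_d)
    return u0 + fx * x * s, v0 + fy * y * s
```

With zero distortion coefficients the result coincides with the plain pinhole projection; nonzero k1 pushes points away from the image center, as the patent's Figure 1 illustrates for straight lines.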
Through the above steps, the virtual three-dimensional model blends better with the real environment, achieving a more real and natural combination of the virtual and the real.
It should be noted that, for simplicity of description, the foregoing method embodiment is expressed as a series of combined actions; however, those skilled in the art will appreciate that the present invention is not limited by the described order of actions, since according to the present invention some steps may be performed in other orders or simultaneously.
Referring to Figure 3, a structural block diagram of an embodiment of the device for projecting a three-dimensional model in a virtual studio system of the present invention is shown; the device may specifically comprise the following units:
a pinhole projection unit 201, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinate (u, v) in the image coordinate system;
a coordinate system transformation unit 202, configured to obtain the transformation relation of the image coordinate and calculate, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v);
a distortion model construction unit 203, configured to construct the radial distortion model of the current image;
a distortion coordinate calculation unit 204, configured to substitute the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d); and
a virtual-real superposition unit 205, configured to project the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimpose it onto the real two-dimensional image.
In a preferred embodiment of the present invention, the coordinate system transformation unit may comprise the following subunits:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

and to transform between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane;
a transformation relation obtaining subunit, configured to obtain from the camera intrinsic parameter projection matrix the computing formula:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y

and a substitution calculation subunit, configured to substitute the image coordinate (u, v) into the above formula and calculate the normalized image coordinate (x, y) in the corresponding camera coordinate system.
In a preferred embodiment of the present invention, the radial distortion model can be expressed as:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where k_1 and k_2 are the radial distortion coefficients; f_x and f_y are the focal-length parameters; (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and r² = x² + y².
In a specific implementation, k_1, k_2, f_x, f_y, u_0 and v_0 can be obtained by Zhang Zhengyou's camera calibration method.
Preferably, the device may be arranged in a GPU.
As the device embodiment is substantially similar to the method embodiment, its description is relatively simple; for relevant details, refer to the description of the method embodiment.
The method and the device for projecting a three-dimensional model in a virtual studio system provided by the present invention have been described in detail above. Specific examples have been used herein to set forth the principle and embodiments of the present invention; the above description of the embodiments is only intended to help understand the method of the present invention and its core idea. At the same time, those of ordinary skill in the art can, according to the idea of the present invention, make changes to the specific embodiments and the scope of application. In summary, the content of this description should not be construed as limiting the present invention.

Claims (9)

1. A method for projecting a three-dimensional model in a virtual studio system, characterized by comprising:
projecting the three-dimensional model onto the image plane according to the pinhole imaging model to obtain the image coordinate (u, v) in the image coordinate system;
obtaining the transformation relation of the image coordinate, and calculating, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v);
constructing a radial distortion model of the current image;
substituting the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d); and
projecting the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimposing it onto the real two-dimensional image.
2. The method of claim 1, characterized in that the step of obtaining the transformation relation of the image coordinate comprises:
generating the camera intrinsic parameter projection matrix:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

transforming between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and
obtaining from the camera intrinsic parameter projection matrix the transformation relation:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y.
3. The method of claim 2, characterized in that the radial distortion model is:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where k_1 and k_2 are the radial distortion coefficients; f_x and f_y are the focal-length parameters; (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and r² = x² + y².
4. The method of claim 3, characterized in that k_1, k_2, f_x, f_y, u_0 and v_0 are obtained by Zhang Zhengyou's camera calibration method.
5. A device for projecting a three-dimensional model in a virtual studio system, characterized by comprising:
a pinhole projection unit, configured to project the three-dimensional model onto the image plane according to the pinhole imaging model and obtain the image coordinate (u, v) in the image coordinate system;
a coordinate system transformation unit, configured to obtain the transformation relation of the image coordinate and calculate, according to the transformation relation, the normalized image coordinate (x, y) in the camera coordinate system corresponding to the image coordinate (u, v);
a distortion model construction unit, configured to construct the radial distortion model of the current image;
a distortion coordinate calculation unit, configured to substitute the normalized image coordinate (x, y) into the radial distortion model to obtain the distorted image coordinate (u_d, v_d); and
a virtual-real superposition unit, configured to project the three-dimensional model to the distorted image coordinate (u_d, v_d) and superimpose it onto the real two-dimensional image.
6. The device of claim 5, characterized in that the coordinate system transformation unit comprises:
a matrix generation subunit, configured to generate the camera intrinsic parameter projection matrix:

    K = [ f_x  0    u_0 ]
        [ 0    f_y  v_0 ]
        [ 0    0    1   ]

and to transform between the normalized coordinate and the image coordinate by the following formula:

    [ u ]   [ f_x  0    u_0 ] [ x ]
    [ v ] = [ 0    f_y  v_0 ] [ y ]
    [ 1 ]   [ 0    0    1   ] [ 1 ]

where f_x and f_y are the focal-length parameters and (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane;
a transformation relation obtaining subunit, configured to obtain from the camera intrinsic parameter projection matrix the computing formula:

    x = (u - u_0) / f_x,   y = (v - v_0) / f_y

and a substitution calculation subunit, configured to substitute the image coordinate (u, v) into the above formula and calculate the normalized image coordinate (x, y) in the corresponding camera coordinate system.
7. The device of claim 6, characterized in that the radial distortion model is:

    u_d = u_0 + f_x·x·(1 + k_1·r² + k_2·r⁴)
    v_d = v_0 + f_y·y·(1 + k_1·r² + k_2·r⁴)

where k_1 and k_2 are the radial distortion coefficients; f_x and f_y are the focal-length parameters; (u_0, v_0) is the pixel coordinate of the camera's optical center on the image plane; and r² = x² + y².
8. The device of claim 6 or 7, characterized in that k_1, k_2, f_x, f_y, u_0 and v_0 are obtained by Zhang Zhengyou's camera calibration method.
9. The device of claim 8, characterized in that the device is arranged in a GPU.
CN200910243928.1A 2009-12-25 2009-12-25 Three-dimensional model projecting method in Virtual Studio System and device Expired - Fee Related CN102110298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN200910243928.1A CN102110298B (en) 2009-12-25 2009-12-25 Three-dimensional model projecting method in Virtual Studio System and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN200910243928.1A CN102110298B (en) 2009-12-25 2009-12-25 Three-dimensional model projecting method in Virtual Studio System and device

Publications (2)

Publication Number Publication Date
CN102110298A true CN102110298A (en) 2011-06-29
CN102110298B CN102110298B (en) 2015-09-23

Family

ID=44174445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200910243928.1A Expired - Fee Related CN102110298B (en) 2009-12-25 2009-12-25 Three-dimensional model projecting method in Virtual Studio System and device

Country Status (1)

Country Link
CN (1) CN102110298B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903255A (en) * 2012-12-31 2014-07-02 中国科学院深圳先进技术研究院 Ultrasound image segmentation method and system
CN105867615A (en) * 2016-03-24 2016-08-17 联想(北京)有限公司 Information processing method and electronic device
CN113706629A (en) * 2021-08-26 2021-11-26 南京航空航天大学 Part number identification system, method and device based on virtual projection
CN117934783A (en) * 2024-03-22 2024-04-26 深圳市其域创新科技有限公司 Augmented reality projection method, device, AR glasses and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
CN1741620A (en) * 2005-09-29 2006-03-01 北京理工大学 Enhancement actual fixed-point observation system for field digital three-dimensional reestablishing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040104935A1 (en) * 2001-01-26 2004-06-03 Todd Williamson Virtual reality immersion system
CN1741620A (en) * 2005-09-29 2006-03-01 北京理工大学 Enhancement actual fixed-point observation system for field digital three-dimensional reestablishing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
NEUMANN U. ET AL.: "Augmented virtual environments (AVE): dynamic fusion of imagery and 3D models", Virtual Reality, 2003. Proceedings. IEEE *
Luo Yulin: "Research on camera calibration based on data fitting and its applications", China Master's Theses Full-text Database, Information Science and Technology *
Yuan Ye et al.: "An active vision self-calibration algorithm considering second-order radial distortion", Journal of Image and Graphics *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103903255A (en) * 2012-12-31 2014-07-02 中国科学院深圳先进技术研究院 Ultrasound image segmentation method and system
CN103903255B (en) * 2012-12-31 2016-08-31 中国科学院深圳先进技术研究院 A kind of ultrasonic image division method and system
CN105867615A (en) * 2016-03-24 2016-08-17 联想(北京)有限公司 Information processing method and electronic device
CN113706629A (en) * 2021-08-26 2021-11-26 南京航空航天大学 Part number identification system, method and device based on virtual projection
CN113706629B (en) * 2021-08-26 2024-05-07 南京航空航天大学 Virtual projection-based part number identification method and device
CN117934783A (en) * 2024-03-22 2024-04-26 深圳市其域创新科技有限公司 Augmented reality projection method, device, AR glasses and storage medium
CN117934783B (en) * 2024-03-22 2024-05-28 深圳市其域创新科技有限公司 Augmented reality projection method, device, AR glasses and storage medium

Also Published As

Publication number Publication date
CN102110298B (en) 2015-09-23

Similar Documents

Publication Publication Date Title
CN102111561A (en) Three-dimensional model projection method for simulating real scenes and device adopting same
CN110728671B (en) Dense reconstruction method of texture-free scene based on vision
CN108475327A (en) three-dimensional acquisition and rendering
CN103081476B (en) The method and apparatus utilizing depth map information conversion 3-D view
US9230330B2 (en) Three dimensional sensing method and three dimensional sensing apparatus
CN110648274B (en) Method and device for generating fisheye image
CN113643414B (en) Three-dimensional image generation method and device, electronic equipment and storage medium
CN113362457B (en) Stereoscopic vision measurement method and system based on speckle structured light
CN104155765A (en) Method and equipment for correcting three-dimensional image in tiled integral imaging display
CN102111562A (en) Projection conversion method for three-dimensional model and device adopting same
CN101750029A (en) Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN111161398B (en) Image generation method, device, equipment and storage medium
US20100158482A1 (en) Method for processing a video data set
CN102110300A (en) Three-dimensional model projecting method and device for imitating lens distortion
CN116168143A (en) Multi-view three-dimensional reconstruction method
CN102110298A (en) Method and device for projecting three-dimensional model in virtual studio system
US20200380770A1 (en) All-around spherical light field rendering method
CN112529006B (en) Panoramic picture detection method, device, terminal and storage medium
CN102110299A (en) Method and device for projecting application distortion in three-dimensional model
CN109801351A (en) Dynamic image generation method and processing equipment
CN102314682B (en) Method, device and system for calibrating camera
CN114332356A (en) Virtual and real picture combining method and device
CN116309854A (en) Method, device, equipment, system and storage medium for calibrating augmented reality equipment
CN101566784A (en) Method for establishing depth of field data for three-dimensional image and system thereof
Qin et al. Bullet-Time Video Synthesis Based on Virtual Dynamic Target Axis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150923

Termination date: 20161225