CN102855660B - Method and device for determining the depth of field of a virtual scene - Google Patents

Method and device for determining the depth of field of a virtual scene

Info

Publication number: CN102855660B
Application number: CN201210297558.1A
Authority: CN (China)
Prior art keywords: virtual scene, field, virtual, depth, distance
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN102855660A
Inventors: 刘超, 卢伟超, 张颖, 马静
Current Assignee: TCL Corp (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: TCL Corp
Application filed by TCL Corp (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Priority to CN201210297558.1A
Publication of CN102855660A; application granted; publication of CN102855660B

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The present invention is applicable to the field of multimedia applications and provides a method and a device for determining the depth of field of a virtual scene. The method comprises: obtaining parameter information of the virtual scene, the lateral resolution of a display device, and the viewing distance; and calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance. By obtaining the information needed to calculate the depth of field while the virtual scene is being built, or after it has been built, and calculating the corresponding depth of field in the virtual scene, designers and developers can obtain the depth of field of objects during the construction of the virtual scene, and the depth-of-field effect can be revised before the objects are shown on the display device. This shortens the debugging cycle of the 3D display effect and improves the efficiency of virtual scene development.

Description

Method and device for determining the depth of field of a virtual scene
Technical field
The invention belongs to the field of multimedia applications, and in particular relates to a method and a device for determining the depth of field of a virtual scene.
Background art
With the progress of science and technology, 3D display has become a mainstream image display technology, and 3D films and 3D pictures are prevalent on the market. At present, 3D display generally adopts the traditional method of determining the depth of field of a virtual scene: after the virtual scene has been built, the depth of field in the virtual scene is determined by measuring the parallax of display elements on the display device. Because the concrete depth-of-field effect can only be seen on the final display screen, this method makes it difficult for designers and developers to modify the depth-of-field effect.
Summary of the invention
The object of the embodiments of the present invention is to provide a method and a device for determining the depth of field of a virtual scene, intended to solve the problem that the existing depth-of-field acquisition method can determine the depth of field in a virtual scene only by measuring the parallax of display elements on the display device, which makes it difficult for designers and developers to modify the depth-of-field effect.
The embodiments of the present invention are achieved as follows. A method for determining the depth of field of a virtual scene comprises:
obtaining parameter information of the virtual scene, the lateral resolution of a display device, and the viewing distance;
calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
Another object of the embodiments of the present invention is to provide a device for determining the depth of field of a virtual scene, the device comprising:
a virtual scene parameter acquiring unit, for obtaining the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance;
a depth of field computing unit, for calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
In the embodiments of the present invention, the information needed to calculate the depth of field is obtained while the virtual scene is being built or after it has been built, and the corresponding depth of field in the virtual scene is calculated. Designers and developers can therefore obtain the depth of field of objects during the construction of the virtual scene, and the depth-of-field effect can be revised before the objects are shown on the display device. This shortens the debugging cycle of the 3D display effect and improves the efficiency of virtual scene development.
Brief description of the drawings
Fig. 1 is a flowchart of a preferred embodiment of the method for determining the depth of field of a virtual scene according to the present invention;
Fig. 2 is a schematic diagram of choosing the reference plane in the virtual scene in a preferred embodiment of the method for determining the depth of field of a virtual scene according to the present invention;
Fig. 3 is a schematic diagram of the relation between the cone of sight and the reference plane in the virtual scene in a preferred embodiment of the method for determining the depth of field of a virtual scene according to the present invention;
Fig. 4 is a structural schematic diagram of a preferred embodiment of the device for determining the depth of field of a virtual scene according to the present invention;
Fig. 5 is a structural schematic diagram of another preferred embodiment of the device for determining the depth of field of a virtual scene according to the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below in conjunction with the drawings and embodiments. It should be appreciated that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
The embodiments of the present invention provide a method for determining the depth of field of a virtual scene. The information needed to calculate the depth of field is obtained while the virtual scene is being built or after it has been built, and the corresponding depth of field in the virtual scene is calculated, so that designers and developers can obtain the depth of field of objects in the virtual scene during its construction. This shortens the debugging cycle of the stereoscopic effect and improves scene development efficiency.
The technical solutions of the present invention are illustrated by the following specific embodiments.
Embodiment one:
Fig. 1 shows a flowchart of the method for determining the depth of field of a virtual scene provided by the present invention; for convenience of explanation, only the parts relevant to the embodiment of the present invention are shown.
In step S101, the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance are obtained.
In the embodiments of the present invention, the information needed to calculate the depth of field of an object in the virtual scene is first obtained while the virtual scene is being built or after it has been built. This information includes but is not limited to: the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance. The parameter information of the virtual scene includes but is not limited to: the distance between the reference plane and the virtual cameras in the virtual scene, the distance between the two virtual cameras in the virtual scene, the distance from an object in the virtual scene to the reference plane, and the cone of sight of the two cameras in the virtual scene.
The acquisition of each piece of information needed to calculate the depth of field of the virtual scene is described one by one below; a consolidated code sketch follows these steps.
1) Choose a reference plane and obtain its position S_pos. The reference plane is equivalent to the projection plane of the virtual cameras, so the chosen reference plane must be parallel to the line connecting the two virtual cameras and perpendicular to the lines of sight emitted by the two virtual cameras. For negative parallax in human stereoscopic vision, as shown in Fig. 2(a), the reference plane perpendicularly intersects the extensions of the lines of sight from the two virtual cameras towards the object in the virtual scene; for positive parallax, as shown in Fig. 2(b), the reference plane perpendicularly intersects the lines of sight from the two virtual cameras towards the object; for zero parallax, as shown in Fig. 2(c), the reference plane perpendicularly intersects the lines of sight from the two virtual cameras towards the object, and the intersection point coincides with the position of the object. In Fig. 2, S is the reference plane; El and Er are the positions of the left and right virtual cameras, respectively; O is the midpoint of the line connecting the two virtual cameras; G is the convergence point of the two cameras' lines of sight, i.e. the position of the object in the virtual scene; Lv is the distance from the virtual cameras to the reference plane; Om is the projection of the midpoint O onto the reference plane; and Sl and Sr are the projections of the object onto the reference plane as captured by the left and right virtual cameras, respectively.
2) Obtain the distance Lv between the reference plane and the virtual cameras by extracting the positions of the virtual cameras. The position of the left virtual camera (Cam_L_pos) and the position of the right virtual camera (Cam_R_pos) in the virtual scene can be obtained through the member function getPosition() of the camera class. The distance between the reference plane and the virtual cameras is then obtained by the following formula:
Lv = Cam_L_pos (or Cam_R_pos) - S_pos, where Lv is the distance between the reference plane and the virtual cameras in the virtual scene, Cam_L_pos is the position of the left virtual camera, Cam_R_pos is the position of the right virtual camera, and S_pos is the position of the reference plane.
3) Obtain the distance between the two virtual cameras in the virtual scene by the following formula:
Ev = abs(Cam_L_pos - Cam_R_pos), where Ev is the distance between the two virtual cameras, Cam_L_pos is the position of the left virtual camera, Cam_R_pos is the position of the right virtual camera, and abs() is the absolute-value function.
4) Calculate the distance from an object in the virtual scene to the reference plane by the following formula:
F = Mod_pos - S_pos, where F is the distance from the object in the virtual scene to the reference plane, Mod_pos is the position of the object in the virtual scene, and S_pos is the position of the reference plane.
Usually, because different objects in the virtual scene have different positions, their distances to the reference plane also differ. The position of each object is therefore obtained through the object position interface Getposition(), and the distance from the object to the reference plane is obtained as the difference between the obtained object position Mod_pos and the reference plane position S_pos.
5) When there is no specific requirement, obtain the cone of sight a of the two cameras in the virtual scene through the corresponding interface GetFOV() of the virtual camera.
6) Obtain the lateral resolution of the display device through the corresponding interface of the system. Here, the lateral resolution of the display device is the resolution in the direction parallel to the line connecting the viewer's eyes.
7) After the user chooses a viewing position, obtain the distance from the user's viewing position to the display device, i.e. the viewing distance.
In the embodiments of the present invention, the order in which the information needed to calculate the depth of field of the virtual scene is obtained is not limited.
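The following is a minimal C++ sketch of steps 1) to 7). The Camera and Object stand-in types, the SceneParams struct, the gatherParams function, and the treatment of positions as scalar coordinates along the viewing axis are illustrative assumptions, not names or structures from the patent; only getPosition() and GetFOV() follow the description.

```cpp
#include <cmath>

// Minimal stand-in types: a real engine would supply camera and object
// classes exposing getPosition() and GetFOV() as the description assumes.
// Positions are treated as scalar coordinates along the viewing axis,
// matching the patent's scalar difference formulas.
struct Camera {
    float pos;  // position along the viewing axis
    float fov;  // cone of sight a, in radians
    float getPosition() const { return pos; }
    float GetFOV() const { return fov; }
};

struct Object {
    float pos;
    float getPosition() const { return pos; }
};

struct SceneParams {
    float S_pos;  // step 1): position of the chosen reference plane
    float Lv;     // step 2): distance between reference plane and cameras
    float Ev;     // step 3): distance between the two virtual cameras
    float F;      // step 4): distance from the object to the reference plane
    float a;      // step 5): cone of sight of the virtual cameras
    int   X;      // step 6): lateral resolution of the display device (pixels)
    float L;      // step 7): viewing distance
};

SceneParams gatherParams(const Camera& camL, const Camera& camR,
                         const Object& model, float refPlanePos,
                         int lateralResolution, float viewingDistance) {
    SceneParams p;
    p.S_pos = refPlanePos;                                      // step 1)
    p.Lv = camL.getPosition() - p.S_pos;                        // Lv = Cam_L_pos - S_pos
    p.Ev = std::fabs(camL.getPosition() - camR.getPosition());  // Ev = abs(Cam_L_pos - Cam_R_pos)
    p.F  = model.getPosition() - p.S_pos;                       // F = Mod_pos - S_pos
    p.a  = camL.GetFOV();                                       // cone of sight a
    p.X  = lateralResolution;                                   // from the system interface
    p.L  = viewingDistance;                                     // from the chosen viewing position
    return p;
}
```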
In step S102, the depth of field of the object in the virtual scene is calculated according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
In the embodiments of the present invention, the depth of field of an object in the virtual scene is calculated by the preset formula OmG = L*x/(Y+x), where OmG represents the depth of field of the object in the virtual scene, L represents the viewing distance, x represents the parallax shown on the display device, and Y represents the distance between the eyes (in metres). A value of Y = 0.065 is optimal, but other values can be set. Since this formula is prior art, its derivation is not detailed here. The parallax shown on the display device is obtained by the formula x = (X*F*Ev)/(2*Lv*tan(a/2)*(Lv-F)), where X represents the lateral resolution of the display device, F represents the distance from the object in the virtual scene to the reference plane, Ev represents the distance between the two virtual cameras in the virtual scene, Lv represents the distance between the reference plane and the virtual cameras in the virtual scene, and a represents the cone of sight of the two virtual cameras in the virtual scene. The formula x = (X*F*Ev)/(2*Lv*tan(a/2)*(Lv-F)) is obtained by the following method:
In the virtual scene, as shown in Fig. 3, the horizontal extent of the virtual scene captured by the virtual camera, obtained from the cone of sight a of the virtual camera and the distance Lv between the reference plane S and the virtual camera A, is 2*Lv*tan(a/2). Suppose the lateral resolution of the stereoscopic display device used is X; then a horizontal distance Xs on the reference plane in the virtual scene corresponds to a horizontal distance Xp on the screen through the relation Xp/Xs = X/(2*Lv*tan(a/2)). The positional difference Lxly on the reference plane between the images of the object captured by the two virtual cameras is obtained from the relation Lxly = (F*Ev)/(Lv-F), where Ev represents the distance between the two virtual cameras and F represents the distance from the object in the virtual scene to the reference plane. According to the relation between the reference plane and the screen, the parallax shown on the screen is x = (Xp/Xs)*Lxly. Therefore, x = (Xp/Xs)*Lxly = (X*F*Ev)/(2*Lv*tan(a/2)*(Lv-F)).
The derivation of the relation Lxly = (F*Ev)/(Lv-F) is as follows:
Referring to Fig. 2(a), the negative-parallax case is described here; the derivations for positive parallax and zero parallax are similar and are not elaborated. Since triangle SrSlG is similar to triangle ElErG,
SrSl/ElEr = OmG/OG,
where ElEr is the distance between the two virtual cameras, Lv is the distance from the virtual cameras to the reference plane, SrSl is the distance between the image points of the object G on the reference plane when it is captured by the left and right virtual cameras, and OG = Lv - OmG is the distance from the midpoint O of the camera line to the position of the object G. Rearranging gives
OmG = OG*SrSl/ElEr,
and hence SrSl = OmG*ElEr/(Lv-OmG).
In the virtual scene, the positional difference Lxly of the object on the reference plane as captured by the two virtual cameras corresponds to SrSl in Fig. 2(a); the distance F from the object to the reference plane corresponds to OmG in Fig. 2(a); and the distance Ev between the two virtual cameras corresponds to ElEr in Fig. 2(a). Therefore Lxly = (F*Ev)/(Lv-F).
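The derivation can be restated compactly in equation form (a summary of the reasoning above, using the patent's symbols):

```latex
\frac{S_r S_l}{E_l E_r} = \frac{O_m G}{L_v - O_m G}
\;\Longrightarrow\;
L_{xly} = \frac{F \, E_v}{L_v - F},
\qquad
x = \frac{X}{2 L_v \tan(a/2)} \, L_{xly}
  = \frac{X \, F \, E_v}{2 L_v \tan(a/2)\,(L_v - F)} .
```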
Thus, as the above formulas show, the depth of field of an object in the virtual scene can be obtained simply by obtaining the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
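As an illustration, the two formulas of step S102 can be combined into a small helper. This is a minimal sketch in the patent's notation: computeParallax and computeDepthOfField are assumed names, SceneParams is the struct from the earlier sketch, and the default Y = 0.065 follows the description.

```cpp
#include <cmath>

// x = (X*F*Ev) / (2*Lv*tan(a/2)*(Lv-F)): parallax shown on the display device
float computeParallax(const SceneParams& p) {
    return (p.X * p.F * p.Ev) /
           (2.0f * p.Lv * std::tan(p.a / 2.0f) * (p.Lv - p.F));
}

// OmG = L*x / (Y + x): depth of field of the object in the virtual scene;
// Y defaults to the 0.065 m eye separation suggested in the description.
float computeDepthOfField(const SceneParams& p, float Y = 0.065f) {
    float x = computeParallax(p);
    return p.L * x / (Y + x);
}
```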
Preferably, in order to adjust the virtual scene in real time, a virtual scene attribute adjustment interface is created. After the parameter information of the virtual scene is obtained in step S101, the parameter information can be modified through this virtual scene attribute adjustment interface. The parameter information that the interface can modify includes but is not limited to: the position of the reference plane in the virtual scene; the distance between the two virtual cameras, modified through the positions of the two virtual cameras; the distance from an object in the virtual scene to the reference plane, modified by changing the position of the reference plane; the cone of sight of the virtual cameras; and so on. A sketch of such an interface follows.
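A minimal sketch of what such an attribute adjustment interface could look like; the class name and setter names are illustrative assumptions rather than names from the patent. Note that moving the reference plane shifts both Lv and F by the same amount, since the cameras and the object themselves do not move.

```cpp
// Illustrative assumption: a thin adjustment interface over the SceneParams
// struct from the earlier sketch; not an API defined by the patent.
class SceneAttributeAdjuster {
public:
    explicit SceneAttributeAdjuster(SceneParams& params) : params_(params) {}

    // Moving the reference plane shifts both Lv (camera-to-plane distance)
    // and F (object-to-plane distance) by the same amount.
    void setReferencePlanePos(float newS_pos) {
        const float delta = params_.S_pos - newS_pos;
        params_.Lv += delta;
        params_.F  += delta;
        params_.S_pos = newS_pos;
    }

    // Modifying the camera positions changes their separation Ev.
    void setCameraSeparation(float newEv) { params_.Ev = newEv; }

    // Modifying the cone of sight of the virtual cameras.
    void setConeOfSight(float newA) { params_.a = newA; }

private:
    SceneParams& params_;
};
```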
As another preference, in order to keep the depth of field of objects in the virtual scene updated in real time, after the virtual scene has been built, the parameter information of the virtual scene is obtained again before each frame of the virtual scene is drawn to the display device, and the depth of field of the objects in that frame is calculated according to the lateral resolution of the display device, the viewing distance, and the re-obtained parameter information of the virtual scene. A per-frame sketch follows.
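A minimal per-frame sketch, reusing gatherParams and computeDepthOfField from the sketches above; renderFrame and its parameters are illustrative assumptions standing in for the engine's actual render loop.

```cpp
// Re-obtain the parameters and recompute the depth of field before each
// frame is drawn, so attribute adjustments take effect immediately.
void renderFrame(const Camera& camL, const Camera& camR, const Object& model,
                 float refPlanePos, int lateralResolution, float viewingDistance) {
    SceneParams p = gatherParams(camL, camR, model, refPlanePos,
                                 lateralResolution, viewingDistance);
    float depthOfField = computeDepthOfField(p);
    // ... report depthOfField to the designer/developer and draw the frame ...
    (void)depthOfField;
}
```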
In the embodiments of the present invention, the information needed to calculate the depth of field is obtained while the virtual scene is being built or after it has been built, and the corresponding depth of field in the virtual scene is calculated. Designers and developers can therefore obtain the depth of field of objects during the construction of the virtual scene, and the depth-of-field effect can be revised before the objects are shown on the display device. This shortens the debugging cycle of the 3D display effect and improves the efficiency of virtual scene development.
Embodiment two:
Fig. 4 is a structural schematic diagram of the device for determining the depth of field of a virtual scene provided by the present invention; for convenience of explanation, only the parts relevant to the embodiment of the present invention are shown. The device comprises:
a virtual scene parameter acquiring unit 41, for obtaining the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
In the embodiments of the present invention, the parameter information of the virtual scene includes but is not limited to: the distance between the reference plane and the virtual cameras in the virtual scene, the distance between the two virtual cameras in the virtual scene, the distance from an object in the virtual scene to the reference plane, and the cone of sight of the two cameras in the virtual scene.
a depth of field computing unit 42, for calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance.
In the embodiments of the present invention, the depth of field computing unit 42 further comprises a subunit:
a depth of field formula calculating unit 421, for calculating the depth of field of an object in the virtual scene by the formula OmG = L*(X*F*Ev/(2*Lv*tan(a/2)*(Lv-F)))/(Y + X*F*Ev/(2*Lv*tan(a/2)*(Lv-F))), where OmG represents the depth of field of the object in the virtual scene, L represents the viewing distance, the parallax shown on the display device is x = (X*F*Ev)/(2*Lv*tan(a/2)*(Lv-F)), X represents the lateral resolution of the display device, F represents the distance from the object in the virtual scene to the reference plane, Ev represents the distance between the two virtual cameras in the virtual scene, Lv represents the distance between the reference plane and the virtual cameras in the virtual scene, a represents the cone of sight of the two virtual cameras in the virtual scene, and Y represents the distance between the eyes. A value of Y = 0.065 is optimal, but other values can be set.
Preferably, as shown in Fig. 5, the device for determining the depth of field of a virtual scene further comprises:
a scene attribute adjusting unit 51, for modifying the parameter information of the virtual scene through the created virtual scene attribute adjustment interface.
a depth of field real-time updating unit 52, for obtaining the parameter information of the virtual scene again before each frame of the virtual scene is drawn to the display device, and calculating the depth of field of the objects in the virtual scene according to the lateral resolution of the display device, the viewing distance, and the re-obtained parameter information of the virtual scene.
In the embodiments of the present invention, the information needed to calculate the depth of field is obtained while the virtual scene is being built or after it has been built, and the corresponding depth of field in the virtual scene is calculated. Designers and developers can therefore obtain the depth of field of objects during the construction of the virtual scene, and the depth-of-field effect can be revised before the objects are shown on the display device. This shortens the debugging cycle of the 3D display effect and improves the efficiency of virtual scene development.
Those of ordinary skill in the art will appreciate that all or part of the steps of the methods in the above embodiments can be implemented by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (8)

1. A method for determining the depth of field of a virtual scene, characterized in that the method comprises the steps of:
obtaining parameter information of the virtual scene, the lateral resolution of a display device, and the viewing distance;
calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance;
wherein said calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance comprises:
calculating the depth of field of the object in the virtual scene by the following formula:
OmG = L*(X*F*Ev/(2*Lv*tan(a/2)*(Lv-F)))/(Y + X*F*Ev/(2*Lv*tan(a/2)*(Lv-F)));
where OmG represents the depth of field of the object in the virtual scene, L represents the viewing distance, X represents the lateral resolution of the display device, F represents the distance from the object in the virtual scene to the reference plane, Ev represents the distance between the two virtual cameras in the virtual scene, Lv represents the distance between the reference plane and the virtual cameras in the virtual scene, a represents the cone of sight of the two virtual cameras in the virtual scene, and Y represents the distance between the eyes.
2. The method as claimed in claim 1, characterized in that the parameter information of the virtual scene comprises:
the distance between the reference plane and the virtual cameras in the virtual scene, the distance between the two virtual cameras in the virtual scene, the distance from the object in the virtual scene to the reference plane, and the cone of sight of the two cameras in the virtual scene.
3. The method as claimed in claim 1 or 2, characterized in that, after said obtaining the parameter information of the virtual scene, the method further comprises:
modifying the parameter information of the virtual scene through a created virtual scene attribute adjustment interface.
4. The method as claimed in claim 1 or 2, characterized in that the parameter information of the virtual scene is obtained again before each frame of the virtual scene is drawn to the display device;
and the depth of field of the object in each frame of the virtual scene is calculated according to the lateral resolution of the display device, the viewing distance, and the re-obtained parameter information of the virtual scene.
5. A device for determining the depth of field of a virtual scene, characterized in that the device comprises:
a virtual scene parameter acquiring unit, for obtaining the parameter information of the virtual scene, the lateral resolution of a display device, and the viewing distance;
a depth of field computing unit, for calculating the depth of field of an object in the virtual scene according to the parameter information of the virtual scene, the lateral resolution of the display device, and the viewing distance;
wherein the depth of field computing unit comprises:
a depth of field formula calculating unit, for calculating the depth of field of the object in the virtual scene by the formula OmG = L*(X*F*Ev/(2*Lv*tan(a/2)*(Lv-F)))/(Y + X*F*Ev/(2*Lv*tan(a/2)*(Lv-F))), where OmG represents the depth of field of the object in the virtual scene, L represents the viewing distance, X represents the lateral resolution of the display device, F represents the distance from the object in the virtual scene to the reference plane, Ev represents the distance between the two virtual cameras in the virtual scene, Lv represents the distance between the reference plane and the virtual cameras in the virtual scene, a represents the cone of sight of the two virtual cameras in the virtual scene, and Y represents the distance between the eyes.
6. The device as claimed in claim 5, characterized in that the parameter information of the virtual scene comprises:
the distance between the reference plane and the virtual cameras in the virtual scene, the distance between the two virtual cameras in the virtual scene, the distance from the object in the virtual scene to the reference plane, and the cone of sight of the two cameras in the virtual scene.
7. The device as claimed in claim 5, characterized in that the device further comprises:
a scene attribute adjusting unit, for modifying the parameter information of the virtual scene through the created virtual scene attribute adjustment interface.
8. The device as claimed in claim 5, characterized in that the device further comprises:
a depth of field real-time updating unit, for obtaining the parameter information of the virtual scene again before each frame of the virtual scene is drawn to the display device, and calculating the depth of field of the object in each frame of the virtual scene according to the lateral resolution of the display device, the viewing distance, and the re-obtained parameter information of the virtual scene.
CN201210297558.1A 2012-08-20 2012-08-20 Method and device for determining the depth of field of a virtual scene Expired - Fee Related CN102855660B (en)

Priority Applications (1)

Application Number | Publication | Priority Date | Filing Date | Title
CN201210297558.1A | CN102855660B (en) | 2012-08-20 | 2012-08-20 | Method and device for determining the depth of field of a virtual scene


Publications (2)

Publication Number | Publication Date
CN102855660A (en) | 2013-01-02
CN102855660B | 2015-11-11

Family

Family ID: 47402219

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201210297558.1A (CN102855660B, Expired - Fee Related) | Method and device for determining the depth of field of a virtual scene | 2012-08-20 | 2012-08-20

Country Status (1)

Country | Link
CN (1) | CN102855660B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
GB2533553B (en) * | 2014-12-15 | 2020-09-09 | Sony Interactive Entertainment Inc | Image processing method and apparatus
CN106254847B * | 2016-06-12 | 2017-08-25 | 深圳超多维光电子有限公司 | Method, device and system for determining the display limit of a stereoscopic display screen

Citations (3)

Publication Number | Priority Date | Publication Date | Assignee | Title
CN101453662A * | 2007-12-03 | 2009-06-10 | 华为技术有限公司 | Stereo video communication terminal, system and method
CN101557536A * | 2009-05-14 | 2009-10-14 | 福建华映显示科技有限公司 | Method for viewing depth-of-field fusion display
CN102256149A * | 2011-07-13 | 2011-11-23 | 深圳创维-Rgb电子有限公司 | Three-dimensional (3D) display effect regulation method, device and television

Family Cites Families (1)

Publication Number | Priority Date | Publication Date | Assignee | Title
US20100302234A1 * | 2009-05-27 | 2010-12-02 | Chunghwa Picture Tubes, Ltd. | Method of establishing DOF data of 3D image and system thereof


Non-Patent Citations (1)

Title
李育林 (Li Yulin), "电视摄像中景深的计算与控制" ["Calculation and Control of Depth of Field in Television Photography"], 《现代电影技术》 (Modern Film Technology), No. 9, 30 September 2011, pp. 23-26 *

Also Published As

Publication Number | Publication Date
CN102855660A (en) | 2013-01-02


Legal Events

Code | Title / Description
C06, PB01 | Publication
C10, SE01 | Entry into substantive examination (entry into force of request for substantive examination)
C14, GR01 | Grant of patent or utility model (patent grant)
CF01 | Termination of patent right due to non-payment of annual fee (granted publication date: 2015-11-11)