WO2011054132A1 - Method and device for adaptive 3d rendering - Google Patents

Method and device for adaptive 3d rendering

Info

Publication number
WO2011054132A1
Authority
WO
WIPO (PCT)
Prior art keywords
viewer
display device
rendering
relative
position information
Prior art date
Application number
PCT/CN2009/001235
Other languages
French (fr)
Inventor
Sinan Shangguan
Lin Du
Xiaojun Ma
Jianping Song
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2009-11-06
Publication date
2011-05-12
Application filed by Thomson Licensing filed Critical Thomson Licensing
Priority to PCT/CN2009/001235 priority Critical patent/WO2011054132A1/en
Publication of WO2011054132A1 publication Critical patent/WO2011054132A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering

Abstract

A method is provided for adaptive 3D rendering in a system where at least one 3D object is rendered on a display device. The method comprises the step of: in response to a viewer's movement, changing the rendering of the at least one 3D object based on position information of the viewer relative to the display device.

Description

METHOD AND DEVICE FOR ADAPTIVE 3D RENDERING
TECHNICAL FIELD
The present invention relates to 3D presentation and, more particularly, to a method and a device for adaptive 3D rendering based on the viewer's position.
BACKGROUND
3D technology has been widely used in games, industrial design, interactive applications, etc. Great progress has been achieved in this area in terms of rendering speed, refinement of 3D models and aesthetic quality of pictures. In addition, much effort has been put into the development of rendering algorithms, since they have a direct impact on the user experience.
The camera model is the most popular approach used by rendering algorithms. The camera model is defined by formulas that use several parameters, including the view angle, near plane, far plane and projection window. In most existing rendering algorithms based on the camera model (such as 3D projection), these parameters are pre-configured. These rendering algorithms work under the assumption that a user/viewer does not move in front of a 3D display device. However, in a real environment, the user/viewer may move around in front of the 3D display device while watching 3D contents. Using pre-configured parameters will probably make the viewer feel uncomfortable, as the 3D contents are not adaptively rendered in response to his or her movement in front of the 3D display device, and consequently the user experience is degraded.
Hence, there is a need to adaptively render 3D contents in response to the viewer's movement in a real environment.
SUMMARY
According to an aspect of the present invention, there is provided a method for adaptive 3D rendering in a system where at least one 3D object is rendered on a display device. The method comprises the step of: in response to a viewer's movement relative to the display device, changing the rendering of the at least one 3D object based on position information of the viewer relative to the display device.
According to another aspect of the present invention, there is provided a device for adaptive 3D rendering in a system where at least one 3D object is rendered on a display device. The device comprises a 3D rendering module configured to change the rendering of the at least one 3D object based on position information of the viewer relative to the display device in response to the viewer's movement relative to the display device.
According to these aspects of the present invention, the rendering of a 3D object is dynamically changed as the viewer moves in front of the display device, thereby improving the user experience by making the viewer feel that he/she is watching an object in a real environment.
It is to be understood that more aspects and advantages of the invention will be found in the following detailed description of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings illustrate embodiments of the invention together with the description which serves to explain the principle of the invention. Therefore, the invention is not limited to the embodiments. In the drawings:
Fig. 1 is a diagram illustrating a typical virtual camera model in a 3D rendering system according to the prior art;
Fig. 2 is a diagram illustrating a simplified virtual camera model according to an embodiment of the present invention;
Fig. 3 is a diagram illustrating the model of Fig. 2 in a 3D coordinate system according to the embodiment of the present invention;
Fig. 4 is a diagram illustrating an exemplary system for adaptive 3D rendering according to the embodiment of the present invention;
Fig. 5 is a diagram illustrating an example of adaptive 3D rendering according to the embodiment of the present invention;
Fig. 6 is a diagram illustrating another example of adaptive 3D rendering according to the embodiment of the present invention;
Fig. 7 is a flow chart illustrating a method for adaptive 3D rendering according to the embodiment of the present invention;
Fig. 8 is a diagram illustrating a third example of adaptive 3D rendering according to the present embodiment of the invention.
DETAILED DESCRIPTION
An embodiment of the present invention will now be described in detail in conjunction with the drawings. In the following description, some detailed descriptions of known functions and configurations may be omitted for clarity and conciseness.
The embodiments below are used to explain the principle of the invention and should not be used to limit its scope. One embodiment of the invention is set in a 3D rendering system, where the 3D objects rendered on the display device are created by use of 3D modeling technology.
Before the embodiments are presented, some basic technologies are briefly introduced.
3D modelers are used in a wide variety of industries, such as the medical industry, movie industry, video game industry, architecture industry, etc. Many 3D modelers are general-purpose and can be used to produce models of various real-world entities, from plants to automobiles to people. Some are specially designed to model certain objects, such as chemical compounds or internal organs. In addition, 3D modelers allow users to create and alter models via their 3D mesh. Users can add, subtract, stretch and otherwise change the mesh as they desire. Models can be viewed from a variety of angles, usually simultaneously. Models can be rotated, and the view can be zoomed in and out. Moreover, 3D modelers can export their models to files, which can then be imported into other applications as long as the metadata is compatible. Many modelers allow importers and exporters to be plugged in, so they can read and write data in the native formats of other applications. Most 3D modelers contain a number of related features, such as ray tracers and other rendering alternatives, and texture mapping facilities. Some also contain features that support or allow animation of models. Some may be able to generate full-motion video of a series of rendered scenes (i.e. animation). Currently, there are many commercial 3D modelers and application components, to name a few: 3ds Max, Maya, AC3D, etc.
With the aid of 3D modelers, a 3D model can be created through a process called 3D modeling. 3D modeling is the process of developing a mathematical representation of any three-dimensional object (either inanimate or living) in 3D computer graphics. The created 3D model represents a 3D object using a collection of points in 3D space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data (points and other information), 3D models can be created manually or automatically. Because the 3D model contains all the information necessary to define/describe itself, it can provide different views corresponding to different viewing angles. Moreover, the 3D model can be zoomed in and out.
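For illustration only, the following minimal sketch shows such a representation: a model stored as a collection of points in 3D space connected by triangles. The class and field names are assumptions made for this sketch, not part of the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Mesh:
    """A minimal 3D model: points in 3D space connected by triangles."""
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    triangles: List[Tuple[int, int, int]] = field(default_factory=list)  # index triples into vertices

# A unit square in the z=0 plane, built from two triangles.
square = Mesh(
    vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    triangles=[(0, 1, 2), (0, 2, 3)],
)
```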
Fig. 1 is a diagram illustrating a typical virtual camera model in a 3D rendering system according to the prior art. The visible space is a frustum defined by the view angle (including the horizontal field-of-view angle and the vertical field-of-view angle), the near plane and the far plane. The projection window is a 2D plane generated by projecting the 3D objects that are in the frustum, and it is used to display the 3D objects. The shapes of the near plane and the far plane are determined by the resolution of the display device.
In this embodiment, the near plane of the camera model and the projection window of the camera model are superposed, i.e. they are on the same plane, and the virtual camera is placed at the center of projection and looks at (targets) the center of the near plane. Therefore, Fig. 1 can be simplified to Fig. 2, which is a diagram illustrating a simplified virtual camera model and will be used below for the description of the embodiment of the present invention.
Fig. 3 is a diagram illustrating the model of Fig. 2 in a 3D coordinate system according to the present embodiment of the invention. From the 3D rendering theory of the virtual camera model, it is known that several parameters determine the final result of 3D rendering, namely f, n, fov and the aspect ratio:
(1) f and n represent the distances from the virtual camera (i.e. the center of projection in this embodiment) to the far plane and the near plane, respectively.
(2) fov represents the virtual camera's view angle; more specifically, it is the horizontal field-of-view angle in this figure.
(3) The aspect ratio represents the ratio of the rendering window's width to its height. If the 3D rendering window works in full-screen mode, the aspect ratio should be set to the ratio of the display device's width to its height to avoid distortion.
Besides, d in Fig. 3 represents the width of the near plane. The relationship between d, n and fov can be described as follows:

tan(fov / 2) = d / (2 * n)    Formula (1)

After conversion, we get

fov = 2 * arctan(d / (2 * n))    Formula (2)
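The patent does not spell out how these parameters enter the renderer; as a sketch only, the following builds a standard OpenGL-style perspective projection matrix from fov, the aspect ratio, n and f, assuming fov is the horizontal field-of-view angle as in Fig. 3. The matrix convention is an assumption, not text from the patent.

```python
import math

def perspective_matrix(fov, aspect, n, f):
    """OpenGL-style perspective projection matrix (sketch, not from the patent).

    fov    -- horizontal field-of-view angle in radians
    aspect -- rendering window width / height
    n, f   -- distances from the virtual camera to the near and far planes
    """
    # By Formula (1), the half-width of the near plane is n * tan(fov/2),
    # so the horizontal scale factor is cot(fov/2).
    w = 1.0 / math.tan(fov / 2.0)
    h = w * aspect  # vertical scale, since aspect = width / height
    return [
        [w,   0.0, 0.0,                 0.0],
        [0.0, h,   0.0,                 0.0],
        [0.0, 0.0, (f + n) / (n - f),   2.0 * f * n / (n - f)],
        [0.0, 0.0, -1.0,                0.0],
    ]
```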
In order to describe the principal concept of our invention, the system is simplified in this embodiment as follows: the 3D rendering window always works in full-screen mode; the display device is perpendicular to the y=0 plane; the user's movement is limited to the y=0 plane; the near plane and the plane of the display device are superposed and their sizes are the same; the center of the near plane is also used as the origin of the 3D coordinate system; and the far plane is set to the maximum. So we can ignore the parameter f, and once a display device is chosen, the width of the display device, i.e. the value of d, is a known and constant value.
Fig. 4 is a diagram illustrating an exemplary system for adaptive 3D rendering according to the embodiment of the present invention. The system comprises a display device, a position sensor and a 3D rendering device. Their functions are as follows:
(1) The position sensor, which is connected to the 3D rendering device, is used to detect the viewer's position or the viewer's eyes' position relative to the display device in the real environment and to transmit the detected position information to the 3D rendering device. One example of a position sensor capable of locating the viewer is a sensor that includes two position cameras and uses triangulation to determine the location (a sketch of this idea follows the list below). In addition, many known technologies exist for detecting position information, for example, the sensor disclosed in US 2009/0051699, the user-worn unit disclosed in WO200116929, etc. The purpose of using the position sensor is to detect the position information of the viewer relative to the display device that the viewer watches. In the embodiment, the position sensor is mounted on the display device. However, a person skilled in the art will recognize that a position sensor that is not mounted on the display device could also be used, as long as it can detect the position information of the viewer relative to the display device.
(2) The display device, connected to the 3D rendering device, is used to display 3D contents, e.g. 3D object(s).
(3) The 3D rendering device is used to render the 3D contents based on the position relationship between the viewer and the display device, i.e. the position information of the viewer relative to the display device. In addition, when receiving a message from the position sensor indicating that the viewer has moved, it adaptively re-renders the 3D contents in response to the viewer's movement.
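As a sketch of the two-camera triangulation mentioned in item (1), the following assumes the cameras sit on the display plane a known distance apart and each reports a horizontal bearing angle to the viewer; the function name and angle conventions are assumptions made for illustration.

```python
import math

def triangulate(baseline, angle_left, angle_right):
    """Locate the viewer from two cameras separated by `baseline`.

    The cameras sit at x = -baseline/2 and x = +baseline/2 on the display
    plane (z = 0); each reports the angle between its optical axis
    (pointing straight out of the screen) and the viewer, measured
    positive toward +x. Returns the viewer's (x, z) position.
    """
    tl, tr = math.tan(angle_left), math.tan(angle_right)
    z = baseline / (tl - tr)      # perpendicular distance from the display
    x = z * tl - baseline / 2.0   # horizontal offset from the display center
    return x, z
```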
In order to adaptively render the 3D contents, we set the position of the viewer's eyes as the position of the virtual camera. Thus, as the viewer moves, the position of the virtual camera is changed accordingly and, consequently, the view angle is recalculated based on the position of the viewer's eyes relative to the display device, which results in a change in the rendering of the 3D contents. Thus, the viewer feels that the 3D content changes as he/she moves, and thereby the user experience is improved.
Fig. 5 is a diagram illustrating an example of adaptive 3D rendering according to the present embodiment of the invention. In this example, we assume that the viewer moves along the straight line x=0, y=0 and that the virtual camera looks at the center of the display device, which is the origin O (0, 0, 0) of the 3D coordinate system. Under this assumption, the relationship of fov, n and d complies with Formula (2). Here d is the width of the display device, and n can be detected by the position sensor. Thus, once the viewer moves in front of the display device, the position sensor detects the distance value n and transmits it to the 3D rendering device. The 3D rendering device recalculates fov using Formula (2) and the received distance value n, and changes the 3D rendering of the camera model with the recalculated fov and the received distance value n.
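A minimal sketch of this update step, with Formula (2) written out; the function name is an assumption for illustration.

```python
import math

def recalc_fov(d, n):
    """Fig. 5 case: viewer on the line x=0, y=0 at detected distance n.

    d -- width of the display device (known constant)
    n -- detected viewer-to-display distance, also used as the near distance
    Returns the recalculated view angle fov, per Formula (2).
    """
    return 2.0 * math.atan(d / (2.0 * n))
```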
Fig. 6 is a diagram illustrating another example of adaptive 3D rendering according to the present embodiment of the invention. In this example, the viewer moves within the y=0 plane. In Fig. 6, n' represents the distance between the viewer and the display device; m represents the distance between the viewer and the Z axis (the Z axis passes through the center of the display device and is perpendicular to the display device). Both n' and m can be detected by the position sensor. So, in the 3D coordinate system, the virtual camera's position is (m, 0, -n'), which is the same position as the viewer's eyes. It is still set to look at O (0, 0, 0). In this scenario, the unknown parameters are fov and n. According to trigonometric theory, we can get the following formulas:
tan(fov / 2) = m / n'    Formula (3)

After conversion, we get

fov = 2 * arctan(m / n')    Formula (4)

cos(fov / 2) = n' / n    Formula (5)

After conversion, we get

n = n' / cos(arctan(m / n'))    Formula (6)
Therefore, the 3D rendering device uses Formulas (4) and (6) to recalculate the parameters fov and n, and changes the 3D rendering of the camera model with the recalculated fov and n.
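The corresponding sketch for this case transcribes Formulas (4) and (6) directly; the function name is an assumption.

```python
import math

def recalc_fov_and_near(m, n_prime):
    """Fig. 6 case: viewer at horizontal offset m, distance n' from the display.

    Returns (fov, n): the view angle per Formula (4) and the camera-to-
    near-plane distance per Formula (6); the virtual camera sits at
    (m, 0, -n') and looks at the origin O.
    """
    fov = 2.0 * math.atan(m / n_prime)              # Formula (4)
    n = n_prime / math.cos(math.atan(m / n_prime))  # Formula (6)
    return fov, n
```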
Fig. 7 is a flow chart illustrating a method for adaptive 3D rendering according to the present embodiment of the invention.
In step 701, the 3D rendering device receives, from the position sensor, a message containing position information of a viewer relative to the display device, in response to the viewer's movement relative to the display device. As the above two examples show, the position information may contain only the perpendicular distance from the viewer to the plane of the display device (the first example), or contain n' and m (the second example). A person skilled in the art will recognize that the composition of the position information varies from one implementation to another and across rendering algorithms.
In step 702, the 3D rendering device determines all or part of the rendering parameters of the camera model based on the position information. As the above examples show, the determined rendering parameters comprise the view angle fov and the distance n from the virtual camera to the near plane. A person skilled in the art will recognize that the rendering parameters to be determined vary from one implementation to another.
In step 703, the 3D rendering device changes the 3D rendering of the camera model based on the determined rendering parameters. Specifically, the 3D rendering device uses the determined rendering parameters to invoke the camera model to re-render the 3D contents.
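Steps 701 to 703 could be tied together as in the following sketch, reusing recalc_fov_and_near from the earlier sketch; the sensor and renderer interfaces shown here are hypothetical placeholders, not APIs defined by the patent.

```python
def adaptive_rendering_loop(position_sensor, renderer):
    """Sketch of steps 701-703 with hypothetical sensor/renderer objects."""
    for message in position_sensor.messages():    # step 701: message arrives on movement
        m, n_prime = message.m, message.n_prime   # position info (Fig. 6 case)
        fov, n = recalc_fov_and_near(m, n_prime)  # step 702: derive camera parameters
        renderer.set_camera(position=(m, 0.0, -n_prime),
                            look_at=(0.0, 0.0, 0.0),
                            fov=fov, near=n)
        renderer.render()                         # step 703: re-render the 3D contents
```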
The above embodiment describes how the 3D rendering device changes the 3D rendering of 3D contents in response to the viewer moving within the y=0 plane. A person skilled in the art will recognize that the method can also be applied to situations where the vertical position relationship between the viewer and the display device changes, or where both the horizontal and the vertical position relationships change, i.e. the method can be applied to a situation where the viewer moves arbitrarily in front of the display device.
According to a variant of the present embodiment, the position sensor has the ability to calculate all or part of the rendering parameters of the camera model, and it transmits the calculated rendering parameters to the 3D rendering device. In this case, step 702 is not necessary for the 3D rendering device.
According to a variant of the present embodiment, the position of the virtual camera is not superposed with the viewer's eyes, and the near plane is not superposed with the display device. Instead, the distance from the viewer's eyes to the display device is in a proportional relationship with the distance from the virtual camera to the near plane, and the proportion value is preconfigured and known to the 3D rendering device.
According to another variant of the present embodiment, the position sensor and the 3D rendering device may belong to the same device, and a shared processor is used to calculate the parameters and render the 3D contents. In this case, the position sensor need not transmit the detected position information to the 3D rendering device.
According to the above description of the present embodiment and its variants, the rendering of 3D objects is dynamically adjusted/changed based on the detected position relationship between the viewer or the viewer's eyes and the display device. More specifically, if the method is applied in the virtual camera model, it is the parameters of the camera model that are correspondingly changed with the position change of the viewer. However, it should be understood that other implementations can also be used to dynamically change the rendering of 3D objects as the viewer moves relative to the display device.

Fig. 8 is a diagram illustrating a third example of adaptive 3D rendering according to the present embodiment. In this example, we rotate the 3D objects based on the position relationship between the viewer and the display device, rather than changing the parameters of the camera model, i.e. we rotate the 3D objects as the viewer moves while the position of the virtual camera, the near plane, the view angle, etc. remain unchanged in the camera model. As shown in Fig. 8, the viewer moves from position P to position Q. The angle between a line crossing the origin O and position P and a line crossing the origin O and position Q is φ. The angle φ is directly detected by the position sensor if the position sensor permits detection of angles. Alternatively, if the position sensor can only detect distances, such as the distance between the viewer and the display device and the distance between the viewer and a plane crossing the center of the display device and perpendicular to the plane of the display device, these distances are used to calculate the angle φ based on trigonometric functions. After obtaining the angle φ, the 3D objects are accordingly rendered by a rotation of angle φ.

Further, it should be understood that the method of rotating 3D objects to show their views at a certain angle can be used in 3D rendering methods other than the virtual camera model. According to a variant, the system sets a reference line or reference plane that crosses a certain point, e.g. the center of the display device, and is perpendicular to the plane of the display device. Once the viewer moves, the system detects or calculates the angle between the reference line or the reference plane and a line crossing the viewer's current position and the certain point. The detected or calculated angle is then used to rotate the 3D object accordingly.
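A sketch of this rotation variant follows, computing φ from the two viewer positions P and Q in the y=0 plane and rotating the model's vertices about the vertical axis; the coordinate conventions and function names are assumptions for illustration.

```python
import math

def rotation_angle(p, q):
    """Angle φ between lines O-P and O-Q, for viewer positions in the y=0 plane.

    p, q -- (x, z) positions before and after the move, with the origin O
            at the center of the display device.
    """
    return math.atan2(q[0], q[1]) - math.atan2(p[0], p[1])

def rotate_about_y(vertices, phi):
    """Rotate model vertices (x, y, z) by phi around the vertical (y) axis."""
    c, s = math.cos(phi), math.sin(phi)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in vertices]
```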
According to a variant of the present embodiment, instead of detecting the position of the viewer's eyes, the position sensor detects the position of the viewer, e.g. the body of the viewer. In this case, the system may only horizontally adjust the rendering of 3D contents and ignore the vertical movement of the viewer, which means the viewer is treated as moving within a y=C plane (where C is a constant value).
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations shall fall in the scope of the invention.

Claims

1. A method for adaptive 3D rendering in a system where at least one 3D object is rendered on a display device, characterized by comprising the step of: in response to a viewer's movement relative to the display device, changing the rendering of the at least one 3D object based on position information of the viewer relative to the display device.
2. The method of claim 1, characterized by further comprising: detecting the movement of the viewer relative to the display device in response to the viewer's movement relative to the display device.
3. The method of claim 2, characterized by further comprising: determining position information of the viewer relative to the display device from the detection.
4. The method of claim 1, characterized in that the at least one 3D object can be rendered from views of different angles.
5. The method of claim 4, characterized in that, in response to the viewer's movement, the at least one 3D object is rendered in a view of a certain angle determined based on the position information of the viewer relative to the display device.
6. The method of claim 5, characterized in that each different moving position along the viewer's movement corresponds to a rendering of the at least one 3D object in a different view.
7. The method of claim 1, characterized in that a camera model is used to render the at least one 3D object.
8. The method of claim 7, characterized in that the near plane of the camera model and the projection window of the camera model are superposed.
9. The method of claim 7, characterized in that the distance from the viewer to the display device is in a proportional relationship with the distance from the virtual camera of the camera model to the near plane.
10. The method of any of claims 7 to 9, characterized in that the step of changing the rendering of the at least one 3D object further comprises: changing parameters of the camera model based on position information of the viewer relative to the display device.
11. The method of claim 1, characterized in that the step of changing the rendering of the at least one 3D object further comprises: rotating the at least one 3D object based on the position information of the viewer relative to the display device, wherein the rotation angle of the at least one 3D object is associated with the viewer's movement.
12. A device for adaptive 3D rendering in a system where at least one 3D object is rendered on a display device, characterized by comprising: a 3D rendering module configured to change the rendering of the at least one 3D object based on position information of the viewer relative to the display device in response to the viewer's movement relative to the display device.
13. The device of claim 12, characterized by further comprising: a position sensor module configured to detect the viewer's movement relative to the display device, wherein the 3D rendering module is further configured to determine the position information of the viewer relative to the display device from the detection of the position sensor module.
PCT/CN2009/001235 2009-11-06 2009-11-06 Method and device for adaptive 3d rendering WO2011054132A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2009/001235 WO2011054132A1 (en) 2009-11-06 2009-11-06 Method and device for adaptive 3d rendering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2009/001235 WO2011054132A1 (en) 2009-11-06 2009-11-06 Method and device for adaptive 3d rendering

Publications (1)

Publication Number Publication Date
WO2011054132A1 (en) 2011-05-12

Family

ID=43969544

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/001235 WO2011054132A1 (en) 2009-11-06 2009-11-06 Method and device for adaptive 3d rendering

Country Status (1)

Country Link
WO (1) WO2011054132A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005116807A1 (en) * 2004-05-28 2005-12-08 National University Of Singapore An interactive system and method
CN101231752A (en) * 2008-01-31 2008-07-30 北京航空航天大学 True three-dimensional panoramic display and interactive apparatus without calibration

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015130137A1 (en) * 2014-02-27 2015-09-03 Samsung Electronics Co., Ltd. Method and device for displaying three-dimensional graphical user interface screen

Similar Documents

Publication Publication Date Title
CN107564089B (en) Three-dimensional image processing method, device, storage medium and computer equipment
US11010958B2 (en) Method and system for generating an image of a subject in a scene
WO2017113731A1 (en) 360-degree panoramic displaying method and displaying module, and mobile terminal
US20160343166A1 (en) Image-capturing system for combining subject and three-dimensional virtual space in real time
US20110248987A1 (en) Interactive three dimensional displays on handheld devices
CN106688231A (en) Stereo image recording and playback
JP2011090400A (en) Image display device, method, and program
TW201835723A (en) Graphic processing method and device, virtual reality system, computer storage medium
Jia et al. 3D image reconstruction and human body tracking using stereo vision and Kinect technology
US11189057B2 (en) Provision of virtual reality content
US20210082176A1 (en) Passthrough visualization
JP2022122876A (en) image display system
US20160067617A1 (en) Detecting the Changing Position Of A Face To Move And Rotate A Game Object In A Virtual Environment
US20220230399A1 (en) Extended reality interaction in synchronous virtual spaces using heterogeneous devices
CN107005689B (en) Digital video rendering
CN112929651A (en) Display method, display device, electronic equipment and storage medium
CN110286906A (en) Method for displaying user interface, device, storage medium and mobile terminal
JP6621565B2 (en) Display control apparatus, display control method, and program
US10902554B2 (en) Method and system for providing at least a portion of content having six degrees of freedom motion
TW202034688A (en) Image signal representing a scene
WO2012063911A1 (en) 3d content display device, and 3d content display method
WO2011054132A1 (en) Method and device for adaptive 3d rendering
JP6168597B2 (en) Information terminal equipment
JP2005011275A (en) System and program for displaying stereoscopic image
US20220044351A1 (en) Method and system for providing at least a portion of content having six degrees of freedom motion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09851016

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09851016

Country of ref document: EP

Kind code of ref document: A1