GB2548080A - A method for image transformation - Google Patents

A method for image transformation

Info

Publication number
GB2548080A
GB2548080A
Authority
GB
United Kingdom
Prior art keywords
image
viewing direction
main viewing
display unit
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1602903.5A
Other versions
GB201602903D0 (en)
GB2548080B (en)
Inventor
Oikkonen Markku
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia Technologies Oy
Original Assignee
Nokia Technologies Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nokia Technologies Oy
Priority to GB1602903.5A
Publication of GB201602903D0
Priority to PCT/IB2017/050688
Publication of GB2548080A
Application granted
Publication of GB2548080B
Expired - Fee Related
Anticipated expiration


Classifications

    • G06T3/06
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B30/00 Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N13/117 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/398 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/02 Improving the quality of display appearance
    • G09G2320/0261 Improving the quality of display appearance in the context of movement of objects on the screen or movement of the observer relative to the screen
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/04 Changes in size, position or resolution of an image
    • G09G2340/045 Zooming at least part of an image, i.e. enlarging it or shrinking it

Abstract

A method comprising: obtaining, by an immersive display unit, e.g. a Head Mounted Display (HMD), an image to be displayed; obtaining an input indicating a main viewing direction of a user 502 of the immersive display unit; determining a first value for an image modification parameter, e.g. zoom ratio, blurring strength, colour tone, colour selection, for an image element residing in the main viewing direction; determining a second value for the image modification parameter for image elements A deviating from the main viewing direction, where the second value is a function of the angle α between the main viewing direction and the deviated direction; and displaying the image according to the determined values of the image modification parameter on the immersive display unit. The embodiment is directed towards processing an image based on the tracking of a user's head movements, such that when the head moves, e.g. when nodding, the distance to an object in the image changes. This process is based on the transformation of image regions according to the angular change between a person's position and the object being viewed.

Description

A METHOD FOR IMAGE TRANSFORMATION
Field
Various embodiments relate to video effects in immersive displays, and more particularly to a method for image transformation.
Background of the invention
In an immersive display, a viewer is surrounded by one or more images displayed either as a sub-section of a sphere or as the whole sphere around the viewer. The viewer may see the display image on a physical screen surface around him or use a head-mounted display (HMD). A head mounted display is a display device, worn on the head, that has a small display optic in front of the eyes. One use case for the head mounted display is the ability to watch live or pre-recorded videos. Another use case is the ability to watch computer created content, such as three-dimensional (3D) games. When watching a 360 degree, i.e. the whole sphere, panoramic video with a head mounted display equipped with head tracking technology, a user may be able to feel much more immersed inside the world of the video compared to watching a conventional two-dimensional (2D) display. A 3D stereo image effect in the video may enhance the immersive feeling even further. 360 degree stereo panorama videos are a known way to distribute 3D videos meant to be viewed in head mounted displays. While computer created virtual reality content may be rendered to provide the viewer with an immersive experience having a deep feeling of “presence”, like interacting with and moving inside the 3D content, stereo panorama videos based on filmed virtual reality content (captured e.g. by a stereo/multi-camera device) are only provided with a limited set of 3D effects and viewer interaction. The lack of many 3D effects in the video playback may reduce the feeling of presence and may give the video an artificial look.
One such issue in the 360 degree stereo panorama video format, particularly when using head mounted displays, is that the experienced distance to an object in front of the viewer remains the same even if the viewer moves his/her head closer to the object. This is due to the fact that the viewer is bound to the same position from where the camera has captured the view. In real life, when the head is moved forward/backward, the viewpoint moves slightly along with the head and the foreground objects appear to move in relation to the background objects.
Summary
Various embodiments include a method, an apparatus and a computer program, which are characterized by what is stated in the independent claims. Various embodiments of the invention are disclosed in the dependent claims.
According to a first embodiment, there is disclosed a method comprising: obtaining, by an immersive display unit, at least one image to be displayed on the immersive display unit; obtaining an input indicating a main viewing direction of a user of the immersive display unit; determining a first value for at least one image modification parameter for an image element residing in the main viewing direction; determining a second value for the at least one image modification parameter for image elements deviating from said main viewing direction, the second value being a function of an angle between the main viewing direction and a direction from the user towards an image location deviating from the main viewing direction; and displaying the at least one image according to the determined values of the at least one image modification parameter on the immersive display unit.
According to an embodiment, the input is determined on the basis of a movement of the head of the user.
According to an embodiment, the input is obtained via a user interface of the immersive display unit.
According to an embodiment, the input is obtained automatically in response to the user positioning within the immersive display unit.
According to an embodiment, the input further indicates strength of the input in said main viewing direction.
According to an embodiment, the at least one image modification parameter is a zoom ratio of the image element.
According to an embodiment, the zoom ratio of the nth equally spaced image element around a 360 degree display circle is calculated as:
M(n) = (a'(n+1) - a'(n)) / (a(n+1) - a(n)) = (a'(n+1) - a'(n)) / (360°/m),
where m is the total number of image elements of equal width around a 360 degree display circle, a(n) is the angle deviating from the main viewing direction, corresponding to the nth image element before zooming, and a'(n) is the angle of a modified nth image element after zooming, a'(n) being a function of a(n) and the user input value for image magnification in the main viewing direction.
According to an embodiment, the at least one image modification parameter is at least one of: - a blurring strength of the image element; - a color tone strength of the image element; - a color selection of the image element.
According to an embodiment, the immersive display unit is a head mounted device (HMD) with head tracking system or a curved display at least partly surrounding the head of the user.
According to an embodiment, the at least one image comprises video image data.
According to a second embodiment, there is provided an apparatus comprising an immersive display unit comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: obtain at least one image to be displayed on the immersive display unit; obtain an input indicating a main viewing direction of a user of the immersive display unit; determine a first value for at least one image modification parameter for an image element residing in the main viewing direction; determine a second value for the at least one image modification parameter for image elements deviating from said main viewing direction, the second value being a function of an angle between the main viewing direction and a direction from the user towards an image location deviating from the main viewing direction; and display the at least one image according to the determined values of the at least one image modification parameter on the immersive display unit.
According to a third embodiment, there is provided a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform the method according to any of the above embodiments.
These and other embodiments of the invention will become apparent in view of the detailed disclosure.
List of drawings
In the following, various embodiments of the invention will be described in more detail with reference to the appended drawings, in which
Fig. 1a shows an example of a multi-camera system as a simplified block diagram, in accordance with an embodiment;
Fig. 1b shows a perspective view of a multi-camera system, in accordance with an embodiment;
Fig. 2 shows an example of a video playback apparatus as a simplified block diagram, in accordance with an embodiment;
Fig. 3 shows a flow chart of an image transformation process according to an embodiment of the invention;
Figs. 4a, 4b illustrate an example of moving the display virtually closer to a viewer according to an embodiment of the invention;
Fig. 5 illustrates an example of geometric analysis for determining a modification ratio according to an embodiment of the invention;
Fig. 6 is a graph illustrating an example of image element movements according to an embodiment of the invention;
Fig. 7 is a graph illustrating an example of the magnification as a function of the original location of the image element according to an embodiment of the invention;
Fig. 8 shows a qualitative explanation of the results of the image transformation process; and
Figs. 9a, 9b show the basic principle of changing the image properties as a function of an angle in a cylindrical display and in a spherical display, respectively.
Description of embodiments
The following embodiments are exemplary. Although the specification may refer to "an", "one", or "some" embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments.
When 360 degree stereo panorama video is viewed in an immersive multimedia display unit, for example a head mounted display with video player software, the video player may be able to create an effect, similar to that present in the real world, of the viewer moving in the immersed space. The forward and/or backward motion of the user's head may make the video more realistic looking as the objects in the foreground appear to move slightly in relation to background objects when the head is moved. However, considering stereo panorama videos based on filmed virtual reality content that is captured e.g. by a stereo- or multi-camera device, the effect of the movement of foreground and background objects relative to each other is difficult to achieve, at least in a computationally lightweight manner.
Figure 1a illustrates an example of a multi-camera system 100, which may be able to capture and produce 360 degree stereo panorama video. The multi-camera system 100 comprises two or more camera units 102. In this example, the number of camera units 102 is eight, but may also be less than eight or more than eight. Each camera unit 102 is located at a different location in the multi-camera system, and may have a different orientation with respect to other camera units 102, so that they may capture a part of the 360 degree scene from different viewpoints substantially simultaneously. A pair of camera units 102 of the multi-camera system 100 may correspond with left and right eye viewpoints at a time. As an example, the camera units 102 may have an omnidirectional constellation, so that the system has a 360° viewing angle in 3D space. In other words, such a multi-camera system 100 may be able to see each direction of a scene so that each spot of the scene around the multi-camera system 100 can be viewed by at least one camera unit 102 or a pair of camera units 102.
Without losing generality, any two camera units 102 of the multi-camera system 100 may be regarded as a pair of camera units 102. Hence, a multi-camera system of two cameras may have only one pair of camera units, a multi-camera system of three cameras may have three pairs of camera units, a multi-camera system of four cameras may have six pairs of camera units, etc. Generally, a multi-camera system 100 comprising N camera units 102, where N is an integer greater than one, may have N(N-1)/2 pairs of camera units 102. Accordingly, images captured by the camera units 102 at a certain time may be considered as N(N-1)/2 pairs of captured images.
The multi-camera system 100 of Figure 1a may also comprise a processor 104 for controlling operations of the multi-camera system 100. There may also be a memory 106 for storing data and computer code to be executed by the processor 104, and a transceiver 108 for communicating with, for example, a communication network and/or other devices in a wireless and/or wired manner. The multi-camera system 100 may further comprise a user interface (UI) 110 for displaying information to the user, for generating audio signals, and/or for receiving user inputs. However, the multi-camera system 100 need not comprise each feature mentioned above, or may comprise other features as well. For example, there may be electric and/or mechanical elements for adjusting and/or controlling optics of the camera units 102.
Figure 1a also illustrates some operational elements which may be implemented, for example, as computer code which can be executed in the processor 104, in hardware, or both, to perform a desired function. An optical flow estimation 114 may perform optical flow estimation for pairs of images of different camera units 102. Transform vectors or other information indicative of an amount of interpolation/extrapolation to be applied to different parts of a viewport may have been stored into the memory 106, or they may be calculated e.g. as a function of the location of the pixel in question. The operation of the elements will be described later in more detail. It should be noted that there may also be other operational elements in the multi-camera system 100 than those depicted in Figure 1a.
Figure 1b shows a perspective view of the multi-camera system 100, in accordance with an embodiment. In Figure 1b seven camera units 102a–102g can be seen, but the multi-camera system 100 may comprise even more camera units which are not visible from this perspective view. Figure 1b also shows two microphones 112a, 112b, but the apparatus may also comprise one microphone or more than two microphones.
In accordance with an embodiment, the multi-camera system 100 may be controlled by another device, wherein the multi-camera system 100 and the other device may communicate with each other and a user may use a user interface of the other device for entering commands, parameters, etc. and the user may be provided with information from the multi-camera system 100 via the user interface of the other device.
Some terminology regarding the multi-camera system 100 will now be shortly described. A viewport is a part of the scene which is displayed by a head mounted display at a time. Both left and right eye images may have overlapping, but slightly different viewports. A camera space, or camera coordinates, stands for a coordinate system of an individual camera unit 102 whereas a world space, or world coordinates, stands for a coordinate system of the multi-camera system 100 as a whole. An optical flow may be used to describe how objects, surfaces, and edges in a visual scene move or transform, when an observing point moves from a location of one camera to a location of another camera. In some embodiments, there need not be any actual movement but it may virtually be determined how the view of the scene might change when a viewing point is moved from one camera unit to another camera unit.
Figure 2 shows an example of a video playback apparatus 200 as a simplified block diagram, in accordance with an embodiment. A non-limiting example of the video playback apparatus 200 is an immersive display unit. An example of the immersive display unit includes, but is not limited to, a head mounted display. The video playback apparatus 200 may comprise, for example, one or two displays 202 for video playback. When two displays are used, a first display 202a may display images for a left eye and a second display 202b may display images for a right eye, in accordance with an embodiment. In case of only one display 202, that display 202 may be used to display images for the left eye on the left side of the display 202 and to display images for the right eye on the right side of the display 202. While describing various embodiments further below, it is assumed that the one or two displays are configured to create one or more images surrounding the viewer partly or completely, such as a head-mounted display with a head tracking system or a cylindrical or a spherical display curving in 2D or in 3D at least partly, but possibly 360°, around the viewer.
The video playback apparatus 200 may be provided with encoded data streams via a communication interface 204, and a processor 206 may perform control operations for the video playback apparatus 200 and may also perform operations to reconstruct video streams for displaying on the basis of received encoded data streams. There may also be a decoder 208 for decoding received data streams and a memory 210 for storing data and computer code. In an embodiment, the decoder 208 is implemented, for example, as a software code, which can be executed by the processor 206 to perform the desired function, in hardware, or in both. The video playback apparatus 200 may further comprise a user input 212 for receiving e.g. user instructions or inputs.
The video playback apparatus 200 may comprise an image modification unit 214 which may perform image modification on the basis of modification information provided by a user input and at least one image modification function and transform image elements as will be described later in this specification. In an embodiment, the image modification unit 214 can be implemented, for example, as a software code, which can be executed by the processor to perform the desired function, in hardware, or in both.
It should be noted that the video playback device 200 need not comprise each of the above elements or may also comprise other elements. For example, the decoding element 208 may be a separate device wherein that device may perform decoding operations and provide decoded data stream to the video playback device 200 for further processing and displaying decoded video streams.
In the following, the image transformation method in accordance with an embodiment is described in more detail with reference to the flow diagram of Figure 3. Video information to be processed may have been captured and processed by two or more camera units 102 to obtain a panorama video, for example a 360 degree panorama video. From this panorama video, a first stream of images representing e.g. left eye views and a second stream of images representing e.g. right eye views of the scene may be encoded.
In the method, an immersive display unit, for example a head mounted display, arranged to create an image surrounding a user obtains (300) at least one image to be displayed on the immersive display, and an input indicating a main viewing direction of the immersive display unit is obtained (302). In an embodiment, the input indicating the main viewing direction of the immersive display unit is given by a user. In another embodiment, the input indicating the main viewing direction is received automatically. A first value for at least one image modification parameter for an image element residing in the main viewing direction is determined (304). A second value is determined (306) for at least one image modification parameter for image elements deviating from the main viewing direction as a function of an angle between the main viewing direction and a direction from the user towards the image location deviating from the main viewing direction. Finally, the at least one image is displayed (308) according to the determined values of the at least one image modification parameter on the display.
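Purely as an illustrative sketch of the flow of Figure 3, and not as the claimed implementation, the steps could be organised in Python roughly as follows; the function names and data layout (modification_fn, elements as angle/data pairs) are hypothetical placeholders:

def transform_image(elements, main_direction_deg, modification_fn):
    """Sketch of steps 300-308: apply an image modification parameter per image element.

    elements           -- list of (direction_deg, element_data) pairs around the viewer
    main_direction_deg -- main viewing direction obtained from the input (step 302)
    modification_fn    -- maps a deviation angle in degrees to a parameter value,
                          e.g. a zoom ratio or a blurring strength (steps 304/306)
    """
    transformed = []
    for direction_deg, element_data in elements:
        # Angle between the main viewing direction and the direction towards this element
        deviation = (direction_deg - main_direction_deg) % 360.0
        value = modification_fn(deviation)
        transformed.append((direction_deg, element_data, value))
    # Step 308: the transformed elements would then be rendered on the immersive display unit
    return transformed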
According to an embodiment, the immersive display unit may be a head mounted device (HMD) with a head tracking system or a curved display at least partly surrounding the head of the user.
Hence, a straightforward and computationally light method for providing changes in the surrounding display image properties is provided herein, which increases the feeling of the user of the HMD being present and immersed in the surrounding 3D space. When a change in the image properties is desired, the user of the HMD may give a simple user input for indicating the main viewing direction. Alternatively, the input for indicating the main viewing direction may be obtained automatically in response to the user positioning him/herself within the immersive display unit. A value of at least one image modification parameter in the main viewing direction is used as a reference value, and the values of the image modification parameter in other directions of the 3D image are calculated as a function of the angle deviating from the main viewing direction.
According to an embodiment, the user input is determined on the basis of a movement of the head of the user. When the user has the glasses on (i.e. the user is using the HMD), a movement of the head, such as a nod of the head forward, is an intuitive manner for the user to determine how the image is modified. For example, moving the head forward can modify the image so that the viewer has the impression that s/he moves closer to the image in front of him/her.
According to an embodiment, the user input is obtained via a user interface of the HMD. Herein, it is not necessary that the user has the glasses on, but the main viewing direction may be submitted via the user interface before the user starts wearing the HMD. It is also possible that there is an external user interface device connected to the HMD, and the user may give the user input via the external user interface device even if wearing the HMD.
According to an embodiment, the user input further indicates strength of the user input in the main viewing direction. The strength value of the user input may be used as the basis for adjusting the value of the image modification parameter in the main viewing direction. The strength may be obtained e.g. by a movement sensor connected to the HMD, which determines the acceleration and/or distance of the head movement. If an external user interface device connected to the HMD is used, the strength may be given as a numerical value or otherwise proportional to a predefined scale.
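As a hedged illustration only, the strength value could be mapped to the virtual display movement used in the geometric analysis below (the distance k relative to the display radius r); the sensor reading, the maximum displacement and the clamping limit in this sketch are assumptions, not part of the disclosure:

def head_displacement_to_k_over_r(displacement_m, max_displacement_m=0.10, max_k_over_r=0.2):
    """Map a forward/backward head displacement in metres to a virtual display movement k/r.

    A forward movement gives a positive ratio (the display appears to move closer),
    a backward movement a negative ratio; the result is clamped to +/- max_k_over_r
    (0.2 corresponds to the example used in Figures 6 and 7).
    """
    ratio = (displacement_m / max_displacement_m) * max_k_over_r
    return max(-max_k_over_r, min(max_k_over_r, ratio))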
According to an embodiment, the at least one image modification parameter is a zoom ratio of the image element. Herein, in response to the user input, the element of the display screen residing in the main viewing direction may be enlarged, while the image parts on the left and right side of the viewer do not change in size, but just move a little backwards. This may be illustrated by Figures 4a and 4b which show top views of cylindrical displays and a viewer in the center of the display. In the initial state of Figure 4a, the viewer wears the HMD and sees a cylindrical display around him/her. In response to the user input triggering the modification, the part of the display screen residing in the main viewing direction (i.e. in front of the viewer) seems to get closer to the viewer. Thus, the viewer has the feeling that the display around him has moved to the dashed line position of Fig. 4b, although the physical display stays all the time in the original location (solid line).
In the following, a geometrical analysis of the zoom ratio modification is discussed in more detail by referring to Figure 5. For clarity, the analysis is for a cylindrical physical display around the user, but the principles presented herein can be extended to spherical displays and head-mounted displays, and other forms of immersive display units, as well. The arc 500 represents the physical cylindrical display around the viewer 502 at a radius r. Let us take an example image element 504 on the cylindrical display 500 around the viewer, where the image element 504 is represented by an arc, which in its initial state starts at angle a from the main viewing direction (i.e. to the left from the upright direction on the image). The viewer sees the width of the arc of the image element 504 in angle β. It is noted that for illustrative purposes of the geometrical analysis, the angle β is here shown as much larger than 1 degree. In practical implementations, β would be ≪ 1 degree.
Initially, the viewer 502 stays in the center of the cylindrical display. In response to a user input (e.g. a nod of the head), the dimensions of the image around the viewer change so that the viewer feels that the circle of the display has moved a distance k closer to the viewer. The virtual position of the "moved" display circle is marked with the dashed line 500'. The viewer would see the original arc of the image element 504 now in a new location, in angle γ, as the image element 504'.
Naturally, with respect to the viewer 502, the actual physical cylindrical display 500 around the viewer remains at the same location. Thus, in order to show the arc of the image element 504 in the new location, in angle γ, to the viewer, the physical cylindrical display 500 should show the original image element 504 as the black arc 504'' on the display circle.
Now, analyzing the grey triangle OAB, where the sides OA = r and AB = k are known, OB = s is unknown, the known angle a is between the sides OA and AB, and the angle Θ between the sides OA and OB is unknown, we have:

s² = r² + k² - 2*r*k*cos(a)    (1)

sin(Θ) = k*sin(a) / s    (2)

Equations (1) and (2) can be combined as:

Θ = arcsin( k*sin(a) / √(r² + k² - 2*r*k*cos(a)) )    (3)

Equation (3) now defines how the original angle a (corresponding to the start edge of the chosen image element) changes to a + Θ when the circle display of radius r is virtually moved by the distance k.
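A minimal numerical sketch of Equations (1)-(3), assuming angles given in degrees, the radius normalised to r = 1 so that k is expressed as the fraction k/r, and k well below r (illustrative code only, not part of the disclosure):

import math

def shifted_angle_deg(a_deg, k_over_r):
    """Return a + Θ: the angle at which the viewer sees a point originally at angle a
    from the main viewing direction, after the display circle is virtually moved a
    distance k = k_over_r * r towards the viewer (Eqs. (1)-(3))."""
    a = math.radians(a_deg)
    k = k_over_r
    s = math.sqrt(1.0 + k * k - 2.0 * k * math.cos(a))   # Eq. (1): law of cosines, s = OB
    theta = math.asin(k * math.sin(a) / s)               # Eq. (2): law of sines for the angle Θ
    return a_deg + math.degrees(theta)                   # Eq. (3): modified angle a + Θ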
For defining the zoom ratio, i.e. how much each image element is stretched or squeezed, a magnification factor M, corresponding to γ/β, can be calculated as follows:
The 360 degree display circle may be divided into m image elements of equal width, whereby the angle of one element from the viewer's perspective corresponds to 360°/m. Considering the nth image element, its starting angle (i.e. the angle of the first side of the image element from the main viewing direction) is a(n) = (n-1)*360°/m.
With the image modification, the starting angle of each image element changes from a(n) to a'(n), which corresponds to a + Θ, as shown above. The angle difference between a(n) and a(n+1) is a(n+1) - a(n) = 360°/m. The angle difference in the modified image between a'(n) and a'(n+1) is a'(n+1) - a'(n). Thus, the magnification factor M (i.e. zoom ratio) of the image element n is
M(n) = (a'(n+1) - a'(n)) / (a(n+1) - a(n)) = (a'(n+1) - a'(n)) / (360°/m),    (4)
where m is the number of image elements of equal width around a 360 degree display circle with the viewer at the circle center, a(n) is the angle deviating from the main viewing direction, corresponding to the nth image element before zooming, and a'(n) is the angle of a modified nth image element after zooming, a'(n) being a function of a(n) and the user input value for image magnification in the main viewing direction.
The analysis above with Eq. (3) and Eq. (4) applies to the width of the image elements around a cylindrical display on a horizontal plane that is perpendicular to the axis of the cylinder. In zooming, the height of each image element changes with the same magnification factor M as the width of the elements changes so that the aspect ratio of the image element remains unchanged.
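Continuing the sketch above, the per-element zoom ratios of Equation (4) could be computed as follows; shifted_angle_deg is the hypothetical helper from the previous listing:

def zoom_ratios(m, k_over_r):
    """Return the magnification factor M(n) of Eq. (4) for each of the m equal-width
    image elements around the 360 degree display circle."""
    # Element edges a(n) = (n-1)*360/m before the modification, including the closing edge
    edges = [n * 360.0 / m for n in range(m + 1)]
    new_edges = [shifted_angle_deg(e, k_over_r) for e in edges]
    width = 360.0 / m                                     # a(n+1) - a(n)
    return [(new_edges[n + 1] - new_edges[n]) / width for n in range(m)]

# With m = 100 and k_over_r = 0.2, the first element (straight ahead) gets a ratio of about
# 1.25 and the element behind the viewer about 0.83, matching the example of Figure 7;
# a negative k_over_r mirrors the curve, as described below for backward movement.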
Figure 6 shows an example of image element movements according to Eq. (3). The graph of Figure 6 shows an example where k/r = 0.2, i.e. the curved display is virtually moved one fifth closer to the viewer in the main viewing direction. The parameter m is selected as 100, which is large enough to obtain smooth output curves. The dashed line shows the original location of the image elements, whereas the solid line shows the location of the image elements after the image modification. It can be seen that, for example, at 0 and 180 degrees there is no movement, whereas at around 90 and 270 degrees (i.e. directly left and right from the main viewing direction) the movements of the image elements are at maximum.
Figure 7 shows another example depicting the magnification as a function of the original location of the image element. Again, k/r = 0.2. The dashed line shows the original magnification of the image elements, which equals 1. The solid line shows the magnification ratio M according to Eq. (4), i.e. the image element width in the new location vs. the element width in the original situation. The magnification is maximum (1.25) at 0 degrees (i.e. in the main viewing direction straight ahead from the viewer) and minimum (0.83) in the opposite direction. At around 90 and 270 degrees (left/right sides of the viewer), there is no magnification.
Herein, if k/r = -0.2, it would describe a situation where the viewer gives a user input in the backward direction, e.g. moves his/her head backwards, in order to create the virtual feeling of moving backwards in the 3D space. In that situation, the solid magnification line of Figure 7 would be a mirror image relative to the dashed line with magnification = 1.
Figure 8 shows a qualitative explanation of the above embodiments. In the original situation, such as in Figure 4a, the viewer sees the image elements around the sphere such that each image element has the same angular slot marked with evenly spaced black lines. After the virtual movement forward, the size of the image elements is modified such that the new element borders are marked with grey lines and black knobs. As a result of the modification, in the main direction selected by the viewer the image elements have stretched, the image elements just on the sides of the viewer have not changed their size but have moved slightly backwards, and the image elements just behind the viewer have been squeezed. Altogether, the viewer experiences an illusion of a slight movement forward as the images in the main viewing direction grow and the other image elements change their size accordingly.
The above equations (3) and (4) of the analysis apply for the case where the display is of circular shape around the viewer. If the shape of the display is different, for example oval or polygonal, the display shape needs to be projected onto a virtual circle inside the shape, whereupon the calculations are performed in relation to the virtual surface of the circle, and then the transformed elements are projected back to the original display form.
According to an embodiment, the at least one image modification parameter is at least one of: - a blurring strength of the image element; - a color tone strength of the image element; - a color selection of the image element.
Thus, alternatively or additionally to modifying the zoom ratio of the image elements as a function of an angle between the main viewing direction and the direction towards an image location deviating from the main viewing direction, the above embodiments may be applied to other image modification parameters, as well. For example, the blurring strength can be applied such that the image elements close to the main viewing direction are shown sharp in focus, whereas the image elements deviating from the main viewing direction are shown blurred (out of focus), and the blurring strength depends on the deviating angle according to a pre-defined formula. In a similar manner, the tone of the color (brighter/darker) may be adjusted in the image elements deviating from the main viewing direction. Moreover, the color of the image elements may change according to a predefined scheme (e.g. red-purple-blue) in the image elements deviating from the main viewing direction.
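As a hedged illustration of such an alternative parameter, a deviation-angle-to-blur mapping might look like the following sketch; the sharp cone width, the linear ramp and the maximum blur value are arbitrary choices, not taken from the disclosure:

def blur_strength(deviation_deg, sharp_cone_deg=15.0, max_sigma=5.0):
    """Map the angle between the main viewing direction and an image element to a blur
    strength (e.g. a Gaussian sigma): sharp within the cone around the main viewing
    direction, increasingly blurred towards the opposite direction."""
    d = abs((deviation_deg + 180.0) % 360.0 - 180.0)   # fold the deviation into 0..180 degrees
    if d <= sharp_cone_deg:
        return 0.0
    return max_sigma * (d - sharp_cone_deg) / (180.0 - sharp_cone_deg)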
In addition to video, the embodiments may be applied to still images, even in GIF format, presented on displays showing images which surround the viewer.
In addition to images, the embodiments may be applied to directional audio. For example, similarly to adjusting zoom ratio, the audio level may be raised in the direction where the image is enlarged.
According to an embodiment, two or more image modification parameters may be adjusted simultaneously.
The above embodiments have been mainly described in relation to cylindrical 2D 360 degree displays. Figure 9a shows the main direction vector r and the deviated direction vector s and the angle a between the vectors in a cylindrical display. However, the same principles may be applied also in spherical displays. Figure 9b shows the main direction vector r and the deviated direction vector s and the angle a between the vectors inside a spherical display.
According to an embodiment, the viewer inside a spherical display or wearing a 360 degree head-mounted display may be watching a virtual reality video where s/he is moving, e.g. flying, in direction r. As a response to a user input, the surrounding image elements may change: the view in front of the viewer may remain in focus and in the original color, possibly provided with magnification at the same time, while the views by the sides of the viewer, i.e. on left, right, top and below, may become blurry and change colors or color tones. This would provide the viewer with the feeling that s/he is moving through more and less blurry rings of different colors.
As becomes evident from the above, the various embodiments may be applied when a user views the immersive video content and interacts with it in real-time by applying the various image transformations (zooming, blurring etc.). Alternatively or additionally, the various embodiments may be applied in a post-production (e.g. editing) phase of the content. Herein, the editor (e.g. a person or a computer program) applies one or more of the image transformations to add desired effects to the captured footage. The footage with the transformations implemented may then be recorded for further use.
For 3D 360 degree displays, the image transformation procedure needs to be performed for the two display views, for the two eyes, separately. For 3D 360 degree displays, it is preferable to apply parallax correction procedures for enhancing the 3D effect by further adjusting the relative movement of foreground and background objects, thereby making the scene more realistic looking. Various parallax corrections are known, as such, in the zooming of 3D images.
For example, parallax motion effect may be achieved by embedding optical flow data from stereo images into the video stream and at the playback time slightly warping left and right eye images according to the viewport and optical flow data. The warping may be done so that the video quality or stereo effect does not degrade at all in the middle of the viewport, which is the area that may be critical for the perceived video quality. Instead the video quality and stereo effect may get degraded gradually towards the edges of the viewport, but human stereo vision ability is naturally already reduced in that direction.
In addition to the illusory linear movement of the viewer in the 3D (x, y, z) space, the image transformation may relate to creating the feeling of image rotation around one of the axes in the (x, y, z) space. For example, the viewer may look straight ahead at one point on the horizon and start to see the horizon of the image shake around that point.
The above embodiments may provide various advantages. The feeling of immersiveness and interactivity is increased for filmed virtual reality (VR) content, which usually has image content wherein the viewer cannot move around and has no interaction possibility. The image modification algorithm is simple with a short processing time, thereby enabling the transformation to be performed even on live streaming VR content.
In general, the various embodiments may be implemented in hardware or special purpose circuits or any combination thereof. While various embodiments may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the inventions may be practiced in various components such as integrated circuit modules. A skilled man appreciates that any of the embodiments described above may be implemented as a combination with one or more of the other embodiments, unless there is explicitly or implicitly stated that certain embodiments are only alternatives to each other.
The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. Thus, the implementation may include a computer readable storage medium stored with code thereon for use by an apparatus, which when executed by a processor, causes the apparatus to perform the various embodiments or a subset of them. Additionally or alternatively, the implementation may include a computer program embodied on a non-transitory computer readable medium, the computer program comprising instructions causing, when executed on at least one processor, at least one apparatus to perform the various embodiments or a subset of them. For example, an apparatus may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the terminal device to carry out the features of an embodiment.
The above-presented embodiments are not limiting, but they can be modified within the scope of the appended claims.

Claims (30)

Claims:
  1. A method comprising: obtaining, by an immersive display unit, at least one image to be displayed on the immersive display unit; obtaining an input indicating a main viewing direction of a user of the immersive display unit; determining a first value for at least one image modification parameter for an image element residing in the main viewing direction; determining a second value for the at least one image modification parameter for image elements deviating from said main viewing direction, the second value being a function of an angle between the main viewing direction and a direction from the user towards an image location deviating from the main viewing direction; and displaying the at least one image according to the determined values of the at least one image modification parameter on the immersive display unit.
  2. The method according to claim 1, wherein the input is determined based on a head movement of the user.
  3. The method according to claim 1, wherein the input is obtained via a user interface of the immersive display unit.
  4. The method according to claim 1, wherein the input is obtained automatically as a response to the user positioning within the immersive display unit.
  5. The method according to any preceding claim, wherein the input further indicates strength of the input in the main viewing direction.
  6. The method according to any preceding claim, wherein the at least one image modification parameter is a zoom ratio of the image element.
  7. The method according to claim 6, wherein the zoom ratio of the nth equally spaced image element around a 360 degree display circle is calculated as:
    M(n) = (a'(n+1) - a'(n)) / (a(n+1) - a(n)) = (a'(n+1) - a'(n)) / (360°/m),
    where m is the total number of image elements of equal width around a 360 degree display circle, a(n) is the angle deviating from the main viewing direction, corresponding to the nth image element before zooming, and a'(n) is the angle of a modified nth image element after zooming, a'(n) being a function of a(n) and the user input value for image magnification in the main viewing direction.
  8. The method according to any preceding claim, wherein the at least one image modification parameter is at least one of: - a blurring strength of the image element; - a color tone strength of the image element; - a color selection of the image element.
  9. The method according to any preceding claim, wherein the immersive display unit is a head mounted device (HMD) with head tracking system or a curved display at least partly surrounding the head of the user.
  10. The method according to any of the preceding claims, wherein the at least one image comprises video image data.
  11. A computer program embodied on a non-transitory computer readable medium, the computer program comprising instructions causing, when executed on at least one processor, at least one apparatus to: obtain, by an immersive display unit, at least one image to be displayed on the immersive display unit; obtain an input indicating a main viewing direction of a user of the immersive display unit; determine a first value for at least one image modification parameter for an image element residing in the main viewing direction; determine a second value for the at least one image modification parameter for image elements deviating from said main viewing direction, the second value being a function of an angle between the main viewing direction and a direction from the user towards an image location deviating from the main viewing direction; and display the at least one image according to the determined values of the at least one image modification parameter on the immersive display unit.
  12. The computer program according to claim 11, further comprising instructions causing the at least one apparatus to determine the input based on a head movement of the user.
  13. The computer program according to claim 11, further comprising instructions causing the at least one apparatus to obtain the input via a user interface of the immersive display unit.
  14. The computer program according to claim 11, further comprising instructions causing the at least one apparatus to obtain the input automatically as a response to the user positioning within the immersive display unit.
  15. The computer program according to any of claims 11 - 14, further comprising instructions causing the at least one apparatus to obtain, from the input, an indication of the strength of the input in the main viewing direction.
  16. The computer program according to any of claims 11 - 15, wherein the at least one image modification parameter is a zoom ratio of the image element.
  17. The computer program according to claim 16, further comprising instructions causing the at least one apparatus to calculate the zoom ratio of the nth equally spaced image element around a 360 degree display circle as:
    M(n) = (a'(n+1) - a'(n)) / (a(n+1) - a(n)) = (a'(n+1) - a'(n)) / (360°/m),
    where m is the total number of image elements of equal width around a 360 degree display circle, a(n) is the angle deviating from the main viewing direction, corresponding to the nth image element before zooming, and a'(n) is the angle of a modified nth image element after zooming, a'(n) being a function of a(n) and the user input value for image magnification in the main viewing direction.
  18. The computer program according to any of claims 11 - 17, wherein the at least one image modification parameter is at least one of: - a blurring strength of the image element; - a color tone strength of the image element; - a color selection of the image element.
  19. The computer program according to any of claims 11 - 18, wherein the immersive display unit is a head mounted device (HMD) with head tracking system or a curved display at least partly surrounding the head of the user.
  20. The computer program according to any of claims 11 - 19, wherein the at least one image comprises video image data.
  21. An apparatus comprising an immersive display unit comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to at least: obtain at least one image to be displayed on the immersive display unit; obtain an input indicating a main viewing direction of a user of the immersive display unit; determine a first value for at least one image modification parameter for an image element residing in the main viewing direction; determine a second value for the at least one image modification parameter for image elements deviating from said main viewing direction, the second value being a function of an angle between the main viewing direction and a direction from the user towards an image location deviating from the main viewing direction; and display the at least one image according to the determined values of the at least one image modification parameter on the immersive display unit.
  22. The apparatus according to claim 21, further comprising computer program code configured to, with the at least one processor, cause the apparatus to determine the input based on a head movement of the user.
  23. The apparatus according to claim 21, further comprising computer program code configured to, with the at least one processor, cause the apparatus to obtain the input via a user interface of the immersive display unit.
  24. The apparatus according to claim 21, further comprising computer program code configured to, with the at least one processor, cause the apparatus to obtain the input automatically as a response to the user positioning within the immersive display unit.
  25. The apparatus according to any of claims 21 - 24, wherein the input further indicates strength of the input in the main viewing direction.
  26. The apparatus according to any of claims 21 - 25, wherein the at least one image modification parameter is a zoom ratio of the image element.
  27. The apparatus according to claim 26, wherein the zoom ratio of the nth equally spaced image element around a 360 degree display circle is calculated as:
    M(n) = (a'(n+1) - a'(n)) / (a(n+1) - a(n)) = (a'(n+1) - a'(n)) / (360°/m),
    where m is the total number of image elements of equal width around a 360 degree display circle, a(n) is the angle deviating from the main viewing direction, corresponding to the nth image element before zooming, and a'(n) is the angle of a modified nth image element after zooming, a'(n) being a function of a(n) and the user input value for image magnification in the main viewing direction.
  28. The apparatus according to any of claims 21 - 27, wherein the at least one image modification parameter is at least one of: - a blurring strength of the image element; - a color tone strength of the image element; - a color selection of the image element.
  29. The apparatus according to any of claims 21 - 28, wherein the immersive display unit is a head mounted device (HMD) with head tracking system or a curved display at least partly surrounding the head of the user.
  30. The apparatus according to any of claims 21 - 29, wherein the at least one image comprises video image data.
GB1602903.5A 2016-02-19 2016-02-19 A method for image transformation Expired - Fee Related GB2548080B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1602903.5A GB2548080B (en) 2016-02-19 2016-02-19 A method for image transformation
PCT/IB2017/050688 WO2017141139A1 (en) 2016-02-19 2017-02-08 A method for image transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1602903.5A GB2548080B (en) 2016-02-19 2016-02-19 A method for image transformation

Publications (3)

Publication Number Publication Date
GB201602903D0 GB201602903D0 (en) 2016-04-06
GB2548080A true GB2548080A (en) 2017-09-13
GB2548080B GB2548080B (en) 2021-07-14

Family

ID=55752888

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1602903.5A Expired - Fee Related GB2548080B (en) 2016-02-19 2016-02-19 A method for image transformation

Country Status (2)

Country Link
GB (1) GB2548080B (en)
WO (1) WO2017141139A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102019108999B4 (en) * 2019-04-05 2020-12-31 Psholix Ag Process for the immersive display of stereoscopic images and image sequences

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050156817A1 (en) * 2002-08-30 2005-07-21 Olympus Corporation Head-mounted display system and method for processing images

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1373967A2 (en) * 2000-06-06 2004-01-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. The extended virtual table: an optical extension for table-like projection systems
US7030876B2 (en) * 2002-12-04 2006-04-18 Hewlett-Packard Development Company, L.P. System and method for three-dimensional imaging
US9955209B2 (en) * 2010-04-14 2018-04-24 Alcatel-Lucent Usa Inc. Immersive viewer, a method of providing scenes on a display and an immersive viewing system
US10389992B2 (en) * 2014-08-05 2019-08-20 Utherverse Digital Inc. Immersive display and method of operating immersive display for real-world object alert
US9984505B2 (en) * 2014-09-30 2018-05-29 Sony Interactive Entertainment Inc. Display of text information on a head-mounted display
US9997199B2 (en) * 2014-12-05 2018-06-12 Warner Bros. Entertainment Inc. Immersive virtual reality production and playback for storytelling content
CN106303404A (en) * 2015-06-29 2017-01-04 北京智谷睿拓技术服务有限公司 Visual content transfer control method, sending method and device thereof
CN106293043B (en) * 2015-06-29 2023-11-10 北京智谷睿拓技术服务有限公司 Visual content transmission control method, transmission method and device thereof


Also Published As

Publication number Publication date
GB201602903D0 (en) 2016-04-06
GB2548080B (en) 2021-07-14
WO2017141139A1 (en) 2017-08-24

Similar Documents

Publication Publication Date Title
US11575876B2 (en) Stereo viewing
US10096157B2 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US11010958B2 (en) Method and system for generating an image of a subject in a scene
KR102535947B1 (en) Apparatus and method for generating images
US9118894B2 (en) Image processing apparatus and image processing method for shifting parallax images
Thatte et al. Depth augmented stereo panorama for cinematic virtual reality with head-motion parallax
KR102531767B1 (en) Apparatus and method for generating images
WO2019043025A1 (en) Zooming an omnidirectional image or video
WO2020166376A1 (en) Image processing device, image processing method, and program
US11099392B2 (en) Stabilized and tracked enhanced reality images
KR20200128661A (en) Apparatus and method for generating a view image
WO2017141139A1 (en) A method for image transformation
WO2009109804A1 (en) Method and apparatus for image processing
CN113632458A (en) System, algorithm and design for wide angle camera perspective experience
JP7377861B2 (en) Image generation device and method

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20211014