GB2587010A - A method for generating and modifying non-linear perspective images of a 3D scene - Google Patents

A method for generating and modifying non-linear perspective images of a 3D scene

Info

Publication number
GB2587010A
Authority
GB
United Kingdom
Prior art keywords
image
scene
point
previous
viewport
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1913191.1A
Other versions
GB201913191D0 (en)
GB2587010B (en)
Inventor
Robert Christian Pepperell
Alistair Henry Joel Burleigh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fovo Technology Ltd
Original Assignee
Fovo Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fovo Technology Ltd filed Critical Fovo Technology Ltd
Priority to GB1913191.1A priority Critical patent/GB2587010B/en
Publication of GB201913191D0 publication Critical patent/GB201913191D0/en
Priority to PCT/GB2020/052100 priority patent/WO2021048525A1/en
Publication of GB2587010A publication Critical patent/GB2587010A/en
Application granted granted Critical
Publication of GB2587010B publication Critical patent/GB2587010B/en
Legal status: Active (granted)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/06 - Ray-tracing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method of generating and modifying a non-linear perspective 2D image of a 3D scene is described. The method first processes an image of a 3D scene to generate a first set of data points representative of the 3D scene and the 3D objects within it, then defines a point of observation (PoO) within the 3D scene comprising a viewing point (VP) and a viewing direction (VD). Next, a 2D viewport comprising a pixel array is defined and converted into a hemispherical viewport, in which each pixel has a location defined in hemispherical coordinates, the centre of the hemisphere is located at the viewing point, and the pole of the hemisphere is aligned with the viewing direction (VD). For each pixel of the hemispherical viewport, a probe ray is cast from the point of observation (PoO) through the pixel into the 3D scene to detect an intersection, and each intersected 3D data point is converted into a second set of data points in which each data point is expressed relative to the point of observation (PoO). The point of observation (PoO) is then modified, per pixel, according to the second set of data points and one or more image control functions. A trace ray is then cast from the modified point of observation (MPoO) and followed for a variable number of reflections/intersections. A final colour for each pixel is determined from the trace ray and written into a modified viewport, and the modified viewport, representative of a modified 2D image of the 3D scene, is rendered as a 2D image on a display.

Description

A METHOD FOR GENERATING AND MODIFYING NON-LINEAR PERSPECTIVE IMAGES OF A 3D SCENE

The present invention is directed to a method for generating and modifying a 2D image of a 3D scene that uses geometric information sampled from the 3D scene to control properties of the output 2D image projection. More specifically, the method provides control over the layout and distribution of objects and space in the scene in order to create non-linear perspective images that can be manipulated in novel ways by users of devices embodying the invention.
Images of actual or imaginary 3D scenes are increasingly being generated, stored and modified using computer graphics systems. A 3D data set representing a real physical environment or imaginary scene may be captured or created with technologies including but not limited to optical cameras, optical camera arrays, image based 3D scanners, infrared scanners, lightfield cameras, photogrammetry systems, and laser scanners. A 3D data set representative of a real or imagined space can be generated using 3D modelling and design software. A 3D data set representative of an augmented or mixed reality can be created by combining measurements or optical capture of real physical space with a computer-generated environment. Once captured or generated the data representing a 3D scene are typically stored in the computer system as a set of 3D coordinates with values attached to each coordinate such as a colour, texture, or mapping values. To edit or view the images, the computer system converts this 3D data into an array of pixels that are displayed on a monitor, projector, or similar device usually via a graphical processing unit.
There are various methods used in computer graphics to convert the 3D data into a 2D array of pixels that are viewed on a display device. The two most common methods are rasterisation and ray tracing. Both methods rely entirely, or to a great extent, on the projective geometry of linear perspective, or variations on this geometry such as curvilinear perspective, in order to convert 3D data representing a scene into a 2D image on a display in a way that represents a 3D scene in a 2D image.
Many existing 2D or 3D software packages provide tools or mechanisms for modifying features of the image, such as parameters of the projective geometry used in the image, in order to satisfy various user requirements. For example, a typical 3D computer game or architectural visualisation software package will offer the user control over the horizontal field of view (FOV) in which the game scene or space is presented on the display. In this case, increasing the FOV may be required in order to display more of the 3D scene. To give another example, a user may wish to enlarge a certain portion of the image in order to see the information within it at a higher pixel resolution. To give a further example, a designer of an image, or a viewer of an image, may wish to modify the visual properties of the image in such a way that it better, or more naturally, represents the physical 3D world in the way it is commonly perceived by humans.
However, there are a number of problems and limitations with current methods of modifying the appearance of images of 3D scenes in devices that rely on the projective geometry of linear perspective to map 3D data to a 2D image. The projective geometry of linear perspective, or other forms of projection based on linear-projected light rays, imposes a strictly defined relationship between the relative sizes of objects in the scene and their distance from the virtual pinhole through which they are imaged. This projective geometry effectively determines many properties of the resulting 2D image, including the size and position of objects in the scene, the occlusion paths by which objects are overlaid on each other, the light paths for shadows, reflections, highlights and other luminance and texture properties, and the dimensions of the FOV. These determined properties can then limit the ability of the person creating the image, or modifying the image, or viewing the image to adjust these properties in a visually satisfactory way, should it be necessary or desirable to do so.
A number of methods are available for modifying these properties in order to generate a different view of the 3D scene that overcomes some of these limitations. In optical devices such as cameras and light sensing devices, different lenses, lens filters, mirrors, or arrangements of lenses and light refractive, light reflective or light sensitive materials, can be used. In 3D computer rendering devices these optical modifications can be emulated using non-linear perspective projections, including fisheye, orthographic, equirectangular or other map projections, and cylindrical projections, all of which can be currently implemented in computer graphics systems. The major limitation of these optical and computational modification methods based on the linear projection of light rays is that they tend to produce various kinds of extreme or unnatural distortions of the objects and space being depicted, and as a result are not widely adopted in graphics systems or by consumers as they are generally considered not representative of 3D space as it is naturally perceived by humans. In addition, a wide-angle FOV by nature depicts more of a given scene than a narrow field of view as there is less opportunity for culling geometry that sits outside the viewing frame. As such a wide field of view of a given scene is traditionally computationally more expensive to render than a narrow FOV of the same scene.
To overcome some of these limitations, image designers can employ a number of commonly used post-production software tools to arbitrarily warp and manipulate a 2D image of a 3D scene in order to improve image layout according to parameters that may be desirable to end users. For example, a cinematographic special effects designer may use such a tool to resize portions of a moving image, to convert an image from one aspect ratio to another, or to produce special effects such as a specific distortion or warp to a video layer. Where such systems rely on the manipulation of captured or generated 2D image coordinates, there is a limit to the adjustments that can be applied before unpleasant or unnatural appearing artefacts are introduced. Moreover, 3D depth information in the scene is generally not available from 2D images, and this imposes limits on the extent to which the 3D depth properties of the scene can be modified.
The occlusion of objects in a 3D scene when viewed in a 2D image generated by linear perspective or related projective geometries is determined by the convergence of linear light rays through the pinhole of the camera that captured the image, meaning that it is not possible to display areas of a scene, or portions of objects in the scene, which are occluded by interposed objects. Consequently, designers are limited in the extent to which they can modify a 2D image, or moving image, since they cannot independently modify layers of depth in the depicted scene without costly image processing.
A further limitation of many optical and computer graphics systems is that the geometry of the images they display is not automatically modified in response to the physical actions of the viewer in the real world. For example, if the viewer physically moves closer to the display screen they might expect the structure of the scene to change to allow them to see elements of the scene more clearly. Or if the viewer is actively navigating through a represented 3D space, as opposed to passively viewing it, they might require the geometry of the scene to be presented to them differently. Systems exist that allow viewers to reposition a device in order to simulate scanning around a 3D space or object, including virtual reality headsets, 360° videos, and structure-from-motion software, or to adjust properties of the image in other ways. But such systems, in general, still rely on linear perspective projective geometry, or some other method based on the linear projection of light rays, by default to display the image, and so suffer the same limitations noted above.
A method of generating and modifying a non-linear perspective 2D image of a 3D scene, the method including the steps:
- processing an image of a 3D scene to generate a first set of data points representative of the 3D scene and 3D objects within the scene;
- defining a point of observation (PoO) within the 3D scene, comprising a viewing point (VP) and viewing direction (VD);
- defining a 2D viewport comprising a pixel array;
- converting the 2D viewport into a hemispherical viewport, wherein each pixel comprises a location defined in hemispherical coordinates, the centre of the hemisphere is located at the viewing point, and the pole of the hemisphere is aligned with the viewing direction (VD);
- for each pixel of the hemispherical viewport, casting a probe ray from the point of observation (PoO) via the pixel into the 3D scene to detect an intersection;
- for each intersection, converting the 3D data point of the intersection into a second set of data points wherein each data point is relative to the point of observation (PoO);
- modifying the point of observation (PoO) according to the second set and one or more image control functions;
- for each pixel of the hemispherical viewport, casting a trace ray from a modified point of observation (MPoO);
- reflecting and detecting the trace ray for a variable number of reflections/intersections;
- determining a final colour for each pixel of the viewport from the trace ray into a modified viewport;
- rendering the modified viewport, representative of a modified 2D image of the 3D scene, as a 2D image on a display.
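By way of illustration only, the following Python sketch outlines the above pipeline at a very high level. The single-sphere intersect_scene stub and the image_control offset are placeholders introduced for this example; they are not the patented implementation, but they show how a probe ray, a per-pixel modified origin, and a trace ray fit together.

```python
import numpy as np

def pixel_to_direction(u, v, fov_h=np.pi, fov_v=np.pi):
    """Map a pixel position in [0, 1]^2 to a direction on a hemisphere whose pole is +Z."""
    lon = (u - 0.5) * fov_h   # longitude, -90..+90 degrees when fov_h = pi
    lat = (v - 0.5) * fov_v   # latitude
    return np.array([np.sin(lon) * np.cos(lat),
                     np.sin(lat),
                     np.cos(lon) * np.cos(lat)])

def intersect_scene(origin, direction):
    """Stub scene: a unit sphere centred at (0, 0, 5); returns the world hit point or None."""
    centre = np.array([0.0, 0.0, 5.0])
    oc = origin - centre
    b = np.dot(oc, direction)
    disc = b * b - (np.dot(oc, oc) - 1.0)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return origin + t * direction if t > 0.0 else None

def image_control(hit_relative, strength=0.2):
    """Example control function: offset the ray origin by a fraction of the hit's lateral offset."""
    return strength * np.array([hit_relative[0], hit_relative[1], 0.0])

def render(width, height, poo=np.zeros(3)):
    image = np.zeros((height, width, 3))
    for j in range(height):
        for i in range(width):
            d = pixel_to_direction((i + 0.5) / width, (j + 0.5) / height)
            hit = intersect_scene(poo, d)                   # probe ray
            if hit is None:
                continue
            hit_relative = hit - poo                        # second data set, relative to PoO
            mpoo = poo + image_control(hit_relative)        # modified point of observation
            # trace ray from the per-pixel modified origin; here it only tests visibility
            image[j, i] = (0.2, 0.4, 0.9) if intersect_scene(mpoo, d) is not None else (0.0, 0.0, 0.0)
    return image

img = render(64, 32)
```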
In an embodiment the first set of data points represent a world coordinate system and the second set of data points represent a camera coordinate system.
In an embodiment a value for the longitude, latitude and theta is calculated for each intersection in relation to the point of observation (PoO).
In an embodiment the method further comprises the step of modifying the modified viewport according to the second set and/or one or more image control functions.
In an embodiment the method further comprises the step of converting the modified viewport into a 2D pixel array.
In an embodiment, the set of data points representing the 3D scene comprises spatial coordinates, colour, texture, surface mapping data, motion data, object identification data, or other data necessary to represent a 3D scene.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the vertical field of view of the scene represented by the image, ranging from any angle that is greater than 0° to any angle that is less than 360°.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the horizontal field of view of the scene represented by the image, ranging from any angle that is greater than 0° to any angle that is less than 360°.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the size of regions or objects located in the centre of the image relative to those located at the edge of the image, ranging from any value that is greater than 0% to any value that is less than 1000% of actual size.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the size of regions or objects located at the edge of the image relative to those located at the centre of the image, ranging from any value that is greater than 0% to any value that is less than 1000% of actual size.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the amount of vertical curvature in the image from 0, where all vertical lines that are straight in the scene appear straight in the image, to 100 where all vertical lines that are straight in the scene appear as sections of circles in the image.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the amount of horizontal curvature in the image from 0, where all horizontal lines that are straight in the scene appear straight in the image, to 100 where all horizontal lines that are straight in the scene appear as sections of circles in the image.
In an embodiment, the image control functions can be applied individually or in combination to increase or decrease the amount of straightness or curvature in the image as a function of depth, from 0 where the straightness of objects or regions in the image increases with depth, to 100 where the curvature of objects or regions in the image increases with depth.
In an embodiment, the image control functions have one or more definable parameters.
In an embodiment, the definable parameters can be set, via a user control interface.
In an embodiment, the user control interface consists of a set of manually operated sliders or knobs, either mechanical or electronic in form.
In an embodiment, the user control interface consists of one or more sensors, which detect user eye motion and position, head motion and position, and body motion and position relative to the display, in order to alter the parameters in real time.
In an embodiment, the definable parameters are predetermined, to generate certain characteristics.
In an embodiment, an additional display is provided on which a linear perspective projection of the scene is presented at the same time as the modified view of the scene is generated.
The invention may be performed in various ways and embodiments thereof will now be described, by way of example only, reference being made to the accompanying drawings, in which:
Figure 1 is a flow chart of the present method embodied in a computer device;
Figure 2 is a schematic diagram of a simplified approach to ray tracing;
Figure 3 is a schematic diagram showing the conversion of a 2D Cartesian coordinate system into a spherical coordinate system;
Figure 4 is a schematic diagram showing how the spherical coordinate system is used as a direction vector;
Figure 5 is a schematic diagram showing how the initial world space position returned from the probe ray is converted;
Figure 6 is a schematic diagram showing how the relative view space coordinate is calculated;
Figure 7 is a schematic diagram showing how the spatial geometric information returned by the probe ray per pixel is used to iteratively adjust the origin and direction of a second ray;
Figure 8 is a schematic diagram showing how the probe ray and trace ray combine to produce an image result in the format of an equirectangular image;
Figure 9 is a schematic diagram showing how the resultant loosely equirectangular image is converted into another 2D Cartesian coordinate system;
Figure 10 is a schematic diagram showing the produced image mapped onto a 2D Cartesian grid;
Figure 11 is a schematic diagram showing the produced image on the 2D Cartesian grid being adjusted;
Figure 12 is a schematic diagram of the user control interface; and
Figure 13 is a schematic diagram of the user sensor inputs.
Referring to Figure 1 of the drawings, there is shown an embodiment of the present method 100. There is shown a First Memory Storage Device 110 which holds 3D data 111 representing the 3D scene and objects in the scene, which exists in the form of pixels, voxels, point coordinates, colours, texture information, or other such data required to represent the scene. Data is retrieved from the First Memory Storage Device 110 and passed to the Central Processing Unit 112. Using known computational techniques, for example matrices, the data is passed to the Graphics Processing Unit 114 in such a way that its values can subsequently be transformed via a series of functions 115, each specified in the method. Each function 115 in the present method produces a set of 3D data points that are defined to take a path and trajectory through world space in order to intersect a projection surface (e.g. computationally modelled light rays). The intersection points (or convergence points) form the basis of an image to be projected 116, where the 3D data points must be flattened so that a display device can render the image in 2D, via a suitable Graphics Processing Unit 114, to a First Image Display Device 117.
The step of processing an image of a 3D scene to generate a set of data points representative of the 3D scene and 3D objects within the scene may include: capturing the data of a 3D model such as a computer aided design (CAD) model or drawing; or capturing data from a moving 3D scene such as may be generated in a computer game. The method may process snapshots of a model for post-production effects or in real time.
In the present method, a probe ray is fired into the scene from the point of observation (PoO). The probe ray samples the geometry in the scene and returns, for every pixel, a collision point with a 3D world coordinate, including depth.
The direction and origin point of a second, trace ray used to sample pixel colour are controlled by the 3D coordinates returned by the probe ray, through a set of user-manipulated image control functions that allow the image to be adjusted in a novel, non-linear, geometry-sensitive and depth-based manner.
The 3D coordinates returned by the probe ray also control a second set of image control parameters attached to a two-dimensional image re-projection.
The functions described in the invention allow the user of a device embodying this process to generate an infinitely variable number of projections.
The functions may be mathematical conversions. These functions 115 are transformed according to algorithmic image modification parameters 121, which may be preconfigured for multiple types of outcome, set by a user of the device embodying the method, or set in response to data from the User Control Interface 1000 and Sensor Inputs 1100, and the result is rendered to the First Image Display Device 117 via an orthogonal projection.
The image modification parameters 121 can be stored as numerical data or algorithms in a Second Memory Storage Device 120 and retrieved as necessary by the GPU 114, or saved to the Second Memory Storage Device 120. The diagram shows how the User Control Interface 1000 and User Sensor Inputs 1100 are configured in relation to the rest of the device embodying the method. Settings from the User Control Interface 1000 are passed to the Graphics Processing Unit 114 in order to modify the functions 115 specified in the present method. Settings from the User Sensor Inputs 1100 are also passed to the Graphics Processing Unit 114 in order to modify the functions 115 specified in the present method 100.
Referring to Figures 2 and 3 of the drawings, there are shown schematic diagrams of the standard simplified approach to ray tracing using a single ray origin, also known as a point of observation (PoO), linear rays (10), an image (12), and a 2D Cartesian grid (14) representative of each ray direction per screen pixel. A 3D model (12) and a viewing position (PoO) from which to "look" at it are shown. A "square" array (14) of pixels is placed in front of the viewpoint, in this example at a distance of one unit.
Referring to Figure 4 of the drawings, there is shown a schematic diagram of the conversion of a 2D Cartesian coordinate system into a hemispherical coordinate system (15) where every coordinate is representative of a ray direction per screen pixel, in relation to a position and direction of a view-point in 3D space. Using longitude and latitude values, with left being -90 and right being +90 degrees, the pixels are wrapped around the viewpoint as a hemisphere; the hemisphere is then rotated according to the view direction.
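By way of illustration, a minimal Python sketch of this pixel-to-hemisphere mapping is given below, assuming a rectangular pixel array spanning -90° to +90° in both longitude and latitude and a view direction expressed as yaw and pitch; the exact wrapping and rotation used in practice are design choices of the implementer.

```python
import numpy as np

def pixel_to_hemisphere(i, j, width, height):
    """Longitude/latitude in [-90, +90] degrees across the pixel array, pole on the +Z axis."""
    lon = np.radians((i + 0.5) / width * 180.0 - 90.0)
    lat = np.radians((j + 0.5) / height * 180.0 - 90.0)
    return np.array([np.sin(lon) * np.cos(lat),
                     np.sin(lat),
                     np.cos(lon) * np.cos(lat)])

def rotate_to_view(direction, yaw, pitch):
    """Rotate the hemisphere so that its pole follows the viewing direction (VD)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    r_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return r_yaw @ r_pitch @ direction

ray_dir = rotate_to_view(pixel_to_hemisphere(10, 20, 640, 480), yaw=0.3, pitch=0.0)
```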
Referring to Figure 5 of the drawings, there is shown a schematic diagram of how the spherical coordinate system (15) is used as a direction vector with which to calculate an intersection point for every probe ray (16) in an overall world space coordinate system. This step can be described as the probe ray. The probe ray is fired from an origin (PoO) through the pixel mesh for every pixel to get a collision point back if the ray hits something, thereby extracting a 3D world position including depth from the hit point.
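The probe-ray step can be sketched as follows, assuming a hypothetical scene consisting of a single ground plane; a real renderer would query its own geometry or acceleration structure instead, but the return value is the same kind of data: a world-space hit point plus depth.

```python
import numpy as np

def probe_ray(poo, direction, plane_y=-1.0):
    """Intersect the ray with the plane y = plane_y; return (world hit point, depth) or None."""
    if abs(direction[1]) < 1e-8:
        return None                      # ray parallel to the plane: no hit
    t = (plane_y - poo[1]) / direction[1]
    if t <= 0.0:
        return None                      # hit is behind the point of observation
    return poo + t * direction, t        # 3D world position, plus depth along the ray

hit = probe_ray(np.zeros(3), np.array([0.0, -0.5, 0.866]))
```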
Referring to Figure 6 of the drawings, there is shown a schematic diagram of how the initial world space position returned from the probe ray (16) is converted into a new value relative to a view space coordinate system on three axes. The world position of the hit points/intersections is converted into a camera-relative coordinate position and direction (x, y, z).
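A minimal sketch of this world-to-view conversion is given below, assuming the viewpoint (VP) is supplied as a position and the viewing direction (VD) as a forward vector; the choice of up vector and handedness is an assumption of the example.

```python
import numpy as np

def world_to_view(hit_world, vp, forward, up=np.array([0.0, 1.0, 0.0])):
    """Express a world-space hit point as (x, y, z) relative to the viewpoint and view direction."""
    fwd = forward / np.linalg.norm(forward)
    right = np.cross(up, fwd)
    right = right / np.linalg.norm(right)
    true_up = np.cross(fwd, right)
    rel = hit_world - vp
    return np.array([np.dot(rel, right), np.dot(rel, true_up), np.dot(rel, fwd)])

p_view = world_to_view(np.array([2.0, 1.0, 6.0]),
                       vp=np.zeros(3),
                       forward=np.array([0.0, 0.0, 1.0]))
```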
Referring to Figure 7 of the drawings, there is shown a schematic diagram of how the relative view space coordinate is also calculated in terms of longitude, latitude, and theta, with the forward direction of the view point being taken as 0 degrees on the horizontal and vertical axes.
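The following sketch shows one way such angles could be computed from a view-space point; the text does not define theta exactly, so here it is taken to be the total angle off the view axis, which is an assumption of the example.

```python
import numpy as np

def angles_from_view(p_view):
    """Longitude, latitude and theta (degrees) of a view-space point, with 0 degrees straight ahead."""
    x, y, z = p_view
    lon = np.degrees(np.arctan2(x, z))                         # horizontal angle from the view direction
    lat = np.degrees(np.arctan2(y, np.hypot(x, z)))            # vertical angle from the view direction
    theta = np.degrees(np.arccos(z / np.linalg.norm(p_view)))  # total angle off the view axis (assumed)
    return lon, lat, theta

lon, lat, theta = angles_from_view(np.array([1.0, 0.5, 4.0]))
```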
Referring to Figure 8 of the drawings, there is shown a schematic diagram of how the spatial geometric information returned by the probe ray (16) per pixel is used to iteratively adjust the origin and direction of a second ray, the trace ray (17), per pixel in order to work out a final colour value of the pixel. This step can be described as the trace ray and, as with a conventional ray tracing approach, can be set to continue on into the scene for a set number of bounces in order to calculate standard material properties such as reflections. The trace ray is fired from a modified point of origin (MPoO) for each pixel; each pixel can have a different MPoO, the modification being based on the camera coordinate position from the previous step.
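By way of illustration only, the sketch below derives a per-pixel modified point of observation (MPoO) from the view-space hit position and casts the trace ray from it. The depth-weighted lateral offset is an example control function invented for this sketch, and the trace is a placeholder; neither is the function actually used by the method.

```python
import numpy as np

def modified_origin(poo, p_view, view_right, view_up, strength=0.1):
    """Shift the ray origin in proportion to how far off-axis, and how deep, the probe hit was."""
    x, y, z = p_view
    lateral = np.arctan2(x, z)      # horizontal angle of the hit relative to the view direction
    vertical = np.arctan2(y, z)     # vertical angle of the hit
    return poo + strength * z * (lateral * view_right + vertical * view_up)

def trace_ray(mpoo, direction, max_bounces=2):
    """Placeholder trace: a real renderer would intersect the scene and accumulate shading here."""
    return np.array([0.5, 0.5, 0.5])   # flat grey stands in for the shaded pixel colour

mpoo = modified_origin(np.zeros(3), np.array([1.0, 0.2, 5.0]),
                       view_right=np.array([1.0, 0.0, 0.0]),
                       view_up=np.array([0.0, 1.0, 0.0]))
colour = trace_ray(mpoo, np.array([0.0, 0.0, 1.0]))
```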
Referring to Figure 9 of the drawings, there is shown a schematic diagram of how the probe ray (16) and trace ray (17) combine to produce an image result in the format of an equirectangular image, but with non-linear adjustments already rendered into the image based on the geometry adaptive trace ray origin and direction. The trace ray defines the colour of the pixel and may require multiple bounces/reflections to attain shadows, reflections and lighting.
Referring to Figure 10, there is shown a schematic diagram of how the resultant loosely equirectangular image is converted into another 2D Cartesian grid, and Figure 11 shows the 2D image being adjusted according to parameters set by the user or designer of the device embodying the present method. The information returned from the probe ray can again be applied at this level to perform novel geometry or depth adaptive image effects.
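A minimal sketch of such a re-projection stage is given below, assuming a simple centre-magnification control as the adjustable parameter; the actual geometry- and depth-adaptive effects applied at this stage are not reproduced here.

```python
import numpy as np

def remap(equirect, out_w, out_h, centre_scale=1.0):
    """Resample the intermediate image onto a Cartesian grid, magnifying the centre when centre_scale > 1."""
    in_h, in_w, _ = equirect.shape
    out = np.zeros((out_h, out_w, 3), dtype=equirect.dtype)
    for j in range(out_h):
        for i in range(out_w):
            u = (i + 0.5) / out_w - 0.5
            v = (j + 0.5) / out_h - 0.5
            blend = min(np.hypot(u, v) * 2.0, 1.0)                 # 0 at the centre, 1 at the edge
            scale = 1.0 / (centre_scale * (1.0 - blend) + blend)   # sample closer to the centre when magnifying
            src_i = int(np.clip((u * scale + 0.5) * in_w, 0, in_w - 1))
            src_j = int(np.clip((v * scale + 0.5) * in_h, 0, in_h - 1))
            out[j, i] = equirect[src_j, src_i]
    return out

out_image = remap(np.random.rand(90, 180, 3), 160, 90, centre_scale=1.5)
```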
Referring to Figure 12 of the drawings, there is shown a User Control Interface 1000 for use with an embodiment of the present method, in which a series of control sliders are preprogrammed to modify the functions disclosed in the present method and so applied to the 3D data in order to transform its appearance in the Image Display Device 117, the parameters of which can be modified freely via the User Control Interface.
In one embodiment of the method, the sliders are preprogrammed to control a series of geometrical transformations of the spatial structure of the represented 3D scene using the functions and parameters defined in the method. By way of illustration, slider A controls the amount of curvature in the vertical axis of the image using a suitable mathematical algorithm, with 0 being no curvature such that all vertical lines that are straight in the scene appear straight in the image, and 100 being full curvature, such that all vertical lines in the scene appear as sections of a circle; slider B controls the amount of curvature in the horizontal axis of the image using a suitable mathematical algorithm, with 0 being no curvature such that all horizontal lines that are straight in the scene appear straight in the image, and 100 being full curvature, such that all horizontal lines in the scene appear as sections of a circle; slider C controls the vertical field of view of the image using a suitable mathematical algorithm, with 0 being 0° and 100 being 360°; slider D controls the horizontal field of view of the image using a suitable mathematical algorithm, with 0 being 0° and 100 being 360°; slider E controls the size of objects or regions of the scene at the centre of the image relative to those outside the centre using a suitable mathematical algorithm, with 0 being 1% of actual size, 50 being actual size, and 100 being 1000% of actual size; slider F controls the size of objects or regions of the scene at the outer edges of the image relative to those in the centre using a suitable mathematical algorithm, with 0 being 1% of actual size, 50 being actual size, and 100 being 1000% of actual size; slider G controls the amount of curvature or straightness in the image as a function of depth in the scene using a suitable mathematical algorithm, with 0 being increased curvature with depth and 100 being increased straightness with depth.
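By way of illustration, the following sketch maps slider positions in the range 0 to 100 to the parameters described above. The end points follow the text; the interpolation between them (linear, and piecewise linear for the size controls) and the parameter names are assumptions of the example.

```python
def _size_scale(s):
    """0 -> 1% of actual size, 50 -> actual size, 100 -> 1000% of actual size (piecewise linear)."""
    if s <= 50.0:
        return 0.01 + (1.0 - 0.01) * (s / 50.0)
    return 1.0 + (10.0 - 1.0) * ((s - 50.0) / 50.0)

def slider_to_params(sliders):
    """Map slider positions (dict of name -> value in [0, 100]) to image control parameters."""
    s = {k: max(0.0, min(100.0, v)) for k, v in sliders.items()}
    return {
        "vertical_curvature":   s.get("A", 0.0) / 100.0,           # 0 = straight, 1 = fully circular
        "horizontal_curvature": s.get("B", 0.0) / 100.0,
        "fov_vertical_deg":     s.get("C", 50.0) / 100.0 * 360.0,  # 0 to 360 degrees
        "fov_horizontal_deg":   s.get("D", 50.0) / 100.0 * 360.0,
        "centre_scale":         _size_scale(s.get("E", 50.0)),
        "edge_scale":           _size_scale(s.get("F", 50.0)),
        "depth_straightening":  s.get("G", 50.0) / 100.0,          # 0 = curvature grows with depth
    }

params = slider_to_params({"A": 25, "B": 25, "C": 40, "D": 60, "E": 50, "F": 50, "G": 100})
```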
Referring to Figure 13 of the drawings, there is shown a series of User Sensor Inputs 1100 for use with an embodiment of the present method. By way of illustration, a series of sliders are provided, from A to G, each of which can be set at a value between 0 and 100 according to data passed from one or a combination of the eye sensor 1101, the head sensor 1102, and the body position sensor 1103. The position of the slider determines a value that is passed to the Graphics Processing Unit 114.
In an embodiment of the method, the sliders are preprogrammed to control a series of geometrical transformations of the spatial structure of the represented 3D scene using the functions and parameters defined in the method in response to data passed from the User Sensor Inputs 1100.
By way of illustration, slider A receives data from the Eye Position Sensor 1101 and controls the amount of curvature in the image using a suitable mathematical algorithm, with 0 being no curvature such that all lines that are straight in the scene appear straight in the image when coinciding with the user's eye position in the image, and 100 being full curvature such that all lines in the scene coinciding with the user's eye position as detected in the image appear as sections of a circle; slider B receives data from the Head Position Sensor 1102 and controls the amount of curvature in the image using a suitable mathematical algorithm, with 0 being no curvature such that all lines that are straight in the scene appear straight in the image when the user's head position is detected at 10 cm or less from the Image Display Device 117, and 100 being full curvature, such that all lines in the scene appear as sections of a circle when the user's head position is detected at 10 cm or less from the Image Display Device 117; slider C receives data from the Body Position Sensor 1103 and controls the field of view of the image using a suitable mathematical algorithm, with 0 being 50° when the body is detected at 20 cm or less from the Image Display Device 117 and 100 being 180° when the body is detected at 20 cm or less from the Image Display Device 117; slider D receives data from the Eye Position Sensor 1101 and controls the size-to-depth ratio of objects or regions in the scene, with 0 meaning objects coinciding with the user's eye position are decreased to 1% of actual size and 100 meaning objects coinciding with the user's eye position are increased to 1000% of actual size; slider E receives data from the Head Position Sensor 1102 and controls the size of objects or regions of the scene at the edge of the image relative to those at the centre using a suitable mathematical algorithm, with 0 being 1% of actual size, 50 being actual size, and 100 being 1000% of actual size.
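By way of illustration, the sketch below shows the plumbing from sensor readings to slider values. The 10 cm and 20 cm thresholds follow the text; the linear distance ramp, the far-distance cut-off and the sensor data format are assumptions of the example.

```python
def distance_to_slider(distance_cm, near_cm=10.0, far_cm=100.0):
    """Return 100 when the user is at or closer than near_cm, 0 at or beyond far_cm, linear in between."""
    if distance_cm <= near_cm:
        return 100.0
    if distance_cm >= far_cm:
        return 0.0
    return 100.0 * (far_cm - distance_cm) / (far_cm - near_cm)

# Hypothetical sensor readings; slider B is driven by the head sensor and slider C by the body sensor.
sensor_values = {"head_cm": 35.0, "body_cm": 60.0}
sliders = {
    "B": distance_to_slider(sensor_values["head_cm"], near_cm=10.0),
    "C": distance_to_slider(sensor_values["body_cm"], near_cm=20.0),
}
```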

Claims (21)

  1. A method of generating and modifying a non-linear perspective 2D image of a 3D scene, the method including the steps:
     - processing an image of a 3D scene to generate a first set of data points representative of the 3D scene and 3D objects within the scene;
     - defining a point of observation (PoO) within the 3D scene, comprising a viewing point (VP) and viewing direction (VD);
     - defining a 2D viewport comprising a pixel array;
     - converting the 2D viewport into a hemispherical viewport, wherein each pixel comprises a location defined in hemispherical coordinates, the centre of the hemisphere is located at the viewing point, and the pole of the hemisphere is aligned with the viewing direction (VD);
     - for each pixel of the hemispherical viewport, casting a probe ray from the point of observation (PoO) via the pixel into the 3D scene to detect an intersection;
     - for each intersection, converting the 3D data point of the intersection into a second set of data points wherein each data point is relative to the point of observation (PoO);
     - modifying the point of observation (PoO) according to the second set and one or more image control functions;
     - for each pixel of the hemispherical viewport, casting a trace ray from a modified point of observation (MPoO);
     - reflecting and detecting the trace ray for a variable number of reflections/intersections;
     - determining a final colour for each pixel of the viewport from the trace ray into a modified viewport;
     - rendering the modified viewport, representative of a modified 2D image of the 3D scene, as a 2D image on a display.
  2. The method of claim 1, wherein the first set of data points represent a world coordinate system.
  3. The method of any previous claim, wherein the second set of data points represent a camera coordinate system.
  4. The method of any previous claim, wherein a value for the longitude, latitude and theta is calculated for each intersection in relation to the point of observation (PoO).
  5. The method of any previous claim, further comprising the step of modifying the modified viewport according to the second set and/or one or more image control functions.
  6. The method of any previous claim, further comprising the step of converting the modified viewport into a 2D pixel array.
  7. The method of claim 2, wherein the set of data points representing the 3D scene comprises spatial coordinates, colour, texture, surface mapping data, motion data, object identification data, or other data necessary to represent a 3D scene.
  8. The method of claim 3, wherein the second set of data points representing the 3D scene comprises spatial coordinates, colour, texture, surface mapping data, motion data, object identification data, or other data necessary to represent a 3D scene.
  9. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the vertical field of view of the scene represented by the image, ranging from any angle that is greater than 0° to any angle that is less than 360°.
  10. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the horizontal field of view of the scene represented by the image, ranging from any angle that is greater than 0° to any angle that is less than 360°.
  11. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the size of regions or objects located in the centre of the image relative to those located at the edge of the image, ranging from any value that is greater than 0% to any value that is less than 1000% of actual size.
  12. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the size of regions or objects located at the edge of the image relative to those located at the centre of the image, ranging from any value that is greater than 0% to any value that is less than 1000% of actual size.
  13. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the amount of vertical curvature in the image from 0, where all vertical lines that are straight in the scene appear straight in the image, to 100, where all vertical lines that are straight in the scene appear as sections of circles in the image.
  14. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the amount of horizontal curvature in the image from 0, where all horizontal lines that are straight in the scene appear straight in the image, to 100, where all horizontal lines that are straight in the scene appear as sections of circles in the image.
  15. The method of any previous claim, wherein the image control functions can be applied individually or in combination to increase or decrease the amount of straightness or curvature in the image as a function of depth, from 0, where the straightness of objects or regions in the image increases with depth, to 100, where the curvature of objects or regions in the image increases with depth.
  16. The method of any previous claim, wherein the image control functions have one or more definable parameters.
  17. The method of any previous claim, wherein the definable parameters can be set via a user control interface.
  18. The method of any previous claim, wherein the user control interface consists of a set of manually operated sliders or knobs, either mechanical or electronic in form.
  19. The method of any previous claim, wherein the user control interface consists of one or more sensors which detect user eye motion and position, head motion and position, and body motion and position relative to the display, in order to alter the parameters in real time.
  20. The method of any previous claim, wherein the definable parameters are predetermined, to generate certain characteristics.
  21. The method of any previous claim, further comprising rendering on an additional display a linear perspective projection of the 3D scene.
GB1913191.1A 2019-09-12 2019-09-12 A method for generating and modifying non-linear perspective images of a 3D scene Active GB2587010B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1913191.1A GB2587010B (en) 2019-09-12 2019-09-12 A method for generating and modifying non-linear perspective images of a 3D scene
PCT/GB2020/052100 WO2021048525A1 (en) 2019-09-12 2020-09-02 A method for generating and modifying non-linear perspective images of a 3d scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1913191.1A GB2587010B (en) 2019-09-12 2019-09-12 A method for generating and modifying non-linear perspective images of a 3D scene

Publications (3)

Publication Number Publication Date
GB201913191D0 GB201913191D0 (en) 2019-10-30
GB2587010A true GB2587010A (en) 2021-03-17
GB2587010B GB2587010B (en) 2023-10-18

Family

ID=68315326

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1913191.1A Active GB2587010B (en) 2019-09-12 2019-09-12 A method for generating and modifying non-linear perspective images of a 3D scene

Country Status (2)

Country Link
GB (1) GB2587010B (en)
WO (1) WO2021048525A1 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013010261A2 (en) * 2011-07-18 2013-01-24 Dog Microsystems Inc. Method and system for performing rendering

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANONYMOUS: "Ray tracing (graphics) - Wikipedia", 10 September 2016 (2016-09-10), XP055360675, Retrieved from the Internet <URL:https://en.wikipedia.org/w/index.php?title=Ray_tracing_(graphics)&oldid=738679597> [retrieved on 20170331] *
DANIEL OTT ET AL: "Simulating a virtual fisheye lens for the production of full-dome animations", 23 March 2007 (2007-03-23), pages 294-299, XP058263125, ISBN: 978-1-59593-629-5, DOI: 10.1145/1233341.1233394 *

Also Published As

Publication number Publication date
WO2021048525A1 (en) 2021-03-18
GB201913191D0 (en) 2019-10-30
GB2587010B (en) 2023-10-18
