GB2438260A - Using a base image to form a final image - Google Patents

Using a base image to form a final image

Info

Publication number
GB2438260A
GB2438260A
Authority
GB
United Kingdom
Prior art keywords
image
sub
colour
information
base image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0603591A
Other versions
GB0603591D0 (en)
Inventor
Anthony James Blackshaw
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GETME Ltd
Original Assignee
GETME Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GETME Ltd filed Critical GETME Ltd
Priority to GB0603591A priority Critical patent/GB2438260A/en
Publication of GB0603591D0 publication Critical patent/GB0603591D0/en
Publication of GB2438260A publication Critical patent/GB2438260A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/04 Texture mapping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A base image 20, depicting a surface 22 having a covering 24, is used to form a final image which depicts the surface 22 and imitates the appearance of the covering 24 in a desired aspect but alters the appearance of the covering in a further aspect. The base image 20 comprises a plurality of original pixels containing desired information relating to the desired aspect. The method comprises the step of forming the final image using further information relating to the further aspect along with the desired information. The desired information may be lighting, tone, texture or a combination of these. The further aspect may be colour. The method is aimed at allowing a user to see the appearance of different coverings, e.g. rugs or carpet, on a surface such as a floor.

Description

<p>A METHOD AND SYSTEM OF USING A BASE IMAGE TO FORM</p>
<p>A FINAL IMAGE</p>
<p>The present invention relates to a method and system of using a base image, depicting a surface having a covering, to form a final image, depicting the surface and imitating the appearance of the covering in a desired aspect and altering the appearance of the covering in a further aspect. Particularly, but not exclusively, the invention can be used in applications such as modelling the layout of a room. For example the application may be used to determine the appearance of an image of a room having a surface, such as a floor, covered in a surface covering, such as a carpet.</p>
<p>According to a first aspect of the present invention there is provided a method of using a base image, depicting a surface having a covering, to form a final image, depicting the surface and imitating the appearance of the covering in a desired aspect and altering the appearance of the covering in a further aspect, wherein the base image comprises a plurality of original pixels containing desired information relating to the desired aspect, the method comprising the step of forming the final image using further information relating to the further aspect along with the desired information.</p>
<p>According to a further aspect of the present invention there is provided an image forming system comprising a processor arranged to carry out the method of the first aspect of the invention.</p>
<p>Embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings in which: Figure 1 schematically illustrates an image forming system according to an embodiment of the present invention; Figure 2 schematically illustrates a base image which is used to form a final image using the system of Figure 1; Figure 3 schematically illustrates a method of using a base image to form a final image using the system illustrated in Figure 1; Figure 4 is a flowchart illustrating a method according to a further embodiment of the present invention; Figure 5 schematically illustrates a surface comprising sub-surfaces, a material comprising sub-materials and the material mapped to the surface according to the method of Figure 4; Figure 6 schematically illustrates how alpha data is used to overlay sub-materials upon each other in the method of Figure 4; Figure 7 schematically illustrates how data can be mapped relatively using the method of Figure 4; Figure 8 schematically illustrates how sub-surfaces are re-sized in the method of Figure 4; Figures 9a and 9b show a material mapped to a surface comprising a set of stairs which are ungrouped and grouped respectively; Figure 10 schematically illustrates a surface comprising sub-surfaces having two different materials mapped on to it; Figure 11 schematically illustrates how sub-surfaces re-size and align in the method of Figure 4; Figure 12 schematically shows sub-materials applied to sub-surfaces having different alignment settings in the method of Figure 4; Figure 13 schematically illustrates the effect of render indexes to re-order the rendering of sub-surfaces within the method of Figure 4; Figure 14a shows a material which is mapped to a surface shown in Figures 14b and 14c, the image shown in Figure 14c being a corrected version of the image shown in Figure 14b using the method of Figure 4; Figure 15 schematically illustrates a tiled rendering process used with the method of Figure 4; Figure 16 schematically illustrates a re-sizing process within the method of Figure 4; and Figure 17 schematically illustrates the use of a mask in the method of Figure 4.</p>
<p>Referring to Figures 1 to 3, in a first embodiment of the invention an image forming system 10 comprises a processor 12 (e.g. of a PC) in communication with a memory 14 (e.g. of a PC). The processor 12 is also in communication with a user interface 16 (e.g. a keyboard). The processor 12 is also in communication with a display 18, in the form of a screen in this embodiment. The processor 12 is arranged to run software arranged to carry out the method of the present invention as described in further detail below.</p>
<p>Referring to Figure 2 a base image 20 is shown. In this embodiment the base image comprises an image depicting a surface 22 which is a floor in a room shown in the base image 20. In other embodiments the surface 22 may be another surface such as a ceiling, wall, a surface of a piece of furniture or any other surface within the base image 20. The base image comprises a plurality of pixels. In this embodiment the base image is stored in the memory 14 of the PC of the image forming system 10. In other embodiments the base image may be stored on a remote memory or in any other type of memory e.g. temporary memory as and when it needs to be processed. In this embodiment the pixels making up the surface 22 have been pre-defined at the time of generating the base image 20. In this embodiment the base image 20 is not displayed on the screen 18. In other embodiments a user may be required to define the surface themselves and in such embodiments the base image will usually be displayed on the screen 18 and a user can define the surface by selecting areas of the image (i.e. selecting pixels which the user wants to have defined as the surface) using the user interface 16.</p>
<p>In this embodiment pixels which correspond to the surface 22 are pre-tagged as surface pixels so that the processor 12 recognises them as such when it interrogates the base image 20. The floor 22 has a covering 24 in the form of carpet in this embodiment. In other embodiments the covering may take any other suitable form - some limited examples are tiles, sheets, floorboards, a bare floor (i.e. a bare floor/surface is considered to have a plain covering). The carpet 24 has a desired aspect which is desired to be retained in the final image formed by the method of the present invention. The purpose of the present method is to use the base image 20 to form a final image which depicts the surface 22 and imitates the appearance of the covering 24 in the desired aspect but alters the appearance of the covering in a further aspect. In this embodiment the desired aspect comprises the aspect of lighting and the further aspect comprises the aspect of colour. The carpet in the original base image 20 is red and it is desired to have a blue carpet in the final image.</p>
<p>Each pixel which makes up the surface 22 in the base image 20 comprises desired information relating to the desired aspect. In this embodiment the desired information comprises lighting information in the form of a numerical value associated with each pixel representing the lightness of the pixel in the base image.</p>
<p>Amongst other things lightness is an aspect which makes the base image realistic. As an illustrative example, if the base image is a photograph comprising a surface having a carpet covering it then one of the reasons it looks realistic is that there is lighting information which differs across the image dependent upon a source of light influencing the image. Other aspects which may make the image look realistic include tone information, texture information or similar (e.g. tone/texture information caused by reflections).</p>
<p>In the present invention the final image which is formed from the base image is intended to imitate realistically the appearance of the base image except for certain aspects of the surface covering which it is desired to change. In this embodiment this further aspect is the colour of the carpet (it is desired to view the carpet as a blue carpet instead of a red carpet).</p>
<p>In all other aspects the image is required to stay the same.</p>
<p>Referring to Figure 3 at step 30, the method of the present invention comprises using desired information, in the form of the lighting value associated with each surface pixel, along with further information, in the form of colour values associated with the surface covering for the final image as described below, to form the final image. In this embodiment the processor 12 accesses the base image 20 which is stored in the memory 14 and interrogates the base image 20 to find out which pixels are surface pixels. Then for each surface pixel the processor 12 reads the lightness value described above. Each pixel also comprises colour values relating to the amount of red, green and blue colours in each pixel.</p>
<p>In other embodiments other standard colour determination schemes can be used with this invention. The processor 12 then generates a new final image which retains all pixels which are not surface pixels in an unchanged form. For pixels which are surface pixels, the final image comprises new pixels generated by the processor 12 - the new pixels have a lightness value which is based upon (and in this embodiment equal to) the lightness value of the corresponding pixels in the base image 20 and colour values (red, green, blue) which are added and which represent the colour values required to represent a blue carpet.</p>
<p>In other embodiments a new final image is not generated, instead the lightness values of the surface pixels are not read from the base image but the colour information in each surface pixel is simply deleted and replaced with the new colour information. In both embodiments the lightness information (i.e. the desired information) is used to form the final image along with some further information which relates to an aspect of the original base image which is required to be changed.</p>
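<p>As an illustrative sketch of this recolouring step (in Python; the pixel layout, the helper name and the simple lightness-scaling rule are assumptions for illustration only - a fuller tone algorithm is set out in section 5.2 below):</p>

# A minimal sketch: recolour tagged surface pixels while preserving
# per-pixel lightness. All names and the pixel layout are illustrative.
def recolour_surface(base_pixels, surface_flags, new_rgb):
    final = []
    for (r, g, b), is_surface in zip(base_pixels, surface_flags):
        if not is_surface:
            final.append((r, g, b))  # non-surface pixels are kept unchanged
            continue
        lightness = (r + g + b) / 3.0      # lightness of the original pixel
        scale = lightness / 255.0
        # add the new covering colour, scaled by the original lightness
        final.append(tuple(int(c * scale) for c in new_rgb))
    return final

# Example: a red carpet pixel becomes a blue pixel of similar lightness
print(recolour_surface([(200, 40, 40), (90, 90, 90)], [True, False], (40, 40, 200)))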
<p>In this way a realistic final image is able to be formed. In prior image forming systems pixels identified as surface pixels are deleted altogether before new information is added to form a final image. Therefore, in the past it has been difficult to form realistic final images which include desired information such as lighting information which makes the final image appear as though it was taken realistically at the same time and place as the base image.</p>
<p>In this embodiment the base image 20 is a computer generated image. In other embodiments the base image may originate from a photograph.</p>
<p>Also in other embodiments, the processor 12 can be connected to an external network, such as the internet, which may provide a source for the software which provides the method of the present invention and/or details of surface coverings which are desired to be viewed in the final image.</p>
<p>Referring to Figure 4 a method according to a further embodiment of the present invention is depicted. The same apparatus as used in the system of the first embodiment is suitable for carrying out the method of this embodiment. In this embodiment the processor 12 is also in communication with the internet. In this embodiment the method comprises a number of additional features, none of which are essential to the invention and none of which are essential in combination with each other. Any particular additional feature can be added to the embodiment of the first invention to provide an alternative method according to this invention.</p>
<p>The software running on the processor 12 in this embodiment is obtainable from a remote server accessible via the internet. The remote server is accessible by subscription in this embodiment. A user is required to pay a subscription fee upon each use of the software application. In other embodiments a user may pay a predetermined flat fee which allows the user to use the application a set number of times. In this embodiment the separate application is given a name, Visualistic.</p>
<p>Visualistic is a local and web-based application designed to provide photo-realistic visuals (i.e. final images) of different materials/colours mapped onto surfaces within an existing photograph or generated image (i.e. a base image).</p>
<p>Advantageously, when mapping new materials/colours to a base image Visualistic retains lighting data (shadows, highlights and texture) whilst removing existing colour data. The new material/colour data is applied as colour only data via lighting and mask data (described below) to the base image. Retaining the lighting data allows Visualistic to create superior images.</p>
<p>Figure 4 is a high-level diagram showing the process of mapping material/colour data to a base image.</p>
<p>For clarity, the following definitions are provided for terms used in the description of the Visualistic application.</p>
<p>2D An environment or format that has 2 dimensions (x,y).</p>
<p>3D An environment or format that has 3 dimensions (x,y,z).</p>
<p>Anti-aliasing The process of smoothing out jagged lines in digital images by softening the transition of colour between background pixels and edge pixels.</p>
<p>Base image The original image before any colour or materials have been applied to it.</p>
<p>Field of view (FOV) The field of view is how much of a scene's rendered width/perspective is shown, based on the camera's focus and view angle.</p>
<p>FOV width = 180 * (1 - focus / view width); FOV height = 180 * (1 - focus / view height)</p>
<p>HTTP The Hypertext Transfer Protocol is the set of rules for exchanging files (text, graphic images, sound, video, and other multimedia files) on the World Wide Web.</p>
<p>Mask A 2D set of values between 0-255 that determine the intensity of applying data from one image to another image.</p>
<p>Pixel The information stored for a single grid point in the image. The complete image is a rectangular array of pixels.</p>
<p>RGB(A) Reference to the colour components in a pixel - red, green and blue (and alpha).</p>
<p>Sub-material A sub-material belongs to a surface material and contains colour data in the form of either a single colour or an image that will be mapped to a sub-surface.</p>
<p>Sub-surface A sub-surface belongs to a surface and describes a part of the surface in the form of a 3D plane. Sub-materials map directly to sub-surfaces.</p>
<p>Surface A surface is a data format that holds one or more items of 3D geometric data that determines how surface materials are transferred from 2D data into 3D data.</p>
<p>Surface material A surface material is a data format that contains one or more items of colour/material (2D image) data and information on how to map such data to 3D surfaces within a view.</p>
<p>UV(W) Coordinate system used to describe the 2D (3D) points that determine the area of an image that is mapped to a 3D surface.</p>
<p>View Visualistic's 3D data format that stores information on how to map colour/material information onto 3D surfaces within the base image.</p>
<p>Virtual camera A camera within a view that simulates the properties of the original camera used to generate the base image.</p>
<p>XML (Extensible Markup Language) allows information and services to be encoded with meaningful structure and semantics that computers and humans can understand.</p>
<p>Referring to Figure 4, at step 40 initially surface material data, which consists of one or more items of single-colour or 2D image data (referred to as sub-materials within this document), is mapped onto a surface. A surface consists of one or more items of 3D geometry (referred to as sub-surfaces within this document) designed to approximate the shape of a real surface within the base image. In this embodiment the surface is defined by a user via the user interface 16.</p>
<p>The user has knowledge of the base image 50 and defines a surface by creating boundaries within the image within which the pixels are to be considered as surface pixels. At this stage the surface (which in this embodiment is a floor) only needs to be approximated in shape.</p>
<p>Advantageously this makes use of the Visualistic application easy and time efficient. It will be seen that in the base image 50 the area of the floor is interrupted by items of furniture and other obstacles which it would be time consuming to define at this stage. Visualistic allows the user to initially define the surface very approximately (discussed in further detail below).</p>
<p>Information within the surface material enables Visualistic to map the sub-materials correctly with sub-surfaces. This mapping and additional data within the sub-surfaces allows Visualistic to correctly lay out complex materials that could contain borders, center pieces, etc. (described in further detail below).</p>
<p>At step 42, once the surface material data has been mapped onto a surface, the data is held in a 3D format. Before the surface material data can be applied to the base image it must be converted into a 2D format.</p>
<p>A virtual camera (described below) converts the 3D data into a 2D colour image. At this point the image contains only colour information from the original surface material.</p>
<p>As well as the creation of the 2D colour image the virtual camera also controls the depth of focus (see below) and size of the outputted 2D colour image (which is often many times bigger than the base image for enhanced quality, see below).</p>
<p>At step 44, the 2D colour image data received from the virtual camera is often many times bigger than the size of the base image and must be scaled down to match its size before it can be applied to the base image.</p>
<p>A linear algorithm is used to scale the 2D colour image to the size of the base image; this linear scaling has the effect of anti-aliasing the colour data within the 2D colour image, which improves the quality. Generally the larger the size of the 2D colour image outputted the higher the quality; however, this is limited by the size of the data held within the surface material's sub-materials, and also the size of the base image.</p>
<p>Once the 2D colour image has been scaled down additional image processing can be applied to sharpen the image and adjust brightness and contrast - the amounts of which are controlled by the author of the view and the values are stored within it.</p>
<p>At step 46, the 2D colour image is in a suitable format and it can be applied to the base image.</p>
<p>Because the 3D surface data is only an approximation of the shape of a real surface within the base image an additional item of 2D data (called a mask, described below) is required which provides information about the area (in pixels) that will have colour information applied and the area which will not have colour information applied.</p>
<p>Mask data is stored in a 2D grey-scale image - the mask image is always the same size as the base image. The value of each pixel within the mask determines how much colour data is removed from the base image, and how much new colour data is applied.</p>
<p>Before new colour data is applied to the base image, any existing colour data must be removed from the pixels within the base image, using the mask to determine which pixels are affected and how much colour is removed. Although colour is removed from an area within the base image the lighting data is not affected (see below).</p>
<p>Once only lighting data remains in the base image the new colour data can be applied. The mask data is used to determine which pixels within the 2D colour image are applied to which pixels in the base image and how much colour is applied. The process of applying a new colour to a pixel within the base image does not affect the tone (i.e. the lighting data) of the existing pixel (see below).</p>
<p>At step 48, the final stage of the process involves manipulating the re-coloured 2D image into a final image. This may involve the final image being resized, watermarked and converted to a standard image file format such as BMP, JPEG, PNG, etc. (described below).</p>
<p>Various details of the Visualistic application which have been described at a high level above will now be described in more detail below.</p>
<p>1.0 Surface materials Surface materials describe the material/colour that is applied to a surface within the view. Surface materials are made up of one or more sub-materials, and each sub-material is applied to one or more sub-surfaces.</p>
<p>Sub-materials consist of relative size data and colour data. The colour data will usually either be in the form of a single RGBA (red, green, blue, alpha) value or a 2D image.</p>
<p>The surface material format has been designed to allow for the layout of complex materials onto surfaces. This means materials can have borders, center pieces, etc. 1.1 Support for complex material layouts (sub-materials) Often materials are more complex than a simple colour or repeating pattern. For example, in the case of a carpet there may be borders and/or a center piece. To support this, each sub-material within a surface material is given an ID. The ID maps to a sub-surface within the surface to which the surface material is being applied. Figure 5 shows an example of how sub-materials map to sub-surfaces: initially a surface having sub-surfaces which are defined as either base, centre, North border, East border, South border, West border, Northeast corner, Southeast corner, Southwest corner or Northwest corner sub-surfaces.</p>
<p>The surface material comprises a number of different sub-materials which have IDs corresponding to different sub-surfaces. The effect of mapping the surface material shown in the key of Figure 5 onto the surface shown in Figure 5 results in the surface covering shown in Figure 5.</p>
<p>2D image data within sub-materials comprises colour and alpha values (red, green, blue, alpha) so that material data can correctly overlay on top of other material/colour data (see figure 6). Alpha values determine the priority of an image or part of an image when it is overlaid on another image or when another image is overlaid on it.</p>
<p>Sub-surfaces comprise layout settings that intelligently adjust the layout of sub-material data based on alignment settings, repeat settings, and the relative size of sub-materials and sub-surfaces (see below).</p>
<p>1.2 Generic measure for material patterns Visualistic provides a generic measurement unit (allowing the user to work in whatever unit they want - metric, imperial, etc.) and allows for a relative size to be specified for both sub-materials and sub-surfaces.</p>
<p>The relative size allows the application to correctly map colour data onto a surface, in terms of both its rendered size and repeat, independent of the actual size of the 2D material data or the sub-surface geometry it is being mapped to. Figure 7 shows an example of mapping data relatively. 2D sub-material data can vary in dimensions, which in OpenGL and Direct3D (both of which are industry standard tools for rendering data onto a screen) can cause issues, since many GFX cards do not support textures that are not of the correct size. So for example, although the width and height may vary they must be (in pixels) 2, 4, 8, 16, 32, 64, 128, 256, etc. To allow for 2D sub-material data of any size Visualistic places the data into an image of the closest correct size. The sub-surface geometry is then sub-divided down into quads matching the relative size of the 2D sub-material data. UV coordinates then map the data to each quad correctly. Figure 8 demonstrates this process. UV co-ordinates define the position of the sub-material in the 256 x 256 area.</p>
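<p>A short sketch of this padding and UV step (Python; the function names and return format are illustrative assumptions, not part of the patent):</p>

def next_pow2(n):
    # Smallest power of two >= n - the texture sizes GFX cards expect
    p = 1
    while p < n:
        p *= 2
    return p

def pad_texture_and_uvs(width, height):
    # Place arbitrary-size sub-material data into the closest power-of-two
    # texture, and return the UV rectangle that maps back onto the real data
    tex_w, tex_h = next_pow2(width), next_pow2(height)
    u_max, v_max = width / tex_w, height / tex_h
    return (tex_w, tex_h), [(0.0, 0.0), (u_max, 0.0), (u_max, v_max), (0.0, v_max)]

# A 200x120 sub-material is held in a 256x128 texture; UVs stop at the data edge
print(pad_texture_and_uvs(200, 120))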
<p>2.0 Surfaces Surfaces translate 2D surface material data into a 3D format ready to be viewed by the virtual camera. Surfaces are made up of one or more sub-surfaces. A surface provides an approximation of the 3D surface of an object within the base image; surfaces can approximate both simple (e.g. a floor) and complex (e.g. stairs) objects.</p>
<p>Each sub-surface has a material ID which relates to a sub-material within a surface material.</p>
<p>Surfaces consist of relative size data, 3D geometry data, alignment and repeat settings.</p>
<p>Surfaces/sub-surfaces have a group ID so that they can be grouped when surface materials/sub-materials are applied to them.</p>
<p>2.1 Surface and sub-surface groups Grouping allows for one or more surfaces within a view to have a surface material applied to them at the same time.</p>
<p>Advantageously this allows a user to apply a surface material such as a carpet either to one surface within the view or to a group of surfaces.</p>
<p>For example if a base image shows two floors, perhaps either side of a doorway, grouping allows the user either to apply a surface material to just one of the floors or (using a group) to both.</p>
<p>Primarily this feature is a time saver. In complex scenes it can also allow non-expert users an accessible method for applying a material/colour to the view.</p>
<p>In addition sub-materials can also have a group name and sequence value. This can be necessary when approximating complex 3D objects as it allows the application to know that the sub-material being applied to the sub-surface needs to be treated as though the sub-surface is part of one larger sub-surface, namely the group.</p>
<p>Figures 9a and 9b highlight the use of sub-surface grouping by showing how it works for a set of stairs.</p>
<p>Figure 9a shows ten non-grouped sub-surfaces; because no relationship exists between the sub-surfaces the sub-material is applied to each one individually, and the pattern is not repeated correctly.</p>
<p>Figure 9b shows ten grouped sub-surfaces and in this case the sub-material can correctly be applied across all the sub-surfaces and the pattern repeats correctly. Since the sub-surfaces are grouped they are treated as a single surface by Visualistic.</p>
<p>2.2 Support for complex surface shapes (sub-surfaces) As discussed above, surfaces consist of one or more sub-surfaces, which allows for the approximation of simple (such as a floor) and complex surfaces (such as stairs) within the base image. Sub-surfaces also allow for complex materials (for example a carpet with borders and/or a center piece) to be laid out correctly (as described above).</p>
<p>Each sub-surface describes a plane within 3D space. The plane can be in any position at any angle and of any size. A single sub-surface is therefore limited in the surfaces it can approximate but as part of a larger group of sub-surfaces it can approximate far more complex surfaces.</p>
<p>2.3 Using IDs to map sub-materials on to sub-surfaces Each sub-surface has a material ID that corresponds to a sub-material.</p>
<p>This relationship allows the application to map the correct material/colour data to the sub-surface. A sub-surface can correspond to the same sub-material as another, and a sub-material can correspond to a group of sub-surfaces as though they were one sub-surface (as described above).</p>
<p>Whether a matching material ID corresponds to a sub-surface will depend on the surface material itself; for example there may be a sub-surface in place for center pieces, however if a surface material has no center piece data then the sub-surface will have no data applied to it and will therefore be excluded from the mapping process.</p>
<p>Because the sub-surface is simply ignored if no corresponding sub-material can be found, more than one material/colour layout can be supported for a surface.</p>
<p>Figure 10 shows two different surface materials being applied to the same surface resulting in different layouts. In Figure 10 the first surface material shown includes sub-materials having ID's corresponding to various borders. The second surface material shown does not have such sub-materials which have different IDs. Instead the second surface material only has a base material ID. Therefore different layout patterns are provided when the different surface materials are mapped onto the same surface.</p>
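<p>The ID lookup amounts to a simple keyed mapping in which unmatched sub-surfaces are skipped; a sketch under assumed data shapes (the dict layout and names are illustrative, not from the patent):</p>

def map_materials(sub_surfaces, sub_materials):
    # Pair each sub-surface with the sub-material sharing its material ID;
    # a sub-surface with no matching sub-material is simply ignored
    mapping = []
    for surf in sub_surfaces:
        mat = sub_materials.get(surf["material_id"])
        if mat is not None:
            mapping.append((surf["name"], mat))
    return mapping

surfaces = [{"name": "base", "material_id": 0}, {"name": "centre", "material_id": 1}]
print(map_materials(surfaces, {0: "plain-weave"}))  # the centre slot is excluded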
<p>2.4 Automatic alignment of sub-surfaces Sub-surfaces support an alignment setting which enables them to automatically align and resize themselves correctly to match the layout of the sub-material applied.</p>
<p>When a sub-surface has a sub-material applied it may change in size to accommodate the relative size of the sub-material. This would only be the case when the sub-surface does not allow the sub-material data to repeat in one or more directions. Information on repeat settings is described below.</p>
<p>Therefore if a sub-material is not allowed to repeat, when the sub-surface resizes it uses the alignment setting to determine its new position after the resize. Figure 11 shows the possible alignments and how resizing affects the layout. Therefore depending upon the ID of the sub-surface it will re-size in a particular direction. Alignment means that borders, corners, center pieces, etc. can automatically resize themselves in the correct direction to support the different relative sizes of sub-materials.</p>
<p>2.5 Repeat information for mapping sub-materials to sub-surfaces Sub-surfaces comprise a repeat setting which enables them to determine how a sub-material will be repeated within the area of the sub-surface.</p>
<p>The sub-material can be repeated in both directions (e.g. in the case of a carpet), in one direction horizontally or vertically (e.g. in the case of a carpet border), or in no direction (e.g. in the case of a carpet center piece).</p>
<p>Figure 12 shows a sub-material applied to four sub-surfaces with different alignment settings. Sub-surfaces are fixed in size in directions in which they repeat; in directions in which they do not repeat they will resize to match the relative size of the sub-material (as previously described).</p>
<p>2.6 Manually setting the render index for sub-surfaces Sub-surfaces support specifying a render index which determines the order in which they are rendered. In a typical 3D environment geometry is rendered in order from furthest to closest to the virtual camera.</p>
<p>This causes an issue for sub-surfaces that overlap, for example in the case of a border on a carpet the two sub-surfaces could actually occupy the same space and be the same distance from the virtual camera.</p>
<p>The border should be rendered over the carpet, and this is where the render index is used. By giving the carpet a render index of 0 and the border a render index of 1 the application will render in the order of lowest render index first.</p>
<p>It would be possible to simply position the border slightly above the carpet, however since in real life they would occupy the same space the render index design advantageously provides greater realism, especially when more than one sub-surface overlaps.</p>
<p>Figure 13 demonstrates how render indexes re-order the rendering of sub-surfaces.</p>
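<p>The ordering itself reduces to a sort on the render index; a minimal sketch with hypothetical data:</p>

# Render lowest index first so the border (index 1) draws over the
# carpet (index 0), even though both are equidistant from the camera
sub_surfaces = [{"name": "border", "render_index": 1},
                {"name": "carpet", "render_index": 0}]
for s in sorted(sub_surfaces, key=lambda s: s["render_index"]):
    print("render", s["name"])  # carpet first, border second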
<p>3.0 Camera simulation The task of a camera is to convert the 3D data into a 2D image. The present application has been designed to allow the user to realistically simulate the camera (whether real or simulated such as in a 3D modelling package) used to produce the base image.</p>
<p>As well as the actual image contents, a camera's properties also determine the perspective and aspect of an image.</p>
<p>The properties for the virtual camera include, * Position The virtual camera can be positioned at any 3D point within the view to match that of the original camera.</p>
<p>* Orientation The virtual camera can be rotated in any 3D direction (pitch, yaw, roll) to match that of the original camera.</p>
<p>* Field of view (FOV)</p>
<p>The virtual camera's FOV can be simulated to match the lens of the original camera (with a real camera this will be dependent on the size of lens used).</p>
<p>* Clipping range (near/far) Users can specify the distance range in which the virtual camera will calculate colour/material data. This can reduce the amount of work the application must do when applying colour/material data to a base image.</p>
<p>Support for multiple virtual cameras has been included and this gives the user the ability to change surfaces within different camera views, for example, showing 3 or 4 different materials within the same room on different surfaces.</p>
<p>3.1 Depth of Focus In some cases there is a visual tear effect caused when mapping a material to a 3D surface (this issue does not exist when applying a colour). The extent of the effect is relative to the complexity and contrast of the material data.</p>
<p>Figure 14a illustrates an example of the problem. Referring to Figure 14b, at the top of the image around the chair a visual effect can be seen that looks similar to a pull or tear in the material. To remove this issue the application allows the user to specify that the pattern is complicated. If the application knows that the pattern is complex it will progressively remove detail from the material the further away from the virtual camera that it is applied. This simulates the natural loss of focus of the human eye over distance. Figure 14c shows an example of the same image as shown in Figure 14b with this solution applied.</p>
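<p>One way to realise this progressive removal of detail is to map camera distance to a blur (or mip) level over the clipping range; a sketch under assumed names and values:</p>

def blur_level(distance, near, far, max_level=4):
    # Normalise the distance into 0..1 over the clipping range, then scale
    # to a maximum blur level; more distant material loses more detail
    t = min(max((distance - near) / float(far - near), 0.0), 1.0)
    return int(round(t * max_level))

print([blur_level(d, near=1.0, far=10.0) for d in (1, 4, 7, 10)])  # [0, 1, 3, 4]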
<p>3.2 Support for high resolution output using tiled rendering Visualistic uses a high-performance 3D graphics library, OpenGL, to render the 3D view data. Many existing applications use either OpenGL or DirectX to render 3D data because they are industry standards and provide both good quality and performance when rendering 3D data.</p>
<p>Both libraries, however, are designed to render images no bigger than the screen resolution of the computer running the application. This limitation in render size poses two problems: firstly, Visualistic renders the colour data at a size larger (in multiples x2, x3, x4, etc.) than that of the base image so that the colour data can be resized using a linear filter, improving the quality; secondly, for base images designed for print resolutions the size of the base image itself may well be larger than the screen resolution.</p>
<p>As an example, typical screen resolutions (in pixels) are between 640x480 and 1600x1200. For a 600x400 base image at medium quality Visualistic would render data out to a resolution of 2400x1600. Print resolution and quality typically requires resolutions of 10000x10000 or more.</p>
<p>To solve this issue Visualistic renders its 3D data in smaller chunks called tiles, then stitches the tiles back together once in 2D format. For each tile the camera is repositioned and its FOV changed to render a small portion of the overall 3D data. Figure 15 illustrates this process.</p>
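<p>A sketch of the tiling arithmetic (the tile scheme and names are assumptions; in the application each rectangle would correspond to a repositioned camera whose 2D output is stitched into the full image):</p>

def tile_rects(out_w, out_h, tile_w, tile_h):
    # Yield the (x, y, width, height) rectangle each render pass covers
    for y in range(0, out_h, tile_h):
        for x in range(0, out_w, tile_w):
            yield (x, y, min(tile_w, out_w - x), min(tile_h, out_h - y))

# A 2400x1600 render split into 800x800 tiles needs six passes
print(list(tile_rects(2400, 1600, 800, 800)))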
<p>3.3 Tinting of camera output for enhanced quality Once the camera has converted the 3D data to 2D colour data, the data is resized and tinted to enhance the quality. Firstly the colour data is resized to match the size of the base image; the process of resizing involves taking n pixels and converting them into a single pixel. For example, scaling an image by a factor of 0.5 (half) involves converting 4 pixels (in a 2x2 array) into a single pixel. There are many defined filters used to convert data in this way; the filter used by Visualistic is a linear filter, which involves adding the values of all the pixels together and dividing by their count. Figure 16 illustrates an example of converting four pixels into one.</p>
<p>The example above has been simplified to demonstrate the principle; in actual fact each pixel would contain three values, one for each colour component (red, green and blue). Sometimes this is referred to as a linear cubic filter because of the three components (n³).</p>
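<p>A sketch of the 0.5 scaling case described above (pure-Python box averaging; the nested-list image layout is an assumption for illustration):</p>

def halve(pixels):
    # Linear (box) filter: each output pixel is the average of a 2x2 block,
    # which also anti-aliases the colour data; dimensions assumed even
    out = []
    for y in range(0, len(pixels), 2):
        row = []
        for x in range(0, len(pixels[0]), 2):
            block = [pixels[y][x], pixels[y][x + 1],
                     pixels[y + 1][x], pixels[y + 1][x + 1]]
            row.append(tuple(sum(p[i] for p in block) // 4 for i in range(3)))
        out.append(row)
    return out

print(halve([[(255, 0, 0), (0, 0, 255)],
             [(0, 255, 0), (255, 255, 255)]]))  # -> [[(127, 127, 127)]]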
<p>Resizing has the effect of anti-aliasing the colour data, which advantageously provides a natural aesthetic. Because some data is lost in the resize process the colour data is rendered larger than the base image - the larger the colour data is rendered, the less data will be lost and the greater the output quality. This rule is limited by the size of the original 2D data from the surface material. At some point the colour data will be rendered at a greater level of detail than the surface material can provide, at which point increasing the colour data size further has no effect (other than to slow the process down).</p>
<p>Once the resize has been performed, the image is optionally tinted. Tinting in the case of Visualistic involves adjusting the brightness, contrast and saturation of the colour data. The tinting process helps to match the colour data's brightness, contrast and saturation with those of the base image. Often these factors will vary dramatically depending on the camera and environment in which the original image was created.</p>
<p>4.0 Masks When the 2D colour data is applied to the base image a third image called a mask is used to determine which areas in the base image will have colour data applied to them. This is necessary because surfaces are only an approximation of the 3D object they describe. Figure 17 highlights the role of a mask.</p>
<p>4.1 Mask intensity As described above, masks determine which pixels in the base image have colour data applied to them; they also determine how much colour data is applied.</p>
<p>Each pixel within the mask has a gray-scale value of between 0 and 255.</p>
<p>This value determines how much the colour data will affect each pixel in the base image. A value of 0 means that the colour data will have no effect on the pixel; a value of 255 means the colour data will have the maximum effect on the colour of the pixel.</p>
<p>Supporting different intensities within the mask provides smooth (anti-aliased) edges. It also makes it possible to simulate ambience (low levels of reflection onto secondary surfaces within the base image).</p>
<p>The following formula is used for determining the amount of colour data applied to a pixel within the base image:</p>
<p>baseImagePx[r, g, b] = baseImagePx[r, g, b] * (maskPx / 255)</p>
<p>colourPx[r, g, b] = colourPx[r, g, b] * ((255 - maskPx) / 255)</p>
<p>outputPx[r, g, b] = (baseImagePx[r, g, b] + colourPx[r, g, b]) / 2</p>
<p>(The characters r, g, b represent the colour components red, green and blue of a pixel; the mask pixel has only one colour component, which is sometimes represented as an l for luminance.)</p>
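<p>Transcribed directly into Python (the function name and pixel tuple format are illustrative; the weighting follows the formula as printed above):</p>

def blend_pixel(base_px, colour_px, mask_px):
    # Weight the base pixel by the mask and the new colour by its
    # complement, then average the two, per the formula above
    base = [c * (mask_px / 255.0) for c in base_px]
    colour = [c * ((255 - mask_px) / 255.0) for c in colour_px]
    return tuple(int((b + c) / 2) for b, c in zip(base, colour))

print(blend_pixel((120, 120, 120), (40, 40, 200), 128))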
<p>5.0 Applying new material/colour data to the base image Visualistic applies colour data to the base image in a way such that the lighting information/tone of the base image remains intact and only the colour is changed.</p>
<p>Existing applications replace the base image pixels with colour and lighting information generated from rendering 3D geometry.</p>
<p>The problem with this method is that lighting information stored within each original base image pixel is lost. The lighting data provides information on the highlights and shadows within the base image as well as the texture of a surface.</p>
<p>Visualistic does not remove the lighting information from the base image pixels; instead it simply changes the colour of each pixel. By using the existing lighting data Visualistic is able to produce photo-realistic versions of the base image with new material/colour data applied to surfaces (i.e. floors, walls, stairs, curtains, furniture, etc.). 5.1 Removing colour information from the base image Using the mask to determine which pixels are affected and by how much (described previously), the colour data in each pixel is converted to a gray value which represents the lighting value of that pixel.</p>
<p>The formula for calculating the lighting (gray) value of a pixel baseImagePx[r, g, b] is as follows:</p>
<p>lightingValue = (baseImagePx[r] + baseImagePx[g] + baseImagePx[b]) / 3</p>
<p>(The characters r, g, b represent the colour components red, green and blue of a pixel; a lighting value of l can also be expressed as [l, l, l]. This allows us to replace a colour pixel in the base image with a grey pixel.)</p>
<p>Once the lighting value has been calculated it replaces the original pixel within the base image and is ready to receive new colour data.</p>
<p>5.2 Overlaying new colour information on to the base image Once the existing colour has been removed from the pixels of the base image, the new colour data can be applied.</p>
<p>As with removing existing colour data the mask is used to determine which pixels are affected and by how much. The lighting value of each pixel is used to tone the colour data before it is applied to the base image.</p>
<p>The following algorithm is used for calculating the tone of a colour:</p>

# Read the pixel data from the workingImage and colourImage
workingPixel = workingImage.getpixel( ( x, y ) )
colourPixel = colourImage.getpixel( ( x, y ) )

# Calculate the pixel lighting (the average of the three colour components)
lighting = float( workingPixel[ 0 ] + workingPixel[ 1 ] + workingPixel[ 2 ] ) / 3

overlayPixel = [ 0, 0, 0 ]

# Calculate the overlay
if lighting < 128:
    # Multiply
    overlayPixel[ 0 ] = colourPixel[ 0 ] * ( lighting / 128.0 )
    overlayPixel[ 1 ] = colourPixel[ 1 ] * ( lighting / 128.0 )
    overlayPixel[ 2 ] = colourPixel[ 2 ] * ( lighting / 128.0 )
elif lighting > 128:
    # Subtract
    inversePixel = [ 0, 0, 0 ]
    lightingInverse = 255.0 - lighting
    inversePixel[ 0 ] = ( 255.0 - colourPixel[ 0 ] ) * lightingInverse
    inversePixel[ 1 ] = ( 255.0 - colourPixel[ 1 ] ) * lightingInverse
    inversePixel[ 2 ] = ( 255.0 - colourPixel[ 2 ] ) * lightingInverse
    overlayPixel[ 0 ] = 255 - ( inversePixel[ 0 ] / 128.0 )
    overlayPixel[ 1 ] = 255 - ( inversePixel[ 1 ] / 128.0 )
    overlayPixel[ 2 ] = 255 - ( inversePixel[ 2 ] / 128.0 )
else:
    overlayPixel[ 0 ] = colourPixel[ 0 ]
    overlayPixel[ 1 ] = colourPixel[ 1 ]
    overlayPixel[ 2 ] = colourPixel[ 2 ]

<p>(The characters r, g, b represent the colour components red, green and blue of a pixel.)</p>
<p>6.0 Final image manipulation Once new material/colour(s) have been applied to the base image it can be exported as a viewable image.</p>
<p>Visualistic supports several settings when exporting the final image: resizing, watermarking and optimizing.</p>
<p>6.1 Resizing The user can specify any final export size provided it is smaller than the original base image. The image cannot be resized larger as this would dramatically reduce quality. The resize uses a linear filter.</p>
<p>The user can specify: * A final size in width and height; * A final width or height, in which case the non-specified dimension is calculated automatically in line with the specified dimension to keep the aspect ratio; * A bounding width and height, in which case the image is resized to fit within the specified dimensions and the aspect ratio is maintained.</p>
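<p>A sketch of the latter two options (Python; the function names are illustrative assumptions):</p>

def from_width(src_w, src_h, new_w):
    # Derive the unspecified height from a requested width, keeping aspect
    return new_w, int(src_h * new_w / src_w)

def fit_within(src_w, src_h, box_w, box_h):
    # Resize to fit inside a bounding box while keeping the aspect ratio
    scale = min(box_w / float(src_w), box_h / float(src_h))
    return int(src_w * scale), int(src_h * scale)

print(from_width(2400, 1600, 600))       # -> (600, 400)
print(fit_within(2400, 1600, 600, 600))  # -> (600, 400)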
<p>6.2 Watermarking It is common practice for images, in particular those that will appear on the world wide web, to be watermarked to help prevent material theft.</p>
<p>Because Visualistic has an on-line interface, users can specify a watermark image and optionally a masking image, which is(are) applied to the final image automatically.</p>
<p>6.3 Optimizing The final image can be saved in a number of different image formats including JPG, PNG, TIF and BMP. Some of these formats support compression options. For example the JPG format supports lossy compression based on quality and is an ideal web format.</p>
<p>7.0 Remote services Visualistic has been designed to provide several remote services, i.e. ways of using the product that are available without having a copy of the Visualistic designer application.</p>
<p>7.1 Web based interface Visualistic provides an HTTP-accessible interface (XML is used to actually call the application) which allows view and surface material files to be sent to a server application, which then returns the result of combining the data within the two files.</p>
<p>The service supports caching previously generated images to improve performance.</p>
<p>The service can also load view and surface material files from remote locations via the HTTP protocol. View and surface material files are compressed using a binary zip format which provides maximum lossless compression, advantageously keeping the amount of data that has to be sent to the server to a minimum.</p>
<p>7.2 Pack and go As well as a web accessible interface, Visualistic also has a distributable viewer which can be packed with pre-built view and surface-material files and sent to users by any method (e.g. email, CD, etc.). The viewer's interface is XML based and can be customized by a designer.</p>
<p>The viewer is cross platform and can run on Windows, Mac and Linux operating systems.</p>
<p>Various modifications may be made to the present invention without departing from its scope. For example other applications for the present image forming system and method will be evident to the skilled person.</p>
<p>For example the present method has been described in the context of an image forming system for use in viewing the appearance of different floor coverings in a room depicted in an image. This may be for use in producing a final image for a brochure, for example. Alternatively it may be for use in producing a final image in a showroom where a salesperson wishes to create a final image showing one or more carpets or other floor coverings selected by a customer at the time of choosing a carpet.</p>
<p>Alternatively the application may be used by the customer themselves in a showroom or on the internet. Alternatively the application may be used to view other surface coverings on other surfaces - for example sand on a beach or different types of turf on a field/garden.</p>

Claims (1)

  1. <p>CLAIMS</p>
    <p>1. A method of using a base image, depicting a surface having a covering, to form a final image, depicting the surface and imitating the appearance of the covering in a desired aspect and altering the appearance of the covering in a further aspect, wherein the base image comprises a plurality of original pixels containing desired information relating to the desired aspect, the method comprising the step of forming the final image using further information relating to the further aspect along with the desired information.</p>
    <p>2. The method of claim 1 wherein the desired aspect makes the base image appear realistic and imitating the appearance of the covering in the desired aspect makes the final image appear realistic.</p>
    <p>3. The method of claim 1 or claim 2 wherein the desired aspect comprises lighting, tone, texture or any combination of these and the desired information comprises light, tone, texture information or any combination of these types of information.</p>
    <p>4. The method of any preceding claim wherein the further aspect comprises colour and the further information comprises colour information.</p>
    <p>5. The method of any preceding claim wherein the original pixels comprise undesired information relating to the further aspect.</p>
    <p>6. The method of claim 5 wherein the step of forming the final image comprises the steps of deleting the undesired information from the pixels of the original image and adding the further information to the pixels.</p>
    <p>7. The method of any of claims 1 to 5 wherein the step of forming the final image comprises the steps of obtaining the desired information from the pixels of the original image and generating new pixels of the final image taking into account the desired information.</p>
    <p>8. The method of claim 7 according to any of claims 3 to 6 wherein the step of obtaining the desired information comprises determining an average value of lightness per pixel.</p>
    <p>9. The method of claim 8 wherein the step of forming the final image comprises adding colour information in proportion dependent upon the determined average value of lightness per pixel.</p>
    <p>10. The method of any preceding claim wherein the step of altering the appearance of the covering comprises selecting an alternative covering having different said further information such that a user can view the appearance of a different covering on the surface.</p>
    <p>11. The method of any preceding claim comprising the further step of defining the surface within the base image.</p>
    <p>12. The method of claim 11 comprising the further step of applying a mask filter to more accurately define the surface within the base image.</p>
    <p>13. The method of any preceding claim wherein the base image comprises a plurality of surfaces and the step of forming the final image comprises using different further information relating to the further aspect for each surface.</p>
    <p>14. The method of any preceding claim wherein the base image comprises a plurality of surfaces forming a group of surfaces and the step of forming the final image comprises using the same further information relating to the further aspect for each surface in the group.</p>
    <p>15. The method of claim 14 comprising the step of selecting which of the plurality of surfaces form the group.</p>
    <p>16. The method of any preceding claim wherein the surface comprises a flat surface or a curved surface or any combination of these.</p>
    <p>17. The method of any preceding claim wherein the surface comprises a surface of a wall, floor, ceiling, window, door, building, vehicle, beach or any other suitable surface.</p>
    <p>18. The method of any preceding claim wherein the base image originates from a photograph.</p>
    <p>19. An image forming system comprising a processor arranged to carry out the method of any preceding claim.</p>
    <p>20. The system of claim 19 comprising a memory arranged to store information relating to alternative coverings.</p>
    <p>21. A method or system as hereinbefore described with reference to any one or more of the accompanying drawings.</p>
GB0603591A 2006-02-22 2006-02-22 Using a base image to form a final image Withdrawn GB2438260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0603591A GB2438260A (en) 2006-02-22 2006-02-22 Using a base image to form a final image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0603591A GB2438260A (en) 2006-02-22 2006-02-22 Using a base image to form a final image

Publications (2)

Publication Number Publication Date
GB0603591D0 GB0603591D0 (en) 2006-04-05
GB2438260A true GB2438260A (en) 2007-11-21

Family

ID=36178588

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0603591A Withdrawn GB2438260A (en) 2006-02-22 2006-02-22 Using a base image to form a final image

Country Status (1)

Country Link
GB (1) GB2438260A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080125A1 (en) * 2004-09-03 2006-04-13 Andy Shipman Carpet simulation method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060080125A1 (en) * 2004-09-03 2006-04-13 Andy Shipman Carpet simulation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
http://www.businesstn.com/pub/1_12/features/7641-1.html, December 2004 *

Also Published As

Publication number Publication date
GB0603591D0 (en) 2006-04-05


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)