AU2018203328A1 - System and method for aligning views of a graphical object - Google Patents

System and method for aligning views of a graphical object

Info

Publication number
AU2018203328A1
Authority
AU
Australia
Prior art keywords
fragment
view
warp
graphical object
texture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2018203328A
Inventor
Peter Alleine Fletcher
David Karlov
Timothy Stephen Mason
David Peter Morgan-Mar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2018203328A priority Critical patent/AU2018203328A1/en
Publication of AU2018203328A1 publication Critical patent/AU2018203328A1/en
Abandoned legal-status Critical Current


Landscapes

  • Image Generation (AREA)

Abstract

SYSTEM AND METHOD FOR ALIGNING VIEWS OF A GRAPHICAL OBJECT

A system and method of aligning views of a three-dimensional graphical object. The method comprises receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments; forming a first expanded view (720) for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment; and forming a second expanded view (720) for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment. The method further comprises aligning (730) the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment; determining (740) a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.

Description

SYSTEM AND METHOD FOR ALIGNING VIEWS OF A GRAPHICAL OBJECT

TECHNICAL FIELD

[0001] The invention relates generally to image processing and more specifically to registering object surfaces and capturing the surface appearance of materials using imaging methods.
BACKGROUND

[0002] Capturing a detailed three-dimensional (3D) model of an object has applications in many fields including industrial design, computer graphics, cultural heritage preservation, marketing and commerce, and other areas. A three-dimensional model of an object can be thought of as comprising the shape information plus the surface appearance information. The shape information is geometrical, whereas the surface appearance information governs how light interacts with the surface.
[0003] The surface appearance of materials produces the visual sensation of colour, but is not restricted to the diffusely reflected colour. Specular reflections add a visual sensation of glossiness, and details of the light reflection profile produce the wide variation in visual appearance of materials seen in the real world. Material surface appearance can be characterised by the bidirectional reflectance distribution function (BRDF). The BRDF is a reflectance function of six variables, with values from 0 to 1. The six variables comprise the incoming illumination ray angle (θi, φi), incoming wavelength λi, outgoing ray angle (θo, φo), and outgoing wavelength λo. In practice, when using camera data, the wavelength dependence is integrated into red, green, and blue colour channels by camera colour response functions.
[0004] To estimate the BRDF at a given point on the surface of a material, multiple colour samples are required from different illumination and viewing angles. A discrete number of samples can be interpolated by a BRDF model to produce an estimate of the full BRDF at the given surface point. The number of samples required is typically of the order of 10 or more. The more samples, the more accurate the interpolated BRDF will be.
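As a concrete illustration of fitting a discrete set of samples to a BRDF model, the following sketch fits a simple Lambertian-plus-Phong reflectance model to intensity samples of a single surface point by non-linear least squares. The model, the parameter names and the synthetic sample data are illustrative assumptions only; they are not the BRDF model used in the arrangements described.

```python
import numpy as np
from scipy.optimize import least_squares

def phong_brdf(params, light_dirs, view_dirs, normals):
    """Simple Lambertian + Phong lobe; params = (kd, ks, shininess)."""
    kd, ks, n = params
    # Diffuse term: cosine of the incident angle.
    cos_i = np.clip(np.einsum('ij,ij->i', light_dirs, normals), 0.0, 1.0)
    # Mirror reflection of each incoming light direction about the normal.
    refl = 2.0 * cos_i[:, None] * normals - light_dirs
    cos_r = np.clip(np.einsum('ij,ij->i', refl, view_dirs), 0.0, 1.0)
    return kd * cos_i + ks * cos_r ** n

def fit_brdf(samples, light_dirs, view_dirs, normals):
    """Fit model parameters to observed intensity samples at one surface point."""
    residual = lambda p: phong_brdf(p, light_dirs, view_dirs, normals) - samples
    result = least_squares(residual, x0=[0.5, 0.5, 10.0],
                           bounds=([0, 0, 1], [1, 1, 1000]))
    return result.x

# Example: ten synthetic samples of one surface point (unit vectors assumed).
rng = np.random.default_rng(0)
normals = np.tile([0.0, 0.0, 1.0], (10, 1))
light_dirs = rng.normal(size=(10, 3)); light_dirs[:, 2] = np.abs(light_dirs[:, 2])
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
view_dirs = rng.normal(size=(10, 3)); view_dirs[:, 2] = np.abs(view_dirs[:, 2])
view_dirs /= np.linalg.norm(view_dirs, axis=1, keepdims=True)
samples = phong_brdf([0.6, 0.3, 20.0], light_dirs, view_dirs, normals)
print(fit_brdf(samples, light_dirs, view_dirs, normals))
```

With roughly ten or more well-distributed samples, as noted above, the fitted parameters approach the values used to synthesise the data; fewer or poorly distributed samples make the fit unreliable.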
[0005] Characterising the surface appearance over the surface of a 3D object requires capturing multiple colour samples at many different points across the surface of the object. In many cases, the data collection can be combined with capture of the shape of the 3D object, as camera images may be used to estimate both the 3D geometry and either the diffuse colour or the BRDF.
[0006] One known method of estimating 3D object shape and BRDF is to assemble a rigid framework of cameras and lights, such that the cameras and lights are fixed in known relative positions and angles. The object to be scanned can be placed inside the framework and multiple images of the object can be captured by the cameras, under different, known, lighting conditions. The camera images can then be collated and used to assemble a 3D model of the object’s shape. Also, because the cameras are fixed and their positions known, the colour samples from the cameras can be collated with illumination and viewing angles to provide samples of the BRDF at points on the object surface, then these samples can be used to estimate the BRDF. The method has a disadvantage of requiring multiple cameras and lights in a large framework, much bigger than the object being scanned, as well as producing problems of expense and portability.
[0007] Another known method of estimating BRDF of a material sample is to place the sample under a kaleidoscope arrangement of mirrors, and illuminate the object with a projector with a similar virtual location to a camera by the use of a half-silvered mirror. The kaleidoscope projects the lights onto the sample from multiple angles, and also allows the camera to capture the appearance of a single point on the sample from multiple angles in one image. If the geometry of the kaleidoscope is known, the samples can be associated with incoming and outgoing light angles, thus providing multiple samples suitable for BRDF estimation. The method of BRDF estimation has a disadvantage of examining only a small patch of a surface and cannot reconstruct the geometry of a 3D object. Further, the method cannot simultaneously estimate multiple different BRDFs of different materials across an object, and incurs a disadvantage of requiring mirrors and a projector.
[0008] Another known method of estimating BRDF of a material sample is to capture images of the sample with a camera with and without flash illumination. If the material sample is statistically stationary over an extended area, then different parts of the sample captured by the camera can be considered to be views of a single point captured from different viewing angles, while the flash/no-flash image pair provides a diversity of lighting conditions. The views and lighting conditions provide a number of samples suitable for BRDF estimation. The method has a disadvantage of only estimating BRDF of an object that has an extended patch (with linear dimensions of the order of the distance of the camera from the surface) with the same reflectance characteristics across the surface. The method also cannot reconstruct the 3D geometry of an object.
[0009] One other known method of capturing the 3D shape of an object is to capture multiple images of the object from different viewing angles, and then to apply a shape estimation algorithm such as structure from motion. The method can make use of hand-held photos from an off-the-shelf camera. However, state of the art structure from motion algorithms have residual errors in the camera pose estimation which result in multi-pixel alignment errors between the various camera views. A single view may be selected to provide an estimate of the diffuse colour across the object, but the misalignment prevents multiple views from being collated into samples suitable for BRDF estimation. Attempting to estimate BRDF from known multiple image systems typically results in large errors.
[00010] There is need for a method and system that can capture both the 3D shape of an object and multiple well-aligned samples of densely distributed points on the surface to enable accurate BRDF estimation at all of those points, using relatively inexpensive and lightweight equipment, such as an off-the-shelf consumer camera.
SUMMARY

[00011] It is an object of the present invention to substantially overcome, or at least ameliorate, at least one disadvantage of present arrangements.
[00012] One aspect of the present disclosure provides a method of aligning views of a threedimensional graphical object, the method comprising: receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments; forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment; forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment; aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment; determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.
[00013] In another aspect, the second view of the second fragment is a reference view for the second fragment.
[00014] In another aspect, the method further comprises selecting, from the determined warp map, warp vectors corresponding to the second fragment for placement in a warp atlas associated with the first view of the graphical object.
[00015] In another aspect, determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment further comprises regularizing warp vectors of the determined warp map between pixels of the first view of the second fragment and the second view of the second fragment.
[00016] In another aspect, the regularization comprises convolving the warp vectors using a Gaussian blur kernel with an RMS width associated with a width of expansion of the first and second views.
[00017] In another aspect, the method further comprises warping a texture atlas associated with the second fragment using the determined warp map.
[00018] In another aspect, the method further comprises determining reflectance properties of the graphical object based on the aligned first and second views of the graphical object using a predetermined model relating the aligned views of the graphical object and a lighting angle.
[00019] In another aspect, the method further comprises rendering the graphical object based on the alignment.
[00020] In another aspect, determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises determining relative displacement, orientation and scale in the texture data between the first fragment and the second fragment.
[00021] In another aspect, determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises transferring warp vectors from the warp map of the first fragment to a boundary of the second fragment based on the geometrical arrangement of the fragments in the texture data.
[00022] In another aspect, determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises transferring warp vectors from the warp map of the first fragment to a boundary of the second fragment and transforming the warp vectors based upon a relative displacement, orientation and scale in the texture data between the first fragment and the second fragment.
[00023] In another aspect, the first expanded view is formed based on a margin of the first fragment associated with the second fragment.
[00024] In another aspect, the first expanded view and the second expanded view are aligned using a multi-modal alignment method.
[00025] In another aspect, the alignment estimates a warp map using covariance-based mutual information for pixel locations within the overlapping region between the first and second views of the first fragment.
[00026] In another aspect, the method further comprises determining geometric information for each of the fragments using a parametric representation of the graphical object.
[00027] In another aspect, the method further comprises determining a pixel position in the parametric representation corresponding to a selected position on the graphical object; determining a plurality of pixel values for the determined pixel position using the aligned first and second views of the graphical object and corresponding viewpoints; and determining reflectance properties of the graphical object using a predetermined model relating the determined plurality of pixel values to a plurality of viewpoints and a lighting angle.
[00028] In another aspect, the pixel position corresponding to the selected position on the graphical object is determined using a mapping from the three-dimensional structure of the graphical object to a parametric representation of the graphical object.
[00029] Another aspect of the present disclosure provides a non-transitory computer readable storage medium storing program instructions aligning views of a three-dimensional graphical object, the program comprising: code for receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of
fragments; code for forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment; code for forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment; code for aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment; code for determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and code for aligning the first and second views of the graphical object based on the determined warp map.
[00030] Another aspect of the present disclosure provides apparatus, comprising: a processor; and a memory device storing a software program for directing the processor to perform a method aligning views of a three-dimensional graphical object, the method comprising: receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments; forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment; forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment; aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment; determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.
[00031] Another aspect of the present disclosure provides a system comprising: a processor; and a memory device storing a software program for directing the processor to perform a method comprising the steps of: receiving a plurality of views of a graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments; forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment; forming a second expanded
view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment; aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment; determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.
BRIEF DESCRIPTION OF THE DRAWINGS

[00032] One or more embodiments of the invention will now be described with reference to the following drawings, in which:
[00033] Fig. 1 is a schematic illustration of an image capture system for capturing threedimensional geometry information and colour sample information from a three-dimensional object;
[00034] Fig. 2 shows a method of estimating three-dimensional geometry information and colour sample information from a three-dimensional object;
[00035] Fig. 3 shows a method of generating a three-dimensional mesh data structure and a set of texture atlases describing a three-dimensional object;
[00036] Fig. 4 shows a data flow for a method of cross-view alignment;
[00037] Fig. 5 shows a method of aligning views of a graphical object;
[00038] Fig. 6 shows a method of selecting one view of a fragment chosen from a set of texture atlases;
[00039] Fig. 7 shows a method of aligning a texture atlas to a master atlas according to the method of Fig. 5;
[00040] Figs. 8A to 8C show a three-dimensional mesh data structure;
[00041] Figs. 9A and 9B collectively form a schematic block diagram representation of an electronic device upon which described arrangements can be practised;
[00042] Fig. 10 shows a schematic flow diagram showing a method of augmenting a mesh data structure;
[00043] Figs. 11A to 11E show an augmented three-dimensional mesh data structure;
[00044] Fig. 12 shows a method of generating a margin image;
[00045] Fig. 13 shows a neighbourhood query method used in Fig. 12;
[00046] Figs. 14A to 14C show a structure of working data associated with a 3D mesh;
[00047] Figs. 15A and 15B show geometric information as used in one implementation of the method of Fig. 5;
[00048] Fig. 16 shows geometric information as used in a further implementation of the method of Fig. 5;
[00049] Fig. 17 shows a method of generating geometric information as used in one implementation of the method of Fig. 5;
[00050] Fig. 18 shows a method of aligning a texture atlas to a master atlas as used in a further implementation of the method of Fig. 5;
[00051] Fig. 19 shows a method of adding fragment links, as used in the method of Fig. 13;
[00052] Fig. 20 shows a method of determining an anchor point, as used in the method of Fig. 13;
[00053] Fig. 21 shows a method of sampling from a neighbourhood query region; and

[00054] Fig. 22 shows a method of rendering an object using the arrangements described.
DETAILED DESCRIPTION INCLUDING BEST MODE

[00055] Where reference is made in any one or more of the accompanying drawings to steps and/or features, which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.
[00056] Fig. 1 illustrates an imaging system 100 for capturing three-dimensional geometry information and colour sample information from a three-dimensional object 110. An image of the object 110 is captured from each of N different camera positions 120, 130 and 140. The arrangements described use the camera positions 120, 130 and 140 as an example for N = 3. However, other camera positions (not shown) may also be used, provided there are at least two camera positions; the number of camera positions or viewpoints is typically larger. The number of camera positions or viewpoints required typically increases with the complexity of the texture of the graphical object. The images may be captured by multiple cameras, one camera occupying each of the camera positions 120, 130 and 140, by a single camera moved serially to each of the camera positions 120, 130 and 140, or by an intermediate number of cameras moved serially such that at least one image is captured from each camera position 120, 130 and 140. The cameras may be any image capture device, such as a digital camera, a video camera or the like.
[00057] The set of N images of the object 110 captured from the N camera positions 120, 130 and 140 can be used to estimate the three-dimensional geometry of the object 110 and the surface material appearance at points on the surface of the object 110.
[00058] Fig. 2 is a schematic illustration of a data flow 200 for an object estimation method 300 as illustrated in Fig. 3. The object 110 is the three-dimensional object being captured. A set of N images 220 is produced by the camera or cameras capturing images from the N camera positions 120, 130 and 140. A 3D mesh data structure 230 is produced from the set of N images 220.
[00059] The 3D mesh data structure 230 is comprised of a plurality of faces modelling the surface shape of the object 110, represented diagrammatically in Fig. 2 by cross-hatching. The faces are typically triangular, but may be another shape. As described below with reference to step 340 of Fig. 3, the surface of the mesh 230 is divided into fragments 240 and has a mapping or parameterisation into UV space 250 relating 3D co-ordinates of the mesh surface to (u, v)
coordinates on one or more two-dimensional texture images. The mapping or parameterisation is referred to herein as a mapping function F.
[00060] As described below with reference to step 350 of Fig. 3, each image in the set of N images 220 is projected on to the surface of the 3D mesh 230 and then into UV space 250 via the mapping function F to produce a set of texture atlases 260. Each texture atlas corresponds to surface information mapped from a particular camera view such as 120, 130, or 140, and is stored as a texture image. Thus the set of N images 220 produces a set of N corresponding texture atlases 260.
[00061] Fig. 8A shows a data structure 800 of the 3D mesh 230. The structure 800 includes a 3D mesh data structure 820 comprising a vertex array 801, a face array 802, and one or more texture images 803. The vertex array 801 is an array of vertices. An example vertex 810 is shown in Fig. 8B. As shown in Fig. 8B, each vertex comprises an x 811, y 812, and z 813 coordinate value. The values 811 to 813 define the location of the vertex 810 in the 3D space of the mesh 230.
[00062] The face array 802 is an array of faces, an example face 830 being shown in Fig. 8C. As shown in Fig. 8C, each face comprises three references to vertices (Vref0 831, Vref1 832, and Vref2 833). Each of the references 831 to 833 is an integer index into the corresponding vertex array 801. Three vertex references are used where the faces, such as the face 830, are triangular. Four vertex references may be used for quadrangular faces. Other numbers of vertex references are also possible.
[00063] Each face further comprises a set of UV co-ordinates for each of the vertex references. For example, in Fig. 8C, u0 834 and v0 835 correspond to Vref0 831. Additionally, u1 836 and v1 837 correspond to Vref1 832, and u2 838 and v2 839 correspond to Vref2 833. The UV coordinates 834 to 839 store the mapping function F from a triangular face, the face 830 on the 3D surface of the mesh 230, to a triangle in the 2D co-ordinate system of the texture image 803. The texture elements corresponding to positions along the edges and in the interior of face 830 are found by interpolating the UV space triangle formed by the UV co-ordinates 834-839 of the face 830. The UV co-ordinates 834-839 may be stored directly as UV co-ordinates in the face 830, or may be stored as UV references which index into a UV co-ordinate table stored separately in the mesh data structure 800.
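A minimal sketch of how the mesh data structure of Figs. 8A to 8C might be represented in code, assuming triangular faces and per-face UV co-ordinates; the class and function names are hypothetical, and interpolation within a face is shown via barycentric weights.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Face:
    vrefs: tuple          # (Vref0, Vref1, Vref2): indices into the vertex array
    uvs: np.ndarray       # 3x2 array of (u, v) co-ordinates, one per vertex reference

@dataclass
class Mesh:
    vertices: np.ndarray  # Nx3 array of (x, y, z) vertex positions
    faces: list           # list of Face records
    texture: np.ndarray   # HxWx3 texture image

def uv_for_point(face, barycentric):
    """Interpolate the UV co-ordinates of a point inside a face (mapping F)."""
    w0, w1, w2 = barycentric
    return w0 * face.uvs[0] + w1 * face.uvs[1] + w2 * face.uvs[2]

# Example: a single triangle occupying the lower-left half of the texture.
mesh = Mesh(
    vertices=np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]]),
    faces=[Face(vrefs=(0, 1, 2),
                uvs=np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))],
    texture=np.zeros((256, 256, 3), dtype=np.uint8),
)
print(uv_for_point(mesh.faces[0], (1/3, 1/3, 1/3)))  # face centroid maps to the UV centroid
```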
[00064] Figs. 9A and 9B depict a general-purpose computer system 900, upon which the various arrangements described can be practiced.
[00065] As seen in Fig. 9A, the computer system 900 includes: a computer module 901; input devices such as a keyboard 902, a mouse pointer device 903, a scanner 926, a camera 927, and a microphone 980; and output devices including a printer 915, a display device 914 and loudspeakers 917. An external Modulator-Demodulator (Modem) transceiver device 916 may be used by the computer module 901 for communicating to and from a communications network 920 via a connection 921. The communications network 920 may be a wide-area network (WAN), such as the Internet, a cellular telecommunications network, or a private WAN. Where the connection 921 is a telephone line, the modem 916 may be a traditional “dial-up” modem. Alternatively, where the connection 921 is a high capacity (e.g., cable) connection, the modem 916 may be a broadband modem. A wireless modem may also be used for wireless connection to the communications network 920.
[00066] The computer module 901 typically includes at least one processor unit 905, and a memory unit 906. For example, the memory unit 906 may have semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The computer module 901 also includes a number of input/output (I/O) interfaces including: an audio-video interface 907 that couples to the video display 914, loudspeakers 917 and microphone 980; an I/O interface 913 that couples to the keyboard 902, mouse 903, scanner 926, camera 927 and optionally a joystick or other human interface device (not illustrated); and an interface 908 for the external modem 916 and printer 915. In some implementations, the modem 916 may be incorporated within the computer module 901, for example within the interface 908. The computer module 901 also has a local network interface 911, which permits coupling of the computer system 900 via a connection 923 to a local-area communications network 922, known as a Local Area Network (LAN). As illustrated in Fig. 9A, the local communications network 922 may also couple to the wide network 920 via a connection 924, which would typically include a so-called “firewall” device or device of similar functionality. The local network interface 911 may comprise an Ethernet circuit card, a Bluetooth® wireless arrangement or an IEEE 802.11 wireless arrangement; however, numerous other types of interfaces may be practiced for the interface 911.
[00067] The set of images 220 can be captured by one or more remote cameras 995 at the positions 120, 130 and 140 and received via the network 920. The remote cameras 995 can be
any digital image capture device such as a digital camera, a smartphone and the like.
Alternatively, the set of images may be stored on a memory of a server computer 997, for example a cloud server, and received at the computer module 901 via the network 920. In other arrangements, the set of images 220 may be stored in the memory 906.

[00068] The I/O interfaces 908 and 913 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 909 are provided and typically include a hard disk drive (HDD) 910. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 912 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD, Blu-ray Disc™), USB-RAM, portable external hard drives, and floppy disks, for example, may be used as appropriate sources of data to the system 900.
[00069] The components 905 to 913 of the computer module 901 typically communicate via an interconnected bus 904 and in a manner that results in a conventional mode of operation of the computer system 900 known to those in the relevant art. For example, the processor 905 is coupled to the system bus 904 using a connection 918. Likewise, the memory 906 and optical disk drive 912 are coupled to the system bus 904 by connections 919. Examples of computers on which the described arrangements can be practised include IBM-PC’s and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems.
[00070] The methods described may be implemented using the computer system 900 wherein the processes of Figs. 3, 5-7, 10-13 and 17-22 to be described, may be implemented as one or more software application programs 933 executable within the computer system 900. In particular, the steps of the methods described are effected by instructions 931 (see Fig. 9B) in the software 933 that are carried out within the computer system 900. The software instructions 931 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules performs the described methods and a second part and the corresponding code modules manage a user interface between the first part and the user.
[00071] The software may be stored in a computer readable medium, including the storage devices described below, for example. The software is loaded into the computer system 900
from the computer readable medium, and then executed by the computer system 900. A computer readable medium having such software or computer program recorded on the computer readable medium is a computer program product. The use of the computer program product in the computer system 900 preferably effects an advantageous apparatus for aligning views of a graphical object.
[00072] The software 933 is typically stored in the HDD 910 or the memory 906. The software is loaded into the computer system 900 from a computer readable medium, and executed by the computer system 900. Thus, for example, the software 933 may be stored on an optically readable disk storage medium (e.g., CD-ROM) 925 that is read by the optical disk drive 912. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer system 900 preferably effects an apparatus for aligning views of a graphical object.
[00073] In some instances, the application programs 933 may be supplied to the user encoded on one or more CD-ROMs 925 and read via the corresponding drive 912, or alternatively may be read by the user from the networks 920 or 922. Still further, the software can also be loaded into the computer system 900 from other computer readable media. Computer readable storage media refers to any non-transitory tangible storage medium that provides recorded instructions and/or data to the computer system 900 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, DVD, Blu-ray™ Disc, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external of the computer module 901. Examples of transitory or non-tangible computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 901 include radio or infra-red transmission channels as well as a network connection to another computer or networked device, and the Internet or Intranets including e-mail transmissions and information recorded on Websites and the like.
[00074] The second part of the application programs 933 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 914. Through manipulation of typically the keyboard 902 and the mouse 903, a user of the computer system 900 and the application may manipulate the interface in a functionally adaptable manner to provide
controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 917 and user voice commands input via the microphone 980.
[00075] Fig. 9B is a detailed schematic block diagram of the processor 905 and a “memory” 934. The memory 934 represents a logical aggregation of all the memory modules (including the HDD 909 and semiconductor memory 906) that can be accessed by the computer module 901 in Fig. 9A.
[00076] When the computer module 901 is initially powered up, a power-on self-test (POST) program 950 executes. The POST program 950 is typically stored in a ROM 949 of the semiconductor memory 906 of Fig. 9A. A hardware device such as the ROM 949 storing software is sometimes referred to as firmware. The POST program 950 examines hardware within the computer module 901 to ensure proper functioning and typically checks the processor 905, the memory 934 (909, 906), and a basic input-output systems software (BIOS) module 951, also typically stored in the ROM 949, for correct operation. Once the POST program 950 has run successfully, the BIOS 951 activates the hard disk drive 910 of Fig. 9A. Activation of the hard disk drive 910 causes a bootstrap loader program 952 that is resident on the hard disk drive 910 to execute via the processor 905. This loads an operating system 953 into the RAM memory 906, upon which the operating system 953 commences operation. The operating system 953 is a system level application, executable by the processor 905, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.
[00077] The operating system 953 manages the memory 934 (909, 906) to ensure that each process or application running on the computer module 901 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 900 of Fig. 9A must be used properly so that each process can run effectively. Accordingly, the aggregated memory 934 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 900 and how such is used.
[00078] As shown in Fig. 9B, the processor 905 includes a number of functional modules including a control unit 939, an arithmetic logic unit (ALU) 940, and a local or internal memory 948, sometimes called a cache memory. The cache memory 948 typically includes a number of storage registers 944 - 946 in a register section. One or more internal busses 941 functionally interconnect these functional modules. The processor 905 typically also has one or more interfaces 942 for communicating with external devices via the system bus 904, using a connection 918. The memory 934 is coupled to the bus 904 using a connection 919.
[00079] The application program 933 includes a sequence of instructions 931 that may include conditional branch and loop instructions. The program 933 may also include data 932 which is used in execution of the program 933. The instructions 931 and the data 932 are stored in memory locations 928, 929, 930 and 935, 936, 937, respectively. Depending upon the relative size of the instructions 931 and the memory locations 928-930, a particular instruction may be stored in a single memory location as depicted by the instruction shown in the memory location 930. Alternately, an instruction may be segmented into a number of parts each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 928 and 929.
[00080] In general, the processor 905 is given a set of instructions which are executed therein. The processor 905 waits for a subsequent input, to which the processor 905 reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 902, 903, data received from an external source across one of the networks 920, 922, data retrieved from one of the storage devices 906, 909 or data retrieved from a storage medium 925 inserted into the corresponding reader 912, all depicted in Fig. 9A. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 934.
[00081] The described arrangements use input variables 954, which are stored in the memory 934 in corresponding memory locations 955, 956, 957. The described arrangements produce output variables 961, which are stored in the memory 934 in corresponding memory locations 962, 963, 964. Intermediate variables 958 may be stored in memory locations 959, 960, 966 and 967.
[00082] Referring to the processor 905 of Fig. 9B, the registers 944, 945, 946, the arithmetic logic unit (ALU) 940, and the control unit 939 work together to perform sequences of micro-operations needed to perform “fetch, decode, and execute” cycles for every instruction in the instruction set making up the program 933. Each fetch, decode, and execute cycle comprises:
a fetch operation, which fetches or reads an instruction 931 from a memory location 928, 929, 930;
a decode operation in which the control unit 939 determines which instruction has been fetched; and an execute operation in which the control unit 939 and/or the ALU 940 execute the instruction.
[00083] Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 939 stores or writes a value to a memory location 932.
[00084] Each step or sub-process in the processes of Figs. 3, 5-7, 10-13 and 17-22 is associated with one or more segments of the program 933 and is performed by the register section 944, 945, 946, the ALU 940, and the control unit 939 in the processor 905 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 933.
[00085] The method of aligning views of the graphical object may alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of the method. Such dedicated hardware may include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
[00086] Fig. 3 shows the object estimation method 300 of generating the three-dimensional mesh data structure 800 for the mesh 230 that estimates the three-dimensional shape of the object 110 and the set of texture atlases 260 that show the surface colour appearance of the object 110 from each of the camera positions 120, 130 and 140, given the set of images 220 of the object 110 captured from the camera positions 120, 130 and 140.
[00087] The method 300 is typically implemented as one or more modules of the application
933, executed under control of the processor 905 and stored in the memory 906.
[00088] The object estimation method 300 begins with a data receiving step 310. In execution of step 310, the set of images 220 captured from the camera positions 120, 130 and 140 is received by the processor 905. As described above, the set of images 220 may be received from a camera or from a memory of a server computer (such as the server 997) or the memory 906.
[00089] The method 300 progresses from step 310 to a structure from motion step 320. In execution of step 320, the processor 905 applies a structure from motion algorithm to determine a point cloud representation of the three-dimensional structure of the object 110, and an estimate of the pose (position and orientation in three dimensions) of the camera for each of the camera positions 120, 130 and 140. Structure from motion is a class of techniques used to reconstruct 3D geometry from two-dimensional (2D) images of objects captured from different viewing positions, using photogrammetric alignment of correspondence points. Structure from motion algorithms operate similarly to 3D reconstruction from stereographic imaging. The structure from motion algorithm may be an implementation as available in commercial software packages (for example Agisoft PhotoScan) or open source libraries (for example OpenCV).
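A hedged sketch of the two-view core of such a pipeline using OpenCV two-view geometry (feature matching, essential matrix estimation, pose recovery and triangulation); a full structure from motion implementation additionally tracks features across all N views and refines the poses with bundle adjustment, which is omitted here. The camera intrinsic matrix K is assumed known.

```python
import cv2
import numpy as np

def relative_pose_and_points(img1, img2, K):
    """Estimate the relative camera pose and a sparse point cloud from two views."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    # The essential matrix encodes the relative pose up to scale.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Triangulate the inlier correspondences to obtain a sparse point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    cloud = (pts4d[:3] / pts4d[3]).T
    return R, t, cloud
```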
[00090] The method 300 continues from step 320 to a meshing step 330. In execution of step 330, the point cloud representation is converted to a mesh representation 230 of the surface shape of the object 110 using a 3D mesh data structure 800. The mesh generating algorithm may be an implementation as available in commercial software packages (for example Agisoft PhotoScan) or open source libraries (for example OpenCV).
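As one possible stand-in for this meshing step, the sketch below converts a point cloud to a triangle mesh using Poisson surface reconstruction from the open-source Open3D library; the library choice and the depth and trimming parameters are assumptions, not the implementations referred to above.

```python
import numpy as np
import open3d as o3d

def mesh_from_point_cloud(points):
    """Convert an Nx3 point cloud from structure from motion into a triangle mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    # Normals are required by Poisson surface reconstruction.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)
    # Trim poorly supported vertices in regions with few input points.
    d = np.asarray(densities)
    mesh.remove_vertices_by_mask(d < np.quantile(d, 0.02))
    return mesh
```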
[00091] The method 300 continues from step 330 to a parametrisation step 340. In execution of step 340, the surface of the mesh 230 is split into the fragments 240 and parametrised so that the surface is mapped by a mapping function F to the two-dimensional parametric UV space 250. Splitting the parameterisation into fragments is advantageous compared to parametrising the mesh as a single contiguous surface, because this typically allows each fragment to have less distortion in the mapping from 3D to 2D. In particular, the use of fragments enables setting a maximum allowable distortion amount. Low distortion is desirable to reduce errors caused by interpolating texture pixels. The parametrisation algorithm may be an implementation as available in commercial software packages (for example Agisoft PhotoScan) or open source libraries (for example OpenCV).
[00092] In addition to producing the mapping function F, the parameterisation step 340 may also create a fragment identifier (ID) mask image in UV space. Each pixel of the fragment ID mask image is assigned a numeric value equal to a numeric identifier (ID) of the fragment in which the pixel lies, or a pre-determined value such as 0 for pixels which are not in any fragment. The mask image is not created in some implementations of step 340.
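One plausible way to produce such a fragment ID mask is to rasterise each face's UV triangle into an integer image filled with the numeric ID of the fragment containing the face, leaving 0 elsewhere. The sketch below uses a simple edge-function rasteriser; the fragment_of_face input mapping is a hypothetical argument.

```python
import numpy as np

def rasterise_triangle(mask, uv_triangle, value):
    """Fill one UV triangle in the mask with the given fragment ID."""
    h, w = mask.shape
    pts = uv_triangle * [w - 1, h - 1]            # pixel-space corners
    x0, y0 = np.floor(pts.min(axis=0)).astype(int)
    x1, y1 = np.ceil(pts.max(axis=0)).astype(int)
    ys, xs = np.mgrid[y0:y1 + 1, x0:x1 + 1]
    # Signed-area (edge function) test against the three triangle edges.
    def edge(a, b):
        return (xs - a[0]) * (b[1] - a[1]) - (ys - a[1]) * (b[0] - a[0])
    e0, e1, e2 = edge(pts[0], pts[1]), edge(pts[1], pts[2]), edge(pts[2], pts[0])
    inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
    mask[ys[inside], xs[inside]] = value

def make_fragment_id_mask(face_uvs, fragment_of_face, width, height):
    """Build the fragment ID mask: 0 outside all fragments, the fragment ID elsewhere."""
    mask = np.zeros((height, width), dtype=np.int32)
    for uv_triangle, frag_id in zip(face_uvs, fragment_of_face):
        rasterise_triangle(mask, np.asarray(uv_triangle, dtype=float), frag_id)
    return mask

# Example: two faces belonging to fragments 1 and 2 in a 64x64 atlas.
tri_a = [[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]]
tri_b = [[1.0, 1.0], [0.5, 1.0], [1.0, 0.5]]
print(np.unique(make_fragment_id_mask([tri_a, tri_b], [1, 2], 64, 64)))  # -> [0 1 2]
```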
[00093] The method 300 continues from step 340 to a projection step 350. In execution of step 350, each image from the set of images 220 captured from the camera positions 120, 130 and 140 is projected on to the surface of the 3D mesh data structure 230 and then via the mapping function F on to the two-dimensional UV space 250 to produce the texture atlas 260 for (associated with) a particular camera position or a camera viewpoint. Projection from the captured images 220 to the mesh 230 can be performed by ray casting each image pixel to determine which visible mesh triangle the pixel intersects, then performing a weighted colour averaging of all colour samples assigned to each mesh triangle. The mapping function F defines projection onto the UV plane 250, which may require interpolation. The projection algorithm may be an implementation as available in commercial software packages (for example Agisoft PhotoScan) or open source libraries (for example OpenCV).
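A simplified per-triangle sketch of this projection, assuming the face's projected triangle in the captured image and its UV-space triangle are both known in pixel co-ordinates; visibility testing, ray casting and the weighted averaging of overlapping samples described above are omitted.

```python
import cv2
import numpy as np

def project_face_to_atlas(image, img_triangle, uv_triangle, atlas):
    """Copy one face's pixels from a captured image into the texture atlas.

    img_triangle : 3x2 pixel positions of the face projected into the camera image
    uv_triangle  : 3x2 pixel positions of the same face in the texture atlas
    """
    src = np.float32(img_triangle)
    dst = np.float32(uv_triangle)
    # Affine transform taking the image-space triangle to the UV-space triangle.
    A = cv2.getAffineTransform(src, dst)
    h, w = atlas.shape[:2]
    warped = cv2.warpAffine(image, A, (w, h))
    # Restrict the copy to the interior of the destination triangle.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(np.round(dst)), 255)
    atlas[mask > 0] = warped[mask > 0]
    return atlas
```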
[00094] The object estimation method 300 then terminates at step 399. The output from the object estimation method 300 is the 3D mesh 230 having data structure 800 representing the three-dimensional shape of the object 110, a set of texture atlases 260 comprising the colour information from the set of captured images mapped into UV space 250, and the set of camera poses, one for each camera position 120, 130, and 140. Each texture atlas corresponds to one of the captured images.
[00095] Given the outputs of the method 300, it is possible to project the colour information from each captured image on to the surface of the 3D mesh data structure 230, via the inverse mapping function F⁻¹. The inverse mapping function F⁻¹ operates to determine the colour seen at a given point on the three-dimensional object from the corresponding camera view (without reference to the original captured images 220). As described above, using structure from motion for alignment typically results in a problem of multi-pixel alignment errors between the various camera views (such as the viewpoints 120, 130 and 140). Typically, the colour information from each camera view is substantially misaligned with respect to the other camera views by multiple pixels. The misalignment means that colour samples from different camera views cannot be combined to provide accurate diffuse colour estimates or samples suitable for estimating the
BRDFs of points on the object 110. Attempting to solve the multi-pixel misalignment problem by aligning the texture atlases 260 with one another (prior to the inverse projection onto the mesh) using two-dimensional image alignment methods typically fails for the following reasons. Firstly, images captured from different camera views may contain differences in specular reflection or other reflection properties caused by complex BRDFs that change the colour and/or luminance of the surface. Secondly, the mapping F of the mesh representation 230 surface to the UV plane 250 is discontinuous, non-linear, and typically does not respect topology or orientation. Accordingly, points nearby one another in the texture atlas 260 may map to widely spaced points on the three-dimensional object 110. Furthermore, misalignment warp vectors of nearby points in the texture atlas set 260 may have different magnitudes and directions.
[00096] To accurately estimate the material BRDF at densely distributed points on an object, multiple image samples of each point from different camera views are required, with alignment errors of less than a pixel. The image samples from each viewpoint are referred to as “views” of the object. Solving the full three-dimensional alignment of all camera views is a non-linear problem in a high-dimensional space. Accordingly, achieving the required level of accuracy is typically computationally difficult. If the texture atlases 260 can be aligned with sub-pixel accuracy in two dimensions, then the texture atlases 260 can provide multiple colour samples suitable for dense BRDF estimation.
[00097] The alignment is achieved in the arrangements described using algorithms which determine warp vectors between corresponding pixels in pairs of texture maps. The resulting collection of warp vectors describes a warp between these pairs of texture maps.
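In code, such a warp can be stored as a dense per-pixel field of warp vectors and applied with a remapping operation; the sketch below assumes the convention that the vector at an output pixel points to the source pixel in the texture being warped.

```python
import cv2
import numpy as np

def apply_warp_map(texture, warp):
    """Warp a texture atlas with a dense warp map.

    texture : HxWx3 texture atlas to be warped
    warp    : HxWx2 array of warp vectors; warp[y, x] = (dx, dy) such that the
              output pixel (x, y) is sampled from (x + dx, y + dy) in `texture`.
    """
    h, w = texture.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = xs + warp[..., 0].astype(np.float32)
    map_y = ys + warp[..., 1].astype(np.float32)
    return cv2.remap(texture, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Example: a constant one-pixel shift to the right.
tex = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
shift = np.zeros((64, 64, 2), dtype=np.float32)
shift[..., 0] = 1.0
warped = apply_warp_map(tex, shift)
```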
[00098] To overcome the problems of aligning the texture atlases 260 requires an alignment method that is (a) non-rigid, to handle the non-rigid deformations introduced by mapping a three-dimensional manifold to a two-dimensional plane, (b) sensitive to the matching of geometric image features in common, such as distinctive feature points, but relatively insensitive to wide variations in image colour and luminance, and (c) able to deal with discontinuities in warp vector magnitude and direction without influencing the warp vectors in adjacent pixels.
[00099] If registration of texture atlases results in the displacement of fragment pixels at a position close to the fragment boundary, then the pixels close to the corresponding boundary in
an adjoining fragment must be displaced in a corresponding fashion, such that if the fragments were to be mapped back onto the mesh, then no discontinuity would result in the seams between fragments. Because registration can be less accurate at the edges of fragments where there is less context for the alignment, registration problems at seams are common. A method of reducing seam discontinuities by increasing alignment context and maintaining consistency between fragments to improve the quality of fragment seam registration between views (i.e. between different texture atlases) is described herein. No seam discontinuities exist in individual views from the initial parametrization of the mesh, but seam discontinuities will be introduced by any registration method that does not take account of pixels displaced near fragment boundaries and in which the pixel displacement in one fragment does not match the pixel displacement in a close neighbouring fragment. Where mismatching of pixel displacement occurs, the discontinuity in pixel displacements between fragments results in a discontinuity in texture appearance near fragment boundaries in registered views.
[000100] Fig. 4 is a schematic diagram illustrating a data flow 400 of a cross-view alignment method 500 as described below with reference to Fig. 5. As shown in Fig. 4, the UV space 250 is divided by the cross-view alignment method 500 into a set of fragments 420. The set of N texture atlases 260 is processed by the cross-view alignment method 500 to choose a texture atlas containing the best view of a subset of fragments 470 to form a master atlas 480.
[000101] Fig. 5 shows the cross-view alignment method 500 of aligning a set of texture atlases 260 to produce substantially aligned colour samples suitable for dense BRDF estimation of the surface material of the object 110. The method 500 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000102] Inputs to the cross-view alignment method 500 are: the 3D mesh 230 (having the data structure 800) of the object 110, a mapping of the mesh surface to a parametric representation in a two-dimensional UV space 250 according to a mapping function F, and a set of texture atlases 260, each texture atlas corresponding to a single camera view. The method 500 also receives the estimated poses output by the method 300.
[000103] The cross-view alignment method 500 begins with a fragment geometry determining step 510. In execution of step 510, fragments arising in the UV parameterisation according to the mapping 250 are identified, and geometric information for each fragment is determined.
The geometric information is used to relate each fragment in a parameterisation geometrically with the fragment’s neighbours. The geometric information includes information about the size, position, adjacency, and relative orientation of the fragment in question, and is used for several purposes as described below. Step 510 operates to generate the set of fragments 420. The determined geometric information represents a geometric arrangement of the fragments to form the graphical object.
[000104] Figs. 15 and 16 show examples of geometric information determined at step 510. A geometry sub-image which holds the geometric information may be generated by creating a margin image using a neighbourhood query margin image method 1200, described below with reference to Fig. 12.
[000105] The method 500 continues from step 510 to a view choosing step 520. In execution of step 520, for each of the fragments 420 defined in the fragment geometry determination step 510, one view of that fragment is chosen from the set of texture atlases 260 and designated a “best view” 470 of that fragment. The “best view” effectively provides a reference texture image fragment for use in cross-view alignment to determine reflectance properties of the surface of the object 110. The best view may also be referred to as a reference view or a master view. A method of choosing which view is the best view 470 for a given fragment is described below with reference to Fig. 6. An output of step 520 is a set of fragments defined in the fragment geometry calculation step 510, with each fragment accompanied by a reference to a view or a texture atlas corresponding to a best view of that fragment, such as the fragment 470.
[000106] The method 500 continues from step 520 to a selection step 530. In execution of step 530, an unprocessed fragment from the set of fragments defined in the geometry determination step 510 is selected and a view or a texture atlas is chosen which contains the best view of the selected unprocessed fragment. The texture atlas containing the best view of the selected unprocessed fragment is chosen using the reference output at step 520. The texture atlas corresponding to the best view of the selected unprocessed fragment is designated as the current reference texture atlas 480.
[000107] It is not necessary to select a “best view” of a fragment if the intention is to align a single pair of texture atlases. If all views are of good quality, it may also be convenient to use any texture atlas as a reference view for alignment. Therefore, steps 520 and 530 are optional, with a texture atlas arbitrarily called the “best view” for purposes of explanation.
[000108] The method 500 continues from step 530 to an alignment step 540. In execution of step 540, each texture atlas in the set of texture atlases 260 is aligned to the current reference texture atlas 480 using a non-rigid multi-modal alignment method that is capable of maintaining consistency of alignment across fragment boundaries. An example alignment method is described below with reference to Figs. 7, 15A and 15B. Fig. 18 shows an alternative alignment method for use at step 540. The described methods combine a mutual-information-based optical flow algorithm with a method for transporting warp vectors between fragment boundaries to provide a consistent combination. Warp vectors calculated during the alignment step for the selected unprocessed best fragments are used to create a warp atlas for each texture atlas. The methods described relate to pairwise registration of each texture atlas to an atlas containing the best view of one or more fragments. A warp map between each view (sub-image) of a fragment and a corresponding best view is generated for each fragment. The warp atlas is created by combining the warp sub-images for each fragment.
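A hedged stand-in for this pairwise alignment: the sketch below estimates a dense warp between an expanded fragment view and the corresponding reference (best) view using OpenCV's Farneback optical flow, then smooths the warp vectors with a Gaussian blur, loosely mirroring the regularisation described in the summary. The mutual-information-based alignment used in the described arrangements is not reproduced here, and the function and parameter names are illustrative.

```python
import cv2
import numpy as np

def align_fragment_views(reference_view, other_view, blur_sigma=5.0):
    """Estimate a dense warp map taking `other_view` onto `reference_view`.

    Both inputs are expanded single-fragment sub-images (grayscale uint8).
    Returns an HxWx2 array of warp vectors (dx, dy) per pixel.
    """
    flow = cv2.calcOpticalFlowFarneback(
        reference_view, other_view, None,
        pyr_scale=0.5, levels=4, winsize=21, iterations=5,
        poly_n=7, poly_sigma=1.5, flags=0)
    # Regularise the warp vectors, e.g. with a Gaussian blur whose RMS width is
    # tied to the width of the fragment expansion margin.
    flow[..., 0] = cv2.GaussianBlur(flow[..., 0], (0, 0), blur_sigma)
    flow[..., 1] = cv2.GaussianBlur(flow[..., 1], (0, 0), blur_sigma)
    return flow
```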
[000109] The method 500 continues from step 540 to a check step 550. Execution of step 550 determines whether any fragments defined in the fragment geometry determination step 510 remain to be processed through steps 530 and 540. If there are remaining unprocessed fragments (“Yes” at step 550), the method 500 returns to step 530 to select the next view. If best views for all fragments have been used for aligning texture atlases (“No” at step 550), with one best view per fragment, then a single warp atlas has been assembled for each view, and each texture atlas can be warped into a common alignment with the best views, and the method 500 continues to a warping step 560. The warp atlas relates to a collection of warp maps for each fragment of the graphical object 110.
[000110] Because the alignment method operates using different reference texture atlases, potentially at every iteration from step 530 to step 550 of the method 500, seam discontinuities can still exist between fragments aligned using different reference texture atlases. However, the number of seam discontinuities is reduced, and the continuity of the warp between fragments aligned using the same views is improved. A discontinuity in the warp atlas results in a discontinuity in a warped texture, i.e. a visible seam discontinuity. By including neighbouring fragments in the fragment warp vectors, more texture pixels can be included in the warping process, resulting in more texture information being made available for the BRDF estimation process. Making more texture information available results in an increase in quality of the BRDF estimation.
[000111] The BRDF estimation is the process of modelling material appearance using multiple views of a sample material taken from a diversity of view angles and with a diversity of lighting directions. To perform BRDF estimation accurately, common points of the material being estimated are required to be aligned very accurately, ideally to within less than a quarter of a pixel.
[000112] In warping step 560, each texture atlas is warped using the corresponding determined warp vector atlas from step 540. The warping results in a set of aligned texture atlases in which all fragments are aligned to a composite view based upon a common set of best fragments.
[000113] The method 500 continues from step 560 to an export step 570. In execution of step 570, the aligned pixel colour samples from all of the set of aligned texture atlases are exported, together with the estimated camera poses for each camera view as determined by the structure from motion step 320, in a format suitable for BRDF estimation. For example one or more files containing the camera poses and the aligned colour samples for each pixel as viewed from each associated camera pose can be exported.
[000114] The method 500 continues from step 570 to a BRDF estimation step 580. In execution of step 580, the exported colour samples are used to estimate the BRDF at points on the surface of the 3D mesh 230 (having the data structure 800) of the object 110. The BRDF estimation uses a set of colour samples captured from different imaging positions relative to the surface and the lighting angle, such as the samples exported in step 570. As such, a set of colour samples for each position in the fragment, each colour sample corresponding to one of the plurality of viewpoints, is determined for the purposes of BRDF estimation. The set of colour samples for each position in the fragment can be determined by choosing a point on the surface of the mesh. Then, with reference to the camera pose and colour sample data exported at step 570, the colour samples are calculated by determining, from the geometry of the camera poses relative to the mesh, which pixels in the aligned colour samples correspond to the chosen point as seen from each camera position 120, 130 and 140. The corresponding pixels can be determined for example using the mapping F of the mesh representation 230 surface to the UV plane 250. The step of determining the set of colour samples is repeated for any chosen point on the mesh, producing colour samples of points on the surface of the object 110, as viewed from each camera position 120, 130 and 140. Due to execution of the warping step 560, the colour samples are accurately aligned. The BRDF estimation of step 580 proceeds by fitting the samples to a model of the BRDF, for example a function relating surface colour to viewing
angle and lighting angle, such as the Phong model, the Cook-Torrance model, or the Ashikhmin-Shirley model, or other known models. The BRDF estimation may be performed by an implementation as available in commercial software packages using known methods.
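By way of illustration only, the following is a minimal sketch of fitting the aligned colour samples at a single surface point to a simple Lambertian-plus-Phong reflectance model using a generic least-squares solver. The function and parameter names are hypothetical assumptions of this sketch and do not correspond to any particular commercial package; scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import least_squares  # assumed available

def fit_phong(samples, light_dirs, view_dirs, normal):
    """Fit diffuse albedo kd and specular (ks, shininess n) at one surface point.

    samples    : (S,) observed intensities of one colour channel
    light_dirs : (S, 3) unit vectors towards the light for each sample
    view_dirs  : (S, 3) unit vectors towards the camera for each sample
    normal     : (3,) unit surface normal at the point
    """
    def model(p):
        kd, ks, n = p
        diff = np.clip(light_dirs @ normal, 0.0, None)
        # Reflect each light direction about the normal for the specular lobe.
        refl = 2.0 * diff[:, None] * normal - light_dirs
        spec = np.clip(np.sum(refl * view_dirs, axis=1), 0.0, None) ** n
        return kd * diff + ks * spec

    res = least_squares(lambda p: model(p) - samples,
                        x0=[0.5, 0.1, 10.0],
                        bounds=([0, 0, 1], [np.inf, np.inf, 1000]))
    return res.x
```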
[000115] Accordingly, step 580 operates to determine reflectance properties of the surface of the object 110 using the determined pixel values of step 560 with reference to the viewpoints 120, 130 and 140. The cross-view alignment method 500 terminates at end step 599 after execution of step 580.
[000116] Figs. 15A and 15B are diagrams illustrating geometric information determined in step 510. Fig. 15A illustrates geometric information 1500 and Fig. 15B illustrates geometric information 1501. Fig. 15A illustrates a simple example for clarity of illustration, while Fig. 15B illustrates a more complex example in which geometric relationships between mesh fragments are more distorted by the mapping into UV space. The geometric information 1500 or 1501 is used to separate each texture atlas into sub-images and to assist in alignment by mitigating seam discontinuities. One implementation of separating a texture atlas into sub-images is described in more detail in relation to Fig. 16.
[000117] A fragmented mesh 1510 is an example of the 3D mesh 230. In Fig. 15A, a first texture image example 1520 of a texture image 803 of the mesh 230 uses a first parameterisation to the UV plane. In Fig. 15B, a second texture image example 1521 of a texture image 803 of the mesh 230 uses a second parameterisation to the UV plane. Three example fragments which are adjacent on the mesh are numbered 1, 2, and 3 in Fig. 15A. In Fig. 15A the fragments 1, 2, and 3 are located near one another in the first texture image example 1520 and without relative rotations or other warping.
[000118] Similarly, three example fragments which are adjacent on the mesh are numbered 1b, 2b, and 3b in Fig. 15B. In Fig. 15B the fragments 1b, 2b, and 3b are located in widely separated locations in the second texture image example 1521 and have relative rotations, scalings, and warps. Fig. 15B can be considered a more realistic scenario than Fig. 15A. In general, fragments in a texture image 803 show non-linear warps resulting from the mapping of a three-dimensional manifold to the UV plane.
[000119] Dashed boxes 1530 (Fig. 15A) and 1531 (Fig. 15B) show the geometric information determined in step 510 associated with the fragments. The geometric information 1530 is
represented in Fig. 15A by three geometry sub-images, 1540 for fragment 1, 1550 for fragment 2, and 1560 for fragment 3. Similarly, the geometric information 1531 is represented in Fig. 15B by three geometry sub-images, 1541 for fragment 1b, 1551 for fragment 2b, and 1561 for fragment 3b. Similar geometric information exists for all fragments in the texture image examples 1520 or 1521, not just the three fragments shown in Figs. 15A and 15B respectively.
[000120] A geometry sub-image for a fragment includes only geometrical relationships and not pixel intensities, and the UV parameterisation is the same for every texture atlas. Accordingly, the geometry sub-image for a fragment does not differ depending on which texture atlas or camera view is selected.
[000121] Each geometry sub-image is large enough to encompass the enclosed fragment in all texture atlases and also includes a margin of a predetermined size, for example, 5% of the width of the largest fragment, to contain adjacency information. The size of the margin can be determined by experimentation, as described hereafter in relation to Fig. 17. The geometric information creates a mapping between the geometry sub-images and positions in UV space. A fragment 1570 represents a mapping not of pixel intensities but of the positions in geometry sub-image 1540 to fragment 1 in UV space, and partial fragments 1580 and 1590 represent mappings of the positions in the geometry sub-image adjacent to the main fragment which map to adjacent fragments 2 and 3. Similarly, in the alternative example of Fig. 15B, a fragment 1571 represents a mapping of the positions in geometry sub-image 1541 to the main fragment 1b in UV space, and partial fragments 1581 and 1591 represent mappings of the positions in the geometry sub-image adjacent to the main fragment which map to adjacent fragments 2b and 3b. It is not typically sufficient for the mappings to indicate only a displacement between the fragments. The reason is that in mapping from the three-dimensional mesh to the two-dimensional UV space, distortion will almost always be produced between the local surface geometry on the mesh and the corresponding fragment in UV space, much like the various mappings of a world globe onto a flat map, all of which have various imperfections. Even where the surface is developable, neighbouring fragments might not be oriented correctly with respect to their edges. A displacement alone is sufficient for indicating correspondences between UV pixels near the boundaries of neighbouring fragments, but not for correcting errors in these displacements. To correct errors in displacement, it is necessary to take into account at least the change in relative orientation and local scaling between the pixels. For example, if the boundary of one fragment is oriented at an angle of 90° with respect to the corresponding boundary in a second fragment, an adjustment in displacement of 1 pixel to the left in the first fragment corresponds to an
adjustment of 1 pixel downwards in the second fragment. Because of the distortion introduced when mapping from a curved surface to a flat surface, the adjustment required may change at different locations along the fragment boundary. Thus, when correcting displacements, incorporating at least a rotation and scale factor in addition to relative displacements between
UV map pixels is necessary.
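As an illustration of the rotation and scale correction just described, the following minimal sketch (in Python, with illustrative names) re-expresses a displacement adjustment in the frame of a neighbouring fragment using a per-pixel rotation angle theta and scale factor m of the kind held in the geometric information.

```python
import numpy as np

def transport_displacement(dx, dy, theta, m):
    """Re-express a displacement (dx, dy) in a neighbouring fragment's frame,
    given the relative rotation theta (radians) and scale m between the fragments."""
    c, s = np.cos(theta), np.sin(theta)
    return m * (dx * c - dy * s), m * (dx * s + dy * c)

# With a 90 degree relative rotation, a 1-pixel leftward adjustment becomes a
# purely vertical one; the sign depends on the angle convention chosen for theta.
print(transport_displacement(-1.0, 0.0, np.pi / 2, 1.0))  # (0.0, -1.0)
```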
[000122] Not every location on a fragment boundary need be related to the boundary of another fragment. A mesh need not form a closed surface, and may contain a boundary of its own. For example, a mesh might represent one side of a folded paper fan, and a mesh boundary would occur at the top of the fan. Fragments on the mesh boundary have boundary regions which do not have a neighbouring fragment.
[000123] The alignment step 540 is now described with reference to Fig. 7. Fig. 7 shows an alignment method 700 of aligning a texture atlas set 260 to a current reference texture atlas 480 while reducing seam discontinuities. The method 700 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000124] The inputs to alignment method 700 are a current reference (master) texture atlas 480 as produced by selection step 530, a current texture atlas from the set of texture atlases 260, a list of best fragments for the current reference texture atlas, and geometric information describing adjacency, scale, and orientation information relating each fragment geometrically to each of the adjacent fragments. The geometric information is determined at step 510, for example the geometric information 1530 or 1531.
[000125] The method 700 starts at a texture sub-image creation step 710. In execution of step 710, empty texture sub-images are created for each fragment with reference to the two texture atlases (from the sets 480 and 260) and the geometric information. Each sub-image is created of a size sufficient to enclose both the associated fragment and the number of pixels chosen for the boundary (margin) region.
[000126] The method 700 continues from step 710 to a texture copying step 720. In execution of step 720, the texture sub-images are populated with texture pixels. The received geometric information is used to determine the texture atlas location of any fragment portions, including both a main fragment (for example the fragment 1570 of the geometric sub-image 1540) and
any adjacent fragments (for example the fragments 1580 and 1590 of the sub-image 1540), associated with each sub-image. The texture atlas location information is used to copy portions of both main and adjacent fragment textures from the texture atlas into each texture sub-image, resulting in an expanded view of the main fragment. A texture sub-image differs from a geometric sub-image by inclusion of texture information. Because the texture sub-image contains not only the main fragment texture but also texture from adjacent fragments, more image context is available for the alignment operation than would be available from a single fragment texture alone. Step 720 is completed for each sub-image. Step 720 effectively forms expanded views for each fragment for each of the viewpoints 120, 130 and 140 containing that fragment. The expanded view includes the first sub-image or view (see for example 1570 of Fig. 15A) and texture pixels (texture data) from one or more adjacent fragments (see for example 1580 and 1590 of Fig. 15A). The sub-images each represent expanded views of a fragment.
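A minimal sketch of the copying performed at step 720 follows, assuming a geometry sub-image whose per-pixel channels hold a (u, v) atlas coordinate and a fragment ID (0 where no fragment is mapped), and using nearest-neighbour sampling for brevity; the array layout and function name are illustrative assumptions.

```python
import numpy as np

def fill_texture_subimage(geom_sub, texture_atlas):
    """Copy texture pixels for the main fragment and its margin neighbours.

    geom_sub      : (H, W, 3) array; channels are (u, v, fragment_id),
                    with fragment_id == 0 where no fragment is mapped
    texture_atlas : (Hu, Wu, 3) RGB texture image in UV space
    """
    h, w = geom_sub.shape[:2]
    sub = np.zeros((h, w, 3), dtype=texture_atlas.dtype)
    occupied = geom_sub[..., 2] > 0            # main fragment or margin neighbour
    u = np.clip(geom_sub[..., 0].astype(int), 0, texture_atlas.shape[1] - 1)
    v = np.clip(geom_sub[..., 1].astype(int), 0, texture_atlas.shape[0] - 1)
    sub[occupied] = texture_atlas[v[occupied], u[occupied]]
    return sub
```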
[000127] The method 700 continues from step 720 to a registration step 730. In execution of step 730, non-rigid alignment is performed to determine warp vectors mapping pixels of the master texture sub-images to the texture sub-image. A suitable method of alignment is a covariance-based Mutual Information method. A method using covariance-based Mutual Information suitable for use at step 730 is described below with reference to step 1830 of Fig.
18. Because of possible registration errors at the boundaries of the discrete fragments, it is likely that the result of step 730 contains seam discontinuities in which the registration in the vicinity of the fragment boundary is not consistent with the registration of the adjacent fragment in the vicinity of the connected fragment boundary. Step 730 operates to align expanded sub-images (views) for a fragment, as captured from different viewpoints such as the viewpoints 120, 130 and 140. Step 730 determines a warp map between pixels of the first and second views of a fragment.
[000128] The method 700 continues from step 730 to a warp vector combining step 740. In execution of step 740, seam discontinuities in the warp vectors of each fragment are mitigated. Each of the fragments is separately selected and the associated warp vectors combined. To correct seam discontinuities for a fragment, the geometric information (determined at step 510) is first used to determine portions of adjacent fragments and the geometric relationship of the adjacent fragments to the selected fragment. Because the warp vectors in a boundary position of the selected fragment and the warp vectors in the related fragment portion represent coincident positions on the 3D mesh 230, the warp vectors also represent coincident positions on the surface of the 3D object. Thus, the two sets of warp vectors are related by the relative
displacement, orientation and scaling factor as specified by the supplied geometric information. To correct seam discontinuities, the warp vectors in a boundary position of the selected fragment are replaced with the warp vectors teleported from the related fragment position.
[000129] In the context of the present disclosure, “teleport” is used to mean that the warp vectors in the related fragment are transferred to the boundary of the selected fragment and modified by the associated displacement, orientation and scaling factors in order to reorient the warp vectors from the related fragment to the selected fragment. It is likely that the warp vectors determined in the interior of the selected fragment will not initially be consistent with the warp vectors teleported to the boundary of said fragment, but a regularization method can impose consistency, for example as described in relation to Fig. 18. Because the adjacency relationship is commutative, the correction as applied to the selected fragment will be related to the correction as applied to neighbouring fragments, bringing the warp vectors in the boundary regions substantially into alignment. Therefore seam discontinuities are substantially removed.
[000130] Step 740 operates to determine a warp map between pixels of different views (for example first and second views) of a fragment using one or more warp vectors of a warp map determined at step 730 for views of another fragment. The warp vectors are selected based on geometrical arrangement of the fragments (for example scale and rotation of adjacent fragments). The step 740 is implemented for each of the fragments.
[000131] The method 700 continues from step 740 to a warp vector assembly step 750. In execution of step 750, the warp vectors assigned to the fragments contained only in the list of best fragments for the current master atlas are selected and assembled by placing the warp vectors in related pixel coordinate positions in the UV plane of the current warp atlas. Step 750 is implemented for the selected “best view” or reference fragment for each position only. The step 750 operates to select a set of warp vectors to align views of fragments of the graphical object 110 in a manner with decreased seam discontinuity. The alignment method 700 terminates at end step 799 after execution of step 750, returning the current warp atlas enhanced by the addition of warp vectors for the fragments in the list of best fragments.
[000132] An implementation of the view choosing step 520 is now described with reference to Fig. 6. Fig. 6 shows a view choosing method 600 of selecting one view of a fragment 420 chosen from the set of texture atlases 260. The method 600 is typically implemented as one or
more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000133] The method 600 starts at a fragment choosing step 610. In execution of the step 610, a current fragment of the set of fragments 420 is chosen for consideration, avoiding fragments which have already been considered in the view choosing method 600. The method 600 continues from step 610 to a view choosing step 620. In execution of the step 620, a current view of the current fragment is chosen for consideration, avoiding views of the current fragment which have already been considered in the view choosing method 600. The current view relates to one of the viewpoints 120, 130 and 140 corresponding to the fragment. Each view of the fragment corresponds to one of the set of texture atlases 260, and is associated with one of the camera viewpoints/poses 120, 130 and 140.
[000134] The method 600 continues from step 620 to an alignment feature quality step 630. In execution of step 630, an alignment feature quality parameter is determined for the current view of the current fragment with respect to the parametric representation of the surface (the UV plane 250). In one implementation, the alignment feature quality parameter is determined by performing a search within the UV pixels of the fragment for Harris corners in the image data of the parametrised camera view. Harris corner finding algorithms are known, and produce a measure of corner strength, based on the product of eigenvalues of the moment matrix of image gradients. To calculate the alignment feature quality parameter of a fragment, the corner strengths of a predetermined number nc of the strongest Harris corners may be summed. In alternative arrangements, other methods of calculating an alignment feature quality may be performed, for example determining the mean contrast level of patches tiled across the fragment, or determining the integrated Fourier amplitude of spatial frequencies above a predetermined threshold.
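One possible realisation of the Harris-corner based quality measure, sketched below, uses OpenCV's cornerHarris response and sums the nc strongest responses inside the fragment mask; the parameter values are illustrative assumptions rather than values prescribed by the method.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def alignment_feature_quality(fragment_gray, mask, nc=50):
    """Sum the nc strongest Harris corner responses inside the fragment mask.

    fragment_gray : (H, W) grey-level view of the fragment in UV space
    mask          : (H, W) bool array, True for pixels belonging to the fragment
    """
    response = cv2.cornerHarris(fragment_gray.astype(np.float32),
                                blockSize=3, ksize=3, k=0.04)
    strengths = response[mask]
    strengths = strengths[strengths > 0]        # keep corner-like responses only
    strongest = np.sort(strengths)[::-1][:nc]
    return float(np.sum(strongest))
```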
[000135] The method 600 continues from step 630 to a view decision step 640. In execution of step 640, the views of the current fragment are examined to see if any of those views have not yet been assigned an alignment feature quality parameter. If any views remain to be assigned an alignment feature quality parameter (“Yes” at step 640), the method 600 returns to the view choosing step 620. If all views of the current fragment have been assigned an alignment feature quality parameter (“No” at step 640), the view choosing method 600 continues to a view selection step 650.
[000136] In execution of view selection step 650, all the views of the current fragment are examined to determine which view has been assigned the greatest alignment feature quality parameter. The view of the current fragment that has been assigned the greatest alignment feature quality parameter is chosen as the best view 470 of the current fragment. If two or more views of the current fragment have equal greatest alignment feature quality parameters, one is chosen as the best view 470 of the current fragment by any means, such as random selection, or selecting the camera view with optical axis closest to parallel with the mesh surface normal, for example.
[000137] The method 600 continues from step 650 to a fragment decision step 660. In execution of step 660, the set of fragments 420 is examined to determine if any of the fragments have not yet been assigned a best view. If any fragments remain to be assigned a best view (“Yes” at step 660), the method returns to fragment choosing step 610. If all fragments have been assigned a best view (“No” at step 660), the view choosing method 600 terminates at step 699.
[000138] Fig. 17 shows an implementation of the step 510 to generate or determine geometric information for each fragment used for the alignment method 500. Fig. 17 shows a geometric information determination method 1700. The method 1700 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000139] The method 1700 starts at a mesh augmentation step 1705. In execution of step 1705, a method 1000 is invoked to determine additional information of the 3D mesh 230 for the data structure 800 to allow neighbourhood query margin images to be constructed. The method 1000 is described below with reference to Figs. 10 and 11.
[000140] The method 1700 continues from step 1705 to a segmentation step 1710. In execution of step 1710, a fragment ID mask image, created by the parameterisation step 340 as described above, is obtained.
[000141] The method 1700 continues from step 1710 to a choosing step 1720. In execution of step 1720, a next fragment without generated geometric information is chosen as the current fragment. The current fragment corresponds to a set of pixels sharing the same ID in the fragment ID mask image.
[000142] The method 1700 continues from step 1720 to a bounding box step 1730. In execution of step 1730, a bounding box is determined for the fragment ID mask image pixels, represented as an offset to the top-left of the bounding box within UV space, and a width and a height. A sub-image offset and position is determined from the bounding box which allows in UV space for a margin of adjacent pixels to be added around the fragment pixels. An increase of width of approximately 5% of the largest fragment is usually sufficient, or double the expected misalignment of different views in pixels. The expected misalignment can be predetermined empirically by measuring the average displacement between fragments obtained through the parametrization of a test object obtained under the same conditions. In some tests conducted an expected misalignment of around 15 pixels was found. If a fragment is close to the boundary of the region of UV space represented by the texture atlases 260, the fragment sub-image positioned at the UV offset position might extend beyond the represented region of UV space, and thus the offset position may contain negative coordinates. Accordingly, handling the offset condition appropriately is important.
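A short sketch of the bounding-box computation of step 1730 follows, assuming the fragment ID mask image as input; note that the returned offset may be negative for fragments near the UV boundary, as discussed above.

```python
import numpy as np

def fragment_bounding_box(fragment_mask_image, frag_id, margin):
    """Return (offset_u, offset_v, width, height) of a fragment plus margin in UV space.
    The offset may be negative if the fragment lies near the UV boundary."""
    vs, us = np.nonzero(fragment_mask_image == frag_id)
    u0, v0 = us.min() - margin, vs.min() - margin
    width = (us.max() - us.min() + 1) + 2 * margin
    height = (vs.max() - vs.min() + 1) + 2 * margin
    return int(u0), int(v0), int(width), int(height)
```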
[000143] The method 1700 continues from step 1730 to a geometry sub-image creation step 1740. In execution of step 1740, a geometry sub-image representing the geometrical arrangement of the main fragment and adjacent fragments is created. The geometry sub-image is of the sub-image size, and contains five image channels: a (u, v) coordinate, a rotation angle θ and scale factor m represented as a complex value (m cos θ, m sin θ), and a fragment ID F. The fragment ID channel value may be 0 wherever the sub-image contains no fragment. The geometry sub-image contains channel values not only for the current fragment, but for neighbouring fragments which surround the current fragment on the surface of the 3D mesh 230 within a predetermined margin distance.
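The five-channel layout described above can be sketched as follows; the use of a numpy array and the channel ordering are assumptions of this illustration.

```python
import numpy as np

def empty_geometry_subimage(height, width):
    """Five channels per pixel: u, v, m*cos(theta), m*sin(theta), fragment ID.
    All channels are zero where no fragment (main or margin) is present."""
    return np.zeros((height, width, 5), dtype=np.float32)

def write_geometry_pixel(geom, x, y, u, v, m, theta, frag_id):
    geom[y, x] = (u, v, m * np.cos(theta), m * np.sin(theta), frag_id)
```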
[000144] The geometry sub-image may be created at step 1740 according to various methods. The geometry sub-image is preferably created by a neighbourhood query margin image method 1200, described below with reference to Fig. 12.
[000145] Where a group of pixels maps to the same adjacent fragment, the rotation angle and scale factor might vary from pixel to pixel due to distortion introduced during parametrization.
[000146] For pixels in the geometry sub-image which are not associated with either the main fragment or a margin fragment all values (u, v, m, θ, F) are written as 0.
[000147] The method 1700 continues from step 1750 to a decision step 1760. Execution of step 1760 determines if there are more fragments requiring the creation of geometric information. If more fragments need determination of geometric information (“Yes” at step 1760), the method 1700 is returned to choosing step 1720. If all the fragments have had geometry information generated (“No” at step 1760), the geometric information determination method 1700 finishes.
[000148] Fig. 10 shows a method 1000 of augmenting a mesh data structure, as implemented by the mesh augmentation step 1705 of Fig. 17. The method 1000 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000149] The method 1000 operates such that the mesh 230, having the data structure 800, is augmented to facilitate rapid calculation of a neighbourhood on the surface of the 3D mesh 230 within the 2D space 250. The augmentation method 1000 determines additional information of the mesh 230 to enable neighbourhood queries, including queries for pixel data or warp vectors, to be teleported across texture seams. The method 1000 is preferably performed as a precomputation step prior to execution of the method 500. However, other arrangements can be used in which the method 1000 is executed at the time of performing the cross-view alignment method 500, or at another time. The augmentations performed by augmentation method 1000 are shown in Figs. 11A to 11E, and are described together with the steps of the augmentation method 1000. The method 1000 generates an augmented mesh 1100 from the mesh 230 having the structure 800.
[000150] The augmentation method 1000 begins at a determining step 1010. In execution of step 1010, a vertex-face adjacency is determined. In the step 1010, for each vertex 810a in the vertex array 801 of the mesh 230, faces 830a of the mesh 230 adjacent to that vertex are determined. A face 830a is considered adjacent to a vertex 810a if one of the vertex references 831-833 of the face refers to the vertex 810a. The indexes of faces into the face array 802 which are adjacent to the vertex 810a are stored in a VFAdj array 1117 of that vertex as shown in Fig. 11B. Additionally, the ordinality of the vertex within each adjacent face is stored in a VFAdjindex array 1118 of the vertex. The ordinality of the vertex is a number 0, 1, or 2, indicating whether the vertex is referenced by the face as Vref0 831, Vref1 832, or Vref2 833. The vertex-face adjacency may be determined by iterating through all faces 830a in the Face array 802, inspecting each vertex reference 831-833, and inserting a reference to the face in the
VFAdj array 1117 and VFAdjindex array 1118 of the corresponding vertex 810 to generate the augmented vertex 810a.
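A minimal sketch of the vertex-face adjacency construction of step 1010, assuming faces are given as triples of vertex indices, is shown below; the container types are illustrative.

```python
def build_vertex_face_adjacency(faces, num_vertices):
    """faces: list of (v0, v1, v2) vertex indices, one triple per face.
    Returns per-vertex lists of adjacent face indices (VFAdj) and the
    ordinality of the vertex within each adjacent face (VFAdjindex)."""
    vf_adj = [[] for _ in range(num_vertices)]
    vf_adj_index = [[] for _ in range(num_vertices)]
    for face_idx, face in enumerate(faces):
        for ordinality, vertex in enumerate(face):   # 0, 1 or 2: Vref0/Vref1/Vref2
            vf_adj[vertex].append(face_idx)
            vf_adj_index[vertex].append(ordinality)
    return vf_adj, vf_adj_index
```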
[000151] The method 1000 continues from step 1010 to a finding step 1020. In step 1020, the vertices on fragment boundaries are identified. A vertex 810a is on a fragment boundary if the vertex is referred to via a vertex reference 831-833 of a first face 830 with first UV co-ordinates 834-839, and is also referred to via a vertex reference 831-833 of a second face 830a with different UV co-ordinates 834-839. A fragment boundary may be found by iterating through each vertex 810a in the vertex array 801, and then inspecting each face adjacent to the vertex in the VFAdj array 1117. The Boolean flag OnFragBoundary 1116 is set to true for each augmented vertex 810a on a fragment boundary, and false for each other vertex.
[000152] The method 1000 continues from step 1020 to a determining step 1030. In step 1030 of the augmentation method 1000, a fragment identifier (ID) 1140 (shown in Fig. 11C) is determined for each face 830 of the mesh 230, such that faces in the same UV fragment are assigned the same fragment identifier. The faces within a fragment may be found using a region-growing algorithm. Given a seed face which is not in a fragment already identified, the face is given a unique fragment identifier value. The neighbouring faces are given the same fragment ID, as long as those neighbours share at least one vertex with the face having common UV co-ordinates 834-839. The neighbouring faces of a face 830a may be found by examining the adjacent faces 1117 of the vertices 810a and the references 831-833. The step 1030 continues to repeat for the neighbours of the neighbouring faces, until the region can grow no more. The set of candidate seed faces may be all faces. In some arrangements, processing may be reduced by considering only the set of faces adjacent to fragment boundary vertices (that is, vertices 810 for which OnFragBoundary 1116 is true) as candidate seed faces. At the completion of step 1030, each face 830a in the mesh 230 has a fragment identifier 1140 assigned, as shown as the augmented face 830a.
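The region growing of step 1030 can be sketched as follows, assuming a caller-supplied function uv_neighbours(face) that returns the faces sharing at least one vertex with common UV co-ordinates; that helper is a hypothetical placeholder for the adjacency test described above.

```python
from collections import deque

def assign_fragment_ids(num_faces, uv_neighbours):
    """Assign a fragment ID to every face by region growing.

    uv_neighbours(face) must return the faces sharing at least one vertex with
    common UV co-ordinates with `face` (an assumption of this sketch)."""
    fragment_id = [0] * num_faces           # 0 means "not yet assigned"
    next_id = 1
    for seed in range(num_faces):
        if fragment_id[seed]:
            continue
        fragment_id[seed] = next_id
        queue = deque([seed])
        while queue:                        # breadth-first region growing
            face = queue.popleft()
            for nbr in uv_neighbours(face):
                if not fragment_id[nbr]:
                    fragment_id[nbr] = next_id
                    queue.append(nbr)
        next_id += 1
    return fragment_id
```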
[000153] The method 1000 continues from step 1030 to a finding step 1040. In step 1040 of the augmentation method 1000, boundary vertex links to neighbouring fragments are found for each texture fragment of the mesh 230. A boundary vertex link can be implemented as a vertex of a fragment, and the identifier (ID) of a neighbouring fragment at that vertex. An element FragmentBoundaryVertex 1160 (see Fig. 11E) shows the members of a boundary vertex link, comprising a vertex reference Vref 1161 indexing into the Vertex array 801; a u coordinate 1162 and v co-ordinate 1163 into the texture image 803 of the vertex in the current
fragment; and an identifier 1164 of a neighbouring fragment at that vertex. A collection of FragmentBoundaryVertex elements 1160 corresponding to vertices at the boundary of a single fragment is stored as a FragmentBoundaryVertex array 1151 of a Fragment 1150 (see Fig. 11D). A collection of all such fragments 1150 is stored in a Fragment array 1105 of the augmented mesh 1100. The Fragment array 1105 may be determined by first iterating through each fragment boundary vertex (vertices 810 for which OnFragBoundary 1116 is true), and forming a list of the unique fragment identifiers and corresponding UV coordinates for the vertex 810, by examining the vertex’s adjacent faces via the VFAdj array 1117 of the vertex. The vertex is added as a FragmentBoundaryVertex 1160 to the FragmentBoundaryVertex array 1151 of a Fragment 1150 for each fragment identifier in the list, with NeighbourFragID 1164 set to the other fragment identifiers appearing in the list.
[000154] The method 1000 continues from step 1040 to a generating step 1050. In step 1050 of the augmentation method 1000, a fragment identifier mask image 1104 is generated (see Fig. 11A). The fragment identifier mask image 1104 is used during neighbourhood query operations, for example in Fig. 12, to identify the fragment associated with a point in UV space. The fragment identifier mask image 1104 is a texture image which contains fragment identifiers rather than RGB pixels or other data. In the regions of the texture where no fragment is present, a value which indicates a non-fragment, rather than a valid fragment identifier, is set. The fragment ID mask image 1104 may be a single-channel 16-bit image. The size of the fragment identifier mask image in pixels should be at least as large as the size of the texture image 803, and may be the same size. The fragment ID mask image 1104 can be generated by initialising an empty texture image, iterating through each face 830 in the Faces array 802 of the mesh 230, and rasterising a triangle onto the texture image whose corners are the UV co-ordinates 834-839 of each vertex of the face 830, and whose channel value is the fragment identifier 1140 of the face 830. The fragment identifier mask image 1104 is stored in the augmented mesh 1100. The augmentation method 1000 ends at step 1099 after execution of the step 1050.
[000155] Fig. 16 shows an example of geometric information 1600. The geometric information 1600 reflects the geometric information 1500 illustrated in Fig. 15A, which is determined in step 510 and used to separate the fragments into sub-images and to assist in the alignment process. A similar example may be performed with the geometric information 1501 illustrated in Fig. 15B.
[000156] A fragment offset 1691 is a (u, v) coordinate indicating the top left of the fragment sub-image in UV space. A fragment size 1692 gives the width and the height of the fragment
including a margin. A main fragment mapping 1670 maps pixels in the fragment geometry sub-image to the region of UV space containing the main fragment, with an associated rotation of zero degrees and scale factor of one. Adjacent fragment mappings 1680 and 1690 map a portion of the margin of the main fragment to a portion of UV space corresponding to fragments 2 and 3, and have an associated rotation angle and scaling factor 1695 and 1696.
[000157] The alignment method 540 is described above with reference to the method 700 of Fig. 7. The step 540 is now described in relation to method 1800 of Fig. 18 using the fragment geometry information as described in Figures 15, 16, and 17. The method 1800 represents an alternative implementation of the step 540 to the method 700. The method 1800 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000158] The inputs to the alignment method 1800 are a current reference texture atlas 480 as produced by selection step 530, a current texture atlas from the set of texture atlases 260, a current warp atlas corresponding to the current texture atlas, a list of best fragments for the current reference texture atlas, and geometric information 1600 (consisting of geometry sub-images along with a fragment offset position in UV space for each geometry sub-image). Each pixel of the geometry sub-images contains a mapping from a fragment sub-image position to a UV map position, scale and orientation, and fragment ID.
[000159] The method 1800 starts with a fragment splitting step 1810. In execution of step 1810, a texture sub-image is created for each fragment of both input atlases, the sizes of the texture sub-images matching their associated geometry sub-images. Step 1810 operates in a similar manner to step 710 of Fig. 7.
[000160] The method 1800 continues from step 1810 to a texture copying step 1820. Step 1820 forms expanded view for each viewpoint of each fragment. In execution of step 1820, the texture sub-images are expanded to contain pixel data not only relating to the current fragment, but to neighbouring fragments which surround the current fragment on the surface of the 3D mesh 230 within some margin distance, in a similar way to the geometry sub-images. Unlike a geometry sub-image, which contains geometry data, a texture sub-image contains pixel data from one or more texture images 803 comprising texture atlases. A texture sub-image may be generated by creating a margin image using neighbourhood query margin image method 1200, described below with reference to Fig. 12. Alternatively, the geometry sub-image may be used
to warp the texture atlases into the texture sub-images using a conventional inverse-warping algorithm. For each pixel in a texture sub-image, a (u, v) coordinate is obtained from the corresponding pixel in the geometry sub-image, and a texture pixel is interpolated, for example by cubic interpolation, from the texture atlas into the texture sub-image. To avoid artifacts resulting from unoccupied texture atlas pixels being warped into parts of the UV space corresponding to surfaces of the mesh, the unoccupied pixels can be infilled after warping using known methods such as diffusion. Step 1820 will not only copy texture from the main fragment in the atlas, but also from adjacent atlas fragments, resulting in texture sub-images (views) containing more context for alignment operations than the main fragment alone. Step 1820 operates in a similar manner to step 720 of Fig. 7.
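A sketch of the inverse-warping variant described above is given below for a single channel, using scipy's map_coordinates for the interpolation (order=3 giving cubic interpolation); the array shapes and names are assumptions of this illustration, and infilling of unoccupied pixels is omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # assumed available

def warp_atlas_to_subimage(texture_atlas, geom_uv, occupied):
    """Inverse-warp one atlas channel into a texture sub-image.

    texture_atlas : (H, W) single channel of the texture atlas in UV space
    geom_uv       : (h, w, 2) per-pixel (u, v) coordinates from the geometry sub-image
    occupied      : (h, w) bool mask of pixels mapped to the main or margin fragments
    """
    # map_coordinates expects (row, col) = (v, u) sample positions.
    coords = np.stack([geom_uv[..., 1].ravel(), geom_uv[..., 0].ravel()])
    sampled = map_coordinates(texture_atlas.astype(np.float32), coords,
                              order=3, mode='nearest')
    sub = np.zeros(occupied.shape, dtype=np.float32)
    sub[occupied] = sampled.reshape(occupied.shape)[occupied]
    return sub
```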
[000161] The method 1800 continues from step 1820 to a registration step 1830. In execution of the step 1830, non-rigid alignment is performed to create a warp vector sub-image which maps pixels from a master (reference or “best”) texture sub-image (view) to a current texture sub-image (view). Because the warp is determined between two texture sub-images, the warp vector sub-image is purely local, mapping from sub-image to sub-image, not UV map to UV map. The local difference between the warp and an identity transformation is relatively low, typically of the order of tens of pixels.
[000162] One implementation of a suitable non-rigid alignment method estimates a displacement field, or warp map, which is an array of 2D warp vectors, using covariance-based Mutual Information. In the warp map each vector describes the shift for a pixel from a master texture sub-image to a current texture sub-image.
[000163] The alignment quality associated with a mapping is measured using Mutual Information, a measure of pointwise statistical commonality between two images in terms of information theory. The mapping being assessed (from the master texture sub-image to the current texture sub-image) is applied to the master texture sub-image, and Mutual Information is measured between the current texture sub-image and the transformed master texture sub-image. The colour information of each image is quantised independently into 256 colour clusters, for example by using the k-means algorithm, for the purposes of calculating the Mutual Information. Each colour cluster is represented by a colour label (such as a unique integer per colour cluster in that image), and these labels are the elements over which the Mutual Information is determined.
[000164] A Mutual Information measure I for a first image containing a set of pixels associated with a set of labels A = {a_i} and a second image containing a set of pixels associated with a set of labels B = {b_j}, is defined as follows in Equation (1):

I(A, B) = Σ_i Σ_j P(a_i, b_j) log_2 [ P(a_i, b_j) / (P(a_i) P(b_j)) ]     (1)

[000165] In Equation (1), P(a_i, b_j) is the joint probability value of the two labels a_i and b_j co-occurring at the same pixel position, P(a_i) and P(b_j) are the marginal probability distribution values of the respective labels a_i and b_j, and log_2 is the logarithm function base 2. Further, i is the index of the label a_i and j is the index of the label b_j. If the product of the marginal probability values P(a_i) and P(b_j) is zero (0), then the pixel pair is ignored. According to Equation (1), the mutual information measure quantifies the extent to which labels co-occur at the same pixel position in the two images relative to the number of occurrences of those individual labels in the individual images. The extent of label co-occurrences is typically greater between aligned images than between unaligned images, according to the Mutual Information measure. In particular, one-dimensional histograms of labels in each image are used to determine the marginal probabilities of the labels (i.e. P(a_i) and P(b_j)), and a pairwise histogram of co-located labels is used to determine the joint probabilities (i.e. P(a_i, b_j)).
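Equation (1) can be computed directly from the co-located colour-cluster labels of the two images, for example as in the following sketch (illustrative names; restriction to an overlap mask is omitted for brevity).

```python
import numpy as np

def mutual_information(labels_a, labels_b, num_labels=256):
    """Mutual Information between two integer label images of identical shape (Equation (1))."""
    a = labels_a.ravel()
    b = labels_b.ravel()
    joint = np.zeros((num_labels, num_labels), dtype=np.float64)
    np.add.at(joint, (a, b), 1.0)           # pairwise histogram of co-located labels
    joint /= joint.sum()
    pa = joint.sum(axis=1)                  # marginal histograms
    pb = joint.sum(axis=0)
    nz = joint > 0                          # pairs with zero probability are ignored
    denom = np.outer(pa, pb)[nz]
    return float(np.sum(joint[nz] * np.log2(joint[nz] / denom)))
```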
[000166] The Mutual Information measure may be determined only for pixel locations within the overlapping region. The overlapping region is determined for example by creating a pixel mask for the reference texture sub-image and the current texture sub-image, and applying the mapping being assessed to the texture atlas’s pixel mask producing a transformed texture atlas pixel mask. Locations are only within the overlapping region, and thus considered for the probability distribution, if they are within the intersection of the master atlas pixel mask and the transformed texture atlas pixel mask.
[000167] Alternatively, instead of creating a transformed texture atlas, the probability distributions for the Mutual Information measure can be directly determined from the master texture sub-image and the current texture sub-image and the mapping being assessed using a technique of Partial Volume Interpolation. According to Partial Volume Interpolation, histograms involving the transformed texture atlas are instead calculated by first transforming pixel positions (that is, integer-valued coordinates) of the master texture sub-image onto the coordinate space of the current texture sub-image using the mapping. The label associated with
each pixel of the master texture sub-image is spatially distributed across pixel positions surrounding the associated transformed coordinate (i.e. in the coordinate space of the current texture sub-image). The spatial distribution is controlled by a kernel of weights that sum to 1, centred on the transformed coordinate, for example a trilinear interpolation kernel or other spatial distribution kernels as known in the literature. Then histograms involving the transformed texture atlas are instead calculated using the spatially distributed labels.
[000168] The warp map is estimated by first creating an initial warp map. The initial warp map is the identity mapping consisting of a set of (0, 0) vectors representing zero displacement from the initial position. Alternatively, the initial warp map may be calculated using approximate camera viewpoints measured during image capture. Warp map estimation then proceeds by assigning colour labels to each pixel in the atlas images to be aligned, using colour clustering as described above. A first pixel is selected in the master texture sub-image, and a second pixel is determined in the current texture sub-image by using the initial warp map. A set of third pixels is selected from the current texture sub-image, using a 3×3 neighbourhood around the second pixel.
[000169] A covariance score is calculated for each pixel in the set of third pixels. The covariance score estimates the statistical dependence between the label of the first pixel and the labels of each of the third pixels. The covariance score (C_i,j) for labels (a_i, b_j) is calculated using the marginal and joint histograms determined using Partial Volume Interpolation, as described above. The covariance score is calculated using Equation (2):

C_i,j = P(a_i, b_j) / ( P(a_i, b_j) + P(a_i) P(b_j) + ε )     (2)

[000170] In Equation (2), P(a_i, b_j) is the joint probability estimate of labels a_i and b_j placed at corresponding positions of the master texture sub-image and the current texture sub-image, determined based on the joint histogram of the master texture sub-image and current texture sub-image. P(a_i) is the probability estimate of the label a_i appearing in the master texture sub-image determined based on the marginal histogram of the master texture sub-image, and P(b_j) is the probability estimate of the label b_j appearing in the current texture sub-image determined based on the histogram of the current texture sub-image. ε is a regularization term to prevent a division-by-zero error, and can be an extremely small value. Corresponding positions for pixels in the master texture sub-image and the current texture sub-image are determined using the initial warp map. In Equation (2) the covariance score C_i,j is a ratio, where the numerator of the ratio is the joint probability estimate, and the denominator of the ratio is the joint probability estimate added to the product of the marginal probability estimates added to the regularization term.

[000171] The covariance score C_i,j has a value between 0 and 1. The covariance score C_i,j takes on values similar to a probability. When the two labels appear in both images, but rarely co-occur, C_i,j approaches 0, i.e. P(a_i, b_j) ≪ P(a_i)P(b_j). C_i,j is 0.5 where the two labels are statistically independent, i.e. P(a_i, b_j) = P(a_i)P(b_j). C_i,j approaches 1.0 as the two labels co-occur more often than not, i.e. P(a_i, b_j) ≫ P(a_i)P(b_j).
[000172] Candidate shift vectors are determined for each of the third pixels, where each candidate shift vector is the vector from the second pixel to one of the third pixels.
[000173] An adjustment shift vector is determined using a weighted sum of the candidate shift vectors for each of the third pixels, where the weight for each candidate shift vector is the covariance score for the corresponding third pixel. The adjustment shift vector is used to update the initial warp map, so that the updated warp map for the first pixel becomes a more accurate estimate of the alignment between the master texture sub-image and the current texture sub-image. The process is repeated by selecting each first pixel in the master texture sub-image, and creating an updated warp map with increased accuracy.
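The following sketch illustrates Equation (2) and the weighted-sum adjustment for a single first pixel over a 3×3 neighbourhood of third pixels. The precomputed joint and marginal histograms are assumed given, and the normalisation of the weighted sum by the total weight is an assumption of this sketch rather than a detail stated above.

```python
import numpy as np

def covariance_score(joint, pa, pb, a, b, eps=1e-12):
    """Equation (2): C = P(a,b) / (P(a,b) + P(a)P(b) + eps)."""
    return joint[a, b] / (joint[a, b] + pa[a] * pb[b] + eps)

def adjustment_vector(master_labels, current_labels, warp, x, y, joint, pa, pb):
    """Weighted-sum adjustment for the warp vector of first pixel (x, y)."""
    h, w = current_labels.shape
    # Second pixel: position in the current sub-image given by the current warp vector.
    cx = int(round(x + warp[y, x, 0]))
    cy = int(round(y + warp[y, x, 1]))
    a = master_labels[y, x]
    num = np.zeros(2)
    den = 0.0
    for dy in (-1, 0, 1):                      # third pixels: 3x3 neighbourhood
        for dx in (-1, 0, 1):
            tx, ty = cx + dx, cy + dy
            if 0 <= tx < w and 0 <= ty < h:
                c = covariance_score(joint, pa, pb, a, current_labels[ty, tx])
                num += c * np.array([dx, dy])  # candidate shift vector, weighted by C
                den += c
    # Normalising by the total weight is an assumption of this sketch.
    return num / den if den > 0 else np.zeros(2)
```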
[000174] The warp map estimation method of step 1830 determines whether alignment is completed based upon an estimate of convergence. Examples of suitable convergence completion tests are a predefined maximum iteration number, or a predefined threshold value which halts the iteration when the predefined threshold value is larger than the root-mean-square magnitude of the adjustment shift vectors corresponding to each vector in the warp map. The convergence completion tests may be determined by experimentation, for example. An example threshold value is 0.001 pixels. In some implementations, the predefined maximum iteration number is set to 1. In a majority of cases, however, to achieve accurate registration, the maximum iteration number is set to at least 10. For smaller images (e.g. 64×64 pixels) the maximum iteration number can be set to 100. If the alignment is completed, then the updated warp map becomes the final warp map. The final warp map is then used to align the images in registration step 1830, by applying the displacement vectors to the master texture sub-image.
The displacements may be applied using a suitable interpolation algorithm such as bicubic or sinc interpolation.
[000175] Step 730 of Fig. 7 is typically implemented in a similar manner to step 1830.
[000176] The method 1800 continues from step 1830 to a warp vector combining step 1840. In execution of step 1840, warp vectors from adjacent sub-images are combined with the warp vectors within each main fragment. Step 1840 can be operated in a similar manner to step 740 of Fig. 7. In the example described, let the current main fragment be I, neighbouring an adjacent fragment J. Fragment I has two sub-images under alignment, one the master texture sub-image, and one the current texture sub-image, both of the same size, and with a common UV map offset (u_I, v_I) indicating the top left position of the sub-images with respect to the UV map. A geometry sub-image G_I for fragment I contains geometry information for each position (x, y) in the sub-image, with the geometry information consisting of: a fragment number f_Ixy, indicating the fragment in which associated texture information is found, and which is 0 if there is no associated fragment, a UV map position (u_Ixy, v_Ixy) to the associated fragment in the texture map, and local orientation and scale parameters in the form (m_Ixy cos θ_Ixy, m_Ixy sin θ_Ixy) which transform orientation and scale from positions in fragment I to positions in the adjacent fragment J. Fragment I also has a warp map sub-image W_I, which contains a displacement vector (x_i, y_i) for each position in the sub-image giving displacements of positions in the master texture sub-image to positions in the current texture sub-image, i.e. an identity transformation would be the vector (0, 0).
[000177] Similarly, for fragment J, there are the sub-images G_J, W_J, and the UV map offset (u_J, v_J). The result of the warp vector combining step 1840 is an adjustment vector (d_x, d_y) which is added to the current displacement vector (x_i, y_i).
[000178] The warp combination step 1840 proceeds as follows. For each position (x_i, y_i) in the warp vector sub-image W_I, if the position is associated by the geometry sub-image with an adjacent fragment f_Ixy = J, and I ≠ J and J ≠ 0, then a (u_Ixy, v_Ixy) coordinate is obtained from the corresponding position in the geometry sub-image, along with the rotation and scale parameters (m_Ixy cos θ_Ixy, m_Ixy sin θ_Ixy). The associated pixel in the adjacent sub-image is determined by subtracting the UV offset (u_J, v_J) for fragment J from the (u_Ixy, v_Ixy) coordinate to determine a location (x_j, y_j) in the warp vector sub-image for fragment J, (x_j, y_j) = (u_Ixy, v_Ixy) - (u_J, v_J). From the determined location a warp vector (p′, q′) can be interpolated from the warp vector sub-image W_J, for example using cubic interpolation. The difference between the warp vector (p′, q′) and the identity warp vector is the vector (d′_x, d′_y) = (p′, q′) - (x_j, y_j). The difference (d′_x, d′_y) is with respect to fragment J, which may be relatively rotated and scaled with respect to fragment I. Accordingly, a difference with respect to fragment I, (d_x, d_y), must be determined by inversely mapping this difference vector from fragment J to fragment I, (d_x, d_y) = (d′_x cos θ - d′_y sin θ, d′_x sin θ + d′_y cos θ)/m, then replacing the pixel (x_i, y_i) in the warp vector sub-image by the teleported warp vector (x_i + d_x, y_i + d_y). The pixel replacement operation replaces the warp vectors in the margins of each fragment with modified warp vectors teleported from adjacent fragments. At the pixel replacement, the warp vectors internal to the main fragment are as yet unaltered and may still contain seam discontinuities relative to adjacent fragments.
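The teleport of a single boundary warp vector can be sketched as follows, mirroring the formulas above; the sketch assumes the warp sub-image of fragment J stores target positions (so that subtracting (x_j, y_j) yields the displacement), and uses scipy's map_coordinates for the cubic interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates  # assumed available

def teleport_warp_vector(xi, yi, u_ixy, v_ixy, m, theta, uv_offset_j, warp_j):
    """Replace the warp vector at (xi, yi) of fragment I by one teleported from fragment J.

    warp_j      : (h, w, 2) warp vector sub-image of fragment J (target positions)
    uv_offset_j : (u_J, v_J) top-left UV offset of fragment J's sub-image
    """
    # Location of the coincident position inside fragment J's sub-image.
    xj = u_ixy - uv_offset_j[0]
    yj = v_ixy - uv_offset_j[1]
    # Interpolate fragment J's warp vector at that location (cubic interpolation).
    p = map_coordinates(warp_j[..., 0], [[yj], [xj]], order=3, mode='nearest')[0]
    q = map_coordinates(warp_j[..., 1], [[yj], [xj]], order=3, mode='nearest')[0]
    # Difference from the identity warp, expressed in fragment J's frame.
    dxp, dyp = p - xj, q - yj
    # Map the difference back into fragment I's frame (inverse rotation and scale).
    dx = (dxp * np.cos(theta) - dyp * np.sin(theta)) / m
    dy = (dxp * np.sin(theta) + dyp * np.cos(theta)) / m
    return xi + dx, yi + dy
```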
[000179] The method 1800 continues from step 1840 to a warp vector regularization step 1845. In execution of step 1845, seam discontinuities are ameliorated by regularizing the warp vectors in each warp vector sub-image. An effective way to regularize each warp vector sub-image is to convolve the warp vectors of the sub-image by a Gaussian blur kernel with an RMS width comparable to the width of the sub-image margins, for example, 20 pixels. If the warp vectors should not be smooth because of occlusions in the texture maps, improved results may be obtained by regularizing the warp vectors only in the vicinity of each fragment boundary.
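The regularization of step 1845 can be sketched as a Gaussian smoothing of each warp component, for example with scipy's gaussian_filter; the sigma value below is the example margin width mentioned above.

```python
from scipy.ndimage import gaussian_filter  # assumed available

def regularise_warp(warp_sub, sigma=20.0):
    """Smooth each component of a (h, w, 2) warp vector sub-image with a Gaussian kernel."""
    smoothed = warp_sub.copy()
    smoothed[..., 0] = gaussian_filter(warp_sub[..., 0], sigma)
    smoothed[..., 1] = gaussian_filter(warp_sub[..., 1], sigma)
    return smoothed
```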
[000180] The method 1800 continues from step 1845 to a warp vector assembly step 1850. Step 1850 operates in a similar manner to step 750 of Fig. 7. In execution of step 1850, the warp vectors in the warp vector sub-images for the list of best fragments associated with the current reference texture atlas are assembled by placement in the UV plane of the current warp atlas. For each warp vector at position (x, y) in each participating warp vector sub-image for fragment I, if the warp vector is not associated by the geometry sub-image with fragment I, the warp vector is ignored. If the warp is associated by the geometry sub-image with fragment I, a displaced position (x′, y′) is read from location (x, y) of the warp vector sub-image. One method for assembling the displaced position into the current warp atlas is to offset both the position (x, y) and the displaced pixel (x′, y′) by the sub-image offset (u_I, v_I), so that the UV map location of the warp vector is (u, v) = (x, y) + (u_I, v_I), and the warp value assigned to this position is (u′, v′) = (x′, y′) + (u_I, v_I). However, the method will only work if the pixel (x′, y′) is within fragment I, because some of the warp vectors will be mapped outside of the main fragment, resulting in gaps in the warped texture map.
[000181] An alternative method for assembling the pixel into the current warp atlas is to find the nearest pixel (X, Y) to the displaced pixel position (x′, y′), that is (X, Y) = (⌊x′ + 0.5⌋, ⌊y′ + 0.5⌋) (where ⌊a⌋ is the floor of a), and, if (X, Y) is within the fragment geometry sub-image, reading from it the corresponding fragment J and the UV map position of the pixel (U, V). Because the pixel is in another fragment, the sub-pixel misalignment between (X, Y) and (x′, y′) will be relatively scaled and rotated, thus the UV location of the warp vector is as before, (u, v) = (x, y) + (u_I, v_I), and the warp value assigned to this position is (u′, v′) as per Equation (3):

(u′, v′) = (U, V) + ( m((x′ - X) cos θ - (y′ - Y) sin θ), m((x′ - X) sin θ + (y′ - Y) cos θ) )     (3)

[000182] In this manner, the geometrical arrangement of the fragments modifies the warp vectors of the warp map. The alignment used as a step 540 to the alignment method 500 described in relation to Fig. 5 allows the creation of warp atlases in which every texture atlas can be aligned with minimal seam discontinuities to a set of master views, appropriate for BRDF estimation.
[000183] The fragment geometry determination step 510 and the geometry sub-image creation step 1740 both involve producing fragment data including a main fragment plus context around the main fragment margins, from fragments which neighbour the main fragment on the 3D mesh surface 240. The fragment margin images may be produced according to a margin image generation method 1200 of Fig. 12. The method 1200 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906. The method 1200 can be used to implement steps 710 and 720 of Fig. 7 (or 1810 and 1820 of Fig. 18) as the margin image represents the expanded texture view. The method 1200 may also be implemented at step 740 of Fig. 7 (or step 1840 of Fig. 18). In implementations relating to step 740 or step 1840, warp vectors are copied to the expanded margins rather than texture data.
[000184] The method 1200 begins at a determining step 1210. At step 1210 a size of the selected fragment in UV space is determined. The fragment size may be found by expanding an empty bounding box to encompass all UV co-ordinates 1162-1163 of fragment boundary
vertices 1160 of the fragment boundary vertex list 1151 corresponding to the current fragment 1150. In the case that the mesh 230 is not a closed mesh, the fragment may contain vertices at a mesh boundary. In the case the mesh 230 is not a closed mesh, as well as expanding the bounding box to encompass all fragment boundary vertices (between multiple fragments), the bounding box is also expanded to encompass all mesh boundary vertices (vertices at a fragment boundary but not adjacent to another fragment). Mesh boundary vertices may be found according to known techniques, such as building a list of all edges, identifying edges adjacent to only a single face, and taking vertices of those edges as mesh boundary vertices. A technique for finding mesh boundary vertices may be performed during execution of the mesh augmentation method 1000 to avoid expensive computation during the alignment method 1800.
[000185] The method 1200 continues from step 1210 to a creating step 1220. In step 1220, an empty margin image is created. The empty margin image is sized to hold not only the fragment but a margin around the fragment whose pixel values will be determined from the neighbouring fragments on the mesh 230. The size of the empty margin image is accordingly set to the fragment size, plus a value for both the U size and V size which is double the minimum boundary extent.
[000186] The method 1200 continues from step 1220 to a copying step 1230. In step 1230, the selected fragment is copied into the empty margin image. Pixels are conditionally copied from a rectangular region of the texture image 803 corresponding to the bounding box of the fragment. The pixels are copied conditionally because a fragment’s bounding box may contain other fragments. The pixels are copied only if the corresponding pixel value of the fragment mask image 1104 is the fragment identifier of the selected fragment. The pixels are copied into the margin image such that the top-left pixel of the fragment in the texture image 803 is conditionally copied to an offset into the margin image equal to the minimum boundary extent. By creating the margin image and copying the fragment in the manner described, an empty margin equal to the minimum boundary extent exists on all sides of the fragment.
[000187] The method 1200 continues from step 1230 to a setting step 1240. In step 1240, a query mask size is set. To ensure that the combined neighbourhood queries will surround the fragment on all sides by at least the minimum boundary extent, the query mask size is set to double the minimum boundary extent. The method 1200 continues from step 1240 to a determining step 1250. In step 1250, a set of query positions is determined. The set of query positions may correspond to a sampling grid covering the margin image. To ensure that the combined neighbourhood queries will generally surround the fragment on all sides by at least the minimum boundary extent, the grid spacing is set to the minimum boundary extent.
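The interplay between the minimum boundary extent, the query mask size, and the grid spacing can be illustrated with a short sketch. The function below simply enumerates grid positions over a margin image; the argument layout is an assumption for illustration, not the literal implementation of steps 1240 and 1250.

def query_positions(margin_width, margin_height, min_boundary_extent):
    """Yield (x, y) query positions on a grid covering the margin image.

    The grid spacing equals the minimum boundary extent, and the query mask
    extends double that value in each direction, so the union of the query
    neighbourhoods surrounds the fragment by at least the minimum extent.
    """
    spacing = min_boundary_extent
    mask_size = 2 * min_boundary_extent     # query mask range used at step 1280
    for y in range(0, margin_height, spacing):
        for x in range(0, margin_width, spacing):
            yield (x, y), mask_size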
[000188] The method 1200 continues from step 1250 to a check step 1260. At step 1260, the margin image generation method 1200 checks whether there are any query positions which have not yet been processed. If so (“Y” at step 1260), the method 1200 continues to a check step 1270. At 1270, the query position is checked to see if the query position lies inside the selected fragment. The check may be performed by inspecting the pixel of the fragment mask image 1104 at the query position, and comparing that value to the fragment ID of the selected fragment. If the values do not match (“N” at step 1270), then the query position does not lie inside the selected fragment. In this case, the query position may be skipped, and the method 1200 returns to check step 1260. If, at step 1270, the query position is found to lie inside the selected fragment (“Y” at step 1270), the method 1200 continues to a neighbourhood query step 1280. Step 1280 performs a neighbourhood query, invoking a neighbourhood query method 1300. Step 1280 passes a query point and region, received at a step 1305 of neighbourhood query method 1300. The query position is passed as the query point, and the query mask size that was set in step 1240 is passed as the query region. The neighbourhood query performed at step 1280 returns the fragment neighbourhood around the query point as a list of anchor points.
[000189] The method 1200 continues from step 1280 to a sampling step 1290. At step 1290, the neighbourhood is sampled into the margin image. The step 1290 invokes a texture sampling method 2100, providing the list of anchor points from step 1280 as an input, and providing a location in the margin image centred at the query position as the sample space to receive texture samples. Before writing any texture pixel into the sample space, the existing sample space pixel value is checked at step 1290. If a value has already been written into the sample space pixel, then instead of overwriting it, the sampling operation is skipped. Additionally, step 1290 may be skipped in the case that the list of anchor points contains only a single anchor point.
[000190] After the texture has been sampled over the neighbourhood of the current query position in step 1290, the margin image generation method 1200 returns to check step 1260, which checks again whether there are any further query positions remaining to be processed. The method 1200 continues to perform neighbourhood queries and to sample the texture at the corresponding anchor points for as long as query positions remain. If no further query positions remain to be processed in step 1260 ("N" at step 1260), then the margin image is complete, and the method 1200 terminates at step 1299.
[000191] The method 1200 can also be used to create a geometry sub-image by writing geometry data (UV co-ordinates, rotation and scale values, and fragment ID) instead of pixel data from a texture atlas 803. As such, to create a geometry sub-image, at step 1230, when a fragment ID mask image pixel value matches the fragment ID of the current fragment, instead of writing pixel values, geometry data is written, consisting of the UV co-ordinates of the pixel, a scale of 1.0 and rotation of 0, and the fragment ID of the current fragment.
[000192] The neighbourhood query method 1300 as invoked by neighbourhood query step 1280 of method 1200 is now described in relation to Fig. 13. The method 1300 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000193] The method 1300 uses the augmented mesh 1100 to perform neighbourhood queries which are robust to the presence of fragment seams.
[000194] The method 1300 starts at the receiving step 1305. In step 1305, a query point and an associated query region or query range are received. The query point is positioned within a particular (reference) fragment of a texture image of the three-dimensional graphic object. The query point identifies a point of interest on the mesh surface, and may be represented with UV co-ordinates. The query range identifies a region of interest or mask around the query point on the mesh surface, and may be represented by a distance within which the query neighbourhood is to be processed (in both U and V). The augmented mesh 1100 is represented by a number of fragments having UV coordinates.
[000195] The method 1300 progresses from step 1305 to an initialising step 1310. In step 1310, working data 1400 for the received query on the augmented mesh 1100 is initialised. The working data 1400 is shown in Figs. 14A to 14C. The working data 1400 consists of, but is not limited to, an AnchorPoint list 1401 and a FragmentLink list 1402, as shown in Fig. 14A. The AnchorPoint list 1401 is a list of anchor points, an example AnchorPoint 1410 being shown in Fig. 14B. Each AnchorPoint 1410 has a Fragment identifier (ID) 1411, UV co-ordinates 1412-1413 into the texture image 803, a scale value 1415, and an angle value 1416. The FragmentLink list 1402 is a list of fragment links, an example FragmentLink 1420 being shown in Fig. 14C. Each FragmentLink 1420 relates a first fragment of the augmented mesh 1100 to a neighbouring fragment via shared mesh geometry elements, and comprises a source fragment identifier 1421, query UV co-ordinates 1422-1423, a target fragment identifier 1425, and a GeometryLinks list 1426 containing the shared or common mesh geometry elements.
[000196] Step 1310 initialises the AnchorPoint list 1401 with a single AnchorPoint, the anchor point 1410, representing the received query point. The fragment identifier 1411 of the reference fragment on which the query point is placed may be determined by looking up the fragment mask image 1104 at the UV co-ordinates of the query point. The UV co-ordinates 1412-1413 of the anchor point 1410 are set to the UV co-ordinates of the received query point. The scale 1415 is set to 1.0, and the angle 1416 is set to 0. The FragmentLink list 1402 is initialised to an empty list.
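To make the working data concrete, the following sketch defines records mirroring the AnchorPoint 1410 and FragmentLink 1420 structures and the initialisation performed at step 1310. The field names and the fragment-mask lookup are illustrative assumptions, not the actual data structures of the described system.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AnchorPoint:               # mirrors AnchorPoint 1410
    fragment_id: int             # fragment identifier 1411
    uv: Tuple[float, float]      # UV co-ordinates 1412-1413
    scale: float = 1.0           # scale value 1415
    angle: float = 0.0           # angle value 1416

@dataclass
class FragmentLink:              # mirrors FragmentLink 1420
    source_fragment: int                 # source fragment identifier 1421
    query_uv: Tuple[float, float]        # query UV co-ordinates 1422-1423
    target_fragment: int                 # target fragment identifier 1425
    geometry_links: List[int] = field(default_factory=list)  # GeometryLinks 1426

def initialise_working_data(query_uv, fragment_mask):
    """Step 1310: one anchor point at the query point, empty fragment link list."""
    u, v = int(query_uv[0]), int(query_uv[1])
    fragment_id = fragment_mask[v][u]    # look up the reference fragment from the mask image
    anchor_points = [AnchorPoint(fragment_id, query_uv)]
    fragment_links: List[FragmentLink] = []
    return anchor_points, fragment_links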
[000197] The method 1300 continues from step 1310 to an adding step 1320. In step 1320, zero or more fragment links are populated for the initial query point, and added to the FragmentLink list 1402. Step 1320 invokes a fragment link creation method 1900, described hereafter with reference to Fig. 19.
[000198] The method 1300 continues from step 1320 to a check step 1330. The step 1330 checks whether the FragmentLink list 1402 is empty. If the FragmentLink list 1402 is not empty ("N" at step 1330), the method 1300 continues to a removing step 1340. At step 1340, a FragmentLink is selected from the FragmentLink list 1402. The FragmentLink 1420 with the greatest number of GeometryLinks in the corresponding GeometryLinks array 1426 may be selected to improve the accuracy of cross-fragment geometry calculations. A FragmentLink may alternatively be selected arbitrarily. The selected FragmentLink is removed ("popped") from the FragmentLink list 1402 of the working data 1400. If the AnchorPoint list 1401 contains an anchor point 1410 with a fragment ID 1411 matching the target fragment ID 1425 of the popped FragmentLink 1420, then the popped FragmentLink 1420 may be discarded, since an anchor point has already been found for the target fragment identifier 1425. In this event, the FragmentLink list may be checked for more FragmentLinks again at step 1330, and a different FragmentLink may be popped at step 1340.
[000199] The method 1300 continues from step 1340 to a determining step 1350. In step 1350, the selected fragment link is used to determine a new anchor point with respect to a target fragment for which an anchor point has not yet been determined, and the new anchor point is added to the AnchorPoint list 1401. Step 1350 invokes anchor point determination method 2000, described below with reference to Fig. 20.
[000200] The query point and a pre-determined mask are passed to the anchor point determination method 2000. The method 1300 continues from step 1350 to an adding step 1360. At step 1360, zero or more fragment links are populated for the newly determined anchor point, and added to the FragmentLink list 1402. Step 1360 again invokes the fragment link creation method 1900.
[000201] In general, each anchor point determined in step 1350 may result in further fragment links being created in step 1360, and each fragment link determined at step 1360 may be used at a subsequent iteration of step 1350 to determine a new anchor point. Each subsequent anchor point may thus be determined based on common geometry data, for example a vertex and/or a point on an edge, associated not only with a first reference fragment and a first target fragment, but also with a second reference fragment (corresponding to the first target fragment) and a second target fragment.
[000202] The neighbourhood query method 1300 then returns to the decision step 1330, and proceeds in a loop until FragmentLink list 1402 is empty. If FragmentLink list 1402 is empty (“Y” at step 1330), the method 1300 continues to a returning step 1370, in which the AnchorPoint list 1401 is returned, and the neighbourhood query method 1300 terminates at step 1399.
[000203] Steps 1330 to 1360 operate to determine an anchor point for each fragment of the three-dimensional graphic object associated with the query region corresponding to the query point. As each anchor point relates to a different fragment, each anchor point typically has a different location in the texture image.
[000204] The fragment link creation method 1900 of Fig. 19, as implemented at steps 1320 and 1360, is now described. The method 1900 finds neighbouring fragments within the query region and creates FragmentLink working data for each. The method 1900 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000205] The method 1900 begins at a receiving step 1905. In step 1905 the mask (query region) and anchor point of the query on the augmented mesh 1100 are received. The anchor point 1410 has a fragment ID 1411, a UV co-ordinate location 1412-1413 into the texture image 803, a scale 1415, and an angle 1416. The mask may be represented by a range value, denoting the number of pixels in each direction (up, down, left, or right) to which the mask extends. Such a representation corresponds to a square mask with a side length of twice the range value plus one. Other mask representations and mask shapes can be used.
[000206] The method 1900 continues from step 1905 to a determining step 1910. In step 1910, the fragment boundary vertices of the fragment associated with the anchor point 1410 are determined. The vertices are the contents of the FragmentBoundaryVertex array 1151 found by indexing the Fragment array 1105 of the augmented mesh 1100 by the ID of the current fragment, which is the fragment identifier 1411 of the anchor point 1410.
[000207] The method 1900 continues to a check step 1920. Steps 1920-1940 operate to search for vertices at the boundary of the current fragment which lie within the query mask. Step 1920 checks whether further fragment boundary vertices remain to be considered. If so ("Y" at step 1920), the method 1900 continues to a check step 1930. At step 1930, the next fragment boundary vertex 1160 is selected, and is checked to see if the UV co-ordinates 1162-1163 of the selected boundary vertex lie within the query mask. For an anchor point 1410 derived from the initial query point, the anchor point scale 1415 is 1.0 and rotation 1416 is 0, as set in step 1310 of the method 1300. Accordingly, the fragment boundary vertex 1160 lies within the mask if the absolute difference of the U co-ordinate 1162 of the vertex and the anchor point's U co-ordinate 1412 is less than or equal to the query mask range value, and the absolute difference of the V co-ordinate 1163 of the vertex and the anchor point's V co-ordinate 1413 is also less than or equal to the query mask range value. In general, an anchor point may have a non-unity scale and a non-zero rotation. The boundary vertex UV co-ordinates 1162-1163 are transformed into the anchor point's frame of reference using standard trigonometry before testing if the boundary vertex lies within the bounds of the query mask.
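A sketch of the containment test of step 1930, including the general case where the anchor point has a non-unity scale and a non-zero rotation, is given below. The argument layout is an assumption for illustration; only the axis-aligned distance test and the rotate-and-scale transform reflect the description above.

import math

def vertex_in_query_mask(vertex_uv, anchor_uv, anchor_scale, anchor_angle, mask_range):
    """Step 1930: is a fragment boundary vertex inside the query mask?

    The vertex UV is transformed into the anchor point's frame of reference
    (inverse rotation and scale about the anchor point), then compared
    against the mask range on each axis.
    """
    du = vertex_uv[0] - anchor_uv[0]
    dv = vertex_uv[1] - anchor_uv[1]
    # Undo the anchor point's rotation and scale (identity for the initial query point).
    c, s = math.cos(-anchor_angle), math.sin(-anchor_angle)
    lu = (du * c - dv * s) / anchor_scale
    lv = (du * s + dv * c) / anchor_scale
    return abs(lu) <= mask_range and abs(lv) <= mask_range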
[000208] If step 1930 determines that the boundary vertex 1160 lies within the query mask (“Y” at step 1930), the method 1900 continues to a selecting step 1940. At step 1940, the boundary vertex 1160 is marked as selected. The method 1900 then returns to step 1920. The method 1900 also returns to step 1920 from step 1930 if the boundary vertex was found to not lie within the query mask (“N” at step 1930).
[000209] Steps 1920-1940 continue to execute until all boundary vertices in the FragmentBoundaryVertex array 1151 of the current fragment have been searched (that is, step 1920 returns "N"). The steps 1920-1940 describe a brute-force search technique, but other search methods such as grid-based partitioning are possible. When step 1920 finds that no more boundary vertices remain to be checked ("N" at step 1920), the fragment link creation method 1900 continues to a grouping step 1950. In step 1950 all fragment boundary vertices 1160 marked as selected in step 1940 are grouped according to their neighbouring fragment identifier 1164. The grouping may be done by initialising an empty list, keyed by fragment identifier, then iterating through each selected fragment boundary vertex 1160 and inspecting the neighbouring fragment identifier 1164, which was initialised at step 1040 of the mesh augmentation method 1000. If a list entry exists for the neighbouring fragment identifier 1164, then the fragment boundary vertex 1160 is appended to the corresponding list entry. Otherwise, a new list entry is created for the fragment boundary vertex 1160. The result of step 1950 is a list of neighbouring fragments, each neighbouring fragment comprising a set of fragment boundary vertices 1160.
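The grouping at step 1950 is essentially a bucketing of selected vertices by neighbouring fragment, as in the sketch below; the vertex record field name is assumed for illustration.

from collections import defaultdict

def group_by_neighbour(selected_vertices):
    """Step 1950: group selected boundary vertices by neighbouring fragment ID.

    selected_vertices: iterable of objects with a 'neighbour_fragment' attribute
    (corresponding to the neighbouring fragment identifier 1164).
    Returns a dict mapping neighbouring fragment ID -> list of vertices.
    """
    groups = defaultdict(list)
    for vertex in selected_vertices:
        groups[vertex.neighbour_fragment].append(vertex)
    return dict(groups)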
[000210] The method 1900 continues from step 1950 to a check step 1960. Steps 1960-1990 operate to create a fragment link for each neighbouring fragment of the current fragment which was found from the selected fragment boundary vertices within the query region. The fragment link is used in the anchor point determination method 2000 (described in relation to Fig. 20) to position an anchor point with relation to a neighbouring fragment. The step 1960 checks whether any further neighbouring fragments remain to be processed. If so ("Y" at step 1960), the method 1900 continues to a selecting step 1970. At step 1970, the next neighbouring fragment is selected. In this way, if a single boundary vertex has multiple neighbours, each neighbour is considered in turn. The method 1900 continues from step 1970 to a check step 1980. The step 1980 checks whether enough boundary vertices were found to form a fragment link with the neighbouring fragment. Two or more boundary vertices are required to form a fragment link. If there are not enough vertices ("N" at step 1980), the fragment link creation method 1900 returns to step 1960.
[000211] If there are enough vertices (“Y” at step 1980), the method 1900 continues to a creating step 1990. A fragment link is created in execution of step 1990. The fragment link 1420 is created comprising a source fragment identifier 1421 of the current fragment identifier, as determined by the fragment identifier 1411 of the anchor point 1410; Query UV co-ordinates 1422-1423 equal to the UV co-ordinates 1412-1413 of the anchor point 1410; a target fragment identifier 1425 corresponding to the neighbouring fragment selected at step 1970, and a GeometryLinks array 1426 containing the set of fragment boundary vertices 1160 in the neighbouring fragment. Each element of the GeometryLinks array 1426 may be an integer
index into the FragmentBoundaryVertex array 1151 of the current fragment. The GeometryLinks array 1426 provides a geometric relationship between points on one side of the shared fragment boundary and points on the other side of the boundary. During the anchor point determination method 2000 (described in relation to Fig. 20), the geometric relationship of the GeometryLinks array 1426 allows the anchor point to be positioned in the second fragment, given a known anchor point position in the first fragment.
[000212] If the target fragment identifier 1425 of the created fragment link 1420 matches the fragment identifier 1411 of an anchor point 1410 within the AnchorPoint list 1401, then an anchor point for the target fragment has already been found, and the target link may be discarded. Otherwise, the created fragment link 1420 is added to the FragmentLink list 1402 of the working data 1400. The fragment link creation method 1900 returns from step 1990 to the step 1960.
[000213] At step 1960, if there are no further neighbouring fragments to be selected (“N” step 1960), the method 1900 terminates at step 1999.
[000214] The anchor point determination method 2000, as implemented at step 1350, is now described with reference to Fig. 20. The method 2000 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000215] The method 2000 performs geometric computation on the FragmentLink 1420 in order to determine the angle, scale, and position of a corresponding anchor point in a neighbouring fragment. The method 2000 begins at a receiving step 2010. At step 2010 the FragmentLink 1420 is received. The FragmentLink 1420 comprises a source fragment identifier 1421; a query UV location 1422-1423; a target fragment identifier 1425, and a list of geometry links 1426 at the fragment boundary between the source and target fragments within the query region.
[000216] The method 2000 continues from step 2010 to a determining step 2020. Step 2020 determines reference points on the boundary of the source fragment. The determined reference points correspond to mesh geometry elements at the fragment boundary, such as vertices. The reference points are found using the GeometryLinks array 1426 of the received FragmentLink 1420. The pair of vertices in the GeometryLinks array 1426 which are the most distant from
each other are selected as the reference points. The points of the array 1426 are constrained to a boundary of the fragment relating to the query region as determined in step 1910 for each fragment.
[000217] The method 2000 continues from step 2020 to a sorting step 2030. The co-ordinates of the two reference points and the query point 1422-1423 form a triangle. In step 2030, the points of the triangle are sorted to form a consistent ordering. The ordering may be determined by first finding which side of the line from the first reference point to the second reference point the query point lies on. If the query point lies on one pre-determined side of the line (such as the left side), then the two reference points are swapped. If the query point does not lie on the predetermined side of the line for which a swap is required, then no change is made.
[000218] The method 2000 continues from step 2030 to a determining step 2040. In step 2040, a reference angle of the triangle is determined. The reference angle may be formed by taking the angle between the line from the first reference point to the second reference point, and the line from the first reference point to the query point.
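The side-of-line test of step 2030 and the reference angle of step 2040 can be expressed with elementary vector arithmetic, as in the sketch below. Point arguments are (u, v) pairs, and the choice of which side triggers a swap is arbitrary and assumed here; the sketch is illustrative rather than the literal implementation.

import math

def order_reference_points(ref_a, ref_b, query):
    """Step 2030: enforce a consistent orientation of the reference triangle.

    If the query point lies on the left of the line from ref_a to ref_b
    (positive cross product), the two reference points are swapped.
    """
    cross = ((ref_b[0] - ref_a[0]) * (query[1] - ref_a[1])
             - (ref_b[1] - ref_a[1]) * (query[0] - ref_a[0]))
    return (ref_b, ref_a) if cross > 0 else (ref_a, ref_b)

def reference_angle(ref_a, ref_b, query):
    """Step 2040: angle at ref_a between the line to ref_b and the line to the query point."""
    angle_to_b = math.atan2(ref_b[1] - ref_a[1], ref_b[0] - ref_a[0])
    angle_to_q = math.atan2(query[1] - ref_a[1], query[0] - ref_a[0])
    return angle_to_q - angle_to_b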
[000219] The method 2000 continues from step 2040 to a determining step 2050. Step 2050 determines target points on the boundary of the target fragment, which correspond to the reference points on the boundary of the source fragment. Where the reference points lie on the source fragment identified by the source fragment identifier 1421, the target points are corresponding points lying on the target fragment identified by the target fragment identifier 1425. The vertex reference in the GeometryLinks array 1426 corresponding to the first reference point is looked up in the FragmentBoundaryVertex array 1151 member of the Fragment array 1105 corresponding to the source fragment ID 1421, to find the FragmentBoundaryVertex 1160 corresponding to the first vertex reference. The vertex reference Vref 1161 and UV co-ordinates 1162-1163 are found from the FragmentBoundaryVertex 1160.
[000220] The first target point has the same vertex reference Vref 1161, but different UV coordinates (the co-ordinates lie in the target fragment). The target point can be found by accessing the Vertex 810 with reference 1161 and inspecting the faces adjacent to the vertex via the corresponding VFAdj array 1117, until a neighbouring face is found whose fragment identifier 1140 matches the target fragment identifier 1425 of the FragmentLink 1420. The first target point is then given the UV co-ordinates 834-839 corresponding to the vertex 810 of the
first reference point within the found neighbouring face. The second target point is found in the same way, using the second reference point.
[000221] The method 2000 continues from step 2050 to a determining step 2060. Steps 2060-2080, to be described, operate to determine a target angle, target scale, and anchor point position respectively, relating the query point from the source fragment to the target fragment.
[000222] In step 2060, a relative local cross-fragment target angle is determined, representing the change in direction to the target fragment, relative to the initial fragment of the query. Because texture fragments may be packed into the texture image 803 at any orientation, any given reference direction in a first fragment (for example, pointing rightwards, in the direction of increasing X) may correspond to any different direction in a neighbouring fragment (for example, pointing diagonally down and left). Furthermore, such directions are not fixed for a whole fragment, but can vary according to the degree of distortion accepted in the parameterisation. The angle between the line from the first reference point to the second reference point and the reference direction is subtracted from the angle between the line from the first target point to the second target point and the reference direction. The difference between the angles is used to determine the relative target angle 1416 of the anchor point 1410. The reference direction can be an arbitrary direction for a single overall view or presentation of the object represented by the mesh 230 in UV space.
[000223] The method 2000 continues from step 2060 to a determining step 2070. In step 2070, a relative local cross-fragment target scale is determined, representing the change in local scale in the target fragment, relative to the initial fragment of the query. The cross-fragment target scale represents a ratio of a distance between the reference points and a distance between the target points. Because texture fragments may be packed at different scales in the texture image 803, the local scale in UV space may vary across a fragment boundary, even if the scale on the surface of the mesh 230 remains constant. Similarly to relative angle, the scale can vary over a fragment. The ratio of the distance between the reference points to the distance between the target points is an indicator of relative scale, and this ratio is set as the relative target scale 1415 associated with the anchor point 1410.
[000224] The method 2000 continues from step 2070 to a determining step 2080. Finally, in step 2080, the co-ordinates of the query point are determined with respect to the neighbouring fragment to be used as the anchor point’s location. Conceptually, a reference triangle formed
by the reference points and the query point 1422-1423 is a similar (same-angled) triangle to a target triangle formed by the target points and the anchor point. Given the query point, reference points, and target points, this allows the anchor point to be found. The anchor point is determined using the target points of step 2050 and the reference angle such that angles between the anchor point and the target points of step 2050 correspond to angles between the query point and the relevant reference points of step 2030. The anchor point may be determined as a displacement from the first target point. The distance of the displacement is equal to the distance of the query point from the first reference point multiplied by the relative scale 1415, and the direction of the displacement is the reference angle determined in step 2040 plus the relative angle 1416 of the anchor point. For a target fragment that does not contain the query point received at step 1905 (effectively every target fragment), the anchor point is located outside the target fragment.
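Combining steps 2060-2080, the cross-fragment anchor point can be computed as in the following sketch. All point arguments are (u, v) pairs; the routine is a plausible reading of the description under the similar-triangle interpretation, not the literal implementation of method 2000.

import math

def determine_anchor_point(ref_a, ref_b, query, tgt_a, tgt_b):
    """Steps 2060-2080: relative angle, relative scale, and anchor position.

    ref_a, ref_b : reference points on the source fragment boundary
    query        : query point in the source fragment
    tgt_a, tgt_b : corresponding target points on the target fragment boundary
    Returns (anchor_uv, relative_scale, relative_angle).
    """
    # Step 2060: relative cross-fragment angle between the reference and target edges.
    ref_edge = math.atan2(ref_b[1] - ref_a[1], ref_b[0] - ref_a[0])
    tgt_edge = math.atan2(tgt_b[1] - tgt_a[1], tgt_b[0] - tgt_a[0])
    relative_angle = tgt_edge - ref_edge

    # Step 2070: relative cross-fragment scale (ratio of edge lengths).
    ref_len = math.hypot(ref_b[0] - ref_a[0], ref_b[1] - ref_a[1])
    tgt_len = math.hypot(tgt_b[0] - tgt_a[0], tgt_b[1] - tgt_a[1])
    relative_scale = tgt_len / ref_len

    # Step 2080: place the anchor point as a displacement from the first target point,
    # scaled by the relative scale and rotated so the target triangle is similar
    # to the reference triangle.
    ref_angle = math.atan2(query[1] - ref_a[1], query[0] - ref_a[0]) - ref_edge
    distance = math.hypot(query[0] - ref_a[0], query[1] - ref_a[1]) * relative_scale
    direction = tgt_edge + ref_angle
    anchor = (tgt_a[0] + distance * math.cos(direction),
              tgt_a[1] + distance * math.sin(direction))
    return anchor, relative_scale, relative_angle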
[000225] The determined anchor point 1410 is added to the AnchorPoint list 1401 of the working data 1400 at step 2080. Upon completion of step 2080, the anchor point determination method 2000 terminates at step 2099.
[000226] The texture sampling method 2100, implemented at step 1290, operates on a list of anchor points, such as the list of anchor points produced by the neighbourhood query method 1300, as previously described. The method 2100 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000227] The method 2100 begins at a determining step 2110. In step 2110 a set of sample space locations are determined. The sample space may for example be a square pixel grid.
[000228] The method 2100 continues from step 2110 to an anchor point check step 2120. Step 2120 checks whether there are more anchor points 1410 which have not yet been processed. If there are more anchor points to process (“Y” at step 2120), the method 2100 continues to a selecting step 2130. At step 2130, the next anchor point 1410 is selected.
[000229] The method 2100 continues from step 2130 to a sampling check step 2140. Step 2140 checks whether there are more sample space locations which have not yet been processed for the current anchor point. If so (“Y” at step 2140), the method 2100 continues to a mapping step 2150. Step 2150 maps the next sample space location to UV space (that is, to the co-ordinate system of texture image 803). The mapping is performed by transforming the sample space
location around the centre of the sample space according to the inverse of the scale 1415 and angle 1416 of the selected anchor point 1410, and translating the rotated and scaled sample space location so as to position the centre of the sample space at the UV co-ordinates 1412-1413 of the selected anchor point 1410.
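A sketch of the mapping at step 2150 follows; the centre convention and argument layout are assumptions made for illustration.

import math

def sample_to_uv(sample_xy, sample_centre, anchor_uv, anchor_scale, anchor_angle):
    """Step 2150: map a sample space location into UV space for one anchor point.

    The sample location is rotated and scaled about the sample space centre by
    the inverse of the anchor point's angle and scale, then translated so that
    the centre lands on the anchor point's UV co-ordinates.
    """
    dx = sample_xy[0] - sample_centre[0]
    dy = sample_xy[1] - sample_centre[1]
    c, s = math.cos(-anchor_angle), math.sin(-anchor_angle)
    rx = (dx * c - dy * s) / anchor_scale
    ry = (dx * s + dy * c) / anchor_scale
    return (anchor_uv[0] + rx, anchor_uv[1] + ry)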
[000230] The method 2100 continues from step 2150 to a mapping check step 2160. Step 2160 checks whether the next mapped sample space location lies within the current fragment.
Whether the next mapped sample space location lies within the current fragment is determined using the fragment ID 1411 of the selected anchor point 1410, for example by inspecting the value of the pixel of the fragment mask image 1104 at the mapped sample space location. If the value is equal to the fragment ID 1411 then the mapped sample space location is within the fragment and step 2160 returns "Y". In this case, the method 2100 continues from step 2160 to a sampling step 2170. At step 2170, the texture is sampled at the mapped sampling location. A variety of sampling methods can be used. Nearest neighbour sampling may be used, for example, in which the nearest pixel value of the texture image 803 is retrieved and written to the (unmapped) sample space location. Other sampling techniques such as bilinear sampling, bi-cubic sampling, or sinc sampling may also be applied, taking pixel values from each contributing fragment as necessary to form a single sample space value.
[000231] For the purposes of generating a geometry sub-image in step 1290, step 2170 of the method 2100 can be modified. At step 2170, instead of sampling a pixel value of a texture image 803, the UV co-ordinates of the pixel value, the scale 1415 and angle 1416 of the selected anchor point 1410, and the fragment ID 1411 are written into the geometry image at each pixel sample space location.
[000232] After the location has been sampled in step 2170, or if in step 2160 the mapped sample space location is not within the fragment ("N" at step 2160), the sampling method 2100 returns to the sampling check step 2140. The loop from step 2140 to step 2170 repeats for each sample space location. When step 2140 identifies no unprocessed sample space locations ("N" at step 2140), the method 2100 returns to the anchor point check step 2120. The step 2120 repeats until no more anchor points remain in the list of anchor points 1410. When no more anchor points remain to be processed ("N" at step 2120), the method 2100 terminates at step 2199.
[000233] In another implementation, a commercial software package or open source library implementation which performs the parameterisation step 340 does not provide a fragment ID mask image, and only provides a mesh data structure 800 including a parameterisation. If only the mesh data structure 800 and parameterisation are provided, the fragment ID mask image may be generated from the mesh data structure 800.
[000234] A first method of generating a fragment ID mask image is an image-based method, which uses the texture atlas data. Some pixels in a texture atlas contain texture information, and other pixels contain no information, having no mapping to any part of the surface of the mesh. Pixels containing texture information may be identified according to having a non-black colour, a non-transparent alpha channel value, or by some other means. The fragment ID mask image may be generated first by identifying the pixels occupied by texture information in any atlas, assigning a value of '1' to the corresponding pixels in the fragment ID mask image, and assigning a value of '0' to pixels occupied by no texture information in any atlas. The resulting bitmap is segmented according to connected regions of pixels containing the value '1', and each fragment is numbered from 1 up to the number of fragments.
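The image-based method amounts to thresholding the atlas into an occupancy bitmap and labelling its connected components, as in the sketch below. The occupancy test (a non-transparent alpha channel) and the use of scipy's connected-component labelling are assumptions for illustration only.

import numpy as np
from scipy import ndimage

def fragment_id_mask(texture_atlas_rgba):
    """Image-based fragment ID mask generation.

    texture_atlas_rgba: (H, W, 4) array; pixels with a non-transparent alpha
    are treated as occupied (one possible occupancy test among several).
    Returns the label image (0 = unoccupied, fragments numbered from 1)
    and the fragment count.
    """
    occupancy = texture_atlas_rgba[..., 3] > 0           # value '1' where texture exists
    labels, num_fragments = ndimage.label(occupancy)      # connected-component segmentation
    return labels, num_fragments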
[000235] A second method of generating a fragment ID mask image 1104 is step 1050 of the mesh augmentation method 1000, described above. In step 1050 an empty texture image has a triangle rasterised for each face in the mesh 200. Each triangle has co-ordinates corresponding to the UV co-ordinates of the face vertices, and a channel value that is the fragment ID of the face.
[000236] The first and second methods provide fragment ID mask image data used by the geometry sub-image creation step 1740, even if mask image information is not retained at parameterisation step 340.
[000237] In another embodiment, the geometry sub-image creation step 1740 uses an alternative method of creating a geometry sub-image. One alternative known method constructs a 2D UV-space mesh of each fragment, adding extra triangles to the outside of each fragment boundary edge, corresponding to triangles which are across the fragment boundary on the 3D mesh. The method can be applied to an augmented mesh data structure 1130 to find the fragment ID F and UV co-ordinates (u, v) of mesh faces adjacent to a fragment boundary. Each extra triangle, from a neighbouring fragment, has a shared edge with a triangle in the current fragment. The shared edge appears once in the 3D mesh representation, and twice in the UV space representation. A relative scale factor m for texture pixels within each extra triangle may be determined by the ratio of the length of the UV-space shared edge of the current fragment to the length of the UV-space shared edge of the fragment of the extra triangle. A relative rotation value θ for texture pixels within each extra triangle may be determined by taking the difference of a first angle of the UV-space shared edge of the current fragment and a second angle of the UV-space shared edge of the neighbouring fragment. Pixels inside the current fragment are assigned a relative rotation value θ of 0, and a relative scale factor m of 1.
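The per-edge scale and rotation values can be computed directly from the two UV-space copies of a shared edge, as sketched below. Edge endpoints are given as (u, v) pairs; this is an illustrative reading of the relations for m and θ, not the actual implementation.

import math

def edge_scale_and_rotation(current_edge, neighbour_edge):
    """Relative scale m and rotation theta across a shared fragment boundary edge.

    current_edge, neighbour_edge: ((u0, v0), (u1, v1)) endpoints of the same mesh
    edge as parameterised in the current fragment and in the neighbouring fragment.
    """
    (cu0, cv0), (cu1, cv1) = current_edge
    (nu0, nv0), (nu1, nv1) = neighbour_edge

    current_length = math.hypot(cu1 - cu0, cv1 - cv0)
    neighbour_length = math.hypot(nu1 - nu0, nv1 - nv0)
    m = current_length / neighbour_length          # ratio of UV-space edge lengths

    current_angle = math.atan2(cv1 - cv0, cu1 - cu0)
    neighbour_angle = math.atan2(nv1 - nv0, nu1 - nu0)
    theta = current_angle - neighbour_angle        # difference of UV-space edge angles
    return m, theta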
[000238] Accordingly, a geometry sub-image may be created comprising geometry from the current fragment, and geometry from mesh faces from neighbouring fragments which are directly adjacent to the current fragment. Although geometry is only obtained from immediately neighbouring faces, rather than an arbitrary region within neighbouring fragments, the method can still provide added geometry context which can deal with discontinuities in warp vector magnitude and direction.
[000239] The fragment splitting step 1810 may also use an alternative method of creating a texture sub-image. As described above, a similar alternative method to create a texture subimage constructs a 2D UV-space mesh of each fragment, adding extra triangles to the outside of each fragment boundary edge, corresponding to triangles which are across the fragment boundary on the 3D mesh. Texture pixel data in each extra triangle may be determined by interpolating between the UV co-ordinates stored at each vertex of the extra triangle.
[000240] The resulting texture sub-image reduces unoccupied texture atlas pixels, and can reduce artifacts caused by applying warps to the texture data.
[000241] Using the generated warp atlases, texture atlases can be generated by warping in which each component fragment is aligned to a common best view. As described above, the texture atlases can be combined with knowledge of camera poses, light-source positions and surface normal properties to generate BRDF information for each (u,v) position in the texture atlas, and through parametrization, to each position on the generated mesh 230 representing the object.
[000242] The generated information, mesh 230, BRDF and diffuse colour, can be combined with user-entered parameters such as a virtual camera pose and virtual lighting to render realistic views of the object under many different conditions.
[000243] Fig. 22 shows a method 2200 of rendering a graphical object using the methods described. The method 2200 is typically implemented as one or more modules of the application 933, executed under control of the processor 905 and stored in the memory 906.
[000244] The method 2200 starts at a capturing step 2210. In execution of step 2210, images of the object 110 are captured from the viewpoints 120, 130 and 140 using an image capture device such as the camera 995. The images are transmitted to the module 901.
[000245] The method 2200 continues from step 2210 to a generating step 2220. The step 2220 implements the method 300 using the captured images and outputs the 3D mesh 230 having data structure 800 representing the three-dimensional shape of the object 110, the set of texture atlases 260 comprising the colour information from the set of captured images mapped into UV space 250, and the set of camera poses, one for each camera position 120, 130, and 140.
[000246] The method 2200 continues from step 2220 to an aligning step 2230. The step 2230 implements the method 500. The method 500 operates to align views of the graphical object 110 at step 540, to warp texture atlases at step 560 and thereby generate an accurate BRDF estimate at step 580.
[000247] The method 2200 continues from step 2230 to a rendering step 2240. The step 2240 operates to render the graphical object using the estimated BRDF values.
[000248] Each pixel in such a rendered image can be generated by identifying a sub-region on the mesh corresponding to the pixel, determining a diffuse colour for that sub-region by reference to the local mesh parametrization and the associated BRDF in texture space, determining vectors towards the virtual camera, towards any virtual light sources, and a surface normal for the mesh sub-region, and calculating a visible colour for that pixel by calculating a value for the BRDF associated with the calculated viewing and lighting angles, and in turn a (red, green, blue) tuple representing the rendered colour of that pixel.
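The per-pixel rendering loop described above can be sketched as follows. The BRDF evaluation, mesh lookup, and shading model are deliberately abstract, and the callable arguments are assumptions; the sketch only mirrors the sequence of operations in the description.

import numpy as np

def render_pixel(pixel, locate_on_mesh, diffuse_lookup, brdf_lookup,
                 camera_position, light_positions, light_colours):
    """Render one pixel from the estimated BRDF data.

    locate_on_mesh(pixel) -> (surface_point, surface_normal, uv)
    diffuse_lookup(uv)    -> diffuse (r, g, b)
    brdf_lookup(uv)       -> callable brdf(view_dir, light_dir, normal) -> (r, g, b) factor
    """
    surface_point, normal, uv = locate_on_mesh(pixel)
    diffuse = np.asarray(diffuse_lookup(uv), dtype=float)
    brdf = brdf_lookup(uv)

    view_dir = camera_position - surface_point
    view_dir = view_dir / np.linalg.norm(view_dir)

    colour = np.zeros(3)
    for light_pos, light_colour in zip(light_positions, light_colours):
        light = np.asarray(light_colour, dtype=float)
        light_dir = light_pos - surface_point
        light_dir = light_dir / np.linalg.norm(light_dir)
        factor = np.asarray(brdf(view_dir, light_dir, normal), dtype=float)
        colour += light * diffuse * factor * max(float(np.dot(normal, light_dir)), 0.0)
    return np.clip(colour, 0.0, 1.0)   # (red, green, blue) tuple for the rendered pixel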
[000249] When all pixels have been rendered, the result will be a relatively realistic view of the object, and the user can modify lighting and camera pose parameters to produce further realistic views of the object.
[000250] The method 2200 ends at 2299 after rendering step 2240.
[000251] In an example use of the arrangements described, BRDFs of surface materials on the object 110 may be estimated by capturing multiple photos of the object from different camera positions. Firstly, a user may capture images (for example photographs) of the object 110. Care may be taken to capture images that show every part of the exterior surface of the object 110, preferably in a plurality of the captured images. The BRDF of the surface material cannot be estimated for any part of the object not captured in the images. For each point on the surface of the object 110, generally the more camera positions that capture that point, the more accurate the resulting BRDF estimation will be. Users may take this into account when planning how many images to capture and from which camera positions. For objects with highly reflective surfaces, images may be captured in a high-dynamic range mode, to avoid saturation of specular highlights.
[000252] After capturing suitable images, the object estimation method 300 can be used to generate a 3D mesh data structure representing the object 110 and the set of texture atlases 260 representing the surface texture of the object 110 visible in each image. Then cross-view alignment method 500 may be used to estimate the BRDF of the surface material of the object 110 at each point visible in the photos. For the alignment step 540, either of the implementations described above may be used to align the set of texture atlases 260, to bring the colour samples as observed from the different camera positions substantially into alignment. The step 540 allows the now-aligned colour samples to be exported in export step 570, such that the aligned colour samples may be used for accurate BRDF estimation in BRDF estimation step 580.
[000253] The above-described implementation of the method 500 results in a 3D mesh data structure representing the object 110, and estimated BRDFs of the surface material at each point on the surface of the object that has been captured by the set of photos. The methods described thus advantageously allow accurate BRDF estimation from photos taken from unconstrained camera positions.
[000254] In a further example use of the arrangements described, BRDFs of surface materials on a 3D object 110 may be estimated by processing video sequences captured by a video camera in motion relative to the object 110. Frames of the video sequence may be treated as images of the object 110 from different camera positions, and then the object estimation method 300 and cross-view alignment method 500 may be used to estimate BRDFs of surface materials as described above.
[000255] In yet another example use of the methods described, BRDFs of surface materials on the 3D object 110 may be estimated by processing pre-existing digital images of the object 110, for example photos taken by tourists of a tourist attraction and posted on the Internet. Typically the images will be taken from different camera positions, and so the images may be used in the object estimation method 300 and cross-view alignment method 500, although additional colour calibration steps may need to be taken to correct for colour reproduction inconsistencies caused by lighting differences and camera colour sensitivity differences.
[000256] The arrangements described are applicable to the computer and data processing industries and particularly for the image processing and graphics rendering industries.
[000257] The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
[000258] In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word comprising, such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (20)

1. A method of aligning views of a three-dimensional graphical object, the method comprising:
receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments;
forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment;
forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment;
aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment;
determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.
2. The method according to claim 1, wherein the second view of the second fragment is a reference view for the second fragment.
3. The method according to claim 2, further comprising selecting, from the determined warp map, warp vectors corresponding to the second fragment for placement in a warp atlas associated with the first view of the graphical object.
4. The method according to claim 1, wherein determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment further comprises regularizing warp vectors of the determined warp map between pixels of the first view of the second fragment and the second view of the second fragment.
5. The method according to claim 4, wherein the regularization comprises convolving the warp vectors using a Gaussian blur kernel with an RMS width associated with a width of expansion of the first and second views.
6. The method according to claim 1, further comprising warping a texture atlas associated with the second fragment using the determined warp map.
7. The method according to claim 1, further comprising determining reflectance properties of the graphical object based on the aligned first and second views of the graphical object using a predetermined model relating the aligned views of the graphical object and a lighting angle.
8. The method according to claim 1, further comprising rendering the graphical object based on the alignment.
9. The method according to claim 1, wherein determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises determining relative displacement, orientation and scale in the texture data between the first fragment and the second fragment.
10. The method according to claim 1, wherein determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises transferring warp vectors from the warp map of the first fragment to a boundary of the second fragment based on the geometrical arrangement of the fragments in the texture data.
11. The method according to claim 1, wherein determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment comprises transferring warp vectors from the warp map of the first fragment to a boundary of the second fragment and transforming the warp vectors based upon a relative displacement, orientation and scale in the texture data between the first fragment and the second fragment.
12. The method according to claim 1, wherein the first expanded view is formed based on a margin of the first fragment associated with the second fragment.
13. The method according to claim 1, wherein the first expanded view and the second expanded view are aligned using a multi-modal alignment method.
14. The method according to claim 13, wherein the alignment method estimates a warp map using covariance-based mutual information for pixel locations within the overlapping region between the first and second views of the first fragment.
15. The method according to claim 1, further comprising determining geometric information for each of the fragments using a parametric representation of the graphical object.
16. The method according to claim 1, further comprising:
determining a pixel position in a parametric representation corresponding to a selected position on the graphical object;
determining a plurality of pixel values for the determined pixel position using the aligned first and second views of the graphical object and corresponding viewpoints; and determining reflectance properties of the graphical object using a predetermined model relating the determined plurality of pixel values to a plurality of viewpoints and a lighting angle.
17. The method according to claim 16, wherein the pixel position corresponding to the selected position on the graphical object is determined using a mapping from the threedimensional structure of the graphical object to a parametric representation of the graphical object.
18. A non-transitory computer readable storage medium storing program instructions aligning views of a three-dimensional graphical object, the program comprising:
code for receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments;
code for forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment;
code for forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment;
code for aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment;
code for determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and code for aligning the first and second views of the graphical object based on the determined warp map.
19. Apparatus, comprising:
a processor; and a memory device storing a software program for directing the processor to perform a method aligning views of a three-dimensional graphical object, the method comprising:
receiving a plurality of views of the graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments;
forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment;
forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment;
aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment;
determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and aligning the first and second views of the graphical object based on the determined warp map.
20. A system comprising:
a processor; and a memory device storing a software program for directing the processor to perform a method comprising the steps of:
receiving a plurality of views of a graphical object, the graphical object being partitioned into a plurality of fragments, wherein each view comprises texture data for at least a first fragment and a second fragment from the plurality of fragments;
forming a first expanded view for a first fragment by expanding a first view of the first fragment using texture data of the first view of a second fragment;
forming a second expanded view for the first fragment by expanding a second view of the first fragment using texture data of the second view of the second fragment;
aligning the first expanded view and the second expanded view to determine a warp map between pixels of the first view of the first fragment and pixels of the second view of the first fragment;
determining a warp map between pixels of the first view of the second fragment and the second view of the second fragment using at least one warp vector of the warp map determined for the first fragment modified based on geometrical arrangement of the fragments; and
aligning the first and second views of the graphical object based on the determined warp map.
Canon Kabushiki Kaisha
AU2018203328A 2018-05-11 2018-05-11 System and method for aligning views of a graphical object Abandoned AU2018203328A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2018203328A AU2018203328A1 (en) 2018-05-11 2018-05-11 System and method for aligning views of a graphical object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2018203328A AU2018203328A1 (en) 2018-05-11 2018-05-11 System and method for aligning views of a graphical object

Publications (1)

Publication Number Publication Date
AU2018203328A1 true AU2018203328A1 (en) 2019-11-28

Family

ID=68618217

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2018203328A Abandoned AU2018203328A1 (en) 2018-05-11 2018-05-11 System and method for aligning views of a graphical object

Country Status (1)

Country Link
AU (1) AU2018203328A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11593959B1 (en) * 2022-09-30 2023-02-28 Illuscio, Inc. Systems and methods for digitally representing a scene with multi-faceted primitives
US11721034B1 (en) * 2022-09-30 2023-08-08 Illuscio, Inc. Systems and methods for digitally representing a scene with multi-faceted primitives
CN116433794A (en) * 2023-06-14 2023-07-14 广东云湃科技有限责任公司 Processing method and system for three-dimensional model in CAE software
CN116433794B (en) * 2023-06-14 2023-09-08 广东云湃科技有限责任公司 Processing method and system for three-dimensional model in CAE software
CN117522900A (en) * 2023-12-13 2024-02-06 南京理工大学泰州科技学院 Remote sensing image analysis method based on computer image processing
CN117522900B (en) * 2023-12-13 2024-05-17 南京理工大学泰州科技学院 Remote sensing image analysis method based on computer image processing

Similar Documents

Publication Publication Date Title
CN110675314B (en) Image processing method, image processing apparatus, three-dimensional object modeling method, three-dimensional object modeling apparatus, image processing apparatus, and medium
CN110490916B (en) Three-dimensional object modeling method and apparatus, image processing device, and medium
CN105374065B (en) Relightable textures for use in rendering images
US7574017B2 (en) Statistically comparing and matching plural sets of digital data
US7046840B2 (en) 3-D reconstruction engine
US20150325044A1 (en) Systems and methods for three-dimensional model texturing
CN105303599B (en) Reilluminable texture for use in rendering images
US20190188871A1 (en) Alignment of captured images by fusing colour and geometrical information
US8289318B1 (en) Determining three-dimensional shape characteristics in a two-dimensional image
US20090074238A1 (en) Method and System for Determining Poses of Objects from Range Images Using Adaptive Sampling of Pose Spaces
CN111862302B (en) Image processing method, image processing apparatus, object modeling method, object modeling apparatus, image processing apparatus, object modeling apparatus, and medium
US10853990B2 (en) System and method for processing a graphic object
AU2018203328A1 (en) System and method for aligning views of a graphical object
CN112055192B (en) Image processing method, image processing apparatus, electronic device, and storage medium
JP2003115042A (en) Method for evaluating three-dimensional shape model and method and device for generating the model
Pagés et al. Seamless, Static Multi‐Texturing of 3D Meshes
Pintus et al. Techniques for seamless color registration and mapping on dense 3D models
Meng et al. Active rectification of curved document images using structured beams
Kasper et al. Multiple point light estimation from low-quality 3D reconstructions
Jain et al. Panorama construction from multi-view cameras in outdoor scenes
AU2019201825A1 (en) Multi-scale alignment pattern
AU2018203329A1 (en) System and method for determining reflectance properties
Do et al. On multi-view texture mapping of indoor environments using Kinect depth sensors
AU2018208713A1 (en) System and method for calibrating a projection system
Bastian et al. Computing surface-based photo-consistency on graphics hardware

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application