GB2527582A - Three dimensional content creation system and method - Google Patents

Three dimensional content creation system and method

Info

Publication number
GB2527582A
GB2527582A GB1411416.9A GB201411416A
Authority
GB
United Kingdom
Prior art keywords
image
images
primary object
content creation
creation system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1411416.9A
Other versions
GB201411416D0 (en)
GB2527582B (en)
Inventor
Vincent Naveen Morris
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MORRIS CONSULTANCY SERVICES Ltd
Original Assignee
MORRIS CONSULTANCY SERVICES Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MORRIS CONSULTANCY SERVICES Ltd filed Critical MORRIS CONSULTANCY SERVICES Ltd
Priority to GB1411416.9A
Publication of GB201411416D0
Publication of GB2527582A
Application granted
Publication of GB2527582B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0641Shopping interfaces
    • G06Q30/0643Graphical representation of items or shoppers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three dimensional content creation system is described which comprises a database and image generation means. The database is configured to store: a plurality of captured images of a primary object; and a display attribute, such as dimensions, a description or sound, associated with the primary object. The captured images are processed to remove background data. The image generation means is configured to receive from the database at least a subset of the captured images, and to generate from the received images a set of target sequenced images for rendering a three dimensional representation of the primary object at a target image display means. A user may interact with the 3D representation to zoom, rotate etc. the object. In an embodiment the system is used in an on-line shop to enable purchasers to manipulate and inspect a vendor's goods.

Description

THREE DIMENSIONAL CONTENT CREATION SYSTEM AND METHOD
Background of the Invention
Methods are known for creating three dimensional models of real objects.
However, these methods require the use of a range of devices such as a 2D-3D laser scanner, a projector and a camera. Methods involving these devices are complicated, time consuming and expensive to implement.
Summary of the Invention
According to an aspect of the invention, there is provided a three dimensional content creation system comprising: a database, configured to store, based on a plurality of captured images of a primary object and a display attribute associated with the primary object, a set of images representative of a three dimensional representation of the primary object; and image generation means configured to receive from the database at least a subset of the set of images, and to generate from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at a target image display means.
By storing a set of captured images representative of a three dimensional representation of a primary object, based on not only a plurality of captured images of the primary object but also on a display attribute associated with the primary object, fully interactive three dimensional models can be rendered at a target image display means. Each of these "target" three dimensional models has an appearance represented by a set of target sequenced images. A user can interact with these models at the target image display means and, in response to this interaction, the image generation means is configured to re-generate from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at the target image display means. For example, in the context of an online shop (see Figure 13), the present invention will enable a customer to manipulate a product (e.g. a camera) in order to view all parts of the product; they will be able to rotate the product, enlarge it, and click on specific highlighted points (hot-points) on the product in order to adjust/activate individual elements of the product, and the image generation means will deliver the appropriate three dimensional content to the target image display means based on these actions.
Accordingly, the image generation means is able to respond to requests from a user by delivering to the target image display means, and by manipulating, re-generating and re-delivering to the target image display means, appropriate three dimensional content representative of the primary object. By capturing the display attribute associated with the primary object, the present invention is able to generate three dimensional models which are capable of mimicking a user's interaction with a real object. Thus, in contrast to having photographs of products in online shops, the present invention makes it possible to embed 3D models of products into online shops. Shoppers can rotate products 360° and inspect products in hi-resolution from all angles.
The display attribute may comprise one or both of a numerical value and a text descriptor. For example, a numerical value may be used to specify a specific value relating to a specific physical dimension of the primary object such as "length", "width", "height", "depth", "weight" and "volume". Text descriptors may be used to indicate a specific dimension (e.g. "length", "width", "height", "depth", "weight" and "volume") of the primary object.
An image of a secondary object may be generated by the image generation means, based on the numerical value, for rendering at the target image display means in proximity to the three dimensional representation of the primary object.
For example, to give a customer of an online shop a sense of scale of the primary object, a commonly recognisable object (e.g. a coin, a fruit, a human, a car, etc.) may be superimposed beside the three dimensional representation of the primary object.
The image generation means may decide on the kind of secondary object to superimpose depending on the size of the primary object. The three dimensional content creation system may have requested from a user details of the dimensions of the primary object in advance of generating the image of the secondary object. Thus, by generating an image of a secondary object in proximity to the three dimensional representation of the primary object, a customer can understand the scale of the primary object (i.e. the dimensional proportionality of the secondary object with respect to the primary object).
The image generation means may generate, based on the numerical value, an image of a deformable object for rendering at the target image display means and for representing the weight of the primary object with respect to the weight of the secondary object. For example, to give a customer of an online shop a sense of the weight of the primary object, a commonly recognisable object may be superimposed beside the three dimensional representation of the primary object. The image of the deformable object may be generated such that a first area of the deformable object for displaying the three dimensional representation of the primary object is deformed by an amount represented by the numerical value and a second area of the deformable object for displaying the image of the secondary object is deformed by an amount dependent on the numerical value associated with the primary object.
To give the customer a clearer indication of the weight of the primary object, the three dimensional representation of the primary object and the image of the secondary object can be displayed resting on the deformable object; whereby the deformable object is deformed in a first area for displaying the primary object by an amount proportional to the weight of the primary object and the deformable object is deformed in a second area for displaying the secondary object by an amount dependent on the weight of the primary object. The three dimensional content creation system may have requested a user to provide details of the dimensions (e.g. weight) of the primary object in advance of displaying the primary object and secondary object on the deformable object.
The display attribute may comprise an audio signal. The audio signal may be representative of audio generated when a user interacts with the primary object. The image generation means may be configured to playback the audio when a user interacts with the three dimensional representation of the primary object at the target image display means. For example, the three dimensional content creation system can record sound emanating from the primary object or a verbal narrative generated by a user. When a user (e.g. a customer of an online shop) interacts virtually with the three dimensional representation of the primary object, the sounds are played back. The image generation means may be configured to remove noise from the audio signal. For example, due to ambient conditions, various unwanted noise such as laptop fan noise or hard disk noise may be present in the audio signal. The image generation means may be able to analyse the audio signal in order to identify and remove constant ambient noise and periodic ambient noise (e.g. hard disk clicks, etc.) to produce a relatively noise free audio signal.
The set of images may comprise a set of sequenced horizontal-rotation images captured while the primary object is rotating about a horizontal axis and a set of sequenced vertical-rotation images captured while the primary object is rotating about a vertical axis. The database may be operable to store the set of sequenced vertical-rotation images in association with an image from the set of sequenced horizontal-rotation images. The set of images may further comprise a set of images captured while a movable part of the primary object is activated, and the database is operable to store, in association with an image from the set of sequenced horizontal-rotation images or from the set of sequenced vertical-rotation images, the set of images captured while the movable part of the primary object is activated.
By storing the images in the above described manner, the three dimensional content creation system is able to deliver content seamlessly and in the correct order to the target image display means in response to a user's selected action (e.g. drag/rotate, left/right, up/down, interaction, etc.) with respect to the three dimensional representation of the primary object.
A capturing means may be provided for capturing the plurality of images of the primary object and the display attribute associated with the primary object. For example, the capturing means could comprise a camera, a web cam, a mobile telephone or other personal device.
A software application may provide the capturing means, the software application being operable to request a user to display the primary object in front of a camera so that the capturing means can capture a plurality of images of the primary object from a plurality of perspectives. For example, the software application can take control of a webcam and can request a user (e.g. via the target image display means) to display the primary object in a variety of positions in front of the web cam so that the web cam can capture images from various angles or perspectives.
The software application may be operable to capture the display attribute by requesting the user to activate an interactive feature on the primary object in front of the camera so that the capturing means can capture a plurality of images while the interactive feature is active. For example, the software application prompts a user to activate some interactive feature(s) (e.g. button, dial, etc.) on the item and then repeats the process of capturing images.
As part of the process of capturing interactivity, the software application is operable to capture the display attribute by recording sound (e.g. using the web cam's microphone) emanating from the primary object and/or a verbal narrative by the user.
The software application may be operable to capture the display attribute by requesting the user to enter a numerical value representative of a physical dimension of the primary object. For example, the software application can request a user, via the target image display means, to indicate one or more dimensions of the primary object.
The three dimensional content creation system may further comprise image processing means for performing a background image removal process on the plurality of captured images to identify and remove the background from each of the captured images. The background image removal process can comprise a green screen detection process for identifying a pixel as part of the background by extracting a red, green and blue (RGB) value from a pixel of a captured image and identifying the pixel as being part of the background if the RGB value is approximately equal to a predetermined value. This screen detection process is able to identify the true background from the foreground object. A simple comparison of each pixel with the green colour will not work well as the green screen may have imperfections due to ambient lighting, folds, creases and non-uniform stretch.
Alternatively, the background image removal process may comprise extracting a red, green and blue (RGB) value from a pixel from a first captured image, and identifying the pixel of the first captured image as being part of the background if its RGB value is approximately equal to an RGB value of a corresponding pixel of a second captured image. In this process, a set of images is analysed with the foreground object in varying rotational positions across the images. As a consequence, the foreground will change across the images. The background can then be extracted by separating the fixed portions of the images from the changing portions.
Alternatively, the background image removal process may comprise dividing a captured image into a predetermined number of grid cells, determining the blurriness of each grid cell, analysing the spatial spread of blurriness across the entire image and identifying a background image as a set of grid cells that has the most amount of blurriness. In this process, the foreground object will have a sharper focus as compared to the background. Therefore, the background can be separated from the foreground by analysing the sharpness value present in various portions of the image.
The background removal process may further comprise an edge detection process comprising analysing surrounding areas of each pixel of a captured image to identify changes in RGB values and identifying a pixel as part of an edge if there is a predetermined change in pixel values from one area to another. The aim of this process is to identify all pixels that form part of edges in the image.
The image generation means may be configured to optimise a captured image by selectively brightening and darkening certain pixels of the image.
It will be appreciated from the above that the database and the image generation means may be implemented at a server, wherein the image generation means is responsive to a request to extract and generate the target sequenced images, the request indicating an image manipulation performed at the target image display means, the image generation means selecting the images for extracting and generating the appropriate corresponding target sequenced images based on the indication of the image manipulation provided by the request. The manipulation may be selected from a group of operations comprising: rotation, pan, tilt, zoom in, zoom out, and point and click. A software application may provide the target image display means, the software application being operable to request the server to provide the set of target sequenced images in relation to the primary object; to receive from the server the requested set of target sequenced images; and to render the three dimensional representation of the primary object.
According to another aspect of the present invention, there is provided a three dimensional content creation method, comprising the steps of: storing, in a database, based on a plurality of captured images of a primary object and a display attribute associated with the primary object, a set of images representative of a three dimensional representation of the primary object; receiving from the database at least a subset of the set of images; and generating from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at a target image display means.
According to another aspect of the present invention, there is provided an image display apparatus configured to receive and display the set of target sequenced images on a display screen.
Further aspects of the present invention include a computer program and a storage medium for storing the computer program.
Brief Description of the Drawings
Embodiments of the present invention will now be described with reference to the following drawings, in which: Figure 1 schematically illustrates a three dimensional content creation system according to an embodiment of the invention; Figure 2 schematically illustrates an example method for a user to interactively view 3D graphics over the Internet; Figure 3 shows in block diagram form an exemplary system for creating and manipulating image objects; Figure 4 shows in flowchart form an example of the processing steps for creating a three dimensional representation of a real object; Figure 5 shows in flowchart form another example of the processing steps for creating a three dimensional representation of a real object; Figures 6a, 6b and 6c show steps involved in a green screen detection algorithm; Figures 7a and 7b show steps involved in a Constant Vs. Changing image algorithm; Figures 8a, 8b and 8c show steps involved in an edge detection algorithm; Figures 9a and 9b show steps involved in a focus detection algorithm; Figure 9c shows steps involved in a sharpness detection algorithm; Figures 10a, 10b, 10c, 10d, 10e, 10f and 10g show steps involved in an image optimisation algorithm; Figures 11a and 11b show steps involved in a noise removal algorithm; Figure 12 shows steps involved in a sequencing algorithm; and Figure 13 shows an example three dimensional representation of a camera displayed in an online shop.
Description of the Example Embodiments
Referring to Figure 1, an example architecture according to an embodiment of the present invention is schematically illustrated. In Figure 1, a three dimensional content creation server 101 and a target image display means/image display apparatus 102 (for example, a client such as a desktop, tablet or mobile device) are provided. The three dimensional content creation server 101 comprises storage means (e.g. a database 103) for storing, based on a plurality of captured images (e.g. still images or video) of a primary object and a display attribute (for example, information relating to one or more of audio, visual and physical features) associated with the primary object, a set of images representative of a three dimensional representation of the primary object; and image generation means 104 for receiving from the database 103 at least a subset of the set of images, and for generating from the received images a set of target sequenced images (e.g. a sequence of still images) for rendering the three dimensional representation of the primary object at the target image display means 102.
In operation, the target image display means 102 issues to the three dimensional content creation server 101 a request 106 for the three dimensional content creation server 101 to provide a set of target sequenced images suitable for rendering, at the target image display means 102, a particular three dimensional view of the primary object. In response to the request 106, the image generation means 104 is operable to issue a request 108 to the database 103 to provide at least a subset of the set of images for an identified user command - the subset being those images and associated display attribute required to generate the set of target sequenced images for the requesting target image display means 102.
In response to the request 108, the database 103 provides to the image generation means 104 the requested subset of the set of images for the indicated user command in a message 109. The image generation means 104 then converts the images provided by the database 103 into the corresponding target sequenced images (three dimensional graphics) required by the target image display means 102. The image generation means 104 may generate, based on the display attribute (e.g. a numerical value), an image of a secondary object for rendering at the target image display means 102 in proximity to the three dimensional representation of the primary object. The image generation means 104 may also generate, based on the numerical value, an image of a deformable object for rendering at the target image display means 102, wherein the image of the deformable object is generated such that a first area of the deformable object which is for displaying the three dimensional representation of the primary object is deformed by an amount represented by the numerical value and a second area of the deformable object which is for displaying the image of the secondary object is deformed by an amount dependent on the numerical value associated with the primary object. The display attribute may also comprise an audio signal. The image generation means 104 then formats the generated target sequenced images into a format (e.g. JPEG) appropriate for the image display means 102, tags the display attribute data (e.g. one or more of the image of the secondary object, the image of the deformable object and the audio signal) to the resulting formatted target sequenced images, and then sends the resulting formatted target sequenced images to the target image display means via the message 107. The target image display means 102 is then able to display the formatted target sequenced images, including reproducing the display attribute data (e.g. auxiliary display data).
In an alternative embodiment, the image generation means 104 resides at the client-side. Of course, it is to be understood that some aspects of the image generation means can reside server-side, whereas other aspects of the image generation means can reside client-side. Other than the highlighted difference, the architecture in this alternative embodiment will operate in a similar manner to that described with respect to Figure 1.
Figure 2 illustrates schematically an example method for a user to interactively view 3D graphics over the Internet. User 120 has access to the Internet 121, via a client device which has in memory standard web browser software 122 to access the World Wide Web. User 120 requests a 3D image over the Internet from the three dimensional content creation server 101 using a Javascript HTML5 request 124. Upon receipt of the request 124, the server 101 retrieves the image data from the database and, using the image generation means 104, renders the requested image in standard bitmap formats, typically in the form of a JPEG file ("JPG" or "JPEG") or a PNG file, which it delivers to the user's browser 122 in the form of a Javascript HTML5 response 126. The user's web browser can display the JPEG image via an HTML5 canvas element.
The invention provides its own 3D rendering application that runs on the three dimensional content creation server 101 and responds to commands from the user's web browser to manipulate, re-render and deliver new 3D rendered scenes back to the user's browser 122.
The server used in the present invention can maintain a pool of scenes to be rendered. The client in the present invention, which may run on a web browser or a dedicated application outside of a web browser environment, primarily takes user input to manipulate the scene. Examples of such manipulation are: (i) rotate the scene or object; (ii) zoom into a scene or object and zoom out from a scene or object; and (iii) change a material property of an object in the scene (e.g. by clicking on specific highlighted points on the object, which causes the object to animate itself to display one or more features). The manipulation of the scene can be sent to the server 101 through an established client-server communication channel. The server in turn applies the requested scene manipulations to the scene, and returns to the client, via the client-server communication channel, a newly rendered image. The rendered image is typically in a common web browser-compatible format such as JPEG or PNG.
Any number of protocols may be used to implement the client-server communication channel. The following are examples. In a standard web browser environment, AJAX (Asynchronous Javascript and XML) may be used by the client to send requests and receive the resulting renders. Or, in a standard web browser environment, a simple HTTP request may be used. In a standard web browser environment using an Adobe Flash plug-in, an XML Socket connection may be used. In a dedicated non-web browser application, any number of protocols supported by the underlying operating system may be used.
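The patent leaves the transport details open, so purely as an illustration the short Python sketch below sends a hypothetical scene-manipulation request over plain HTTP and saves the rendered image that comes back. The endpoint URL, field names and command vocabulary are assumptions rather than anything specified above, and the third-party requests library is used only for brevity.

    import requests  # third-party HTTP client, used here only for illustration

    SERVER_URL = "http://example.com/render"  # hypothetical endpoint; not specified by the patent

    def request_rendered_view(scene_id, command, value):
        """Send one scene-manipulation command and save the returned JPEG/PNG."""
        payload = {"scene": scene_id, "command": command, "value": value}
        response = requests.post(SERVER_URL, json=payload, timeout=10)
        response.raise_for_status()
        with open("rendered_view.jpg", "wb") as handle:
            handle.write(response.content)  # the server replies with a browser-compatible image

    if __name__ == "__main__":
        request_rendered_view("camera-001", "rotate", 30)  # e.g. rotate the scene by 30 degrees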
Reference is now made to Figure 3, which shows in block diagram form an exemplary system for creating and manipulating image objects. The system is indicated generally by reference 130 and comprises one or more client service providers 140, a client process server 142, a client network server 144, and one or more client customer interfaces 146. The client service providers 140, for example, client computers or client sites, connect to the client process server 142 via the Internet 121. Similarly, the client customers 146 can access the client network server 144 via the Internet 121. While described in the context of the Internet 121, the system 130 may be implemented using other network architectures or types, for example, a wide area network (WAN) or a local area network (LAN) or a private network.
The client service provider 140 includes apparatus, for example, a camera (e.g. webcam), a microphone and a computer, for capturing or generating still or video images of an object and for capturing or generating display attributes (e.g. audio, visual or physical information) associated with the object. The computer comprises a user interface or UI for accessing the client process server 142. The UI may comprise a browser based software module for an Internet implementation.
The client service provider 140 uses the UI to communicate with the client process server 142 in order to provide information (e.g. numerical values and/or text descriptors) related to the display attributes to the client process server 142. For example, the UI can be used by the client service provider 140 to receive requests from the client process server 142 to provide information to the client process server 142 relating to the physical attributes of the object, such as the size and/or weight of the object. The client process server 142 utilizes the Internet 121 (or other communication network or channel) to connect with the client service provider 140. The client process server 142 executes a computer program or module to obtain the still images and the associated display attributes from the client service provider 140, for example, using a JPEG file transfer protocol or similar mechanism. Instead of receiving the images from the client service provider 140, the client process server 142 may generate the images itself. The client process server 142 may include multiple input ports or interfaces to allow multiple clients or client service providers 140 to connect to the client process server 142 at the same time.
The client process server 142 comprises a computer or computers comprising a database 150 for storing, based on a plurality of images of a primary object and a display attribute associated with the primary object, a set of images representative of a three dimensional representation of the primary object. The client process server 142 executes one or more computer programs or software modules or code components (indicated generally by block 100 in FIG. 1), which, as will be described in more detail below, provide an image generation function for receiving from the database 150 at least a subset of the set of images, and for generating from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at a target image display means (e.g. client customer 146). The converted files or image data are transferred to the client network server 144.
The client customers 146, indicated individually by references 146a, 146b, 146c and 146d in FIG. 3, access the images, e.g. image objects, on the client network server 144 via the Internet 121. As depicted in FIG. 3, the client network server 144 may include a direct access interface module 148. The direct access interface module 148 allows clients, i.e. via computers at the client service provider 140, to directly access the client network server 144, for example, via a dedicated communication link or channel. The client customer 146 comprises a computer with a user interface or UI for accessing the client network server 144. The UI may comprise a browser based software module for an Internet implementation. The client customer 146 uses the UI to view, display or otherwise manipulate the image data or objects.
For the example depicted in FIG. 3, the client process server 142 and the client network server 144 are shown as separate machines or modules. According to another example, the client process server 142 and the client network server 144 may be combined on a single machine, i.e. a single computing apparatus.
Reference is next made to Figure 4, which shows in flowchart form an example of the processing steps for creating a three dimensional representation of a real object. The process is indicated generally by reference 200.
As indicated by step 201, the first step comprises a user (e.g. a seller of a merchandise item) instructing the three dimensional content creation system to start creating a three dimensional model of an object (e.g. a merchandise item). In response, the client process server 142 executes a computer program or software module to instruct the client to create the still images, for example, using a camera such as a webcam or other known image capture devices, as indicated by step 202.
The instruction can include instructions for a user to display the merchandise object in front of the webcam. In step 203, the seller holds the item in front of the camera and slowly rotates it. The system captures images (i.e. photographs) of various perspectives of the item as indicated in step 204. In step 205, the seller may be prompted by the client process server 142 to manipulate the merchandise item (e.g. a camera) so that interactive/adjustable features (e.g. a camera lens) are adjusted/exposed (e.g. by pressing a button) in front of the webcam, and the system then captures images of various interactive perspectives of the item (step 206). As part of capturing interactivity, the system can record sound emanated from the merchandise item or a verbal narrative by a human. The system can also capture information about one or more physical dimensions (weight, size, height, depth, volume) of the object by requesting a user to enter one or more numerical values.
As indicated in step 207, the client process server 142 executes a computer program or software module for removing background images from the captured images of the merchandise item (foreground images). The client process server 142 then converts the captured images into multiple three dimensional models, e.g. one for each interactive perspective (step 208). The three dimensional models are then stored in the database (e.g. database 103 or 150) (step 209). According to an embodiment of the invention, display attribute data (e.g. information relating to audio, visual and/or physical attributes of the object) is tagged along with the captured images at upload to facilitate an improved "server side" stills to interactive image conversion and to create a more realistic/responsive experience for the user at the target image display means 102.
As indicated in step 210, in response to a request from a user (e.g. client customer 146), the system delivers three dimensional content to the user according to the user's selected action (e.g. drag to rotate -left/right/up/down, auto rotate -left/right/up/down, zoom -in/out from any rotated angle, interact -click on specific highlighted point(s) on the item, causing the item to animate itself to display one or more features).
To give a user (e.g. buyer) a sense of scale, step 210 comprises superimposing a commonly recognisable object beside the 3D merchandise item - e.g. a coin, a fruit, a human, a car, etc. The system will decide what kind of object to superimpose depending on the size of the 3D merchandise item. Previously, the system will have requested information from the user regarding the dimensions of the 3D merchandise item.
To give a user (e.g. buyer) a sense of weight, step 210 comprises superimposing a commonly recognisable object beside the 3D merchandise item, shown as resting on a tensile/elastic membrane/member. The weight of the superimposed object will pull the membrane down by a certain amount. The 3D merchandise item will also be resting on a tensile/elastic membrane/member, pulling it down by an amount proportional to its weight. The system will decide what kind of object to superimpose depending on the weight of the 3D merchandise item. Previously, the system will have requested information from the user regarding the dimensions of the 3D merchandise item.
Reference is next made to Figure 5, which shows in flowchart form another example of the processing steps for creating a three dimensional representation of a real object. The process is indicated generally by reference 500.
In the image capturing step 501, a camera (e.g. a webcam) is used to capture images and upload them to an image processing engine. In this step, the three dimensional content creation system uses a software module to take control of the user's webcam via the user's web browser in order to capture images of an object (e.g. a merchandise item). For example, Javascript embedded in PHP obtains control over the user's webcam (e.g. if permission is granted by the user). The Javascript instructs the webcam to take multiple pictures in quick succession while the user rotates the item in front of the webcam.
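The capture step itself is driven by Javascript running in the user's web browser. Purely as an illustrative desktop analogue (an assumption, not the implementation described above), the Python sketch below grabs a burst of webcam frames with OpenCV while the user rotates the item.

    import time
    import cv2  # OpenCV, used here only as a stand-in for the browser-side capture

    def capture_rotation_images(num_frames=36, delay_s=0.25):
        """Grab a burst of still frames from the default webcam."""
        cap = cv2.VideoCapture(0)      # open the default camera
        frames = []
        for _ in range(num_frames):
            ok, frame = cap.read()     # one still image per iteration
            if not ok:
                break
            frames.append(frame)
            time.sleep(delay_s)        # short pause while the user keeps rotating the item
        cap.release()
        return frames

    if __name__ == "__main__":
        for i, img in enumerate(capture_rotation_images()):
            cv2.imwrite("capture_%03d.jpg" % i, img)  # frames would then be uploaded for processing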
In the background removal step 502, the images are processed to identify and remove the background of the merchandise item using one or more of the following algorithms:
(a) Green Screen detection algorithm
(b) Constant Vs. Changing image algorithm
(c) Focus sharpness algorithm
(d) Edge detection algorithm (the edge detection algorithm can be used once the foreground object has been roughly identified using the "Constant Vs. Changing" and "Focus Sharpness" algorithms).
(e) Combine steps (a), (b), (c), (d) using the Combining algorithm.
In the image optimisation step 503, an image histogram is optimised by selectively brightening/darkening certain pixels (see Image Optimisation Algorithm).
In the image size optimisation step 504, the file size of each image is reduced (for faster network transit) without reducing quality.
In the sequencing step 505, a sequencer is configured to (1) sequence images left to right, (2) sequence images up/down and (3) sequence interactive images in a predetermined order using a Sequencing Algorithm.
In the sound processing step (not shown), sounds produced during the image capturing step are recorded. For example, when a seller captures the 3D merchandise item, the sounds produced from interacting with it are recorded by the webcam's microphone. Noise is removed using a Noise Removal Algorithm. When a user (e.g. buyer) interacts virtually with the 3D merchandise item (e.g. by clicking on various points on the image), the sounds are replayed.
In the sizing step (not shown), the three dimensional content creation system obtains information about the size of the merchandise item. For example, the system may request the user (e.g. seller) to input the dimensions of the 3D merchandise item. From this information, the system decides upon an easily recognisable object that is comparable to the 3D merchandise item and then is able to superimpose the object on the images. The size of the image of the easily recognisable item is determined by a Sizing Algorithm.
In a weight calculation step (not shown), the three dimensional content creation system obtains information about the weight of the merchandise item. For example, the system asks the user (e.g. seller) to input the weight of the 3D merchandise item. From this information, the system decides upon an easily recognisable object that is comparable to the 3D merchandise item and then is able to superimpose the object on the images. Both the 3D merchandise item and the object can be shown to be resting on an elastic membrane, and both pulling down the membrane by amounts proportional to their weights. The depth of pulling of the easily recognisable object is determined by a Weight Algorithm.
In a storing step 506, the system stores the sequenced images in a database (e.g 103, 150).
In an image streaming step 507, an image server (e.g. client process server 142) is configured to serve up the right kind of images in the right sequence according to a user's (e.g. buyer's) request.
In an image display step 508, images are displayed in the right order in the user's web browser via an HTML5 canvas element. The user is able to interact with the merchandise item by clicking on a hot point.
Algorithms
Various example algorithms will now be described for processing images. Some of these algorithms refer to the following mathematical symbols: *, / and ^. For the avoidance of doubt, the arithmetic operator legend for these symbols is as follows: Multiplication operator: *; Division operator: /; Exponent operator: ^.
Green Screen detection algorithm
A simple comparison of each pixel with the green colour will not work well as the green screen will have imperfections due to ambient lighting, folds and creases and non-uniform stretch. Therefore the present inventors have devised the following algorithm to identify the true background from the foreground object.
Reference is next made to Figures 6a, 6b and 6c, which show steps involved in the green screen detection algorithm.
In step 1, it is assumed that the first and last rows of the image are guaranteed to be the background. RGB (Red, Green, Blue) information is extracted from each pixel from the first and last rows of the image. Next, the average (AVrgb) of the Red, Green and Blue values for each pixel is computed. The average of the AVrgb of all pixels in the first and last rows (AVfl) is then computed.
In step 2, the RGB values of all pixels in the image, moving from top-left to bottom-right of the image, are extracted. For each pixel, the average of the R, G and B values (AVrgb) is computed. Next, the RGB values of the 10 pixels preceding the "current pixel under analysis" are extracted. The average of R, G and B for each pixel is computed and then averaged across the same chosen 10 pixels. This average is labelled as AV10p.
In step 3, AVrgb (from step 2) is compared with AVfl (from step 1) and a check is made to see if the variance is within a predefined threshold.
In step 4 (not shown), AVrgb (from step 2) is compared with AV10p (from step 2) and a check is made to see if the variance is within a predefined threshold.
In step 5 (not shown), if the variance in step 3 is within the threshold, then the "current pixel under analysis" is part of the "background". Otherwise it is part of the "foreground object".
In step 6 (not shown), if the variance in step 4 is within the threshold, then the "current pixel under analysis" is part of the "background" if the foreground has not been encountered in the pixel-line before. Otherwise it is part of the "foreground object".
In step 7 (not shown), if the conclusions from steps 5 and 6 are in conflict, then the conclusion from step 5 is honoured/chosen.
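A minimal sketch of this algorithm is given below in Python with NumPy (an assumption; the patent does not prescribe an implementation). It covers steps 1, 3, 5 and 7: the numeric threshold is illustrative, and the AV10p comparison of steps 2, 4 and 6 is omitted because step 7 gives the AVfl comparison precedence.

    import numpy as np

    def green_screen_background_mask(image, threshold=18.0):
        """Return a boolean mask (True = background) for an (H, W, 3) uint8 RGB image."""
        avrgb = image.astype(np.float64).mean(axis=2)        # per-pixel average of R, G and B (AVrgb)
        # Step 1: the first and last rows are assumed to be pure background (AVfl).
        avfl = np.concatenate([avrgb[0], avrgb[-1]]).mean()
        # Steps 3, 5 and 7: a pixel whose AVrgb is close enough to AVfl is background.
        return np.abs(avrgb - avfl) <= threshold

    if __name__ == "__main__":
        demo = np.full((120, 160, 3), 90, dtype=np.uint8)    # synthetic green-screen-like frame
        demo[40:80, 60:100] = 200                            # a brighter "foreground object"
        mask = green_screen_background_mask(demo)
        print(mask.mean())                                   # fraction of pixels classed as background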
Constant Vs. Changing image algorithm
In this method a set of images is analysed with the foreground object in varying rotational positions across the images. The background will be unchanging across the images. The foreground will change across the images. The background can be extracted by separating the fixed portions of the image from the changing ones.
Reference is next made to Figures 7a and 7b, which show steps involved in the Constant Vs. Changing image algorithm.
In step 1, RGB values of all pixels in all images in the set are extracted and an average of the R, G and B values for each pixel (AVrgb) is computed. The Standard Deviation (SDxy) of each pixel position is computed, as compared to pixels in other images at the same location. The average of all AVrgb is computed; let us call this AVAVxy.
In step 2, the first image is considered. Each pixel of the first image is analysed and the difference (Dxy) between the AVrgb and AVAVxy is computed.
In step 3 (not shown), if the difference Dxy is greater than SDxy, then the pixel under analysis is part of the foreground. Otherwise it is part of the background.
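A minimal Python/NumPy sketch of steps 1 to 3 is shown below (an illustration only; the images are assumed to be aligned and equally sized).

    import numpy as np

    def changing_foreground_mask(images):
        """Boolean mask (True = foreground) for the first image of a rotation set."""
        # Step 1: AVrgb per pixel for every image, stacked along a new axis.
        stack = np.stack([img.astype(np.float64).mean(axis=2) for img in images])
        sdxy = stack.std(axis=0)        # SDxy: per-position deviation across the set
        avavxy = stack.mean(axis=0)     # AVAVxy: per-position average across the set
        # Step 2: difference Dxy for the first image.
        dxy = np.abs(stack[0] - avavxy)
        # Step 3: a difference larger than SDxy marks the pixel as foreground.
        return dxy > sdxy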
Edge detection algorithm
The aim is to identify all pixels that form part of edges in the image.
Reference is next made to Figures 8a, 8b and 8c, which show steps involved in the edge detection algorithm.
In step 1, the RGB values of all pixels in the image are extracted and the average of R, G and B (AVrgb) for each pixel is computed.
In step 2, for each pixel under analysis, the RGB values of the 10 pixels preceding the "current pixel under analysis" are extracted. The average of R, G and B for each pixel is computed and then this is averaged across the same chosen 10 pixels. This average is labelled as AV10p.
In step 3, the standard deviation (SD) of the values of 11 pixels (the pixel under analysis and the 10 pixels preceding it) is computed.
In step 4, if AVrgb (from step 1) differs from AV10p (from step 2) by more than SD, then the pixel under consideration is a first-order candidate (P1) for being a pixel in an edge.
In step 5, for all P1 pixels in the image, the surrounding 24 pixels are searched for other P1 pixels. If at least one P1 pixel is found, then the current P1 under analysis is labelled as P2. While labelling a pixel as P2, the neighbouring pixels which benefited from this pixel should be ignored, i.e. those pixels which were P1's due to the current pixel under analysis.
In step 6, repeat steps 4 and 5 until all P8 pixels in the image are found. The P8 pixels are part of edges in the image.
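The Python/NumPy sketch below follows steps 1 to 6 as an illustration; the refinement in step 5 about ignoring neighbouring pixels that only became candidates because of the current pixel is omitted for brevity.

    import numpy as np

    def edge_pixels(image, rounds=7):
        """Boolean mask of edge pixels (the P8 pixels) for an (H, W, 3) RGB image."""
        avrgb = image.astype(np.float64).mean(axis=2)      # step 1: AVrgb
        h, w = avrgb.shape
        candidates = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(10, w):
                window = avrgb[y, x - 10:x + 1]            # the pixel plus its 10 predecessors
                av10p = window[:-1].mean()                 # step 2: AV10p
                sd = window.std()                          # step 3: standard deviation of the 11 pixels
                if abs(avrgb[y, x] - av10p) > sd:          # step 4: first-order candidate P1
                    candidates[y, x] = True
        # Steps 5 and 6: keep a candidate only if another candidate lies in its
        # surrounding 24 pixels, and repeat the pruning (P1 -> P2 -> ... -> P8).
        for _ in range(rounds):
            kept = np.zeros_like(candidates)
            for y, x in zip(*np.nonzero(candidates)):
                block = candidates[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
                if block.sum() > 1:                        # itself plus at least one neighbour
                    kept[y, x] = True
            candidates = kept
        return candidates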
Focus sharpness algorithm
It is expected that the foreground object will have a sharper focus as compared to the background. Therefore the present inventors recognised that the background can be separated from the foreground by analysing the sharpness (or lack of it) present in various portions of the image.
Reference is next made to Figures 9a and 9b, which show steps involved in the focus sharpness algorithm.
In step 1, the image is divided into squares of 10x10 pixels (or any other suitable size). The sharpness value (Sv) of each square is determined. See the sharpness detection algorithm below.
In step 2, if the Sv of a square is larger than a predefined threshold, then a check is made to determine if the Sv of the 8 squares that surround this square is also above the pre-defined threshold.
In step 3 (not shown), if the Sv of the 8 squares that surround this square is also above the pre-defined threshold, then it is determined that the pixels in the square are part of the foreground object. Otherwise they are part of the background.
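A minimal Python/NumPy sketch of these steps follows; the grid size and threshold are illustrative assumptions, and a simple gradient-based proxy stands in for the sharpness detection algorithm described next.

    import numpy as np

    def cell_sharpness(square):
        """Proxy sharpness value: fraction of unusually strong horizontal gradients."""
        grad = np.abs(np.diff(square, axis=1))
        return float((grad > grad.mean() + grad.std()).mean())

    def focus_foreground_cells(image, cell=10, threshold=0.05):
        """Per-cell boolean mask (True = foreground) for an (H, W, 3) RGB image."""
        avrgb = image.astype(np.float64).mean(axis=2)
        rows, cols = avrgb.shape[0] // cell, avrgb.shape[1] // cell
        sv = np.zeros((rows, cols))
        for r in range(rows):                              # step 1: sharpness value per square
            for c in range(cols):
                sv[r, c] = cell_sharpness(avrgb[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell])
        foreground = np.zeros((rows, cols), dtype=bool)
        for r in range(1, rows - 1):                       # steps 2 and 3: the square and its
            for c in range(1, cols - 1):                   # 8 neighbours must all exceed the threshold
                foreground[r, c] = bool((sv[r - 1:r + 2, c - 1:c + 2] > threshold).all())
        return foreground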
Sharpness detection algorithm
The aim is to get a sharpness factor from each image or an image section.
Reference is next made to Figure 9c, which shows the steps involved in the sharpness detection algorithm.
This involves finding the number of pixels (P8) that form part of an edge using the Edge Detection Algorithm (as mentioned above).
Then, divide P8 by the total number of pixels in the image or image section.
This gives the sharpness factor.
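Expressed as code (reusing the edge_pixels() function from the edge detection sketch above, which is an assumption of these illustrations rather than part of the patent), the sharpness factor is simply:

    def sharpness_factor(image_section):
        """Sharpness factor of an (H, W, 3) RGB image or image section."""
        edges = edge_pixels(image_section)        # P8 pixels from the edge detection sketch above
        return float(edges.sum()) / edges.size    # edge pixels divided by total pixels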
Combining algorithm
1. If the green screen detection algorithm is used, then the edge detection algorithm can be applied to further improve results from the green screen detection algorithm.
2. If the green screen detection algorithm cannot be used, then the Constant Vs. Changing image algorithm and the focus sharpness algorithm are used independently followed by the edge detection algorithm to further improve results from the Constant Vs. Changing image algorithm and the focus sharpness algorithm.
Then the two results (from the Constant Vs. Changing image algorithm and the focus sharpness algorithm) are merged using the standard Sigma Clipping Algorithm.
Image Optimisation Algorithm
Due to poor ambient lighting, glare and shadows, the image can be optimised by automatically brightening certain areas and darkening others, while maintaining overall image fidelity.
The image optimisation process involves healing extremities of darkness & brightness, followed by adjusting the entire image for the "extreme darkness & brightness adjustments".
Reference is next made to Figures 10a, 10b, 10c, 10d, 10e, 10f and 10g, which show steps involved in the image optimisation algorithm.
In step 1 (not shown), RGB values of all pixels in all images in the set are extracted and the average of the R, G and B values for each pixel (AVrgb) is computed.
In step 2, an image histogram table in memory is constructed, with brightness levels on the X-axis and number of pixels for each brightness level on the Y-axis.
In step 3, using the image histogram, the brightness level under which 5% of the darkest pixels fall is determined. This brightness level is set as the dark point (Dp).
In step 4, using the image histogram, the brightness level above which 5% of the brightest pixels fall is determined. This brightness level is set as the light point (Lp).
In step 5, the range of brightness levels between the dark and light points is determined. This is identified as the optimised range (Or). The pixel multiplication factor (Pmf) is computed using the formula: Pmf = 255/Or.
In step 6, the value of pixels below the dark point is set to zero and the value of pixels above the light point is set to 255.
In step 7, the value of the rest of the pixels is set using the formula: Pnew = (Pold - Dp) * Pmf, where Pnew is the new pixel value and Pold is the earlier pixel value.
Next, a histogram stretch is applied non-linearly to improve contrast by darkening the mid-range dark areas and by brightening the mid-range bright areas. This can be achieved by applying a full cycle of a sine curve to the image histogram. The zero point of the sine curve will start at Dp, the lowest point on the sine curve will be at Mrda, the cross-over at zero of the sine curve will occur at Co, the highest point of the sine curve will occur at Mrba, and finally the next zero-crossing of the sine curve will occur at Lp.
In step 8, the mid-range dark area (Mrda) is identified by taking the 1/4 brightness level of the image (using the changed image from step 7 above).
In step 9, the mid-range bright area (Mrba) is identified by taking the 3/4 brightness level of the image (using the changed image from step 7 above).
In step 10, the cross-over point (Co) between dark and bright areas is identified by taking the 1/2 brightness level of the image (using the changed image from step 7 above).
In step 11, the range of the sine curve is computed. Dp will be at Sin(0) and Lp will be at Sin(2π), so that one cycle of the sine curve covers the entire image histogram.
In other words, the brightness range 0-255 will have to be mapped to 0-2π of the sine cycle. Therefore, each increment of Sin(x) will be 2π/255.
In step 12, the sine curve is applied to the histogram. The value of each pixel is multiplied using the formula: Pnew = Pold * -(Sin(x)) / 2, where x takes a value between 0 and 2π, incremented by 2π/255.
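A minimal Python/NumPy sketch of the linear stretch (steps 1 to 7) is given below as an illustration; the non-linear sine adjustment of steps 8 to 12 is not reproduced in the sketch.

    import numpy as np

    def stretch_histogram(image):
        """Linear histogram stretch (steps 1-7) for an (H, W, 3) uint8 RGB image."""
        img = image.astype(np.float64)
        brightness = img.mean(axis=2)              # step 1: AVrgb per pixel
        dp = np.percentile(brightness, 5)          # step 3: dark point Dp (darkest 5%)
        lp = np.percentile(brightness, 95)         # step 4: light point Lp (brightest 5%)
        pmf = 255.0 / max(lp - dp, 1.0)            # step 5: Pmf = 255 / Or
        stretched = (img - dp) * pmf               # step 7: Pnew = (Pold - Dp) * Pmf
        # Step 6: values below the dark point clamp to 0, values above the light point to 255.
        return np.clip(stretched, 0, 255).astype(np.uint8)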
Noise Removal Algorithm
Due to poor ambient conditions, various other unwanted sounds may be present in the recording.
Reference is next made to Figures 11a and 11b, which show steps involved in the noise removal algorithm.
1. During sound capture, the three dimensional content creation system (i.e. the software module) will ask the user for two seconds of silence (prior to starting the sound of the merchandise item).
2. The system will record this ambient noise. This ambient noise is labelled as Na.
3. Then the system will record the useful sound from the merchandise item. This is labelled as Sm.
4. A Fourier Analysis is conducted on Na and a set of harmonics are obtained.
5. The harmonics are added together to get a constant ambient noise (Nca), which does not change during the two second duration of Na.
6. The Nca is subtracted from Na to get a non-constant, varying or periodic noise (Nvp), like hard disk clicks etc.
7. Nca is immediately subtracted from Sm, removing the constant ambient noise.
8. Nvp is analysed for repeating instances of similar noise. Each instance of sound gets saved as a separate file - Nvp1, Nvp2, etc.
9. Sm is searched for Nvp1, Nvp2, etc. and, when found, they are deleted from Sm, thus cleaning Sm of most of the background noise.
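One possible reading of steps 4 to 7 is sketched below in Python/NumPy as a simple spectral subtraction on mono float samples; this is an interpretation rather than the patent's implementation, and the repeating-noise search of steps 8 and 9 is not shown.

    import numpy as np

    def remove_constant_ambient_noise(ambient, recording):
        """Subtract the constant ambient noise (Nca) estimated from `ambient` out of `recording`."""
        ambient = np.asarray(ambient, dtype=np.float64)      # Na: the two seconds of silence
        recording = np.asarray(recording, dtype=np.float64)  # Sm: the useful sound
        na_spectrum = np.fft.rfft(ambient)                   # step 4: Fourier analysis of Na
        magnitude = np.abs(na_spectrum)
        strong = magnitude > magnitude.mean() + 2 * magnitude.std()
        nca_spectrum = np.where(strong, na_spectrum, 0)      # step 5: strongest harmonics form Nca
        frame = len(ambient)
        cleaned = recording.copy()
        for start in range(0, len(cleaned) - frame + 1, frame):
            block = np.fft.rfft(cleaned[start:start + frame])
            cleaned[start:start + frame] = np.fft.irfft(block - nca_spectrum, n=frame)  # step 7
        return cleaned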
Sequencing Algorithm
Reference is next made to Figure 12, which shows steps involved in the sequencing algorithm.
1. The system asks the user to rotate the merchandise item horizontally.
Images captured are labelled H1, H2, H3 ... Hn.
2. After the horizontal captures are done, the user is presented with a view of H1 and is then asked to position the merchandise item in exactly the same position as H1. Then the user is asked to rotate the item in a vertical direction.
Captures are labelled as V1a, V1b, V1c ... V1m. Then a view of H2 is presented and the user positions the merchandise item appropriately and rotates vertically, while the system captures V2a, V2b, V2c ... V2m.
3. Step 2 is repeated until Vnm is captured.
4. The system thus captures images numbering n + n*m.
5. While storing in the database, the horizontal-rotation images are stored sequentially - H1, H2 ... Hn.
6. While storing in the database, the vertical-rotation images are stored in a manner that a set of them are attached to an H picture. For example: V3a, V3b,...V3m are attached to H3. And within a set of V pictures, they are stored sequentially in alphabetical order.
7. "Interactive feature" captures are either based on one H image or V image. The user is asked to choose one image (H or V] which contains the hot-point.
Then, the user is asked to position the merchandise item in exactly the same position as the chosen H or V. Once done, the system starts capturing images while the user activates the "interactivity item" on the merchandise item. While storing into the database, these set of "interactive feature" image captures will be attached to the selected H or V image.
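A minimal sketch of the resulting storage layout is shown below in Python; the record structure and file names are assumptions rather than the patent's database schema.

    def build_sequence_records(horizontal, vertical_sets, interactive_sets=None):
        """Attach each vertical set (and optional interactive captures) to its H image.

        horizontal: file names H1..Hn already in horizontal-rotation order.
        vertical_sets: vertical_sets[i] holds the V captures taken from horizontal[i].
        interactive_sets: optional dict mapping an H (or V) file name to its interactive captures.
        """
        interactive_sets = interactive_sets or {}
        records = []
        for h_name, v_names in zip(horizontal, vertical_sets):
            records.append({
                "horizontal": h_name,                         # stored sequentially: H1, H2 ... Hn
                "vertical": sorted(v_names),                  # within a set, alphabetical order
                "interactive": interactive_sets.get(h_name, []),
            })
        return records

    if __name__ == "__main__":
        records = build_sequence_records(
            ["H1.jpg", "H2.jpg"],
            [["V1a.jpg", "V1b.jpg"], ["V2a.jpg", "V2b.jpg"]],
            {"H2.jpg": ["H2_feature_1.jpg", "H2_feature_2.jpg"]},
        )
        print(records[1])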
Sizing Algorithm
The aim is to display an easily recognisable object next to the merchandise item, so that a user can understand the scale of the merchandise item.
1. The system prompts the user to enter the dimensions of the merchandise item in length, width and depth. The system computes the volume (Vmi) of the merchandise item.
2. An object of comparable size is selected from a database of "easily recognisable objects". Let us say the volume of this object is Vero.
3. Vmi and Vero may be of slightly different volumes. So, the easily recognisable object should be scaled up or down to match the scale of the merchandise item.
Aero = Ami * (Vero/Vmi)^(2/3)
The length of the image of the easily recognisable item should be multiplied by Aero^(1/2). The width of the image of the easily recognisable item should be multiplied by Aero^(1/2).
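Written out as code (Python, an assumption), with the formulas reproduced as printed above and purely illustrative numbers in the example:

    def recognisable_object_scale(vmi, vero, ami):
        """Per-dimension multiplier for the easily recognisable object's image.

        vmi, vero: volumes of the merchandise item and of the recognisable object;
        ami: image area of the merchandise item. Aero and the square-root multiplier
        follow the formulas above.
        """
        aero = ami * (vero / vmi) ** (2.0 / 3.0)  # Aero = Ami * (Vero/Vmi)^(2/3)
        return aero ** 0.5                        # length and width each multiplied by Aero^(1/2)

    if __name__ == "__main__":
        # e.g. a 1200 cm^3 merchandise item rendered over 40000 image pixels,
        # compared against a 330 cm^3 recognisable object (illustrative numbers).
        print(recognisable_object_scale(vmi=1200.0, vero=330.0, ami=40000.0))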
Weight Algorithm
The aim is to display an easily recognisable object next to the merchandise item and weigh both down with a mesh, so that a user can understand the weight of the merchandise item.
1. The system prompts the user to enter the weight of the merchandise item. This weight entry is labelled Wmi.
2. An object of comparable weight is selected from a database of "easily recognisable objects". Let's say the weight of this object is Wero.
3. The merchandise item is shown resting on an elastic mesh, which sags under the weight Wmi. The depth of sagging (Dsmi) is a fixed amount for all merchandise items.
4. The easily recognisable item is shown resting on an elastic mesh, which sags under the weight Wero. The depth of sagging (Dsero) is calculated as follows.
Dsero = Dsmi * Wero/Wmi
Reference is next made to Figure 13, which shows an example three dimensional representation of a camera displayed in an online shop. In this context, the present invention will enable a customer of the online shop to manipulate a product (a camera in this example) in order to view all parts of the product; they will be able to rotate the product, enlarge it, and click on specific highlighted points (hot-points) on the product in order to adjust/activate individual elements of the product, and the image generation means of the present invention will deliver the appropriate three dimensional content to the target image display means based on these actions. Thus, the present invention makes it possible, in a straightforward and cost effective manner, to produce fully interactive three dimensional models for a user to interact with.
Embodiments of the present invention have been described above. Further embodiments of the present invention can also be realized by apparatuses or systems that read out and execute programs recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program may be provided to the three dimensional content creation system, for example via a network or from a recording medium of various types serving as the memory device (e.g., a computer-readable medium).
The implementation described above and illustrated in the drawings is just one possible implementation (with variations as described). The examples described are purely illustrative and the skilled reader will appreciate that many further modifications and variations are possible within the spirit and scope of the invention described herein.
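As a final illustration, a minimal sketch of the per-pixel background tests recited in claims 19 and 20 below is given here. The helper names and the tolerance value are assumptions, since the claims require only that RGB values be "approximately equal":

```python
# Illustrative sketch of the per-pixel background checks of claims 19 and 20:
# a pixel is treated as background if its RGB value approximately equals a
# predetermined green-screen value, or approximately equals the corresponding
# pixel of a second captured image.
from typing import Tuple

RGB = Tuple[int, int, int]

def is_close(a: RGB, b: RGB, tolerance: int = 10) -> bool:
    """Approximate equality of two RGB values, channel by channel."""
    return all(abs(x - y) <= tolerance for x, y in zip(a, b))

def is_green_screen_pixel(pixel: RGB, screen_colour: RGB = (0, 177, 64)) -> bool:
    """Claim 19: background if the pixel matches the predetermined screen colour."""
    return is_close(pixel, screen_colour)

def is_static_background_pixel(pixel_a: RGB, pixel_b: RGB) -> bool:
    """Claim 20: background if the pixel is approximately unchanged between two
    captured images (the rotating merchandise item moves, the background does not)."""
    return is_close(pixel_a, pixel_b)
```

A per-channel tolerance is used here as one possible reading of "approximately equal"; any comparable colour-distance test would serve.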

Claims (34)

1. A three dimensional content creation system comprising: a database, configured to store, based on a plurality of captured images of a primary object and a display attribute associated with the primary object, a set of images representative of a three dimensional representation of the primary object; and image generation means configured to receive from the database at least a subset of the set of images, and to generate from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at a target image display means.
2. A three dimensional content creation system according to claim 1, wherein the display attribute comprises one or both of a numerical value and a text descriptor.
3. A three dimensional content creation system according to claim 2, wherein the numerical value is representative of a physical dimension of the primary object.
4. A three dimensional content creation system according to claim 3, wherein the image generation means is configured to generate, based on the numerical value, an image of a secondary object for rendering at the target image display means in proximity to the three dimensional representation of the primary object.
5. A three dimensional content creation system according to claim 4, wherein the image generation means is configured to generate, based on the numerical value, an image of a deformable object for rendering at the target image display means and for representing the weight of the primary object with respect to the weight of the secondary object.
6. A three dimensional content creation system according to claim 5, wherein the image generation means is configured to generate the image of the deformable object such that a first area of the deformable object for displaying the three dimensional representation of the primary object is deformed by an amount represented by the numerical value and a second area of the deformable object for displaying the image of the secondary object is deformed by an amount dependent on the numerical value associated with the primary object.
7. A three dimensional content creation system according to any preceding claim, wherein the display attribute comprises an audio signal.
8. A three dimensional content creation system according to claim 7, wherein the audio signal is representative of audio generated when a user interacts with the primary object.
9. A three dimensional content creation system according to claim 8, wherein the image generation means is configured to play back the audio at the target image display means in response to a user interacting with the three dimensional representation of the primary object.
10. A three dimensional content creation system according to any of claims 7 to 9, wherein the image generation means is configured to remove noise from the audio signal.
11. A three dimensional content creation system according to any preceding claim, wherein the set of images comprises a set of sequenced horizontal-rotation images captured while the primary object is rotating about a horizontal axis and a set of sequenced vertical-rotation images captured while the primary object is rotating about a vertical axis.
12. A three dimensional content creation system according to claim 11, wherein the set of images further comprises a set of images captured while a movable part of the primary object is activated, and the database is operable to store, in association with an image from the set of sequenced horizontal-rotation images or from the set of sequenced vertical-rotation images, the set of images captured while the movable part of the primary object is activated.
13. A three dimensional content creation system according to any preceding claim, further comprising capturing means for capturing the plurality of captured images of the primary object and the display attribute associated with the primary object.
14. A three dimensional content creation system according to claim 13, comprising a software application providing the capturing means, the software application being operable to request a user to display the primary object in front of a camera so that the capturing means can capture a plurality of images of the primary object from a plurality of perspectives.
15. A three dimensional content creation system according to claim 14, wherein the software application is operable to capture the display attribute by requesting the user to activate an interactive feature on the primary object in front of the camera so that the capturing means can capture a plurality of images while the interactive feature is active.
16. A three dimensional content creation system according to claim 14 or claim 15, wherein the software application is operable to capture the display attribute by recording sound emanating from the primary object and/or recording a verbal narrative by the user.
17. A three dimensional content creation system according to any of claims 14 to 16, wherein the software application is operable to capture the display attribute by requesting the user to enter a numerical value representative of a physical dimension of the primary object.
18. A three dimensional content creation system according to any preceding claim, further comprising image processing means for performing a background image removal process on the plurality of captured images to identify and remove the background from each of the captured images.
19. A three dimensional content creation system according to claim 18, wherein the background image removal process comprises a green screen detection process for identifying a pixel as part of the background by extracting a red, green and blue (RGB) value from a pixel of a captured image and identifying the pixel as being part of the background if the RGB value is approximately equal to a predetermined value.
20. A three dimensional content creation system according to claim 18, wherein the background image removal process comprises extracting a red, green and blue (RGB) value from a pixel from a first captured image, and identifying the pixel of the first captured image as being part of the background if its RGB value is approximately equal to an RGB value of a corresponding pixel of a second captured image.
21. A three dimensional content creation system according to claim 18, wherein the background image removal process comprises dividing a captured image into a predetermined number of grid cells, determining the blurriness of each grid cell, analysing the spatial spread of blurriness across the entire image and identifying a background image as a set of grid cells that has the most amount of blurriness.
22. A three dimensional content creation system according to any of claims 19 to 21, wherein the background removal process comprises an edge detection process comprising analysing surrounding areas of each pixel of a captured image to identify changes in RGB values and identifying a pixel as part of an edge if there is a predetermined change in pixel values from one area to another.
23. A three dimensional content creation system according to any preceding claim, wherein the image generation means is configured to optimise a captured image by selectively brightening and darkening certain pixels of the image.
24. A three dimensional content creation system according to any preceding claim, wherein the database and the image generation means are implemented on a server.
25. A three dimensional content creation system according to claim 24, wherein the image generation means is responsive to a request to extract and generate the target sequenced images, the request indicating an image manipulation performed at the target image display means, the image generation means selecting the images for extracting and generating the appropriate corresponding target sequenced images based on the indication of the image manipulation provided by the request.
26. A three dimensional content creation system according to claim 25, wherein the manipulation is selected from a group of operations comprising: rotation, pan, tilt, zoom in, zoom out, and point and click.
27. A three dimensional content creation system according to any of claims 24 to 26, comprising: a software application providing the target image display means, the software application being operable to request the server to provide the set of target sequenced images in relation to the primary object; to receive from the server the requested set of target sequenced images; and to render the three dimensional representation of the primary object.
28. A three dimensional content creation method, comprising the steps of: storing, in a database, based on a plurality of captured images of a primary object and a display attribute associated with the primary object, a set of images representative of a three dimensional representation of the primary object; receiving from the database at least a subset of the set of images; and generating from the received images a set of target sequenced images for rendering the three dimensional representation of the primary object at a target image display means.
29. An image display apparatus configured to receive and display the set of target sequenced images of any of claims 1 to 27 on a display screen.
30. A program, which when executed by a computer, causes the computer to execute a method according to claim 28.
31. A storage medium storing the program according to claim 30.
32. A three dimensional content creation system substantially as hereinbefore described with reference to the accompanying drawings.
33. A three dimensional content creation method substantially as hereinbefore described with reference to the accompanying drawings.
34. An image display apparatus substantially as hereinbefore described with reference to the accompanying drawings.
GB1411416.9A 2014-06-26 2014-06-26 Three dimensional content creation system and method Expired - Fee Related GB2527582B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1411416.9A GB2527582B (en) 2014-06-26 2014-06-26 Three dimensional content creation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1411416.9A GB2527582B (en) 2014-06-26 2014-06-26 Three dimensional content creation system and method

Publications (3)

Publication Number Publication Date
GB201411416D0 GB201411416D0 (en) 2014-08-13
GB2527582A true GB2527582A (en) 2015-12-30
GB2527582B GB2527582B (en) 2020-09-16

Family

ID=51410197

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1411416.9A Expired - Fee Related GB2527582B (en) 2014-06-26 2014-06-26 Three dimensional content creation system and method

Country Status (1)

Country Link
GB (1) GB2527582B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1093089A2 (en) * 1999-10-11 2001-04-18 Inmax T&C Co., Ltd. System and method for quasi three-dimensional multimedia image processing from two-dimensional image data
WO2003049455A1 (en) * 2001-11-30 2003-06-12 Zaxel Systems, Inc. Image-based rendering for 3d object viewing
US20050253840A1 (en) * 2004-05-11 2005-11-17 Kwon Ryan Y W Method and system for interactive three-dimensional item display
US20120239513A1 (en) * 2011-03-18 2012-09-20 Microsoft Corporation Virtual closet for storing and accessing virtual representations of items
WO2013191689A1 (en) * 2012-06-20 2013-12-27 Image Masters, Inc. Presenting realistic designs of spaces and objects

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019173012A1 (en) * 2018-03-08 2019-09-12 Ebay Inc. Online pluggable 3d platform for 3d representations of items
US11048374B2 (en) 2018-03-08 2021-06-29 Ebay Inc. Online pluggable 3D platform for 3D representations of items

Also Published As

Publication number Publication date
GB201411416D0 (en) 2014-08-13
GB2527582B (en) 2020-09-16

Similar Documents

Publication Publication Date Title
WO2018095142A1 (en) Livestream interaction method and apparatus
US10452920B2 (en) Systems and methods for generating a summary storyboard from a plurality of image frames
JP6780117B2 (en) Intelligent automatic cropping of images
KR102120046B1 (en) How to display objects
DK2909576T3 (en) SYSTEM AND PROCEDURE FOR AUTOMATIC OPTICAL IMAGE OF A SUBJECT
US11367259B2 (en) Method for simulating natural perception in virtual and augmented reality scenes
US8878897B2 (en) Systems and methods for sharing conversion data
US11354889B2 (en) Image analysis and processing pipeline with real-time feedback and autocapture capabilities, and visualization and configuration system
US11348248B1 (en) Automatic image cropping systems and methods
JP6882868B2 (en) Image processing equipment, image processing method, system
CN109255767B (en) Image processing method and device
KR102007432B1 (en) System of 3-Dimensional Video Generation and Provision
CN111724231A (en) Commodity information display method and device
JP6505646B2 (en) Image generation apparatus, image generation method, and program
GB2527582A (en) Three dimensional content creation system and method
CN116486018A (en) Three-dimensional reconstruction method, apparatus and storage medium
CN108781280B (en) Test method, test device and terminal
KR20230016781A (en) A method of producing environmental contents using AR/VR technology related to metabuses
JP2015125543A (en) Line-of-sight prediction system, line-of-sight prediction method, and line-of-sight prediction program
CN113362351A (en) Image processing method and device, electronic equipment and storage medium
US20240193851A1 (en) Generation of a 360-degree object view by leveraging available images on an online platform
CN112233103B (en) Three-dimensional house model quality evaluation method and device and computer readable storage medium
CN112634460B (en) Outdoor panorama generation method and device based on Haar-like features
JP7439042B2 (en) Image processing device, image processing method and program
CN111192276B (en) Image processing method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20220626