EP2195781A1 - Online shopping system and method using three-dimensional reconstruction - Google Patents

Online shopping system and method using three-dimensional reconstruction

Info

Publication number
EP2195781A1
Authority
EP
European Patent Office
Prior art keywords
user
dimensional
reconstruction
environment
visual media
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP08800261A
Other languages
German (de)
English (en)
Inventor
Christian Laforte
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Feeling Software
Original Assignee
Feeling Software
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feeling Software filed Critical Feeling Software
Publication of EP2195781A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Definitions

  • the present invention relates to an online shopping system and method using 3D reconstruction.
  • the present invention further relates to a 3D reconstruction method.
  • profiling information includes primarily textual hints (e.g. the presence of specific words in the text of the website), personal information entered by the user (e.g. the age and gender), geography (inferred from the IP address of the client computer) and previous internet browsing history (often identified through client-side cookies).
  • CAD computer aided design
  • Photogrammetric tools, such as Photomodeler™ from Eos Systems.
  • 3D scanners, such as the ZScanner™ product line from Z Corp, can acquire detailed and precise 3D environments. However, compared with standard digital cameras, these special hardware solutions are expensive, bulky and hard to operate. Due to the active nature of commercial 3D scanners (e.g. the use of lasers or light projectors), only the largest and most powerful 3D scanners can acquire large-scale environments.
  • Some automatic 3D reconstruction systems using pictures can combine the information from a plurality of pictures to reconstruct a point cloud and present these pictures interactively.
  • reconstructing a precise 3D textured mesh from this point cloud is non-trivial and not demonstrated.
  • these systems suffer from the limitation that, once acquired, an environment cannot be edited by the user.
  • the present invention relates to a system and method that enable users to reconstruct a precise and realistic virtual 3D representation of an existing environment, such as their home, office or other interior, from visual media such as photos or video. Once the environment is reconstructed, the user can perform activities such as visualizing and collaboratively planning virtual renovations, seeing and purchasing new furniture and accessories, listing his or her house for sale, or sharing 3D models and designs, as well as renovation and shopping tips, with friends and the general public. This enables advertisers to advertise to specific segments of consumers, according to the content of their environment.
  • Figure 1 is a schematic view of computing devices connected to an online shopping and 3D reconstruction system through a network;
  • Figure 2 is a flow diagram depicting the process of a user accessing the online shopping and 3D reconstruction system;
  • Figure 3 is a schematic diagram of the online shopping and 3D reconstruction system;
  • Figure 4 is a flow diagram depicting a process used to automatically reconstruct a 3D environment or 3D products from standard visual media; and
  • Figure 5 is a flow diagram depicting a feature matching process used by the process of Figure 4.
  • the non-limitative illustrative embodiment of the present invention provides a system and method that allows a user to automatically reconstruct an annotated virtual 3D environment from a real environment, to personalize this environment and to shop, communicate and be entertained within it.
  • an online shopping and 3D reconstruction system which includes a 3D reconstruction algorithm that can reconstruct one or more 3D models precisely and realistically from visual media (e.g. standard photos or video), with or without human intervention.
  • the system further includes a recognition algorithm capable of recognizing 3D objects in a 2D image or 3D scene, and inferring an entire 3D scene from one or more images.
  • the system also includes a search engine capable of matching complex visual, textual and other types of data applicable to searches, automatic recommendation and advertisement selection.
  • an Internet connection 120 such as, for example, Ethernet (broadband, high-speed), wireless Wi-Fi, cable Internet, satellite connection, cellular or satellite network, etc.
  • the online shopping and 3D reconstruction system 130 includes a 3D server 134, a user database 136 and a 3D content database 138, all of which will be detailed further below.
  • a personal computer 112 should be construed to comprise as well a laptop computer 114, a cell phone 115, a tablet computer 116, a personal assistant device 117, a kiosk or any other such computing device on which a browser application may run.
  • a user's personal computer 112 should run a browser application compatible with a custom client plug-in that may be automatically downloaded when the user first connects to the online shopping and 3D reconstruction system 130. Before the client plug-in is downloaded, the user may be prompted to accept the installation on their personal computer 112. The user may then be asked where to download the plug-in, or it may be automatically installed in a default location. The plug-in may be automatically updated when necessary, e.g. during visits to the online shopping and 3D reconstruction system 130 or according to a given schedule.
  • the client plug-in may be a Flash component or a JavaScript implementation.
  • the client plug-in may be replaced by an applet implemented, for example, in Java.
  • the 3D reconstruction part of the online shopping and 3D reconstruction system 130 may be implemented as a standalone application driven, for example, by DVD-ROM on a personal computer, a video game console such as the PlayStation 3, or a peer-to-peer application.
  • users of the online shopping and 3D reconstruction system 130 may be categorized in various classes, each class having a typical usage scenario, for example:
  • Consumer: a user that has logged in and may create a virtual environment, personalize it, shop within it, communicate with other users and use the entertainment features of the online shopping and 3D reconstruction system 130;
  • Advertiser: a user that can create and monitor advertising campaigns;
  • Merchant: a user that can manage and promote an online store within the online shopping and 3D reconstruction system 130;
  • Professional: a user that has access to special features (e.g. exporting plans to AutoCAD).
  • merchants and professionals may also want to advertise their services and products, in which case a user may belong to the merchant or professional class as well as the advertiser class, thus having access to the features of all the classes the user belongs to. It is also to be understood that some classes may be omitted or that additional classes may be added.
  • Some features may be restricted to specific user classes (e.g. merchant, advertiser, etc.). Some features may be charged for, e.g. a user may have to pay on a per-use basis or using a subscription model, or may receive them as a reward for completing some task such as watching advertising or recommending the online shopping and 3D reconstruction system 130 to some friends and having them register.
  • Session information may be stored on the connection server 132, on the user personal computer 112 or in a combination of both.
  • Referring to FIG. 2, there is shown a flow diagram of an illustrative example of a process 200 executed when a user accesses the online shopping and 3D reconstruction system 130.
  • the steps of the process 200 are indicated by blocks 202 to 228.
  • the process 200 starts at block 202 where the user connects to the connection server 132 using a personal computer 112 on which runs a browser application with the custom client plug-in. If the user is connecting to the online shopping and 3D reconstruction system 130 for the first time, as mentioned previously, the custom client plug-in may be automatically downloaded or the user may be prompted to accept the download.
  • the user is given the opportunity to log into the online shopping and 3D reconstruction system 130 using an assigned login and password (or by any other means such as, for example, biometrics). If the user logs in, the connection server 132 validates the login and password of the user by querying the user database 136 which, if the user is registered, sends back information associated with the user such as, for example, the user profile, preferences, class or classes, etc., and then proceeds to block 212.
  • the process 200 verifies if the user belongs to the consumer class. If so, at block 214, the process 200 enables the consumer activities for that user. The process 200 then proceeds to block 216.
  • the process 200 verifies if the user belongs to the professional class. If so, at block 218, the process 200 enables the professional activities for that user. The process 200 then proceeds to block 220.
  • the process 200 verifies if the user belongs to the advertiser class. If so, at block 222, the process 200 enables the advertiser activities for that user. The process 200 then proceeds to block 224.
  • the process 200 verifies if the user belongs to the merchant class. If so, at block 226, the process 200 enables the merchant activities for that user. The process 200 then proceeds to block 228.
  • the process 200 displays a menu of activities available to the user, depending on the class(es) the user belongs to, through a user interface (UI) which will be further detailed below.
  • UI: user interface
  • the user may then interact with the online shopping and 3D reconstruction system 130 through the UI 27, using the available list of activities determined at block 228 of Figure 2.
  • the UI 27 is composed of a dynamic website on the connection server 132 and a 3D engine 28 (possibly included in a plug-in) that allows the user to interactively navigate and personalize aspects of a 3D environment 38 or 3D products 39 offered for sale or display, using a web browser on his or her personal computer 112 (see Figure 1).
  • These 3D environments 38 and 3D products 39 may be created by some process 32 external to the online shopping and 3D reconstruction system 130, e.g. by a specialized artist or a 3D scanner, and imported from external applications or devices through the import module 31. They may also be created manually by a user of the consumer class through a process described further below.
  • the online shopping and 3D reconstruction system 130 also includes a 3D reconstruction module 11 that can automatically reconstruct a 3D environment 38 or 3D products 39 from standard visual media 10, such as a few pictures or a short video, submitted through the UI 27.
  • the 3D reconstruction module 11 can also accept user commands to increase its precision, robustness or performance.
  • a recognition module 12 may be used in order to recognize 3D products 39 present in the 3D environment 38, or map products identified in the input visual media 10 to 3D models that have been learnt from other input visual media or from other environments. This also allows the recognition module 12 to automatically reconstruct a precise 3D environment 38, including individual 3D products 39 with their associated 3D poses, from a single image.
  • the recognition module 12 may also compare an image with any object currently in the 3D content database 138 and return a probability that the object is present in the image.
  • the 3D reconstruction 11 and recognition 12 modules will be further detailed below.
  • the consumer may initiate a search or ask the system for a recommendation, which ultimately will result in one or more queries to the search engine 37 with criteria that may include text, images, colors and texture.
  • the results of some of these queries may be retrieved directly from the 3D content database 138.
  • Others may be evaluated using the inference engine 43 which applies inference rules 44 to perform data mining on known information, e.g. the colors of objects present in a room or the probabilities returned by the recognition module 12, to produce new probabilistic results.
  • the consumer may also use the shop function 16 to shop for new products within a 3D environment 38.
  • the consumer may use the search capability provided by the search engine 37, ask the online shopping and 3D reconstruction system 130 for recommendations that would fit his or her taste, follow a link seen through an advertisement or through another 3D environment, or browse through the available 3D products 39 and compare by category, price or other criteria.
  • a product can be added in the selected 3D environment 38 and, if the product is available for sale, the consumer may select an appropriate merchant from the merchant data 40 of the user database 136 and place an order, which may be fulfilled by the selected merchant through the online shopping and 3D reconstruction system 130 itself or through a third party.
  • the consumer may enter his or her personal information.
  • the online shopping and 3D reconstruction system 130 regularly imports, through the import module 31, internal merchant data 40 (e.g. price, product availability and new products). Orders to external merchants are submitted through the export module 30.
  • Consumers may also use the communication function 17 of the consumer activities module 14 to communicate with each other through common means (e.g. message boards, chat, reviews) and other means such as 3D chat and by collaboratively designing shared 3D environments 38.
  • Special versions of the UI 27 may also be integrated in other websites, applications or devices, allowing, for example, a retailer to host a branded version of the 3D technology on their website.
  • consumers who participate in social websites could integrate their personal 3D environment 38 in their personal web page within the social website, or have their friends from the social website be notified of actions such as creating a new 3D environment 38 or purchasing new 3D products 39.
  • Any activity (e.g. clicks, selections, modifications) performed by a consumer may be collected in the corresponding consumer data 41 of the user database 136, e.g. to be used as input to the inference engine 43, to reproduce problems or to allow the user to undo recent steps.
  • activities performed by other types of users e.g. merchants, advertisers and professionals, may also be collected in the corresponding merchant 40, advertiser 42 and professional 45 data of the user database 136 for the same purpose.
  • Consumers may also get entertained using the entertainment function 18 of the consumer activities module 14. For example, they may play games in a virtual environment. They may also export and/or integrate the 3D environment 38 inside games and other applications and devices such as, for example, The Sims™, Google Earth™, real-estate sites such as MLS, etc.
  • a 3D content server 29 stores 3D data in a file format semantically similar to COLLADA, which may optionally be binarized, compressed and encrypted.
  • Multiple 3D content servers 29 may contain read-only copies of a given 3D object (e.g. 3D environments 38 or 3D products 39); one 3D content server 29 is then defined as authoritative and handles write operations such as modification and creation of new 3D objects.
  • the 3D content servers 29 are updated asynchronously, using a low-priority process. When a 3D content server 29 goes down or is overloaded, the authority for its 3D objects is transferred to another 3D content server 29, providing load balancing.
  • Special functions may be offered to some user classes such as the advertiser and merchant classes. For example, advertiser related functions from the advertiser activities module 20 and merchant related functions from the merchant activities module 23. Advertisers have access to an expanded UI 27 that allows them to use the manage campaigns function 21 and the monitor campaigns function 22. Likewise, merchants may use the manage stores function 24 and monitor stores function 25.
  • Another class of users which may have access to special functions is the professional class.
  • a professional may have access to, through the professional activities module 54, functions such as the personalize function 55, the design function 56, the communicate function 57 and the import/export files function 58.
  • the personalize 55 and communicate 57 functions may be similar to those already described for the consumer class of users, i.e. 15 and 17.
  • the design function 56, as mentioned previously, may be used to perform advanced design activities, such as designing the air conditioning or electrical subsystem inside a given 3D environment 38.
  • the import/export files function 58 may be used, for example, to import a 3D environment 38 through the import module 31, modify it, and then export it to another system, application or device using the export module 30.
  • Data stored in the 3D content database 138 may be imported via the import module 31 using a combination of techniques such as manual entry by a user, import from external sources of data using web services, XML databases, web crawling and screen-scraping, etc.
  • attributes may be specific to the given object, e.g. a door may have options such as whether or not it includes a window. When the window option is enabled, a child window object is automatically created and parented under the door object.
  • a rule-based system also allows an attribute to affect other attributes and children attributes, e.g. a sofa may be available in several standard dimensions and colors.
  • the overall look of 3D products 39 (excluding lighting effects) is called the material. A material includes variations of color, texture, fabric, paint, finish, etc.
  • Displayable objects, e.g. 3D products 39, typically have one or more materials assigned to them. Optionally, they can also have standard material variations. For example, the type of wood and the finish of a piece of furniture may be selected by the consumer from some list.
  • Additional materials can be applied on top of an existing material by a consumer (or professional).
  • a chair could be covered with paint or even with carpet, tinfoil or cement.
  • These materials can be created from scratch or they can be copied from other 3D products 39, from a material library, etc.
  • the 3D engine 28 takes a scenegraph representation as input and produces one or more images.
  • the 3D engine 28 may also include an input mechanism, e.g. to allow users to select, move and personalize objects.
  • the 3D engine 28 may also support state-of-the-art features such as global illumination, subsurface scattering and tone mapping, in order to produce a photorealistic result at interactive rates.
  • the 3D engine 28 may use pre-loading, e.g. when doing a search for 3D products 39, the 3D engine 28 may immediately start downloading the visible results from a 3D content server 29, since it is likely that a consumer will want to examine and possibly add one of them.
  • a user has several options in order to create a 3D environment 38.
  • the user may use the personalization function 15 (see Figure 3) to create a 3D environment 38 from scratch, by first drawing floor plans, then placing windows, doors, etc.
  • Other systems known in the art may also be used for this purpose, for example US Patent No. 7,277,572 entitled "Three-dimensional interior design system".
  • Create a 3D environment from scratch:
  • a 2D floor layout is created. This may be accomplished using feature sets similar to those of other 2D illustration tools such as Adobe Illustrator™.
  • the user can draw individual edges or polygons that are completed automatically.
  • a brush is selected to change mode between drawing walls, doors, windows, etc.
  • Doors and windows are given standard dimensions that can optionally be overridden by the user.
  • Rooms and other objects can be named; the name, as well as automatically computed dimensions, is automatically displayed in the center of each room.
  • Typical visual ruler and grid snapping tools are provided. Default units are selected according to the user's geography.
  • an image may be used as a background, e.g. blueprint of a room.
  • snapping of edge orientation to 10/20/30/45/90 degrees and snapping to the closest corner/edge may be used, as in the sketch below.
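  • For example, a minimal sketch of such an angle-snapping rule (in Python; the candidate angles follow the list above, everything else is illustrative):

```python
def snap_angle(angle_deg, candidates=(0, 10, 20, 30, 45, 90)):
    """Snap a drawn edge's orientation to the nearest candidate angle."""
    base = angle_deg % 90  # work modulo 90 so the rule applies in every quadrant
    best = min(candidates, key=lambda c: abs(base - c))
    return angle_deg - base + best

assert snap_angle(47.0) == 45.0  # a nearly diagonal edge snaps to 45 degrees
```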
  • Fill up walls with windows and doors, and the floor with carpet and wood.
  • Wall height can be specified by user; otherwise a default height is used. Lengths may be specified numerically.
  • Special tools may be provided to build parametric fixtures such as staircases, kitchen cabinets, etc.
  • Features that are common in home design tools such as Punch!™ and Home Architect™ can also be integrated within the online shopping and 3D reconstruction system 130.
  • the exterior of the house may be specified using a similar interface as the interior.
  • the ideas described herein may also be applied to designing other 3D environments including but not limited to cars, boats, airplanes, gardens, factories, schools, hospitals, restaurants, etc.
  • a user may select one or more existing 3D environments 38. Users can locate an existing 3D environment 38 through a variety of means including the search engine 37, links to external websites or emails. Elements of a specific 3D environment 38 can be cut-and-pasted into a different one. Links to existing 3D environments 38 may be provided by other users, e.g. fellow consumers or professionals (for example real-estate agents).
  • a 3D environment 38 may also be acquired through other sources of 3D data, such as traditional 3D active scanners, stereoscopic rigs, or data captured from 3D software (e.g. an OpenGL driver) or hardware.
  • Referring to FIG. 4, there is shown a flow diagram of an illustrative example of a process 300 used by the 3D reconstruction module 11 to automatically reconstruct a 3D environment 38 or 3D products 39 from standard visual media.
  • the steps of the process 300 are indicated by blocks 301 to 319.
  • the process 300 starts at block 301 where the 3D reconstruction module 11 is provided visual media data consisting of at least one image, in the form of an unsorted sequence of images, or at least one video.
  • the 3D reconstruction module 11 may be provided with additional information to increase the reconstruction speed, precision or robustness.
  • the additional information may include, for example:
  • EXIF tags, which are automatically written by most digital cameras, typically include camera parameters such as field of view, GPS and compass data, the presence of options such as flash, etc. These parameters may be used as priors by further modules, e.g. to compute an approximate intrinsic or extrinsic matrix (see the sketch below); and
  • user-provided hints (for example, the user could provide measurements such as the ceiling height, and could also click on a wall in a picture to indicate that that wall is facing north).
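  • As a minimal sketch of using the EXIF focal length as a prior for an approximate intrinsic matrix (Python with Pillow; the full-frame 36 mm sensor width and centered principal point are placeholder assumptions, normally looked up per camera model):

```python
import numpy as np
from PIL import Image
from PIL.ExifTags import TAGS


def approximate_intrinsics(path, sensor_width_mm=36.0):
    img = Image.open(path)
    exif = {TAGS.get(tag, tag): value for tag, value in img.getexif().items()}
    focal_mm = float(exif["FocalLength"])    # raises KeyError if the tag is absent
    width, height = img.size
    fx = focal_mm / sensor_width_mm * width  # focal length converted to pixels
    cx, cy = width / 2.0, height / 2.0       # assume the principal point is centered
    return np.array([[fx, 0.0, cx],
                     [0.0, fx, cy],
                     [0.0, 0.0, 1.0]])
```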
  • the process 300 may perform, at block 303, distortion and blur correction.
  • the correction may be performed using one or more algorithms.
  • the distortion correction may be performed by using the EXIF tags directly.
  • an optimization technique may be used to maximize straight edges.
  • the user could provide pictures of a calibration rig (e.g. a grid) from which the distortion parameters may be robustly estimated.
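  • A minimal sketch of this calibration-rig approach with OpenCV (the 9x6 chessboard pattern and file names are assumptions):

```python
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the assumed grid
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in ["rig1.jpg", "rig2.jpg", "rig3.jpg"]:  # hypothetical rig pictures
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the intrinsic matrix; dist holds the distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
undistorted = cv2.undistort(cv2.imread("rig1.jpg"), K, dist)
```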
  • a database of known lens parameters could also be consulted and updated on demand by first identifying the camera and lens name.
  • Motion blur due to camera shake may be reduced using, for example, the technique described in the paper entitled "Removing camera shake from a single image" by Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis and William T. Freeman [1].
  • This step may be omitted if the distortion and blur are considered negligible, e.g. by using a tripod and corrective lenses.
  • the result of this step may take the form of a corrected image and/or a transformation function that maps input pixels to output pixels and vice-versa.
  • an automatic or user-provided 2D segmentation step may be performed, which may make the feature matching of upcoming block 305 more robust, e.g. to background clutter.
  • the result of this segmentation process is a list of layers (optionally with partial transparency). Each layer can then be fed separately to the feature matching of block 305, to produce points and region descriptors for each layer, eliminating the impact of background clutter.
  • a number of automatic segmentation and low-level object detection algorithms could be used, such as the algorithms described in "Background cutout with automatic object discovery" by David Liu and Tsuhan Chen [2] and in "TextonBoost for Image Understanding: Multi-Class Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context" by J. Shotton, J. Winn, C. Rother, and A. Criminisi [3].
  • the user may also provide additional information to assist the segmentation algorithm, e.g. draw simple marks or an approximate box over different layers, which may then be used as input for a semi-interactive algorithm, such as described in, for example, "Lazy Snapping" by Yin Li, Jian Sun, Chi-Keung Tang, and Heung-Yeung Shum [4] and "GrabCut — Interactive Foreground Extraction using Iterated Graph Cuts" by Rother, Kolmogorov and Blake [5].
  • feature matching is applied, which may use one or more types of features to identify correspondence points and edges in order to estimate camera calibration parameters. Pairs of images are considered in an order, e.g. provided by the user, by their time EXIF tag, or by sorting them according to the similarity of their images or detected features.
  • the feature matching process then returns a list of probable features, expressed in pixel and/or relative 3D space, a list of camera matrices (possibly including intrinsic and extrinsic matrices) and a probability that the two images were matched properly.
  • bundle adjustment or another type of global optimization algorithm may be performed in order to combine more than two matched images, to further refine the 3D reconstruction and reduce the reconstruction error.
  • the frequency of this process may vary to balance between quality and reconstruction speed.
  • the bundle adjustment may be performed after every pair of images has been matched, every few images, or once at the end. Several successive passes of feature matching and bundle adjustment can be performed to find more feature correspondences and to match pairs of images that may otherwise have been impossible to match without knowing the approximate camera pose.
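  • A minimal sketch of such a bundle adjustment step (SciPy; the six-parameter camera model, shared focal length and variable names are illustrative assumptions, not the specific optimizer of the process 300):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def reproject(points, cam, focal):
    """Pinhole projection; cam packs 3 axis-angle rotation + 3 translation parameters."""
    cam_pts = Rotation.from_rotvec(cam[:3]).apply(points) + cam[3:6]
    return focal * cam_pts[:, :2] / cam_pts[:, 2:3]


def residuals(x, n_cams, cam_idx, pt_idx, observed_2d, focal):
    """Reprojection error over all observations; x stacks cameras then 3D points."""
    cams = x[:n_cams * 6].reshape(n_cams, 6)
    pts = x[n_cams * 6:].reshape(-1, 3)
    proj = np.vstack([reproject(pts[j:j + 1], cams[i], focal)
                      for i, j in zip(cam_idx, pt_idx)])
    return (proj - observed_2d).ravel()

# x0 stacks the initial camera parameters and triangulated points from the pairwise
# matching step; least_squares refines them jointly to minimize reprojection error:
# fit = least_squares(residuals, x0, args=(n_cams, cam_idx, pt_idx, observed_2d, focal))
```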
  • a deeper reconstruction, e.g. dense stereo reconstruction, may then be applied to produce many more feature matches.
  • the algorithm described in "Accurate, Dense, and Robust Multi-View Stereopsis" by Yasutaka Furukawa and Jean Ponce [6] may be used.
  • the application of such an algorithm may result in a dense reconstruction in an intermediate output form such as depth or disparity maps, or a dense patch cloud.
  • a 3D segmentation may be performed.
  • the end result is one or more pieces of segmented 3D data (e.g. patches, or meshes if the operation was performed after the mesh generation of block 308) that correspond to semantically different parts, e.g. a floor, walls, ceiling and individual pieces of furniture.
  • recognition and isolation of parts is first attempted using the recognition module. Parts that are not recognized may then be inputted into a general-purpose 3D segmentation algorithm such as, for example, the one described in "Partitioning 3D Surface Meshes Using Watershed Segmentation” by Alan P. Mangan and Ross T. Whitaker [7].
  • the mesh generation may vary in implementation according to the type of data output by the dense stereo reconstruction.
  • a point cloud is generated by the dense stereo reconstruction algorithm and a Poisson surface reconstruction is performed to produce an optimal surface.
  • An optional sub-step would be to fill holes by examining the nearby geometry and local curvature, matching some standard shapes such as piece-wise planar intersections and conics, and filling up the missing geometry accordingly. This would allow, for example, pipe sections to be rendered perfectly cylindrical.
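  • A minimal sketch of this mesh generation step using Open3D's Poisson reconstruction ("cloud.ply" is a hypothetical dense point cloud from the previous block):

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("cloud.ply")
pcd.estimate_normals()  # Poisson reconstruction needs oriented normals
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
# Low-density vertices correspond to holes or missing data and could be flagged
# or handled by the hole-filling sub-step described above.
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```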
  • texture coordinates may be generated automatically, using, for example, Bruno Lévy's "Least Squares Conformal Maps for Automatic Texture Atlas Generation" [8]. Colors and textures may be interpolated from source points or projected back from source images. Holes in the texture space (e.g. due to missing data, such as part of the surface of the floor hidden below a carpet) may be filled (e.g. using an inpainting technique such as described, for example, in "Image Inpainting" by M. Bertalmío, G. Sapiro, V. Caselles and C. Ballester [9]) or may be flagged (e.g. in a high-contrast color) to indicate to the user the locations where more pictures should be taken.
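  • A minimal sketch of filling texture holes with OpenCV inpainting (file names are placeholders; the mask is assumed non-zero where texture data is missing):

```python
import cv2

texture = cv2.imread("texture_atlas.png")
mask = cv2.imread("holes_mask.png", cv2.IMREAD_GRAYSCALE)
filled = cv2.inpaint(texture, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("texture_filled.png", filled)
```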
  • the illumination, albedo and reflectance can then be estimated by proceeding with lighting estimation. For example, this may be achieved using the algorithm described in "Sparse Lumigraph Relighting by Illumination and Reflectance Estimation from Multi-View Images” by Tianli Yu, Hongcheng Wang, Narendra Ahuja and Wei-Chao Chen [10].
  • the final result is a mesh with albedo and reflectance textures that may be used directly by the online shopping and 3D reconstruction system 130, or by other 3D applications, to reproduce a 3D environment 38 or 3D products 39. This step may optionally produce a detailed bump map that can further increase the realism of the reconstructed object.
  • additional data may be provided by the user, at block 311, to a 3D alignment process at block 312.
  • This process may also infer the alignment directly from the reconstructed 3D data or from the input images.
  • a rule-based approach could combine simple rules such as assuming that the ceiling generally points up in pictures, that the ceiling lies on the smallest axis when measured against the width or height of a room, or that the ceiling has little or no furniture or fixtures on it, and conversely that floors have furniture on them.
  • the scale may be probabilistically inferred from furniture size, door size, beds (since four standard sizes exist in the USA and they can be readily recognized from their proportions), etc.
  • These rules can be learnt probabilistically from existing rooms, e.g. through Bayesian or Markovian techniques, allowing the 3D reconstruction process 300 to get better at guessing dimensions and orientation.
  • the north-facing wall may be assumed to be the wall directly in front of the first picture, or could be inferred by comparing the brightness of different windows and by estimating the sun's general direction according to the time of day at which the picture was taken.
  • the user may also provide more information, such as georeferenced coordinates of specific points or additional distance measures, to further increase the reconstruction precision and to allow the environment to be positioned precisely relative to other georeferenced environments or objects.
  • the 3D environment 38 and/or 3D products 39 are then produced at blocks 313 and 314, respectively.
  • the 3D products 39 may then be provided to block 315, where a recognition engine may use one or more image-based or geometry-based algorithms to produce partial or complete matches and/or to identify significant hierarchical parts and variations. For example, two algorithms may be used: one using 2D images as input, the other using a 3D mesh.
  • the recognition may be performed directly from 2D images using automatic image recognition algorithms such as, for example, Fabien Scalzo's "Learned Hierarchical models” [11] or algorithms designed for the Pascal VOC challenge [12].
  • Accessing, at block 316, the 3D products 39 in the 3D content database 138 and given enough sample images, the algorithm automatically learns a hierarchical image-based recognition model for each of the 3D products 39 in the 3D content database 138. Given sets of sample images in different poses, the algorithm can identify 3D products 39 from the 3D content database 138 in a given image, returning, at block 319, a match probability and, optionally, at block 318, a 3D pose.
  • Running a recognition algorithm on each input image may increase the robustness and precision. For example, if three images agree that it is highly probable that a specific chair is present with similar poses, this information may be combined (e.g. minimizing error using least-squares methods) to identify a more precise 3D pose. To identify the precise 3D pose, a pose estimation may then be performed using the random sample consensus (RANSAC) algorithm, in a manner analogous to camera pose estimation. Provided that enough representative objects have been learnt, the algorithm described can reconstruct a complex 3D scene using a single image.
  • RANSAC: random sample consensus
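  • A minimal sketch of such a RANSAC pose estimation from 2D-3D correspondences (OpenCV; the correspondences and intrinsics are placeholder data standing in for the recognition step's output):

```python
import cv2
import numpy as np

# model_pts: points on the recognized product's 3D model; image_pts: matching detections
model_pts = np.random.rand(12, 3).astype(np.float32)          # placeholder data
image_pts = (np.random.rand(12, 2) * 500).astype(np.float32)  # placeholder data
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(model_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the product in camera space
```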
  • the recognition engine may also perform a 3D recognition using a 3D shape matching algorithm such as, for example, the algorithm described in "Symmetry descriptors and 3D shape matching" by Kazhdan, M., Funkhouser, T. and Rusinkiewicz, S. [13], using as shapes the 3D products 39 from the 3D content database 138.
  • Even if the recognition engine is not able to return exact matches, as long as it produces probabilistic matches that are more frequently right than wrong, the match probability provides valuable information for some types of applications.
  • the most probable matches could be used as a criterion for choosing advertisements or to populate a list of products that are likely to interest the user, e.g. when he or she fills up a room.
  • the advertisements selected this way would likely be more focused on their target market than completely random selections, and therefore have more value for advertisers.
  • Referring to FIG. 5, there is shown a flow diagram of an illustrative example of a feature matching process 400 that may be used at block 305 of Figure 4. The steps of the process 400 are indicated by blocks 401 to 410.
  • the process 400 starts by detecting common feature descriptors such as points and regions (e.g. using local invariant feature algorithms such as SIFT and MSER), at block 401, and edges, at block 402.
  • putative features are detected and then matched, e.g. through cross-correlation using various distance functions. This process may be optimized using high-dimensional acceleration structures such as K-D trees. Putative edges can be further matched using approximate parallel and vanishing-point constraints. At block 404, through a projection process, matched edges can produce intersection points that may not be visible in the original images (e.g. occluded or outside of the original image) but may provide additional precision for the outlier elimination process of block 406.
  • the putative feature points obtained, at block 405, will generally include a large number of outliers (i.e. false matches), which may be eliminated, at block 406, by applying an outlier elimination process.
  • This process may be implemented by minimizing the fundamental matrix estimation error, e.g. using a RANSAC algorithm or more involved algorithms such as the degenerate sample consensus (DEGENSAC) or quasi-degenerate sample consensus (QDEGSAC) algorithms, for increased robustness to (quasi-)planar scenes.
  • the process 400 may proceed back to block 403 for a new pass of feature matching, taking into consideration the approximate epipolar constraint to find new matches and eliminate incorrectly matched but otherwise similar features. This loop may occur until convergence, or until a predetermined number of iterations or a predetermined time period has elapsed. If the error is still too large, the matching process may have insufficient or incorrect input data, e.g. if the two images have no points in common. This specific image pair can be ignored accordingly, since the individual images are likely to match others if the user has taken enough pictures covering the entire object or environment of interest.
  • a camera estimation process estimates the relative camera position and orientation corresponding to the second image. Details of the process may be found, for example, in "Multiple View Geometry in Computer Vision", 2nd edition, by Richard Hartley and Andrew Zisserman [14]. Other types of features, such as detected curve segments, may be used to produce more correspondences that could increase the quality of reconstruction. If successful, the final output of the feature matching process 400 is a list of probable features, at block 408, expressed in pixel and/or relative 3D space, a list of camera matrices, at block 409 (possibly including intrinsic and extrinsic matrices), and a probability, at block 410, that the two images were matched properly.
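  • A minimal sketch of this pairwise matching with OpenCV (SIFT features, K-D-tree-accelerated matching with a ratio test, RANSAC-based outlier elimination via the fundamental matrix, then relative pose recovery; the intrinsic matrix K is an assumed prior, e.g. from the EXIF step):

```python
import cv2
import numpy as np

img1 = cv2.imread("a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical image pair
img2 = cv2.imread("b.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.FlannBasedMatcher()          # K-D tree acceleration structure
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]  # Lowe's ratio test

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)  # outlier elimination
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # relative camera rotation/translation
```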
  • If the 3D reconstruction process 300 executed by the 3D reconstruction module 11 is expected to take a noticeable time to perform (e.g. more than one second), the user may be notified of its progress or completion, e.g. by email, using a progress bar or through an audio notification.
  • a visual preview may be shown to the user as the 3D reconstruction process 300 progresses. This visual preview may be displayed in a combination of 2D (e.g. top or elevation view), in 3D or using a stereoscopic display.
  • the 3D reconstruction process 300 may also be done in-line, while recording a video or taking pictures, and the online shopping and 3D reconstruction system 130 may notify the user once enough data has been recorded and analyzed.
  • the 3D reconstruction process 300 may be implemented in an iterative approach wherein each iteration may increase the reconstruction precision. For example, this may allow users to get a fast, draft-quality reconstruction in a few seconds, and to automatically update to a high-precision reconstruction at a later time.
  • the illustrative embodiment disclosed herein describes the 3D reconstruction process 300 operations as sequential. It is to be understood that many of the operations may be optional, may be run more than once or in parallel (including on more than one computing device, such as CPUs and GPUs), or may be ordered differently.
  • the 3D segmentation process at block 307B may be performed before or after the mesh generation at block 308, and after the recognition engine at block 315.
  • new optional operations may be inserted in between.
  • the 3D reconstruction process 300 may be integrated inside a device (e.g. embedded inside a digital camera or cell phone), or exposed as a web service, as a peer-to-peer or stand-alone application, or as a plug-in to a CAD application.
  • Users can select one or more 3D products 39 (e.g. wall, furniture, accessory) through several mechanisms, e.g. by clicking its surface in a 2D or 3D view, by searching by name, or by selecting scenegraph nodes in a tree-view.
  • 3D products 39 can be assigned manually by the user to a 'user display layer', and the display layers can be enabled/disabled to show/hide many objects.
  • a 'category display layer' allows users to globally hide specific 3D products 39 such as furniture, accessories, etc.
  • An 'isolate selected' mode is also available, with several possible settings: 'isolate selected' and 'isolate selected and neighbors'. 'Isolate selected' only shows the selected object, e.g. a wall. 'Isolate selected and neighbors' shows neighboring sections of connected walls, ceiling and floor, to allow the user to visualize the selected wall in context.
  • otherwise, all 3D products 39, including furniture, are shown.
  • Walls, ceiling and floor may be modified in similar ways and will be described globally as "room surfaces". Most of these surfaces are roughly flat and can be described with a closed polygon.
  • the surfaces can also be curved in 3D space and described with a parametric surface representation (e.g. non-uniform rational B-splines (NURBS)). Any representation may be used as long as positions in 3D space can be picked and the representation can be subdivided using constructive solid geometry (CSG).
  • NURBS: non-uniform rational B-splines
  • Furniture (e.g. 3D products 39) may be found and selected by a user in many ways, e.g. from lists, by drag-and-dropping from other rooms, and through search queries.
  • Furniture can generally be personalized.
  • the material of most 3D products 39 can be overridden by the user to simulate adding a coat of paint or covering it with a fabric.
  • a chair may contain a seat, one or several legs, a back, arm rests and a cushion.
  • 3D products 39 can be positioned and oriented using visual manipulators similar to those of Maya™ or 3ds Max™.
  • 3D products 39 such as pieces of furniture, can "snap" onto other specific 3D products 39.
  • the 3D products 39 may snap in specific locations and/or re-orient themselves. For example, a bookshelf snaps to the closest nearby wall (so its back touches it) and snaps vertically onto the floor. Accessories snap vertically to stand on the floor, on a piece of furniture, etc.
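  • A minimal sketch of such a snapping rule (pure geometry; the representation of walls as 2D segments is an assumption):

```python
import numpy as np

def snap_to_wall(position, walls):
    """Return the closest point on any wall segment and that wall's orientation."""
    best, best_d = None, np.inf
    for a, b in walls:
        a, b = np.asarray(a, float), np.asarray(b, float)
        t = np.clip(np.dot(position - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
        closest = a + t * (b - a)           # nearest point on this segment
        d = np.linalg.norm(position - closest)
        if d < best_d:
            best, best_d = (closest, np.arctan2(*(b - a)[::-1])), d
    return best

walls = [((0, 0), (5, 0)), ((5, 0), (5, 4))]              # hypothetical room outline
point, angle = snap_to_wall(np.array([4.2, 0.3]), walls)  # the bookshelf's back goes here
```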
  • Some 3D products 39 can be modeled to include light sources.
  • the light bulb in a table lamp can be modeled as a point light source or spot light source parented under the lamp.
  • the illumination of the sun, the moon or other environment lighting can be simulated.
  • the rendering quality can be adjusted and multiple renderers can be used to produce more realistic results.
  • Users can search for 3D objects 39 using a simple text string (since each product has associated text strings such as, for example, a name, a description, an author name, etc.), or may use more complex search criteria types such as desired colors, dimensions, desired rating (derived from user reviews), manufacturer name, etc.
  • Each search criteria type may have one or more special visual interfaces exposed through the Ul 27. For example, a color histogram could be specified using a color wheel or a color histogram curve.
  • a color histogram may be computed from a user-provided picture. Color histograms may be converted to and from textual color names using a look-up table, possibly in a language-specific way.
  • Queries are handled using an extended version of SQL. Parameters may come directly from the 3D content database 138 or be computed by the inference engine 43. Special comparison operators may exist for some criteria types. For example, colors may be compared using a histogram difference in hue-saturation-value (HSV) color space, allowing different objects (e.g. a lamp and a carpet) to be compared color-wise.
  • HSV: hue-saturation-value
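  • A minimal sketch of such a color-wise comparison with OpenCV HSV histograms (file names and bin counts are placeholders):

```python
import cv2

def hs_histogram(path, bins=(30, 32)):
    """Hue-saturation histogram of an image, normalized for comparison."""
    hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

similarity = cv2.compareHist(hs_histogram("lamp.jpg"),
                             hs_histogram("carpet.jpg"),
                             cv2.HISTCMP_CORREL)  # 1.0 means identical color profiles
```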
  • the search engine 37 can be combined with the inference engine 43.
  • the inference engine 43 uses inference rules 44 to discover new patterns in the existing data of the 3D content database 138.
  • the inference rules 44 may be hard-coded (e.g. comparisons based on color harmony theory [15] or "Color Harmonization" by Cohen-Or, D., Sorkine, O., Gal, R., Leyvand, T. and Xu, Y. [16]), data-driven (e.g. inputted textually by a user or through a graphical user interface) or learnt from existing data.
  • the inference rules 44 may take many forms including a standard programming expression, a regular expression, or a Bayesian or Markovian network.
  • Bayesian or Markovian networks may be learnt from patterns found in the 3D content database 138 to produce highly precise findings with little or no human intervention. For example, some consumers may both fill in a census that asks them whether they have children or not, and reconstruct their home. Assuming a sufficient number of samples, a Bayesian or Markovian network can identify that the presence of a child's bed or a child's toy (detected by the recognition engine 12) makes it very probable that a child lives in the house. By testing relationships between random attributes in a background process, the online shopping and 3D reconstruction system 130 can learn a wide range of causal relationships with little or no human intervention.
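  • A minimal sketch of learning such a rule by counting (the samples are invented for illustration; each pairs "child's bed detected by the recognition engine 12" with the census answer "a child lives in the home"):

```python
samples = [(True, True), (True, True), (True, False),
           (False, False), (False, False), (False, True)]

# Conditional probability estimated from the samples where a child's bed was detected.
with_bed = [child for bed, child in samples if bed]
p_child_given_bed = sum(with_bed) / len(with_bed)
print(f"P(child in home | child's bed detected) = {p_child_given_bed:.2f}")
```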
  • the inference engine 43 and the recognition engine 12 may run in parallel, in the background or in a low-priority thread. Information generated by these engines may be cached to increase performance.
  • Advertising can take the form of any number of media, including text, images, audio, animations, video, video games and promotional items. Advertising may be passive or interactive. The advertisements may be distributed in a variety of ways, including inside the online shopping and 3D reconstruction system 130 website, in emails and newsletters, in third-party websites, in regular mail or on interactive displays in regular stores.
  • Examples of advertising may include:
  • Advertising may be charged to advertisers in a variety of ways, including per click, per thousand impressions or per total broadcast time. Advertisers may also be charged when a user performs a task, e.g. answers a questionnaire for a chance to win some prize. Advertisements may be selected according to a scheduling algorithm, e.g. highest bidder or round-robin. Advertisement placement may also be optimized using machine learning techniques.
  • an advertiser may first proceed with one or more search queries that will match the target market. These search queries can refer to any attribute stored in the user database 136, the 3D content database 138 or resulting from an inference rule 44. To respect user privacy, queries can be filtered, e.g. so that criteria like names, street address and phone numbers are not accepted, and returned search results must match a preset minimum of matches (e.g. 10000). The advertiser may get a preview of the search results, e.g. 10 typical results in random order.
  • user personal information such as country of residence, age, gender (for example, when searching for a merchant, the online shopping and 3D reconstruction system 130 may only return merchants that are operating in the same country as the user);
  • usage history e.g. purchase, search, view, review history (for example, an advertiser may want to only advertise to consumers who have purchased or searched for a vacuum cleaner in the past year);
  • criteria specified by the user, e.g. colors, patterns and desired price range;
  • explicit consumer preferences, e.g. the consumer may request recommendations for specific types of objects and may provide additional constraints and rules, e.g. allergies to specific materials or dislike of specific colors, etc.; these may be extracted through questionnaires or, in some cases, through learning and classification techniques;
  • time of year e.g. Christmas and Halloween decorations, or BBQ during spring.
  • an advertiser may target women aged 21 to 30 with young children, in specific geographies, during specific time periods (e.g. afternoon in local time), that used specific search queries.
  • the advertiser may also specify a maximum advertising budget for the campaign, e.g. $1000 per day.
  • the online shopping and 3D reconstruction system 130 may allow advertisers to specify several advertisements to test concurrently, and automatically adopt the most popular, e.g. after each has been shown a pre-determined number of times. Learning techniques may be applied to automatically refine the criteria used to define the target market, e.g. by letting the advertiser select sample search results which are of particular interest or which are not interesting at all.
  • the online shopping and 3D reconstruction system 130 may also support third-party advertising solutions, e.g. by automatically converting the available information into a suitable format, such as text tags stored in a temporary HTML document for Google AdSense™.
  • This conversion may be done using any number of algorithms, for example by simply concatenating the names of 3D products 39 present in a 3D environment 38, i.e. "chair, wooden table, blue carpet".
  • the text strings could be sorted or repeated to attach a higher importance, e.g. using a distinctiveness criterion (lower-frequency items first) or according to the importance or probability inferred from the environment.
  • more important text tags may be converted into HTML headers, repeated or shown in meta tags, while less important text tags could be converted into regular text or written in a smaller font.
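  • A minimal sketch of such a conversion (the corpus frequencies and product names are placeholder data):

```python
corpus_freq = {"chair": 9500, "wooden table": 2100, "blue carpet": 340}
room_products = ["chair", "blue carpet", "wooden table"]

# Rarer, more distinctive items first, as suggested above.
tags = sorted(room_products, key=lambda name: corpus_freq.get(name, 0))
html = "<h1>{}</h1><p>{}</p>".format(tags[0], ", ".join(tags[1:]))
# -> "<h1>blue carpet</h1><p>wooden table, chair</p>"
```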
  • the search engine 37 can return any number of outputs, including possible combinations of: advertisements;
  • the output can be presented in a variety of ways, including: a table to compare similar products; and
  • Advertisers and merchants may request usage reports and may be granted access to information contained in the user 136 and/or 3D content 138 databases, in its entirety or in some condensed format. To respect consumer privacy, the information may be filtered, e.g. to remove the names, address and credit card information. Reports may be accessed on demand or at specific times, e.g. generated once a day. An advertiser can specify which fields and filter rules he may be interested in. For example, these filter rules could be converted into appropriate SQL queries.
  • Usage statistics may also be grouped to produce condensed reports. For example, this could allow an advertiser or a merchant to find the most popular products in a specific category, to compute the average purchase price for wooden furniture in a given time period or geography, or to find products that are frequently searched for by a specific segment of the consumer market. Like any features of the online shopping and 3D reconstruction system 130, any aspect of this operation could be offered at additional charge or restricted to a subset of user classes.
  • a consumer first adds 3D products 39, e.g. by searching, by asking for a recommendation, by following a link for an advertised product, or by cut-and-pasting from other 3D environments 38 or from lists compiled by other users. Once a user is satisfied, he or she simply clicks an order button.
  • the order menu allows a consumer to compare between several merchants for each product, showing information such as price, applicable taxes, shipping cost, total cost and estimated delivery time. If a merchant is present more than once, the consumer may combine two products in one order, e.g. to reduce shipping costs.
  • a recommend button finds a compromise in total price and delivery time while reducing risks by choosing local and well-rated merchants.
  • the online shopping and 3D reconstruction system 130 may distinguish between three major types of merchants: external merchants, partner merchants and consumer merchants.
  • An external merchant is a merchant that operates its own official website outside of the online shopping and 3D reconstruction system 130 and typically has an affiliate program. For example, an external merchant may offer a commission, for example 5%, on each sale that is forwarded through the online shopping and 3D reconstruction system 130.
  • the plug-in on the user's personal computer 112 may operate the transaction automatically by sending the appropriate HTTP requests directly, or may open a web page for the user to review and confirm the order transaction. In either case, personal, shipping and payment information may be filled in automatically from the consumer data 41 of the user database 136.
  • external merchants may or may not manage their own merchant data 40 within the user database 136.
  • the reconstruction of the 3D products 39 and the data entry may be handled by some other party, such as the online shopping and 3D reconstruction system 130 staff.
  • a partner merchant is a merchant that manages its own virtual store within the online shopping and 3D reconstruction system 130.
  • the partner merchant uses the 3D reconstruction module 11 to reconstruct goods for sale.
  • This type of merchant also keeps 3D products 39 descriptions, pricing and availability up-to-date.
  • the online shopping and 3D reconstruction system 130 processes the payment and notifies the merchant, e.g. through email, phone, SMS or other means.
  • the third common type of merchant is a consumer that may want to sell a second hand product.
  • a consumer may become a merchant by selecting 3D objects 39 that he or she wants to sell.
  • the consumer merchant may ask the online shopping and 3D reconstruction system 130 for a comparison with similar 3D products 39 available for sale, to get a better feel for the street price to be expected. He or she may then select "sell" to set the desired starting price, and indicate whether the sale should operate at a fixed price or as an auction, the time limit, the acceptable buyer geographies, etc.
  • the plug-in that is usually installed on the user's personal computer 112 may be integrated in external websites, allowing these external websites to embed online shopping and 3D reconstruction system 130 content in web pages of their choice. This would allow the online shopping and 3D reconstruction system 130 content to be visible on other websites, possibly with extra buttons such as "buy now" and "see in your home", that may re-direct to appropriate pages of the online shopping and 3D reconstruction system 130.
  • the process 300 of Figure 4, used by the 3D reconstruction module 11, may be further used in third-party applications as a stand-alone application, or may also be used to provide further services within the online shopping and 3D reconstruction system 130.
  • Sample applications are listed below, though it is to be understood that this list is not exhaustive and that other applications may also incorporate the 3D reconstruction process 300 of Figure 4.
  • Printing applications, e.g. printing on regular paper or as 3D models, to make custom doll houses.
  • Simulation applications: e.g. by integrating a physics or fluid simulation engine, it may be possible to simulate an earthquake or a fire.
  • Results may be visualized and recommendations may be shown to the user, e.g. to use fire-proof drapes or to install an emergency staircase.
  • Results may also be exported to other applications and shared with others, e.g. government officials.
  • Moving applications with automatic re-layout, e.g. through relaxation algorithms (a minimal sketch follows this list), allowing users to easily see and move objects from one room to another.
  • Objects that are not expected to fit in the new room may be automatically listed and sold on a classified-ads or auction site, either on the online shopping and 3D reconstruction system 130 or outside it (e.g. eBayTM).
  • Environment recognition applications, e.g. the 3D recognition can be extended to an entire environment. For example, if a toilet is detected, the chance increases that toilet paper or other related articles are nearby; such rules could be learnt from models tagged by humans (a toy re-ranking sketch follows this list). It may also be possible to infer the manufacturer, the model number, the year a piece of furniture was made (useful for antiques) and its style (e.g. modern, rococo). Recommendations could match a painting with a room, or authentic accessories with an antique, drawing on products available from partner merchants (registered with the online shopping and 3D reconstruction system 130) or from external services (e.g. eBayTM).
  • Environment mapping applications, e.g. finding specific objects (e.g. lost keys), helping a robot navigate, advertising real estate for sale or for rent, teleconference/3D chat, child surveillance.
  • Inspection applications, e.g. by applying 3D reconstruction at different time intervals, thus enabling comparison over time and the identification of structural problems, e.g. new cracks that could indicate structural damage, damage assessment for insurance claims, etc. (a change-detection sketch follows this list).
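
The sketches below illustrate some of the mechanisms described in the list above. They are minimal illustrations only: all class names, endpoints, weights, factors and thresholds are assumptions introduced for the examples, not details taken from this application. First, the recommend button's compromise between total price, delivery time and merchant risk can be approximated by a weighted score over the competing offers:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    total_cost: float      # price + taxes + shipping
    delivery_days: int
    rating: float          # merchant rating, 0.0 .. 5.0
    is_local: bool

def recommend(offers, cost_weight=1.0, delay_weight=2.0):
    """Return the offer with the best (lowest-score) compromise between
    total cost, delivery time and risk (merchant rating and locality)."""
    def score(o):
        risk_penalty = (5.0 - o.rating) * 5.0      # poorly rated => riskier
        locality_bonus = -10.0 if o.is_local else 0.0
        return (cost_weight * o.total_cost
                + delay_weight * o.delivery_days
                + risk_penalty + locality_bonus)
    return min(offers, key=score)

offers = [Offer("ACME", 120.0, 2, 4.8, True),
          Offer("GlobalDeals", 95.0, 14, 3.1, False)]
print(recommend(offers).merchant)  # ACME: dearer, but faster and lower risk
```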
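The automated checkout against an external merchant reduces to posting the order fields, pre-filled from the consumer data 41, to the merchant's order endpoint. The URL, field names and affiliate identifier below are hypothetical; a real integration would follow the merchant's documented affiliate API:

```python
import requests

def place_affiliate_order(endpoint, product_id, consumer):
    """Post an order with personal, shipping and payment fields filled in
    automatically from the consumer record (all field names assumed)."""
    payload = {
        "product_id": product_id,
        "name": consumer["name"],
        "shipping_address": consumer["address"],
        "payment_token": consumer["payment_token"],
        "affiliate_id": "3d-shopping-system",  # so the commission is credited
    }
    response = requests.post(endpoint, data=payload, timeout=30)
    response.raise_for_status()                # fail loudly on a rejected order
    return response.json()                     # e.g. an order confirmation
```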
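The street-price comparison offered to consumer merchants can be as simple as aggregating the prices of comparable listings; the second-hand discount factor here is an assumption for illustration:

```python
import statistics

def suggest_starting_price(comparable_prices, condition_factor=0.9):
    """Suggest an asking price from the sale prices of similar 3D
    products 39, discounted for a second-hand item."""
    if not comparable_prices:
        raise ValueError("no comparable listings found")
    return round(statistics.median(comparable_prices) * condition_factor, 2)

print(suggest_starting_price([149.0, 135.0, 160.0, 142.5]))  # 131.18
```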
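The relaxation algorithms mentioned for automatic re-layout can be sketched as an iterative scheme in which overlapping furniture footprints repel each other until a collision-free arrangement inside the room emerges. This 2D version deliberately ignores walls, doors and object orientation:

```python
def relax_layout(items, room_w, room_h, iterations=200, step=2.0):
    """Nudge overlapping axis-aligned footprints (dicts with x, y, w, h)
    apart a little each iteration, then clamp them inside the room."""
    for _ in range(iterations):
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                a, b = items[i], items[j]
                dx = (a["x"] + a["w"] / 2) - (b["x"] + b["w"] / 2)
                dy = (a["y"] + a["h"] / 2) - (b["y"] + b["h"] / 2)
                overlap_x = (a["w"] + b["w"]) / 2 - abs(dx)
                overlap_y = (a["h"] + b["h"]) / 2 - abs(dy)
                if overlap_x > 0 and overlap_y > 0:   # footprints collide
                    if overlap_x < overlap_y:         # push along the easier axis
                        push = step if dx >= 0 else -step
                        a["x"] += push
                        b["x"] -= push
                    else:
                        push = step if dy >= 0 else -step
                        a["y"] += push
                        b["y"] -= push
        for it in items:                              # keep everything in the room
            it["x"] = min(max(it["x"], 0), room_w - it["w"])
            it["y"] = min(max(it["y"], 0), room_h - it["h"])
    return items

sofa = {"x": 0, "y": 0, "w": 200, "h": 90}
table = {"x": 50, "y": 20, "w": 120, "h": 120}
relax_layout([sofa, table], room_w=400, room_h=300)
```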
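The environment recognition rules learnt from human-tagged models amount to co-occurrence priors: detecting one object shifts the probabilities of the labels considered for its neighbours. The boost values in this toy re-ranking are invented for the example:

```python
CO_OCCURRENCE_BOOST = {
    ("toilet", "toilet_paper"): 3.0,           # a toilet makes toilet paper likelier
    ("antique_cabinet", "rococo_mirror"): 2.0,
}

def rerank(candidates, detected_objects):
    """candidates: {label: base probability} for one unidentified object;
    detected_objects: labels already recognized in the environment."""
    scores = dict(candidates)
    for ctx in detected_objects:
        for (context, target), boost in CO_OCCURRENCE_BOOST.items():
            if ctx == context and target in scores:
                scores[target] *= boost
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}  # renormalize

print(rerank({"toilet_paper": 0.2, "kitchen_roll": 0.3}, ["toilet"]))
```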
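Finally, inspection by reconstructing the same scene at different times reduces to comparing the two point clouds: points in the new scan with no close counterpart in the old one are candidates for new cracks or other damage. This brute-force version assumes the scans are already registered (aligned) and non-empty; in practice a k-d tree would replace the inner loop:

```python
def flag_changes(scan_old, scan_new, threshold=0.02):
    """Return points ((x, y, z) tuples, in metres) of the new scan that
    lie farther than `threshold` from every point of the old scan."""
    def nearest_dist(p, cloud):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                    + (p[2] - q[2]) ** 2) ** 0.5 for q in cloud)
    return [p for p in scan_new if nearest_dist(p, scan_old) > threshold]
```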

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Finance (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • General Physics & Mathematics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present invention relates to a system and method allowing users to carry out activities centred on their environment and home, in particular to automatically create an accurate three-dimensional model of their home, office or other indoor setting from standard photographs or video, to visualize and customize any aspect of the created three-dimensional model, to purchase furniture and accessories within this three-dimensional environment, from works of art to wholesale products, and to share their three-dimensional models and creations, as well as renovation and shopping advice, with friends and the general public.
EP08800261A 2007-08-30 2008-09-02 Système d'achats en ligne et procédé utilisant une reconstruction tridimensionnelle Withdrawn EP2195781A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US93576507P 2007-08-30 2007-08-30
PCT/CA2008/001551 WO2009026726A1 (fr) 2007-08-30 2008-09-02 Système d'achats en ligne et procédé utilisant une reconstruction tridimensionnelle

Publications (1)

Publication Number Publication Date
EP2195781A1 true EP2195781A1 (fr) 2010-06-16

Family

ID=40386635

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08800261A Withdrawn EP2195781A1 (fr) 2007-08-30 2008-09-02 Système d'achats en ligne et procédé utilisant une reconstruction tridimensionnelle

Country Status (3)

Country Link
EP (1) EP2195781A1 (fr)
CA (1) CA2735680A1 (fr)
WO (1) WO2009026726A1 (fr)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646340B2 (en) 2010-04-01 2017-05-09 Microsoft Technology Licensing, Llc Avatar-based virtual dressing room
US9098873B2 (en) 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US8694553B2 (en) 2010-06-07 2014-04-08 Gary Stephen Shuster Creation and use of virtual places
EP2864961A4 (fr) 2012-06-21 2016-03-23 Microsoft Technology Licensing Llc Construction d'avatar au moyen d'une caméra de profondeur
CA2843576A1 (fr) * 2014-02-25 2015-08-25 Evelyn J. Saurette Methode informatisee de ventes immobilieres
US20150306824A1 (en) * 2014-04-25 2015-10-29 Rememborines Inc. System, apparatus and method, for producing a three dimensional printed figurine
CN104077462A (zh) * 2014-07-23 2014-10-01 上海中信信息发展股份有限公司 一种文物数字模型三维标注方法
CN106652005A (zh) * 2016-09-27 2017-05-10 成都盈同乐家信息技术有限公司 一种虚拟家居场景3d设计器及系统
US10699323B1 (en) 2019-08-21 2020-06-30 Capital One Services, Llc Vehicle identification driven by augmented reality (AR)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050015360A (ko) * 2003-08-05 2005-02-21 황후 쓰리디 아바타를 이용한 전자쇼핑몰 시스템 및 쇼핑방법
SE528068C2 (sv) * 2004-08-19 2006-08-22 Jan Erik Solem Med Jsolutions Igenkänning av 3D föremål
WO2006126205A2 (fr) * 2005-05-26 2006-11-30 Vircomzone Ltd. Systemes, utilisations et procedes d'affichage graphique

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2009026726A1 *

Also Published As

Publication number Publication date
WO2009026726A1 (fr) 2009-03-05
CA2735680A1 (fr) 2009-03-05

Similar Documents

Publication Publication Date Title
US11367250B2 (en) Virtual interaction with three-dimensional indoor room imagery
US11714518B2 (en) Method and system for virtual real estate tours and virtual shopping
US11244223B2 (en) Online garment design and collaboration system and method
US10235810B2 (en) Augmented reality e-commerce for in-store retail
US9420253B2 (en) Presenting realistic designs of spaces and objects
US10628666B2 (en) Cloud server body scan data system
US7523411B2 (en) Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of object promotion and procurement, and generation of object advertisements
US10922716B2 (en) Creating targeted content based on detected characteristics of an augmented reality scene
US20180121988A1 (en) Product recommendations based on augmented reality viewpoints
WO2009026726A1 (fr) Système d'achats en ligne et procédé utilisant une reconstruction tridimensionnelle
US20210383115A1 (en) Systems and methods for 3d scene augmentation and reconstruction
US7062722B1 (en) Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of promotion and procurement
US11640672B2 (en) Method and system for wireless ultra-low footprint body scanning
US10497053B2 (en) Augmented reality E-commerce
US20170132841A1 (en) Augmented reality e-commerce for home improvement
JP2022531536A (ja) セマンティック融合
US20020085046A1 (en) System and method for providing three-dimensional images, and system and method for providing morphing images
US11948057B2 (en) Online garment design and collaboration system and method
US11335065B2 (en) Method of construction of a computer-generated image and a virtual environment
WO2018182938A1 (fr) Procédé et système de balayage de corps sans fil à ultra faible encombrement
Sawiros et al. Next-gen e-commerce in the metaverse
Nagashree et al. Markerless Augmented Reality Application for Interior Designing
Morin 3D Models for...
Delamore et al. Everything in 3D: developing the fashion digital studio
KR20240096365A (ko) 다수의 사용자가 뉴럴 래디언스 필드 모델을 생성하고 사용할 수 있는 플랫폼

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20100330

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20110927