US20150310662A1 - Procedural authoring - Google Patents
Procedural authoring
- Publication number
- US20150310662A1 (application US 14/737,098)
- Authority
- US
- United States
- Prior art keywords
- model
- person
- images
- true
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
All classifications fall under G—PHYSICS, G06—COMPUTING; CALCULATING OR COUNTING, G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL:
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T11/001—Texturing; Colouring; Generation of texture or colour
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/205—Image-based rendering
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/97—Determining parameters from multiple pictures
- G06T2200/04—Indexing scheme for image data processing or generation involving 3D image data
- G06T2207/20212—Image combination
- G06T2210/36—Level of detail
- G06T2219/2016—Rotation, translation, scaling
Definitions
- browsing experiences related to digital media are composed of images or other visual components of a fixed spatial scale, generally based upon settings associated with an output display screen resolution and/or the amount of screen real estate allocated to a viewing application, e.g., the size of a browser that is displayed on the screen to the user.
- displayed data is typically constrained to a finite or restricted space correlating to a display component (e.g., monitor, LCD, etc.).
- digital media e.g., photography, images, video, etc.
- the use of digital media is increasing, based in part upon decreased size and cost.
- Tags are keywords associated with a piece of content that can describe the content, or indicate a word, phrase, acronym, or the like pertinent to aspects of the content.
- Tags are often generated by a content provider (e.g., a publisher, owner, photographer, etc.) to associate with media content and to give a short description of the content to a recipient. Such description can be useful to quickly determine whether time should be spent reviewing the content, whether it should be saved and reviewed later, or whether it should be discarded, for instance.
- tags, subject lines, and the like have become useful to reduce the time required in perusing the massive amounts of data available remotely and/or locally.
- the subject innovation relates to systems and/or methods that facilitate leveraging a 3D object constructed from 2D imagery to generate a model with real world accurate dimensions, proportions, scaling, etc.
- a content aggregator can collect and combine a plurality of two dimensional (2D) images or content to create a three dimensional (3D) image, wherein such 3D image can be explored (e.g., displaying each image and perspective point) in a virtual environment.
- a model component can extrapolate a true 3D geometric model from the 3D object in which such model can have true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.).
- the model created can be an accurate representation of a real object within the physical real world based in part upon the 2D images depicting or displaying such real object within the physical real world.
- the model component can evaluate a 3D object (e.g., sampling objects or features from the real world based from 2D images associated with such 3D object) in order to create a model.
- Such model can be further analyzed with dimensionality reduction techniques to identify those objects or features that can be reduced to a low-dimensional manifold (e.g., possibly a telephone or a coffee table).
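The dimensionality-reduction analysis above can be sketched in a few lines, assuming principal component analysis (via SVD) as the reduction technique and a synthetic point sample standing in for features extracted from a 3D object; the threshold and sample construction here are illustrative assumptions, not part of the disclosure:

```python
import numpy as np

def intrinsic_dimension(points, variance_threshold=0.95):
    """Estimate how many principal components explain the bulk of the
    variance in a sampled object -- a rough proxy for whether the
    object can be reduced to a low-dimensional manifold."""
    centered = points - points.mean(axis=0)
    # Singular values of the centered cloud give per-component variance.
    singular_values = np.linalg.svd(centered, full_matrices=False)[1]
    variance = singular_values ** 2
    explained = np.cumsum(variance) / variance.sum()
    return int(np.searchsorted(explained, variance_threshold) + 1)

# Synthetic sample: points that live on a 2-D plane embedded in 10-D.
rng = np.random.default_rng(0)
basis = np.zeros((2, 10))
basis[0, 0] = 1.0  # the plane spans two of the ten axes
basis[1, 3] = 1.0
points = rng.normal(size=(200, 2)) @ basis
print(intrinsic_dimension(points))  # prints 2: a low-dimensional manifold
```

An object whose sampled features report a small intrinsic dimension (a telephone, a coffee table) would be a candidate for mapping into the procedural authoring environment.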
- the low-dimensional manifold for an object is ascertained (e.g., a true object)
- that object as well as various associated features can be mapped to a procedural authoring environment.
- various features of the object can be modified (e.g., twisting a knob or some other tools, etc.).
- 3D objects that accurately depict a scene with as much realism as a photograph can now be modified or authored in much the same way as are virtual worlds in, say, a gaming environment, yet with living photographic quality/detail rather than virtual renditions.
- the innovation can also provide for a new way of classification, utilizing automatic tagging.
- methods are provided that facilitate generating a proportional scaled version of a real object from a 3D object.
- FIG. 1 illustrates a block diagram of an exemplary system that facilitates generating a model with true 3D geometry characteristics from a 3D image or object.
- FIG. 2 illustrates a block diagram of an exemplary system that facilitates creating an object from a true 3D geometric model having a low-dimensional manifold.
- FIG. 3 illustrates a block diagram of an exemplary system that facilitates automatically identifying and tagging objects from a true 3D geometric model created from a 3D image or object.
- FIG. 4 illustrates a block diagram of an exemplary system that facilitates utilizing a true object identified from the true 3D geometric model.
- FIG. 5 illustrates a block diagram of exemplary system that facilitates utilizing a display technique and/or a browse technique in accordance with the subject innovation.
- FIG. 6 illustrates a block diagram of an exemplary system that facilitates automatically identifying real world properties and dimensions from a 3D image or object created from 2D content.
- FIG. 7 illustrates an exemplary methodology for providing an object with a low-dimensional manifold from a true 3D geometric model, wherein the object can be modified.
- FIG. 8 illustrates an exemplary methodology that facilitates extrapolating a true 3D geometric model with real-world accurate dimensions and automatically tagging identified objects within such model.
- FIG. 9 illustrates an exemplary networking environment, wherein the novel aspects of the claimed subject matter can be employed.
- FIG. 10 illustrates an exemplary operating environment that can be employed in accordance with the claimed subject matter.
- a component can be a process running on a processor, a processor, an object, an executable, a program, a function, a library, a subroutine, and/or a computer or a combination of software and hardware.
- an application running on a server and the server can be a component.
- One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
- the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
- article of manufacture as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
- computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ).
- a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN).
- FIG. 1 illustrates a system 100 that facilitates generating a model with true 3D geometry characteristics from a 3D image or object.
- the system 100 can include a content aggregator 102 that can collect a plurality of two dimensional (2D) content (e.g., media data, images, video, photographs, metadata, trade cards, any media representing a portion of a physical real world, a picture of an object, a content representing an item, a content depicting an entity, a corporeal object within the real world, etc.) to create a three dimensional (3D) virtual environment (e.g., a 3D environment 104 ) that can be explored (e.g., displaying each image and perspective point).
- the 3D environment 104 can include two or more 2D images each having a specific perspective or point-of-view.
- the 2D images can be aggregated or collected by the content aggregator 102 in order to construct a 3D image or object within the 3D environment 104 , wherein construction or assembly can be based upon each 2D image perspective.
- a model component 106 can extrapolate and create a model having true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.) in which such model can be accurate to the represented 3D image or object representing a portion of a physical real world.
- the true 3D geometric model created by the model component 106 can be further utilized to identify and tag objects (discussed below) or to create low-dimensional manifolds for identified objects (discussed below).
- views can be authentic (e.g., pure views from images) or synthetic (e.g., interpolations between content such as a blend projected onto the 3D model).
- the content aggregator 102 can aggregate a large collection of photos of a place or an object, analyze such photos for similarities, and display such photos in a reconstructed 3D space to create a 3D object, depicting how each photo relates to the next.
- the 3D image or object within the 3D environment 104 that can be explored, navigated, browsed, etc.
- the 3D constructed object can be from any suitable 2D content such as, but not limited to, images, photos, videos (e.g., a still frame of a video, etc.), audio, pictures, etc.
- the collected content can be from various locations (e.g., the Internet, local data, remote data, server, network, wirelessly collected data, etc.). For instance, large collections of content (e.g., gigabytes, etc.) can be accessed quickly (e.g., seconds, etc.) in order to view a scene from virtually any angle or perspective.
- the content aggregator 102 can identify substantially similar content and zoom in to enlarge and focus on a small detail.
- the content aggregator 102 can provide at least one of the following: 1) walk or fly through a scene to see content from various angles; 2) seamlessly zoom in or out of content independent of resolution (e.g., megapixels, gigapixels, etc.); 3) locate where content was captured in relation to other content; 4) locate similar content to currently viewed content; and 5) communicate a collection or a particular view of content to an entity (e.g., user, machine, device, component, etc.).
- a 3D environment can be explored in which the 3D image can be a cube.
- This cube can be created by combining a first image of a first face of the cube (e.g., the perspective is facing the first face of the cube), a second image of a second face of the cube (e.g., the perspective is facing the second face of the cube), a third image of a third face of the cube (e.g., the perspective is facing the third face of the cube), a fourth image of a fourth face of the cube (e.g., the perspective is facing the fourth face of the cube), a fifth image of a fifth face of the cube (e.g., the perspective is facing the fifth face of the cube), and a sixth image of a sixth face of the cube (e.g., the perspective is facing the sixth face of the cube).
- a 3D image of the cube can be created within the 3D environment 104 which can be displayed, viewed, navigated, browsed, and the like.
- each of the images for the cube that are aggregated together can share at least a portion of content (e.g., a first image of the cube is a first face and a portion of a second face also contained in the second image, etc.) or a portion of a perspective of the image.
- the angular gap between images can be less than thirty (30) degrees for 3D registration.
- a statue can include a plurality of images from varying points of view such that the images capture the statue from all sides. These images can be aggregated and aligned to create a 3D object of the statue.
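The angular-gap constraint for 3D registration can be sketched as a simple coverage check, under the assumption (not stated in the source) that each image is summarized by a camera azimuth around the object:

```python
def max_angular_gap(azimuths_deg):
    """Largest gap in degrees between neighbouring camera azimuths
    around an object, including the wrap-around past 360."""
    angles = sorted(a % 360.0 for a in azimuths_deg)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(360.0 - angles[-1] + angles[0])  # wrap-around gap
    return max(gaps)

def covers_for_registration(azimuths_deg, max_gap_deg=30.0):
    """True when every pair of neighbouring views is within the gap
    limit, so adjacent photos share enough content to register in 3D."""
    return max_angular_gap(azimuths_deg) < max_gap_deg

# A photo every 20 degrees around a statue: registrable all the way round.
print(covers_for_registration(range(0, 360, 20)))      # True
# A large hole in coverage leaves views that cannot be registered.
print(covers_for_registration([0, 20, 40, 100, 120]))  # False
```

Real registration would also consider elevation and overlap of image content; this check only captures the "less than thirty degrees between images" rule of thumb.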
- the photographs or images of the cube can be representative of a cube in a physical real world in which the cube has particular attributes such as size, dimensions, proportions, color, weight, physical properties, chemical compositions, etc.
- the model component 106 can evaluate the constructed 3D image or object in order to create a model with real life 3D geometry and attributes.
- Such model generated from the 3D object or image can include accurate dimensions, proportions, scales, lengths, physical properties, surfaces, textures, and the like for the cube in the physical real world.
- the model component 106 can extrapolate a true 3D geometry of the 3D image or object (here the cube) created from the 3D photographs of such cube.
- This true 3D model can be imported into other applications, virtual environments, and the like. Moreover, this extrapolated model can be utilized to identify objects or items (e.g., the cube as a whole, an ancillary object within the photos of the cube, etc.) which can be reduced to a low-dimensional manifold (discussed below).
- system 100 can include any suitable and/or necessary interface component (not shown), which provides various adapters, connectors, channels, communication paths, etc. to integrate the model component 106 into virtually any operating and/or database system(s) and/or with one another.
- the interface component can provide various adapters, connectors, channels, communication paths, etc., that provide for interaction with the content aggregator 102 , the 3D environment 104 , the model component 106 , and any other device and/or component associated with the system 100 .
- FIG. 2 illustrates a system 200 that facilitates creating an object from a true 3D geometric model having a low-dimensional manifold.
- the system 200 can include the model component 106 that can generate a 3D model with true and accurate dimensions to the physical real world in which such 3D model is based upon a 3D object or image constructed from two or more 2D images of an entity (e.g., an item, a person, a landscape, scenery, buildings, objects, animals, devices, goods, etc.) within the physical real world.
- the 3D object or image can be created from two or more 2D content (e.g., images, still frames, portion of video, etc.) based upon their perspectives or point-of-views.
- the content aggregator 102 can collect 2D images related to a particular entity and construct a 3D object within the 3D environment 104 based upon each image's perspective or point-of-view. Such constructed 3D object can be viewed, browsed, navigated, and the like.
- the model component 106 can evaluate the 3D object in order to create a true 3D geometric model of such object or a portion of the object.
- a digital camera can capture a plurality of photographs of a house from various angles in a physical real world. From the collection of photographs, a 3D object can be constructed, wherein a portion of the 3D object is represented by a photograph from a perspective or point-of-view from which the photograph was taken.
- the 3D object can be viewed (e.g., illustrating the 2D content utilized to construct such 3D object of the house), navigated, or browsed.
- a virtual tour can be given within the 3D environment of the 3D image representing the house.
- the house can be represented as a 3D object within the 3D environment constructed from the plurality of photographs taken from the digital camera.
- the 3D object can be evaluated in order to generate a true 3D geometric model of such house.
- the true 3D geometric model can have accurate dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc., wherein accuracy is in comparison to the house in the physical real world.
- the true 3D geometric model can be a computerized replicate of accurate scale and properties of the 3D object or image.
- the system 200 can include an editor component 202 that enables a portion of the true 3D geometric model to be modified.
- the true 3D geometric model can be modified or manipulated in accordance to one's liking.
- the model component 106 can generate the 3D geometric model in which portions of the model are created with a low-dimensional manifold or having a low-dimension.
- the editor component 202 enables a low-dimensional manifold or the low-dimensional object associated with the model to be modified or manipulated to create new objects or modified objects from the originally extracted low-dimensional manifold or object.
- dimensionality reduction can be implemented on the true 3D geometric model in order to reduce a high-dimensionality object to a reduced number of dimensions while maintaining a recognizable representation.
- a 3D object may be constructed from photos of a human face in which a human face can include a high number of dimensions, yet, the human face can be reduced to a lower number of dimensions and still maintain the recognizable traits (e.g., cheeks, eyes, nose, mouth, etc.).
- the system 200 can create a virtual representation of a real object (e.g., content from the physical real world is the basis for the object depicted in the content within a virtual reality).
- objects such as a window, a door, or the like can be identified and reduced to low-dimensional manifolds.
- the editor component 202 can allow such low-dimensional manifolds or identified objects, or the model as a whole to be modified, edited, changed, manipulated, and the like.
- the door can be modified to be a circular door rather than a standard rectangle door.
- the true 3D geometric model can be of a human face, in which the editor component 202 can allow modification. For instance, eyes on the face can be moved closer together or further apart, the shape can be changed, the cheek bones can be exaggerated, the mouth can be scaled to a smaller size, etc.—the face, in general, can be distorted.
- the editor component 202 can employ procedural authoring as in creating a new object based off at least one of the low-dimensional manifold created from the true 3D geometric model, a high-dimensional manifold, a portion of the true 3D geometric model, an object or item identified within the true 3D geometric model, or the true 3D geometric model.
- surface reconstruction can be used to reconstruct 2D manifolds, or surfaces, from disorganized point clouds (e.g., collection of images, collection of 2D content, etc.). For instance, techniques associated with computer vision can be employed. Moreover, once a point cloud has been converted to a parametrized surface, it can be treated as one instance among an ensemble. For example, synths (e.g., 3D objects, 3D images created from 2D content, etc.) of many faces or multiple synths of a set of French doors can form an ensemble to recover latent degrees of freedom (e.g., eyebrows going up and down, or the doors opening and closing, etc.).
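A minimal sketch of recovering a parametrized surface from a disorganized point cloud, assuming a least-squares quadric height field as a stand-in for full surface reconstruction (the paraboloid sample and noise level are illustrative assumptions):

```python
import numpy as np

def fit_quadric_surface(points):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to a disorganized point cloud -- a toy stand-in for the step that
    converts a point cloud into a parametrized surface."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    design = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs

def surface_height(coeffs, x, y):
    """Evaluate the fitted surface at (x, y)."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x * x + e * x * y + f * y * y

# Sample a paraboloid z = x^2 + y^2 with a little noise and recover it.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(300, 2))
z = (xy ** 2).sum(axis=1) + rng.normal(scale=0.01, size=300)
cloud = np.column_stack([xy, z])
coeffs = fit_quadric_surface(cloud)
print(round(surface_height(coeffs, 0.5, 0.5), 2))  # prints 0.5
```

Once a cloud has been converted to such a parametrized surface, each fit can be treated as one instance in an ensemble, as the passage above describes.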
- Dimensionality reduction can also be used to recover the effects of changing time of day and weather on a 3D object or image, say for instance, the Lincoln Memorial, given a plurality of images of the Lincoln Memorial aggregated or synthed together.
- a large ensemble of synths with respect to such surface variations can be used with the substantially similar dimensionality reduction techniques in order to identify common materials and their properties in general under variable lighting and environmental conditions.
- the system 200 can further include a data store 204 that can include any suitable data related to the content aggregator 102 , the 3D environment 104 , the model component 106 , etc.
- the data store 204 can include, but is not limited to including, 2D content, 3D object data, 3D true geometric models, extrapolations between a 3D object and a true 3D geometric model, dimensional analysis data, low-dimensional manifold data, manifold data, objects created from the 3D true geometric model, items created from the 3D true geometric model, user preferences, user settings, configurations, scripted movements, transitions, 3D environment data, 3D construction data, mappings between 2D content and 3D object or image, etc.
- nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM), which acts as external cache memory.
- RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- FIG. 3 illustrates a system 300 that facilitates automatically identifying and tagging objects from a true 3D geometric model created from a 3D image or object.
- the system 300 can include the content aggregator 102 that can construct a 3D image or object from two or more 2D images or photographs having respective point-of-views of the physical real world.
- the 3D image or object can be navigated, browsed, viewed, and/or displayed within the 3D environment 104 .
- the 3D environment can be accessed locally, remotely, and/or any suitable combination thereof.
- the 3D image or object and/or the 2D content can be accessed locally, remotely, and/or any suitable combination thereof.
- a user can log into a first host for the remote 3D environment 104 and access a 3D object in which the 2D content is located on a second host.
- the model component 106 can provide dimensional analysis in order to generate a true 3D geometric model having identical attributes to the object in the physical real world of which the 2D content depicts.
- the 2D content can be photographs or video that portrays a car in the physical real world.
- Such photographs or video can be collected to construct a 3D image within the virtual 3D environment 104 by the content aggregator 102 .
- the 3D object can be a 3D virtual representation of the car.
- Such 3D object can be utilized to extrapolate a true 3D geometric model of the car, wherein the model includes accurate size, scaling, proportions, dimensions, etc.
- a measurement of a wheelbase for the car within the model can be accurate to the wheelbase for the car in the physical real world (e.g., including a scaling factor, without a scaling factor, etc.).
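The scaling-factor idea can be sketched as follows, with hypothetical numbers: a photo-based reconstruction is accurate only up to scale, so a single reference feature with a known real-world size fixes the scale for every other measurement in the model:

```python
def real_world_length(model_length, reference_model, reference_real):
    """Convert a length measured in model units to real-world units,
    given one reference feature whose true size is known (a
    reconstruction from photos is accurate only up to scale)."""
    scale = reference_real / reference_model
    return model_length * scale

# Hypothetical numbers: a wheel spans 1.0 model units, and the real
# wheel is known to be 0.65 m across, so the scale is 0.65 m per unit.
wheelbase_model_units = 4.2
print(round(real_world_length(wheelbase_model_units, 1.0, 0.65), 2))  # 2.73
```

With the scale fixed, the wheelbase in the model can be reported in metres, matching the car in the physical real world.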
- the true 3D geometric model can be further utilized to identify objects (e.g., a muffler, a bumper, a light, a windshield wiper, etc.), to be utilized in other applications or environments (e.g., virtual environments, procedural environments, drafting applications, etc.), or to create new objects based on the identified objects (e.g., a modified muffler, a modified bumper, a modification to the car, etc.).
- the true 3D geometric model can be utilized to identify a low-dimensional manifold of a car, to which a user can modify such manifold to create a disparate car with a disparate true 3D geometric model.
- the true 3D geometric model can be any suitable model with true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.) in which such model can be accurate to the represented 3D image or object representing a portion of a physical real world (e.g., an entity depicted within the 2D content or images).
- the true 3D geometric model can be, but is not limited to, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, etc.
- the model component 106 can include an analyzer 302 and a tagger 304 .
- the analyzer 302 can evaluate the true 3D geometric model in order to identify an object or item (e.g., also referred to as a true object). In other words, by evaluating the true 3D geometric model, particular portions may be more identifiable as objects in comparison to other objects.
- the tagger 304 can associate the object with a metadata tag or a portion of data describing the object.
- dimensional analysis can be utilized to facilitate identifying objects, wherein objects with a low-dimension can be more identifiable than objects with a high dimension. In such example, the low-dimension objects identified can be tagged by the tagger 304 .
- a catalog or data store (e.g., data store 204 ) can include tagged information.
- the true 3D geometric model can be evaluated utilizing dimensional analysis in order to identify objects.
- the objects can be a low-dimensional version of the house (e.g., reducing the true 3D geometric model to core features, etc.) or a low-dimensional object included within the photographs of the house such as a shutter, an address plate, a mailbox, a lawn chair, a table, etc.
- Such identified objects and items can be tagged with metadata for description.
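A minimal sketch of such metadata tagging, with a hypothetical `IdentifiedObject` record and crude dimension-based rules standing in for the tagger 304 (names, units, and rules here are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class IdentifiedObject:
    """An object recognized in the true 3D geometric model, with
    descriptive metadata tags attached automatically."""
    name: str
    dimensions_m: tuple  # (width, depth, height), illustrative units
    tags: list = field(default_factory=list)

def auto_tag(obj):
    """Attach crude descriptive tags derived from the object's
    dimensions -- a hypothetical stand-in for the tagger."""
    w, d, h = obj.dimensions_m
    obj.tags.append(obj.name)
    if max(w, d, h) < 1.0:
        obj.tags.append("small")
    if h > w and h > d:
        obj.tags.append("upright")
    return obj

mailbox = auto_tag(IdentifiedObject("mailbox", (0.2, 0.5, 1.2)))
print(mailbox.tags)  # ['mailbox', 'upright']
```

Tagged records like these could then be stored in the catalog or data store (e.g., data store 204) mentioned above.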
- such identified objects can further be leveraged to identify other objects or items from the true 3D geometric model.
- an identified lawn chair can be leveraged (e.g., the characteristics, dimensions, attributes, etc.) in order to identify a recliner or any other related variation of the lawn chair.
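Leveraging one identified object to find related ones can be sketched as nearest-neighbour matching over attribute vectors; the catalog and the width/depth/height features below are hypothetical stand-ins for the richer characteristics a real system would extract from the model:

```python
import math

# Hypothetical catalog of identified objects with crude attribute
# vectors (width, depth, height in metres).
CATALOG = {
    "lawn chair": (0.6, 0.7, 0.9),
    "recliner":   (0.8, 0.9, 1.0),
    "mailbox":    (0.2, 0.5, 1.2),
    "table":      (1.2, 0.8, 0.75),
}

def most_similar(name, catalog=CATALOG):
    """Return the catalog object closest in attribute space, so one
    identified object can help identify related variations."""
    target = catalog[name]
    others = {k: v for k, v in catalog.items() if k != name}
    def distance(item):
        return math.dist(target, item[1])
    return min(others.items(), key=distance)[0]

print(most_similar("lawn chair"))  # recliner
```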
- the human face is remarkably well suited to computerized synthesis, whereas for some other objects (e.g., animals) such is not the case.
- One explanation for this trait is that the human face can be reduced to a low-dimensional manifold which allows for ready computational synthesis.
- the system 300 provides for sampling objects or features from the real world in order to identify those objects or features that can be reduced to a low-dimensional manifold (e.g., possibly a telephone or a coffee table).
- the low-dimensional manifold for an object is ascertained, that object as well as various associated features can be mapped to a procedural authoring environment.
- various features of the object can be modified simply by twisting a knob or some other tool in the procedural environment.
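A minimal sketch of such a procedural "knob", assuming a single roundness parameter that sweeps a door outline from rectangular to arched (echoing the circular-door modification described earlier); the geometry and parameter names are illustrative assumptions:

```python
import math

def door_outline(width, height, roundness, samples=32):
    """Generate a door outline whose top sweeps from flat
    (roundness = 0.0) to a full semicircular arch (roundness = 1.0):
    one 'knob' controlling a procedural parameter."""
    arch_height = roundness * width / 2.0
    straight_height = height - arch_height
    points = [(0.0, 0.0), (width, 0.0), (width, straight_height)]
    # Sample the arch from the right jamb over to the left jamb.
    for i in range(1, samples):
        theta = math.pi * i / samples
        x = width / 2.0 + (width / 2.0) * math.cos(theta)
        y = straight_height + arch_height * math.sin(theta)
        points.append((x, y))
    points.append((0.0, straight_height))
    return points

flat = door_outline(0.9, 2.0, roundness=0.0)
arched = door_outline(0.9, 2.0, roundness=1.0)
# Both variants reach the full door height at their highest point.
print(round(max(y for _, y in flat), 6), round(max(y for _, y in arched), 6))
```

Turning the `roundness` knob between 0.0 and 1.0 regenerates the outline continuously, which is the essence of modifying a feature in a procedural environment rather than editing geometry by hand.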
- synths (e.g., 3D objects, etc.) that accurately depict a scene with as much realism as a photograph can now be modified or authored in much the same way as are virtual worlds in, say, a gaming environment, yet with living photographic quality/detail rather than virtual renditions.
- the innovation can also provide for a new way of classification as well (e.g., tagging).
- FIG. 4 illustrates a system 400 that facilitates utilizing a true object identified from the true 3D geometric model.
- the system 400 can include the model component 106 that can analyze a 3D object constructed by the content aggregator 102 that assembles two or more photographs that depict a portion of the physical real world based upon each photograph's point-of-view. Based upon such analysis, the model component 106 can extrapolate physical real world properties and create a model that has such real world properties (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.).
- This true 3D geometric model can be, but is not limited to being, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, and or any other displayable item that represents a portion of the 3D object with real world accurate attributes/properties.
- the true 3D geometric model can be analyzed with dimensional analysis in order to identify objects or items that are recognizable.
- a portion of the true 3D geometric model can be identified as a low-dimensional manifold (e.g., a muffler, a rear-view mirror, etc.).
- by identifying a portion of the true 3D geometric model as a low-dimensional manifold, such portion of the model can be a true object (e.g., the true 3D geometric model can comprise a plurality of true objects, wherein a true object is a portion of the true 3D geometric model that has been identified and is recognizable with dimensional analysis).
- This true object or the identifiable portion of the true 3D geometric model can be implemented in connection with a virtual environment 402 , a portion of application and/or software 404 , and/or a disparate 3D object framework 406 .
- the true object can be imported into a virtual environment 402 , wherein such true object is a virtual representation of a real object.
- the real life objects from the 2D images or photographs can be the basis of the virtual reality.
- a collection of photos of a famous building can be aggregated and assembled to construct a 3D object of such famous building.
- This 3D object can be the basis for the extrapolation of a true 3D geometric model having physical real world dimensions, properties, attributes, etc.
- objects and/or items can be readily identifiable utilizing, for example, dimensional analysis. These identified objects or items can be imported into the virtual environment 402 . In other words, rather than creating the famous building, the famous building can be imported based on the extrapolated data from the 3D object created from 2D content.
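The scale-recovery idea above can be sketched minimally: if one reference length in the reconstruction is known in real-world units (for instance, the height of a door on the famous building), a uniform scale factor maps the whole model to physical dimensions. The function, figures, and reference object below are assumptions for illustration, not the patent's method.

```python
def scale_model(model_points, ref_model_units, ref_meters):
    """Uniformly rescale a reconstructed point cloud so that a span
    measured in arbitrary model units maps to its known real length."""
    s = ref_meters / ref_model_units
    return [(x * s, y * s, z * s) for (x, y, z) in model_points]

# A door on the facade spans 4.06 model units and is known to be 2.03 m tall.
points = [(0.0, 0.0, 0.0), (0.0, 0.0, 4.06)]
scaled = scale_model(points, ref_model_units=4.06, ref_meters=2.03)
```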
- a social environment or network can allow a user to create an avatar, a house, etc., wherein the true 3D geometric model can be utilized therewith.
- the virtual environment can be a social network, an online community, an online virtual world, a 3D online virtual world, etc.
- the true object or identified portion of the true 3D geometric model can be further utilized with an application or software 404 .
- the true 3D geometric model can be utilized with a drafting application based on its architecturally accurate characteristics. With dimensions, proportions, and attributes reflecting those of the physical real world, the following can utilize the true object: drafting applications, simulators (e.g., car crash simulating programs, a program or application that simulates reactions to a stimulus, a natural disaster scenario, etc.), graphic designer programs, programs utilizing blueprint information, geographic applications, mapping programs, navigation programs, designer software, etc.
- the true object can further be utilized in connection with the 3D object as a 3D object framework 406 .
- the true object can be a skeleton for the 3D object from which it originated (e.g., exposed in areas that are not represented by 2D content within the assembled 3D object), wherein 2D content can be overlaid upon the skeleton.
- the true object can be utilized to create or construct a 3D object in connection with mapping 2D content onto the 3D object.
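A minimal sketch of the skeleton-plus-overlay idea above, with assumed data shapes: each wireframe face is paired with whatever 2D photo region depicts it, and faces with no matching content remain exposed skeleton. The names and structures are illustrative, not from the patent.

```python
def overlay(skeleton_faces, photo_regions):
    """Pair each skeleton face with the 2-D content that depicts it;
    faces with no matching photo stay exposed (texture None)."""
    textured = {}
    for face, corners in skeleton_faces.items():
        textured[face] = {
            "geometry": corners,
            "texture": photo_regions.get(face),
        }
    return textured

faces = {
    "front": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    "back":  [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)],
}
regions = {"front": "facade.jpg"}   # no photo covers the back face
model = overlay(faces, regions)
```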
- FIG. 5 illustrates a system 500 that facilitates utilizing a display technique and/or a browse technique in accordance with the subject innovation.
- the system 500 can include the content aggregator 102 , the 3D environment 104 , and the model component 106 as described above.
- the system 500 can further include a display engine 502 that enables seamless pan and/or zoom interaction with any suitable data (e.g., 3D object data, 2D imagery, content, the true 3D geometric model, a portion of the true 3D geometric model, an object identified from the true 3D geometric model, a modified portion of the true 3D geometric model, etc.), wherein such data can include multiple scales or views and one or more resolutions associated therewith.
- the display engine 502 can manipulate an initial default view for displayed data by enabling zooming (e.g., zoom in, zoom out, etc.) and/or panning (e.g., pan up, pan down, pan right, pan left, etc.) in which such zoomed or panned views can include various resolution qualities.
- the display engine 502 enables visual information to be smoothly browsed regardless of the amount of data involved or bandwidth of a network.
- the display engine 502 can be employed with any suitable display or screen (e.g., portable device, cellular device, monitor, plasma television, etc.).
- the display engine 502 can further provide at least one of the following benefits or enhancements: 1) speed of navigation can be independent of size or number of objects (e.g., data); 2) performance can depend on a ratio of bandwidth to pixels on a screen or display; 3) transitions between views can be smooth; and 4) scaling is near perfect and rapid for screens of any resolution. It is to be appreciated and understood that the display engine 502 can be substantially similar to the display engine 102 described above.
- an image can be viewed at a default view with a specific resolution.
- the display engine 502 can allow the image to be zoomed and/or panned at multiple views or scales (in comparison to the default view) with various resolutions.
- a user can zoom in on a portion of the image to get a magnified view at an equal or higher resolution.
- the image can include virtually limitless space or volume that can be viewed or explored at various scales, levels, or views with each including one or more resolutions.
- an image can be viewed at a more granular level while maintaining resolution with smooth transitions independent of pan, zoom, etc.
- a first view may not expose portions of information or data on the image until zoomed or panned upon with the display engine 502 .
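The multi-scale viewing behavior described above is commonly realized with an image pyramid of power-of-two levels, so the viewer fetches only the resolution level and tiles the current viewport needs; the sketch below makes that assumption and is not taken from the patent.

```python
import math

def choose_level(image_w, viewport_w):
    """Coarsest pyramid level whose downscaled width still fills the
    viewport (level 0 is full resolution; each level halves the image)."""
    if image_w <= viewport_w:
        return 0
    return int(math.floor(math.log2(image_w / viewport_w)))

def visible_tiles(origin_x, span_w, tile_size, level):
    """Tile column indices covering [origin_x, origin_x + span_w),
    given in full-resolution pixel coordinates, at a pyramid level."""
    scale = 2 ** level
    first = (origin_x // scale) // tile_size
    last = ((origin_x + span_w - 1) // scale) // tile_size
    return list(range(first, last + 1))
```

For example, an 8192-pixel-wide image shown in a 1024-pixel viewport uses level 3, where the image is 1024 pixels wide and a 256-pixel tile grid needs only four tile columns; zooming in simply selects a finer level and a smaller tile range, so the work tracks screen pixels rather than image size.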
- a browsing engine 504 can also be included with the system 500 .
- the browsing engine 504 can leverage the display engine 502 to implement seamless and smooth panning and/or zooming for any suitable data browsed in connection with at least one of the Internet, a network, a server, a website, a web page, the 3D environment 104 , the true 3D geometric model, a portion of the true 3D geometric model, an object identified from the true 3D geometric model, a modified portion of the true 3D geometric model, and the like.
- the browsing engine 504 can be a stand-alone component, incorporated into a browser, utilized in combination with a browser (e.g., a legacy browser via patch or firmware update, software, hardware, etc.), and/or any suitable combination thereof.
- the browsing engine 504 can incorporate Internet browsing capabilities, such as seamless panning and/or zooming, into an existing browser.
- the browsing engine 504 can leverage the display engine 502 in order to provide enhanced browsing with seamless zoom and/or pan on a 3D object or a true 3D geometric model, wherein various scales or views can be exposed by smooth zooming and/or panning.
- FIG. 6 illustrates a system 600 that employs intelligence to facilitate automatically identifying real world properties and dimensions from a 3D image or object created from 2D content.
- the system 600 can include the content aggregator 102 , the 3D environment 104 , and the model component 106 , which can be substantially similar to respective aggregators, environments, and components described in previous figures.
- the system 600 further includes an intelligent component 602 .
- the intelligent component 602 can be utilized by the model component 106 to facilitate constructing a true 3D geometric model from a 3D image assembled from 2D images or photography.
- the intelligent component 602 can infer true 3D geometry, a true 3D geometric model from a 3D object, a physical real world dimension, a physical real world proportion, an attribute reflective of the physical real world, identifiable objects from a true 3D geometric model, a low-dimensional manifold, a tag for an identified object or item, a reduction of an item or object to a lower dimension, import configurations, user preferences, virtual environment import settings, virtual model extrapolation data, etc.
- the intelligent component 602 can employ value of information (VOI) computation in order to identify optimal dimensional reduction settings to identify and reduce objects from a true 3D geometric model. For instance, by utilizing VOI computation, the most ideal and/or appropriate dimensions of an identified object can be maintained and an optimal low-dimensional manifold can be generated. Moreover, it is to be understood that the intelligent component 602 can provide for reasoning about or infer states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events.
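The value-of-information computation mentioned above can be illustrated with a toy decision problem (the probabilities and utilities are invented for the example): VOI is the gain in expected utility from acting after an observation rather than on the prior belief alone.

```python
def expected_utility(p, u_right=1.0, u_wrong=-1.0):
    """Best expected utility acting on the prior belief p alone."""
    act = p * u_right + (1 - p) * u_wrong         # act as if the guess is true
    skip = (1 - p) * u_right + p * u_wrong        # act as if it is false
    return max(act, skip)

def value_of_perfect_information(p, u_right=1.0):
    """Gain from observing the truth before acting: with a perfect
    observation we always act correctly and earn u_right."""
    return u_right - expected_utility(p)

# Belief that a model portion is a muffler: 0.6.
voi = value_of_perfect_information(0.6)
```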
- Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources.
- Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.
- Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed.
- a support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, which hypersurface attempts to split the triggering criteria from the non-triggering events.
- Other directed and undirected model classification approaches that can be employed include, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority.
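The hyperplane-splitting behavior attributed to the SVM above can be sketched with the simplest member of that classifier family, a perceptron; this is a stand-in for illustration, not the SVM training procedure, and the toy data are invented.

```python
def train_perceptron(samples, labels, epochs=50):
    """Find a hyperplane (w, b) separating +1 'triggering' inputs
    from -1 'non-triggering' inputs on linearly separable data."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Update only on misclassified (or zero-margin) samples.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Toy, linearly separable data: 'triggering' events lie up and to the right.
X = [(2.0, 2.0), (3.0, 1.5), (0.0, 0.0), (0.5, 1.0)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```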
- the model component 106 can further utilize a presentation component 604 that provides various types of user interfaces to facilitate interaction between a user and any component coupled to the model component 106 .
- the presentation component 604 is a separate entity that can be utilized with the model component 106 .
- the presentation component 604 can provide one or more graphical user interfaces (GUIs), command line interfaces, and the like.
- a GUI can be rendered that provides a user with a region or means to load, import, read, etc., data, and can include a region to present the results of such.
- These regions can comprise known text and/or graphic regions comprising dialogue boxes, static controls, drop-down menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes.
- utilities to facilitate the presentation such as vertical and/or horizontal scroll bars for navigation and toolbar buttons to determine whether a region will be viewable can be employed.
- the user can interact with one or more of the components coupled and/or incorporated into the model component 106 .
- the user can also interact with the regions to select and provide information via various devices such as a mouse, a roller ball, a touchpad, a keypad, a keyboard, a touch screen, a pen, voice activation, and/or body motion detection, for example.
- a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate the search.
- a command line interface can be employed.
- the command line interface can prompt the user for information (e.g., via a text message on a display and/or an audio tone).
- the command line interface can be employed in connection with a GUI and/or an API.
- the command line interface can also be employed in connection with hardware (e.g., video cards) and/or displays (e.g., black and white, EGA, VGA, SVGA, etc.) with limited graphic support, and/or low bandwidth communication channels.
- FIGS. 7-8 illustrate methodologies and/or flow diagrams in accordance with the claimed subject matter.
- the methodologies are depicted and described as a series of acts. It is to be understood and appreciated that the subject innovation is not limited by the acts illustrated and/or by the order of acts. For example, acts can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methodologies in accordance with the claimed subject matter.
- those skilled in the art will understand and appreciate that the methodologies could alternatively be represented as a series of interrelated states via a state diagram or events.
- the methodologies disclosed hereinafter and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers.
- the term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
- FIG. 7 illustrates a method 700 that facilitates providing an object with a low-dimensional manifold from a true 3D geometric model, wherein the object can be modified.
- two or more images related to a real environment can be received.
- the two or more images can be any suitable 2D media or content such as, but not limited to, video, photography, a photo, a picture, a still frame from a video, etc.
- the two or more images can represent or depict a portion of a physical real world (e.g., a photograph of a bird depicts the bird in the physical real world).
- a 3D object can be generated by combining the two or more 2D images based at least in part upon a perspective of each 2D image. For example, a collection of photographs can be assembled to create a 3D representation of the objects or portion of the physical real world depicted in the photographs. In one example, a first photo of a right side, a second photo of a left side, and a third photo of a top side can be arranged based on their perspectives to create a 3D object that can be displayed, browsed, navigated, explored, etc.
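The arrangement step in the example can be sketched by assuming each photo carries an estimated camera viewpoint; ordering views by azimuth approximates placing them around the subject so that neighboring views overlap. The field names here are invented for illustration.

```python
photos = [
    {"name": "left.jpg",  "azimuth_deg": 270.0},
    {"name": "top.jpg",   "azimuth_deg": 0.0},
    {"name": "right.jpg", "azimuth_deg": 90.0},
]

def arrange_by_viewpoint(photos):
    """Order photos around the subject by estimated camera azimuth,
    so adjacent views share overlap when assembled into the 3D object."""
    return sorted(photos, key=lambda p: p["azimuth_deg"])

ordered = [p["name"] for p in arrange_by_viewpoint(photos)]
```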
- a model having true 3D geometry relative to the real environment can be extrapolated from the 3D object.
- the 3D object can be evaluated and a 3D model having accurate dimensions, properties, attributes, scales, etc. can be created.
- the true 3D geometric model can have true geometry in comparison to the real world, as well as real world dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.
- This true 3D geometric model can be illustrated as, but is not limited to being, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, and/or any other displayable item that represents a portion of the 3D object with real world accurate attributes/properties.
- a modification to the model can be provided.
- the true 3D geometric model can be modified, manipulated, or edited.
- the true 3D geometric model can be evaluated with dimensional analysis in order to identify an object having low dimensionality.
- Such an identified low-dimensional object can be modified according to user preferences, etc.
- the object can be manipulated.
- a human face can have a plurality of dimensions but can be reduced to a smaller number of dimensions representing core features (e.g., a face identified by core features such as eyes, nose, mouth, etc.). This human face can be manipulated by, for instance, changing the distance between the eyes, modifying the mouth shape, distorting the nose, etc.
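A minimal sketch of that face-editing example, treating the reduced face as a small dictionary of core-feature measurements; the feature names and values are illustrative assumptions, not the patent's representation.

```python
face = {                     # illustrative core-feature measurements (cm)
    "eye_distance": 6.2,
    "mouth_width": 5.0,
    "nose_length": 4.8,
}

def modify_feature(face, feature, factor):
    """Return an edited copy of the face with one low-dimensional
    feature rescaled; the original face is left untouched."""
    edited = dict(face)
    edited[feature] = face[feature] * factor
    return edited

wider = modify_feature(face, "eye_distance", 1.1)   # widen eye spacing 10%
```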
- FIG. 8 illustrates a method 800 for extrapolating a true 3D geometric model with real-world accurate dimensions and automatically tagging identified objects within such model.
- a 3D object can be constructed from two or more 2D images based in part upon a point-of-view for each image.
- a 3D object or image can be created to enable exploration within a 3D virtual environment, wherein the 3D object or image is constructed from 2D content of the object or image.
- the 2D imagery is combined in accordance with the perspective or point-of-view of the imagery to enable an assembled 3D object that can be navigated and viewed (e.g., the 3D object as a whole includes a plurality of 2D images or content).
- 2D pictures of a pyramid can be aggregated to assemble a 3D object that can be navigated or browsed in a 3D virtual environment.
- the aggregated or collected 2D content can be any suitable number of images or content.
- the 3D object can be evaluated to create a model with true 3D geometry.
- a model can be extrapolated from the 3D object, in which the model can have real world attributes such as dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc., wherein such attributes reflect those in real life.
- a true object can be automatically identified utilizing, for instance, dimensional analysis.
- a portion of the true 3D geometric model can be identified as a low-dimensional manifold utilizing dimensional analysis.
- such portion of the model can be a true object (e.g., the true 3D geometric model can comprise a plurality of true objects, wherein a true object is a portion of the true 3D geometric model that has been identified and is recognizable with dimensional analysis).
- the object can be tagged based on the identification. In other words, the identified portion of the true 3D geometric model can be tagged with a portion of metadata describing the identified object or item.
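The auto-tagging step can be sketched as attaching descriptive metadata to each portion of the model that dimensional analysis recognized; the tag schema below is an assumption for illustration.

```python
def tag_objects(identified):
    """Attach descriptive metadata to each object recognized in the model."""
    return [
        {"object": name,
         "tags": [name, "true-object"],
         "source": "dimensional-analysis"}
        for name in identified
    ]

tags = tag_objects(["muffler", "rear-view mirror"])
```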
- FIGS. 9-10 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the various aspects of the subject innovation may be implemented.
- the model component, which can extrapolate a true 3D geometric model accurate to real-world dimensions from a 3D image or object created from 2D content, as described in the previous figures, can be implemented in such a suitable computing environment.
- While the claimed subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a local computer and/or remote computer, those skilled in the art will recognize that the subject innovation also may be implemented in combination with other program modules.
- program modules include routines, programs, components, data structures, etc., that perform particular tasks and/or implement particular abstract data types.
- inventive methods may be practiced with other computer system configurations, including single-processor or multi-processor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based and/or programmable consumer electronics, and the like, each of which may operatively communicate with one or more associated devices.
- the illustrated aspects of the claimed subject matter may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all, aspects of the subject innovation may be practiced on stand-alone computers.
- program modules may be located in local and/or remote memory storage devices.
- FIG. 9 is a schematic block diagram of a sample-computing environment 900 with which the claimed subject matter can interact.
- the system 900 includes one or more client(s) 910 .
- the client(s) 910 can be hardware and/or software (e.g., threads, processes, computing devices).
- the system 900 also includes one or more server(s) 920 .
- the server(s) 920 can be hardware and/or software (e.g., threads, processes, computing devices).
- the servers 920 can house threads to perform transformations by employing the subject innovation, for example.
- the system 900 includes a communication framework 940 that can be employed to facilitate communications between the client(s) 910 and the server(s) 920 .
- the client(s) 910 are operably connected to one or more client data store(s) 950 that can be employed to store information local to the client(s) 910 .
- the server(s) 920 are operably connected to one or more server data store(s) 930 that can be employed to store information local to the servers 920 .
- an exemplary environment 1000 for implementing various aspects of the claimed subject matter includes a computer 1012 .
- the computer 1012 includes a processing unit 1014 , a system memory 1016 , and a system bus 1018 .
- the system bus 1018 couples system components including, but not limited to, the system memory 1016 to the processing unit 1014 .
- the processing unit 1014 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1014 .
- the system bus 1018 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI).
- the system memory 1016 includes volatile memory 1020 and nonvolatile memory 1022 .
- the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer 1012 , such as during start-up, is stored in nonvolatile memory 1022 .
- nonvolatile memory 1022 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory 1020 includes random access memory (RAM), which acts as external cache memory.
- RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
- Disk storage 1024 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick.
- disk storage 1024 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM).
- a removable or non-removable interface is typically used, such as interface 1026 .
- FIG. 10 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 1000 .
- Such software includes an operating system 1028 .
- Operating system 1028 which can be stored on disk storage 1024 , acts to control and allocate resources of the computer system 1012 .
- System applications 1030 take advantage of the management of resources by operating system 1028 through program modules 1032 and program data 1034 stored either in system memory 1016 or on disk storage 1024 . It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems.
- Input devices 1036 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1014 through the system bus 1018 via interface port(s) 1038 .
- Interface port(s) 1038 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB).
- Output device(s) 1040 use some of the same type of ports as input device(s) 1036 .
- a USB port may be used to provide input to computer 1012 , and to output information from computer 1012 to an output device 1040 .
- Output adapter 1042 is provided to illustrate that there are some output devices 1040 like monitors, speakers, and printers, among other output devices 1040 , which require special adapters.
- the output adapters 1042 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1040 and the system bus 1018 . It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer(s) 1044 .
- Computer 1012 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1044 .
- the remote computer(s) 1044 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1012 .
- only a memory storage device 1046 is illustrated with remote computer(s) 1044 .
- Remote computer(s) 1044 is logically connected to computer 1012 through a network interface 1048 and then physically connected via communication connection 1050 .
- Network interface 1048 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN).
- LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like.
- WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
- Communication connection(s) 1050 refers to the hardware/software employed to connect the network interface 1048 to the bus 1018 . While communication connection 1050 is shown for illustrative clarity inside computer 1012 , it can also be external to computer 1012 .
- the hardware/software necessary for connection to the network interface 1048 includes, for exemplary purposes only, internal and external technologies such as modems (including regular telephone grade modems, cable modems, and DSL modems), ISDN adapters, and Ethernet cards.
- the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter.
- the innovation includes a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
- an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc., can enable applications and services to use the techniques of the invention.
- the claimed subject matter contemplates the use from the standpoint of an API (or other software object), as well as from a software or hardware object that operates according to the techniques in accordance with the invention.
- various implementations of the innovation described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
Abstract
A three dimensional avatar of a person may be created based on a plurality of images of a person in a physical environment. The three dimensional avatar may have dimensions that are scaled based on dimensions of the person. The three dimensional avatar may be imported into a virtual environment, such as a virtual gaming environment.
Description
- This application claims priority to and is a continuation of U.S. patent application Ser. No. 14/286,264, filed on May 23, 2014 which claims priority to and is a continuation of U.S. patent application Ser. No. 12/116,323, filed on May 7, 2008, now U.S. Pat. No. 8,737,721, the entire contents of which are incorporated herein by reference.
- Conventionally, browsing experiences related to digital media (e.g., photography, images, video, etc.), web pages, or other web-displayed content are comprised of images or other visual components of a fixed spatial scale, generally based upon settings associated with an output display screen resolution and/or the amount of screen real estate allocated to a viewing application, e.g., the size of a browser that is displayed on the screen to the user. In other words, displayed data is typically constrained to a finite or restricted space correlating to a display component (e.g., monitor, LCD, etc.). Moreover, there is an increasing use of digital media based upon decreased size and cost of related devices (e.g., digital cameras, video cameras, digital video cameras, cellular phones with media capture, etc.) and increased availability, usability, and resolution.
- With the increase of such data, mechanisms have been developed to sort and/or classify in order to facilitate summarization or review. As the Internet and private intranets have grown, as user-based connection bandwidths have increased, and as more individuals obtain personal and mobile computing devices, the volume of online data has also increased; such volumes can be overwhelming. With an increase in information comes a need to parse information for relevancy, storage, retrieval, reference, and the like.
- One technique for categorizing media content or digital media, such as pictures or video clips, is the use of metadata tags. Tags are keywords associated with a piece of content that can describe the content, or indicate a word, phrase, acronym, or the like pertinent to aspects of the content. Tags are often generated by a content provider (e.g., a publisher, owner, photographer, etc.) to associate with media content and to give a short description of the content to a recipient. Such description can be useful to quickly determine whether time should be spent reviewing the content, whether it should be saved and reviewed later, or whether it should be discarded, for instance. In such a manner, tags, subject lines, and the like have become useful to reduce the time required in perusing the massive amounts of data available remotely and/or locally.
- The following presents a simplified summary of the innovation in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview of the claimed subject matter. It is intended to neither identify key or critical elements of the claimed subject matter nor delineate the scope of the subject innovation. Its sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.
- The subject innovation relates to systems and/or methods that facilitate leveraging a 3D object constructed from 2D imagery to generate a model with real world accurate dimensions, proportions, scaling, etc. A content aggregator can collect and combine a plurality of two dimensional (2D) images or content to create a three dimensional (3D) image, wherein such 3D image can be explored (e.g., displaying each image and perspective point) in a virtual environment. A model component can extrapolate a true 3D geometric model from the 3D object in which such model can have true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.). In other words, the model created can be an accurate representation of a real object within the physical real world based in part upon the 2D images depicting or displaying such real object within the physical real world. In general, the model component can evaluate a 3D object (e.g., sampling objects or features from the real world based from 2D images associated with such 3D object) in order to create a model. Such model can be further analyzed with dimensionality reduction techniques to identify those objects or features that can be reduced to a low-dimensional manifold (e.g., possibly a telephone or a coffee table).
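The dimensionality-reduction step described above can be sketched in a few lines. The following is a minimal illustration (not part of the claimed subject matter), assuming each sampled object has already been flattened into a coordinate vector; it uses a PCA-style singular value decomposition to estimate how many degrees of freedom an ensemble of samples actually spans. Function names, the variance threshold, and the synthetic data are all hypothetical.

```python
import numpy as np

def manifold_dimension(samples: np.ndarray, var_threshold: float = 0.95) -> int:
    """Estimate how many principal components explain var_threshold of the
    variance across an ensemble of flattened shape samples (one per row).
    A small answer suggests the object reduces to a low-dimensional manifold."""
    centered = samples - samples.mean(axis=0)
    # Singular values give the per-component variance of the ensemble.
    singular = np.linalg.svd(centered, compute_uv=False)
    explained = np.cumsum(singular ** 2) / np.sum(singular ** 2)
    return int(np.searchsorted(explained, var_threshold) + 1)

# Hypothetical ensemble: 50 sampled shapes of 300 coordinates each, generated
# from only two underlying degrees of freedom plus a little noise.
rng = np.random.default_rng(0)
modes = rng.normal(size=(2, 300))
samples = rng.normal(size=(50, 2)) @ modes + 0.01 * rng.normal(size=(50, 300))
print(manifold_dimension(samples))  # a small number (the data has 2 true DOFs)
```

An object like a telephone or a coffee table would, per the passage above, yield a small estimate; an object with unstructured variation would not.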
- According to one aspect, once the low-dimensional manifold for an object is ascertained (e.g., a true object), that object as well as various associated features can be mapped to a procedural authoring environment. As a result, various features of the object (or the overall representation of the object) can be modified (e.g., by twisting a knob or some other tool, etc.). In accordance therewith, 3D objects that accurately depict a scene with as much realism as a photograph can now be modified or authored in much the same way as are virtual worlds in, say, a gaming environment, yet with living photographic quality/detail rather than virtual renditions. In addition to modifying or authoring, the innovation can also provide for a new way of classification as well, utilizing automatic tagging. In other aspects of the claimed subject matter, methods are provided that facilitate generating a proportionally scaled version of a real object from a 3D object.
- The following description and the annexed drawings set forth in detail certain illustrative aspects of the claimed subject matter. These aspects are indicative, however, of but a few of the various ways in which the principles of the innovation may be employed and the claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the claimed subject matter will become apparent from the following detailed description of the innovation when considered in conjunction with the drawings.
-
FIG. 1 illustrates a block diagram of an exemplary system that facilitates generating a model with true 3D geometry characteristics from a 3D image or object. -
FIG. 2 illustrates a block diagram of an exemplary system that facilitates creating an object from a true 3D geometric model having a low-dimensional manifold. -
FIG. 3 illustrates a block diagram of an exemplary system that facilitates automatically identifying and tagging objects from a true 3D geometric model created from a 3D image or object. -
FIG. 4 illustrates a block diagram of an exemplary system that facilitates utilizing a true object identified from the true 3D geometric model. -
FIG. 5 illustrates a block diagram of an exemplary system that facilitates utilizing a display technique and/or a browse technique in accordance with the subject innovation. -
FIG. 6 illustrates a block diagram of an exemplary system that facilitates automatically identifying real world properties and dimensions from a 3D image or object created from 2D content. -
FIG. 7 illustrates an exemplary methodology for providing an object with a low-dimensional manifold from a true 3D geometric model, wherein the object can be modified. -
FIG. 8 illustrates an exemplary methodology that facilitates extrapolating a true 3D geometric model with real-world accurate dimensions and automatically tagging identified objects within such model. -
FIG. 9 illustrates an exemplary networking environment, wherein the novel aspects of the claimed subject matter can be employed. -
FIG. 10 illustrates an exemplary operating environment that can be employed in accordance with the claimed subject matter. - The claimed subject matter is described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject innovation. It may be evident, however, that the claimed subject matter may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing the subject innovation.
- As utilized herein, terms “component,” “system,” “data store,” “engine,” “tagger,” “analyzer,” “aggregator,” “environment,” “framework,” and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), and/or firmware. For example, a component can be a process running on a processor, a processor, an object, an executable, a program, a function, a library, a subroutine, and/or a computer or a combination of software and hardware. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
- Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Additionally it should be appreciated that a carrier wave can be employed to carry computer-readable electronic data such as those used in transmitting and receiving electronic mail or in accessing a network such as the Internet or a local area network (LAN). Of course, those skilled in the art will recognize many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter. Moreover, the word “exemplary” is used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.
- Now turning to the figures,
FIG. 1 illustrates a system 100 that facilitates generating a model with true 3D geometry characteristics from a 3D image or object. The system 100 can include a content aggregator 102 that can collect a plurality of two dimensional (2D) content (e.g., media data, images, video, photographs, metadata, trade cards, any media representing a portion of a physical real world, a picture of an object, a content representing an item, a content depicting an entity, a corporeal object within the real world, etc.) to create a three dimensional (3D) virtual environment (e.g., a 3D environment 104) that can be explored (e.g., displaying each image and perspective point). For instance, the 3D environment 104 can include two or more 2D images each having a specific perspective or point-of-view. In particular, the 2D images can be aggregated or collected by the content aggregator 102 in order to construct a 3D image or object within the 3D environment 104, wherein construction or assembly can be based upon each 2D image perspective. With this 3D image or object created from two or more 2D images/content, a model component 106 can extrapolate and create a model having true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.) in which such model can be accurate to the represented 3D image or object representing a portion of a physical real world. The true 3D geometric model created by the model component 106 can be further utilized to identify and tag objects (discussed below) or to create low-dimensional manifolds for identified objects (discussed below). - In order to provide a
complete 3D environment 104 to a user within the virtual environment, authentic views (e.g., pure views from images) are combined with synthetic views (e.g., interpolations between content such as a blend projected onto the 3D model). For instance, the content aggregator 102 can aggregate a large collection of photos of a place or an object, analyze such photos for similarities, and display such photos in a reconstructed 3D space to create a 3D object, depicting how each photo relates to the next. Moreover, the 3D image or object within the 3D environment 104 can be explored, navigated, browsed, etc. It is to be appreciated that the 3D constructed object (e.g., image, etc.) can be constructed from any suitable 2D content such as, but not limited to, images, photos, videos (e.g., a still frame of a video, etc.), audio, pictures, etc. It is to be appreciated that the collected content can be from various locations (e.g., the Internet, local data, remote data, server, network, wirelessly collected data, etc.). For instance, large collections of content (e.g., gigabytes, etc.) can be accessed quickly (e.g., in seconds, etc.) in order to view a scene from virtually any angle or perspective. In another example, the content aggregator 102 can identify substantially similar content and zoom in to enlarge and focus on a small detail. The content aggregator 102 can provide at least one of the following: 1) walk or fly through a scene to see content from various angles; 2) seamlessly zoom in or out of content independent of resolution (e.g., megapixels, gigapixels, etc.); 3) locate where content was captured in relation to other content; 4) locate similar content to currently viewed content; and 5) communicate a collection or a particular view of content to an entity (e.g., user, machine, device, component, etc.). - For example, a 3D environment can be explored in which the 3D image can be a cube.
This cube can be created by combining a first image of a first face of the cube (e.g., the perspective is facing the first face of the cube), a second image of a second face of the cube (e.g., the perspective is facing the second face of the cube), a third image of a third face of the cube (e.g., the perspective is facing the third face of the cube), a fourth image of a fourth face of the cube (e.g., the perspective is facing the fourth face of the cube), a fifth image of a fifth face of the cube (e.g., the perspective is facing the fifth face of the cube), and a sixth image of a sixth face of the cube (e.g., the perspective is facing the sixth face of the cube). By aggregating the images of the cube based on their perspectives or point-of-views, a 3D image of the cube can be created within the
3D environment 104 which can be displayed, viewed, navigated, browsed, and the like. It is to be appreciated that each of the images for the cube that are aggregated together can share at least a portion of content (e.g., a first image of the cube depicts a first face and a portion of a second face also contained in the second image, etc.) or a portion of a perspective of the image. Moreover, it is to be appreciated and understood that the angular gap between images can be less than thirty (30) degrees for 3D registration. In another example, a statue can include a plurality of images from varying points of view such that the images capture the statue from all sides. These images can be aggregated and aligned to create a 3D object of the statue. - Following the above example, the photographs or images of the cube can be representative of a cube in a physical real world in which the cube has particular attributes such as size, dimensions, proportions, color, weight, physical properties, chemical compositions, etc. The
model component 106 can evaluate the constructed 3D image or object in order to create a model with real-life 3D geometry and attributes. Such model generated from the 3D object or image can include accurate dimensions, proportions, scales, lengths, physical properties, surfaces, textures, and the like for the cube in the physical real world. In general, the model component 106 can extrapolate a true 3D geometry of the 3D image or object (here the cube) created from the photographs of such cube. This true 3D model can be imported into other applications, virtual environments, and the like. Moreover, this extrapolated model can be utilized to identify objects or items (e.g., the cube as a whole, an ancillary object within the photos of the cube, etc.) which can be reduced to a low-dimensional manifold (discussed below). - In addition, the
system 100 can include any suitable and/or necessary interface component (not shown), which provides various adapters, connectors, channels, communication paths, etc. to integrate the model component 106 into virtually any operating and/or database system(s) and/or with one another. In addition, the interface component can provide various adapters, connectors, channels, communication paths, etc., that provide for interaction with the content aggregator 102, the 3D environment 104, the model component 106, and any other device and/or component associated with the system 100. -
FIG. 2 illustrates a system 200 that facilitates creating an object from a true 3D geometric model having a low-dimensional manifold. The system 200 can include the model component 106 that can generate a 3D model with true and accurate dimensions relative to the physical real world, in which such 3D model is based upon a 3D object or image constructed from two or more 2D images of an entity (e.g., an item, a person, a landscape, scenery, buildings, objects, animals, devices, goods, etc.) within the physical real world. For instance, the 3D object or image can be created from two or more pieces of 2D content (e.g., images, still frames, portions of video, etc.) based upon their perspectives or point-of-views. In general, the content aggregator 102 can collect 2D images related to a particular entity and construct a 3D object within the 3D environment 104 based upon each image's perspective or point-of-view. Such constructed 3D object can be viewed, browsed, navigated, and the like. Moreover, the model component 106 can evaluate the 3D object in order to create a true 3D geometric model of such object or a portion of the object. - For example, a digital camera can capture a plurality of photographs of a house from various angles in a physical real world. From the collection of photographs, a 3D object can be constructed, wherein a portion of the 3D object is represented by a photograph from a perspective or point-of-view from which the photograph was taken. The 3D object can be viewed (e.g., illustrating the 2D content utilized to construct such 3D object of the house), navigated, or browsed. For example, a virtual tour can be given within the 3D environment of the 3D image representing the house. In other words, the house can be represented as a 3D object within the 3D environment constructed from the plurality of photographs taken by the digital camera. Furthermore, the 3D object can be evaluated in order to generate a true 3D geometric model of such house.
The true 3D geometric model can have accurate dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc., wherein accuracy is measured in comparison to the house in the physical real world. In other words, the true 3D geometric model can be a computerized replica, with accurate scale and properties, of the 3D object or image.
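The "accurate scale" property above can be illustrated with a minimal calibration sketch (an assumption-laden toy, not the disclosed mechanism): once one reference feature of the reconstructed model is matched to a known physical measurement, every model coordinate inherits real-world scale. All names and values here are hypothetical.

```python
import numpy as np

def scale_to_real_world(points: np.ndarray, model_len: float, real_len: float):
    """Rescale reconstructed model points so that a reference feature measuring
    model_len in model units spans real_len in physical units (e.g., meters).
    Returns the scaled points and the uniform scale factor applied."""
    factor = real_len / model_len
    return points * factor, factor

# Toy house-model corners in arbitrary model units; the front door is known
# (hypothetically) to be 2.0 meters tall but measures 0.5 units in the model.
corners = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [2.0, 1.5, 0.0], [0.0, 1.5, 0.75]])
scaled, factor = scale_to_real_world(corners, model_len=0.5, real_len=2.0)
print(factor)  # 4.0
```

After this uniform rescale, any distance measured within the model reads directly in physical units.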
- The
system 200 can include an editor component 202 that enables a portion of the true 3D geometric model to be modified. In general, the true 3D geometric model can be modified or manipulated in accordance with one's liking. For example, the model component 106 can generate the 3D geometric model in which portions of the model are created with a low-dimensional manifold or having a low dimension. The editor component 202 enables a low-dimensional manifold or the low-dimensional object associated with the model to be modified or manipulated to create new objects or modified objects from the originally extracted low-dimensional manifold or object. - For instance, dimensionality reduction can be implemented on the true 3D geometric model in order to reduce a high-dimensionality object to a reduced number of dimensions while maintaining a recognizable representation. For instance, a 3D object may be constructed from photos of a human face in which a human face can include a high number of dimensions, yet the human face can be reduced to a lower number of dimensions and still maintain its recognizable traits (e.g., cheeks, eyes, nose, mouth, etc.). By enabling portions of the true 3D geometric model (based from the 3D object) to be reduced to a low-dimensional manifold, the
system 200 can create a virtual representation of a real object (e.g., content from the physical real world is the basis for the object depicted in the content within a virtual reality). Following the example with the house being represented by a 3D object and a true 3D geometric model being extrapolated therefrom, objects such as a window, a door, or the like can be identified and reduced to low-dimensional manifolds. The editor component 202 can allow such low-dimensional manifolds or identified objects, or the model as a whole, to be modified, edited, changed, manipulated, and the like. For example, the door can be modified to be a circular door rather than a standard rectangular door. - In another example, the true 3D geometric model can be of a human face, in which the
editor component 202 can allow modification. For instance, eyes on the face can be moved closer together or further apart, the shape can be changed, the cheek bones can be exaggerated, the mouth can be scaled to a smaller size, etc.; the face, in general, can be distorted. The editor component 202 can employ procedural authoring, as in creating a new object based on at least one of the low-dimensional manifold created from the true 3D geometric model, a high-dimensional manifold, a portion of the true 3D geometric model, an object or item identified within the true 3D geometric model, or the true 3D geometric model itself. - It is to be appreciated that surface reconstruction can be used to reconstruct 2D manifolds, or surfaces, from disorganized point clouds (e.g., a collection of images, a collection of 2D content, etc.). For instance, techniques associated with computer vision can be employed. Moreover, once a point cloud has been converted to a parametrized surface, it can be treated as one instance among an ensemble. For example, synths (e.g., 3D objects, 3D images created from 2D content, etc.) of many faces or multiple synths of a set of French doors can form an ensemble to recover latent degrees of freedom (e.g., eyebrows going up and down, or the doors opening and closing, etc.). Dimensionality reduction can also be used to recover the effects of changing time of day and weather on a 3D object or image, say for instance, the Lincoln Memorial, given a plurality of images of the Lincoln Memorial aggregated or synthed together. In this case, there are not multiple synths or 3D objects, but there are many different time-of-day and weather photos contributing to the synth; thus, an ensemble in this case is over renditions of a common patch based on different 2D content or images.
In addition, a large ensemble of synths with respect to such surface variations can be used with substantially similar dimensionality reduction techniques in order to identify common materials and their properties in general under variable lighting and environmental conditions.
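The ensemble idea above (recover latent degrees of freedom from many parametrized surfaces, then author procedurally by adjusting them) can be sketched with a simple linear, PCA-style model. This is a deliberately simplified stand-in for the richer morphable models a real system would use; all shapes below are synthetic and all names are hypothetical.

```python
import numpy as np

def fit_basis(ensemble: np.ndarray, k: int):
    """Fit a k-dimensional linear shape basis (mean plus top principal
    directions) to an ensemble of flattened shapes, one shape per row."""
    mean = ensemble.mean(axis=0)
    _, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
    return mean, vt[:k]

def author(shape: np.ndarray, mean: np.ndarray, basis: np.ndarray,
           knob: int, delta: float) -> np.ndarray:
    """'Twist a knob': project a shape onto the latent basis, nudge one
    latent coefficient, and reconstruct the edited shape."""
    coeffs = basis @ (shape - mean)
    coeffs[knob] += delta
    return mean + basis.T @ coeffs

# Synthetic ensemble with two true degrees of freedom
# (stand-ins for, e.g., a door's opening angle and width).
rng = np.random.default_rng(1)
modes = rng.normal(size=(2, 30))
ensemble = rng.normal(size=(20, 2)) @ modes
mean, basis = fit_basis(ensemble, k=2)
edited = author(ensemble[0], mean, basis, knob=0, delta=1.5)
```

Each recovered latent direction acts as one authoring control, in the spirit of the eyebrow or door-angle degrees of freedom described above.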
- The
system 200 can further include a data store 204 that can include any suitable data related to the content aggregator 102, the 3D environment 104, the model component 106, etc. For example, the data store 204 can include, but is not limited to including, 2D content, 3D object data, 3D true geometric models, extrapolations between a 3D object and a true 3D geometric model, dimensional analysis data, low-dimensional manifold data, manifold data, objects created from the 3D true geometric model, items created from the 3D true geometric model, user preferences, user settings, configurations, scripted movements, transitions, 3D environment data, 3D construction data, mappings between 2D content and a 3D object or image, etc. - It is to be appreciated that the
data store 204 can be, for example, either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM). The data store 204 of the subject systems and methods is intended to comprise, without being limited to, these and any other suitable types of memory. In addition, it is to be appreciated that the data store 204 can be a server, a database, a hard drive, a pen drive, an external hard drive, a portable hard drive, and the like. -
FIG. 3 illustrates a system 300 that facilitates automatically identifying and tagging objects from a true 3D geometric model created from a 3D image or object. The system 300 can include the content aggregator 102 that can construct a 3D image or object from two or more 2D images or photographs having respective point-of-views of the physical real world. The 3D image or object can be navigated, browsed, viewed, and/or displayed within the 3D environment 104. It is to be appreciated that the 3D environment can be accessed locally, remotely, and/or any suitable combination thereof. Moreover, it is to be appreciated that the 3D image or object and/or the 2D content can be accessed locally, remotely, and/or any suitable combination thereof. For example, a user can log into a first host for the remote 3D environment 104 and access a 3D object in which the 2D content is located on a second host. As described above, the model component 106 can provide dimensional analysis in order to generate a true 3D geometric model having identical attributes to the object in the physical real world that the 2D content depicts. - In an example, the 2D content can be photographs or video that portrays a car in the physical real world. Such photographs or video can be collected to construct a 3D image within a
virtual 3D environment 104 by the content aggregator 102. By assembling the imagery (e.g., photos, video, etc.) based upon a related perspective or point-of-view, the 3D object can be a 3D virtual representation of the car. Such 3D object can be utilized to extrapolate a true 3D geometric model of the car, wherein the model includes accurate size, scaling, proportions, dimensions, etc. For example, a measurement of a wheelbase for the car within the model can be accurate to the wheelbase for the car in the physical real world (e.g., including a scaling factor, without a scaling factor, etc.). The true 3D geometric model can be further utilized to identify objects (e.g., a muffler, a bumper, a light, a windshield wiper, etc.), utilized in other applications or environments (e.g., virtual environments, procedural environments, drafting applications, etc.), or utilized to create new objects based on the identified objects (e.g., a modified muffler, a modified bumper, a modification to the car, etc.). In one example, the true 3D geometric model can be utilized to identify a low-dimensional manifold of a car, to which a user can modify such manifold to create a disparate car with a disparate true 3D geometric model. - It is to be appreciated and understood that the true 3D geometric model can be any suitable model with true 3D geometry and attributes (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.) in which such model can be accurate to the represented 3D image or object representing a portion of a physical real world (e.g., an entity depicted within the 2D content or images). For example, the true 3D geometric model can be, but is not limited to, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, etc.
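The wheelbase example above can be made concrete with a toy measurement sketch. Assuming the model is already scaled to physical units (coordinates in meters; the feature names and values are hypothetical, not from the disclosure), a dimension such as the wheelbase is simply a Euclidean distance between identified feature points:

```python
import numpy as np

def measure(model_points: dict, feature_a: str, feature_b: str) -> float:
    """Euclidean distance between two named feature points of a scaled model."""
    a = np.asarray(model_points[feature_a])
    b = np.asarray(model_points[feature_b])
    return float(np.linalg.norm(a - b))

# Hypothetical feature points of a car model, in meters.
car = {"front_axle": [0.0, 0.0, 0.3], "rear_axle": [2.7, 0.0, 0.3]}
print(measure(car, "front_axle", "rear_axle"))  # ~2.7 (meters)
```

Because the model carries real-world scale, the same measurement applies directly to the car in the physical real world.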
- The
model component 106 can include an analyzer 302 and a tagger 304. The analyzer 302 can evaluate the true 3D geometric model in order to identify an object or item (e.g., also referred to as a true object). In other words, by evaluating the true 3D geometric model, particular portions may be more identifiable as objects in comparison to other objects. Upon identification of an object from the true 3D geometric model, the tagger 304 can associate a metadata tag or a portion of data describing the object. In one example, dimensional analysis can be utilized to facilitate identifying objects, wherein objects with a low dimension can be more identifiable than objects with a high dimension. In such example, the low-dimension objects identified can be tagged by the tagger 304. Furthermore, a catalog or data store (e.g., data store 204) can include tagged information. - For instance, following the example of the 3D object of a house created from 2D photographs of a house in the physical real world, the true 3D geometric model can be evaluated utilizing dimensional analysis in order to identify objects. Here, the objects can be a low-dimensional version of the house (e.g., reducing the true 3D geometric model to core features, etc.) or a low-dimensional object included within the photographs of the house such as a shutter, an address plate, a mailbox, a lawn chair, a table, etc. Such identified objects and items can be tagged with metadata for description. Moreover, such identified objects can further be leveraged to identify other objects or items from the true 3D geometric model. For example, an identified lawn chair can be leveraged (e.g., its characteristics, dimensions, attributes, etc.) in order to identify a recliner or any other related variation of the lawn chair.
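The analyzer/tagger split described above can be sketched as a small data structure. The class, cutoff, and catalog below are hypothetical illustrations, not the claimed implementation: identified portions of the model carry descriptive metadata tags, and objects with a low estimated dimensionality are additionally flagged as readily identifiable.

```python
from dataclasses import dataclass, field

@dataclass
class TrueObject:
    """A portion of the true 3D geometric model identified by the analyzer."""
    name: str
    manifold_dim: int          # estimated intrinsic dimensionality
    tags: list = field(default_factory=list)

def auto_tag(obj: TrueObject, low_dim_cutoff: int = 5) -> TrueObject:
    """Attach descriptive metadata tags to an identified object; objects whose
    manifold dimension falls below the (hypothetical) cutoff are additionally
    marked as readily identifiable, mirroring the tagger described above."""
    obj.tags.append(obj.name)
    if obj.manifold_dim <= low_dim_cutoff:
        obj.tags.append("low-dimensional")
    return obj

# A toy catalog of tagged objects, standing in for the data store 204.
catalog = [auto_tag(TrueObject("mailbox", 3)), auto_tag(TrueObject("house", 40))]
print(catalog[0].tags)  # ['mailbox', 'low-dimensional']
```

Such a catalog could then be searched or filtered by tag, supporting the classification use described above.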
- In another example, based upon recent work associated with modeling human faces, it has been determined that the human face is remarkably well suited to computerized synthesis, whereas for some other objects (e.g., animals) such is not the case. One explanation for this trait is that the human face can be reduced to a low-dimension manifold which allows for ready computational synthesis. The
system 300 provides for sampling objects or features from the real world in order to identify those objects or features that can be reduced to a low-dimensional manifold (e.g., possibly a telephone or a coffee table). - According to one aspect, once the low-dimensional manifold for an object is ascertained, that object as well as various associated features can be mapped to a procedural authoring environment. As a result, various features of the object (or the overall representation of the object) can be modified simply by twisting a knob or some other tool in the procedural environment. In accordance therewith, synths (e.g., 3D objects, etc.) that accurately depict a scene with as much realism as a photograph can now be modified or authored in much the same way as are virtual worlds in, say, a gaming environment, yet with living photographic quality/detail rather than virtual renditions. In addition to modifying or authoring, the innovation can also provide for a new way of classification (e.g., tagging).
-
FIG. 4 illustrates a system 400 that facilitates utilizing a true object identified from the true 3D geometric model. The system 400 can include the model component 106 that can analyze a 3D object constructed by the content aggregator 102, which assembles two or more photographs that depict a portion of the physical real world based upon each photograph's point-of-view. Based upon such analysis, the model component 106 can extrapolate physical real world properties and create a model that has such real world properties (e.g., dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc.). This true 3D geometric model can be, but is not limited to being, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, and/or any other displayable item that represents a portion of the 3D object with real world accurate attributes/properties. - As discussed, the true 3D geometric model can be analyzed with dimensional analysis in order to identify objects or items that are recognizable. For instance, in the car example discussed above, a portion of the true 3D geometric model can be identified as a low-dimensional manifold (e.g., a muffler, a rear-view mirror, etc.). By identifying a portion of the true 3D geometric model as a low-dimensional manifold, such portion of the model can be a true object (e.g., the true 3D geometric model can comprise a plurality of true objects, wherein a true object is a portion of the true 3D geometric model that has been identified and is recognizable with dimensional analysis).
- This true object or the identifiable portion of the true 3D geometric model can be implemented in connection with a
virtual environment 402, a portion of application and/or software 404, and/or a disparate 3D object framework 406. The true object can be imported into a virtual environment 402, wherein such true object is a virtual representation of real objects. In other words, the real life objects from the 2D images or photographs can be the basis of the virtual reality. For example, a collection of photos of a famous building can be aggregated and assembled to construct a 3D object of such famous building. This 3D object can be the basis for the extrapolation of a true 3D geometric model having physical real world dimensions, properties, attributes, etc. From this true 3D geometric model, objects and/or items can be readily identifiable utilizing, for example, dimensional analysis. These identified objects or items can be imported into the virtual environment 402. In other words, rather than creating the famous building, the famous building can be imported based on the extrapolated data from the 3D object created from 2D content. For instance, a social environment or network can allow a user to create an avatar, a house, etc., wherein the true 3D geometric model can be utilized therewith. It is to be appreciated that the virtual environment can be a social network, an online community, an online virtual world, a 3D online virtual world, etc.
- The true object or identified portion of the true 3D geometric model can be further utilized with an application or
software 404. For instance, the true 3D geometric model can be utilized with a drafting application based on its architecturally accurate characteristics. With dimensions, proportions, and attributes reflecting those of the physical real world, the following can utilize the true object: drafting applications, simulators (e.g., car crash simulating programs, a program or application that simulates reactions to a stimulus, a natural disaster scenario, etc.), graphic designer programs, programs utilizing blueprint information, geographic applications, mapping programs, navigation programs, designer software, etc.
- The true object can further be utilized in connection with the 3D object as a
3D object framework 406. In particular, the true object can be a skeleton for the 3D object from which it originated (e.g., exposed in areas that are not represented by 2D content within the assembled 3D object), wherein 2D content can be overlaid upon the skeleton. In another instance, the true object can be utilized to create or construct a 3D object in connection with mapping 2D content onto the 3D object.
-
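Overlaying 2D content on a skeleton can be pictured as each skeleton vertex carrying a texture coordinate into a photograph; the tiny "photo" and vertex names below are illustrative assumptions:

```python
# Hypothetical sketch of overlaying 2D content on a skeleton: each skeleton
# vertex carries a (u, v) coordinate into a photograph, and the photo's
# pixel values are sampled onto the mesh. A 2x2 "photo" stands in for real
# imagery; all structures here are illustrative.

photo = [["red", "green"],
         ["blue", "white"]]          # photo[row][col]

skeleton = {
    "corner_a": (0.0, 0.0),          # vertex name -> (u, v) in [0, 1]
    "corner_b": (1.0, 0.0),
    "corner_c": (0.0, 1.0),
}

def sample(photo, u, v):
    """Nearest-pixel lookup of the 2D content at texture coordinate (u, v)."""
    rows, cols = len(photo), len(photo[0])
    col = min(int(u * cols), cols - 1)
    row = min(int(v * rows), rows - 1)
    return photo[row][col]

textured = {name: sample(photo, u, v) for name, (u, v) in skeleton.items()}
```

Areas of the skeleton with no covering 2D content would simply keep no sampled value, matching the "exposed" regions the text describes.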
FIG. 5 illustrates a system 500 that facilitates utilizing a display technique and/or a browse technique in accordance with the subject innovation. The system 500 can include the content aggregator 102, the 3D environment 104, and the model component 106 as described above. The system 500 can further include a display engine 502 that enables seamless pan and/or zoom interaction with any suitable data (e.g., 3D object data, 2D imagery, content, the true 3D geometric model, a portion of the true 3D geometric model, an object identified from the true 3D geometric model, a modified portion of the true 3D geometric model, etc.), wherein such data can include multiple scales or views and one or more resolutions associated therewith. In other words, the display engine 502 can manipulate an initial default view for displayed data by enabling zooming (e.g., zoom in, zoom out, etc.) and/or panning (e.g., pan up, pan down, pan right, pan left, etc.) in which such zoomed or panned views can include various resolution qualities. The display engine 502 enables visual information to be smoothly browsed regardless of the amount of data involved or the bandwidth of a network. Moreover, the display engine 502 can be employed with any suitable display or screen (e.g., portable device, cellular device, monitor, plasma television, etc.). The display engine 502 can further provide at least one of the following benefits or enhancements: 1) speed of navigation can be independent of size or number of objects (e.g., data); 2) performance can depend on a ratio of bandwidth to pixels on a screen or display; 3) transitions between views can be smooth; and 4) scaling is near perfect and rapid for screens of any resolution. It is to be appreciated and understood that the display engine 502 can be substantially similar to the display engine 102 described above.
- For example, an image can be viewed at a default view with a specific resolution. Yet, the
display engine 502 can allow the image to be zoomed and/or panned at multiple views or scales (in comparison to the default view) with various resolutions. Thus, a user can zoom in on a portion of the image to get a magnified view at an equal or higher resolution. By enabling the image to be zoomed and/or panned, the image can include virtually limitless space or volume that can be viewed or explored at various scales, levels, or views, with each including one or more resolutions. In other words, an image can be viewed at a more granular level while maintaining resolution with smooth transitions independent of pan, zoom, etc. Moreover, a first view may not expose portions of information or data on the image until zoomed or panned upon with the display engine 502.
- A
browsing engine 504 can also be included with the system 500. The browsing engine 504 can leverage the display engine 502 to implement seamless and smooth panning and/or zooming for any suitable data browsed in connection with at least one of the Internet, a network, a server, a website, a web page, the 3D environment 104, the true 3D geometric model, a portion of the true 3D geometric model, an object identified from the true 3D geometric model, a modified portion of the true 3D geometric model, and the like. It is to be appreciated that the browsing engine 504 can be a stand-alone component, incorporated into a browser, utilized in combination with a browser (e.g., a legacy browser via patch or firmware update, software, hardware, etc.), and/or any suitable combination thereof. For example, the browsing engine 504 can incorporate Internet browsing capabilities, such as seamless panning and/or zooming, into an existing browser. For example, the browsing engine 504 can leverage the display engine 502 in order to provide enhanced browsing with seamless zoom and/or pan on a 3D object or a true 3D geometric model, wherein various scales or views can be exposed by smooth zooming and/or panning.
-
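A common way to realize the multi-resolution behavior described above is an image pyramid in which each level doubles the resolution and only visible tiles are fetched; the level count and zoom model below are illustrative assumptions, not the patent's engine:

```python
# Hypothetical sketch: a multi-resolution display engine keeps an image
# pyramid (level 0 coarsest; each deeper level doubles resolution) and picks
# the shallowest level whose detail covers the current zoom, so panning and
# zooming stay smooth at any scale. Parameters here are illustrative.

import math

def level_for_zoom(zoom, max_level):
    """Choose the pyramid level for a zoom factor (1.0 = fully zoomed out)."""
    if zoom <= 1.0:
        return 0
    return min(int(math.ceil(math.log2(zoom))), max_level)

def pixels_fetched(level, screen_pixels):
    """Work per frame tracks screen pixels, not source image size."""
    return screen_pixels  # each level is tiled; only visible tiles load
```

Because the fetched data is bounded by screen pixels rather than image size, navigation speed is independent of the number or size of objects, matching benefits 1) and 2) listed for the display engine 502.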
FIG. 6 illustrates a system 600 that employs intelligence to facilitate automatically identifying real world properties and dimensions from a 3D image or object created from 2D content. The system 600 can include the content aggregator 102, the 3D environment 104, and the model component 106, which can be substantially similar to respective aggregators, environments, and components described in previous figures. The system 600 further includes an intelligent component 602. The intelligent component 602 can be utilized by the model component 106 to facilitate constructing a true 3D geometric model from a 3D image assembled from 2D images or photography. For example, the intelligent component 602 can infer true 3D geometry, a true 3D geometric model from a 3D object, a physical real world dimension, a physical real world proportion, an attribute reflective of the physical real world, identifiable objects from a true 3D geometric model, a low-dimensional manifold, a tag for an identified object or item, a reduction of an item or object to a lower dimension, import configurations, user preferences, virtual environment import settings, virtual model extrapolation data, etc.
- The
intelligent component 602 can employ value of information (VOI) computation in order to identify optimal dimensional reduction settings to identify and reduce objects from a true 3D geometric model. For instance, by utilizing VOI computation, the most ideal and/or appropriate dimensions of an identified object can be maintained and an optimal low-dimensional manifold can be generated. Moreover, it is to be understood that the intelligent component 602 can provide for reasoning about or inferring states of the system, environment, and/or user from a set of observations as captured via events and/or data. Inference can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic; that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inference can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data, whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines, etc.) can be employed in connection with performing automatic and/or inferred action in connection with the claimed subject matter.
- A classifier is a function that maps an input attribute vector, x=(x1, x2, x3, x4, . . . , xn), to a confidence that the input belongs to a class, that is, f(x)=confidence(class).
Such classification can employ a probabilistic and/or statistical-based analysis (e.g., factoring into the analysis utilities and costs) to prognose or infer an action that a user desires to be automatically performed. A support vector machine (SVM) is an example of a classifier that can be employed. The SVM operates by finding a hypersurface in the space of possible inputs, which hypersurface attempts to split the triggering criteria from the non-triggering events. Intuitively, this makes the classification correct for testing data that is near, but not identical to, training data. Other directed and undirected model classification approaches that can be employed include, e.g., naïve Bayes, Bayesian networks, decision trees, neural networks, fuzzy logic models, and probabilistic classification models providing different patterns of independence. Classification as used herein is also inclusive of statistical regression that is utilized to develop models of priority.
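The mapping f(x)=confidence(class) described above can be sketched as a linear model whose score is squashed into [0, 1]; the weights below are made-up stand-ins for a trained classifier (an SVM or naïve Bayes model would fill the same role):

```python
# Hypothetical sketch of f(x) = confidence(class): a linear classifier maps
# an attribute vector x = (x1, ..., xn) to a confidence in [0, 1] via a
# logistic squash. Weights and inputs are illustrative assumptions.

import math

def confidence(x, weights, bias):
    """Confidence that attribute vector x belongs to the positive class."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-score))   # squash score into (0, 1)

weights = [2.0, -1.0]
bias = 0.0
high = confidence([3.0, 1.0], weights, bias)    # large positive score
low = confidence([-3.0, 1.0], weights, bias)    # large negative score
```

A hypersurface such as the SVM's corresponds to the set of inputs where the score is zero, i.e. where the confidence is exactly 0.5.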
- The
model component 106 can further utilize a presentation component 604 that provides various types of user interfaces to facilitate interaction between a user and any component coupled to the model component 106. As depicted, the presentation component 604 is a separate entity that can be utilized with the model component 106. However, it is to be appreciated that the presentation component 604 and/or similar view components can be incorporated into the model component 106 and/or be a stand-alone unit. The presentation component 604 can provide one or more graphical user interfaces (GUIs), command line interfaces, and the like. For example, a GUI can be rendered that provides a user with a region or means to load, import, read, etc., data, and can include a region to present the results of such. These regions can comprise known text and/or graphic regions comprising dialogue boxes, static controls, drop-down menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes. In addition, utilities to facilitate the presentation, such as vertical and/or horizontal scroll bars for navigation and toolbar buttons to determine whether a region will be viewable, can be employed. For example, the user can interact with one or more of the components coupled to and/or incorporated into the model component 106.
- The user can also interact with the regions to select and provide information via various devices such as a mouse, a roller ball, a touchpad, a keypad, a keyboard, a touch screen, a pen, voice activation, and/or body motion detection, for example. Typically, a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate the search. However, it is to be appreciated that the claimed subject matter is not so limited. For example, merely highlighting a check box can initiate information conveyance.
In another example, a command line interface can be employed. For instance, the command line interface can prompt the user for information by providing a text message on a display and/or an audio tone. The user can then provide suitable information, such as alpha-numeric input corresponding to an option provided in the interface prompt or an answer to a question posed in the prompt. It is to be appreciated that the command line interface can be employed in connection with a GUI and/or API. In addition, the command line interface can be employed in connection with hardware (e.g., video cards) and/or displays (e.g., black and white, EGA, VGA, SVGA, etc.) with limited graphic support, and/or low bandwidth communication channels.
-
FIGS. 7-8 illustrate methodologies and/or flow diagrams in accordance with the claimed subject matter. For simplicity of explanation, the methodologies are depicted and described as a series of acts. It is to be understood and appreciated that the subject innovation is not limited by the acts illustrated and/or by the order of acts. For example, acts can occur in various orders and/or concurrently, and with other acts not presented and described herein. Furthermore, not all illustrated acts may be required to implement the methodologies in accordance with the claimed subject matter. In addition, those skilled in the art will understand and appreciate that the methodologies could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be further appreciated that the methodologies disclosed hereinafter and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and transferring such methodologies to computers. The term article of manufacture, as used herein, is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
-
FIG. 7 illustrates a method 700 that facilitates providing an object with a low-dimensional manifold from a true 3D geometric model, wherein the object can be modified. At reference numeral 702, two or more images related to a real environment can be received. For example, the two or more images can be any suitable 2D media or content such as, but not limited to, video, photography, a photo, a picture, a still frame from a video, etc. It is to be appreciated that the two or more images can represent or depict a portion of a physical real world (e.g., a photograph of a bird depicts the bird in the physical real world).
- At
reference numeral 704, a 3D object can be generated by assembling the two or more 2D images based at least in part upon a perspective of each 2D image. For example, a collection of photographs can be assembled to create a 3D representation of the objects or portion of the physical real world depicted in the photographs. In one example, a first photo of a right side, a second photo of a left side, and a third photo of a top side can be arranged based on their perspectives to create a 3D object that can be displayed, browsed, navigated, explored, etc.
- At
reference numeral 706, a model having true 3D geometry relative to the real environment can be extrapolated from the 3D object. The 3D object can be evaluated and a 3D model having accurate dimensions, properties, attributes, scales, etc. can be created. In particular, the true 3D geometric model can have true geometry in comparison to the real world, as well as real world dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc. This true 3D geometric model can be illustrated as, but is not limited to being, a graphical representation, a blueprint, a wire framework, a wire frame, a wire frame model, a skeleton, and/or any other displayable item that represents a portion of the 3D object with real world accurate attributes/properties.
- At
reference numeral 708, a modification to the model can be provided. For example, the true 3D geometric model can be modified, manipulated, or edited. In one specific instance, the true 3D geometric model can be evaluated with dimensional analysis in order to identify an object having a low dimension. Such an identified low-dimensional object can be modified according to user preferences, etc. With the identified object having a low dimension but still having recognizable core features, the object can be manipulated. For example, a human face can have a plurality of dimensions but can be reduced to a smaller number of dimensions representing core features (e.g., a face identified with core features such as eyes, nose, mouth, etc.). This human face can be manipulated by, for instance, changing the distance between the eyes, modifying the mouth shape, distorting the nose, etc.
-
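The face example above can be sketched as a handful of named core features that drive the geometry, with a single edit (widening the eye distance) regenerating the result; the feature names and values are illustrative assumptions, not measured data:

```python
# Hypothetical sketch of reducing a face to core features and editing one:
# many raw landmarks collapse to a few named core features, and one edit
# (widening the eye gap) regenerates the face. Values are illustrative.

core = {"eye_gap": 6.0, "mouth_width": 5.0, "nose_length": 4.0}

def eye_positions(core):
    """Place the eyes symmetrically about the face's center line."""
    half = core["eye_gap"] / 2.0
    return (-half, +half)

# the modification step: only the chosen core feature changes
edited = dict(core, eye_gap=core["eye_gap"] * 1.25)
```

Because the edit touches only one low-dimensional parameter, the rest of the face (mouth, nose) is untouched, which is what makes the reduced representation convenient to manipulate.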
FIG. 8 illustrates a method 800 for extrapolating a true 3D geometric model with real-world accurate dimensions and automatically tagging identified objects within such model. At reference numeral 802, a 3D object can be constructed from two or more 2D images based in part upon a point-of-view for each image. In general, a 3D object or image can be created to enable exploration within a 3D virtual environment, wherein the 3D object or image is constructed from 2D content of the object or image. The 2D imagery is combined in accordance with the perspective or point-of-view of the imagery to enable an assembled 3D object that can be navigated and viewed (e.g., the 3D object as a whole includes a plurality of 2D images or content). For example, 2D pictures of a pyramid (e.g., a first picture of a first side, a second picture of a second side, a third picture of a third side, a fourth picture of a fourth side, and a fifth picture of a bottom side) can be aggregated to assemble a 3D object that can be navigated or browsed in a 3D virtual environment. It is to be appreciated that the aggregated or collected 2D content can be any suitable number of images or content.
- At
reference numeral 804, the 3D object can be evaluated to create a model with true 3D geometry. For example, a model can be extrapolated from the 3D object, in which the model can have real world attributes such as dimensions, proportions, surfaces, scales, lengths, size, color, texture, physical properties, weight, chemical composition, etc., wherein such attributes reflect those in real life. At reference numeral 806, a true object can be automatically identified utilizing, for instance, dimensional analysis. A portion of the true 3D geometric model can be identified as a low-dimensional manifold utilizing dimensional analysis. By identifying a portion of the true 3D geometric model as a low-dimensional manifold, such portion of the model can be a true object (e.g., the true 3D geometric model can comprise a plurality of true objects, wherein a true object is a portion of the true 3D geometric model that has been identified and is recognizable with dimensional analysis). At reference numeral 808, the object can be tagged based on the identification. In other words, the identified portion of the true 3D geometric model can be tagged with a portion of metadata describing the identified object or item.
- In order to provide additional context for implementing various aspects of the claimed subject matter,
FIGS. 9-10 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the various aspects of the subject innovation may be implemented. For example, the model component, which can extrapolate a true 3D geometric model accurate to real-world dimensions from a 3D image or object created from 2D content as described in the previous figures, can be implemented in such a suitable computing environment. While the claimed subject matter has been described above in the general context of computer-executable instructions of a computer program that runs on a local computer and/or remote computer, those skilled in the art will recognize that the subject innovation also may be implemented in combination with other program modules. Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks and/or implement particular abstract data types.
- Moreover, those skilled in the art will appreciate that the inventive methods may be practiced with other computer system configurations, including single-processor or multi-processor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based and/or programmable consumer electronics, and the like, each of which may operatively communicate with one or more associated devices. The illustrated aspects of the claimed subject matter may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. However, some, if not all, aspects of the subject innovation may be practiced on stand-alone computers. In a distributed computing environment, program modules may be located in local and/or remote memory storage devices.
-
FIG. 9 is a schematic block diagram of a sample computing environment 900 with which the claimed subject matter can interact. The system 900 includes one or more client(s) 910. The client(s) 910 can be hardware and/or software (e.g., threads, processes, computing devices). The system 900 also includes one or more server(s) 920. The server(s) 920 can be hardware and/or software (e.g., threads, processes, computing devices). The servers 920 can house threads to perform transformations by employing the subject innovation, for example.
- One possible communication between a
client 910 and a server 920 can be in the form of a data packet adapted to be transmitted between two or more computer processes. The system 900 includes a communication framework 940 that can be employed to facilitate communications between the client(s) 910 and the server(s) 920. The client(s) 910 are operably connected to one or more client data store(s) 950 that can be employed to store information local to the client(s) 910. Similarly, the server(s) 920 are operably connected to one or more server data store(s) 930 that can be employed to store information local to the servers 920.
- With reference to
FIG. 10, an exemplary environment 1000 for implementing various aspects of the claimed subject matter includes a computer 1012. The computer 1012 includes a processing unit 1014, a system memory 1016, and a system bus 1018. The system bus 1018 couples system components including, but not limited to, the system memory 1016 to the processing unit 1014. The processing unit 1014 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1014.
- The
system bus 1018 can be any of several types of bus structure(s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA), Micro-Channel Architecture (MSA), Extended ISA (EISA), Intelligent Drive Electronics (IDE), VESA Local Bus (VLB), Peripheral Component Interconnect (PCI), Card Bus, Universal Serial Bus (USB), Advanced Graphics Port (AGP), Personal Computer Memory Card International Association bus (PCMCIA), Firewire (IEEE 1394), and Small Computer Systems Interface (SCSI). - The
system memory 1016 includes volatile memory 1020 and nonvolatile memory 1022. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1012, such as during start-up, is stored in nonvolatile memory 1022. By way of illustration, and not limitation, nonvolatile memory 1022 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 1020 includes random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
-
Computer 1012 also includes removable/non-removable, volatile/non-volatile computer storage media. FIG. 10 illustrates, for example, a disk storage 1024. Disk storage 1024 includes, but is not limited to, devices like a magnetic disk drive, floppy disk drive, tape drive, Jaz drive, Zip drive, LS-100 drive, flash memory card, or memory stick. In addition, disk storage 1024 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1024 to the system bus 1018, a removable or non-removable interface is typically used, such as interface 1026.
- It is to be appreciated that
FIG. 10 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 1000. Such software includes an operating system 1028. Operating system 1028, which can be stored on disk storage 1024, acts to control and allocate resources of the computer system 1012. System applications 1030 take advantage of the management of resources by operating system 1028 through program modules 1032 and program data 1034 stored either in system memory 1016 or on disk storage 1024. It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems.
- A user enters commands or information into the
computer 1012 through input device(s) 1036. Input devices 1036 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1014 through the system bus 1018 via interface port(s) 1038. Interface port(s) 1038 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB). Output device(s) 1040 use some of the same type of ports as input device(s) 1036. Thus, for example, a USB port may be used to provide input to computer 1012, and to output information from computer 1012 to an output device 1040. Output adapter 1042 is provided to illustrate that there are some output devices 1040, like monitors, speakers, and printers, among other output devices 1040, which require special adapters. The output adapters 1042 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1040 and the system bus 1018. It should be noted that other devices and/or systems of devices provide both input and output capabilities, such as remote computer(s) 1044.
-
Computer 1012 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer(s) 1044. The remote computer(s) 1044 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device or other common network node and the like, and typically includes many or all of the elements described relative to computer 1012. For purposes of brevity, only a memory storage device 1046 is illustrated with remote computer(s) 1044. Remote computer(s) 1044 is logically connected to computer 1012 through a network interface 1048 and then physically connected via communication connection 1050. Network interface 1048 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN). LAN technologies include Fiber Distributed Data Interface (FDDI), Copper Distributed Data Interface (CDDI), Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL).
- Communication connection(s) 1050 refers to the hardware/software employed to connect the
network interface 1048 to the bus 1018. While communication connection 1050 is shown for illustrative clarity inside computer 1012, it can also be external to computer 1012. The hardware/software necessary for connection to the network interface 1048 includes, for exemplary purposes only, internal and external technologies such as modems, including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and Ethernet cards.
- What has been described above includes examples of the subject innovation. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the subject innovation are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims.
- In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
- There are multiple ways of implementing the present innovation, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc., which enables applications and services to use the techniques of the invention. The claimed subject matter contemplates use from the standpoint of an API (or other software object), as well as from a software or hardware object that operates according to the techniques in accordance with the invention. Thus, various implementations of the innovation described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
- The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
- In addition, while a particular feature of the subject innovation may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
Claims (21)
1-20. (canceled)
21. One or more computer-readable media storing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform acts comprising:
obtaining, from one or more camera devices, a plurality of images of a person in a physical environment;
generating a three dimensional (3D) avatar of the person based at least in part on the plurality of images, the 3D avatar having dimensions that are scaled based at least in part on dimensions of the person; and
importing the 3D avatar into a virtual gaming environment.
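The claims do not prescribe an implementation for the scaling limitation of claim 21. As a minimal illustrative sketch (the function name, the bare list-of-tuples mesh representation, and the sample measurements are hypothetical, not from the patent), the avatar could be scaled uniformly so its height matches the person's measured height:

```python
# Hypothetical sketch of the "dimensions ... scaled" step of claim 21:
# uniformly scale mesh vertices so the avatar's height along the y axis
# equals the person's measured height.

def scale_avatar(vertices, person_height_m):
    """Return vertices scaled so the mesh spans person_height_m along y."""
    ys = [v[1] for v in vertices]
    mesh_height = max(ys) - min(ys)
    s = person_height_m / mesh_height
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]

# A 2.0 m-tall placeholder mesh scaled to match a 1.8 m person:
verts = [(0.0, 0.0, 0.0), (0.5, 2.0, 0.5)]
scaled = scale_avatar(verts, 1.8)
```

Uniform scaling preserves the avatar's proportions; per-axis factors could instead be derived from individual body measurements.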
22. The one or more computer-readable media of claim 21, wherein the acts further comprise:
reducing a number of features of the 3D avatar to a set of core features.
23. The one or more computer-readable media of claim 21, wherein the generating comprises generating a facial surface for the 3D avatar that reflects a facial surface of the person.
24. The one or more computer-readable media of claim 21, wherein the generating comprises generating a surface for the 3D avatar from the plurality of images of the person.
25. The one or more computer-readable media of claim 21, wherein the plurality of images of the person includes at least two images that represent different points-of-view of the person.
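The two-viewpoint limitation of claim 25 is what makes metric 3D recovery possible: for two calibrated cameras, the depth of a feature matched in both images follows the textbook pinhole-stereo relation. This is a standard-geometry sketch, not the patent's method, and the numbers are invented:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo: a point seen in both images with pixel offset
    (disparity) d lies at depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# 1000 px focal length, 10 cm camera baseline, 50 px disparity:
z = depth_from_disparity(1000.0, 0.10, 50.0)  # about 2.0 m
```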
26. The one or more computer-readable media of claim 21, wherein the acts further comprise:
causing a feature of the 3D avatar to be exaggerated.
27. The one or more computer-readable media of claim 26, wherein the causing the feature of the 3D avatar to be exaggerated comprises adjusting a size of the feature of the 3D avatar.
28. The one or more computer-readable media of claim 21, wherein a surface of the 3D avatar has a color that matches a color of a corresponding surface on the person.
29. A system comprising:
one or more processors;
memory communicatively coupled to the one or more processors;
a content aggregator stored in the memory and executable by the one or more processors to collect a plurality of images of a person in a physical environment;
a model component stored in the memory and executable by the one or more processors to:
create, based at least in part on the plurality of images, a three dimensional (3D) model representing an avatar of the person, the 3D model having dimensions that are based at least in part on dimensions of the person; and
cause the 3D model to be used within a virtual environment.
30. The system of claim 29, wherein the model component is further configured to reduce features of the 3D model to a set of core features.
31. The system of claim 29, wherein the 3D model includes a surface that reflects a facial surface of the person.
32. The system of claim 29, wherein the plurality of images of the person includes at least two images that represent different points-of-view of the person.
33. The system of claim 29, further comprising:
an editor component stored in the memory and executable by the one or more processors to cause a feature of the 3D model to be exaggerated by adjusting a size of the feature of the 3D model.
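The size-adjusting exaggeration recited in claims 27 and 33 is left unspecified. One plausible sketch (purely illustrative function name and data) scales only the vertices belonging to the chosen feature about that feature's centroid, leaving the rest of the model untouched:

```python
def exaggerate_feature(vertices, feature_idx, factor):
    """Scale the selected feature's vertices about their own centroid."""
    sel = [vertices[i] for i in feature_idx]
    cx = sum(v[0] for v in sel) / len(sel)
    cy = sum(v[1] for v in sel) / len(sel)
    cz = sum(v[2] for v in sel) / len(sel)
    out = list(vertices)
    for i in feature_idx:
        x, y, z = vertices[i]
        out[i] = (cx + (x - cx) * factor,
                  cy + (y - cy) * factor,
                  cz + (z - cz) * factor)
    return out

# Enlarge a two-vertex "nose" by 50%; the third vertex stays put:
model = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (9.0, 9.0, 9.0)]
bigger = exaggerate_feature(model, [0, 1], 1.5)
```

Scaling about the feature's centroid keeps the feature anchored in place while it grows, which is what makes the change read as an exaggeration rather than a translation.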
34. The system of claim 29, wherein a surface of the 3D model has a color that matches a color of a corresponding surface on the person.
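The color-matching limitation of claims 28, 34, and 39 could be realized by sampling the captured images: average the pixels that a given surface projects onto and assign that average as the surface color. This is an illustrative sketch; the image-as-nested-lists representation and the region indexing are assumptions, not the patent's method:

```python
def matched_surface_color(image, region):
    """Average the RGB of the image pixels belonging to one surface.

    image  -- rows of (r, g, b) tuples
    region -- (x, y) pixel coordinates the surface projects onto
    """
    pixels = [image[y][x] for (x, y) in region]
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# Two skin-tone pixels averaged into one surface color:
img = [[(200, 160, 140), (210, 170, 150)]]
color = matched_surface_color(img, [(0, 0), (1, 0)])  # (205, 165, 145)
```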
35. A method comprising:
receiving, by a computing device and from one or more cameras, a plurality of images of a person in a physical environment;
generating, by the computing device and based at least in part on the plurality of images, a three dimensional (3D) model representing an avatar of the person, the 3D model including surfaces that are scaled based at least in part on surfaces of the person; and
causing, by the computing device, the 3D model to be used within a virtual environment.
36. The method of claim 35, wherein the 3D model includes a surface that reflects a facial surface of the person.
37. The method of claim 35, wherein the plurality of images of the person includes at least two images that represent different points-of-view of the person.
38. The method of claim 35, further comprising:
adjusting a size of a feature of the 3D model.
39. The method of claim 35, wherein a surface of the 3D model has a color that matches a color of a corresponding surface on the person.
40. The method of claim 35, further comprising:
reducing a number of features of the 3D model to a set of core features.
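The "set of core features" reduction recited in claims 22, 30, and 40 is likewise unspecified. One simple reading (illustrative only; the importance scores and feature names are invented) ranks detected features by an importance score and keeps only the top few:

```python
def reduce_to_core(features, keep):
    """Keep only the `keep` highest-importance features."""
    ranked = sorted(features, key=lambda f: f["importance"], reverse=True)
    return ranked[:keep]

detected = [
    {"name": "nose", "importance": 0.9},
    {"name": "freckle", "importance": 0.1},
    {"name": "eyes", "importance": 0.8},
]
core = reduce_to_core(detected, 2)  # nose and eyes survive
```

A real system would more likely use mesh decimation or a learned saliency measure, but any such method fits the claim language so long as the feature count goes down.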
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/737,098 US20150310662A1 (en) | 2008-05-07 | 2015-06-11 | Procedural authoring |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/116,323 US8737721B2 (en) | 2008-05-07 | 2008-05-07 | Procedural authoring |
US14/286,264 US9659406B2 (en) | 2008-05-07 | 2014-05-23 | Procedural authoring |
US14/737,098 US20150310662A1 (en) | 2008-05-07 | 2015-06-11 | Procedural authoring |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/286,264 Continuation US9659406B2 (en) | 2008-05-07 | 2014-05-23 | Procedural authoring |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150310662A1 true US20150310662A1 (en) | 2015-10-29 |
Family
ID=41266935
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/116,323 Active 2031-03-16 US8737721B2 (en) | 2008-05-07 | 2008-05-07 | Procedural authoring |
US14/286,264 Active US9659406B2 (en) | 2008-05-07 | 2014-05-23 | Procedural authoring |
US14/737,098 Abandoned US20150310662A1 (en) | 2008-05-07 | 2015-06-11 | Procedural authoring |
US15/474,933 Active US10217294B2 (en) | 2008-05-07 | 2017-03-30 | Procedural authoring |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/116,323 Active 2031-03-16 US8737721B2 (en) | 2008-05-07 | 2008-05-07 | Procedural authoring |
US14/286,264 Active US9659406B2 (en) | 2008-05-07 | 2014-05-23 | Procedural authoring |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/474,933 Active US10217294B2 (en) | 2008-05-07 | 2017-03-30 | Procedural authoring |
Country Status (1)
Country | Link |
---|---|
US (4) | US8737721B2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150243071A1 (en) * | 2012-06-17 | 2015-08-27 | Spaceview Inc. | Method for providing scale to align 3d objects in 2d environment |
US20150332509A1 (en) * | 2014-05-13 | 2015-11-19 | Spaceview Inc. | Method for moving and aligning 3d objects in a plane within the 2d environment |
US9659406B2 (en) | 2008-05-07 | 2017-05-23 | Microsoft Technology Licensing, Llc | Procedural authoring |
US11317082B2 (en) | 2018-01-25 | 2022-04-26 | Sony Corporation | Information processing apparatus and information processing method |
Families Citing this family (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7963448B2 (en) * | 2004-12-22 | 2011-06-21 | Cognex Technology And Investment Corporation | Hand held machine vision method and apparatus |
US9552506B1 (en) | 2004-12-23 | 2017-01-24 | Cognex Technology And Investment Llc | Method and apparatus for industrial identification mark verification |
US9734376B2 (en) | 2007-11-13 | 2017-08-15 | Cognex Corporation | System and method for reading patterns using multiple image frames |
US8346017B2 (en) * | 2008-04-30 | 2013-01-01 | Microsoft Corporation | Intermediate point between images to insert/overlay ads |
US20090295791A1 (en) * | 2008-05-29 | 2009-12-03 | Microsoft Corporation | Three-dimensional environment created from video |
US8334900B2 (en) * | 2008-07-21 | 2012-12-18 | The Hong Kong University Of Science And Technology | Apparatus and method of optical imaging for medical diagnosis |
EP2452228A4 (en) * | 2009-07-10 | 2015-06-03 | Front Street Invest Man Inc As Manager For Front Street Diversified Income Class | Method and apparatus for generating three dimensional image information using a single imaging path |
KR20110051044A (en) * | 2009-11-09 | 2011-05-17 | 광주과학기술원 | Method and apparatus for providing users with haptic information on a 3-d object |
US8694553B2 (en) * | 2010-06-07 | 2014-04-08 | Gary Stephen Shuster | Creation and use of virtual places |
JP5704854B2 (en) * | 2010-07-26 | 2015-04-22 | オリンパスイメージング株式会社 | Display device |
JP5652097B2 (en) * | 2010-10-01 | 2015-01-14 | ソニー株式会社 | Image processing apparatus, program, and image processing method |
US20120179983A1 (en) * | 2011-01-07 | 2012-07-12 | Martin Lemire | Three-dimensional virtual environment website |
US8675953B1 (en) * | 2011-02-02 | 2014-03-18 | Intuit Inc. | Calculating an object size using images |
US8314790B1 (en) * | 2011-03-29 | 2012-11-20 | Google Inc. | Layer opacity adjustment for a three-dimensional object |
US9724600B2 (en) * | 2011-06-06 | 2017-08-08 | Microsoft Technology Licensing, Llc | Controlling objects in a virtual environment |
US9606992B2 (en) * | 2011-09-30 | 2017-03-28 | Microsoft Technology Licensing, Llc | Personal audio/visual apparatus providing resource management |
US9336625B2 (en) * | 2011-10-25 | 2016-05-10 | Microsoft Technology Licensing, Llc | Object refinement using many data sets |
US9443353B2 (en) * | 2011-12-01 | 2016-09-13 | Qualcomm Incorporated | Methods and systems for capturing and moving 3D models and true-scale metadata of real world objects |
US9626798B2 (en) | 2011-12-05 | 2017-04-18 | At&T Intellectual Property I, L.P. | System and method to digitally replace objects in images or video |
US9236024B2 (en) | 2011-12-06 | 2016-01-12 | Glasses.Com Inc. | Systems and methods for obtaining a pupillary distance measurement using a mobile computing device |
US20130286161A1 (en) * | 2012-04-25 | 2013-10-31 | Futurewei Technologies, Inc. | Three-dimensional face recognition for mobile devices |
US9483853B2 (en) | 2012-05-23 | 2016-11-01 | Glasses.Com Inc. | Systems and methods to display rendered images |
US9235929B2 (en) | 2012-05-23 | 2016-01-12 | Glasses.Com Inc. | Systems and methods for efficiently processing virtual 3-D data |
US9286715B2 (en) | 2012-05-23 | 2016-03-15 | Glasses.Com Inc. | Systems and methods for adjusting a virtual try-on |
US20150006361A1 (en) | 2013-06-28 | 2015-01-01 | Google Inc. | Extracting Card Data Using Three-Dimensional Models |
US10387729B2 (en) | 2013-07-09 | 2019-08-20 | Outward, Inc. | Tagging virtualized content |
US10529026B2 (en) * | 2013-07-16 | 2020-01-07 | Esurance Insurance Services, Inc. | Property inspection using aerial imagery |
US9607437B2 (en) * | 2013-10-04 | 2017-03-28 | Qualcomm Incorporated | Generating augmented reality content for unknown objects |
US9229674B2 (en) | 2014-01-31 | 2016-01-05 | Ebay Inc. | 3D printing: marketplace with federated access to printers |
CN104143212A (en) * | 2014-07-02 | 2014-11-12 | 惠州Tcl移动通信有限公司 | Reality augmenting method and system based on wearable device |
BR112017004783A2 (en) * | 2014-09-10 | 2017-12-12 | Hasbro Inc | toy system with manually operated digitizer |
CN104504175A (en) * | 2014-11-27 | 2015-04-08 | 上海卫星装备研究所 | Simulation system and simulation method for spacecraft assembling |
US9595037B2 (en) | 2014-12-16 | 2017-03-14 | Ebay Inc. | Digital rights and integrity management in three-dimensional (3D) printing |
US20160167307A1 (en) * | 2014-12-16 | 2016-06-16 | Ebay Inc. | Systems and methods for 3d digital printing |
US9857939B2 (en) * | 2015-02-27 | 2018-01-02 | Accenture Global Services Limited | Three-dimensional virtualization |
WO2016138567A1 (en) * | 2015-03-05 | 2016-09-09 | Commonwealth Scientific And Industrial Research Organisation | Structure modelling |
WO2016171730A1 (en) * | 2015-04-24 | 2016-10-27 | Hewlett-Packard Development Company, L.P. | Three-dimensional object representation |
US10169917B2 (en) | 2015-08-20 | 2019-01-01 | Microsoft Technology Licensing, Llc | Augmented reality |
US10235808B2 (en) | 2015-08-20 | 2019-03-19 | Microsoft Technology Licensing, Llc | Communication system |
US9881584B2 (en) * | 2015-09-10 | 2018-01-30 | Nbcuniversal Media, Llc | System and method for presenting content within virtual reality environment |
US10168152B2 (en) * | 2015-10-02 | 2019-01-01 | International Business Machines Corporation | Using photogrammetry to aid identification and assembly of product parts |
US10599879B2 (en) * | 2016-06-17 | 2020-03-24 | Dassault Systemes Simulia Corp. | Optimal pressure-projection method for incompressible transient and steady-state navier-stokes equations |
US10360736B2 (en) * | 2017-03-15 | 2019-07-23 | Facebook, Inc. | Visual editor for designing augmented-reality effects |
US11436811B2 (en) | 2017-04-25 | 2022-09-06 | Microsoft Technology Licensing, Llc | Container-based virtual camera rotation |
JP2018195241A (en) * | 2017-05-22 | 2018-12-06 | ソニー株式会社 | Information processing apparatus, information processing method, and program |
CN108379840A (en) * | 2018-01-30 | 2018-08-10 | 珠海金山网络游戏科技有限公司 | A kind of system and method for virtual scene simulation legitimate object model |
CN108961395B (en) * | 2018-07-03 | 2019-07-30 | 上海亦我信息技术有限公司 | A method of three dimensional spatial scene is rebuild based on taking pictures |
US10733800B2 (en) * | 2018-09-17 | 2020-08-04 | Facebook Technologies, Llc | Reconstruction of essential visual cues in mixed reality applications |
EP3671660A1 (en) * | 2018-12-20 | 2020-06-24 | Dassault Systèmes | Designing a 3d modeled object via user-interaction |
CN110058421B (en) * | 2019-05-05 | 2023-11-21 | 成都工业学院 | Photo-induced compatible stereoscopic display printed matter |
WO2020248000A1 (en) * | 2019-06-13 | 2020-12-17 | 4D Mapper Pty Ltd | A contained area network and a processor |
KR102506701B1 (en) * | 2019-12-20 | 2023-03-06 | 우이시 테크놀로지스 (저지앙) 리미티드 | 3D reconstruction method, device, system and computer readable storage medium |
CN113478833B (en) * | 2021-06-28 | 2022-05-20 | 华中科技大学 | 3D printing forming method based on skeleton line contour recognition and region segmentation |
US11836205B2 (en) | 2022-04-20 | 2023-12-05 | Meta Platforms Technologies, Llc | Artificial reality browser configured to trigger an immersive experience |
US11755180B1 (en) | 2022-06-22 | 2023-09-12 | Meta Platforms Technologies, Llc | Browser enabled switching between virtual worlds in artificial reality |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040247174A1 (en) * | 2000-01-20 | 2004-12-09 | Canon Kabushiki Kaisha | Image processing apparatus |
Family Cites Families (66)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5255352A (en) | 1989-08-03 | 1993-10-19 | Computer Design, Inc. | Mapping of two-dimensional surface detail on three-dimensional surfaces |
US5301117A (en) | 1991-10-30 | 1994-04-05 | Giorgio Riga | Method for creating a three-dimensional corporeal model from a very small original |
US5818959A (en) | 1995-10-04 | 1998-10-06 | Visual Interface, Inc. | Method of producing a three-dimensional image from two-dimensional images |
US5748199A (en) | 1995-12-20 | 1998-05-05 | Synthonics Incorporated | Method and apparatus for converting a two dimensional motion picture into a three dimensional motion picture |
GB2317007B (en) | 1996-09-04 | 2000-07-26 | Spectrum Tech Ltd | Contrast determining apparatus and methods |
US6009210A (en) * | 1997-03-05 | 1999-12-28 | Digital Equipment Corporation | Hands-free interface to a virtual reality environment using head tracking |
US6094215A (en) | 1998-01-06 | 2000-07-25 | Intel Corporation | Method of determining relative camera orientation position to create 3-D visual images |
US6333749B1 (en) | 1998-04-17 | 2001-12-25 | Adobe Systems, Inc. | Method and apparatus for image assisted modeling of three-dimensional scenes |
US6434265B1 (en) | 1998-09-25 | 2002-08-13 | Apple Computers, Inc. | Aligning rectilinear images in 3D through projective registration and calibration |
US6608628B1 (en) * | 1998-11-06 | 2003-08-19 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration (Nasa) | Method and apparatus for virtual interactive medical imaging by multiple remotely-located users |
US6310619B1 (en) * | 1998-11-10 | 2001-10-30 | Robert W. Rice | Virtual reality, tissue-specific body model having user-variable tissue-specific attributes and a system and method for implementing the same |
US6278460B1 (en) | 1998-12-15 | 2001-08-21 | Point Cloud, Inc. | Creating a three-dimensional model from two-dimensional images |
JP2000207575A (en) | 1999-01-08 | 2000-07-28 | Nadeisu:Kk | Space fusing device and application devices adapting the same |
US6456287B1 (en) | 1999-02-03 | 2002-09-24 | Isurftv | Method and apparatus for 3D model creation based on 2D images |
US6571024B1 (en) | 1999-06-18 | 2003-05-27 | Sarnoff Corporation | Method and apparatus for multi-view three dimensional estimation |
JP3387856B2 (en) | 1999-08-06 | 2003-03-17 | キヤノン株式会社 | Image processing method, image processing device, and storage medium |
US6549201B1 (en) | 1999-11-23 | 2003-04-15 | Center For Advanced Science And Technology Incubation, Ltd. | Method for constructing a 3D polygonal surface from a 2D silhouette by using computer, apparatus thereof and storage medium |
US7657083B2 (en) * | 2000-03-08 | 2010-02-02 | Cyberextruder.Com, Inc. | System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images |
US7065242B2 (en) * | 2000-03-28 | 2006-06-20 | Viewpoint Corporation | System and method of three-dimensional image capture and modeling |
US7254265B2 (en) | 2000-04-01 | 2007-08-07 | Newsight Corporation | Methods and systems for 2D/3D image conversion and optimization |
US7027642B2 (en) * | 2000-04-28 | 2006-04-11 | Orametrix, Inc. | Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects |
EP1371019A2 (en) | 2001-01-26 | 2003-12-17 | Zaxel Systems, Inc. | Real-time virtual viewpoint in simulated reality environment |
US7194112B2 (en) | 2001-03-12 | 2007-03-20 | Eastman Kodak Company | Three dimensional spatial panorama formation with a range imaging system |
US7717708B2 (en) * | 2001-04-13 | 2010-05-18 | Orametrix, Inc. | Method and system for integrated orthodontic treatment planning using unified workstation |
US7156655B2 (en) * | 2001-04-13 | 2007-01-02 | Orametrix, Inc. | Method and system for comprehensive evaluation of orthodontic treatment using unified workstation |
US20040247157A1 (en) * | 2001-06-15 | 2004-12-09 | Ulrich Lages | Method for preparing image information |
US7046840B2 (en) * | 2001-11-09 | 2006-05-16 | Arcsoft, Inc. | 3-D reconstruction engine |
DE10156908A1 (en) * | 2001-11-21 | 2003-05-28 | Corpus E Ag | Determination of a person's shape using photogrammetry, whereby the person wears elastic clothing with photogrammetric markings and stands on a surface with reference photogrammetric markings |
US6795069B2 (en) | 2002-05-29 | 2004-09-21 | Mitsubishi Electric Research Laboratories, Inc. | Free-form modeling of objects with variational implicit surfaces |
US7643669B2 (en) * | 2002-07-10 | 2010-01-05 | Harman Becker Automotive Systems Gmbh | System for generating three-dimensional electronic models of objects |
GB0224449D0 (en) * | 2002-10-21 | 2002-11-27 | Canon Europa Nv | Apparatus and method for generating texture maps for use in 3D computer graphics |
US7146036B2 (en) * | 2003-02-03 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Multiframe correspondence estimation |
US7142726B2 (en) | 2003-03-19 | 2006-11-28 | Mitsubishi Electric Research Labs, Inc. | Three-dimensional scene reconstruction from labeled two-dimensional images |
US7212664B2 (en) | 2003-08-07 | 2007-05-01 | Mitsubishi Electric Research Laboratories, Inc. | Constructing heads from 3D models and 2D silhouettes |
US7747067B2 (en) * | 2003-10-08 | 2010-06-29 | Purdue Research Foundation | System and method for three dimensional modeling |
FI117490B (en) * | 2004-03-15 | 2006-10-31 | Geodeettinen Laitos | Procedure for defining attributes for tree stocks using a laser scanner, image information and interpretation of individual trees |
US7436988B2 (en) * | 2004-06-03 | 2008-10-14 | Arizona Board Of Regents | 3D face authentication and recognition based on bilateral symmetry analysis |
US7292257B2 (en) | 2004-06-28 | 2007-11-06 | Microsoft Corporation | Interactive viewpoint video system and process |
US7697748B2 (en) | 2004-07-06 | 2010-04-13 | Dimsdale Engineering, Llc | Method and apparatus for high resolution 3D imaging as a function of camera position, camera trajectory and range |
US7627173B2 (en) * | 2004-08-02 | 2009-12-01 | Siemens Medical Solutions Usa, Inc. | GGN segmentation in pulmonary images for accuracy and consistency |
US20070065002A1 (en) * | 2005-02-18 | 2007-03-22 | Laurence Marzell | Adaptive 3D image modelling system and apparatus and method therefor |
EP1851727A4 (en) * | 2005-02-23 | 2008-12-03 | Craig Summers | Automatic scene modeling for the 3d camera and 3d video |
US7352370B2 (en) * | 2005-06-02 | 2008-04-01 | Accuray Incorporated | Four-dimensional volume of interest |
KR20060131145A (en) | 2005-06-15 | 2006-12-20 | 엘지전자 주식회사 | Randering method of three dimension object using two dimension picture |
WO2007027847A2 (en) * | 2005-09-01 | 2007-03-08 | Geosim Systems Ltd. | System and method for cost-effective, high-fidelity 3d-modeling of large-scale urban environments |
US8625854B2 (en) * | 2005-09-09 | 2014-01-07 | Industrial Research Limited | 3D scene scanner and a position and orientation system |
US7840032B2 (en) | 2005-10-04 | 2010-11-23 | Microsoft Corporation | Street-side maps and paths |
WO2007041696A2 (en) | 2005-10-04 | 2007-04-12 | Alexander Eugene J | System and method for calibrating a set of imaging devices and calculating 3d coordinates of detected features in a laboratory coordinate system |
US20070104360A1 (en) | 2005-11-09 | 2007-05-10 | Smedia Technology Corporation | System and method for capturing 3D face |
US7605817B2 (en) | 2005-11-09 | 2009-10-20 | 3M Innovative Properties Company | Determining camera motion |
US7840042B2 (en) | 2006-01-20 | 2010-11-23 | 3M Innovative Properties Company | Superposition for visualization of three-dimensional data acquisition |
US7542886B2 (en) * | 2006-01-27 | 2009-06-02 | Autodesk, Inc. | Method and apparatus for extensible utility network part types and part properties in 3D computer models |
US7856125B2 (en) | 2006-01-31 | 2010-12-21 | University Of Southern California | 3D face reconstruction from 2D images |
US8477154B2 (en) * | 2006-03-20 | 2013-07-02 | Siemens Energy, Inc. | Method and system for interactive virtual inspection of modeled objects |
US20070237356A1 (en) | 2006-04-07 | 2007-10-11 | John Dwinell | Parcel imaging system and method |
WO2008002630A2 (en) | 2006-06-26 | 2008-01-03 | University Of Southern California | Seamless image integration into 3d models |
US7840046B2 (en) | 2006-06-27 | 2010-11-23 | Siemens Medical Solutions Usa, Inc. | System and method for detection of breast masses and calcifications using the tomosynthesis projection and reconstructed images |
US20080112610A1 (en) * | 2006-11-14 | 2008-05-15 | S2, Inc. | System and method for 3d model generation |
US8059124B2 (en) * | 2006-11-28 | 2011-11-15 | Adobe Systems Incorporated | Temporary non-tiled rendering of 3D objects |
US8049658B1 (en) | 2007-05-25 | 2011-11-01 | Lockheed Martin Corporation | Determination of the three-dimensional location of a target viewed by a camera |
DE602007003849D1 (en) * | 2007-10-11 | 2010-01-28 | Mvtec Software Gmbh | System and method for 3D object recognition |
US8737721B2 (en) | 2008-05-07 | 2014-05-27 | Microsoft Corporation | Procedural authoring |
US8204299B2 (en) | 2008-06-12 | 2012-06-19 | Microsoft Corporation | 3D content aggregation built into devices |
US8743114B2 (en) * | 2008-09-22 | 2014-06-03 | Intel Corporation | Methods and systems to determine conservative view cell occlusion |
US8295589B2 (en) * | 2010-05-20 | 2012-10-23 | Microsoft Corporation | Spatially registering user photographs |
US8363930B1 (en) * | 2012-07-23 | 2013-01-29 | Google Inc. | Use of materials and appearances to merge scanned images |
- 2008-05-07: US 12/116,323, patent US8737721B2 (en), Active
- 2014-05-23: US 14/286,264, patent US9659406B2 (en), Active
- 2015-06-11: US 14/737,098, publication US20150310662A1 (en), Abandoned
- 2017-03-30: US 15/474,933, patent US10217294B2 (en), Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040247174A1 (en) * | 2000-01-20 | 2004-12-09 | Canon Kabushiki Kaisha | Image processing apparatus |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9659406B2 (en) | 2008-05-07 | 2017-05-23 | Microsoft Technology Licensing, Llc | Procedural authoring |
US10217294B2 (en) | 2008-05-07 | 2019-02-26 | Microsoft Technology Licensing, Llc | Procedural authoring |
US20150243071A1 (en) * | 2012-06-17 | 2015-08-27 | Spaceview Inc. | Method for providing scale to align 3d objects in 2d environment |
US11869157B2 (en) | 2012-06-17 | 2024-01-09 | West Texas Technology Partners, Llc | Method for providing scale to align 3D objects in 2D environment |
US11182975B2 (en) | 2012-06-17 | 2021-11-23 | Atheer, Inc. | Method for providing scale to align 3D objects in 2D environment |
US10796490B2 (en) | 2012-06-17 | 2020-10-06 | Atheer, Inc. | Method for providing scale to align 3D objects in 2D environment |
US10216355B2 (en) * | 2012-06-17 | 2019-02-26 | Atheer, Inc. | Method for providing scale to align 3D objects in 2D environment |
US10635757B2 (en) | 2014-05-13 | 2020-04-28 | Atheer, Inc. | Method for replacing 3D objects in 2D environment |
US10296663B2 (en) * | 2014-05-13 | 2019-05-21 | Atheer, Inc. | Method for moving and aligning 3D objects in a plane within the 2D environment |
US9977844B2 (en) | 2014-05-13 | 2018-05-22 | Atheer, Inc. | Method for providing a projection to align 3D objects in 2D environment |
US10867080B2 (en) | 2014-05-13 | 2020-12-15 | Atheer, Inc. | Method for moving and aligning 3D objects in a plane within the 2D environment |
US9971853B2 (en) | 2014-05-13 | 2018-05-15 | Atheer, Inc. | Method for replacing 3D objects in 2D environment |
US11341290B2 (en) | 2014-05-13 | 2022-05-24 | West Texas Technology Partners, Llc | Method for moving and aligning 3D objects in a plane within the 2D environment |
US11544418B2 (en) | 2014-05-13 | 2023-01-03 | West Texas Technology Partners, Llc | Method for replacing 3D objects in 2D environment |
US20150332509A1 (en) * | 2014-05-13 | 2015-11-19 | Spaceview Inc. | Method for moving and aligning 3d objects in a plane within the 2d environment |
US11914928B2 (en) | 2014-05-13 | 2024-02-27 | West Texas Technology Partners, Llc | Method for moving and aligning 3D objects in a plane within the 2D environment |
US11317082B2 (en) | 2018-01-25 | 2022-04-26 | Sony Corporation | Information processing apparatus and information processing method |
Also Published As
Publication number | Publication date |
---|---|
US20170206714A1 (en) | 2017-07-20 |
US10217294B2 (en) | 2019-02-26 |
US20140254921A1 (en) | 2014-09-11 |
US20090279784A1 (en) | 2009-11-12 |
US8737721B2 (en) | 2014-05-27 |
US9659406B2 (en) | 2017-05-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10217294B2 (en) | Procedural authoring | |
US8346017B2 (en) | Intermediate point between images to insert/overlay ads | |
US8204299B2 (en) | 3D content aggregation built into devices | |
US20090254867A1 (en) | Zoom for annotatable margins | |
CA2704706C (en) | Trade card services | |
US20090295791A1 (en) | Three-dimensional environment created from video | |
Tatzgern | Situated visualization in augmented reality | |
US11734888B2 (en) | Real-time 3D facial animation from binocular video | |
Bauer et al. | UASOL, a large-scale high-resolution outdoor stereo dataset | |
Soliman et al. | Artificial intelligence powered Metaverse: analysis, challenges and future perspectives | |
CN116319862A (en) | System and method for intelligently matching digital libraries | |
Kerim et al. | NOVA: Rendering virtual worlds with humans for computer vision tasks | |
Storeide et al. | Standardization of digitized heritage: a review of implementations of 3D in cultural heritage | |
US20090172570A1 (en) | Multiscaled trade cards | |
AU2023204419A1 (en) | Multidimentional image editing from an input image | |
US10970330B1 (en) | Method of searching images using rotational gesture input | |
Du | Fusing multimedia data into dynamic virtual environments | |
Petric et al. | Real teaching and learning through virtual reality | |
Lenkoe | Enhancing spatial image datasets for utilisation in a simulator | |
Fime et al. | Automatic Scene Generation: State-of-the-Art Techniques, Models, Datasets, Challenges, and Future Prospects | |
Mehta | Virtual reality applications in the field of architectural reconstructions | |
Beebe | A Complete Bibliography of Publications in IEEE MultiMedia | |
Sauter | Introduction to crime scene reconstruction using real-time interactive 3D technology | |
Schofield | Graphical evidence: forensic animations and virtual reconstructions | |
WO2019023959A1 (en) | Smart terminal-based spatial layout control method and spatial layout control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: MICROSOFT CORPORATION, WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARCAS, BLAISE AGUERA Y.;BREWER, BRETT D;FAROUKI, KARIM;AND OTHERS;SIGNING DATES FROM 20080428 TO 20080506;REEL/FRAME:036267/0316 |
AS | Assignment | Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC., WASHINGTON; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:036267/0403; Effective date: 20141014 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |