JP2018163692A - System and method for rendering augmented reality content by albedo model - Google Patents
- Publication number: JP2018163692A
- Application number: JP2018117683A
- Authority: JP (Japan)
- Prior art keywords: content, albedo, rendering, shading, digital representation
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T19/006—Mixed reality
- G06T15/50—Lighting effects
- G06T15/80—Shading
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T7/20—Analysis of motion
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T2200/04—Indexing scheme for image data processing or generation, in general, involving 3D image data
- G06T2207/20081—Training; Learning
- G06T2215/16—Using real world measurements to influence rendering
Abstract
Description
This application claims priority from US Provisional Application No. 61/992,804, filed May 13, 2014, the entire contents of which are incorporated herein by reference.
The technical field of the present invention relates to augmented reality technology.
The following background description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication referred to directly or indirectly is prior art.
With the widespread use of camera-equipped mobile devices (e.g., mobile phones, tablets, game systems, etc.), demand for augmented or virtual reality content continues to grow. Augmented reality content can be superimposed on images of real-world objects to enhance the user's experience. For example, U.S. Patent No. 6,546,309 to Gazzuolo, filed June 29, 2001 and entitled "Virtual Fitting Room", describes a process for determining a consumer's measurements from a mathematical model of the body. The measurements are then used to adjust a clothing model that can be superimposed on an image of the consumer. That is, the Gazzuolo method allows the consumer to try on clothes "virtually".
There are many market segments in which augmented reality (AR) content is superimposed on the real world. However, to a person viewing such content, the superimposed AR content still appears artificial. This is because the AR content is rendered as coarse computer-generated graphics that do not blend with the real-world environment as sensed by the device's sensors.
Attempts have also been made to make such content more natural. US Patent Publication No. 2013/0002698 to Geiger et al., filed June 30, 2011 and entitled "Virtual Lens-Rendering Augmented Reality Lens", describes a technique for correcting the lighting characteristics of a scene from ambient lighting information. Such corrections make the re-rendered scene more realistic.
There have also been attempts to correct images using object information. For example, US Pat. No. 8,538,144 to Benitez et al., entitled "Methods and Systems for Color Correction of 3D Images" and having an international filing date of November 21, 2006, describes correcting color information using albedo information for the imaged object. Further, US Patent Publication No. 2014/0085625 to Ahmed et al., entitled "Skin and Other Surface Classification using Albedo", describes determining the albedo of an object using albedo information for the type of material (e.g., skin).
US Pat. No. 7,324,688 to Moghaddam, filed February 14, 2005 and entitled "Face Relighting for Normalization of Directional Lighting", uses albedo to determine the illumination direction from an input image. Moghaddam uses skin albedo information to build an albedo map from an image of a person's face, from which lighting information can be obtained.
All publications identified herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent with or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.
The following description provides information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication referred to directly or indirectly is prior art.
In some embodiments, the numbers expressing quantities, properties such as concentration, reaction conditions, and so forth, used to describe and claim certain embodiments of the invention are to be understood as being modified in some instances by the term "about". Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective measurements.
Unless the context dictates the contrary, all ranges set forth herein should be interpreted as being inclusive of their endpoints, and open-ended ranges should be interpreted to include only commercially practical values. Similarly, all lists of values should be considered as inclusive of intermediate values unless the context indicates the contrary.
As used in the description herein and throughout the claims that follow, the singular forms include the plural unless the context clearly dictates otherwise. Also, as used herein, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise.
The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is deemed to contain the group as modified, thus fulfilling the written description of all Markush groups used in the appended claims.
It has yet to be appreciated that, in some circumstances, the true albedo of a manufactured object (e.g., a toy or a car) can be known (or estimated) in advance. Such information makes it possible to render augmented reality content more realistically with respect to the environment around the rendering device. Thus, there remains a need for rendering augmented reality content based on known object properties such as albedo.
The subject matter of the present invention provides apparatuses, systems, and methods by which augmented reality (AR) content can be displayed over an image of a real-world object in such a way that the AR content appears to be a natural part of the real-world environment. One aspect of the inventive subject matter includes a computer-implemented method of rendering AR content. In one embodiment, the method includes obtaining one or more predefined 3D albedo models of an object. The albedo model preferably includes information about the reflective nature of the object as well as geometric information about the model, such as normal vectors. The albedo model further includes known features of the object. The method further includes deriving features (e.g., FAST, SIFT, Harris corners, etc.) from a digital representation of the object, possibly an image or video frame containing a digital representation of the object. The rendering device then obtains AR content based on the observed features, where the AR content includes information about how it is to be presented with respect to the object (e.g., object model information, known features, video, program instructions, etc.). The rendering device can derive a posture of the object from the digital representation based on the observed object features. The rendering device then fits the albedo model to the posture, possibly based on a comparison of the observed features of the object with known features incorporated into the albedo model. The rendering device uses the digital representation and the fitted albedo model to derive observed shading, and uses the observed shading to derive an estimated shading model (sometimes referred to herein as an environmental shading model). In some embodiments, a sensor environment error map is also derived, which can include sensor error(s), object deviations or imperfections (e.g., dust, scratches, dirt, etc.), or other parameters; the sensor environment error map reflects the differences between the expected appearance of the object and the actual appearance captured by the sensor. The method further includes generating environmentally adjusted AR content by applying the estimated shading model to the AR content. In this manner, the rendering device converts the AR content into content having an appearance considered close to the appearance of the imaged object. Finally, the environmentally adjusted AR content is rendered on the device and presented for use by the user. In some embodiments, environmental artifacts identified in the environment error map are also rendered with the AR content to provide a more realistic feel.
Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
FIG. 1 schematically illustrates a method for rendering AR content.
FIG. 2 schematically illustrates object tracking and AR content rendering of a portion of the method of FIG.
FIG. 3 shows the characteristics of building a 3D albedo model from a known object.
FIG. 4 shows the alignment of the 3D albedo model to the observed posture of the object.
FIG. 5 graphically illustrates the process of obtaining adjusted and rendered augmented reality content.
FIG. 6 shows a graphical representation of the estimated shading model of FIG. 5 with respect to surface normals.
FIG. 7 graphically illustrates further processing for adjusting augmented reality rendering for the artifacts identified in the environmental error map.
FIG. 8 shows an example of a computer system that may be included in, or representative of, one or more rendering devices and/or other computers used to execute instruction code contained in a computer program product according to an embodiment of the present invention.
It should be noted that any language directed to a computer should be read to include any suitable combination of computing devices, including servers, interfaces, systems, databases, agents, peers, engines, controllers, or other types of computing device structures operating individually or collectively. It will be appreciated that the computing devices comprise a processor configured to execute software instructions stored on a tangible, non-transitory computer readable storage medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). The software instructions preferably configure the computing device to provide the roles, responsibilities, or other functionality discussed below with respect to the disclosed apparatus. The disclosed technologies can also be embodied as a computer program product that includes a non-transitory computer readable medium storing software instructions that cause a processor to execute the disclosed steps. In especially preferred embodiments, the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public/private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchange methods. Data exchanges can be conducted over a packet-switched network, the Internet, a LAN, a WAN, a VPN, or other type of packet-switched network.
It will be appreciated that the disclosed techniques provide many advantageous technical effects, including fitting an albedo model to an observed object. Based on this fit, a computing device is able to determine the conditions under which a sensor observed the object. AR content can then be rendered or presented for use by a user based on those conditions.
The following discussion provides many example embodiments of the inventive subject matter. Although each embodiment represents a single combination of inventive elements, the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus, if one embodiment comprises elements A, B, and C, and a second embodiment comprises elements B and D, then the inventive subject matter is also considered to include other remaining combinations of A, B, C, or D, even if not explicitly disclosed herein.
As used herein, and unless the context dictates otherwise, the term "connected to" is intended to include both direct connection (in which two elements that are connected to each other contact each other) and indirect connection (in which at least one additional element is located between the two elements). Therefore, the terms "connected to" and "connected with" are used synonymously.
In the following discussion, the inventive subject matter is presented in the context of overlaying AR content on a toy. When the AR content is displayed together with an image of the real-world object, the AR content is adjusted so that it blends naturally with its surroundings. The described example concerns superimposing AR content representing the doll's face (for example, an animation) on an image of the doll. It will be appreciated that the disclosed techniques can also be applied to other types of objects, such as print media, medical images, vehicles, buildings, appliances, or other types of objects.
FIG. 1 shows a method 100 for rendering AR content. FIGS. 3 and 4 provide details relating to some of the steps of method 100 and are discussed together with method 100. Method 100 represents a set of steps performed by a rendering device configured or programmed to operate according to the recited steps. The rendering device is a computing device having at least one processor and a memory storing software instructions that cause the device to render AR content as disclosed. Examples of devices that can be configured or programmed to operate as a rendering device include mobile phones, smartphones, tablets, phablets, game consoles, cameras or video cameras, vehicle navigation systems, robots, security systems, hand-held game devices, kiosks, appliances, or other types of computing devices.
Method 100 optionally begins with step 110, which includes creating a predefined 3D albedo model of an object. The model can be created using various techniques depending on the nature of the object. In the example shown in FIG. 3, the object is a doll 301. The albedo model 306 can be defined based on the known geometry of the doll and the known properties of the doll's materials. Thus, the model could be constructed from a computer-generated model of the doll together with a bill of materials indicating the properties of the doll's materials.
In FIG. 3, a graphical representation 302 of the albedo model 306 is also shown for clarity of illustration. It will be appreciated that the albedo model 306 is preferably a 3D model, possibly comprising a mesh. The albedo model has a number of properties that are useful to the disclosed techniques. First, the model includes normal vectors 307 distributed over the surface of the model; each normal vector is perpendicular to the surface. For each normal vector, the model further includes albedo information corresponding to that location. The albedo information indicates the true reflective nature of the object at that location. For example, the albedo information corresponding to the doll's facial skin material might indicate a matte, minimally reflective plastic, while the albedo information corresponding to the doll's eye material might indicate highly reflective glass or plastic beads. Thus, the albedo information can be discontinuous across the surface of the model. Alternatively, the albedo information could be continuous, for example where the albedo model information can be computed from the surface geometry, in which case a procedurally generated albedo model is obtained. Rather than a full albedo model, the procedural elements (e.g., functions, software instructions, rules, geometric properties, etc.) can be transferred to the rendering device. The rendering device can then generate portions of the albedo model on the fly rather than incurring the bandwidth cost of transferring a pre-built model. This approach is considered advantageous in circumstances where bandwidth is limited or expensive, for example on a mobile phone with a restricted data plan.
The 3D albedo model may be decomposed into portions associated with the object. In the illustrated example, the model includes two eye portions 304 and a face portion 303. It will be appreciated that the 3D model can comprise numerous portions where advantageous for rendering AR content. Furthermore, each portion can have its own lighting policy; that is, lighting rules are set for corresponding portions of the 3D albedo model. The lighting rules govern how corresponding elements of the AR content should be rendered when presented as an overlay on a display device. For example, because the doll's skin or facial surface has a matte finish, Lambertian lighting techniques can be used. Because the doll's eyes might comprise highly reflective glass or plastic, the lighting rules for the eyes can include specular lighting instructions. Additional lighting rules can be based on Phong lighting, Phong shading, Gaussian filters, and other lighting algorithms. The lighting rules can target facial features, weapons, panels, clothing, vehicle features, types of printing ink, tissue types, substrates, game features, material types, and other object features.
The 3D albedo model may further include alignment features 305 that ensure that the model can be correctly fitted to the corresponding real-world object. With respect to image data, the alignment features might include features derived from image processing algorithms such as SIFT, BRISK, SURF, FAST, BRIEF, Harris corners, edges, DAISY, GLOH, HOG, EOG, or TILT. Such features are advantageous because they allow the rendering device to identify the correct model or object in the field. Each feature in the albedo model may include a descriptor value, a 3D coordinate within the model, and other information.
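Purely as an illustration (the specification does not prescribe any particular data layout), a predefined 3D albedo model with per-vertex normals, per-channel albedo, part labels bound to lighting rules, and known alignment features might be organized as follows in Python; all names are hypothetical:

```python
# Illustrative sketch (not from the specification): one plausible in-memory
# layout for a predefined 3D albedo model with per-vertex normals, per-channel
# albedo, part labels tied to lighting rules, and known alignment features.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ModelFeature:
    descriptor: np.ndarray   # e.g. a 128-D SIFT descriptor value
    position: np.ndarray     # 3D coordinate of the feature on the model surface

@dataclass
class AlbedoModel3D:
    vertices: np.ndarray     # (V, 3) 3D positions X on the mesh
    normals: np.ndarray      # (V, 3) unit normal vectors N (307 in FIG. 3)
    albedo: np.ndarray       # (V, C) per-channel albedo A_c(X) in [0, 1]
    faces: np.ndarray        # (F, 3) triangle vertex indices
    part_ids: np.ndarray     # (V,) e.g. 0 = face portion 303, 1 = eye portions 304
    lighting_rules: dict = field(default_factory=dict)  # part_id -> rule name
    features: list = field(default_factory=list)        # list[ModelFeature]
```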
The 3D albedo model may be built during an offline process by a device other than the rendering device, perhaps as part of an object capture engine. For example, when a doll designer designs the doll with a CAD system, the CAD system can build the albedo model as a computer-generated object model. Each polygon of the mesh in the computer-generated model can include a normal vector along with corresponding albedo information. Furthermore, it will be appreciated that the computer-generated model might represent only a portion of an object, such as the doll's face. Thus, a 3D albedo model can comprise known albedo information for one or more of a toy, a vehicle, a face, a commercial product, printed media, a vending machine, an appliance, a plant, a sign, tissue, a patient, a game component, a person or their face, and other types of objects. In other embodiments, the device can construct a 3D representation of the doll during runtime; features observed by the sensor from multiple viewpoints (preferably under different lighting conditions) can be averaged to generate an estimated albedo model at runtime.
Step 115 includes the rendering device obtaining the predefined 3D albedo model of the object. Returning to the doll example, the rendering device might comprise a tablet device provisioned with an app for interacting with the doll, where the app includes the doll's albedo model. Alternatively, the rendering device can obtain a digital representation (e.g., image, video, audio, etc.) of the object as discussed with respect to step 120, and then recognize the object from the digital representation using known techniques such as those described in co-owned US Pat. No. 7,016,532 to Boncyk et al., filed November 5, 2001, and entitled "Image Capture and Identification System and Process". Once the object is recognized, the rendering device can use characteristics derived from the digital representation (e.g., image characteristics, descriptors, etc.) to retrieve the albedo model from a database or other type of data store. Further, as noted above, in other embodiments the device can construct a 3D representation of the doll during runtime by averaging features observed by the sensor from multiple viewpoints (preferably under different lighting conditions) to generate an estimated albedo model.
Step 130 includes the rendering device deriving features from the digital representation of the object. The derived features can take on different forms depending on the modality of the digital representation. With respect to image data, the rendering device can apply one or more feature detection algorithms to the digital representation as suggested by step 135 to generate image features. Example algorithms include SIFT (see US Pat. No. 6,711,293 to Lowe, filed March 6, 2000, and entitled "Method and Apparatus for Identifying Scale Invariant Features in an Image and Use of Same for Locating an Object in an Image"), BRISK, SURF, FAST, BRIEF, Harris corners, edges, DAISY, GLOH, Histograms of Oriented Gradients (HOG), Edge Orientation Histograms (EOG), TILT (see US Pat. No. 8,463,073 to Ma et al., entitled "Robust Recovery of Transform Invariant Low-Rank Textures"), and the like. It will be appreciated that the derived features may be similar to the features used to obtain the 3D albedo model discussed above.
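As a hedged illustration of steps 130 and 135 (assuming OpenCV; any of the listed detectors could be substituted, and the function name is hypothetical):

```python
# Sketch of deriving keypoints and descriptors from an image of the object.
import cv2

def derive_features(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()          # SIFT detector; ORB, BRISK, or FAST could also be used
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors     # descriptors: (N, 128) float32 array
```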
Step 140 includes the rendering device obtaining AR content associated with the object based on the features derived in step 130. The AR content can be indexed in a database or data structure according to descriptors associated with the object. For example, AR content comprising a computer animation of the doll can be stored in the memory of the tablet running the app. When descriptors from the digital representation sufficiently match those used to index the animation content in memory, the animation corresponding to the doll can be retrieved. In some embodiments, the AR content, or pointers to the AR content, can be indexed in a data structure that supports a k-nearest-neighbor (kNN) lookup, for example a spill tree or a k-d tree. For example, as suggested by step 145, the method can further include looking up the AR content based on descriptors associated with the features. Further, the AR content can be obtained from a database, a remote lookup, a search engine, or other data store.
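A minimal sketch of such a descriptor-based kNN lookup, using SciPy's k-d tree in place of the spill tree mentioned above (the class, distance threshold, and voting scheme are illustrative assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

class ARContentIndex:
    """Maps observed descriptors to AR content ids via nearest-neighbour voting."""
    def __init__(self, training_descriptors, content_ids):
        self.tree = cKDTree(training_descriptors)   # (M, D) descriptors used for indexing
        self.content_ids = content_ids              # content id per training descriptor

    def lookup(self, query_descriptors, max_dist=250.0):
        dists, idx = self.tree.query(query_descriptors, k=1)
        votes = [self.content_ids[i] for d, i in zip(dists, idx) if d < max_dist]
        return max(set(votes), key=votes.count) if votes else None
```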
The AR content may be local to the rendering device as discussed above. Alternatively, the AR content may be stored remotely, in which case the rendering device can obtain the AR content using an address, possibly obtained as a result of the kNN lookup. Example addresses include URLs, file handles, URIs, IP addresses, indices, document object identifiers (DOIs), or other types of addresses.
The AR content itself can cover a broad spectrum of content. More preferred content includes visual content that can be rendered and presented on a display screen coupled with the rendering device. Example AR content includes games, applications, videos, images, animations, 3D rendered objects, object meshes, animation meshes, age projected animations, medical images, clothing, makeup, glasses, furniture, wearable accessories (e.g., rings, necklaces, etc.), people, avatars, pets, coupons, store shelving, signage, a portion of an anatomical chart, ultrasound images, or other types of items. Although visual AR content is preferred, the AR content can also take other forms, such as audio or haptic content. In other embodiments, the AR content may be retrieved at a later point in the illustrated method flow, for example after deriving the estimated shading model in step 167. In general, the order in which the steps are executed can differ from that shown, provided that a step does not require results produced by the completion of another step.
Step 150 includes the rendering device deriving a posture of the object from the digital representation. The posture can be determined based on several pieces of information. In some embodiments, the rendering device stores an object model, such as the 3D albedo model, that includes known reference features as discussed above. Once the observed features are derived from the digital representation, a posture of the object model can be determined such that the known reference features and the observed features align or are positioned relative to one another. The posture information is useful in embodiments where the AR content is superimposed over the image of the object. The posture information is also useful when the AR content is to be juxtaposed with the object in the displayed image. Returning to the doll example, the AR content might be a fairy that appears at a point at which the doll is pointing or looking. Note that the posture is determined relative to the sensor that acquires the digital representation, for example the camera that captures the image of the object.
As discussed above, the 3D albedo model can include known reference features. These albedo model features are employed in step 160, which includes fitting the albedo model to the posture, thereby establishing a system of equations from which an object shading model can be solved. As suggested by step 165, the albedo model can be fitted by aligning the known features in the albedo model with corresponding features derived from the digital representation. Once the albedo model and the image are aligned, observed shading data can be derived from the differences between them.
FIG. 4 illustrates fitting the 3D albedo model of the doll's face to an observed image of the doll. FIG. 4 shows a digital representation 401 of the object, here the doll's face. Object features 405 can be derived from the digital representation using known image feature algorithms, such as SIFT or FAST. Certain features 405 of the digital representation 401 can be compared to predetermined training features 305 from the representation 302 of the albedo model 306. The matched features can then be used to fit the image 401 to the albedo model 306 (illustrated by the graphical representation 302 in FIG. 4). It should be emphasized that not all features, or their descriptors, need to be used or matched perfectly.
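One plausible realization of this fitting (a sketch assuming OpenCV, a pinhole camera model, and the hypothetical feature structure sketched earlier; none of this is mandated by the specification) matches observed descriptors to the model's known features and recovers the posture with a RANSAC PnP solver:

```python
import cv2
import numpy as np

def fit_albedo_model(model_features, obs_keypoints, obs_descriptors, camera_matrix):
    # Match known model features (descriptor + 3D position) to observed features.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    model_desc = np.float32([f.descriptor for f in model_features])
    matches = matcher.match(model_desc, np.float32(obs_descriptors))

    object_pts = np.float32([model_features[m.queryIdx].position for m in matches])
    image_pts = np.float32([obs_keypoints[m.trainIdx].pt for m in matches])

    # Recover the object posture (pose) relative to the camera; RANSAC tolerates
    # the imperfect matches noted above.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_pts, image_pts,
                                                 camera_matrix, None)
    return rvec, tvec   # rotation / translation fitting the albedo model to the image
```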
Step 167 of FIG. 1 includes the rendering device deriving an estimated shading model from the observed shading data. The observed shading data corresponds to how the one or more sensors acquiring the digital representation perceive the shading of the object in the sensor environment; an estimated shading model is derived from this data so that estimated shading can be applied to AR content corresponding to the imaged object. Because the object shading data is derived from both the observed posture and the predefined albedo model, the estimated shading model can include pixel-level information corresponding to the object. For example, the lighting of a pixel corresponding to the displayed image of the object can be derived from the albedo calculated from the albedo model and the observed color value of that pixel in the posture at the time of imaging. The estimated shading model can be considered a transformation that converts computer graphics information from an ideal state into a state conforming to the shading of the observed sensor environment.
At this point, the rendering device has two pieces of information about the object. The rendering device knows how the object actually appears in the environment to the sensor that captures the digital representation, and, from the predefined 3D albedo model, it also knows how the object should appear in its original ideal state. By combining these pieces of information, the unknown parameters of the shading model are estimated and embodied as the estimated object shading model.
The estimated shading model captures a great deal of information about the lighting environment, but the source or type of information behind each contribution does not necessarily need to be distinguished. For example, the actual illumination sources do not need to be determined in order for the estimated shading model to capture the lighting characteristics of the target object. This is possible by determining the differences, for example at the pixel level, between the object as observed by the sensor and the known ideal state of the object based on the albedo model. Additionally, an environment error map can be derived by comparing the actual presentation (based on the digital representation of the captured image of the object) with the expected presentation (based on the estimated shading model applied to the albedo model). Such an error map can identify artifacts associated with the sensor, or present in the environment due to other factors. As an example, consider a case in which a greasy fingerprint smudges the lens of a mobile phone. Such a smudge does not affect the lighting of the environment (and thus the expected shading of the object), but it does affect the nature of the acquired digital representation. Furthermore, the smudge affects how the captured image is displayed or rendered to the user. With a sensor environment error map, such sensor-related anomalies can be accounted for without having to estimate them precisely. In general, techniques using a sensor error map are advantageous in that they offer a low-cost way of determining the influence of environmental artifacts on the data captured by the sensor at the time of imaging. It will be appreciated that a sensor, such as a camera, is the final point of entry of data from the environment into the device; the data collected by the sensor represents the observed environmental state of the object. Thus, in another embodiment (discussed further with respect to FIG. 7), an environment error map can be used to augment the method described in FIG. 1. In other words, a more realistic feel can be obtained by including sensor environment anomalies in the rendering of the AR content.
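As an illustration only (not prescribed by the specification), the pixel-level comparisons described above can be sketched in NumPy, assuming the fitted albedo model has already been rasterised into an albedo map and an expected rendering for the current posture:

```python
import numpy as np

def observed_shading(image, albedo_map, mask, eps=1e-3):
    # Per-pixel shading = observed intensity / known albedo, over object pixels only.
    return np.where(mask, image / np.maximum(albedo_map, eps), 0.0)

def environment_error_map(image, expected_rendering, mask):
    # Residual between what the sensor saw and what the estimated shading model
    # predicts; the non-zero residue captures smudges, scratches, and sensor noise.
    return np.where(mask, image.astype(np.float32) - expected_rendering, 0.0)
```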
The estimated shading model and the sensor environment error map can have numerous, although not necessarily required, features. As one such feature, the sensor environment map could include an environmental lighting map. The lighting map can be derived by comparing the albedo model of the object with the observed colors of the object. Further, as discussed above, the sensor error map can indicate deviations from an expected state, smudges, lens shape, scratches, and so on. The sensor environment map could further include a noise map indicating how the environment affects acquisition of the digital representation. To illustrate noise, consider an implementation of the disclosed techniques in which the digital representation is acquired with an ultrasound transducer. The noise map might then comprise noise associated with tissues and reflections, and in such an embodiment an acoustic equivalent of the object's albedo model can be obtained, for example based on the object's tissue density. Such an embodiment illustrates that environment models and maps can contribute to rendering AR content so that it feels more realistic, either instead of or in addition to the estimated shading model. Still further embodiments can include a sensor environment map that captures deviations or distortions from the natural state of the observed object. For example, the doll's face might include a scratch or a mark drawn with a pen. These features can be reflected in, and incorporated into, the AR content to be rendered. The rendering device can observe such distortions based on differences between the known object model and the observed object. The known object model may be integrated with, or distinct from, the albedo model.
Step 170 includes generating environmentally adjusted AR content by applying the estimated shading model to the AR content. As discussed above, the estimated shading model (sometimes referred to herein as the environmental shading model) represents a transformation that converts the AR content from its more ideal state to a state that better matches the object as observed by the rendering device. As suggested by step 175, the AR content can be adjusted by applying one or more sets of lighting rules from the 3D albedo model to the portions of the AR content corresponding to the portions of the model. Geometrical constraints (e.g., polygons, bounding boxes, etc.), recognition features (e.g., descriptors, keypoints, etc.), or other matching techniques make it possible to correctly bind the rule sets to the portions of the AR content.
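A toy sketch of this per-portion rule application follows; the Lambertian and Phong-style terms, coefficients, and rule names are illustrative assumptions rather than the specification's lighting rules:

```python
import numpy as np

def shade_part(normals, light_dir, view_dir, rule):
    # Lambertian term for matte parts (e.g., the doll's skin); add a Phong-style
    # specular lobe for reflective parts (e.g., the doll's eyes).
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)
    if rule == "specular":
        reflect = 2.0 * n_dot_l[:, None] * normals - light_dir
        specular = np.clip(reflect @ view_dir, 0.0, None) ** 32
        return 0.8 * n_dot_l + 0.6 * specular
    return n_dot_l                                  # default: "lambertian"

def shade_ar_content(part_normals, lighting_rules, light_dir, view_dir):
    # part_normals: {part_id: (V, 3) unit normals}; lighting_rules: {part_id: rule}.
    return {pid: shade_part(n, light_dir, view_dir,
                            lighting_rules.get(pid, "lambertian"))
            for pid, n in part_normals.items()}
```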
For example, consider a case in which a doll, such as a Disney (registered trademark) Princess doll, is sold together with a downloadable augmented reality application intended to let children have a tea party with the princess. A child captures a real-time video of the doll with, say, a tablet, while the app overlays AR content in the form of an animation of the doll's face that speaks or responds to the child's questions. The animated lips, eyes, and skin are each adjusted according to the rules in the albedo model and are therefore presented to the child very naturally.
Step 180 includes the rendering device rendering the environmentally adjusted AR content. The rendering process involves converting the AR content according to rules generated from the sensor environment map information and other elements in the system. For example, the posture information can be used to handle line of sight or hidden surface removal. In addition, the position or orientation of the sensor capturing the digital representation can be used to translate the AR content to the correct rendering position.
Step 190 includes presenting the rendered environmentally adjusted AR content on a display. The display may be integral to the rendering device; for example, in the example above, the tablet is the rendering device. In other embodiments, the display may be remote from the rendering device; for example, the display could be the computer screen of a client computer, while the rendering device is a web server or service that provides rendering services over the Internet. As suggested by step 195, the rendered environmentally adjusted AR content can be overlaid on an image of at least a portion of the object. Alternatively, the adjusted AR content can be positioned relative to the location of the object in the display. Further, the adjusted AR content might not be presented on the screen while it lies outside the field of view, but can be presented once it comes within the field of view. For example, the AR content for the doll might include an image of a friend sitting near the doll, where the friend is not always displayed. AR content can also be positioned according to features of the estimated environment map; for example, a cloud might be rendered at the darkest part of the environment map at a predetermined radius away from the object.
Given that the environment in which a target object is observed may vary significantly, a rendering device according to some embodiments may be capable of real-time tracking of the object as described in FIG. The tracking function described in FIG. 2 may be considered part of the rendering process as described for step 180 of method 100.
In step 200, at least some of the features derived from the digital representation are tracked. It will be appreciated that the tracked features do not necessarily have to correspond to the features originally used to recognize the object. For example, the rendering device might recognize the object using SIFT features and descriptors and then perform tracking using FAST features. In step 205, an updated posture of the tracked object is estimated and the current object shading model is updated. In step 210, the environmentally adjusted AR content is re-rendered in response to movement of the features, particularly movement relative to the imaging sensor. The re-rendered content can account for changes in the posture of the object, changes in facial expression, predicted motion, or other factors relating to movement of the object or its features. In one embodiment, the posture is re-estimated at step 205 to update the shading model, and the environmentally adjusted AR content is re-rendered at step 210 substantially in step with the frames of the video sequence (e.g., 10 fps, 20 fps, 30 fps, 60 fps, etc.), giving the user a natural impression.
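A hedged sketch of such a tracking loop follows (OpenCV optical flow is an assumption, and the `render_frame` callback stands in for the posture and shading update and re-rendering of steps 205 and 210):

```python
import cv2

def track_and_render(video_capture, render_frame, prev_gray, prev_pts):
    # prev_pts: (N, 1, 2) float32 corner locations from the recognition step.
    while True:
        ok, frame = video_capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good = status.ravel() == 1
        # The callback re-estimates the posture, updates the shading model, and
        # re-renders the environmentally adjusted AR content for this frame.
        render_frame(frame, next_pts[good])
        prev_gray, prev_pts = gray, next_pts[good].reshape(-1, 1, 2)
```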
The disclosed techniques give rise to interesting capabilities. One possible capability involves using a known albedo model of an object (e.g., a person, a vehicle, etc.) that appears in an old video sequence. For example, a 3D albedo model can be constructed for a known object present in the old video sequence (e.g., a black-and-white film, an old sitcom, etc.). By comparing the old recorded footage of the object with the newly generated albedo model, the rendering device can determine the transformations needed to insert AR content into the recorded video. Such known objects could include buildings that still exist, automobiles, furniture, and other objects in the sequence.
In a shopping setting, retailers and product providers can use the disclosed techniques to improve the consumer's shopping experience. A product provider can supply a known albedo model of its product, for example a beverage can. When the consumer images the beverage can, AR content is inserted into the environment and the consumer can interact with it. For example, Coca-Cola (registered trademark) could offer an app that uses a known albedo model to present AR content that enhances the preparation of recipes that use the cola (registered trademark) as a cooking ingredient.
FIG. 5 graphically shows the image processing performed in an embodiment of the present invention. A 3D albedo model 501 of the doll's face is obtained and fitted to a captured (observed) image 502 of the doll's face. Note that the image 502 includes an artifact 511, which might be caused by dirt on the lens of the tablet or other device capturing the image. Generation of an error map corresponding to the artifact, usable to augment the rendering of the AR content, is discussed further with respect to FIG. 7. Returning to FIG. 5, observed shading 503 is extracted from the image 502 using the albedo model 501. As described below, an estimated shading model 504 is obtained from the observed shading 503 and is used to modify the AR content 505 so that it corresponds to the lighting conditions in the particular environment of the device rendering the AR content. Specifically, relit content 506 is rendered by combining the shading model 504 with the AR content 505.
In the following, the mathematical aspects of the subject matter are described, following the image processing flow of FIG. 5.
In general, the value I_c(p) of color channel c at a pixel p ∈ ℝ² (where ℝ denotes the set of real numbers) in an image I can be modeled as a function of albedo and shading, with Albedo_c(p) ∈ ℝ and Shading_c(p) ∈ ℝ:

I_c(p) = Albedo_c(p) · Shading_c(p)   (1)
To simplify the description, assume for the general case that all pixels in the image correspond to the known object; the actual shading model estimation is based only on the subset of image pixels corresponding to the known object. After fitting, each pixel in the image corresponds to the 3D albedo model A_c : ℝ³ → ℝ. Thus, for a pixel p we obtain a 3D position X ∈ ℝ³, its normal N ∈ ℝ³, and the albedo A_c(X) ∈ ℝ corresponding to the channel.
Given the albedo for each 3D position on the model, the observed shading can be extracted after fitting by:

Shading_c(p) = I_c(p) / A_c(X)   (2)
This holds because the correspondence between each observed pixel p and the 3D point X has been established, so that the following also holds:

Albedo_c(p) = A_c(X)   (3)
Furthermore, the 3D normal N at each pixel p is also known. In many cases, shading due to environmental lighting can be modeled using only the normal, so that:

Shading_c(p) = S_c(N)   (4)
Here, N in Equation (4) is the normal corresponding to p, and S_c : ℝ³ → ℝ is a function.
Furthermore, the true shading model can be approximated by the following quadratic function:

S_c(N) = N^T Q_c N   (5)
Here, Q_c is a 3 × 3 matrix and N^T is the transpose of N.
Q_c can then be estimated by minimizing a function such as:

E(Q_c) = Σ_p ( Shading_c(p) − N_p^T Q_c N_p )²   (6)

where the sum runs over the pixels p corresponding to the known object and N_p is the normal at pixel p.
In practice, however, this can become a more complex optimization problem, for example one that includes additional noise models drawn from several distributions, sparsity constraints on the optimization function or on image-domain errors, or additional constraints based on other knowledge about the sensor or the environment.
Once the shading model parameter Q_c has been obtained for each image channel, any AR content for which true 3D geometry and an albedo model A_c are available can be rendered simply by projecting the estimated Q_c onto the AR object's albedo model, that is, by evaluating S_c(N) · A_c(X).
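For each color channel this reduces to a small linear least-squares problem in the entries of Q_c. The following NumPy sketch (an illustration only, not the specification's implementation; it assumes a symmetric Q_c and fits the observed shading directly) shows one way to estimate Q_c and relight AR content as S_c(N) · A_c(X):

```python
import numpy as np

def fit_quadratic_shading(normals, shading):
    # normals: (P, 3) per-pixel unit normals N; shading: (P,) observed S_c(p).
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    # N^T Q N is linear in the six unique entries of a symmetric 3x3 matrix Q.
    basis = np.stack([nx * nx, ny * ny, nz * nz,
                      2 * nx * ny, 2 * nx * nz, 2 * ny * nz], axis=1)
    q, *_ = np.linalg.lstsq(basis, shading, rcond=None)
    return np.array([[q[0], q[3], q[4]],
                     [q[3], q[1], q[5]],
                     [q[4], q[5], q[2]]])

def relight(albedo, normals, Q):
    # albedo: (P,) values A_c(X) for this channel.
    # AR pixel value = A_c(X) * S_c(N), with S_c(N) = N^T Q N per Equation (5).
    shading = np.einsum('pi,ij,pj->p', normals, Q, normals)
    return albedo * np.clip(shading, 0.0, None)
```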
FIG. 6 illustrates, for explanatory purposes, the shading environment map 504N generated by visualizing the shading model estimated by Q_c with respect to the surface normals of a sphere or a cube.
FIG. 7 shows a flow of higher-level processing for incorporating environmental anomalies into the rendering of AR content. An expected rendering 702 of the known 3D object is obtained using the estimated shading model 504 of FIG. 5. An environment error map 701 is then generated by back-projecting the difference between the expected appearance 702 and the observed image 502 onto the surface of the known object. The map shown in FIG. 7 is a textured cube showing the back-projected error artifacts; here, the artifact 511 of the observed image 502 appears in the map 701 as illustrated. The map 701 can then be used to augment the rendering 506 to generate a rendering 706 that includes the artifact 511, thereby realizing a more realistic AR rendering 706. The embodiment of FIG. 7 is advantageous for capturing smudges, scratches, and other sensor-related artifacts on the lens. With the type of error map shown, however, various kinds of environment-induced visual artifacts can be captured, and other types of error maps can be generated based on the differences between the expected rendering of the object and the actual image.
FIG. 8 shows an example of a computer system 8000 that may be included in, or representative of, one or more rendering devices and/or other computers used to execute instruction code contained in a computer program product 8060 according to an embodiment of the present invention. The computer program product 8060 comprises executable code on an electronically readable medium that can instruct one or more computers, such as computer system 8000, to perform processing that implements the example method steps performed by the embodiments referenced herein. The electronically readable medium may be any non-transitory medium that stores information electronically and may be accessed locally or remotely, for example via a network connection. The medium may include a plurality of geographically dispersed media, each configured to store different portions of the executable code at different locations and/or at different times. The executable instruction code in the electronically readable medium directs the illustrated computer system 8000 to carry out the various example tasks described herein. Such executable code directing the performance of the tasks described herein would typically be implemented in software. However, it will be appreciated by those skilled in the art that a computer or other electronic device might, without departing from the invention, utilize code implemented in hardware to perform many or all of the identified tasks. Those skilled in the art will also appreciate that many variations on the executable code may be found that implement the example methods within the spirit and scope of the present invention.
The code, or copies of the code, contained in computer program product 8060 may reside in one or more persistent storage devices 8070 and/or memory 8010 for loading and execution by processor 8020, and/or in a storage medium (not shown separately) communicatively coupled to system 8000 for storage. Computer system 8000 further includes an I/O subsystem 8030 and peripheral devices 8040. The I/O subsystem 8030, peripheral devices 8040, processor 8020, memory 8010, and persistent storage device 8070 are coupled via bus 8050. Like persistent storage 8070, which may include the computer program product 8060, and any other persistent storage, memory 8010 is a non-transitory medium (even when implemented as a typical volatile computer memory device). Moreover, those skilled in the art will appreciate that, in addition to storing the computer program product 8060 for carrying out the processing described herein, memory 8010 and/or persistent storage 8070 may be configured to store the various data elements referenced and illustrated herein.
Those skilled in the art will appreciate that computer system 8000 is merely illustrative of one system on which a computer program product according to embodiments of the present invention may be implemented. To cite but one example of an alternative embodiment, execution of the instructions contained in a computer program product according to an embodiment of the present invention may be distributed over multiple computers, such as, for example, over the computers of a distributed computing network.
It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, ..., and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
Claims (35)
- A method for rendering augmented reality content, comprising:
Obtaining a predefined 3D albedo model of the object by means of a rendering device;
Deriving recognition features from the digital representation of the object by the rendering device;
Obtaining augmented reality (AR) content for the object based on the recognition features by the rendering device;
Deriving the posture of the object from the digital representation by the rendering device;
Fitting the albedo model to the posture based on the determined features as a subset of the recognition features corresponding between the albedo model and the digital representation by the rendering device;
Deriving observation shading data by the rendering device based on a shading difference between the digital representation and the albedo model;
Deriving an estimated object shading model using the albedo model and the observation shading data, including determining at least one illumination environment difference between the digital representation and a known ideal state of the object determined based on the albedo model;
Generating environmentally adjusted AR content by applying the estimated object shading model to the AR content by the rendering device;
Rendering the environmentally adjusted AR content by the rendering device.
- The method of claim 1, wherein the predefined 3D albedo model comprises a computer-generated object model.
- The method of claim 2, wherein the computer-generated object model comprises a mesh.
- The method of claim 2, wherein the computer-generated object model includes a model of a portion of the object.
- The method of claim 1, wherein deriving a recognition feature from the digital representation includes applying at least one feature detection algorithm to the digital representation.
- 6. The method of claim 5, wherein the feature detection algorithm comprises at least one of the following algorithms: SIFT, BRISK, SURF, FAST, BRIEF, Harris Corners, Edges, DAISY, GLOH, HOG, EOG, TILT.
- The method of claim 1, wherein obtaining the AR content comprises searching the AR content based on a descriptor corresponding to the recognition feature.
- The method of claim 1, further comprising obtaining the digital representation by the rendering device.
- The method of claim 1, further comprising deriving a sensor environment map including an environment illumination map based on the estimated object shading model.
- The method of claim 9, wherein the sensor environment map includes a noise map.
- The method of claim 9, wherein the sensor environment map includes a sensor error map.
- The method of claim 1, wherein the predefined 3D albedo model represents known albedo information of at least one of toys, vehicles, faces, commercial products, print media, vending machines, equipment, plants, signs, tissue, patients, and game components.
- The method of claim 1, wherein the AR content comprises at least one of games, applications, videos, images, animations, 3D rendering objects, object meshes, animation meshes, age projected animations, medical images, clothes, makeup, glasses, furniture, wearable accessories, people, avatars, pets, coupons, store shelves, signs, anatomical charts, and ultrasound images.
- The method of claim 1, further comprising tracking at least some of the recognition features in real time.
- The method of claim 14, wherein tracking at least a portion of the recognition features includes tracking features within a frame of a video sequence.
- The method of claim 14, further comprising re-rendering the environmentally adjusted AR content in response to movement of the tracked recognition features.
- The method of claim 16, further comprising re-rendering the environmentally adjusted AR content in accordance with a frame rate of the video sequence.
- The method of claim 17, wherein the frame rate is 30 fps or more.
- The method of claim 1, wherein the environmentally adjusted AR content includes animation.
- The method of claim 1, wherein rendering the environmentally adjusted AR content includes overlaying the environmentally adjusted AR content on an image of at least a portion of the object.
- The method of claim 1, wherein rendering the environmentally adjusted AR content includes presenting the environmentally adjusted AR content to at least a portion of the object on a display.
- The method of claim 1, wherein the predefined 3D albedo model includes known features located within the model.
- The method of claim 20, wherein fitting the predefined 3D albedo model to the pose includes deriving corresponding features from the digital representation and fitting the predefined 3D albedo model to the pose based on the known features.
- The method of claim 1, wherein the predefined 3D albedo model includes a lighting policy comprising lighting rules, each corresponding to a portion of the predefined 3D albedo model.
- The method of claim 24, wherein generating the environmentally adjusted AR content includes applying at least one of the lighting rules to a portion of the AR content corresponding to a portion of the predefined 3D albedo model.
- The method of claim 24, wherein the lighting rules include lighting rules for portions of the predefined 3D albedo model with respect to at least one of a facial feature, a weapon, a panel, clothing, a vehicle feature, object tissue, a substrate, a game feature, and a material type.
- The method of claim 1, wherein the predefined 3D albedo model is generated from a plurality of training images of the object acquired by the rendering device under various lighting conditions and/or from various viewpoints.
- The method of claim 27, wherein at least one of the plurality of training images corresponds to the digital representation of the object from which the recognition features are derived.
- The method of claim 27, wherein the training images are acquired by the rendering device in parallel with deriving the pose of the object from the digital representation, and the predefined 3D albedo model is updated at runtime each time a fit to the pose of a new training image is completed.
- The method of claim 29, wherein at least one of online averaging and Bayesian filtering is used to update the predefined 3D albedo model at runtime.
- The method of claim 1, wherein rendering the environmentally adjusted AR content includes displaying the environmentally adjusted AR content at a spatial position relative to the object, the spatial position being derived from environmental features suggested by the estimated object shading model.
- The method of claim 31, wherein the AR content is at least one of a cloud positioned radially away from a darkest point in the estimated object shading model and a light source positioned radially away from a brightest point in the estimated object shading model.
- The method of claim 1, further comprising:
Generating an estimated rendering result of the object using the estimated object shading model and the predefined 3D albedo model;
Using the digital representation of the object and the observation shading data to identify one or more environmental artifacts in the digital representation; and
Rendering, by the rendering device, at least a portion of the one or more environmental artifacts with the environmentally adjusted AR content.
- The method of claim 1, further comprising deriving, in addition to the estimated object shading model, one or more other environment maps usable to modify the AR content so that the AR content is rendered more closely adapted to the environment of the rendering device.
- A computer program recorded on a non-transitory computer-readable medium and comprising instructions executable by one or more processors of one or more computers, the instructions causing a rendering device to perform operations comprising:
Deriving image recognition features from a digital representation of an object acquired at the rendering device;
Fitting an albedo model of the object to the digital representation of the object using determined features, the determined features being a subset of the recognition features that correspond between the albedo model and the digital representation;
Deriving observation shading data from a shading difference between the digital representation of the object acquired at the rendering device and the albedo model;
Deriving an estimated object shading model from the observation shading data, including determining at least one illumination environment difference between the digital representation and a known ideal state of the object determined based on the albedo model; and
Generating, at the rendering device, environmentally adjusted augmented reality (AR) content by applying the estimated object shading model to AR content related to the object.
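The recognition and retrieval steps recited in claims 1 and 5-7 above follow a conventional feature-descriptor pattern: detect features in the captured digital representation, then look up AR content by descriptor. The sketch below is a minimal illustration under assumptions, not the patented implementation: it assumes OpenCV is available, uses BRISK (one of the algorithms listed in claim 6), and stands in a brute-force Hamming matcher for whatever content index or recognition service a real rendering device would query; `derive_recognition_features`, `obtain_ar_content`, and `content_index` are hypothetical names.

```python
import cv2
import numpy as np

def derive_recognition_features(image_bgr):
    """Detect keypoints and descriptors; BRISK is one of the algorithms
    recited in claim 6 (SIFT, SURF, FAST, BRIEF, etc. are alternatives)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.BRISK_create()
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    return keypoints, descriptors

def obtain_ar_content(descriptors, content_index):
    """Pick the AR content whose stored descriptors best match the query
    (claim 7: content is searched for by descriptor). content_index is a
    hypothetical dict of {content_id: stored_descriptor_array}."""
    if descriptors is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    best_id, best_score = None, float("inf")
    for content_id, stored in content_index.items():
        matches = matcher.match(descriptors, stored)
        if matches:
            score = np.mean([m.distance for m in matches])
            if score < best_score:
                best_id, best_score = content_id, score
    return best_id
```

In practice the lookup would be served by an indexed recognition back end rather than a linear scan, but the descriptor-keyed retrieval is the same idea.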
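Claims 1 and 35 above center on a shading decomposition: a captured image is approximately albedo × shading, so dividing the digital representation by the pose-aligned albedo model yields the observation shading data, and multiplying the AR content's own albedo by an estimated (smoothed) version of that shading relights the augmentation to match the capture environment. The sketch below is illustrative only; it assumes the albedo model has already been rasterized into the camera view as an H×W×3 array, and the block-average "shading model" is a crude stand-in for whatever fitted lighting model an implementation would actually use.

```python
import numpy as np

def observation_shading(captured, albedo, eps=1e-6):
    """Observed = albedo * shading, so shading ~= observed / albedo
    wherever the pose-aligned albedo is known."""
    return np.clip(captured / (albedo + eps), 0.0, 4.0)

def estimated_shading_model(shading, block=8):
    """Keep only the slowly varying illumination component by averaging
    over block x block tiles; the residual difference from an ideal
    shading of 1.0 is the 'illumination environment difference'."""
    h, w, c = shading.shape
    hh, ww = h - h % block, w - w % block
    tiles = shading[:hh, :ww].reshape(hh // block, block, ww // block, block, c)
    coarse = tiles.mean(axis=(1, 3))
    # nearest-neighbor upsample back to (hh, ww, c)
    return np.kron(coarse, np.ones((block, block, 1)))

def environmentally_adjust(ar_albedo, shading_model):
    """Relight the AR content's albedo with the shading estimated from
    the real object before compositing it into the scene."""
    return np.clip(ar_albedo * shading_model, 0.0, 1.0)
```

The same shading model can also drive the sensor environment, noise, or error maps mentioned in claims 9-11, since any systematic deviation between the observed and ideal shading is attributable to either the environment or the sensor.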
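Claims 27-30 above describe refining the predefined 3D albedo model at runtime from additional training images captured under varying lighting and viewpoints, using online averaging or Bayesian filtering. A minimal sketch of the online-averaging variant follows; the per-texel incremental mean is an assumption about how such an update could be organized, not the patent's specific procedure.

```python
import numpy as np

def update_albedo_online(model_albedo, new_albedo_sample, n_observations):
    """Incremental (online) mean over per-texel albedo estimates.

    model_albedo:      current per-texel albedo estimate (any numpy shape)
    new_albedo_sample: albedo recovered from the latest fitted training image
    n_observations:    how many samples the current estimate already averages
    Returns the updated estimate and the new observation count.
    """
    n = n_observations + 1
    updated = model_albedo + (np.asarray(new_albedo_sample) - model_albedo) / n
    return updated, n
```

A Bayesian alternative would additionally track a per-texel variance and weight each new sample by its estimated reliability.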
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461992804P | 2014-05-13 | 2014-05-13 | |
US61/992,804 | 2014-05-13 |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date | |
---|---|---|---|---|
JP2016566971 Division | 2015-05-13 |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2018163692A (en) | 2018-10-18 |
JP6644833B2 (en) | 2020-02-12 |
Family
ID=54480652
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2016566971A Active JP6360200B2 (en) | 2014-05-13 | 2015-05-13 | System and method for rendering augmented reality content with an albedo model |
JP2018117683A Active JP6644833B2 (en) | 2014-05-13 | 2018-06-21 | System and method for rendering augmented reality content with albedo model |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2016566971A Active JP6360200B2 (en) | 2014-05-13 | 2015-05-13 | System and method for rendering augmented reality content with an albedo model |
Country Status (4)
Country | Link |
---|---|
US (4) | US9805510B2 (en) |
JP (2) | JP6360200B2 (en) |
CN (2) | CN110363840A (en) |
WO (1) | WO2015175730A1 (en) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363840A (en) | 2014-05-13 | 2019-10-22 | 河谷控股Ip有限责任公司 | It is rendered by the augmented reality content of albedo model, system and method |
US9582890B2 (en) * | 2014-05-19 | 2017-02-28 | Ricoh Company, Ltd. | Superpixel-based image segmentation using shading and albedo decomposition |
US9922452B2 (en) * | 2015-09-17 | 2018-03-20 | Samsung Electronics Co., Ltd. | Apparatus and method for adjusting brightness of image |
US20170169612A1 (en) | 2015-12-15 | 2017-06-15 | N.S. International, LTD | Augmented reality alignment system and method |
WO2017131771A1 (en) * | 2016-01-29 | 2017-08-03 | Hewlett-Packard Development Company, L.P. | Identify a model that matches a 3d object |
CN106887045A (en) * | 2017-01-18 | 2017-06-23 | 北京商询科技有限公司 | A kind of house ornamentation method for designing and system based on mixed reality equipment |
US10549853B2 (en) | 2017-05-26 | 2020-02-04 | The Boeing Company | Apparatus, system, and method for determining an object's location in image video data |
US10789682B2 (en) * | 2017-06-16 | 2020-09-29 | The Boeing Company | Apparatus, system, and method for enhancing an image |
CN107330980A (en) * | 2017-07-06 | 2017-11-07 | 重庆邮电大学 | A kind of virtual furnishings arrangement system based on no marks thing |
US10535160B2 (en) | 2017-07-24 | 2020-01-14 | Visom Technology, Inc. | Markerless augmented reality (AR) system |
US10282913B2 (en) | 2017-07-24 | 2019-05-07 | Visom Technology, Inc. | Markerless augmented reality (AR) system |
US10572716B2 (en) * | 2017-10-20 | 2020-02-25 | Ptc Inc. | Processing uncertain content in a computer graphics system |
CN108416902A (en) * | 2018-02-28 | 2018-08-17 | 成都果小美网络科技有限公司 | Real-time object identification method based on difference identification and device |
US10818093B2 (en) | 2018-05-25 | 2020-10-27 | Tiff's Treats Holdings, Inc. | Apparatus, method, and system for presentation of multimedia content including augmented reality content |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
US6298148B1 (en) * | 1999-03-22 | 2001-10-02 | General Electric Company | Method of registering surfaces using curvature |
JP4213327B2 (en) | 1999-07-12 | 2009-01-21 | 富士フイルム株式会社 | Method and apparatus for estimating light source direction and three-dimensional shape, and recording medium |
US6546309B1 (en) | 2000-06-29 | 2003-04-08 | Kinney & Lange, P.A. | Virtual fitting room |
US6850872B1 (en) | 2000-08-30 | 2005-02-01 | Microsoft Corporation | Facial image processing methods and systems |
US7016532B2 (en) | 2000-11-06 | 2006-03-21 | Evryx Technologies | Image capture and identification system and process |
WO2002041249A2 (en) | 2000-11-08 | 2002-05-23 | Yale University | Illumination based image synthesis |
US7262770B2 (en) | 2002-03-21 | 2007-08-28 | Microsoft Corporation | Graphics image rendering with radiance self-transfer for low-frequency lighting environments |
US7301547B2 (en) * | 2002-03-22 | 2007-11-27 | Intel Corporation | Augmented reality system |
US7249005B2 (en) * | 2004-08-17 | 2007-07-24 | Dirtt Environmental Solutions Ltd. | Design software incorporating efficient 3-D rendering |
US7324688B2 (en) | 2005-02-14 | 2008-01-29 | Mitsubishi Electric Research Laboratories, Inc. | Face relighting for normalization of directional lighting |
JP2007156561A (en) | 2005-11-30 | 2007-06-21 | Canon Inc | Augmented reality presenting method and system |
WO2007139070A1 (en) | 2006-05-29 | 2007-12-06 | Panasonic Corporation | Light source estimation device, light source estimation system, light source estimation method, device having increased image resolution, and method for increasing image resolution |
JP4883774B2 (en) | 2006-08-07 | 2012-02-22 | キヤノン株式会社 | Information processing apparatus, control method therefor, and program |
GB0616685D0 (en) | 2006-08-23 | 2006-10-04 | Warwick Warp Ltd | Retrospective shading approximation from 2D and 3D imagery |
US20080071559A1 (en) * | 2006-09-19 | 2008-03-20 | Juha Arrasvuori | Augmented reality assisted shopping |
KR101342987B1 (en) | 2006-11-21 | 2013-12-18 | 톰슨 라이센싱 | Methods and systems for color correction of 3d images |
FR2911707B1 (en) | 2007-01-22 | 2009-07-10 | Total Immersion Sa | METHOD AND DEVICES FOR INCREASED REALITY USING REAL - TIME AUTOMATIC TRACKING OF TEXTURED, MARKER - FREE PLANAR GEOMETRIC OBJECTS IN A VIDEO STREAM. |
GB0712690D0 (en) | 2007-06-29 | 2007-08-08 | Imp Innovations Ltd | Image processing |
US8090160B2 (en) | 2007-10-12 | 2012-01-03 | The University Of Houston System | Automated method for human face modeling and relighting with application to face recognition |
CN102047651B (en) | 2008-06-02 | 2013-03-13 | 松下电器产业株式会社 | Image processing device and method, and viewpoint-converted image generation device |
JP5237066B2 (en) * | 2008-11-28 | 2013-07-17 | キヤノン株式会社 | Mixed reality presentation system, mixed reality presentation method, and program |
US8797321B1 (en) * | 2009-04-01 | 2014-08-05 | Microsoft Corporation | Augmented lighting environments |
US20110234631A1 (en) * | 2010-03-25 | 2011-09-29 | Bizmodeline Co., Ltd. | Augmented reality systems |
KR20110107545A (en) * | 2010-03-25 | 2011-10-04 | 에스케이텔레콤 주식회사 | Augmented reality system and method using recognition light source, augmented reality processing apparatus for realizing the same |
KR101293776B1 (en) * | 2010-09-03 | 2013-08-06 | 주식회사 팬택 | Apparatus and Method for providing augmented reality using object list |
US8463073B2 (en) | 2010-11-29 | 2013-06-11 | Microsoft Corporation | Robust recovery of transform invariant low-rank textures |
US9164723B2 (en) | 2011-06-30 | 2015-10-20 | Disney Enterprises, Inc. | Virtual lens-rendering for augmented reality lens |
CN103765867A (en) | 2011-09-08 | 2014-04-30 | 英特尔公司 | Augmented reality based on imaged object characteristics |
CN102426695A (en) * | 2011-09-30 | 2012-04-25 | 北京航空航天大学 | Virtual-real illumination fusion method of single image scene |
KR102010396B1 (en) * | 2011-11-29 | 2019-08-14 | 삼성전자주식회사 | Image processing apparatus and method |
US8872853B2 (en) * | 2011-12-01 | 2014-10-28 | Microsoft Corporation | Virtual light in augmented reality |
US9330500B2 (en) * | 2011-12-08 | 2016-05-03 | The Board Of Trustees Of The University Of Illinois | Inserting objects into content |
CN102568026B (en) * | 2011-12-12 | 2014-01-29 | 浙江大学 | Three-dimensional enhancing realizing method for multi-viewpoint free stereo display |
US20140063017A1 (en) * | 2012-08-31 | 2014-03-06 | Greatbatch Ltd. | Method and System of Model Shading and Reduction of Vertices for 3D Imaging on a Clinician Programmer |
US20140085625A1 (en) | 2012-09-26 | 2014-03-27 | Abdelrehim Ahmed | Skin and other surface classification using albedo |
US9524585B2 (en) * | 2012-11-05 | 2016-12-20 | Microsoft Technology Licensing, Llc | Constructing augmented reality environment with pre-computed lighting |
US10062210B2 (en) * | 2013-04-24 | 2018-08-28 | Qualcomm Incorporated | Apparatus and method for radiance transfer sampling for augmented reality |
US9299188B2 (en) * | 2013-08-08 | 2016-03-29 | Adobe Systems Incorporated | Automatic geometry and lighting inference for realistic image editing |
US8976191B1 (en) * | 2014-03-13 | 2015-03-10 | Qualcomm Incorporated | Creating a realistic color for a virtual object in an augmented reality environment |
US20150325048A1 (en) * | 2014-05-06 | 2015-11-12 | Mobile R&D Inc. | Systems, methods, and computer-readable media for generating a composite scene of a real-world location and an object |
CN110363840A (en) | 2014-05-13 | 2019-10-22 | 河谷控股Ip有限责任公司 | It is rendered by the augmented reality content of albedo model, system and method |
2015
- 2015-05-13 CN CN201910630479.XA patent/CN110363840A/en active Search and Examination
- 2015-05-13 WO PCT/US2015/030675 patent/WO2015175730A1/en active Application Filing
- 2015-05-13 US US14/711,763 patent/US9805510B2/en active Active
- 2015-05-13 JP JP2016566971A patent/JP6360200B2/en active Active
- 2015-05-13 CN CN201580038001.8A patent/CN106575450B/en active IP Right Grant
2017
- 2017-09-18 US US15/707,815 patent/US10192365B2/en active Active
2018
- 2018-06-21 JP JP2018117683A patent/JP6644833B2/en active Active
2019
- 2019-01-28 US US16/259,774 patent/US10685498B2/en active Active
2020
- 2020-05-26 US US16/883,966 patent/US20200286296A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN110363840A (en) | 2019-10-22 |
US20200286296A1 (en) | 2020-09-10 |
US20150332512A1 (en) | 2015-11-19 |
JP6360200B2 (en) | 2018-07-18 |
US20190206137A1 (en) | 2019-07-04 |
JP6644833B2 (en) | 2020-02-12 |
US9805510B2 (en) | 2017-10-31 |
US20180005453A1 (en) | 2018-01-04 |
US10192365B2 (en) | 2019-01-29 |
WO2015175730A1 (en) | 2015-11-19 |
CN106575450A (en) | 2017-04-19 |
JP2017524999A (en) | 2017-08-31 |
US10685498B2 (en) | 2020-06-16 |
CN106575450B (en) | 2019-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10565796B2 (en) | Method and system for compositing an augmented reality scene | |
US10102639B2 (en) | Building a three-dimensional composite scene | |
US9910275B2 (en) | Image processing for head mounted display devices | |
Kholgade et al. | 3D object manipulation in a single photograph using stock 3D models | |
US9348950B2 (en) | Perceptually guided capture and stylization of 3D human figures | |
US9940756B2 (en) | Silhouette-based object and texture alignment, systems and methods | |
US20180012411A1 (en) | Augmented Reality Methods and Devices | |
Sandbach et al. | Static and dynamic 3D facial expression recognition: A comprehensive survey | |
CN104937635B (en) | More hypothesis target tracking devices based on model | |
CN104508709B (en) | Animation is carried out to object using human body | |
ES2693028T3 (en) | System and method for deriving accurate body size measurements from a sequence of 2D images | |
US9418475B2 (en) | 3D body modeling from one or more depth cameras in the presence of articulated motion | |
CN106803267B (en) | Kinect-based indoor scene three-dimensional reconstruction method | |
Bogo et al. | Detailed full-body reconstructions of moving people from monocular RGB-D sequences | |
US9235928B2 (en) | 3D body modeling, from a single or multiple 3D cameras, in the presence of motion | |
US9898844B2 (en) | Augmented reality content adapted to changes in real world space geometry | |
US20160128450A1 (en) | Information processing apparatus, information processing method, and computer-readable storage medium | |
CN103975365B (en) | Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects | |
Jeni et al. | Dense 3D face alignment from 2D video for real-time use | |
US9855496B2 (en) | Stereo video for gaming | |
US20150279113A1 (en) | Method and system for representing a virtual object in a view of a real environment | |
JP6323040B2 (en) | Image processing apparatus, image processing method, and program | |
US9646340B2 (en) | Avatar-based virtual dressing room | |
Wu et al. | Real-time shading-based refinement for consumer depth cameras | |
US9626801B2 (en) | Visualization of physical characteristics in augmented reality |
Legal Events
Date | Code | Title | Description
---|---|---|---
2018-07-11 | A521 | Written amendment | Free format text: JAPANESE INTERMEDIATE CODE: A523
2018-07-11 | A621 | Written request for application examination | Free format text: JAPANESE INTERMEDIATE CODE: A621
2019-06-05 | RD02 | Notification of acceptance of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7422
2019-06-18 | RD04 | Notification of resignation of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7424
2019-08-05 | A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007
2019-08-20 | A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131
2019-11-19 | A521 | Written amendment | Free format text: JAPANESE INTERMEDIATE CODE: A523
| TRDD | Decision of grant or rejection written |
2019-12-10 | A01 | Written decision to grant a patent or to grant a registration (utility model) | Free format text: JAPANESE INTERMEDIATE CODE: A01
2020-01-08 | A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61
| R150 | Certificate of patent or registration of utility model | Ref document number: 6644833; Country of ref document: JP; Free format text: JAPANESE INTERMEDIATE CODE: R150