CN115315728A - System and method for virtual fitting
- Publication number: CN115315728A
- Application number: CN202180016193.8A
- Authority: CN (China)
- Prior art keywords: model, user, product, fit, deformation
- Prior art date: 2020-03-09
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Abstract
Described herein is a system and technique for determining a level of fit of a product relative to a user. The technique involves obtaining a first 3D model associated with the user and obtaining a second 3D model associated with the product. The second 3D model is segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties. The technique also involves: fitting the second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to the one or more material properties associated with that segment; and determining a level of fit between the user and the product based on the deformation of each of the plurality of segments.
Description
Cross Reference to Related Applications
This application claims priority to U.S. Provisional Patent Application No. 62/987,196, entitled "SYSTEM AND METHOD FOR VIRTUAL FITTING," filed March 9, 2020, the entire contents of which are incorporated herein by reference.
Background
A virtual fitting application enables a user to visualize various products in relation to a particular person. There are several different types of virtual fitting applications based on various approaches. However, each available type of virtual fitting application has a number of drawbacks.
Embodiments of the present invention address these and other problems, individually and collectively.
Disclosure of Invention
The present invention relates generally to methods and systems related to virtual fitting applications. More specifically, embodiments of the present invention provide methods and systems for determining a level of fit between a user and a product. Embodiments of the present invention are applicable to a variety of applications in virtual reality and computer-based fitting systems.
The method involves obtaining a first 3D model associated with a user and obtaining a second 3D model associated with a product. The second 3D model may be segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties. The method also involves: fitting the second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to the one or more material properties associated with that segment; and determining a level of fit between the user and the product based on the deformation of each of the plurality of segments.
One embodiment of the present disclosure is directed to a method comprising: receiving a fitting request comprising at least an indication of a user and a product; obtaining a first 3D model associated with the user; obtaining a second 3D model associated with the product, the second 3D model segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties; fitting the second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to the one or more material properties associated with that segment; and determining a level of fit between the user and the product based on the deformation of each of the plurality of segments.
Another embodiment of the disclosure is directed to a system comprising a processor and a memory, the memory including instructions that, when executed with the processor, cause the system to at least: receive a fitting request including at least an indication of a user and a product; obtain a first 3D model associated with the user; obtain a second 3D model associated with the product, the second 3D model segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties; fit the second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to the one or more material properties associated with that segment; and determine a level of fit between the user and the product based on the deformation of each of the plurality of segments.
Yet another embodiment of the present disclosure is directed to a non-transitory computer-readable medium storing specific computer-executable instructions that, when executed by a processor, cause a computer system to at least: receive a fitting request comprising at least an indication of a user and a product; obtain a first 3D model associated with the user; obtain a second 3D model associated with the product, the second 3D model segmented based on material properties; fit the second 3D model onto the first 3D model such that each segment of the second 3D model is deformed by the first 3D model according to the material properties of that segment; and determine a level of fit between the user and the product based on the deformation of each segment.
Many benefits are obtained by the present system compared to conventional systems. For example, embodiments of the present disclosure relate to methods and systems that provide an accurate level of fit of a particular product relative to a user. While there are many virtual fitting systems available, they do not provide accurate fit information. For example, some systems use a digital avatar of a human user and then fit digital clothing onto the avatar. Some systems use cameras or other devices to track the movement of the user or the positions of the user's joints (e.g., knees or elbows), and use these joints to drive the motion of the virtual avatar. Some other systems track the user's silhouette; an image of the virtual garment is then morphed and superimposed over the user image to create an augmented view as if the user were wearing the garment. However, these virtual fitting systems fail to take into account the material properties of the garment model, which may lead to an inaccurate depiction of the garment. In the described system, the product model is segmented and material properties are stored in association with respective segments of the product model. During the fitting process, the amount of deformation is determined based on these material properties. By taking into account properties of the product model that are based on the material of the product, the product model can be adjusted during the virtual fitting rather than being limited to a fixed size. This improves the accuracy of the fit, and a fit based on body shape can be achieved. These and other embodiments of the present invention, along with many of their advantages and features, will be described in more detail below in conjunction with the following text and accompanying drawings.
Drawings
FIG. 1 depicts an illustrative example of a system in which a level of fit may be determined for a user and a product in accordance with at least some embodiments.
FIG. 2 depicts a system architecture of a system for determining a level of fit of a user and a product in accordance with at least some embodiments.
FIG. 3 depicts a technique for segmenting a 3D model of a product in accordance with at least some embodiments.
FIG. 4 is a simplified flow diagram illustrating a method of determining a level of fit of a product and a user according to an embodiment of the present invention.
FIG. 5 depicts an illustrative example of a technique for obtaining a 3D model using sensor data in accordance with at least some embodiments.
FIG. 6 illustrates an example technique for fitting a 3D model of a product onto a 3D model of a user in order to determine a level of fit, in accordance with at least some embodiments.
FIG. 7 shows a flow diagram depicting a process for determining a fit level of a product and a user in accordance with at least some embodiments.
FIG. 8 illustrates an example of components of a computer system 800, according to some embodiments.
Detailed Description
The present invention relates generally to methods and systems related to virtual reality applications. More specifically, embodiments of the present invention provide methods and systems for determining a level of fit between a user and a product. Embodiments of the present invention are applicable to a variety of applications in virtual reality and computer-based fitting systems.
FIG. 1 depicts an illustrative example of a system in which fit levels may be generated for users and products in some embodiments of the invention. In fig. 1, a user device 102 may be used to provide a request for product adaptation information to a mobile application server 104. In some cases, the user device may be used to obtain user data 106, which user data 106 may be provided to the mobile application server 104 for use in generating product fit information. Mobile application server 104 may include or have access to object model data 108, and product data 110 may be obtained from object model data 108 to fulfill the request. Mobile application server 104 may be configured to combine user data 106 and product data 110 to determine a level of fit for a particular user. In some embodiments, mobile application server 104 may provide virtual fit data 112 back to user device 102, which virtual fit data 112 may be presented on a display for viewing by a user.
In an example, user device 102 represents a suitable computing device that includes one or more Graphics Processing Units (GPUs), one or more General Purpose Processors (GPPs), and one or more memories storing computer-readable instructions executable by at least one of the processors to perform the various functions of the disclosed embodiments. For example, the user device 102 may be any one of a smartphone, tablet, laptop, personal computer, game console, or smart television. The user device 102 may additionally include a range camera (i.e., a depth sensor) and/or an RGB optical sensor (e.g., a camera).
The user device may be used to capture and/or generate user data 106. The user data 106 may include information related to a particular user (e.g., a user of the user device 102) for whom a level of fit with respect to a product should be determined. The user data 106 may include data about the user that may be used to determine the fit level. For example, the user data 106 may include the user's dimensions (e.g., body measurements). User data 106 may be captured in any suitable format. For example, the user data 106 may include a point cloud, a 3D mesh or model, or a string of characters including measurements at predetermined locations. In some cases, capturing user data 106 may involve receiving information about the user manually entered into user device 102. For example, a user may enter measurements of various parts of his or her body via a keypad. In some cases, capturing user data 106 may involve using a camera and/or a depth sensor to capture image/depth information related to the user. The user device 102 may also be configured to generate a 3D model from the captured image/depth information. This process is described in more detail below with respect to fig. 5.
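By way of a non-limiting illustration, a fitting request carrying manually entered measurements might be structured as sketched below; the field names and values are assumptions made for this sketch rather than anything prescribed by the disclosure.

```python
# Hypothetical fitting-request payload sent from the user device (102) to the
# mobile application server (104); all field names and values are illustrative.
fit_request = {
    "user_id": "user-123",            # identifies the requesting user
    "product_id": "sku-0001",         # identifies the product to be fitted
    "user_data": {
        "format": "measurements",     # could instead be "point_cloud" or "mesh"
        "measurements_cm": {          # manually entered body measurements
            "chest": 98.0,
            "waist": 84.0,
            "hip": 102.0,
            "arm_length": 61.0,
        },
    },
}
```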
The object model data 108 may include any computer-readable storage medium on which one or more 3D models are stored. For example, object model data 108 may be a database maintained by the mobile application server or another server. The 3D models stored in the object model data 108 may represent products wearable by the user, such as articles of clothing (e.g., garments) or accessories. In some embodiments, the object model data 108 may store 3D models of multiple versions (e.g., different sizes and/or styles) of a product. Upon receiving a request to determine a level of fit for a particular user and a particular product, mobile application server 104 retrieves product data 110 from object model data 108, the product data 110 including a 3D model associated with the particular product.
Upon receiving user data 106 and product data 110, mobile application server 104 determines a level of fit by fitting a 3D model of the product (product data 110) onto the 3D model of the user (user data 106). This may involve deforming portions of the 3D model of the product based on material properties associated with the portions. For example, portions of the 3D model of the product may be associated with levels of elasticity or rigidity such that portions of the 3D model may be deformed according to the levels of elasticity or rigidity.
In some embodiments, the level of fit may be determined based on the degree to which the 3D model is deformed. For example, where a portion of the 3D model is associated with a level of elasticity (e.g., an amount by which the portion may stretch), the level of fit may be determined based on the degree to which that portion of the 3D model is stretched. In other words, determining the fit level may involve measuring how much a portion of the 3D model is stretched as a percentage of the total amount that the portion may be stretched. If the amount by which the portion of the 3D model is stretched is high relative to the total elasticity (e.g., the amount by which the portion can be stretched), it may be determined that the product is a tight fit. In some cases, if the amount by which the portion is stretched is greater than a threshold fit value, it may be determined that the product does not fit the user. Different portions of the 3D model of the product may each be associated with different threshold fit values.
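A minimal sketch of this stretch-based check is shown below, assuming per-section stretch amounts and elasticity limits have already been measured; the helper names and the averaging rule are illustrative assumptions, not the disclosed method.

```python
def section_fit_ratio(stretch, max_stretch):
    """Fraction of a section's available stretch that the fit consumes."""
    return stretch / max_stretch if max_stretch > 0 else float("inf")

def classify_fit(stretch_by_section, max_stretch_by_section, thresholds):
    """Return (overall_fit_level, failing_section).

    The product is treated as not fitting if any section exceeds its threshold
    ratio; otherwise the overall fit level is the average ratio, which is one of
    several plausible aggregation rules.
    """
    ratios = {}
    for name, stretch in stretch_by_section.items():
        ratio = section_fit_ratio(stretch, max_stretch_by_section[name])
        if ratio > thresholds.get(name, 1.0):
            return None, name                 # product does not fit at this section
        ratios[name] = ratio
    return sum(ratios.values()) / len(ratios), None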
In some embodiments, the level of fit may be determined based on an amount of space between one or more portions of the 3D model of the product and the 3D model of the user. For example, a total volume of space may be determined for the volume located between the 3D model of the product and the 3D model of the user. In this example, if the total volume of space is greater than some threshold fit value, it may be determined that the product does not fit the user. In some cases, such a threshold fit value may be proportional to the size of the 3D model of the product or the 3D model of the user.
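A corresponding sketch for the space-based check, assuming the gap volume between the two models has already been computed by a separate step; the relative threshold is an assumed placeholder, not a disclosed value.

```python
def too_loose(gap_volume, user_volume, rel_threshold=0.15):
    """Judge looseness from the empty volume between the product and body models.

    The threshold is expressed relative to the user model's volume so that it
    scales with body size; 0.15 is an invented placeholder, not a disclosed value.
    """
    return gap_volume > rel_threshold * user_volume
```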
In some cases, determining that the product does not fit the user (e.g., determining a low fit level) may prompt mobile application server 104 to automatically (e.g., without further prompting by the user) determine a fit level for a different version of the product (e.g., a different size or style of product) by retrieving a second 3D model associated with the version of the product and repeating the above process.
For clarity, a certain number of components are shown in fig. 1. However, it should be understood that embodiments of the invention may include more than one of each component. Moreover, some embodiments of the invention may include fewer than or more than all of the components shown in FIG. 1. In addition, the components in FIG. 1 may communicate via any suitable communication medium, including the Internet, using any suitable communication protocol.
FIG. 2 depicts a system architecture of a system for determining a level of fit of a user and a product in accordance with at least some embodiments. In fig. 2, user device 202 may communicate with a number of other components, including at least mobile application server 204. Mobile application server 204 may perform at least a portion of the processing functions required for mobile applications installed on user devices. The user device 202 and the mobile application server 204 may be examples of the user device 102 and the mobile application server 104, respectively, described with respect to fig. 1.
User device 202 may be any suitable electronic device capable of providing at least a portion of the capabilities described herein. In particular, the user device 202 may be any electronic device capable of capturing user data and/or presenting rendered images. In some embodiments, the user device may be able to establish a communication session with another electronic device (e.g., mobile application server 204) and transmit/receive data from the electronic device. The user device may include the ability to download and/or execute mobile applications. User devices may include mobile communication devices as well as personal computers and thin client devices. In some embodiments, the user device may comprise any portable electronic device having primary functions related to communications. For example, the user device may be a smart phone, a Personal Data Assistant (PDA), or any other suitable handheld device. The user device may be implemented as a stand-alone unit with various components (e.g., input sensors, one or more processors, memory, etc.) integrated into the user device. References in this disclosure to an "output" of a component or an "output" of a sensor do not necessarily imply that the output is transmitted outside of the user device. The output of the various components may still be within the stand-alone unit defining the user device.
In one illustrative configuration, user device 202 may include at least one memory 206 and one or more processing units (or one or more processors) 208. The one or more processors 208 may be suitably implemented in hardware, computer-executable instructions, firmware, or a combination thereof. Computer-executable instructions or firmware implementations of the one or more processors 208 may include computer-executable instructions or machine-executable instructions written in any suitable programming language to perform the various functions described. The user device 202 may also include one or more input sensors 210 for receiving user and/or environmental inputs. There may be a variety of input sensors 210 capable of detecting user or environmental inputs, such as accelerometers, camera devices, depth sensors, microphones, Global Positioning System (GPS) receivers, and so forth. The one or more input sensors 210 may include a ranging camera device (e.g., a depth sensor) capable of generating a range image, as well as a camera device configured to capture image information.
For purposes of this disclosure, a ranging camera (e.g., a depth sensor) may be any device configured to identify a distance or range of one or more objects from the ranging camera. In some embodiments, the range camera may generate a range image (or range map) in which the pixel value corresponds to the detected range for that pixel. The pixel values may be obtained directly in physical units (e.g., meters). In at least some embodiments of the present disclosure, a user device may employ a ranging camera that operates using structured light. In a range camera operating using structured light, a projector projects light onto one or more objects in a structured pattern. The light may be in a range outside the visible range (e.g., infrared or ultraviolet). The range camera may be equipped with one or more camera devices configured to obtain an image of the object using the reflection pattern. Distance information may then be generated based on the distortion in the detected pattern. It should be noted that although the present disclosure focuses on the use of a range camera using structured light, the described system will be operable with any suitable type of range camera, including those operating using stereo triangulation, sheet light triangulation, time of flight, interferometry, coded apertures, or any other suitable technique for distance detection.
The memory 206 may store program instructions that are loadable and executable on the one or more processors 208, as well as data generated during the execution of these programs. Depending on the configuration and type of user device 202, memory 206 may be volatile (e.g., Random Access Memory (RAM)) and/or non-volatile (e.g., Read Only Memory (ROM), flash memory, etc.). User device 202 may also include additional storage 212, such as removable storage or non-removable storage, including but not limited to magnetic storage, optical disk, and/or tape storage. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules and other data for the computing devices. In some implementations, the memory 206 may include a variety of different types of memory, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), or ROM. Turning to more detail of memory 206, memory 206 may include an operating system 214 and one or more application programs or services for implementing features disclosed herein, including at least one mobile application 216. The memory 206 may also include application data 218, the application data 218 providing information to be generated by the mobile application 216 and/or consumed by the mobile application 216. In some embodiments, the application data 218 may be stored in a database.
For purposes of this disclosure, a mobile application may be any set of computer-executable instructions installed on user device 202 and executed from user device 202. The mobile application may be installed on the user device by the manufacturer of the user device or by another entity. In some embodiments, mobile application 216 may enable a user device to establish a communication session with mobile application server 204, where mobile application server 204 provides backend support for mobile application 216. Mobile application server 204 may maintain account information associated with a particular user device and/or user. In some embodiments, the user may be required to log into the mobile application to access the functionality provided by the mobile application 216.
In accordance with at least some embodiments, mobile application 216 is configured to provide user information to mobile application server 204 and present information received from mobile application server 204 to the user. More specifically, the mobile application 216 is configured to obtain measurement data for the user and submit the measurement data to the mobile application server 204 in association with a request for a fit level for a product. In some embodiments, the mobile application 216 may also receive an indication of the user for whom the fit level of the product is to be determined.
In accordance with at least some embodiments, the mobile application 216 can receive output from the input sensors 210 and generate a 3D model based on the output. For example, the mobile application 216 may receive depth information (e.g., a range image) from a depth sensor (e.g., a ranging camera), such as the depth sensor previously described with respect to the input sensor 210, and image information from a camera input sensor. Based on this information, the mobile application 216 can determine the boundaries of the object (e.g., user) to be identified. For example, an abrupt change in depth within the depth information may indicate a boundary or contour of the object. In another example, the mobile application 216 can utilize one or more machine vision techniques and/or machine learning to identify the boundaries of the object. In this example, the mobile application 216 may receive image information from the camera input sensor 210 and may identify potential objects within the image information based on changes in color or texture data detected within the image or based on learned patterns. In some embodiments, the mobile application 216 may cause the user device 202 to transmit output obtained from the input sensors 210 to the mobile application server 204, and the mobile application server 204 may then perform one or more object recognition techniques on the output.
The user device 202 may also include one or more communication interfaces 220 that enable the user device 202 to communicate with any other suitable electronic devices. In some embodiments, the communication interface 220 may enable the user device 202 to communicate with other electronic devices on a network (e.g., on a private network). For example, the user device 202 may include a bluetooth wireless communication module that allows it to communicate with another electronic device. User device 202 may also include one or more input/output (I/O) devices and/or ports 222, for example, for enabling connection to a keyboard, mouse, pen, voice input device, touch input device, display, speakers, printer, etc.
In some embodiments, the user device 202 may communicate with the mobile application server 204 via a communication network. The communication network may include any one or combination of many different types of networks (e.g., wired, internet, wireless, cellular, and other private and/or public networks). Further, the communication network may include a plurality of different networks. For example, the user device 202 may communicate with a wireless router using a Wireless Local Area Network (WLAN), which may then route the communication to the mobile application server 204 over a public network (e.g., the internet).
Mobile application server 204 may be any computing device or devices configured to perform one or more computations on behalf of mobile application 216 on user device 202. In some embodiments, mobile application 216 may be in periodic communication with mobile application server 204. For example, the mobile application 216 may receive updates, push notifications, or other instructions from the mobile application server 204. In some embodiments, mobile application 216 and mobile application server 204 may utilize proprietary encryption and/or decryption schemes to secure communications between the two. In some embodiments, mobile application server 204 may be executed by one or more virtual machines implemented in a hosted computing environment. The hosted computing environment may include one or more rapidly provisioned and released computing resources, which may include computing devices, networking devices, and/or storage devices. The hosted computing environment may also be referred to as a cloud computing environment.
In one illustrative configuration, mobile application server 204 may include at least one memory 224 and one or more processing units (or one or more processors) 226. The one or more processors 226 may be suitably implemented in hardware, computer-executable instructions, firmware, or a combination thereof. The computer-executable instructions or firmware implementations of the one or more processors 226 may include computer-executable instructions or machine-executable instructions written in any suitable programming language to perform the various functions described.
Memory 224 may store program instructions that are loadable and executable on one or more processors 226, as well as data generated during the execution of these programs. Depending on the configuration and type of mobile application server 204, memory 224 may be volatile (e.g., Random Access Memory (RAM)) and/or non-volatile (e.g., Read Only Memory (ROM), flash memory, etc.). Mobile application server 204 may also include additional storage 228, such as removable or non-removable storage, including, but not limited to, magnetic, optical, and/or tape storage. The disk drives and their associated computer-readable media may provide non-volatile storage of computer-readable instructions, data structures, program modules and other data for the computing devices. In some implementations, the memory 224 may include a variety of different types of memory, such as Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), or ROM. Turning to more details of the memory 224, the memory 224 may include an operating system 230 and one or more applications or services for implementing features disclosed herein, including at least a module for fitting a 3D model of a product onto a 3D model of a user (deformation module 232) and/or a module for determining a level of fit of a combination of the 3D model of the product and the 3D model of the user (fit estimation module 234). The memory 224 may also include: account data 236, the account data 236 providing information associated with a user account maintained by the described system; user model data 238, the user model data 238 maintaining a 3D model associated with each user of an account; and/or object model data 240, the object model data 240 maintaining 3D models associated with a number of objects (products). In some embodiments, one or more of account data 236, user model data 238, or object model data 240 may be stored in a database. In some embodiments, object model data 240 may be an electronic catalog that includes data related to objects available for sale from a resource provider, such as a retailer or other suitable merchant.
The memory 224 and the additional storage 228, whether removable or non-removable, are examples of computer-readable storage media. For example, computer-readable storage media may include volatile or nonvolatile, removable or non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. As used herein, the term "module" may refer to a programming module executed by a computing system (e.g., a processor) installed on mobile application server 204 and/or executing from mobile application server 204. Mobile application server 204 may also contain one or more communication connections 242 that allow mobile application server 204 to communicate with a stored database, another computing device or server, a user terminal, and/or other components of the described system. Mobile application server 204 may also include one or more input/output (I/O) devices and/or ports 244, for example, for enabling connection to a keyboard, mouse, pen, voice input device, touch input device, display, speakers, printer, or the like.
Turning to more details of memory 224, memory 224 may include a deformation module 232, a fit estimation module 234, a database containing account data 236, a database containing user model data 238, and/or a database containing object model data 240.
In some embodiments, the deformation module 232 may be configured, in conjunction with the processor 226, to apply the deformation to the 3D model of the product to fit it onto the 3D model of the user. The deformation module 232 may access one or more rules that delineate how a particular product type (e.g., shirt, pants, etc.) should be deformed (e.g., stretched and/or bent) to fit onto the user model. To fit the 3D model of the product onto the 3D model of the user, the deformation module 232 may align (snap) certain portions of the 3D model of the product onto certain portions of the 3D model of the user. For example, the 3D model of the shirt may be positioned such that the sleeves of the 3D model of the shirt surround the arms of the 3D model of the user. Additionally, the 3D model of the shirt may also be positioned such that the collar of the 3D model of the shirt encompasses the neck of the 3D model of the user. The rest of the 3D model of the shirt may then be deformed such that by stretching and bending portions of the 3D model of the shirt, the interior of the 3D model of the shirt is located outside or along the exterior of the 3D model of the user. The deformation module 232 may determine a deformation level for each section of the 3D model of the product after fitting the 3D model of the product onto the 3D model of the user. In other words, the deformation module 232 may record the amount of stretching or bending that has been performed for each section of the 3D model of the product. For example, the deformation module 232 may subtract the amount of surface area of a particular section of the 3D model of the product prior to fitting the particular section onto the 3D model of the user from the amount of surface area of the particular section after fitting the same particular section of the 3D model of the product onto the 3D model of the user. In this example, the difference in surface area will indicate the amount that the segment has been stretched. The deformation module 232 may also be configured to change a pose of the 3D model of the user in order to estimate a level of deformation of the 3D model of the product for the pose. In some cases, the deformation module 232 may determine the deformation level for several different poses.
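One way such a stretch measurement could be implemented is sketched below as a per-section surface-area comparison before and after fitting; the mesh layout and helper names are assumptions, not the patent's prescribed implementation.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Total area of a triangle mesh (vertices: Nx3 floats, faces: Mx3 indices)."""
    vertices = np.asarray(vertices, dtype=float)
    faces = np.asarray(faces, dtype=int)
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    # Each triangle's area is half the norm of the cross product of two edges.
    return float(np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum() / 2.0)

def section_stretch(rest_vertices, fitted_vertices, section_faces):
    """Stretch of one section: its gain in surface area after fitting onto the body."""
    return (mesh_surface_area(fitted_vertices, section_faces)
            - mesh_surface_area(rest_vertices, section_faces))
```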
In some embodiments, the fit estimation module 234 may be configured to determine the fit level of the product and the user in conjunction with the processor 226. To this end, the fit estimation module 234 may determine whether the deformation level determined by the deformation module 232 for each section of the 3D model of the product is within an acceptable deformation range based on the material properties of that section.
Each 3D model of the product stored within the object model data 240 may be associated with various material properties. In some cases, the 3D model of the product may be segmented into a plurality of segments, where each segment is associated with a separate material property. In some cases, a 3D model of a product or a section of the 3D model of a product may be associated with a particular material, and the fit estimation module 234 may be configured to determine one or more material characteristics based on the material (e.g., from a material information database). In determining the adaptation level, the adaptation estimation module 234 may determine a maximum level of deformation for each section of the 3D model of the product based on the corresponding material properties of that section. For example, the maximum available amount of stretch for a segment may be determined based on the elastic value of the segment. Then, the fit estimation module 234 may determine, for each section of the 3D model of the product, whether the deformation level determined by the deformation module 232 is greater than a maximum deformation level for that section. This may be repeated for many different poses of the 3D model of the user. If the level of deformation determined by the deformation module 232 is greater than the maximum level of deformation for the segment, the fit estimation module 234 may determine that the product cannot fit the user. In these cases, mobile application server 204 may select different versions of the product to determine the fit level. For example, mobile application server 204 may select the next larger size product and determine the fit level for the product.
In some embodiments, the fit estimation module 234 may determine that the product is too large for the user. To this end, the fit estimation module 234 may determine that one or more sections of the 3D model of the product extend beyond one or more boundaries associated with the product type. For example, if the sleeve sections of the 3D model of the shirt extend beyond the wrist of the 3D model of the user, the fit estimation module 234 may determine that the product is too large for the user. In some embodiments, the fit estimation module 234 may determine that the product is too large for the user based on the volume of the space located between the 3D model of the product and the 3D model of the user exceeding a certain threshold. In these cases, mobile application server 204 may select a different version of the product to determine the fit level. For example, mobile application server 204 may select the next smaller sized product and determine the fit level for the product. If the mobile application server 204 encounters a situation where the current product is too large and the next smaller sized product is too small, the mobile application server 204 may provide a notification that the combination of the particular user and product is ill-fitting.
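The size-selection behaviour described in the preceding two paragraphs might be sketched as follows, assuming a hypothetical evaluate_fit helper that classifies a candidate size as too small or too large, or otherwise returns a numeric fit level.

```python
def recommend_size(user_model, sized_models, evaluate_fit):
    """Walk candidate sizes from smallest to largest.

    sized_models: list of (size_label, product_model) pairs, ordered small to large.
    evaluate_fit: assumed helper returning "too_small", "too_large", or a fit level.
    """
    for size, model in sized_models:
        result = evaluate_fit(user_model, model)
        if result == "too_small":
            continue                    # try the next larger size
        if result == "too_large":
            return None, None           # previous size was too small: poor fit overall
        return size, result             # acceptable fit level found
    return None, None                   # every available size was too small
```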
As described herein, the fit level may be represented by a numerical value. Such a value may correspond to the determined deformation level divided by the maximum deformation level. In some cases, the fit level may represent an overall value for the product. For example, the fit level of the product may be generated as an average of the fit levels of each section of the 3D model of the product.
In some embodiments, each object entry within the object model data 240 may be associated with a three-dimensional (3D) model of the object. In these embodiments, the 3D model may be combined with a second 3D model of the user and provided to the mobile application 216, causing the user device 202 to display the combination of 3D models on a display of the user device. The mobile application 216 may enable a user of the user device 202 to move, rotate, or otherwise reposition the combination of 3D models to see how the combination will look at the new location.
FIG. 3 depicts a technique for segmenting a 3D model of a product in accordance with at least some embodiments. In particular, FIG. 3 depicts an illustrative example of a 3D model of a product 302 that may be stored in an object model database (e.g., object model data 240 of FIG. 2). In fig. 3, the 3D model 302 is depicted as being segmented into a plurality of segments 304 (a-E), each segment including information about a different portion of the 3D model 302. Such a 3D model 302 may be segmented in any logical manner. For example, where the 3D model is a garment, the 3D model may be segmented along seams of the garment. In another example, the 3D model may be segmented according to material type.
The system described herein may store indications of a number of material properties associated with the 3D model, some of which may be stored in association with particular sections of the 3D model. Some material properties may indicate physical properties of the material (e.g., elasticity or stiffness) that affect how the material may deform, while other material properties may indicate visual properties of the material (e.g., gloss or reflectivity) that affect how the material appears. In some embodiments, a section may be associated with a particular material, and the material characteristics of that section may be determined by querying a material characteristics database.
Material properties may be stored as annotations, and the annotations may be associated with their respective sections in any suitable manner. In some embodiments, such annotations may be stored as metadata appended to a particular section of the 3D model. In some cases, a user may manually generate annotations of a 3D model in a computer-aided drafting (CAD) software application. For example, a manufacturer of a product may generate a 3D model of the product, segment the 3D model based on any logical features, and then annotate each section with its material properties. The 3D model may then be provided to the mobile application server, where the 3D model may be stored in association with the particular product in an object model database.
By way of example illustrating material properties associated with a section of a 3D model, annotation 306 represents various material information stored in relation to section 304 (A) of 3D model 302. Each other section 304 (B-E) may include an annotation similar to annotation 306 that includes an indication of the material and/or material characteristics. Furthermore, boundaries between sections may be individually associated with annotations. For example, seam 308 may represent a boundary between sections 304 (B) and 304 (E) and may include an annotation 310 associated with the thread material used to connect the two sections.
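As an illustrative and purely hypothetical encoding of such annotations, per-segment material properties and seam annotations could be stored as structured metadata along the following lines; the class names, fields, and values are invented for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentAnnotation:
    material: str          # e.g. "cotton jersey"
    elasticity: float      # maximum fractional stretch before the fit is rejected
    stiffness: float       # resistance to bending, used when deforming the segment
    gloss: float = 0.0     # visual property, only relevant if the model is rendered

@dataclass
class ProductModel:
    product_id: str
    segments: dict = field(default_factory=dict)   # segment name -> SegmentAnnotation
    seams: dict = field(default_factory=dict)      # (segment_a, segment_b) -> annotation

# Illustrative annotations in the spirit of FIG. 3; all values are invented.
shirt = ProductModel("shirt-001")
shirt.segments["front_panel"] = SegmentAnnotation("cotton jersey", elasticity=0.20, stiffness=0.1)
shirt.segments["left_sleeve"] = SegmentAnnotation("cotton jersey", elasticity=0.25, stiffness=0.1)
shirt.seams[("front_panel", "left_sleeve")] = SegmentAnnotation("polyester thread", 0.05, 0.6)
```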
FIG. 4 is a simplified flow diagram illustrating a method of determining a level of fit of a product and a user according to an embodiment of the present invention. The flow is described in connection with a computer system, which is an example of a computer system described herein. Some or all of the operations of the flow may be implemented via specific hardware on a computer system and/or may be implemented as computer readable instructions stored on a non-transitory computer readable medium of a computer system. As stored, the computer readable instructions represent programmable modules comprising code executable by a processor of a computer system. Execution of such instructions configures the computer system to perform the corresponding operations. Each programmable module represents, in combination with a processor, a component for performing one or more corresponding operations. While the operations are shown in a particular order, it should be understood that the particular order is not required and that one or more operations may be omitted, skipped, and/or reordered.
At 402, the method includes receiving a request to determine a fit level of a product and a user. For example, a user may request a determination of the fit level of a particular product offered by an online retailer with respect to the user when considering whether to purchase the product. The user may submit such a request in order to receive an appropriate size recommendation (since garment sizes typically vary from brand to brand). The request is sent by a user device operated by a user and subsequently received by the mobile application server.
At 404, the method includes identifying a user associated with the received request. In some embodiments, a user maintains an account with a mobile application server and is identified by login credentials provided by the user. In some embodiments, the user is identified via a user device identifier received in the request.
At 406, the method includes retrieving a user model for use in determining the level of fit for the request. In some embodiments, this involves retrieving a 3D model of the user stored in association with the identified user. In some embodiments, this involves generating a 3D model of the user based on information about the user (e.g., measurement data). In some embodiments, the 3D model of the user may be received from the user device together with or separately from the request. It should be noted that techniques for generating a 3D model of an object (e.g., a user) using a user device are described in more detail below with respect to fig. 5.
At 408, the method includes identifying a product associated with the request. In some embodiments, the request includes an identifier of the product, which may be used to identify the product. For example, the request includes a Stock Keeping Unit (SKU) number or other suitable identifier. Alternatively, a user viewing a website dedicated to a particular product may initiate a request via a button on the website, in which case the product is identified by being associated with the website. At 410, the method includes retrieving a 3D model of the product. This involves querying the object model database for the identified product.
At 412, the method includes adapting the retrieved 3D model of the product to the 3D model of the user. To do so, each contact point on the 3D model of the product is aligned to a particular vertex or point located on the user's 3D model. The 3D model of the product is then deformed such that the inner surface of the 3D model of the product is outside (i.e., does not overlap) the outer surface of the 3D model of the user. The amount of deformation of the 3D model of the product is recorded. Techniques for adapting a 3D model of a product to a 3D model of a user are described in more detail below with reference to fig. 6.
At 414, the method includes determining whether the recorded amount of deformation of the 3D model of the product is greater than some threshold deformation value. The threshold deformation value is determined as a function of one or more material property values stored in relation to the 3D model (or a segment thereof). Upon determining that the amount of deformation of the 3D model of the product has exceeded the threshold deformation value, the method produces a determination that the product is unsuitable for the user. The method also involves updating the size and/or style of the product at 416, and then repeating the method from step 410.
At 418, if the amount of deformation of the 3D model of the product does not exceed the threshold deformation value, the method includes determining a level of fit of the product and the user. More specifically, the method includes determining the fit level as a numerical value based on a relationship between the amount of deformation of the 3D model of the product and the threshold deformation value. The numerical fit level is then provided in response to the request.
It should be appreciated that the specific steps illustrated in FIG. 4 provide a particular method of determining a fit level of a product and a user according to an embodiment of the present invention. As described above, other orders of steps may also be performed according to alternative embodiments. For example, alternative embodiments of the present invention may perform the above steps in a different order. Moreover, the various steps shown in fig. 4 may include multiple sub-steps that may be performed in various orders as appropriate to the individual step. In addition, additional steps may be added or removed depending on the particular application. One of ordinary skill in the art would recognize many variations, modifications, and alternatives.
FIG. 5 depicts an illustrative example of a technique for obtaining a 3D model using sensor data in accordance with at least some embodiments. In accordance with at least some embodiments, sensor data 502 may be obtained from one or more input sensors installed on a user device. The captured sensor data 502 includes image information 504 captured by the camera device and depth map information 506 captured by the depth sensor.
As described above, the sensor data 502 may include image information 504. One or more image processing techniques may be applied to the image information 504 to identify one or more objects within the image information 504. For example, edge detection may be used to identify a portion 508 within the image information 504 that includes an object. To this end, discontinuities in brightness, color, and/or texture may be identified across an image in order to detect edges of various objects within the image. Portion 508 depicts an illustrative example image of a chair in which such discontinuities have been emphasized.
As also described above, the sensor data 502 may include depth information 506. In the depth information 506, each pixel may be assigned a value representing a distance between the user device and a particular point corresponding to the location of the pixel. The depth information 506 may be analyzed to detect abrupt changes in depth within the depth information 506. For example, an abrupt change in distance may indicate an edge or boundary of an object within the depth information 506.
In some embodiments, sensor data 502 may include both image information 504 and depth information 506. In at least some of these embodiments, the object may first be identified in the image information 504 or the depth information 506, and various attributes of the object may be determined from the other information. For example, edge detection techniques may be used to identify the portion 508 of the image information 504 that includes the object. This portion 508 may then be mapped to a corresponding portion 510 in the depth information to determine depth information for the identified object (e.g., point cloud). In another example, a portion 510 including an object may first be identified within the depth information 506. In this example, the portion 510 may then be mapped to a corresponding portion 508 in the image information to determine an appearance attribute (e.g., a color or texture value) of the identified object.
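A simple sketch of the depth-discontinuity idea, assuming a registered depth map in metres and an assumed jump threshold; this is one plausible realisation, not the specific technique claimed.

```python
import numpy as np

def depth_boundaries(depth, jump_threshold=0.10):
    """Mark pixels whose depth jumps abruptly relative to a neighbour.

    depth: HxW range image in metres; jump_threshold (metres) is an assumed value.
    The resulting mask approximates object boundaries in the depth information.
    """
    boundary = np.zeros(depth.shape, dtype=bool)
    boundary[:, 1:] |= np.abs(np.diff(depth, axis=1)) > jump_threshold
    boundary[1:, :] |= np.abs(np.diff(depth, axis=0)) > jump_threshold
    return boundary

# With registered sensors, the same pixel region can index both modalities, e.g.:
#   colors = rgb_image[region_mask]   # appearance attributes for the object region
#   ranges = depth_map[region_mask]   # depth values for the same region
```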
In some embodiments, various attributes of the identified objects in the sensor data 502 (e.g., color, texture, point cloud data, object edges) may be used as input to a machine learning module to identify or generate a 3D model 512 that matches the identified objects. In some embodiments, a point cloud of objects may be generated from depth information and/or image information and compared to point cloud data stored in a database to identify a best-matching 3D model. Alternatively, sensor data 502 may be used to generate a 3D model of an object (e.g., a user or a product). To this end, a mesh may be created from point cloud data obtained from portion 510 of depth information 506. The system may then map appearance data from the portion of image information 504 corresponding to portion 510 to the mesh to generate a base 3D model. Although specific techniques are described, it should be noted that there are many techniques for identifying a particular object from the sensor output.
As described elsewhere, sensor data captured by a user device (e.g., user device 102 of fig. 1) may be used to generate a 3D model of a user using the techniques described above. The 3D model of the user may then be provided to the mobile application server as user data. In some embodiments, the sensor data may be used to generate a 3D model of a product, which may then be stored in the object model data 240. For example, a user wishing to sell a product may capture sensor data related to the product from his or her user device. The user's device may then generate a 3D model in the manner described above and may provide the 3D model to the mobile application server.
FIG. 6 illustrates an example technique for fitting a 3D model of a product onto a 3D model of a user in order to determine a level of fit in accordance with at least some embodiments. Illustratively, FIG. 6 depicts a 3D model 602 of a user and a 3D model 604 of a product. It should be noted that image data need not be included in a 3D model unless the 3D model is to be rendered, which it may not be. Thus, for the purposes of this technique, a 3D model may include only structural (e.g., mesh) data.
In some embodiments, the 3D model 604 of the product may include multiple contact points that are intended to snap to specific areas of the 3D model 602 of the user. For example, a contact point 606 (A) on a shoulder strap of the 3D model 604 of the product may be aligned to a particular vertex or point 606 (B) located on a shoulder of the 3D model 602 of the user. This may be repeated for multiple contact points located throughout the entire 3D model 604 of the product. In some embodiments, the contact points for the product may be automatically determined based on the category or type of the 3D model 604 of the product. For example, all 3D models of a T-shirt may utilize the same set of contact points. In some embodiments, the contact points for a particular 3D model 604 of a product may be set by a user (e.g., the manufacturer of the product).
Once the set of contact points has been matched to the appropriate points on the user's 3D model 602, a rigid body transformation can be estimated from the corresponding points. A rigid body transformation consists of a translation and a rotation. By applying the rigid body transformation to the 3D model 604 of the product, the 3D model 604 of the product and the 3D model 602 of the user are approximately aligned in 3D. Then, by adding a small translation to each contact point of the 3D model 604 of the product, the contact points of the 3D model 604 of the product may be aligned to corresponding points on the 3D model 602 of the user. Once the multiple contact points (e.g., 606) have been aligned to the appropriate points on the user's 3D model 602, the 3D model 604 of the product may be deformed such that the inner surface of the 3D model 604 of the product rests on the outer surface of the user's 3D model 602 or is outside the outer surface of the user's 3D model 602. This may involve stretching and/or bending the 3D model 604 of the product in order to move one or more surfaces toward the exterior of the 3D model 602 of the user without moving the contact points of the 3D model 604 of the product away from their current locations on the 3D model 602 of the user. As the 3D model 604 of the product is deformed in this manner, the amount of deformation associated with each portion of the 3D model 604 of the product is recorded. During this step, a physics engine may be used to accurately predict the deformation of the material in the 3D model 604 of the product as the 3D model 604 of the product deforms. The adjustment of certain portions of the 3D model 604 of the product may be accomplished using a mesh editing technique (e.g., Laplacian surface editing) in which one or more ROIs (regions of interest) of the product mesh may be adjusted according to a set of parameters.
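The disclosure does not prescribe a specific estimator for this alignment step; as one common choice, the rigid body transformation could be recovered from the matched contact points with a Kabsch/Procrustes-style least-squares solution, sketched below.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst.

    src, dst: Nx3 arrays of corresponding contact points (product vs. user model).
    This is the standard Kabsch/Procrustes solution, shown only as one possible
    way to realise the alignment step described above.
    """
    src, dst = np.asarray(src, dtype=float), np.asarray(dst, dtype=float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Applying the transform roughly aligns the product model with the user model:
#   aligned_vertices = product_vertices @ R.T + t
```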
In other words, the fitting process may be accomplished by adjusting a set of parameters that control deformation of one or more ROIs of the product model until the product model fits the user model. The set of parameters may be defined as a set of measurements (e.g., a displacement for each vertex). The process may be formulated as an optimization, in which different optimization algorithms may be used to find the set of parameters that minimizes one or more cost functions. The one or more cost functions may be defined in a number of ways. For example, a cost function may be defined as the number of penetrations between the meshes of the two 3D models, or as the average distance from the body mesh to the vertices of the clothing mesh.
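A hedged sketch of one such cost function is shown below, combining a penetration count (assumed to come from a separate mesh-collision check) with a brute-force average nearest-vertex distance; the penalty weight and function shape are illustrative choices, not values taken from the disclosure.

```python
import numpy as np

def fit_cost(garment_vertices, body_vertices, penetration_count, penalty=10.0):
    """One possible cost combining the two terms mentioned above.

    penetration_count is assumed to come from a separate mesh-collision check
    (not implemented here); the distance term is a brute-force nearest-neighbour
    average, adequate only for small meshes. The penalty weight is illustrative.
    """
    garment = np.asarray(garment_vertices, dtype=float)
    body = np.asarray(body_vertices, dtype=float)
    dists = np.linalg.norm(garment[:, None, :] - body[None, :, :], axis=2)
    avg_distance = dists.min(axis=1).mean()   # garment vertex -> nearest body vertex
    return penalty * penetration_count + avg_distance
```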
Once the inner surface of the 3D model 604 of the product rests on or is outside the outer surface of the 3D model 602 of the user, an adapted model 608 is generated. Once the adapted model has been generated, the technique involves determining whether the amount of deformation associated with each portion of the 3D model 604 of the product required to generate the adapted model is within an acceptable threshold. To this end, material property values are obtained for each section, and a threshold value is determined from these material property values. The amount of deformation associated with each section of the 3D model may then be compared to a respective threshold. For example, the material property value may indicate an elasticity value for a particular segment. In this example, the maximum amount that the segment can be stretched is determined from the elasticity value. The amount that a segment is stretched to create the adapted model 608 is then compared to the maximum amount that the segment can be stretched. If a section is stretched by more than the maximum amount that the section can stretch, the product cannot fit the user. The fit level may be determined if the amount of deformation associated with each section of the 3D model is within an acceptable threshold (e.g., does not exceed the threshold). By way of example, the adaptation level may be determined as a ratio of the amount by which a segment is stretched divided by a maximum threshold by which the segment may be stretched. In some cases, the adaptation level may correspond to a comfort level, as a higher degree of deformation relative to a maximum deformation threshold may equate to a lower comfort level.
Alternatively or additionally, the fit level may be determined based on an amount of space between one or more portions of the 3D model of the product and the 3D model of the user. For example, the total volume of the space between the 3D model of the product and the 3D model of the user may be determined. In this example, if the total volume of space is greater than some threshold fit value, it may be determined that the product does not fit the user (e.g., the product is too large or uncomfortable). In some cases, such a threshold fit value may be proportional to the size of the 3D model of the product or the 3D model of the user.
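One way such a volume-based criterion could be realized, assuming both 3D models are closed triangle meshes, is sketched below; the signed-tetrahedron volume formula is a standard identity, while the proportional threshold value is an illustrative assumption.

```python
import numpy as np

def mesh_volume(vertices, faces):
    # Enclosed volume of a closed triangle mesh: sum of signed volumes of the
    # tetrahedra formed by each triangle and the origin (divergence theorem).
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    return abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

def product_too_large(product_v, product_f, body_v, body_f, ratio=0.15):
    # Approximate the space between garment and body as the difference of
    # enclosed volumes, compared against a threshold proportional to the body.
    gap = mesh_volume(product_v, product_f) - mesh_volume(body_v, body_f)
    return gap > ratio * mesh_volume(body_v, body_f)
```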
In some embodiments, a particular type of product may be associated with a plurality of fitting reference points 610 (A-D), which represent particular reference locations at which deformation should be measured against a threshold deformation value. In these embodiments, the amount of deformation of the 3D model 604 of the product required to generate the adapted model 608 is measured at each reference location and then compared to the threshold for that reference location. As described above, the threshold for a reference location may be determined based on the material properties associated with the section of the 3D model 604 of the product that includes the reference location.
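For products that carry such reference points, the per-location check might look like the following sketch; the mapping of reference points to segments and all field names are hypothetical and introduced only for illustration.

```python
def check_reference_points(reference_points, segments, measured):
    """reference_points maps each point (e.g., '610A') to the segment that
    contains it; 'measured' holds the deformation recorded at each reference
    point while generating the adapted model."""
    results = {}
    for point, segment_id in reference_points.items():
        threshold = segments[segment_id]["max_deformation"]  # from material properties
        results[point] = measured[point] <= threshold
    return results

# Example with hypothetical values:
# check_reference_points({"610A": "collar", "610B": "chest"},
#                        {"collar": {"max_deformation": 1.2},
#                         "chest": {"max_deformation": 2.0}},
#                        {"610A": 0.8, "610B": 2.3})
# -> {"610A": True, "610B": False}
```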
FIG. 7 shows a flow diagram depicting a process for determining a level of fit of a product and a user in accordance with at least some embodiments. Process 700 depicted in FIG. 7 may be performed by a mobile application server (e.g., mobile application server 204 of FIG. 2) in communication with a user device (e.g., user device 202 of FIG. 2).
Process 700 involves, at 702, receiving an adaptation request for a user and a product. The request may be received from a user device, and the user may be identified based on an association with that user device.
Process 700 involves, at 704, obtaining a first 3D model associated with a user. In some embodiments, the first 3D model is received and stored by the mobile application server prior to receiving the adaptation request. In some embodiments, the first 3D model may be received with an adaptation request.
Process 700 involves, at 706, obtaining a second 3D model associated with the product. The second 3D model is segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties. Such material properties may include one or more of an elasticity value and a stiffness value.
Process 700 involves, at 708, fitting a second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to one or more material properties associated with each segment.
Process 700 involves, at 710, determining a fit level for the user and the product based on the deformation of each of the plurality of sections. In some embodiments, the fit level is determined based at least in part on the amount of deformation required to fit the second 3D model onto the first 3D model. In some embodiments, the fit level is determined based at least in part on a volume of space determined to be between the second 3D model and the first 3D model. In some embodiments, the fit level includes a numerical value corresponding to the deformation of each of the plurality of sections relative to a deformation threshold level.
Process 700 also involves providing a size recommendation to the user at 712. In some embodiments, the product is associated with a first size or style, and the method further comprises recommending a second size or style of the product to the user upon determining that an amount of deformation required to fit the second 3D model onto the first 3D model is greater than a threshold deformation value.
Process 700 also involves, at 714, providing the second 3D model adapted to the first 3D model to the user device. When received at the user device, the second 3D model adapted to the first 3D model may be rendered on a display of the user device.
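Taken together, steps 702 through 714 can be outlined as a single server-side handler. The sketch below shows one possible structure only; the callables supplied by the storage and catalog layers, the helper fit_product_onto_body, and the request/response field names are hypothetical, and fit_level refers to the per-segment sketch given earlier.

```python
def handle_fit_request(request, load_user_model, load_product_model, next_size):
    """Orchestrates steps 702-714; the three callables are assumed to be
    provided by the mobile application server's storage and catalog layers."""
    # 702: identify the user and product from the adaptation request.
    user_id, product_id = request["user_id"], request["product_id"]

    # 704: obtain the first 3D model (the user's body model).
    body_model = load_user_model(user_id)

    # 706: obtain the second 3D model (the product), segmented with
    # per-segment material properties.
    product_model = load_product_model(product_id)

    # 708: fit the product model onto the body model, recording how much
    # each segment is deformed (see the optimization sketch above).
    fitted_model, segments = fit_product_onto_body(product_model, body_model)  # hypothetical helper

    # 710: determine the fit level from the recorded deformations.
    level = fit_level(segments)

    # 712: recommend a different size or style when deformation exceeds a limit.
    recommendation = next_size(product_id) if level is None else None

    # 714: return the fitted model so the user device can render it.
    return {"fit_level": level, "size_recommendation": recommendation,
            "fitted_model": fitted_model}
```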
FIG. 8 illustrates an example of components of a computer system 800, according to some embodiments. Computer system 800 is an example of the computer systems described above. Although these components are shown as belonging to the same computer system 800, the computer system 800 may also be distributed.
Computer system 800 includes at least a processor 802, memory 804, storage 806, input/output (I/O) peripherals 808, communication peripherals 810, and an interface bus 812. Interface bus 812 is configured to communicate, transfer, and transport data, control, and commands between the various components of computer system 800. Memory 804 and storage 806 include computer-readable storage media, such as RAM, ROM, electrically erasable programmable read-only memory (EEPROM), hard drives, CD-ROM, optical storage, magnetic storage, electronic non-volatile computer storage (e.g., Flash™ memory), and other tangible storage media. Any such computer-readable storage media may be configured to store instructions or program code embodying aspects of the present disclosure. Memory 804 and storage 806 also include computer-readable signal media. A computer-readable signal medium includes a propagated data signal with computer-readable program code embodied therein. Such a propagated signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any combination thereof. A computer-readable signal medium includes any computer-readable medium that is not a computer-readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with computer system 800.
In addition, memory 804 includes an operating system, programs, and applications. The processor 802 is configured to execute stored instructions and includes, for example, a logic processing unit, a microprocessor, a digital signal processor, and other processors. The memory 804 and/or the processor 802 may be virtualized and may be hosted within another computer system, such as a cloud network or a data center. The I/O peripherals 808 include user interfaces (e.g., keyboard, screen (e.g., touch screen), microphone, speaker, other input/output devices) and computing components (e.g., graphics processing unit, serial port, parallel port, universal serial bus, and other input/output peripherals). I/O peripheral 808 connects to processor 802 through any port coupled to interface bus 812. Communication peripheral devices 810 are configured to facilitate communications between computer system 800 and other computing devices over a communication network and include, for example, network interface controllers, modems, wireless and wired interface cards, antennas, and other communication peripheral devices.
While the present subject matter has been described in detail with respect to specific embodiments thereof, it will be appreciated that those skilled in the art, upon attaining an understanding of the foregoing, may readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, it is to be understood that the present disclosure has been presented for purposes of illustration and not limitation, and does not preclude inclusion of such modifications, variations and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. Indeed, the methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the disclosure. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the disclosure.
Unless specifically stated otherwise, it is appreciated that throughout the description, discussions utilizing terms such as "processing," "computing," "calculating," "determining," and "identifying" refer to the action and processes of a computing device (e.g., one or more computers or similar electronic computing devices) that manipulates and transforms data represented as physical electronic or magnetic quantities within memories, registers, or other information storage, transmission, or display devices of the computing platform.
The one or more systems discussed herein are not limited to any particular hardware architecture or configuration. The computing device may include any suitable arrangement of components that provide results conditional on one or more inputs. Suitable computing devices include a multi-purpose microprocessor-based computer system that accesses stored software that programs or configures the computer system from a general-purpose computing device to a special-purpose computing device that implements one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combination of languages may be used to implement the teachings contained herein in software for programming or configuring a computing device.
Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the above examples may be changed; for example, blocks may be reordered, combined, and/or broken into sub-blocks. Certain blocks or processes may be performed in parallel.
Conditional language used herein, such as "can," "could," "might," "may," and "e.g.," is generally intended to convey that certain examples include certain features, elements, and/or steps while other examples do not, unless specifically stated otherwise or otherwise understood within the context in which it is used. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more examples, or that one or more examples necessarily include logic for deciding, with or without author input or prompting, whether such features, elements, and/or steps are included in or are to be performed in any particular example.
The terms "comprising," "including," "having," and the like, are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and the like. Furthermore, the term "or" is used in its inclusive sense (and not in its exclusive sense) such that, when used, e.g., to connect a list of elements, the term "or" means one, some, or all of the elements in the list. The use of "adapted to" or "configured to" herein means open and inclusive language that does not exclude an apparatus being adapted to or configured to perform an additional task or step. In addition, the use of "based on" means open and inclusive in that a process, step, calculation, or other action that is "based on" one or more recited conditions or values may in fact be based on additional conditions or values in addition to those recited. Similarly, a use of "based at least in part on" means open and inclusive in that a process, step, calculation, or other action that is "based at least in part on" one or more recited conditions or values may actually be based on additional conditions or values in addition to those recited. The headings, lists, and labels included herein are for convenience of description only and are not meant to be limiting.
The various features and processes described above may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of the present disclosure. Moreover, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular order, and the blocks or states associated therewith may be performed in other suitable orders. For example, the blocks or states described may be performed in an order different than the order specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in series, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed examples. Similarly, the example systems and components described herein may be configured differently than described. For example, elements may be added, removed, or rearranged compared to the disclosed examples.
Claims (20)
1. A method, comprising:
receiving an adaptation request, the adaptation request including at least an indication of a user and a product;
obtaining a first 3D model associated with the user;
obtaining a second 3D model associated with the product, the second 3D model segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties;
fitting the second 3D model onto the first 3D model such that each segment of the plurality of segments is deformed by the first 3D model according to one or more material properties associated with each segment; and
determining a fit level of the user and the product based on the deformation of each of the plurality of sections.
2. The method of claim 1, wherein the fit level is determined based at least in part on an amount of deformation required to fit the second 3D model onto the first 3D model.
3. The method of claim 1, wherein the fit level is determined based at least in part on a volume of space determined to be between the second 3D model and the first 3D model.
4. The method of claim 1, wherein the product is associated with a first size or style, the method further comprising: recommending a second size or style of the product to the user upon determining that an amount of deformation required to fit the second 3D model onto the first 3D model is greater than a threshold deformation value.
5. The method of claim 1, wherein the fit level comprises a numerical value corresponding to a deformation of each of the plurality of sections relative to a deformation threshold level.
6. The method of claim 1, wherein the one or more material properties comprise one or more of an elasticity value or a stiffness value.
7. The method of claim 1, further comprising: providing the second 3D model adapted to the first 3D model to a user device for rendering the second 3D model adapted to the first 3D model on a display of the user device.
8. A system, comprising:
a processor; and
a memory comprising instructions that, when executed with the processor, cause the system to at least:
receiving an adaptation request, the adaptation request including at least an indication of a user and a product;
obtaining a first 3D model associated with the user;
obtaining a second 3D model associated with the product, the second 3D model segmented into a plurality of segments, wherein each segment of the plurality of segments is associated with one or more material properties;
fitting the second 3D model onto the first 3D model such that each segment is deformed by the first 3D model according to one or more material properties associated with each segment of the plurality of segments; and
determining a fit level of the user and the product based on the deformation of each of the plurality of sections.
9. The system of claim 8, wherein the fit level is determined based at least in part on an amount of deformation required to fit the second 3D model onto the first 3D model.
10. The system of claim 8, wherein the fit level is determined based at least in part on a volume of space determined to be between the second 3D model and the first 3D model.
11. The system of claim 8, wherein the product is associated with a first size or style, and wherein the instructions further cause the system to: recommending a second size or style of the product to the user upon determining that an amount of deformation required to fit the second 3D model onto the first 3D model is greater than a threshold deformation value.
12. The system of claim 8, wherein the fit level comprises a numerical value corresponding to a deformation of each of the plurality of sections relative to a deformation threshold level.
13. The system of claim 8, wherein the one or more material properties comprise one or more of an elasticity value or a stiffness value.
14. The system of claim 8, wherein the instructions further cause the system to: providing the second 3D model adapted to the first 3D model to a user device for rendering the second 3D model adapted to the first 3D model on a display of the user device.
15. A non-transitory computer-readable medium storing specific computer-executable instructions that, when executed by a processor, cause a computer system to at least:
receiving an adaptation request, the adaptation request including at least an indication of a user and a product;
obtaining a first 3D model associated with the user;
obtaining a second 3D model associated with the product, the second 3D model segmented based on material properties;
fitting the second 3D model onto the first 3D model such that each section of the second 3D model is deformed by the first 3D model according to its material properties; and
determining a fit level of the user and the product based on the deformation of each section.
16. The non-transitory computer-readable medium of claim 15, wherein the fit level is determined based at least in part on an amount of deformation required to fit the second 3D model onto the first 3D model.
17. The non-transitory computer-readable medium of claim 15, wherein the fit level is determined based at least in part on a volume of space determined to be between the second 3D model and the first 3D model.
18. The non-transitory computer-readable medium of claim 15, wherein the product is associated with a first size, and wherein the instructions further cause the computer system to: recommending a second size or style of the product to the user upon determining that an amount of deformation required to fit the second 3D model onto the first 3D model is greater than a threshold deformation value.
19. The non-transitory computer-readable medium of claim 15, wherein the fit level comprises a numerical value corresponding to a deformation of each section of the second 3D model relative to a deformation threshold level.
20. The non-transitory computer-readable medium of claim 15, wherein the material properties comprise one or more of an elasticity value or a stiffness value.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202062987196P | 2020-03-09 | 2020-03-09 | |
US62/987,196 | 2020-03-09 | ||
PCT/CN2021/078533 WO2021179936A1 (en) | 2020-03-09 | 2021-03-01 | System and method for virtual fitting |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115315728A (en) | 2022-11-08 |
Family
ID=77671213
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202180016193.8A Pending CN115315728A (en) | 2020-03-09 | 2021-03-01 | System and method for virtual adaptation |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115315728A (en) |
WO (1) | WO2021179936A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140201023A1 (en) * | 2013-01-11 | 2014-07-17 | Xiaofan Tang | System and Method for Virtual Fitting and Consumer Interaction |
CN103714466A (en) * | 2013-12-24 | 2014-04-09 | 中国科学院深圳先进技术研究院 | Virtual clothes fitting system and virtual clothes fitting method |
JP2015184875A (en) * | 2014-03-24 | 2015-10-22 | 株式会社東芝 | Data processing device and data processing program |
WO2016135078A1 (en) * | 2015-02-23 | 2016-09-01 | Fittingbox | Process and method for real-time physically accurate and realistic-looking glasses try-on |
WO2018183291A1 (en) * | 2017-03-29 | 2018-10-04 | Google Llc | Systems and methods for visualizing garment fit |
2021
- 2021-03-01 WO PCT/CN2021/078533 patent/WO2021179936A1/en active Application Filing
- 2021-03-01 CN CN202180016193.8A patent/CN115315728A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2021179936A1 (en) | 2021-09-16 |
WO2021179936A9 (en) | 2022-09-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10740941B2 (en) | Processing user selectable product images and facilitating visualization-assisted virtual dressing | |
US12125095B2 (en) | Digital wardrobe | |
US11734740B2 (en) | Garment size mapping | |
US10163003B2 (en) | Recognizing combinations of body shape, pose, and clothing in three-dimensional input images | |
US11798261B2 (en) | Image face manipulation | |
US10964078B2 (en) | System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision | |
KR102346320B1 (en) | Fast 3d model fitting and anthropometrics | |
CN107967693B (en) | Video key point processing method and device, computing equipment and computer storage medium | |
US20220188897A1 (en) | Methods and systems for determining body measurements and providing clothing size recommendations | |
JP2019503906A (en) | 3D printed custom wear generation | |
US20150134302A1 (en) | 3-dimensional digital garment creation from planar garment photographs | |
WO2016019033A2 (en) | Generating and utilizing digital avatar data | |
US20150269759A1 (en) | Image processing apparatus, image processing system, and image processing method | |
CN107609946B (en) | Display control method and computing device | |
US11893681B2 (en) | Method for processing two-dimensional image and device for executing method | |
WO2021179936A1 (en) | System and method for virtual fitting | |
WO2022081745A1 (en) | Real-time rendering of 3d wearable articles on human bodies for camera-supported computing devices | |
CN111557022B (en) | Two-dimensional image processing method and device for executing the method | |
WO2021179919A1 (en) | System and method for virtual fitting during live streaming | |
US20240071019A1 (en) | Three-dimensional models of users wearing clothing items | |
US20240320917A1 (en) | Diffusion based cloth registration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||