US20200380333A1 - System and method for body scanning and avatar creation - Google Patents
- Publication number
- US20200380333A1 (application US16/853,167)
- Authority
- US
- United States
- Prior art keywords
- avatar
- data
- consumer
- garment
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G06K9/00369—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
Definitions
- the invention relates to a system and method for body scanning and avatar creation. More specifically, a scanning system and method using a range camera produces an avatar and provides for draping and display of virtual garments using augmented reality.
- an apparatus for 3D virtual try-on of apparel on an avatar.
- the system for 3D virtual try-on of apparel on an avatar is disclosed.
- a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving body specifications of one or more fit models, receiving one or more grade rules, receiving one or more fabric specifications, and receiving specifications of a consumer's body.
- the values of one or more fabric constants are determined according to the received one or more fabric specifications.
- One or more virtual garments in graded sizes are created and stored in a database based on the received garment specifications and fabric constants.
- one or more graded virtual fit models are created and stored in a database based on the received specifications of the fit model.
- Each virtual garment is draped on the related virtual fit model to create a fit-model drape.
- An avatar is received or created to represent a consumer's body shape.
- a selected one of the virtual garments is determined that represents a closest size for fitting on the avatar.
- the selected virtual garment is then re-draped on the consumer avatar.
- the consumer drape can then be viewed in 3D on the web or in a software application on any computing device. Data regarding the result of the virtual try-on process can then be utilized by the retailer, the consumer, and/or a third party.
- This virtual try-on data can be in the form of visual data or quantitative data that can be interpreted to determine the goodness of a garment's fit. Specifically, consumers can be presented with such data to assess the appropriate size and the goodness of a garment's fit, retailers can utilize such data for assessing how their garments are performing on their customers' bodies, and finally, such data can be used as a predictive tool for recommending further garments to consumers (e.g., in a predictive, search or decision engine).
- a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving specifications of a fit model, receiving a digital pattern corresponding to the fit model, receiving one or more grade rules, and receiving one or more fabric specifications.
- One or more graded digital patterns corresponding to one or more available sizes are calculated and stored in a database based on the received specifications of the garment, the received specifications of the fit model, the received digital pattern corresponding to the fit model, and the grade rules.
- the values of one or more fabric constants are determined according to the received one or more fabric specifications.
- An avatar representing the person's body is received, and a selected one of the available sizes is determined that represents a closest size for fitting on the avatar.
- a virtual garment is created from the stored graded digital pattern corresponding to the selected available size. The selected virtual garment is then draped on the avatar according to the fabric constants.
- a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving specifications of a fit model, receiving one or more grade rules, and receiving one or more fabric specifications.
- a virtual fit model is calculated and stored based on the received specifications of the garment, and the received specifications of the fit model.
- the values of one or more fabric constants are determined according to the received one or more fabric specifications.
- An avatar representing the person's body is received, and a selected size for the person's body is determined according to the received one or more grade rules.
- a virtual garment is created in the selected size according to the virtual fit model, the one or more grade rules, and the selected size. The selected virtual garment is then draped on the avatar according to the fabric constants.
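The closest-size determination described in the methods above can be sketched as a nearest-neighbour comparison between the avatar's body measurements and those of each graded size. This is a minimal illustration only: the patent does not specify the matching metric, so the Euclidean distance, the measurement names, and all numeric values here are hypothetical.

```python
import math

def closest_size(avatar_measurements, graded_sizes):
    """Pick the graded size whose key measurements are nearest the avatar's.

    avatar_measurements: dict of measurement name -> value in cm
    graded_sizes: dict of size label -> dict of the same measurements
    Euclidean distance over the shared measurements is an illustrative
    heuristic, not the method specified in the patent.
    """
    def distance(size_measurements):
        return math.sqrt(sum(
            (avatar_measurements[k] - size_measurements[k]) ** 2
            for k in avatar_measurements))
    return min(graded_sizes, key=lambda label: distance(graded_sizes[label]))

avatar = {"bust": 92.0, "waist": 74.0, "hip": 99.0}
sizes = {
    "S": {"bust": 86.0, "waist": 68.0, "hip": 92.0},
    "M": {"bust": 91.0, "waist": 73.0, "hip": 97.0},
    "L": {"bust": 96.0, "waist": 78.0, "hip": 102.0},
}
print(closest_size(avatar, sizes))  # prints "M"
```

In practice a system like this would likely weight measurements differently (e.g., bust more than hip for a blouse), but the selection structure is the same.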
- a computer program product is stored on a computer-readable medium and contains executable software instructions for fitting one or more garments on a person's body.
- a system for scanning a body comprises a processor; a range camera capable of capturing at least a first set of depth images of the body rotated to 0 degrees, and at least a second set of depth images of the body rotated to x degrees, wherein x is >0 degrees and x is <360 degrees; a first set of computer instructions executable on the processor capable of calculating a first set of three dimensional points from the first set of depth images and a second set of three dimensional points from the second set of depth images; a second set of computer instructions executable on the processor capable of rotating and translating the first and second sets of three dimensional points into a final set of three dimensional points; and a third set of computer instructions executable on the processor capable of creating a three dimensional mesh from the final set of three dimensional points.
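The rotate-and-translate step of the scanning system above can be sketched as follows: points captured with the body rotated by x degrees are rotated back by -x about the vertical axis (and offset by any translation between captures) before being combined with the 0-degree points. This is a minimal sketch assuming rotation about the vertical (y) axis; the function names and the plain-tuple point representation are illustrative, not from the patent.

```python
import math

def rotate_y(points, degrees):
    """Rotate 3D points (x, y, z) about the vertical y axis."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(c * x + s * z, y, -s * x + c * z) for (x, y, z) in points]

def merge_scans(points_at_0, points_at_x, x_degrees, translation=(0.0, 0.0, 0.0)):
    """Undo the body's rotation for the second capture, apply any known
    translation between captures, and combine both point sets into one."""
    tx, ty, tz = translation
    unrotated = [(px + tx, py + ty, pz + tz)
                 for (px, py, pz) in rotate_y(points_at_x, -x_degrees)]
    return points_at_0 + unrotated
```

A full pipeline would then pass the merged point cloud to a surface-reconstruction step to produce the three dimensional mesh.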
- FIG. 1 is a diagram that illustrates components of one embodiment of a system for providing online virtual try-on apparel on an avatar
- FIG. 2 is a diagram that illustrates further detail of the consumer system and a retail system of FIG. 1 ;
- FIG. 3 is a diagram that illustrates further detail of the virtual try-on system of FIG. 1 ;
- FIG. 4 is a diagram that illustrates further detail of the 3D virtual apparel system of FIG. 1 ;
- FIG. 5 is a diagram that illustrates further detail of the body scanner system used with the system of FIG. 1 ;
- FIG. 6 is a flow diagram that illustrates a general view of high level method steps performed by one embodiment
- FIG. 7 is a sample screenshot of a digital pattern for a garment according to one embodiment
- FIG. 8 is a flow diagram illustrating steps performed in creating a 3D virtual garment according to one embodiment
- FIG. 9 is a diagram illustrating an exemplary 3D piece placement and matching of segments of a virtual garment according to one embodiment
- FIG. 10 is a screenshot from the virtual sewing and draping process for a virtual garment according to one embodiment
- FIG. 11 is an example of a rendering of a drape of a virtual garment according to one embodiment
- FIG. 12 is a flow diagram illustrating the steps for creating a base avatar according to one embodiment
- FIG. 13 is a diagrammatic right perspective view of a stereophotogrammetry body scan booth and a scan booth computing device containing body scanning software according to one embodiment
- FIG. 14 is a flow diagram illustrating steps performed for scanning consumer body or fit model body using the stereophotogrammetry method of body scanning, as well as steps for converting the output of such body scanning method into a 3D mesh according to one embodiment;
- FIG. 15 is a flow diagram illustrating further steps performed by an avatar software application according to one embodiment
- FIG. 16 is a flow chart illustrating steps for creating an avatar according to one embodiment
- FIG. 17 is a flow diagram illustrating steps for creating an avatar according to one embodiment
- FIG. 18 is a flow diagram illustrating the steps for creating an avatar according to one embodiment
- FIG. 19 is a flow diagram illustrating a method for modelling the face of consumer body or fit model body according to one embodiment
- FIG. 20 is a flow chart that describes events that occur when a user decides to try on a virtual garment according to one embodiment
- FIG. 21 is a diagram showing an example of what a simulation and animation may look like on computer device in the context of a virtual fitting room according to one embodiment
- FIG. 22 is an example web page produced by a system according to one embodiment that illustrates how stretch values may be visually displayed using a color tension map
- FIG. 23 is another web page produced by a system according to one embodiment that illustrates how another form of a visual representation of consumer drape may show the 3D virtual garment as partially transparent;
- FIG. 24 is a flowchart that describes a process of analyzing fit data according to one embodiment
- FIG. 25 is a flow diagram that illustrates steps to relate fit data and how retailers may interpret such relations according to one embodiment.
- FIG. 26 is a diagram illustrating components of a prior art range camera device that could be used in one embodiment.
- FIG. 27 is a flow diagram illustrating steps that may be performed using a range camera of FIG. 26 in one embodiment.
- The system for online virtual try-on of apparel on an avatar, disclosed in accordance with preferred embodiments of the present invention, is illustrated in FIGS. 1-19, wherein like reference numerals are used throughout to designate like elements.
- FIG. 1 is a diagram that illustrates components of one embodiment of a system 10 for providing online virtual try-on apparel on an avatar.
- FIG. 2 is a diagram that illustrates further detail of the consumer system and a retail system of FIG. 1 .
- FIG. 3 is a diagram that illustrates further detail of the virtual try-on system of FIG. 1 .
- FIG. 4 is a diagram that illustrates further detail of the 3D virtual apparel system of FIG. 1 .
- FIG. 5 is a diagram that illustrates further detail of the body scanner system used with the system of FIG. 1 .
- a three dimensional (3D) virtual apparel processing system 112 gathers all or any combination of the following data available from retailer 50: (1) paper pattern 51, (2) grading rules 53, (3) technical pack 54, (4) digital pattern 57, (5) fit model's scan data or measurements 58, (6) production sample garment, or (7) fabric swatches; this data is shown in FIG. 1 in physical garment storage 55 or digital garment data storage 52.
- data from stereophotogrammetry system 150 is sent to system 112 .
- System 112 then processes all gathered data and may make output data available to all other systems.
- application service provider (ASP) 100 may receive data from consumer system 20 and stereophotogrammetry system 150 .
- the ASP 100 and consumer system 20 may be connected through a wide area network 1500, wherein each has network connections 1502 to facilitate such connection.
- Retailer system 50 may also be similarly connected to network 1500 .
- the wide area network 1500 may comprise the internet, and the network connections 1502 may comprise network routers, cards, etc. commonly used to connect to the internet.
- ASP 100, which may utilize off-the-shelf server software and network technology, then processes all the data and provides services for system 10.
- garment and apparel may be used interchangeably herein, both in the plural and the singular.
- Step 300 refers to the data gathering and processing that occurs in 3D virtual apparel processing system 112 .
- Product development information received from retailer system 50 may include data from stereophotogrammetry system 150 .
- system 112 and stereophotogrammetry system 150 may be a part of retailer system 50 .
- system 112 may be a part of ASP 100 , but stereophotogrammetry system 150 may be part of a third party network and vice versa.
- system 112 and stereophotogrammetry system 150 may not be a part of ASP 100 or system 50 , but rather a third party system.
- 3D virtual apparel processing system 112 comprises one or more apparel product development workstations 116 with apparel product development software 114 , and external hardware devices such as digitizer 118 , fabric scanner 120 , fabric testing equipment 122 , and the like.
- Retailer system 50 can represent either a retailer, or several companies within the apparel retail and manufacturing supply chain. Moreover, retailer system 50 may contain any portion, combination of sub-systems, or entire systems of systems 112, 150, and 100. For example, retailer system 50 may have fabric scanner 120 located therein.
- Stereophotogrammetry system 150 may be used to scan fit model physical body 151, which refers to a physical fit model commonly used in apparel product development. The scan data is used to create fit model avatar object 173 using avatar processing system 160.
- the retailer may only provide measurements of the fit model 151 , in which case, those measurements are used in fit model avatar processing system 160 to create fit model avatar object 173 .
- the process of creating fit model avatar object 173 may be similar to the process of creating consumer avatar object 171 described below.
- the stereophotogrammetry system 150 may be located independently at a third party location, at retailer system 50, or with ASP 100. Further information provided by a retailer may include digital pattern 57, paper pattern 51, fabric and print swatches 56, grading rules 53, fit-model scan data and/or body measurements 58, and production sample garment 59. With reference to FIG. 7, a sample screenshot of a digital pattern 57 is shown.
- some retailers 50 may not have access to some of the information described above.
- the retailer may not have any information on the pattern other than the technical pack 54 , in which case a production sample garment 59 and technical pack 54 will be used by the 3D virtual apparel processing system 112 .
- the retailer 50 may not provide a technical pack 54 , in which case the production sample garment 59 is used for processing as described below.
- whether a pattern and/or technical pack 54 is received electronically from the producer's digital garment data storage 52, or less sophisticated garment information 60 is received, the information is processed in 3D virtual apparel processing system 112 and stored in a first data storage 110.
- once the digital pattern 57 is received, it is imported into apparel product development software 114 and, if necessary, converted into the proper format.
- if the patterns are not digital, they are digitized using a digitizer known to those skilled in the art.
- otherwise, the pattern is made from the production sample garment 59 and/or technical pack 54. Further, the fabric swatches or production sample garment 59 received are tested using the fabric testing equipment 122 to produce an initial set of fabric presets, which are refined as described below to produce a final set of presets.
- a flow diagram illustrating steps performed in creating 3D virtual garment object 183 is shown according to one embodiment. Any entity may practice one portion, or all, of the steps of any or all the methods described herein. For example, and not by way of limitation, it is more likely in some embodiments that clothing manufacturers or retailers 50 would provide specifications for the apparel that may or may not include a digital or paper pattern. Further, in one embodiment, the process of creating 3D virtual garment 183 may be performed once per garment, and not repeated, irrespective of the number of times a consumer virtually tries on the style or the number of consumers that try on the garment.
- step 350 from the digital pattern 57 , production sample garment 59 , technical pack 54 , grading rules 53 , fit model scan data or body measurements 58 , and/or paper pattern 51 received from the retailer 50 , digital pattern pieces are created, or converted from digital pattern 57 , using the apparel product development software 114 .
- a pattern refers to the collection of the individual pieces of the garment 59 .
- the pattern pieces are drafted first, then laid over fabric, which is then cut around the perimeter of each piece. The resulting pieces of fabric are then sewn together to form the finished garment 59. Therefore, the pattern refers to a blueprint of the garment 59 and its individual pieces.
- part of the apparel product development software 114 may include a software program named TUKACAD running on product development workstation 116 in the 3D virtual apparel processing system 112 , which may be used to create or reformat the digital pattern.
- TUKACAD is widely used CAD software for digital pattern making, digitizing, grading, and marker making in the apparel industry, and is available from TUKATech, Inc., 5527 E. Slauson Ave., Los Angeles, Calif. 90040, www.tukatech.com.
- TUKACAD creates points and interpolates splines between points to create a 2D shape or CAD drawing. Additionally, the digital pattern can be graded in TUKACAD to create larger or smaller sizes. Those skilled in the art would recognize that a variety of CAD software programs may be used to perform the functions carried out by TUKACAD.
- a retailer 50 does not have digital pattern 57 or paper pattern 51 for a production sample garment 59 .
- Retailers 50 that do not have patterns 57 or 51 may provide or utilize a widely used technical pack 54 with specifications for how the style is to be made and/or may provide or use a production sample garment 59 for reference.
- These instructions are then interpreted in 3D virtual apparel processing system 112 to create a digital pattern.
- A paper pattern 51 corresponding to production sample garment 59 may also be received.
- Paper pattern 51 may then be digitized or scanned into TUKACAD software using digitizer or pattern scanner 118 .
- TUKACAD software draws the pattern in digital form resulting in a digital pattern made of digital pattern pieces.
- the retailer 50 has a digital pattern 57 in a third-party format.
- the digital pattern may then be converted into the format that can be read by the apparel product development software 114 using built-in conversion tools in TUKACAD Software.
- the physical fabric of a new garment may be tested and simulated to solve for digital fabric presets to be input into apparel product development software 114 for processing.
- various intrinsic characteristics or parameters that uniquely define real fabric may be determined.
- the results of those tests may be the fabric presets, which may be entered into a computer model.
- the fabric presets are not independent variables and further testing may be used to arrive at the final fabric presets.
- the computer model comprises a three dimensional (3D) virtual software environment.
- software named E-FIT SIMULATOR, also called E-FIT herein, is used as the computer model.
- E-FIT SIMULATOR is commercially available from TUKAtech, Inc., 5527 E. Slauson Ave., Los Angeles, Calif. 90040, www.tukatech.com, and is built using 3DS MAX's SDK.
- E-FIT in one embodiment, incorporates cloth simulation plug-in software, CLOTHFX, which is manufactured by Size 8 Software, and is readily available from TurboSquid, Inc., 643 Magazine St., Suite 405, New Orleans, La. 70130, www.turbosquid.com.
- E-FIT may be used in conjunction with the aforementioned CLOTHFX software to create 3D virtual apparel, including draping on a virtual model and simulating animation in a 3D environment as described below.
- This combination of software is currently used commonly by designers and engineers for rapid prototyping of apparel design and development.
- some presets are determined by conducting physical tests on one or more swatches of the fabric from production sample garment 59 , while other presets also require an additional virtual test, wherein results from the physical test are compared with results from the virtual test in a process of linear regression, which is used to arrive at the final preset value.
- One of the presets tested comprises stretch and shear resistance.
- An intrinsic property of cloth or fabric is its ability to stretch, which distinguishes it from a normal rigid body. Fabrics can vary in their ability to stretch, and this characteristic can be quantified.
- FAST (Fabric Assurance by Simple Testing)
- the known FAST-3 fabric extensibility test may be used. Procedurally, a first sub-test is performed by hanging a swatch vertically. A weight is attached to the swatch, and the change in length due to the force of gravity is measured. The dimension of the swatch that may be tested is typically 15 cm by 15 cm.
- the direction selected along which to hang the swatch may depend on the direction of the grain-line of the fabric. That direction is typically known as the warp direction.
- the test may be performed in the vertical direction (where vertical denotes the direction of gravity) for three specific orientations of the fabric. Those orientations are the directions of warp, weft, and bias.
- Weft is the direction perpendicular to warp.
- Bias is the direction that is 45 degrees from the warp and weft directions.
- the first measurement may be taken in the warp direction.
- the length of the swatch in the vertical may be, for example, 15 cm, and a weight of, for example, 100 grams may be attached along the bottom of the swatch, and a new length measurement is taken and recorded.
- the process is repeated for the weft direction. Finally, in the bias direction, the parameter being measured is called shear. For woven fabrics, measurements in the shear direction may also be made using an additional method, similar to the known KES-FB1 tensile/shear testing. For knits, the process may be the same as described above.
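The extensibility measurement above reduces to a simple strain calculation: the change in hanging length under the attached weight, as a percentage of the rest length, recorded separately for the warp, weft, and bias directions. The sketch below assumes made-up readings for a 15 cm swatch under a 100 g weight; none of the numbers come from the patent.

```python
def extension_percent(rest_length_cm, stretched_length_cm):
    """Percent extension of a hanging swatch under a known weight,
    as in a FAST-style extensibility test."""
    return 100.0 * (stretched_length_cm - rest_length_cm) / rest_length_cm

# Illustrative readings for a 15 cm swatch under a 100 g weight
# (values are invented for the example):
for direction, new_length in (("warp", 15.3), ("weft", 15.6), ("bias", 16.2)):
    print(direction, round(extension_percent(15.0, new_length), 1))
```

The bias reading, taken 45 degrees from warp and weft, is the one interpreted as shear in the text above.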
- a virtual test for stretch and shear is next conducted.
- E-FIT creates a 3D mesh object for the swatch under test, made in the dimension and shape of the cloth, to which CLOTHFX applies gravity, collision with other objects, and collision with itself, so that it behaves in accordance with how physical cloth would behave in a real environment. CLOTHFX as applied to a 3D mesh object is therefore accomplished using a set of algorithms based on known computer cloth simulation theory.
- the CLOTHFX algorithms are based on modelling the 3D mesh object's vertices as having mass, and the connections between vertices as springs. In other embodiments, alternative algorithms based on known research can be used to model the mesh as interacting particles.
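The mass-spring abstraction described above can be illustrated at its smallest scale: a single cloth vertex treated as a point mass on a spring, hanging under gravity. This is a generic mass-spring sketch, not CLOTHFX's actual implementation; the integrator (semi-implicit Euler), the damping term, and all constants are assumptions for the example. The settled extension should approach the analytic equilibrium m*g/k.

```python
def hang_under_gravity(mass_kg, stiffness_n_per_m, steps=20000, dt=0.0005, damping=2.0):
    """One cloth vertex modelled as a point mass on a spring, hanging
    under gravity. Integrates with semi-implicit Euler and returns the
    settled extension in metres, which approaches m*g/k."""
    g = 9.81
    x = 0.0   # spring extension from rest length (m)
    v = 0.0   # velocity (m/s)
    for _ in range(steps):
        force = mass_kg * g - stiffness_n_per_m * x - damping * v
        v += (force / mass_kg) * dt   # update velocity from acceleration
        x += v * dt                   # then position from the new velocity
    return x
```

A full cloth solver applies the same update to every vertex of the mesh simultaneously, with springs along each edge connection (plus shear and bend springs), which is what produces stretching, folding, and draping behaviour.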
- E-FIT and CLOTHFX may create a 3D mesh of the same dimensions of the physical swatch, then hang it vertically, and attach a virtual weight digitally.
- CLOTHFX is used to apply cloth simulation algorithms to the 3D mesh. Under the force of gravity, the 3D mesh (now behaving as cloth) is deformed or stretched, and the resultant change in length is measured. The simulation occurs using default values found in the physical tests described above for the stretch/shear resistance preset in all three directions.
- CLOTHFX applies cloth simulation algorithms to the 3D mesh. In order for CLOTHFX to more precisely model a 3D mesh to behave as a particular fabric, regression analysis is used to solve for the presets by repeating virtual tests and adjusting the presets until the results of the physical and virtual tests match.
- Another parameter may comprise bend resistance. This measurement involves the way that fabrics differ from rigid bodies in their ability to bend. The resistance to bend is measured with this parameter.
- a physical test uses a known method for assessment of the drape of fabrics. A circular swatch, for example, around 15 cm in diameter, may be draped over a circular rigid body, with a smaller diameter than the swatch, which is propped up by a stand. The setup is situated under a light, such that the resultant folds cast a shadow. This is called a projection of the drape. The projection is then photographed, and the surface area of the projected surface is calculated.
- a virtual test for bend resistance may be conducted in similar fashion to the physical test. However, instead of measuring the surface area of the projected image (or shadow from the bends), the mesh is flattened within E-FIT. The resultant area of the flattened mesh may be measured and compared with the surface area measured in the physical test. Using regression analysis, the fabric preset for bend resistance may then be adjusted, and the virtual test may be repeated until the surface areas of both tests match, wherein the resultant fabric preset is the final fabric preset for bend resistance.
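The repeat-and-adjust loop used above to match virtual results to physical measurements can be sketched with a one-dimensional solver: run the virtual test with a candidate preset, compare against the physical result, and adjust until they agree. Bisection stands in here for the regression described in the text, and the drape-area function is an invented monotone stand-in for an actual cloth simulation, so every name and value below is hypothetical.

```python
def solve_preset(virtual_test, physical_result, lo, hi, tol=1e-6):
    """Adjust a single fabric preset until the virtual test reproduces
    the physical measurement. Requires virtual_test to be monotone in
    the preset over [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if abs(virtual_test(mid) - physical_result) < tol:
            return mid
        # Keep the half-interval whose endpoints still bracket the target.
        if (virtual_test(mid) < physical_result) == (virtual_test(lo) < physical_result):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Stand-in: projected drape area shrinks as bend resistance grows.
fake_virtual_drape_area = lambda bend_resistance: 120.0 / (1.0 + bend_resistance)
preset = solve_preset(fake_virtual_drape_area, physical_result=80.0, lo=0.0, hi=10.0)
print(round(preset, 3))  # prints 0.5, since 120 / (1 + 0.5) = 80
```

The same loop applies to the stretch/shear presets: only the virtual test and the physical target change.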
- Yet two other presets may be kinetic and static friction.
- Fabric draped on a body can experience damping forces that result from friction with the body's surface and friction with itself or with other fabric.
- a physical test for static friction may be performed by sliding a swatch along a surface, with a known coefficient of static friction. The plane is tilted to find the angle, herein known as the repose angle, at which the swatch begins to slide. The repose angle is used to determine the coefficient of static friction, where the coefficient of static friction equals the tangent of the repose angle for an object sliding down a plane.
- the coefficient of static friction that results from the physical test may be used as the fabric preset, and no further calculation may be required. Therefore, this value is a direct input into CLOTHFX.
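The tilted-plane relationship stated above is a one-line computation: the coefficient of static friction equals the tangent of the repose angle. A minimal sketch (the 30-degree reading is an invented example):

```python
import math

def static_friction_coefficient(repose_angle_degrees):
    """Coefficient of static friction from the tilt angle at which the
    swatch begins to slide: mu_s = tan(repose angle)."""
    return math.tan(math.radians(repose_angle_degrees))

print(round(static_friction_coefficient(30.0), 3))  # tan 30 deg, about 0.577
```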
- Kinetic friction may be measured using a method in which a constant force is applied to a swatch along a plane to find the value of the applied force at which the swatch travels at constant velocity.
- a string is attached to the swatch, which is pulled along a plane with a known coefficient of kinetic friction.
- the pull force applied is measured using off-the-shelf instruments for measuring force.
- the pull force that results in a constant velocity of the swatch along the plane is multiplied by the cosine of the vertical angle of the string used to pull the swatch with respect to the plane.
- the coefficient of kinetic friction is equal to the force applied multiplied by the cosine of the angle from the plane and then divided by the normal force.
- the coefficient of kinetic friction may be used as the fabric preset and no further calculation may be required. Therefore, this value may be a direct input into CLOTHFX.
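The kinetic-friction formula above (horizontal component of the pull force divided by the normal force) is likewise a direct computation. The sketch follows the formula as stated in the text, taking the normal force as a measured input; the example numbers are invented.

```python
import math

def kinetic_friction_coefficient(pull_force_n, angle_degrees, normal_force_n):
    """mu_k from the constant-velocity pull test: the pull force times
    the cosine of the string's angle from the plane, divided by the
    normal force."""
    return pull_force_n * math.cos(math.radians(angle_degrees)) / normal_force_n

# Example: a 2.0 N pull parallel to the plane, 5.0 N normal force.
print(kinetic_friction_coefficient(2.0, 0.0, 5.0))  # prints 0.4
```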
- Yet another preset parameter is the surface density of the cloth.
- a swatch of cloth of the same dimensions can have very different weights, depending on the type of textile used to build the cloth and the density of threads used to weave or knit.
- in the surface density test, the weight of the cloth is measured.
- a standard scale is used to measure the weight of a swatch. The weight is divided by the surface area of the swatch to arrive at the surface density.
- the physical test may be a direct input into CLOTHFX as a fabric preset.
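The surface density computation described above, weight divided by swatch area, can be sketched directly (the swatch mass is an invented example value):

```python
def surface_density_gsm(swatch_mass_g, width_cm, height_cm):
    """Surface density in grams per square metre from a weighed
    rectangular swatch."""
    area_m2 = (width_cm / 100.0) * (height_cm / 100.0)
    return swatch_mass_g / area_m2

# A 15 cm x 15 cm swatch weighing 3.6 g:
print(round(surface_density_gsm(3.6, 15.0, 15.0), 1))  # prints 160.0 (g/m^2)
```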
- Cloth will drape differently depending on how it falls through a fluid, such as air, and how it reacts with air as it moves in space.
- the resistance to this drag can vary between fabrics.
- the fabric presets 181 may become part of a library of virtual fabrics in the first data storage 110 , to be applied when creating virtual apparel made of specific fabric, removing the need to re-test the fabric with new garments made of the same material.
- step 354 comprises preparing digital pattern 180 of the production sample garment 59, either by converting digital pattern 57 from another format, digitizing or scanning paper pattern 51, or creating it using information contained in technical pack 54.
- Digital pattern 180 may be represented in TUKACAD file format located in data storage 110 .
- TUKACAD's file format stores the digital pattern as a collection of points and hermite splines that are interpolated between points. Each point has an attribute that can govern the shape and/or interpolation of the connected hermite splines.
- Other types of CAD software may use alternative types of splines or interpolation methods, however since all digital patterns can be converted into TUKACAD's format, all methods for creating and storing data points in a pattern are supported.
- digital pattern 180 may be made for each particular style in a base size.
- a base size refers to a sample size of a garment, or size that is used as a standard for a particular garment. Larger and smaller sizes may then be created differentially from this sample size by modifying the digital pattern 180, using a process called grading. The amounts by which each point in the pattern is to be moved outward or inward are contained in grading rules 53.
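Grading as described above amounts to moving each pattern point by a per-point offset for every size step away from the base size. A minimal sketch, assuming a simple linear rule (one fixed offset per point per size step); real grade rules 53 can vary per size break, and all coordinates and offsets below are invented:

```python
def grade_pattern(base_points, grade_rule, steps):
    """Grow or shrink a pattern piece by moving each point along its
    grade-rule offset. `steps` is the number of sizes up (positive) or
    down (negative) from the base size."""
    return [(x + steps * dx, y + steps * dy)
            for (x, y), (dx, dy) in zip(base_points, grade_rule)]

base = [(0.0, 0.0), (50.0, 0.0), (50.0, 70.0), (0.0, 70.0)]   # cm, base size
rule = [(-1.0, 0.0), (1.0, 0.0), (1.0, 1.5), (-1.0, 1.5)]     # cm per size step
print(grade_pattern(base, rule, 2))   # two sizes up from the base
```

Grading down is the same call with a negative `steps`, which is why a single base-size pattern plus its rules can generate the whole size run.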
- the next step refers to converting the two dimensional pattern pieces into 3D meshes.
- the digital pattern may be modified with construction information useful for conversion of the 2D pattern into a 3D virtual garment 183 .
- Pattern pieces may need to be adjusted to reduce the complexity of some garment features (e.g., removing extra folds, creating finished pieces for pockets, plackets, etc.).
- Some values used for physical garment production that are not required for virtual apparel also need to be removed (e.g., fabric shrinkage, sewing allowances, etc.). All of these procedures are made to digital pattern 180 in the TUKACAD software contained in apparel product development software 114 . To further explain, the following procedures may or may not be applied to one, more, or all of the pieces of a garment depending on the garment type.
- the digital pattern 180 piece quantity may be adjusted. A few pieces that may otherwise be necessary for production become irrelevant for 3D virtual apparel, and may be removed from the digital pattern 180 .
- a sewing allowance is an extension of the perimeter of a piece that adds additional fabric necessary for physically sewing a garment. This allowance is not necessary for 3D virtual apparel and may be removed from digital pattern 180 .
- any shrinkage allowance may be removed from digital pattern 180 .
- Digital pattern pieces are often created slightly larger in anticipation that once the fabric is washed, the garment will shrink back to the appropriate dimension. Simulation of shrinkage may not be necessary, and therefore, any allowances for shrinkage in the digital pattern 180 may be removed.
- variable hem lines may be removed from digital pattern 180 .
- extra fabric is added to the bottom of the pant leg so that a tailor can adjust the hem line. This additional fabric is not necessary for 3D virtual apparel and may be removed from digital pattern 180 .
- sewing lines may be added (for pockets, flaps, etc.) to digital pattern 180 .
- a drill hole may be placed in a physical garment piece.
- a sewing line may be drawn digitally to facilitate adding of pockets, flaps, and other features to 3D virtual garment 183 .
- a fabric code may be assigned to each piece of the digital pattern 180 .
- the piece that represents the front of a t-shirt may be assigned a fabric code named cotton, whereas the piece that represents the lining of the t-shirt may be given a fabric code that represents an elastic material type, such as a polyester-spandex blend.
- stitch segments may be assigned in the digital pattern 180 . Segments may be defined so that they can be sewn in E-FIT. Marks may be added to the digital pattern 180 to define the starting and ending point of the segments that will be sewn.
- a size may be selected for the fit model avatar 173 (which was created from scan data or measure data from step 58 ). If digital pattern 180 has been graded into several sizes, the base size may be selected to fit the fit model avatar 173 .
- fold lines may be assigned in digital pattern 180 .
- Pieces that are folded (e.g., lapels) may require fold lines.
- Tenth, pattern pieces may be rotated in digital pattern 180 .
- E-FIT may use the orientation of the pattern pieces as a starting point for making transformations to the 3D mesh. Arranging the digital pattern pieces into a set orientation may ease this process.
- internal lines may be adjusted in digital pattern 180 . Because the 2D spline pattern pieces are eventually meshed for 3D software, some adjustment of the splines may be necessary to avoid errors in E-FIT. For instance, a line cannot be meshed, so if an internal pattern line extends past the outer boundary of the pattern piece, the external part of that line may need to be removed from digital pattern 180 .
- the next step 356 may be to convert the digital pattern into a 3D mesh.
- a 3D mesh, or polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object in computer graphics.
- the mesh is a collection of several closed surfaces.
- a mesh is a collection of numbers organized into several matrices. More simply stated in a geometric description, a mesh is made of points that are joined together with segments and surfaced by polygons.
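The matrix description above can be made concrete with a minimal example. This is an illustrative sketch, not the CLOTHFX mesh format: one array of vertex positions and one array of faces, where each face indexes into the vertex list.

```python
# A unit square surfaced by two triangles: the smallest useful
# polygon mesh. Vertices are (x, y, z) points; faces index into them.

vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]
faces = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing edge 0-2

def edges(faces):
    """Collect the unique edges implied by the face list."""
    es = set()
    for f in faces:
        for i in range(len(f)):
            a, b = f[i], f[(i + 1) % len(f)]
            es.add((min(a, b), max(a, b)))
    return es

# 4 vertices, 2 faces, 5 unique edges (the shared diagonal counted once).
```

The same three components — points, segments, and polygons — are what the conversion step produces from each 2D pattern piece.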
- the digital pattern 180 may now be imported into E-FIT.
- the CLOTHFX plug-in in E-FIT may convert the pattern pieces into 3D mesh objects. Essentially, the 2D splines are surfaced to create a 3D mesh.
- the digital pattern 180 is now a 3D mesh.
- the 3D mesh is then further defined to have components such as pieces and segments, which later get defined with additional attributes.
- E-FIT interprets the fabric code for each piece of digital pattern 180 and assigns the corresponding fabric presets.
- the piece of digital pattern 180 that represents front of a t-shirt may have been assigned a material code for cotton.
- E-FIT interprets this code and retrieves the fabric presets for cotton from its fabric library of presets.
- E-FIT may apply 3D piece placement, orientation, and curvature in the 3D pattern.
- E-FIT assigns sewing instructions.
- E-FIT matches each particular segment of a 3D mesh corresponding to a particular piece to another segment on the same 3D mesh, or to another 3D piece, in accordance with how the garment is supposed to be sewn together.
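The segment matching just described can be sketched as data. The representation below is invented for illustration and is not E-FIT's actual data model: each segment is identified by a piece name plus start and end marks, and sewing instructions pair segments together.

```python
# Hypothetical sewing instructions: pairs of segments, each segment
# identified as (piece, start_mark, end_mark). Piece and mark names
# are illustrative.
sewing_instructions = [
    # left side seam: front piece edge sewn to back piece edge
    (("front", "A", "B"), ("back", "A", "B")),
    # shoulder seam on the same garment
    (("front", "C", "D"), ("back", "C", "D")),
]

def partner(segment, instructions):
    """Return the segment a given segment is to be sewn to, if any."""
    for a, b in instructions:
        if a == segment:
            return b
        if b == segment:
            return a
    return None
```

A segment may equally be matched to another segment on the same piece (e.g., a dart), which this pair representation also accommodates.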
- FIG. 9 is a diagram illustrating an exemplary 3D piece placement and matching of the segments using E-FIT.
- E-FIT may virtually sew and drape the 3D mesh on the fit model avatar 173 .
- Fit model avatar 173 is a virtual representation of the actual physical fit model, wherein the exact body measurements 164 may have been measured and used to create a virtual body in the base/sample size, or the physical fit model has been scanned, and the scanned data is used to create fit model avatar 173 in the base/sample size. If fit model avatar 173 is created from scanning a physical fit model, the scanning process may be similar to the process described below with respect to an avatar.
- Sewing and draping may be completed using functions provided by CLOTHFX and native E-FIT according to the sewing instructions assigned above.
- garments have lining and/or layers of material. In such cases, layers may be placed, stitched, and draped in a specific order.
- the culmination of the simulation results in a drape on fit model avatar 173 that may be identical to the drape of a real garment on a real fit model.
- a screenshot 2050 using CLOTHFX and native E-FIT is shown during the sewing and draping process according to one embodiment.
- In step 366 , animation is created for the 3D virtual garment 183 .
- Fit model avatar 173 may have a predetermined motion or animation already applied.
- the predetermined motion may simply be a series of frames wherein the position of the fit model avatar 173 is slightly different, and when played out appears to be walking. Then, to simulate animation of the garment being worn, the above-described sewing and draping is performed for each frame. In one embodiment, thirty frames is equivalent to one second of animation.
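The frame-by-frame draping loop described above can be sketched as follows. The `pose_avatar` and `drape` callables are placeholders for the avatar posing and the sewing/draping simulation; only the 30-frames-per-second structure comes from the text.

```python
# Sketch of per-frame garment animation: at 30 frames per second,
# each frame poses the avatar slightly differently and the garment
# is re-draped against that pose.
FPS = 30

def animate(seconds, pose_avatar, drape):
    frames = []
    for frame in range(int(seconds * FPS)):
        body = pose_avatar(frame)   # avatar position at this frame
        frames.append(drape(body))  # garment draped on that pose
    return frames

# One second of animation yields 30 draped frames.
result = animate(1, lambda frame: frame, lambda body: body)
```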
- a presentation may be created for the retailer 50 to be approved and later presented to consumer 20 .
- Making an object in 3D appear like a physical object may often involve not only duplicating the look in 3D software or interactive rendering software, but may also require visual output hardware (such as a monitor or display) to accurately replicate the appearance of the object in reference to a real object.
- E-FIT may apply a texture.
- 3DS MAX is used as the 3D engine for E-FIT. Since 3DS MAX refers to “textures” as “material textures,” the term “textures” will be referred to as such herein. However, it is understood by those skilled in the art that the term “texture” is used for an embodiment that does not include using 3DS MAX, but rather some other 3D software, such as PHOTOSHOP available from Adobe Systems Incorporated, 345 Park Avenue, San Jose, Calif. 95110-2704.
- a material texture 188 contains data that may be assigned to the surface or faces of a 3D mesh so that it appears a certain way when rendered. Material textures 188 affect the color, glossiness, opacity, and the like, of the surface of a 3D mesh.
- these material textures 188 may not be photometric, in the sense that they may not simulate the interaction of light or photons with the material textures 188 accurately.
- a user may use E-FIT's material editor built-in functions to further create the illusion of the garment's appearance. More specifically, the user of E-FIT may work to simulate the correct appearance of material textures by adjusting and applying various material texture properties or texture maps that model the color, roughness, light reflection, opacity, and other visual characteristics.
- material textures 188 may be applied to the surface of each 3D mesh corresponding to each pattern piece. These material textures 188 realistically simulate various attributes that make up the appearance of production sample garment 59 . The following list of attributes may be modelled:
- Certain attributes may be set by the retailer. For example, a retailer may send a color swatch with a specific red-green-blue (RGB) value or PANTONE color value. In instances where the appearance is dependent on the lighting conditions, the attributes may be adjusted at the retailer's discretion.
- Prints, images, logos, and other maps can be adjusted in size, position and orientation.
- the retailer may provide information (included in technical pack 54 ) on the placement (position) and size of these maps.
- a user loads these maps and adjusts them accordingly.
- stitch textures, a component of material texture 188 are added to give the appearance of actual stitching threads.
- Completing the above steps results in the completion of 3D virtual garment 183 and fit model drape 186 , which are then stored in data storage 110 .
- media, such as images and movies, may be created as original sample 3D viewer data 187 .
- FIG. 11 is an example of such rendering using E-FIT.
- a fit analysis process may be executed which results in creating original sample fit data 18 .
- An avatar may be defined as a 3D mesh constructed to have a similar shape as the consumer body 22 or fit model body 151 it was intended to model, and may or may not be animated.
- Fit-model avatar 173 may be created to drape 3D virtual garment 183 on the avatar to produce fit model drape 186 , by way of system 112 .
- consumer avatar object 171 may be used for simulating the drape of production sample garment 59 on a consumer's body 22 , resulting in consumer drape 1102 .
- the methods for any avatar, whether it be creating consumer avatar 171 or fit model avatar 173 are interchangeable and are described below.
- consumer avatar 171 or fit-model avatar 173 can be generated using three types of procedures, all of which are well-known to one skilled in the art.
- the first procedure utilizes a technique in which one mesh is conformed to another.
- the second procedure utilizes a technique called morphing, where one mesh is morphed to another.
- a third technique involves manually moving vertices from a mesh to another location, which is often called digital 3D sculpting. With respect to creating an avatar, these techniques involve moving vertices from one position to another.
- the conforming and morphing methods are discussed in more detail herein. These two techniques may have disadvantages and advantages over each other and therefore are used in varying situations. Described next is one embodiment of using each of these techniques. However, any technique not discussed, but well known to those skilled in the art could theoretically be used.
- avatar software application 904 begins creating an avatar by first accepting some input data on the consumer or fit-model. There may be many categories of input data, relating to any type of information on a human being or population of human beings—e.g., demographic information. For example, one may have data on the distribution of fat on the human body. Another example is data describing the amount of heat energy emanating from a body. A third example may be the color of the skin, eyes, and hair, and a fourth example may be data on the shape of the body. Since there are many types of information that can describe a human being, it is worthwhile to categorize the information or data.
- the following three categories of data may be used to create an avatar: (1) body shape data, (2) body appearance/cosmetic data, and (3) body function data, where body may be defined to include all or any parts of the body, and data may be qualitative and/or quantitative, and stored in any form or format.
- body may include the torso, head, face, hands, fingers, fingernails, skin, hair, organs, bones, etc., or it may only include the torso.
- Body shape data refers to data that can be used or interpreted to understand and reproduce the accurate shape of a human body subject.
- Body appearance/cosmetic data refers to data that helps reproduce the appearance of a human subject (e.g. eye color, hair style, skin texture).
- Body function data provides information on how the human subject's body functions (e.g., the systems of the body, such as lymphatic, endocrine, skeletal, immune, and others). It may aid to have body function data on movement (e.g., how the body's limbs, torso, head, skeleton, muscles, etc., respond to movement). Such data, for example, and not by way of limitation, may be captured using a generic motion capture technology for capturing body movement data.
- each data category may have many different types of data in which information relating to that category is stored. The various data types for each data category are described below.
- for body shape data, there may be three data types in which information on the shape of a human subject can be stored, provided, or retrieved for use in creating an avatar.
- the input data may be one of the following: (1) raw body scan data 172 , (2) body measurements and other shape data 176 , and (3) photographs 174 .
- While photographs can also be a raw body scan data type, photographs taken by some other mechanism (e.g., a webcam or a single camera) may also be included.
- Raw body scan data 172 refers to raw output data from any type of scanner, whether it be generic body scanner 149 (e.g., a point cloud originating from RF data, structured light data, lasers, mirrors, or any other type of raw data output from these scanners or other yet undiscovered types of scanners). Moreover, raw body scan data can originate from stereophotogrammetry body scanner 152 .
- Body measurements and other shape data 176 may refer to both manual measurements taken of consumer body 22 either by the consumer or by a third-party, extracted body measurements from raw scan data 172 , statistically derived measurements from sizing survey data 178 or avatar statistical data 179 , and/or any combination thereof.
- Photographs 174 refer to supplemental photographs of the body from different angles, which may or may not include the other parts of the body (e.g. face, hands, etc). For example a user may take a photograph of the face of consumer body 22 , and submit the photograph online, by which the system may map the person's face to consumer avatar object 171 . Photographs 174 may not originate from a scanner, but rather may originate from a web cam, a single digital camera and may be user submitted. Photographs 174 shall not be confused with photographs originating from raw body scan data 172 , especially in the case of the method of stereophotogrammetry as described below.
- a combination of data types may be used to help supplement data or data precision that may be lacking. Therefore, in one embodiment, a combination of data types may be used to further increase the precision of an avatar.
- Sizing survey data 178 refers to body measurement and shape data from a population of human beings.
- the Size USA survey provided by TC2, which contains raw scan data or extracted body measurements from over 10,000 people, can be used.
- Such data may represent one or many populations with various demographic characteristics. This data may then be searchable or queried by a specific demographic or set of demographics. Additional information collected on the consumer or fit model, such as age, ethnicity, sex, residence, etc., may then be used to match the consumer to a specific population that is represented in sizing survey data.
- the body measurements or other shape data for that population may be used in part or in entirety to create the avatar of the consumer or fit model.
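The demographic matching described above can be sketched as a query over survey records. The records, field names, and values below are invented for illustration; a real sizing survey carries far more measurements and demographic fields.

```python
# Toy sizing-survey records (hypothetical data, measurements in cm).
survey = [
    {"sex": "F", "age_group": "25-34", "chest": 88.0, "waist": 71.0},
    {"sex": "F", "age_group": "25-34", "chest": 92.0, "waist": 75.0},
    {"sex": "M", "age_group": "25-34", "chest": 100.0, "waist": 84.0},
]

def estimate(demographics, records, field):
    """Average a measurement over the population matching the
    consumer's demographics; None if no records match."""
    matches = [r[field] for r in records
               if all(r[k] == v for k, v in demographics.items())]
    return sum(matches) / len(matches) if matches else None

# Estimate a missing waist measurement from the matching population.
waist = estimate({"sex": "F", "age_group": "25-34"}, survey, "waist")
```

The estimated measurement can then be used in part or in entirety, alongside any directly measured values, to shape the avatar.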
- statistics on body measurements and shape can be gathered and stored as avatar statistical data 179 and may be used for statistical interpretation and later mined for trends that can further be used to constrain other estimates of the shape of the body, or further enhance those estimates.
- Base avatar 158 is a template avatar from which all other avatars can be made. Depending on the data type for the body shape category of data, the base avatar 158 can be morphed or conformed into the shape of consumer body 22 or fit model body 151
- a base avatar 158 may be created using avatar software application 904 in avatar processing system 160 .
- avatar software application 904 may comprise built-in tools available in 3DS MAX or any 3D software that allows a user to create, edit, and store mesh objects.
- Using 3DS MAX, a 3D artist may sculpt the arms, legs, torso, and other body parts. The 3D artist may then join all the body parts together to form a single mesh of the base avatar 158 .
- the base avatar 158 is rigged.
- a bone structure (or biped) may be inserted into the mesh using 3DS MAX tools, and may be sized and scaled appropriately so that the bone structure fits within the mesh properly. This process is known to those skilled in the art as rigging.
- the bone structure may be attached to the vertices on base avatar 158 mesh so that when the bones move, base avatar 158 will move in accordance with how a human body typically moves.
- This process is known to those skilled in the art as skinning, and is not to be confused with putting skin on, which falls into the category of texturing.
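Skinning as described above is commonly implemented as linear blend skinning: each vertex carries weights over the bones, and its deformed position is the weight-blended result of each bone's transform. The sketch below is a simplified illustration of that idea, using 2D translations in place of full bone matrices; it is not 3DS MAX's skin modifier.

```python
# Simplified linear blend skinning: a vertex influenced by several
# bones moves to the weighted blend of each bone's transformed copy.

def skin_vertex(vertex, bone_offsets, weights):
    """Blend per-bone translated copies of `vertex` by `weights`
    (weights are assumed to sum to 1)."""
    x, y = vertex
    bx = sum(w * (x + ox) for w, (ox, oy) in zip(weights, bone_offsets))
    by = sum(w * (y + oy) for w, (ox, oy) in zip(weights, bone_offsets))
    return (bx, by)

# An elbow-area vertex influenced equally by two arm bones:
upper_arm = (0.0, 0.0)   # this bone does not move
lower_arm = (2.0, 1.0)   # this bone translates its vertices
pos = skin_vertex((1.0, 1.0), [upper_arm, lower_arm], [0.5, 0.5])
```

When a bone moves, every vertex weighted to it follows, which is why a rigged and skinned base avatar deforms like a human body when posed.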
- a file that holds the skinning data may be saved in avatar processing system 160 in avatar data storage 170 .
- Base avatars 158 can be created for male and females for any typical sample size (i.e., men's size 40, women's size 8, etc.). From these base avatars 158 made from sample sizes, new avatars can be made in any size and shape.
- the use of the conforming or morphing techniques is dependent on the type of data received on consumer body 22 or fit model body 151 . If the data type is raw scan data 172 , then a mesh is created from the raw scan data, and the base avatar 158 's mesh is conformed to it. In another embodiment, the received data type may be body measurements and other shape data 176 . In such a case, the morphing technique may be used. In this case, the base avatar 158 mesh is morphed. The following discussion relates to the case where the data type is raw scan data 172 .
- stereophotogrammetry system 150 may comprise any of these prior-art types of body scanning technologies, or alternatively, stereophotogrammetry system 150 may include stereophotogrammetry body scan booth 152 described below. Stereophotogrammetry system 150 may also comprise any body scanning software for processing raw scan data to create 3D meshes or avatars.
- stereophotogrammetry system 150 may include body scanning software 154 described below.
- companies that produce some of these types of prior art scanners include those available from Unique, 133 Troop Avenue, Dartmouth, NS, B3B 2A7, Canada, TC2/Imagetwin, located at 5651 Dillard Dr., Cary, N.C. 27518, Telmat Industrie, 6, rue de l'Industrie—B. P. 130—Soultz, 68503 GUEBWILLER Cedex (France), and/or Human Solutions, GmbH, Europaallee 10, 67657 Kaiserslautern, Germany.
- stereophotogrammetry is the practice of determining the geometric properties of objects from photographic images.
- the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, if the scale of the image is known.
- a more sophisticated technique involves estimating the three-dimensional coordinates of points on an object. These are determined by measurements made in two or more photographic images taken from different positions. Common points are identified on each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the three-dimensional location of the point. More sophisticated algorithms can exploit other information about the scene that is known a priori, for example symmetries, in some cases allowing reconstructions of 3D coordinates from only one camera position.
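The ray-intersection (triangulation) step above can be sketched directly. The sketch below finds the midpoint of closest approach of two lines of sight, which coincides with their intersection when the rays meet exactly; camera positions and directions are invented toy values.

```python
# Triangulation of a 3D point from two camera rays: each ray is a
# camera center plus a line-of-sight direction toward the common
# image point.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def along(p, d, t):
    return tuple(pi + t * di for pi, di in zip(p, d))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach of rays p1+t*d1 and p2+s*d2."""
    w = sub(p2, p1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    t = (d * c - b * e) / denom
    s = (b * d - a * e) / denom
    q1, q2 = along(p1, d1, t), along(p2, d2, s)
    return tuple((x + y) / 2 for x, y in zip(q1, q2))

# Two cameras sighting the same point (1, 2, 5):
cam1, ray1 = (0.0, 0.0, 0.0), (1.0, 2.0, 5.0)
cam2, ray2 = (4.0, 0.0, 0.0), (-3.0, 2.0, 5.0)
point = triangulate(cam1, ray1, cam2, ray2)
```

With more than two cameras, each additional ray adds constraints, and the point is taken as the best fit to all of them.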
- Algorithms for photogrammetry typically express the problem as that of minimizing the sum of the squares of a set of errors. This minimization is known as bundle adjustment and is often performed using the Levenberg-Marquardt algorithm.
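The Levenberg-Marquardt idea mentioned above — minimizing a sum of squared errors with a damped step — can be shown on a one-parameter toy problem. This is not a bundle adjuster (which optimizes many camera and point parameters against reprojection errors); it is the same update rule applied to fitting y = exp(a·x), with invented data.

```python
import math

def levenberg_marquardt(xs, ys, a=0.0, lam=1e-3, iters=50):
    """Fit y = exp(a*x) by damped least squares (one parameter)."""
    for _ in range(iters):
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]  # residuals
        J = [x * math.exp(a * x) for x in xs]              # d(residual)/da
        g = sum(j * ri for j, ri in zip(J, r))             # J^T r
        h = sum(j * j for j in J)                          # J^T J
        step = -g / (h + lam)                              # damped step
        old_cost = sum(ri * ri for ri in r)
        new_cost = sum((math.exp((a + step) * x) - y) ** 2
                       for x, y in zip(xs, ys))
        if new_cost <= old_cost:
            a += step
            lam *= 0.5    # success: behave more like Gauss-Newton
        else:
            lam *= 10.0   # failure: damp more, behave like gradient descent
    return a

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * x) for x in xs]  # data generated with a = 0.5
a = levenberg_marquardt(xs, ys)       # recovers a near 0.5
```

The damping parameter `lam` is what distinguishes Levenberg-Marquardt from plain Gauss-Newton: it shrinks the step when the quadratic model is unreliable.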
- the stereophotogrammetry method may have advantages in cost and features that other methods cannot achieve.
- FIG. 13 a diagrammatic right perspective view of a stereophotogrammetry body scan booth 152 , and scan booth computing device 153 with body scanning software 154 , is shown according to one embodiment.
- several cameras 800 , for example twenty, may be positioned around the human body and then simultaneously triggered to acquire multiple digital photographs.
- the resultant photographs may then be transmitted to scan booth computing device 153 , which contains body scanner software 154 .
- body scanner software 154 may trigger cameras 800 and acquire photographs from cameras 800 .
- the body scanner software 154 may be used to mask and remove background colors, and may further be used to implement a process called segmentation to remove any object(s) other than the subject of interest.
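The masking step can be sketched as simple color-distance thresholding against a known background color. This is a toy illustration with invented pixel values and threshold; the scanner software's actual masking and segmentation are more involved.

```python
# Toy background masking (chroma keying): pixels close to a known
# background color are marked as background, leaving the subject.

def mask_background(pixels, bg, tol=30):
    """Return a parallel mask: True where the pixel is background."""
    def close(p, q):
        return all(abs(a - b) <= tol for a, b in zip(p, q))
    return [[close(px, bg) for px in row] for row in pixels]

green = (0, 200, 0)  # assumed uniform backdrop color
image = [
    [(0, 205, 0), (180, 140, 120)],  # background, skin tone
    [(10, 195, 5), (0, 190, 10)],    # background, background
]
mask = mask_background(image, green)
```

A uniform backdrop (as described for the booth below) keeps the background pixels tightly clustered in color, which is what makes this kind of thresholding reliable.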
- Body scanner software 154 performs many of the previously mentioned steps using a program originally written using MATLAB software, available from MathWorks, Inc., 3 Apple Hill Drive, Natick, Mass. 01760-2098. However, those skilled in the art would recognize that many different software applications may perform similar functions. For example, the software may be written using the C++ programming language to perform the same functions implemented in the MATLAB software.
- 3D mesh 159 is then imported into 3DS MAX, wherein the base avatar 158 is morphed to the dimensions and shape of 3D mesh 159 .
- a flow diagram illustrates steps performed for scanning consumer body 22 or fit model body 151 using the stereophotogrammetry method of body scanning, as well as the steps for converting the output of this body scanning method into a 3D mesh.
- the camera 800 is assembled. Any standard charge coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) camera 800 can be used.
- a CMOS 2-megapixel chip is used in order to maximize resolution while minimizing cost, such as that provided in the QUICKCAM 600 available from Logitech, Inc., 6505 Kaiser Dr., Fremont, Calif. 94555 USA.
- instead of the QUICKCAM 600, any CCD or CMOS commercially available digital camera, webcam, professional camera, industrial camera, or security camera could be used.
- the aforementioned QUICKCAM 600 has a 2 Megapixel sized CMOS chip providing 30 frames/second over a universal serial bus (USB) 2.0 connection.
- the camera 800 may be disassembled to retrieve only the circuit board with the CMOS chip attached and the USB still connected.
- any megapixel-size chip with any frame rate and other connections (e.g., Firewire) could be used.
- additional cameras could be added, a slightly rotating pedestal could be used, and/or mirrors could be used in place of some cameras.
- the method described herein was selected due to accuracy and cost-effectiveness.
- a wide angle lens may be attached to a spacer, attached to a camera enclosure, which encloses the circuit board to which the CMOS chip is attached.
- a wide field-of-view lens may be used in this embodiment so that the camera 800 can be positioned as close to the consumer body 22 or fit model body 151 as possible while keeping the subject within the field of view. Any distortion due to the lens may be corrected for in 3D SOM PRO software using its lens calibration tools.
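The kind of distortion a lens calibration corrects can be illustrated with a single-coefficient radial model. This is a hedged sketch, not 3D SOM PRO's calibration: the coefficient value is invented, and real pipelines estimate several coefficients and invert the model iteratively.

```python
# One-coefficient radial distortion model: a wide-angle lens bends
# image points toward or away from the image center as a function of
# their distance from it. Coordinates are normalized so the image
# center is (0, 0); k1 is an illustrative value.

def radial_model(x, y, k1):
    """Apply the radial factor (1 + k1*r^2) to an image point."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return (x * factor, y * factor)

# A point near the image edge moves much more than one near the center:
edge = radial_model(0.8, 0.6, k1=-0.1)      # pulled toward center
center = radial_model(0.01, 0.0, k1=-0.1)   # nearly unchanged
```

Calibrating each camera against the lens calibration map amounts to estimating coefficients like `k1` so the distortion can be undone before triangulation.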
- a 2.9-8.2 mm lens provided by Computar, Inc., 55 Mall Drive, Commack, N.Y. 11725, may be used.
- a plastic project enclosure (for example, 3 × 2 × 1 inches), provided by RadioShack, Inc., may be used to house the camera 800 .
- a 3-5 mm hole may then be cut open to make the CMOS chip visible.
- a 5 mm spacer with threads may be attached over the hole and the lens is screwed into the spacer.
- Steps 400 - 404 may be repeated for each camera to be used.
- stereophotogrammetry body scan booth 152 is assembled.
- Standard zero structures 910 may be used to assemble the structure, for example, a 7 ft ⁇ 7 ft ⁇ 7 ft sized stereophotogrammetry body scan booth 152 .
- a matte 920 with a specific pattern which may be provided by 3D SOM, Inc., may be placed in the center of the floor 915 . This is where the consumer body 22 or fit model body 151 stands.
- Cameras 800 and lights may be fixed to cross beams 912 that attach to the four pillars of the structure 910 along the perimeter.
- Electrical pipe may be built around the structure on the inside and outside of the zero pillars at the very top of the body scanning booth 152 . Fabric may be hooked to the pipes to create drapes to enclose the structure from outside light, and to include a fixed color background behind the subject from all angles.
- Pre-fabricated structures could be used in a similar manner, where modifications may be made depending on the type of structure.
- the camera array may be created.
- 20-50 cameras 800 may be positioned along the walls of the stereophotogrammetry body scan booth 152 .
- At least fifteen cameras 800 may be positioned at approximately eye level and distributed equally around the consumer body 22 or fit model body 151 .
- any configuration could be used.
- At least an additional four cameras may be positioned at two feet higher than eye-level and distributed around consumer body 22 or fit model body 151 .
- the last camera 800 may be positioned in an aerial view above the head of consumer body 22 or fit model body 151 .
- the positioning of all 20-50 cameras can vary depending on the user's choice, and is not limited to this configuration.
- the matte and the entire subject may be visible in the field of view in all configurations, so as to take advantage of the features of 3D SOM PRO Software.
- the cameras 800 are connected in an array.
- Cameras 800 may be connected to USB-powered hubs in one embodiment. All hubs may be connected to a computer with USB ports. In other embodiments, the cameras may be connected via Bluetooth, Ethernet, wifi, or the like.
- stereophotogrammetry body scanning software 154 may also contain executable instructions to perform one or more of the following steps 412 - 418 described below.
- In step 412 , the video stream of consumer body 22 or fit model body 151 is acquired.
- MATLAB software, which may be one of the software components of stereophotogrammetry body scanning software 154 , is available from MathWorks, Inc., 3 Apple Hill Drive, Natick, Mass. 01760-2098, and may be used to read the video streams from the cameras.
- the image acquisition toolbox of MATLAB may be used to start and view all 20 video streams.
- Those skilled in the art would recognize that a variety of software programs may be used to perform the functions carried out by MATLAB.
- the images may be acquired from the video stream, wherein the main subject is consumer body 22 or fit model body 151 , who may be placed in the middle of the stereophotogrammetry body scan booth 152 to stand on a matte, such that the body is in the field of view of the cameras.
- the cameras are triggered to acquire images or single frames from each camera 800 .
- a manual trigger may be used with cameras that do not support hardware triggering.
- hardware triggering can be used to speed up image acquisition to prevent any lag time between cameras.
- MATLAB's image processing toolbox may be used to mask images, save them in any format that can be read by 3D SOM PRO, and send them to 3D SOM PRO Software.
- Software written using MATLAB may be compiled into a standalone executable file to perform this step.
- In step 418 , 3D mesh 159 is created using 3D SOM's software.
- the number of cameras 800 may be arbitrary. By way of example, and not by way of limitation, 20 or more, or less, cameras 800 may be used. Further, the position of the cameras 800 may be more or less arbitrary in one embodiment.
- a position calibration map 820 may be used for helping the 3D SOM PRO software determine the position of the cameras 800 in three dimensional space.
- the position calibration map 820 may comprise a flat annular component having radial spaced black circles 822 printed thereon. Depending on the position of each camera 800 , the black circles 822 are captured by each camera 800 with a different distortion, which 3D SOM PRO, or other software used to calibration position, is capable of interpreting to indicate the position of each camera 800 .
- the black circles 822 may preferably be of varying sizes.
- any number of various types of cameras 800 or sensors may be used.
- webcams may be used because they are less expensive and may provide relatively higher resolution with CMOS sensors at the same price.
- more expensive digital cameras with CCD sensors with a broader color ranges may be used.
- any type of lens may be used with the cameras 800 .
- the lenses are capable of having various focal lengths.
- the types of lenses may be defined by variations in focal length, diameter, and/or magnification.
- a lens calibration map 830 having black circles 832 similar to those on the position calibration map 820 may be used.
- Each camera 800 may be calibrated for its type of lens by pointing each camera at the lens calibration map 830 at a constant distance and angle, taking pictures at various zooms.
- the 3D SOM PRO software may then take the calibration images captured by each of the cameras 800 and correct for the varying cameras 800 and/or lens types.
- the stereophotogrammetry system 152 may comprise an arbitrary number of two or more cameras 800 for taking independent photographs of a physical object; a position calibration map 820 for providing three dimensional position data for the two or more cameras 800 ; each camera 800 having a lens, wherein each lens has a type, wherein two or more of the lenses are capable of being the same type; a lens calibration map 830 for each type of lens, wherein the lens calibration map is capable of correcting for non-linearity within the lens; a first set of instructions capable of execution on a processor 153 to acquire a set of video streams from the two or more cameras 800 ; a second set of instructions capable of execution on a processor 153 to trigger the two or more cameras 800 substantially simultaneously to produce an image from each camera 800 ; a third set of instructions capable of execution on a processor 154 to download and save the image from each camera 800 ; a fourth set of instructions capable of execution on a processor 153 to
- the system 153 may have a variable number of cameras 800 .
- the system 152 may include variable positions of the cameras 800 .
- the position calibration map 820 may be modifiable according to the number and position of the cameras 800 .
- the lens calibration map 830 may be modifiable according to the types of lenses on the cameras 800 .
- the size of the whole stereophotogrammetry system 154 may also be adjustable.
- the first, second, third and fourth software instructions may also comprise image acquisition and processing software instructions, which may all be embodied in the body scanner software 154 .
- the image acquisition and processing software instructions may comprise MATLAB software instructions in one embodiment.
- the image acquisition and processing software instructions may comprise LABVIEW software instructions in another embodiment.
- the download of the images from the cameras 800 may occur using universal serial bus (USB), Firewire or wifi network devices.
- the fifth and sixth software instructions may comprise three dimensional modelling software.
- the three dimensional modelling software may comprise 3DSOM PRO.
- the three dimensional modelling software may comprise compiled object oriented software instructions.
- Lights 840 may be a part of the system 152 , which may be used to create uniform lighting conditions to create the least amount of shadows. Reflectors may be used to further achieve ambient light conditions within the booth 152 .
- a uniform background may be used within the walls of the booth to aid in the masking process. Those skilled in the art may find, for example, that a green background generally aids in the masking process.
- the size of the stereophotogrammetry body scan booth 152 may be variable or adjustable, generally having little effect on the operation of the booth 152 . This allows the booth 152 to be adjusted for use in different spatial arrangements as space may provide.
- a flow diagram illustrates further steps performed by avatar software application 904 .
- 3D mesh 159 previously created in stereophotogrammetry system 150 , may be sent to the avatar software application 904 .
- the initial step performed by avatar software application 904 is step 427 , importing the 3D mesh 159 .
- a prior art body scanner system 149 may be used in place of stereophotogrammetry system 150 , where prior art body scanner 149 may refer to all currently existing forms of body scanners described in prior art, or alternatively all other body scanners contemplated by future technologies. Then, prior art body scanner system 149 may also provide a 3D mesh as an output. In this case, the initial step performed by avatar software application 904 is step 427 , similarly importing the 3D mesh 159 .
- output data from prior-art body scanner 149 may only provide raw scan data as input in step 425 , and not a 3D mesh.
- 3D mesh 159 may be created from a prior-art scanner system's 149 raw scan data using MESHLAB software, a widely available open source application available from http://meshlab.sourceforge.net/, 3DS MAX, and/or any 3D software able to perform such a function with raw scan data.
- step 426 3D mesh 159 is imported into 3DS MAX software.
- step 428 scaling and alignment of 3D mesh 159 with base avatar 158 may take place.
- the base avatar 158 may be superimposed on top of the 3D mesh 159 .
- the base avatar 158 may then be scaled in size such that its height aligns with the height of the 3D mesh 159 .
- the shape and proportion of the base avatar 158 may not change. In other words, the system grows or shrinks base avatar 158 so that 3D mesh 159 and base avatar 158 occupy a similar volume.
- the limbs of base avatar 158 may also be adjusted to align with the limbs from 3D mesh 159 .
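The scaling step above — growing or shrinking base avatar 158 so its height matches 3D mesh 159 while its shape and proportion are preserved — can be sketched as one uniform scale of the avatar's vertices. This is a minimal illustration; the choice of y as the vertical axis and the vertex layout are assumptions:

```python
import numpy as np

def scale_to_height(avatar_vertices, target_height):
    """Uniformly scale avatar vertices so their vertical extent matches the
    scanned mesh's height, leaving proportions unchanged.
    Assumes the y-axis (column 1) is vertical."""
    v = np.asarray(avatar_vertices, dtype=float)
    current_height = v[:, 1].max() - v[:, 1].min()
    return v * (target_height / current_height)

# A 2-unit-tall pair of vertices scaled to match a 4-unit-tall mesh.
scaled = scale_to_height([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]], 4.0)
```

Because every coordinate is multiplied by the same factor, the avatar's proportions are untouched, matching the behavior described above.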
- step 430 the head, hands, and feet are detached from base avatar 158 in order to complete the next step.
- the torso of base avatar 158 is conformed to the torso of 3D mesh 159 .
- MAXSCRIPT code, written in the scripting language provided by 3DS MAX, may be run within 3DS MAX. This script moves vertices of the torso of base avatar 158 to the torso of 3D mesh 159 , such that their shapes and proportions are the same and they occupy the same volume. In running this script, the skinning may be lost and may need to be reproduced.
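The conform step can be illustrated in Python rather than MAXSCRIPT, with a crude nearest-point rule standing in for whatever vertex correspondence the actual script uses:

```python
import numpy as np

def conform_vertices(base_vertices, scan_points):
    """Move each base-avatar torso vertex onto its nearest point of the
    scanned mesh — a simplified stand-in for the conform step described
    above, not the patent's actual MAXSCRIPT."""
    base = np.asarray(base_vertices, dtype=float)
    scan = np.asarray(scan_points, dtype=float)
    out = np.empty_like(base)
    for i, v in enumerate(base):
        distances = np.linalg.norm(scan - v, axis=1)
        out[i] = scan[distances.argmin()]
    return out
```

After such a move the two torsos occupy the same volume, but the original skinning data no longer matches the moved vertices, which is why re-skinning follows in step 436.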
- step 434 the hands, feet and head of base avatar 158 are re-attached to newly conformed mesh.
- step 436 the conformed mesh is re-skinned using saved data stored in avatar data storage 170 .
- step 438 animation is applied.
- This step may store a standard point-cache file, which stores the animation components of consumer avatar 171 or fit model avatar 173 .
- the conformed mesh may be referred to now as consumer avatar 171 . Otherwise, if the subject was fit model body 151 then the conformed mesh may be referred to now as fit model avatar 173 .
- consumer avatar 171 or fit model avatar 173 is exported from 3DS MAX and stored in avatar data storage 170 .
- consumer avatar 171 or fit model avatar 173 may be derived directly from body measurements 176 instead of 3D mesh 159 , where body measurements and other shape data 176 may have been extracted from raw scan data 172 , or from user data 177 (e.g. demographics) using avatar software application 904 . Further quantitative information may include data originated from statistical analysis of historical body scans (sizing survey data 178 ) and/or avatar statistical data 179 . If the consumer provides these measurements, they may do so by entering them on computing device 24 , which then stores them in user data 177 .
- the computing device 24 may comprise any type of processing device, such as a personal computer (desktop or laptop), smartphone, iPHONE®, iPAD®, tablet pc, mobile computing device, kiosk, gaming device, media center (at home or elsewhere), or the like.
- the consumer may enter body measurements and/or select other avatar features using an HTML form or a client-side software application 28 running on computing device 24 .
- the user's selections and entered data are then sent to ASP 100 's avatar software application 904 running in avatar processing system 160 .
- a flow chart illustrates the steps for creating an avatar from any combination of data entities 176 , 177 , 178 , and 179 , according to one embodiment.
- step 500 the consumer body measurements and other shape data 176 are gathered.
- base avatar 158 may be morphed to create the shape of consumer avatar 171 or fit model avatar 173 .
- base avatars 158 may have morph targets, allowing them to be morphed.
- additional base avatars 158 may be created with additional morph targets.
- a morph (sometimes called a control) is applied to the base avatar 158 that links to the morph target, and can be used to interpolate between the two objects, changing the size/shape of the base object to match the morph target's geometry either partially or completely.
- By adjusting the morph targets, one can approximate the shape of a new avatar. When several morphs are adjusted such that the new avatar similarly matches the consumer body 22 's or fit model body 151 's body shape and/or measurements, then one has arrived at consumer avatar 171 or fit model avatar 173 respectively.
- Each morph target may correspond to one or many points of measure.
- Points of measure are control points for a specific body measurement from body measurements and other shape data 176 (e.g. the circumferential waist measurement may have a control point). Therefore, when the point of measure needs to be changed to a specific body measurement value (given by the user, extracted from raw scan data, or derived by some other means), the morph target is adjusted.
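The morphing mechanism above can be sketched as standard linear morph-target interpolation. The assumption that a point of measure varies linearly with the morph weight is an illustrative simplification, and the function names are not from the patent:

```python
import numpy as np

def apply_morph(base, target, weight):
    """Linearly interpolate base geometry toward a morph target.
    weight=0 leaves the base unchanged; weight=1 matches the target."""
    base = np.asarray(base, dtype=float)
    target = np.asarray(target, dtype=float)
    return base + weight * (target - base)

def solve_weight(base_measure, target_measure, desired_measure):
    """Pick the morph weight that drives one point of measure (e.g. a
    circumferential waist measurement) to a desired body-measurement
    value, assuming the measurement varies linearly with the weight."""
    return (desired_measure - base_measure) / (target_measure - base_measure)
```

For example, if the base avatar's waist measures 70 and the fully morphed target's waist measures 90, a consumer waist of 80 would be matched by a weight of 0.5.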
- a graphic slide show illustrates an exemplary flow of the morphing process described above.
- the base avatar 158 is shown in its original shape.
- the morph targets are adjusted closer to the consumer measurement data.
- slide 2004 the morph targets are reached, and the consumer avatar 171 is therefore created.
- base avatar 158 may be morphed as described above.
- Another embodiment includes supplementing body measurement 176 , user data 177 , sizing survey data 178 , or avatar statistical data 179 with digital images 174 .
- Digital images 174 from a single camera may further enhance the process of creating consumer avatar 171 or fit model avatar 173 .
- Multiple digital photographs may be used as references for sculpting the mesh of base avatar 158 within avatar software application 904 , wherein sculpting refers to the process of adjusting the morph targets to match a visual contour of consumer body 22 or fit model body 151 given in a digital photograph.
- a flow diagram illustrates the steps for creating an avatar according to one embodiment.
- digital photographs can be taken of a consumer body via a webcam or any digital camera. To create an avatar from multiple photographs, at least three photographs may be used (front, back and side), along with a height measurement. The digital photographs may be sent to the avatar software application 904 .
- the digital photographs can be masked such that everything besides the consumer body is removed from the image. This can be accomplished using MATLAB software, PHOTOSHOP by Adobe Systems Incorporated, 345 Park Avenue, San Jose, Calif. 95110-2704, or any image editing software.
- the base avatar mesh is sculpted.
- the digital photographs may be used as references to match the shape of the avatar to the real person.
- the photographs may then be mapped to planes in a 3D scene in 3DS MAX and placed around the base avatar's mesh. This makes it possible to use the photographs as references to the shape of the body that is being reproduced digitally. For example, if the photograph is front-facing, then the base avatar's mesh is also front-facing in the scene.
- the base avatar's morph targets are adjusted to get the shape close to where it should be to match the silhouette of the reference image.
- vertices in the base avatar's mesh are adjusted using soft selection methods to correct the avatar to match the references, and the measurements.
- photographs of the front, side and back of the body are adjusted digitally to correct errors in the photography as much as possible.
- body measurements 176 can be further enhanced by adding images from a single camera of the body and face of consumer body 22 or fit model body 151 .
- a flow diagram illustrates a method for modelling the face of consumer body 22 or fit model body 151 .
- the face of consumer body 22 or fit model body 151 can be modelled using digital photographs from a webcam or digital camera.
- step 550 three close-up images of the front profile, left profile, and right profile of the face of consumer body 22 or fit model body 151 may be taken and sent to the avatar software application 904 .
- step 552 FACEGEN Software, provided by Singular Inversions, 2191 Yonge Street, Suite 3412, Toronto, ON M4S 3H8, Canada, can be used to create a 3D mesh of the head.
- step 554 a 3D mesh of the head can then be added to consumer avatar 171 or fit model avatar 173 .
- the next process may include draping the 3D virtual garment 183 on a consumer avatar 171 in an automated process on the web or computing device 24 , resulting in consumer drape 1102 .
- the process begins when the consumer chooses to virtually try-on 3D virtual garment 183 .
- the consumer can request to virtually try-on 3D virtual garment 183 by way of a graphical user interface (GUI) on computing device 24 , or by sending a request over the internet through a website.
- the consumer may send a request on the internet to virtually try-on a garment by clicking hyperlink 81 which may reside in retailer's online store 80 , a third-party online store, or on an online store running ASP 100 .
- Hyperlink 81 may be positioned next to a display of a 3D virtual garment 183 , or a digital representation of production sample garment 59 available for virtual fitting.
- a sequence of events is started. With reference to FIG. 20 , a flow chart describes the events that occur when a user decides to try on a virtual garment.
- the user may select hyperlink 81 or press the button next to 3D virtual garment 183 or a digital representation of production sample garment 59 on a website.
- the button or hyperlink 81 provides access to application service provider (ASP) 100 in step 602 .
- the ASP 100 may communicate directly with retailer online store 80 or computing device 24 and may run 3D draping software application 900 . With each request, data that identifies the user is included. In the ASP model, if the user is not known, then the user is prompted to sign in or create a user profile with the ASP 100 .
- a user may run 3D draping software application 900 locally on computing device 24 enabling the user to virtually try on garments.
- This embodiment may require the user to sign in and exchange data with ASP 100 or the retailer system.
- 3D draping software application 900 may run on computing device 24 or may run online in ASP 100 as an online service for retailers or consumers over a wide area network through a network connection.
- 3D virtual try-on processing system 1200 may exist at the retailer or may be hosted by a third party web server.
- 3D draping software application 900 may run on kiosk 130 .
- the user may click on a link or a button with a mouse, or interact with a touch screen on the display of computer device 131 .
- the user may see the resultant output of the 3D virtual try-on process on 3D viewer application 132 .
- step 604 it is determined whether the appropriate size for the consumer has already been determined. If so, processing moves to step 614 . Otherwise, processing moves to step 608 , to conduct size prediction algorithm 908 .
- consumer's body measurements and other shape data 176 are queried from avatar processing system 160 and compared against 3D virtual garment measurements 184 of 3D virtual garment 183 at corresponding points of measure.
- the root mean square (rms) of the deviations of these two sets of measurements is calculated for each size available for production sample garment 59 .
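The size-selection calculation just described can be sketched as follows; the function names are illustrative, not from the patent:

```python
import math

def rms_deviation(avatar_measures, garment_measures):
    """Root mean square of the deviations between avatar measurements and
    garment measurements at corresponding points of measure."""
    diffs = [a - g for a, g in zip(avatar_measures, garment_measures)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def best_size(avatar_measures, garment_measures_by_size):
    """Return the size whose garment measurements give the lowest RMS
    deviation from the avatar — the candidate initial guess."""
    return min(garment_measures_by_size,
               key=lambda s: rms_deviation(avatar_measures,
                                           garment_measures_by_size[s]))

# Toy example: three points of measure (e.g. chest, waist, hip) in cm.
sizes = {"S": [86, 66, 91], "M": [90, 70, 95], "L": [94, 74, 99]}
print(best_size([90, 70, 95], sizes))  # "M"
```

The size with the lowest RMS then serves as the initial guess evaluated in step 610.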
- Ease added to digital pattern 180 may be added to the shape of the avatar to better assist in attaining a solution.
- step 610 it is determined whether the size that results in the lowest rms is sufficient for an initial guess. Those skilled in the art of statistical analysis may use chi-squared or other statistical tests to assess the strength of the initial guess, which may depend on the accuracy with which the consumer avatar 161 duplicates the size, shape and proportion of consumer body 22 . Moreover, the user may determine if the initial guess is sufficient. If it is determined that the size is sufficient to serve as the initial guess for draping, then processing moves to step 614 wherein the initial guess of the 3D virtual garment 183 is queued for draping on the consumer avatar 161 . Otherwise, processing moves to step 612 wherein multiple sizes of 3D virtual garment 183 are queued for draping on the consumer avatar 161 .
- queue simulation request(s) is/are performed. Once received, simulation requests are sent to a queue system 903 that is capable of maintaining lists of multiple simulation requests from multiple users.
- step 618 processing moves to step 620 where the system retrieves the consumer drape 1102 that corresponds to the garment the user wishes to have already displayed on their avatar before draping additional clothing.
- step 622 associated files for the simulation that are queued are then retrieved from data storages 110 and 170 .
- data storages 110 and 170 For example, all or any combination of files stored in data storages 110 and 170 may be retrieved which may be required for the size algorithm, the simulation and the fit analysis described above.
- step 624 node polling system 912 is initiated.
- the software running the queue system 903 checks the node polling system 912 to find an available GPU 1002 .
- GPU 1002 may reside in a GPU cloud computing center 1000 .
- step 628 the polling system 912 is updated to reflect that the selected GPU 1002 is in use for the simulation request and not available for other simulations.
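Steps 624 through 628 — checking node polling system 912 for an available GPU 1002 and marking it in use — might be sketched as follows. This is a minimal stand-in; the real system presumably tracks nodes across GPU cloud computing center 1000 :

```python
import threading

class NodePollingSystem:
    """Minimal sketch of node polling system 912: track which GPUs are
    free and hand an available one to each simulation request."""

    def __init__(self, gpu_ids):
        self._free = set(gpu_ids)
        self._lock = threading.Lock()

    def acquire(self):
        """Return an available GPU id and mark it in use (steps 624-628),
        or None if every GPU is busy."""
        with self._lock:
            return self._free.pop() if self._free else None

    def release(self, gpu_id):
        """Mark the GPU available again once the drape simulation is done,
        as in step 636."""
        with self._lock:
            self._free.add(gpu_id)
```

The lock makes the sketch safe when multiple queued simulation requests poll for a node concurrently.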
- step 630 3D draping software application 900 then continues by processing the simulation on the selected GPU 1002 .
- the 3D draping software application 900 may be EFIT with slight modifications.
- 3D draping software application 900 may run EFIT without a GUI and user action.
- 3D draping software application 900 is simply EFIT software that has been modified to run automatically by accepting simulation requests from the queue, loading the appropriate files, processing the simulation by draping the garment on one or more CPUs or GPUs, and then exporting the required output files.
- Processing involves draping 3D virtual garment 183 on consumer avatar 161 .
- the existing fit model drape 186 on fit model avatar 173 may be loaded onto consumer avatar 161 .
- the drape process may be continued to readjust to account for the difference in the two avatars.
- the resultant output is consumer drape 1102 .
- Processing of cloth simulations in a 3D environment may be hardware-intensive.
- GPUs 1002 are preferred for simulation of 3D graphics. However, when GPUs 1002 are not available, more traditional CPUs may be used in their place.
- GPUs 1002 or CPUs can be run in parallel to increase simulation processing speed through multi-threading so long as the selected processor supports it.
- processing may include simulating for animation.
- an animation file is loaded.
- the animation file may be of consumer avatar 161 walking, running, dancing, sitting, or performing any human motion. Draping is performed on each frame of animation of consumer avatar 161 and then stored in consumer drape 1102 .
- FIG. 21 a diagram shows an example of what the above simulation and animation may look like on computer device ( 24 in FIG. 1 ) in the context of a virtual fitting room according to one embodiment.
- browser 26 is used as the interface.
- step 634 data resulting from the previous steps of FIG. 19 is exported.
- the following data files may be exported and added to avatar data storage 170 and/or 3D virtual try-on data storage 1100 for later retrieval, by way of example, and not by way of limitation: consumer drape file 1102 ; 3D viewer data 1112 ; fit data 1104 ; and rendered media 1108 .
- step 636 the node polling system 912 is updated to reflect that the selected GPU 1002 is now available.
- a fit analysis algorithm 906 may be executed in order to determine qualitative and quantitative data with respect to the outcome of the simulation (the 3D virtual try-on process).
- a fit analysis object may be created to store this qualitative and quantitative data.
- the output of fit analysis algorithm 906 may also be fit data 1104 and/or rendered media 1108 .
- Fit analysis may include deriving qualitative and quantitative data from a consumer drape 1102 for multiple sizes for a specific garment, or just one single size.
- Fit analysis algorithm 906 may perform a stretch test to determine how much the virtual fabric is stretching in consumer drape 1102 .
- Positive stretch values may indicate tighter fit areas, zero or a small stretch value may indicate areas of good fit or simply no-stretch.
- Negative stretch values may indicate areas of compression.
- stretch values may be used to determine how well or how poor a garment fits an avatar. This data can then be stored additionally as fit data 1104 .
- Stretch can be calculated in many ways. For example, but not by way of limitation, stretch may be calculated by measuring the percent difference in a specific measurement before and after the drape. In other words, an initial garment measurement might yield one length. After draping the garment on an avatar, the draped garment measurement at the same location might have a length that has increased or decreased. In one embodiment, the percent difference in length for that specific measurement may be defined as the stretch value. In another embodiment, the stretch value may be calculated for many garment measurements, and the stretch value may refer to the total stretch of all garment measurements, or the average stretch value of all garment measurements.
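The percent-difference definition of stretch described above can be sketched directly; the function names are illustrative:

```python
def stretch_value(initial_length, draped_length):
    """Percent difference of one garment measurement before and after the
    drape. Positive values suggest a tighter fit; negative values suggest
    compression, per the conventions described above."""
    return 100.0 * (draped_length - initial_length) / initial_length

def total_stretch(measurement_pairs):
    """Average stretch over many garment measurements, as in the
    embodiment where the stretch value refers to all measurements."""
    values = [stretch_value(a, b) for a, b in measurement_pairs]
    return sum(values) / len(values)

# A waist measurement growing from 100 to 105 after draping is +5% stretch.
print(stretch_value(100.0, 105.0))  # 5.0
```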
- Quantitative data may also include calculating the change in stretch in a similar fashion as described above, but with initial value set to the stretch value of the base size, and the final value being the stretch value of the selected size (if other than the base size).
- quantitative data may also include calculating the stretch value for specific points of measure, rather than for the entire garment, and then comparing them with the initial 3D virtual garment measurements from fit model drape 186 .
- quantitative data may also include calculating the total volume of space between the garment and the body and assessing how that total volume may increase or decrease from size to size. All data may be used together, or in pieces in a decision engine to establish a prediction of size.
- the decision engine may consider the total volume between the garment and the body, from size to size, versus the total stretch value, from size to size, and weight the two data types to arrive at the best fit of the garment to the body. It is well known to those skilled in the art that common procedures are available to determine how a garment is fitting using specific points of measure.
- an example web page produced by the system illustrates how stretch values may be visually displayed using a color tension map.
- These color tension maps can be viewed in any image format, on the web, or in any standard image viewing software.
- the color maps may also be viewable using 3D Viewer Application 82 .
- the color tension map displays high stretch values in red, low stretch values in green, and negative stretch values in blue.
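The red/green/blue convention of the color tension map might be sketched as a simple threshold rule; the threshold value is an assumption for illustration:

```python
def tension_color(stretch, tight_threshold=2.0):
    """Map a stretch value to the color convention described above: red
    for high positive stretch, green for low or zero stretch, blue for
    negative stretch. The 2% threshold is an illustrative assumption."""
    if stretch > tight_threshold:
        return "red"    # tight fit area
    if stretch < 0:
        return "blue"   # compression
    return "green"      # good fit or little stretch
```

Applying this per vertex or per face of consumer drape 1102 would yield the kind of color map shown on the example web page.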
- data may include visual images of consumer drape 1102 .
- Qualitative data may include a visual representation or image of the consumer drape using a color tension map to show the parts of the garment that are fitting tight, loose, or well.
- the color tension maps may be configured to show stretch values in certain directions with respect to the grain line of the fabric.
- a color tension map which display stretch values along the warp direction may be very different than a color tension map which displays stretch values along the weft or bias directions.
- Those skilled in the art may recognize different types of ways to present fit analysis data, including, by way of example, and not by way of limitation: a color map showing shear, color map showing pressure on a body, color map showing pressure from air, color map showing drag force, color map showing tension, color map showing compression, gray scale map showing shear, gray scale map showing pressure on a body, gray scale map showing pressure from air, gray scale map showing drag force, gray scale map showing tension, or gray scale map showing compression.
- FIG. 23 another web page produced by the system illustrates how another form of a visual representation of consumer drape 1102 may show the 3D virtual garment as partially transparent.
- This technique is referred to as see-through mode, where the garment is partially transparent and the user can see partially through the garment, revealing the avatar and aiding the consumer in assessing how much space there is between the body and the garment.
- the opaqueness or transparency of the garment may also be adjusted.
- Fit analysis algorithm 906 may perform many other types of calculations. For example, but not by way of limitation, fit analysis algorithm 906 may calculate the total volume of space, using methods in calculus, between 3D virtual garment 183 and consumer avatar 161 for all sizes of consumer drape 1102 . This volume may aid in interpreting the correct size of the garment. Moreover, this calculation may aid in interpreting the fit of a garment.
- The data gathered from the fit analysis algorithm, whether quantitative, qualitative, or both, is stored as fit data 1104 and becomes extremely useful information to retailer system 50 and consumer system 50 . More about this fit data will be discussed later.
- the output data may be sent to the consumer's computing device 24 by way of either a browser 26 or software application 28 .
- 3D viewer data 1112 and fit data 1104 are displayed in 3D viewer application 82 or 132 .
- 3D viewer application 82 may be embedded in a webpage viewed on browser 26 or may be an application on consumer computing device 24 .
- 3D viewer application may run in ASP 100 and may be viewable in browser 26 .
- 3D viewing application 82 or 132 is an interactive rendering Java applet made with Java and Java 3D libraries, each available from Oracle/Sun, 500 Oracle Parkway, Redwood Shores, Calif. 94065, with built-in functionality to rotate, pan, zoom, and animate virtual garment 183 on consumer avatar 171 .
- the user may also view the drape of one size larger or smaller than the estimated size.
- the user can also select to view the current virtual garment 183 with a color tension map, in x-ray mode, playback animation of the drape, or view the garment with the avatar hidden from view.
- the user can render an image to save in common image formats.
- 3D viewer application 82 or 132 may also have other interactive features that allow the user to rotate, pan, and zoom the 3D content.
- the user may also be able to annotate the garment with comments.
- live sharing and chatting may be implemented so that the user can share the content live with another user. Chatting and video applications may be embedded allowing users to communicate further and discuss the 3D content.
- 3D viewer application 82 may be an interactive renderer created using c++, python, or any programming language capable of creating 3D web applications.
- the user in step 644 , can rate and/or review the fit of the garment by giving a thumbs-up or thumbs-down. In another embodiment, the user can rate and/or review the garment on a numeric scale. In yet another embodiment, the user can rate the garment as “Fits well, too tight or too loose”. Other rating systems known to those skilled in the art can be used. All such reviews described above can be stored in 3D virtual try-on data storage 1100 as user reviews 1106 .
- step 646 the user is given the option of saving consumer drape 1102 of 3D virtual garment 183 for future viewing or mixing with other garments for viewing (e.g., shirt and pants). If saved, virtual garment 183 appears in the user's virtual closet 290 where the collection of consumer drapes 1102 is available for the user to view again. The user's subsequent action(s) are tracked within the application and/or webpage to determine whether they purchase the garment. If the user chooses to purchase the garment, an email notification may automatically be generated to the user notifying them that the virtual garment 183 has been saved in their user profile and can be viewed at any time by logging into the ASP 100 's web portal using computing device 24 .
- Virtual closet 290 may be accessed when the user is logged into ASP 100 .
- Virtual closet 290 may store consumer drapes 1102 of 3D virtual garments 183 that have been purchased and recently viewed.
- virtual closet 290 may display these garments 183 as visual images of drapes that do not include the model.
- Items in the closet may be viewed in 3D viewing application 30 together with other 3D virtual garments 183 , for example, from the same retailer or a different retailer, or mixed and matched in other ways.
- the virtual closet 290 may also provide for sharing between users.
- a user may share the results of their fit with contacts on Facebook, MySpace, Yelp, and other social media sites, as well as personal websites, or for viewing in applications on any computing device.
- the user may select a save image function that allows the user to take a picture or snap shot of the consumer drape 1102 of 3D virtual garment 183 on the avatar, and then upload it to their profile on a social media site.
- FIG. 24 is a flowchart that describes a process of analyzing the fit data according to one embodiment.
- step 700 data collection is performed.
- when a garment is purchased, a copy of the related consumer drape 1102 of 3D virtual garment 183 is stored in virtual closet 290 .
- Fit data 1104 , user reviews 1106 , rendered media 1108 , and consumer avatar 171 may also be stored as part of the user profile 190 on ASP 100 . All this information can be gathered for a single user, in step 700 , or for multiple users together, as in step 702 .
- the data can be mined to find trends in buying behavior, trends in consumer drapes from one garment to another, and/or trends in body shapes with particular garments or particular retailers. For example, but not by way of limitation, stretch factor calculations for relevant points of measure calculated for the virtual garment 183 could be analyzed across multiple garments for a single user, or multiple users.
- step 704 trends in stretch factor, or other fit data, may be correlated with demographics, retailers, fit models, sizes, and fabric types, revealing valuable information. For example, but not by way of limitation, such analysis may reveal that a consumer fits better with a certain set of brands than with another set of brands. Such information becomes useful in step 706 . Moreover, such correlations may be easily recognized by those skilled in the art given the data the present system makes available, since brands often have fit models with distinctively different body shapes.
- the trends discovered in step 704 may be used to better predict the outcome of fits with virtual garments in system 10 and can be used as size prediction algorithm 908 .
- fit may be a very subjective personal choice for consumers. For instance, two people of very similar body types may have dramatically different viewpoints on fit, where one person may prefer a tighter fit, or a size larger than the other. Therefore, by studying variables that measure stretch across multiple garments for groups of similar bodies, and discovering trends, those trends may then be applied to predict other garments that may fit a user.
- step 708 a product recommendation engine is built to interpret predicted garments in step 706 and then suggest those garments to the user in ASP 100 .
- data collected can be used directly to make custom patterns and therefore custom garments for the consumer.
- the data may be used to develop block patterns, or customize the patterns of garments available by the retailer.
- Custom 3D garments and patterns may be sent to the retailer based on the analysis.
- consumer drape 1102 , fit data 1104 , user reviews 1106 , and rendered media 1108 may all contain extremely valuable information not only for aiding consumers in buying clothing online, but also for apparel manufacturers and retailers. Retailers can use such information to better understand their target market, make necessary adjustments to product development, distribution, production, merchandising, and other key decisions in supply chain and sales processes referred to above.
- retailers have no immediate perceivable method of determining how a garment truly fits on each of their customers. Often times, retailers depend on statistical studies to determine the body shape(s) of their target market. Moreover, they rely on third-party research organizations that study body shapes in certain populations. However, the shapes of human bodies are difficult to standardize and are constantly changing. In consequence, most retailers fall short in reaching the broad target market they were designing for.
- a flow diagram illustrates steps to relate fit data and how retailers may interpret such relations.
- data collection is performed. For example, the following data may be collected after each fit is performed on a consumer: (1) number of fits a consumer has in a set period of time; (2) percentage of fits that result in a sale; (3) number of times a consumer tries on a specific garment; (4) the average stretch factor for each fit; and (5) each consumer's fit history and measurement chart.
- a data analysis may be performed on this data. This data can be used to determine which garments are fitting which body types. Correlations between body measurements, or sets of body measurements and purchases can be determined.
- Such correlations can be used to predict the probability that a certain consumer, based on their body shape, will or will not buy a specific garment.
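Such a correlation between a body measurement and purchase outcomes might be computed as sketched below. The numbers are made-up toy inputs purely to demonstrate the calculation, not data from the system:

```python
import numpy as np

# Hypothetical data: one body measurement (waist, cm) per consumer and
# whether each consumer purchased a specific garment (1) or not (0).
waists = np.array([66.0, 70.0, 74.0, 78.0, 82.0, 86.0])
purchased = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Pearson correlation between the measurement and the purchase outcome.
r = np.corrcoef(waists, purchased)[0, 1]
print(round(r, 2))
```

A strongly negative correlation in such data would suggest consumers with larger waists buy this particular cut less often, the kind of relationship a retailer could act on in step 706.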
- a point-to-fit analysis may give retailers access to measure in real-time the fitting process with each of its site's visitors. Such information can be used to determine how garments are performing in the virtual fitting room. Furthermore, those results can help retailers determine if changes to the construction of the garment may or may not increase sales.
- retailers may access consumer drape 1102 and derive their own fit data from the actual draped virtual fabric. Furthermore, retailers may compare these drapes with fit model drape
- a web interface may be made available to retailers. By logging on, retailers may have access to daily, weekly, monthly, quarterly, or yearly statistics on user data, which can be manipulated and searched.
- Range cameras may include, for example, the Microsoft Kinect device.
- a range camera device 2600 of this type may include, for example, a small shoebox sized attachment used for motion capture for video game consoles, or the like.
- This type of range camera device 2600 may include an infrared (IR) light emitter 2602 that emits structured infrared light, a red-green-blue (RGB) camera 2606, and a CMOS IR sensor 2604 for reading reflected IR light.
- the RGB camera 2606 is used to take visual images, whereas the IR emitter 2602 and CMOS sensor 2604 are used in conjunction to measure the depth of objects within the field of view.
- the system described herein may use the depth images attained by the CMOS sensor 2604 to create a 3D model of a subject or object within the field of view. Further, a process of capturing depth images of a human subject and creating a 3D model or avatar of the subject may be performed by one embodiment.
- a flow diagram illustrates steps that may be performed in one embodiment for scanning consumer body 22 using range camera device 2600 .
- a set of computer instructions, which is written and available from OpenNITM, may be used to capture one or more depth images by sampling consumer body 22 in an interval of time and in a fixed position in space with respect to the range camera device 2600.
- OpenNITM is middleware that is part of the free software development kit (SDK) provided by PrimeSense, located at 28 Habarzel St. 4th floor, Tel-Aviv, Israel, 69710.
- Each depth image may contain the depth or distance to the body, as well as the x-y position of each part of the body, together also called 3D position data.
- in step 2701, a library routine of OpenNITM may be called to calculate actual 3D points from the captured depth images from step 2700.
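The conversion such a routine performs can be sketched with the standard pinhole camera back-projection. The intrinsic parameters (focal lengths fx, fy and principal point cx, cy) below are illustrative placeholders, not values from the patent or from any specific device.

```python
# Sketch: back-project a depth image (pixel -> distance) into 3D points.
def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth: 2D list of distances in metres; 0 means no valid reading."""
    points = []
    for v, row in enumerate(depth):      # v = pixel row (image y)
        for u, z in enumerate(row):      # u = pixel column (image x)
            if z <= 0:
                continue                 # skip pixels with no depth reading
            x = (u - cx) * z / fx        # back-project into camera space
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points
```

Each valid pixel thus yields one 3D point in the camera's coordinate frame, which is the "3D position data" referred to above.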
- in step 2702, consumer body 22 may next be rotated, or rotate on its own, to a secondary position, by way of example and not by way of limitation, 90 degrees.
- a second series of one or more images may be captured in a second interval of time.
- the library routine of OpenNITM may be called to calculate actual 3D points from the captured depth images from step 2704 .
- the process is repeated until the subject has rotated 360 degrees, as indicated by decision diamond 2706 .
- the result is a series of 3D points, one set for each capture of images at a rotation stop point as described above.
- each set of 3D points corresponding to a rotation of the consumer body 22 is rotated and translated such that all sets fit together to form a final set of 3D points representing the entire consumer body 22 .
- This final set of 3D points is stored in step 2710 .
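The alignment step can be sketched as follows: since each capture is taken at a known rotation stop point, rotating each point set back about the vertical (y) axis by its capture angle brings all sets into one coordinate frame. The angles and points are illustrative; a production system would typically also refine the alignment (e.g., with an iterative closest point step).

```python
import math

def rotate_y(points, degrees):
    """Rotate 3D points about the vertical (y) axis."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def merge_captures(captures):
    """captures: list of (angle_degrees, point_list) pairs, one pair per
    rotation stop point. Undo each capture's rotation, then pool points."""
    merged = []
    for angle, pts in captures:
        merged.extend(rotate_y(pts, -angle))  # undo the body's rotation
    return merged
```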
- measurements may be extracted. This may be performed using various convex-hull algorithms, for example, the Graham scan algorithm or Andrew's monotone chain convex-hull algorithm.
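As one illustration of extracting a girth measurement, a horizontal slice of the body's 3D points can be reduced to a 2D convex hull with Andrew's monotone chain algorithm, and the hull's perimeter taken as an approximate tape-measure circumference. The slice points below are illustrative.

```python
import math

def cross(o, a, b):
    """2D cross product; > 0 means o->a->b turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def circumference(slice_pts):
    """Approximate tape-measure girth of one horizontal body slice."""
    hull = convex_hull(slice_pts)
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))
```

A tape measure pulled around the body follows the convex hull of the slice rather than its concavities, which is why the hull perimeter is a reasonable stand-in for girths such as chest or waist.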
- a 3D mesh is created from the 3D points. This can be performed by various methods that are commonly used to convert 3D points to a 3D mesh. For example, ball pivoting algorithms, Poisson surface reconstruction, or the like, may be used for this step.
- the mesh may be converted into 3D consumer avatar 171 as described above.
- the mesh could be rigged, skinned, and have a texture applied so that it could be animated and customized to look like the consumer body 22 .
- the consumer 22 could then use this consumer avatar 171 for an online fitting room as described above.
- clothing could be modelled as a 3D mesh, as in the case with digital patterns, and then, using the cloth simulation algorithms described above, clothing may be simulated on the avatar in 3D, allowing the consumer 22 to view in real time how a garment will look and fit on their own body.
- another sensor could be put behind the consumer 22 , or several at different angles.
- consumer 22 may alternatively be asked to rotate their body to capture their body from multiple angles as described above.
- the corrections in change of posture may be made by using a pose tracker library by OpenNI.
- the OpenNI library contains functions for tracking poses by assigning a skeleton to the consumer body 22 . For example, if the arm position has changed from the first series of images to the next series of images after the body was rotated, then, using the pose tracker, the new position of the arm can be used to translate the 3D points associated with the arm to the old position of the arm in 3D space, thereby correcting for movement by the user.
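A minimal sketch of this correction, assuming the tracker reports the same joint's position in both captures: the points belonging to the limb are translated by the joint's displacement. A fuller correction would rotate the limb's points about their parent joint rather than only translating them.

```python
# Illustrative posture correction: move a limb's scan points by the
# displacement of its tracked joint so both captures share one pose.
def correct_limb(points, joint_new, joint_old):
    """Translate points captured at joint_new back to where joint_old was."""
    dx = joint_old[0] - joint_new[0]
    dy = joint_old[1] - joint_new[1]
    dz = joint_old[2] - joint_new[2]
    return [(x + dx, y + dy, z + dz) for x, y, z in points]
```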
- the consumer avatar 171 could also be drawn on a monitor or flat-panel display connected to a computer or gaming system, and then be synced with the consumer's movements, such that the consumer could control its movements.
- 3D graphics could be displayed on a live video stream from RGB camera 2606 .
- Those 3D graphics could be consumer avatar 171 .
- 3D virtual garment 183 draped on consumer avatar 171 could also be displayed using augmented reality and dynamically draped using GPU cloth simulation.
- 3D virtual garment 183 may be simulated with animation in real time on consumer avatar 171 no matter what position or posture consumer avatar 171 takes in real time.
- consumer avatar 171 could be hidden from view such that it would appear to the consumer 22 that the 3D virtual garment 183 were actually on consumer body 22 as they see it in real time on the monitor.
- consumer body 22 may change posture, wherein the arm may change position in 3D space; using the pose tracking algorithm developed in OpenNITM, consumer avatar 171 may adjust its position to match the new position of consumer body 22 . Since the consumer avatar 171 is hidden, this will cause 3D virtual garment 183 to re-simulate using the cloth simulation algorithm, resulting in a new drape consistent with consumer body 22 's new posture.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computer Graphics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computer Hardware Design (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Processing Or Creating Images (AREA)
Abstract
Description
- This Application is a Continuation of U.S. application Ser. No. 13/159,401, filed Jun. 27, 2011, entitled “System And Method For Body Scanning And Avatar Creation”, which is a Continuation-In-Part of U.S. application Ser. No. 13/008,906 filed Jan. 19, 2011 entitled “System And Method For 3d Virtual Try-On Of Apparel On An Avatar,” which is a non-provisional of Application Ser. No. 61/352,390, entitled “System And Method For 3D Virtual Try-On Of Apparel On An Avatar”, filed Jun. 8, 2010.
- A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
- The invention relates to a system and method for body scanning and avatar creation. More specifically, a scanning system and method using a range camera produces an avatar and provides for draping and display of virtual garments using augmented reality.
- In order to solve the problems and shortcomings of the prior art, an apparatus is disclosed for 3D virtual try-on of apparel on an avatar. According to one preferred embodiment, a system for 3D virtual try-on of apparel on an avatar is disclosed. According to one preferred embodiment, a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving body specifications of one or more fit models, receiving one or more grade rules, receiving one or more fabric specifications, and receiving specifications of a consumer's body. The values of one or more fabric constants are determined according to the received one or more fabric specifications. One or more virtual garments in graded sizes are created and stored in a database based on the received garment specifications and fabric constants. Moreover, one or more graded virtual fit models are created and stored in a database based on the received specifications of the fit model. Each virtual garment is draped on the related virtual fit model to create a fit-model drape. An avatar is received or created to represent a consumer's body shape. A selected one of the virtual garments is determined that represents a closest size for fitting on the avatar. The selected virtual garment is then re-draped on the consumer avatar. The consumer drape can then be viewed in 3D on the web or in a software application on any computing device. Data regarding the result of the virtual try-on process can then be utilized by the retailer, the consumer, and/or a third party. This virtual try-on data can be in the form of visual data or quantitative data that can be interpreted to determine the goodness of a garment's fit.
Specifically, consumers can be presented with such data to assess the appropriate size and the goodness of a garment's fit, retailers can utilize such data for assessing how their garments are performing on their customer's bodies, and finally, such data can be used as a predictive tool for recommending further garments to consumers (e.g., in a predictive, search or decision engine).
- In another preferred embodiment, a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving specifications of a fit model, receiving a digital pattern corresponding to the fit model, receiving one or more grade rules, and receiving one or more fabric specifications. One or more graded digital patterns corresponding to one or more available sizes are calculated and stored in a database based on the received specifications of the garment, the received specifications of the fit model, the received digital pattern corresponding to the fit model, and the grade rules. The values of one or more fabric constants are determined according to the received one or more fabric specifications. An avatar representing the person's body is received, and a selected one of the available sizes is determined that represents a closest size for fitting on the avatar. A virtual garment is created from the stored graded digital pattern corresponding to the selected available size. The selected virtual garment is then draped on the avatar according to the fabric constants.
- According to yet another preferred embodiment, a method of fitting a garment on a person's body online comprises receiving specifications of a garment, receiving specifications of a fit model, receiving one or more grade rules, and receiving one or more fabric specifications. A virtual fit model is calculated and stored based on the received specifications of the garment, and the received specifications of the fit model. The values of one or more fabric constants are determined according to the received one or more fabric specifications. An avatar representing the person's body is received, and a selected size for the person's body is determined according to the received one or more grade rules. A virtual garment is created in the selected size according to the virtual fit model, the one or more grade rules, and the selected size. The selected virtual garment is then draped on the avatar according to the fabric constants.
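The closest-size determination described in these embodiments could, for example, compare the avatar's measurements against each graded size's chart and pick the size with the smallest total deviation. The size chart, measurement set, and distance metric below are illustrative assumptions for the sketch, not the patented grading logic.

```python
# Illustrative size chart: (chest, waist, hip) in centimetres per size.
SIZE_CHART = {
    "S": (88, 72, 94),
    "M": (94, 78, 100),
    "L": (100, 84, 106),
}

def closest_size(avatar_measurements, chart=SIZE_CHART):
    """Pick the size whose chart deviates least, in total, from the
    avatar's (chest, waist, hip) measurements."""
    return min(chart, key=lambda size: sum(
        abs(a - c) for a, c in zip(avatar_measurements, chart[size])))

# An avatar measuring 95 / 77 / 101 cm is nearest to size M
assert closest_size((95, 77, 101)) == "M"
```

A weighted metric (e.g., weighting chest more heavily for tops) would be a natural refinement of the same idea.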
- In yet another preferred embodiment, a computer program product stored on a computer-readable medium contains executable software instructions for fitting one or more garments on a person's body.
- In yet another preferred embodiment, a system for scanning a body comprises a processor, a range camera capable of capturing at least a first set of depth images of the body rotated to 0 degrees, and at least a second set of depth images of the body rotated to x degrees, wherein x is >0 degrees, and x <360 degrees, a first set of computer instructions executable on the processor capable of calculating a first set of three dimensional points from the first set of depth images and a second set of three dimensional points from the second set of depth images, a second set of computer instructions executable on the processor capable of rotating and translating the first and second set of three dimensional points into a final set of three dimensional points; and a third set of computer instructions executable on the processor capable of creating a three dimensional mesh from the final set of three dimensional points.
- FIG. 1 is a diagram that illustrates components of one embodiment of a system for providing online virtual try-on of apparel on an avatar;
- FIG. 2 is a diagram that illustrates further detail of the consumer system and a retail system of FIG. 1;
- FIG. 3 is a diagram that illustrates further detail of the virtual try-on system of FIG. 1;
- FIG. 4 is a diagram that illustrates further detail of the 3D virtual apparel system of FIG. 1;
- FIG. 5 is a diagram that illustrates further detail of the body scanner system used with the system of FIG. 1;
- FIG. 6 is a flow diagram that illustrates a general view of high level method steps performed by one embodiment;
- FIG. 7 is a sample screenshot of a digital pattern for a garment according to one embodiment;
- FIG. 8 is a flow diagram illustrating steps performed in creating a 3D virtual garment according to one embodiment;
- FIG. 9 is a diagram illustrating an exemplary 3D piece placement and matching of segments of a virtual garment according to one embodiment;
- FIG. 10 is a screenshot from the virtual sewing and draping process for a virtual garment according to one embodiment;
- FIG. 11 is an example of a rendering of a drape of a virtual garment according to one embodiment;
- FIG. 12 is a flow diagram illustrating the steps for creating a base avatar according to one embodiment;
- FIG. 13 is a diagrammatic right perspective view of a stereophotogrammetry body scan booth and a scan booth computing device containing body scanning software according to one embodiment;
- FIG. 14 is a flow diagram illustrating steps performed for scanning consumer body or fit model body using the stereophotogrammetry method of body scanning, as well as steps for converting the output of such body scanning method into a 3D mesh according to one embodiment;
- FIG. 15 is a flow diagram illustrating further steps performed by an avatar software application according to one embodiment;
- FIG. 16 is a flow chart illustrating steps for creating an avatar according to one embodiment;
- FIG. 17 is a flow diagram illustrating steps for creating an avatar according to one embodiment;
- FIG. 18 is a flow diagram illustrating the steps for creating an avatar according to one embodiment;
- FIG. 19 is a flow diagram illustrating a method for modelling the face of consumer body or fit model body according to one embodiment;
- FIG. 20 is a flow chart that describes events that occur when a user decides to try on a virtual garment according to one embodiment;
- FIG. 21 is a diagram showing an example of what a simulation and animation may look like on a computer device in the context of a virtual fitting room according to one embodiment;
- FIG. 22 is an example web page produced by a system according to one embodiment that illustrates how stretch values may be visually displayed using a color tension map;
- FIG. 23 is another web page produced by a system according to one embodiment that illustrates how another form of a visual representation of consumer drape may show the 3D virtual garment as partially transparent;
- FIG. 24 is a flowchart that describes a process of analyzing fit data according to one embodiment;
- FIG. 25 is a flow diagram that illustrates steps to relate fit data and how retailers may interpret such relations according to one embodiment;
- FIG. 26 is a diagram illustrating components of a prior art range camera device that could be used in one embodiment; and
- FIG. 27 is a flow diagram illustrating steps that may be performed using a range camera of FIG. 26 in one embodiment.
- For the purpose of illustrating the invention, there is shown in the accompanying drawings several embodiments of the invention. However, it should be understood by those of ordinary skill in the art that the invention is not limited to the precise arrangements and instrumentalities shown therein and described below.
- The system for online virtual try-on of apparel on an avatar, disclosed in accordance with preferred embodiments of the present invention, is illustrated in FIGS. 1-19, wherein like reference numerals are used throughout to designate like elements.
FIG. 1 is a diagram that illustrates components of one embodiment of a system 10 for providing online virtual try-on of apparel on an avatar. FIG. 2 is a diagram that illustrates further detail of the consumer system and a retail system of FIG. 1. FIG. 3 is a diagram that illustrates further detail of the virtual try-on system of FIG. 1. FIG. 4 is a diagram that illustrates further detail of the 3D virtual apparel system of FIG. 1. FIG. 5 is a diagram that illustrates further detail of the body scanner system used with the system of FIG. 1. - A three dimensional (3D) virtual
apparel processing system 112 gathers all or any combination of the following data available from retailer 50: (1) paper pattern 51, (2) grading rules 53, (3) technical pack 54, (4) digital pattern 57, (5) fit model's scan data or measurements 58, (6) production sample garment, or (7) fabric swatches, where such data is displayed in FIG. 1 in physical garment storage 55 or digital garment data storage 52. Moreover, data from stereophotogrammetry system 150 is sent to system 112. System 112 then processes all gathered data and may make output data available to all other systems. In one embodiment, application service provider (ASP) 100 may receive data from consumer system 20 and stereophotogrammetry system 150. In one embodiment, the ASP 100 and consumer system 20 may be connected through a wide area network 1500, wherein each have network connections 1502 to facilitate such connection. Retailer system 50 may also be similarly connected to network 1500. For example, the wide area network 1500 may comprise the internet, and the network connections 1502 may comprise network routers, cards, etc. commonly used to connect to the internet. In one embodiment, it may be advantageous to provide a high speed, or wideband, network connection 1502, such as a fibre optic, T1, T2, or other commonly used wideband topology. ASP 100, which may utilize off-the-shelf server software and network technology, then processes all the data and provides services for system 10. The terms garment and apparel may be used interchangeably herein, both in the plural and the singular. - With reference to
FIG. 6, a flow diagram illustrates a general view of high level method steps performed by one embodiment. Step 300 refers to the data gathering and processing that occurs in 3D virtual apparel processing system 112. Product development information received from retailer system 50 may include data from stereophotogrammetry system 150. In another embodiment, system 112 and stereophotogrammetry system 150 may be a part of retailer system 50. In yet another embodiment, system 112 may be a part of ASP 100, but stereophotogrammetry system 150 may be part of a third party network, and vice versa. Furthermore, system 112 and stereophotogrammetry system 150 may not be a part of ASP 100 or system 50, but rather a third party system. In one embodiment, 3D virtual apparel processing system 112 comprises one or more apparel product development workstations 116 with apparel product development software 114, and external hardware devices such as digitizer 118, fabric scanner 120, fabric testing equipment 122, and the like. Retailer system 50 can represent either a retailer, or several companies within the apparel retail and manufacturing supply chain. Moreover, retailer system 50 may contain any portion, combination of sub-systems, or entire systems of system 10. For example, retailer system 50 may have fabric scanner 120 located therein. Stereophotogrammetry system 150 may be used to scan fit model physical body 151, which refers to a physical fit model commonly used in apparel product development. The scan data is used to create fit model avatar object 173 using avatar processing system 160. Alternatively, the retailer may only provide measurements of the fit model 151, in which case those measurements are used in fit model avatar processing system 160 to create fit model avatar object 173. The process of creating fit model avatar object 173 may be similar to the process of creating consumer avatar object 171 described below.
The stereophotogrammetry system 150 may be located independently at a third party location, at retailer system 50, or with ASP 100. Further information provided by a retailer may include digital pattern 57, paper pattern 51, fabric and print swatches 56, grading rules 53, fit-model scan data and/or body measurements 58, and production sample garment 59. With reference to FIG. 7, a sample screenshot of a digital pattern 57 is shown.
retailers 50 may not have access to some of the information described above. For example, the retailer may not have any information on the pattern other than thetechnical pack 54, in which case aproduction sample garment 59 andtechnical pack 54 will be used by the 3D virtualapparel processing system 112. In another example, theretailer 50 may not provide atechnical pack 54, in which case theproduction sample garment 59 is used for processing as described below. - In any case, whether a pattern, and/or
technical pack 54 is received electronically from the producer's digitalgarment data storage 52, or the lesssophisticated garment information 60 is received, the information is processed into 3D virtualapparel processing system 112, and stored in afirst data storage 110. In one embodiment, if thedigital pattern 57 is received, it is imported into apparelproduct development software 114, and, if necessary, converted into the proper format. In another embodiment, if the patterns are not digital, they are digitized using a digitizer known to those skilled in the art. In another embodiment, if no pattern is received, then the pattern is made from theproduction sample garment 59 and/ortechnical pack 54. Further, fabric swatches, or theproduction sample garment 59 received are/is tested using thefabric testing equipment 122 to produce an initial set of fabric presets, which are tested as described below to produce a final set of presets. - With reference to
FIG. 8 , a flow diagram illustrating steps performed in creating 3Dvirtual garment object 183 is shown according to one embodiment. Any entity may practice one portion, or all of the steps of any or all the methods described herein. For example, and not by way of limitation, it is more likely in some embodiments that clothing manufactures orretailers 50 would provide specifications for the apparel that may or may not include a digital or paper pattern. Further, in one embodiment, the process of creating 3Dvirtual garment 183 may be performed once per garment and, and not repeated for example, irrespective of the number of times a consumer virtually tries-on the style or the number of consumers that try-on the garment. - In
step 350, from thedigital pattern 57,production sample garment 59,technical pack 54, grading rules 53, fit model scan data orbody measurements 58, and/orpaper pattern 51 received from theretailer 50, digital pattern pieces are created, or converted fromdigital pattern 57, using the apparelproduct development software 114. Generally, a pattern refers to the collection of the individual pieces of thegarment 59. In standard practice, the pattern pieces are drafted first, then laid over fabric, which is then cut around the perimeter of each piece. The resulting pieces of fabric are then sewn together to form thefinished garment 59. Therefore, the pattern refers to a blueprint of thegarment 49 and its individual pieces. - Indeed, there are several cases in which a
digital pattern 57 is received, made, or modified from the above-referenced information received from theretailer 50. In one embodiment, part of the apparelproduct development software 114 may include a software program named TUKACAD running onproduct development workstation 116 in the 3D virtualapparel processing system 112, which may be used to create or reformat the digital pattern. TUKACAD is widely used CAD software for digital pattern making, digitizing, grading, and marker making in the apparel industry, and is available from TUKATech, Inc., 5527 E. Slauson Ave., Los Angeles, Calif. 90040, www.tukatech.com. TUKACAD creates points and interpolates splines between points to create a 2D shape or CAD drawing. Additionally, the digital pattern can be graded in TUKACAD to create larger or smaller sizes. Those skilled in the art would recognize that a variety of CAD software programs may be used to perform the functions carried out by TUKACAD. - As noted above, there are several cases regarding the kind of information that is received from a
retailer 50 regarding aproduction sample garment 59 from which the digital pattern pieces are created in TUKACAD. In a first case, aretailer 50 does not havedigital pattern 57 orpaper pattern 51 for aproduction sample garment 59.Retailers 50 that do not havepatterns technical pack 54 with specifications for how the style is to be made and/or may provide or use aproduction sample garment 59 for reference. These instructions are then interpreted in 3D virtualapparel processing system 112 to create a digital pattern. - In a likely second case the customer has
paper pattern 51 for corresponding toproduction sample garment 59.Paper pattern 51 may then be digitized or scanned into TUKACAD software using digitizer orpattern scanner 118. As thepaper pattern 51 is being digitized, TUKACAD software draws the pattern in digital form resulting in a digital pattern made of digital pattern pieces. - In a likely third case, the
retailer 50 has adigital pattern 57 in a third-party format. The digital pattern may then be converted into the format that can be read by the apparelproduct development software 114 using built-in conversion tools in TUKACAD Software. - In
step 352, generally, the physical fabric of a new garment may be tested and simulated to solve for digital fabric presets to be input into apparelproduct development software 114 for processing. In order to more precisely simulate the behaviour of fabric in a virtual environment, various intrinsic characteristics or parameters that uniquely define real fabric may be determined. The results of those tests may be the fabric presets, which may be entered into a computer model. In some cases, the fabric presets are not independent variables and further testing may be used to arrive at the final fabric presets. In one embodiment, the computer model comprises a three dimensional (3D) virtual software environment. - In one embodiment, software named E-FIT SIMULATOR, also called E-FIT herein, is used as the computer model. E-FIT SIMULATOR is commercially available from TUKAtech, Inc., 5527 E. Slauson Ave., Los Angeles, Calif. 90040, www.tukatech.com, and is built using 3DS MAX's SDK. E-FIT, in one embodiment, incorporates cloth simulation plug-in software, CLOTHFX, which is manufactured by
Size 8 Software, and is readily available from TurboSquid, Inc., 643 Magazine St., Suite 405, New Orleans, La. 70130, www.turbosquid.com. E-FIT may be used in conjunction with the aforementioned CLOTHFX software to create 3D virtual apparel, including draping on a virtual model and simulating animation in a 3D environment as described below. This combination of software is currently used commonly by designers and engineers for rapid prototyping of apparel design and development. - Generally, some presets are determined by conducting physical tests on one or more swatches of the fabric from
production sample garment 59, while other presets also require an additional virtual test, wherein results from the physical test are compared with results from the virtual test in a process of linear regression, which is used to arrive at the final preset value. For example, there may be three fabric presets for stretch-one for warp, one for weft, and one for shear, which may comprise dependent variables that may not be individually solved-for in an isolated test, but rather may require linear regression using all three parameters to find the final presets. - One of the presets tested comprises stretch and shear resistance. An intrinsic property of cloth or fabric is its ability to stretch, which distinguishes it from a normal rigid body. Fabrics can vary in their ability to stretch, and this characteristic can be quantified. In the physical test of the fabric for this characteristic, the fabric assurance by simple testing (FAST) method known to those skilled in the art may be used. Specifically, the known FAST-3 fabric extensibility test may be used. Procedurally, a first sub-test is performed by hanging a swatch vertically. A weight is attached to the swatch, and the change in length due to the force of gravity is measured. The dimension of the swatch that may be tested is typically 15 cm by 15 cm. The direction selected along which to hang the swatch may depend on the direction of the grain-line of the fabric. That direction is typically known as the warp direction. In one embodiment, the test may be performed in the vertical direction (where vertical denotes the direction of gravity) for three specific orientations of the fabric. Those orientations are the directions of warp, weft, and bias. Weft is the direction perpendicular to warp. Bias is the direction that is 45 degrees from the warp and weft directions. The first measurement may be taken in the warp direction. 
The length of the swatch in the vertical may be, for example, 15 cm, and a weight of, for example, 100 grams may be attached along the bottom of the swatch, and a new length measurement is taken and recorded. The process is repeated for the weft direction. Finally, in the bias direction, the parameter being measured is called shear. For woven fabrics, measurements in the shear direction may also be made using an additional method, similar to the known KES-FB1 tensile/shear testing. For knits, the process may be the same as described above.
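The hanging-weight readings reduce to simple percent-strain values, one per direction. A sketch, with illustrative (not measured) readings for a 15 cm swatch:

```python
def percent_strain(rest_length_cm, loaded_length_cm):
    """Extension under load as a percentage of the rest length."""
    return 100.0 * (loaded_length_cm - rest_length_cm) / rest_length_cm

# Illustrative readings for a 15 cm swatch after hanging a 100 g weight:
warp  = percent_strain(15.0, 15.3)   # warp direction -> 2.0 % strain
weft  = percent_strain(15.0, 15.6)   # weft direction -> 4.0 % strain
shear = percent_strain(15.0, 16.2)   # bias (45 deg)  -> 8.0 % strain
```

These per-direction values are the raw inputs that the regression described below refines into the final fabric presets.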
- A virtual test for stretch and shear is next conducted. Generally, for virtual tests, E-FIT creates a 3D mesh object for the swatch under test, made in the dimension and shape of the cloth, to which CLOTHFX applies simulated gravity, collision with other objects, and collision with itself, so that it behaves in accordance with how physical cloth would behave in a real environment. Therefore, CLOTHFX as applied to a 3D mesh object is accomplished using a set of algorithms based on known computer cloth simulation theory. The CLOTHFX algorithms are based on modelling the 3D mesh object's vertices as having mass, and the connections between vertices as springs. In other embodiments, alternative algorithms based on known research can be used to model the mesh as interacting particles. In either case, widely known algorithms in classical dynamics may be used to find the time-varying displacement of each point in the mesh. Such solutions have constants (such as natural frequency, spring constant, mass, etc.) which can be adjusted such that the mesh behaves like any particular fabric. Therefore, before draping, constants which appropriately model the selected fabric are chosen. These constants would be the fabric presets discussed herein. Additional forces that may be modelled include damping forces, which simulate the effect of friction and air resistance. In the cases of friction and air resistance, the fabric presets found are the coefficient of kinetic friction, the coefficient of static friction, and the drag coefficient, respectively.
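The mass-spring idea behind such simulators can be illustrated with the Hooke's-law force on a single mesh edge; the spring constant here is an illustrative stand-in for a fabric preset, not a value used by CLOTHFX.

```python
import math

def spring_force(p_a, p_b, rest_len, k=50.0):
    """Hooke's-law force on vertex a from the spring (mesh edge) to b.
    k is an illustrative stand-in for a fabric's spring-constant preset."""
    d = [b - a for a, b in zip(p_a, p_b)]
    length = math.sqrt(sum(c * c for c in d))
    if length == 0.0:
        return (0.0, 0.0, 0.0)               # coincident vertices: no force
    magnitude = k * (length - rest_len)      # stretched -> pulls a toward b
    return tuple(magnitude * c / length for c in d)
```

Summing such forces (plus gravity and damping) over every edge and integrating over time is what moves the mesh's vertices so that it drapes like cloth; adjusting k per direction (warp, weft, shear) is how the presets tune the behaviour to a particular fabric.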
- The cloth simulation algorithms used in E-FIT and CLOTHFX are thoroughly described in, for example: Xavier Provot, Deformation Constraints in a Mass-Spring Model to Describe Rigid Cloth Behavior, in Wayne A. Davis and Przemyslaw Prusinkiewicz, editors, Graphics Interface, pp. 147-154, Canadian Human-Computer Communications Society, 1995; Pascal Volino, Nadia Magnenat-Thalmann, Comparing Efficiency of Integration Methods for Cloth Simulation, Computer Graphics International, pp. 265-272, July 2001; Kwang-Jin Choi, Hyeong-Seok Ko, Stable but Responsive Cloth, ACM Transactions on Graphics, 21(3), pp. 604-611, July 2002; D. E. Breen, D. H. House, M. J. Wozny, Predicting the Drape of Woven Cloth Using Interacting Particles, in Computer Graphics (Proceedings of SIGGRAPH 94), Computer Graphics Proceedings, Annual Conference Series, pp. 365-372, Orlando (Fla.), July 1994; D. Baraff and A. P. Witkin, Large Steps in Cloth Simulation, Computer Graphics (Proceedings of SIGGRAPH 98), Computer Graphics Proceedings, Annual Conference Series, pp. 43-54, Orlando, Fla., July 1998; and Rony Goldenthal, David Harmon, Raanan Fattal, Michel Bercovier, Eitan Grinspun, Efficient Simulation of Inextensible Cloth,
ACM SIGGRAPH 2007 papers, Aug. 5-9, 2007, San Diego, Calif. - In the vertical test, E-FIT and CLOTHFX may create a 3D mesh of the same dimensions as the physical swatch, then hang it vertically and attach a virtual weight digitally. CLOTHFX is used to apply cloth simulation algorithms to the 3D mesh. Under the force of gravity, the 3D mesh (now behaving as cloth) is deformed or stretched, and the resultant change in length is measured. The simulation initially uses, as defaults, the values found in the physical tests described above for the stretch/shear resistance preset in all three directions. In order for CLOTHFX to more precisely model a 3D mesh to behave as a particular fabric, regression analysis is used to solve for the presets by repeating virtual tests and adjusting the presets until the results of the physical and virtual tests match.
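The preset-fitting loop described above can be sketched by treating the virtual test as a black box and adjusting the preset until virtual and physical elongations agree. The one-line simulator below is a hypothetical stand-in for a CLOTHFX drape, and bisection stands in for whatever regression method is actually used:

```python
def virtual_stretch_test(k, m=0.1, g=9.81):
    """Stand-in for a virtual test: returns the simulated elongation (m)
    of a hanging swatch for a given stretch-resistance preset k."""
    return m * g / k  # a real simulator would drape a 3D mesh here

def fit_stretch_preset(measured_elongation, lo=1.0, hi=1000.0, iters=60):
    """Bisect on the preset until the virtual elongation matches the
    physical measurement; elongation falls as the preset (stiffness) rises."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if virtual_stretch_test(mid) > measured_elongation:
            lo = mid   # too stretchy: stiffen the preset
        else:
            hi = mid   # too stiff: soften the preset
    return 0.5 * (lo + hi)

# Hypothetical physical test: 1.2 cm of stretch under the 100 g weight.
preset = fit_stretch_preset(0.012)
```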
- Another parameter may comprise bend resistance. This parameter measures the way that fabrics differ from rigid bodies in their ability to bend. In one embodiment, a physical test uses a known method for assessment of the drape of fabrics. A circular swatch, for example around 15 cm in diameter, may be draped over a circular rigid body of smaller diameter than the swatch, which is propped up by a stand. The setup is situated under a light, such that the resultant folds cast a shadow, called a projection of the drape. The projection is then photographed, and the surface area of the projected surface is calculated.
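One conventional way to reduce the photographed projection to a single number is a drape coefficient: the fraction of the annulus between the support disk and the flat swatch that the shadow still covers. The patent does not name this quantity, and the dimensions below are hypothetical:

```python
import math

swatch_diameter_cm = 15.0
support_disk_diameter_cm = 9.0     # hypothetical smaller rigid disk
projected_area_cm2 = 120.0         # hypothetical measured shadow area

swatch_area = math.pi * (swatch_diameter_cm / 2) ** 2       # flat swatch area
disk_area = math.pi * (support_disk_diameter_cm / 2) ** 2   # support disk area
drape_coefficient = (projected_area_cm2 - disk_area) / (swatch_area - disk_area)
```

A stiffer fabric folds less, its shadow stays closer to the full circle, and the coefficient approaches 1; a limp fabric approaches 0.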
- A virtual test for bend resistance may be conducted in similar fashion to the physical test. However, instead of measuring the surface area of the projected image (or shadow from the bends), the mesh is flattened within E-FIT. The resultant area of the flattened mesh may be measured and compared with the surface area measured in the physical test. Using regression analysis, the fabric preset for bend resistance may then be adjusted, and the virtual test may be repeated until the surface areas of both tests match, wherein the resultant fabric preset is the final fabric preset for bend resistance.
- Two other presets may be the coefficients of kinetic and static friction. Fabric draped on a body can experience damping forces that result from friction with the body's surface and friction with itself or with other fabric. A physical test for static friction may be performed by placing a swatch on a plane and tilting the plane to find the angle, herein known as the repose angle, at which the swatch begins to slide. The repose angle is used to determine the coefficient of static friction, since the coefficient of static friction equals the tangent of the repose angle for an object sliding down a plane. The coefficient of static friction that results from the physical test may be used as the fabric preset, and no further calculation may be required. Therefore, this value is a direct input into CLOTHFX.
- In a physical test for kinetic friction, a constant force is applied to a swatch along a plane to find the value of the applied force at which the swatch travels at constant velocity. In one embodiment, a string is attached to the swatch, which is pulled along a plane with a known coefficient of kinetic friction. The pull force is measured using off-the-shelf instruments for measuring force. The coefficient of kinetic friction is then equal to the pull force that results in constant velocity, multiplied by the cosine of the angle of the string with respect to the plane, and divided by the normal force. The coefficient of kinetic friction may be used as the fabric preset, and no further calculation may be required. Therefore, this value may be a direct input into CLOTHFX.
- Yet another preset parameter is the surface density of the cloth. Swatches of identical dimensions can have very different weights, depending on the type of textile used to build the cloth and the density of threads used to weave or knit it. In the surface density test, the weight of the cloth is measured: in a physical test, a standard scale is used to weigh a swatch, and the weight is divided by the surface area of the swatch to arrive at the surface density. The result of the physical test may be a direct input into CLOTHFX as a fabric preset.
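The three presets above that feed the simulator directly reduce to short formulas (repose-angle tangent, pull-force ratio, and weight over area). The measurement values below are hypothetical:

```python
import math

# Static friction preset: coefficient equals the tangent of the repose angle.
repose_angle_deg = 25.0                 # hypothetical tilt at which the swatch slips
mu_static = math.tan(math.radians(repose_angle_deg))

# Kinetic friction preset: pull-force component along the plane over normal force.
pull_force_n = 0.45                     # N, hypothetical force giving constant velocity
string_angle_deg = 10.0                 # angle of the pull string above the plane
normal_force_n = 0.98                   # N, weight of the swatch on the plane
mu_kinetic = pull_force_n * math.cos(math.radians(string_angle_deg)) / normal_force_n

# Surface density preset: swatch weight divided by its area.
swatch_mass_g = 3.4                     # hypothetical scale reading
swatch_area_cm2 = 15.0 * 15.0           # 15 cm x 15 cm swatch
surface_density = swatch_mass_g / swatch_area_cm2   # g/cm^2
```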
- Another preset parameter may be air resistance. Cloth will drape differently depending on how it falls through a fluid, such as air, and how it reacts with air as it moves in space. When airflow is directed at a cloth, some fraction of the air molecules that make up the airflow will permeate or penetrate the cloth, and some will collide with it, transferring momentum to the cloth and causing it to move (drag force). The resistance to this drag can vary between fabrics.
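The drag interaction can be summarized with the standard drag equation; CLOTHFX's internal model may differ, and the coefficient and airflow values below are hypothetical:

```python
# Drag force on a swatch facing an airflow: F = 0.5 * rho * Cd * A * v^2.
rho_air = 1.2           # kg/m^3, approximate density of air
c_drag = 1.1            # hypothetical drag-coefficient preset for the fabric
area_m2 = 0.15 * 0.15   # a 15 cm x 15 cm swatch
v_mps = 2.0             # m/s, airflow speed

drag_force_n = 0.5 * rho_air * c_drag * area_m2 * v_mps * v_mps
```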
- In a physical test for air resistance, the coefficient of drag is measured, since the resistance to drag depends on the coefficient of drag, which is unique from fabric to fabric. Alternatively, one or more of the air resistance preset values provided by CLOTHFX may be used. However, those skilled in the art would recognize that other well-known tests to measure air resistance could be used to determine such presets.
- After completing the tests to obtain a final set of fabric presets, the fabric presets 181 may become part of a library of virtual fabrics in the
first data storage 110, to be applied when creating virtual apparel made of a specific fabric, removing the need to re-test the fabric with new garments made of the same material. - The next step,
step 354, comprises preparing digital pattern 180 of the production sample garment 59, either by converting digital pattern 57 from another format, digitizing or scanning paper pattern 51, or creating it using information contained in technical pack 54. Digital pattern 180 may be represented in TUKACAD file format located in data storage 110. TUKACAD's file format stores the digital pattern as a collection of points and Hermite splines that are interpolated between points. Each point has an attribute that can govern the shape and/or interpolation of the connected Hermite splines. Other types of CAD software may use alternative types of splines or interpolation methods; however, since all digital patterns can be converted into TUKACAD's format, all methods for creating and storing data points in a pattern are supported. - In one embodiment,
digital pattern 180 may be made for each particular style in a base size. A base size refers to a sample size of a garment, or a size that is used as a standard for a particular garment. Larger and smaller sizes may then be created differentially from this sample size by modifying the digital pattern 180, using a process called grading. The amount by which each point in the pattern is to be moved outward or inward is contained in grading rules 53. - The next step refers to converting the two-dimensional pattern pieces into 3D meshes. Once the digital pattern has been prepared, it may be modified with construction information useful for conversion of the 2D pattern into a 3D
virtual garment 183. Pattern pieces may need to be adjusted to reduce the complexity of some garment features (e.g., removing extra folds, creating finished pieces for pockets, plackets, etc.). Some values used for physical garment production that are not required for virtual apparel also need to be removed (e.g., fabric shrinkage, sewing allowances, etc.). All of these procedures are applied to digital pattern 180 in the TUKACAD software contained in apparel product development software 114. To further explain, the following procedures may or may not be applied to one, more, or all of the pieces of a garment, depending on the garment type. - 1) First, the
digital pattern 180 piece quantity may be adjusted. A few pieces that may otherwise be necessary for production become irrelevant for 3D virtual apparel, and may be removed from the digital pattern 180. - 2) Second, sewing allowances may be removed from
digital pattern 180. A sewing allowance is an extension of the perimeter of a piece that adds additional fabric necessary for physically sewing a garment. This allowance is not necessary for 3D virtual apparel and may be removed from digital pattern 180. - 3) Third, any shrinkage allowance may be removed from
digital pattern 180. Digital pattern pieces are often created slightly larger in anticipation that once the fabric is washed, the garment will shrink back to the appropriate dimension. Simulation of shrinkage may not be necessary, and therefore, any allowances for shrinkage in the digital pattern 180 may be removed. - 4) Fourth, variable hem lines may be removed from
digital pattern 180. Primarily in men's pants, extra fabric is added to the bottom of the pant leg so that a tailor can adjust the hem line. This additional fabric is not necessary for 3D virtual apparel and may be removed from digital pattern 180. - 5) Fifth, sewing lines may be added (for pockets, flaps, etc.) to
digital pattern 180. When a piece needs to be sewn to the inside of another piece, a drill hole may be placed in a physical garment piece. However, in the process of creating digital pattern 180, a sewing line may be drawn digitally to facilitate adding of pockets, flaps, and other features to 3D virtual garment 183. - 6) Sixth, a fabric code may be assigned to each piece of the
digital pattern 180. For example, the piece that refers to the front of a t-shirt may be assigned a fabric code by the name of cotton, whereas the piece that represents the lining of the t-shirt may be given a fabric code that represents an elastic material type, such as a polyester-spandex blend. - 7) Seventh, stitch segments may be assigned in the
digital pattern 180. Segments may be defined so that they can be sewn in E-FIT. Marks may be added to the digital pattern 180 to define the starting and ending points of the segments that will be sewn. - 8) Eighth, a size may be selected for the fit model avatar 173 (which was created from scan data or measure data from step 58). If
digital pattern 180 has been graded into several sizes, the base size may be selected to fit the fit model avatar 173. - 9) Ninth, fold lines may be assigned in
digital pattern 180. Pieces that are folded (e.g., lapels) may have a line drawn on them where the fold will occur, so that E-FIT can fold the pattern piece along that line. - 10) Tenth, pattern pieces may be rotated in
digital pattern 180. E-FIT may use the orientation of the pattern pieces as a starting point for making transformations to the 3D mesh. Arranging the digital pattern pieces into a set orientation may ease this process. - 11) Eleventh, unnecessary folds may be removed from
digital pattern 180. Some pattern pieces may be folded multiple times during the physical construction of the garment. Often, this is not necessary in 3D virtual apparel, and the digital pattern pieces are adjusted to remove this extra length or width from digital pattern 180. - 12) Twelfth, internal lines may be adjusted in
digital pattern 180. Because the 2D spline pattern pieces are eventually meshed for 3D software, some adjustment of the splines may be necessary to avoid errors in E-FIT. For instance, a line cannot be meshed, so if there is an internal pattern line that extends past the outer boundary of the pattern piece, the external part of the line may need to be removed from digital pattern 180. - The
next step 356 may be to convert the digital pattern into a 3D mesh. A 3D mesh, or polygon mesh, is a collection of vertices, edges and faces that defines the shape of a polyhedral object in computer graphics. The mesh is a collection of several closed surfaces. In a mathematical, vector-algebraic sense, which may be important for calculations, a mesh is a collection of numbers organized into several matrices. More simply stated, in geometric terms, a mesh is made of points that are joined together with segments and surfaced by polygons. - In
step 356, the digital pattern 180 may now be imported into E-FIT. The CLOTHFX plug-in in E-FIT may convert the pattern pieces into 3D mesh objects. Essentially, the 2D splines are surfaced to create a 3D mesh. The digital pattern 180 is now a 3D mesh. The 3D mesh is then further defined to have components such as pieces and segments, which later get defined with additional attributes. - In
step 358, E-FIT interprets the fabric code for each piece of digital pattern 180 and assigns the corresponding fabric presets. For example, the piece of digital pattern 180 that represents the front of a t-shirt may have been assigned a material code for cotton. E-FIT interprets this code and retrieves the fabric presets for cotton from its fabric library of presets. - In
step 360, E-FIT may apply 3D piece placement, orientation, and curvature in the 3D pattern. - In
step 362, E-FIT assigns sewing instructions. In this step, E-FIT matches each particular segment of a 3D mesh corresponding to a particular piece to another segment on the same 3D mesh, or to another 3D piece, in accordance with how the garment is supposed to be sewn together. - Referring to
FIG. 9 , a diagram illustrates an exemplary 3D piece placement and matching of the segments using E-FIT. - With reference back to
FIG. 8 , in step 364, E-FIT may virtually sew and drape the 3D mesh on the fit model avatar 173. Fit model avatar 173 is a virtual representation of the actual physical fit model, wherein the exact body measurements 164 may have been measured and used to create a virtual body in the base/sample size, or the physical fit model has been scanned and the scanned data is used to create fit model avatar 173 in the base/sample size. If fit model avatar 173 is created from scanning a physical fit model, the scanning process may be similar to the process described below with respect to an avatar. - Sewing and draping may be completed using functions provided by CLOTHFX and native E-FIT according to the sewing instructions assigned above. Often, garments have lining and/or layers of material. In such cases, layers may be placed, stitched, and draped in a specific order. The culmination of the simulation results in a drape on
fit model avatar 173 that may be identical to the drape of a real garment on a real fit model. - With reference to
FIG. 10 , a screenshot 2050 using CLOTHFX and native E-FIT is shown during the sewing and draping process according to one embodiment. - With reference back to
FIG. 8 , in step 366, animation is created for the 3D virtual garment 183. Fit model avatar 173 may have a predetermined motion or animation already applied. The predetermined motion may simply be a series of frames wherein the position of the fit model avatar 173 is slightly different in each frame, so that when played back the avatar appears to be walking. Then, to simulate animation of the garment being worn, the above-described sewing and draping is performed for each frame. In one embodiment, thirty frames are equivalent to one second of animation. - In step 368 a presentation may be created for the
retailer 50 to be approved and later presented to consumer 20. Making an object in 3D appear like a physical object may often involve not only duplicating the look in 3D software or interactive rendering software, but also requiring visual output hardware (such as a monitor or display) to accurately replicate the appearance of the object in reference to a real object. - E-FIT may apply a texture. In one embodiment, 3DS MAX is used as the 3D engine for E-FIT. Since 3DS MAX refers to "textures" as "material textures," the term "textures" will be referred to as such herein. However, it is understood by those skilled in the art that the term "texture" is used for an embodiment that does not include using 3DS MAX, but rather some other 3D software, such as PHOTOSHOP available from Adobe Systems Incorporated, 345 Park Avenue, San Jose, Calif. 95110-2704. A
material texture 188 contains data that may be assigned to the surface or faces of a 3D mesh so that it appears a certain way when rendered. Material textures 188 affect the color, glossiness, opacity, and the like, of the surface of a 3D mesh. - However, these
material textures 188 may not be photometric, in the sense that they may not accurately simulate the interaction of light or photons with the material textures 188. A user may use E-FIT's built-in material editor functions to further create the illusion of the garment's appearance. More specifically, the user of E-FIT may work to simulate the correct appearance of material textures by adjusting and applying various material texture properties or texture maps that model the color, roughness, light reflection, opacity, and other visual characteristics. - In one embodiment,
material textures 188 may be applied to the surface of each 3D mesh corresponding to each pattern piece. These material textures 188 realistically simulate various attributes that make up the appearance of production sample garment 59. The following list of attributes may be modeled:
- a. color: a combination of ambient, diffuse, specular, and/or filter
- b. roughness or bumpiness: bump maps or displacement maps
- c. light reflection: shiny, glossy, matte, etc., accomplished using general shader settings or maps
- d. opacity
- Certain attributes may be set by the retailer. For example, a retailer may send a color swatch with a specific red-green-blue (RGB) value or PANTONE color value. In instances where the appearance is dependent on the lighting conditions, the attributes may be adjusted at the retailer's discretion.
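A material texture can be viewed as a small record of the attributes listed above; the field names and values below are a hypothetical illustration, not the 3DS MAX material format:

```python
# Hypothetical material-texture record for one pattern piece.
material_texture = {
    "diffuse_rgb": (188, 32, 45),       # e.g., a retailer-supplied color swatch value
    "ambient_rgb": (40, 8, 10),
    "specular_level": 0.25,             # strength of highlights
    "glossiness": 0.4,                  # matte (near 0.0) through shiny (near 1.0)
    "opacity": 1.0,                     # fully opaque fabric
    "bump_map": "weave_bump.png",       # hypothetical map file for surface roughness
}
```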
- Prints, images, logos, and other maps can be adjusted in size, position and orientation. The retailer may provide information (included in technical pack 54) on the placement (position) and size of these maps. Using E-FIT, a user loads these maps and adjusts them accordingly. Furthermore, stitch textures, a component of
material texture 188, are added to give the appearance of actual stitching threads. - Completing the above steps results in the finished 3D
virtual garment 183 and fit model drape 186, which are then stored in data storage 110. - Additionally, in
step 370, media such as images and movies may be rendered and stored as original sample rendered media 182. Additionally, original sample 3D viewer data 187 may be created. FIG. 11 is an example of such a rendering using E-FIT. - With reference back to
FIG. 8 , in step 372, a fit analysis process may be executed which results in creating original sample fit data 18. The previous discussion, in the section "3D Virtual Apparel", has been focused on creating 3D virtual apparel. In "3D Virtual Try-On", a process of draping the existing 3D virtual apparel garment on a consumer avatar is described. Since both processes require the use of an avatar, the following section describes processes to create an avatar, whether the avatar is for a fit model or a consumer.
- An avatar may be defined as a 3D mesh constructed to have a similar shape as the
consumer body 22 or fit model body 151 it was intended to model, and may or may not be animated. Fit-model avatar 173 may be created to drape 3D virtual garment 183 on the avatar to produce fit model drape 186, by way of system 112. Likewise, consumer avatar object 171 may be used for simulating the drape of production sample garment 59 on a consumer's body 22, resulting in consumer drape 1102. The methods for any avatar, whether it be creating consumer avatar 171 or fit model avatar 173, are interchangeable and are described below. - In one embodiment,
consumer avatar 171 or fit-model avatar 173 can be generated using three types of procedures, all of which are well known to one skilled in the art. The first procedure utilizes a technique in which one mesh is conformed to another. The second procedure utilizes a technique called morphing, where one mesh is morphed to another. A third technique involves manually moving vertices from a mesh to another location, which is often called digital 3D sculpting. With respect to creating an avatar, these techniques all involve moving vertices from one position to another. The conforming and morphing methods are discussed in more detail herein; these two techniques have advantages and disadvantages relative to each other and therefore are used in varying situations. Described next is one embodiment of using each of these techniques. However, any technique not discussed, but well known to those skilled in the art, could theoretically be used. - An avatar is created using
avatar software application 904, which may be contained in avatar processing system 160. Avatar software application 904 begins creating an avatar by first accepting some input data on the consumer or fit-model. There may be many categories of input data, relating to any type of information on a human being or population of human beings—e.g., demographic information. For example, one may have data on the distribution of fat on the human body. Another example is data describing the amount of heat energy emanating from a body. A third example may be the color of the skin, eyes, and hair, and a fourth example may be data on the shape of the body. Since there are many types of information that can describe a human being, it is worthwhile to categorize the information or data. In one embodiment, the following three categories of data may be used to create an avatar: (1) body shape data, (2) body appearance/cosmetic data, and (3) body function data, where body may be defined to include all or any parts of the body, and data may be qualitative and/or quantitative, and stored in any form or format. For example, but not by way of limitation, the term body may include the torso, head, face, hands, fingers, fingernails, skin, hair, organs, bones, etc., or it may only include the torso. - Body shape data refers to data that can be used or interpreted to understand and reproduce the accurate shape of a human body subject. Body appearance/cosmetic data refers to data that helps reproduce the appearance of a human subject (e.g., eye color, hair style, skin texture). Body function data provides information on how the human subject's body functions (e.g., the systems of the body, such as lymphatic, endocrine, skeletal, immune, and others). It may also aid to have body function data on movement (e.g., how the body's limbs, torso, head, skeleton, muscles, etc., respond to movement).
Such data, for example, and not by way of limitation, may be captured using a generic motion capture technology for capturing body movement data. Finally, each data category may have many different data types in which information relating to that category is stored. The various data types for each data category are described below.
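The three data categories can be pictured as a simple record per subject; the field names and sample values below are hypothetical, not the schema of avatar processing system 160:

```python
from dataclasses import dataclass, field

@dataclass
class AvatarInputData:
    """One record grouping the three input-data categories described above."""
    body_shape: dict = field(default_factory=dict)     # measurements, scan data
    appearance: dict = field(default_factory=dict)     # eye color, skin texture, ...
    body_function: dict = field(default_factory=dict)  # movement / motion capture

consumer = AvatarInputData(
    body_shape={"chest_cm": 100.0, "waist_cm": 86.0},
    appearance={"eye_color": "brown"},
    body_function={"walk_cycle": "capture_0042"},      # hypothetical capture id
)
```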
- Beginning with the first category of data, body shape data, there may be three data types in which information on the shape of a human subject can be stored, provided, or retrieved for use in creating an avatar. For example, but not by way of limitation, the input data may be one of the following: (1) raw
body scan data 172, (2) body measurements and other shape data 176, and (3) photographs 174. Although photographs can also be a raw body scan data type, photographs taken by some other mechanism (e.g., webcam or single camera) may also be included.
body scan data 172 refers to raw output data from any type of scanner, whether it be generic body scanner 149 (e.g. point cloud originating from RF data, structured light data, lasers, mirrors, or any other type of raw data output from these scanners or other yet undiscovered types of scanners.). Moreover, raw body scan data can originate fromstereophotogrammetry body scanner 152 - Body measurements and
other shape data 176 may refer to both manual measurements taken of consumer body 22 either by the consumer or by a third party, extracted body measurements from raw scan data 172, statistically derived measurements from sizing survey data 178 or avatar statistical data 179, and/or any combination thereof. -
Photographs 174 refer to supplemental photographs of the body from different angles, which may or may not include the other parts of the body (e.g., face, hands, etc.). For example, a user may take a photograph of the face of consumer body 22 and submit the photograph online, by which the system may map the person's face to consumer avatar object 171. Photographs 174 may not originate from a scanner, but rather may originate from a web cam or a single digital camera, and may be user submitted. Photographs 174 shall not be confused with photographs originating from raw body scan data 172, especially in the case of the method of stereophotogrammetry as described below. - When creating an avatar, the highest precision in reproducing the shape, appearance and function may be desired; however, where precision in data is lacking, a combination of data types may be used to supplement the data or data precision that is lacking. Therefore, in one embodiment, a combination of data types may be used to further increase the precision of the avatar.
- For example, but not by way of limitation, one may use the following combination of data types for accurately reproducing the body shape of a human subject. These data types could include sizing survey data. Sizing
survey data 178 refers to body measurement and shape data from a population of human beings. For example, but not by way of limitation, the widely used Size USA survey, provided by TC2, which contains raw scan data or extracted body measurements from over 10,000 people, can be used. Such data may represent one or many populations with various demographic characteristics. This data may then be searchable or queried by a specific demographic or set of demographics, and additional information collected on the consumer or fit model, such as age, ethnicity, sex, residence, etc., may be used to match the consumer to a specific population that is represented in the sizing survey data. If a consumer is matched to a specific population, using demographic data in user data 177, then the body measurements or other shape data for that population may be used in part or in entirety to create the avatar of the consumer or fit model. In yet another embodiment, once a sufficient collection of consumer avatars 171 is gathered, statistics on body measurements and shape can be gathered and stored as avatar statistical data 179, used for statistical interpretation, and later mined for trends that can further be used to constrain or enhance other estimates of the shape of the body. - Once information, of any data type, regarding the three data categories discussed above, is gathered, the next step is to interpret the data and create an avatar. However, in order to create an avatar, it may be useful to first create one or
many base avatars 158. Base avatar 158 is a template avatar from which all other avatars can be made. Depending on the data type for the body shape category of data, the base avatar 158 can be morphed or conformed into the shape of consumer body 22 or fit model body 151. - With reference to
FIG. 12 , a flow diagram illustrating the steps for creating a base avatar 158 according to one embodiment is shown. In step 380, a base avatar 158 may be created using avatar software application 904 in avatar processing system 160. In one embodiment, avatar software application 904 may comprise built-in tools available in 3DS MAX or any 3D software that allows a user to create, edit and store mesh objects. Using 3DS MAX, a 3D artist may sculpt the arms, legs, torso, and other body parts. Then the 3D artist may join all the body parts together to form a single mesh of the base avatar 158. - In
step 382, the base avatar 158 is rigged. A bone structure (or biped) may be inserted into the mesh using 3DS MAX tools, and may be sized and scaled appropriately so that the bone structure fits within the mesh properly. This process is known to those skilled in the art as rigging. - In
step 384, within 3DS MAX, the bone structure may be attached to the vertices on the base avatar 158 mesh so that when the bones move, base avatar 158 will move in accordance with how a human body typically moves. This process is known to those skilled in the art as skinning, and is not to be confused with putting skin on, which falls into the category of texturing. A file that holds the skinning data may be saved in avatar processing system 160 in avatar data storage 170. -
Base avatars 158 can be created for males and females in any typical sample size (e.g., men's size 40, women's size 8, etc.). From these base avatars 158 made from sample sizes, new avatars can be made in any size and shape. - As discussed earlier, the use of the conforming or morphing techniques is dependent on the type of data received on
consumer body 22 or fit model body 151. If the data type is raw scan data 172, then a mesh is created from the raw scan data, and the base avatar 158's mesh is conformed to it. In another embodiment, the received data type may be body measurements and other shape data 176. In such a case, the morphing technique may be used, and the base avatar 158 mesh is morphed. The following discussion relates to the case where the data type is raw scan data 172. - Generally, in the prior art,
consumer avatar 171 and fit model avatar 173 would be created by measuring the shape of a consumer's body, or of a physical fit model as described above, by way of a set of measuring tools, such as lasers, cameras, structured light, radio waves, or other electromagnetic-based tools. Such configurations of measurement are typically called direct or passive body scanners, and will be collectively referred to as body scanners herein. In one embodiment, stereophotogrammetry system 150 may comprise any of these prior-art types of body scanning technologies, or alternatively, stereophotogrammetry system 150 may include stereophotogrammetry body scan booth 152 described below. Stereophotogrammetry system 150 may also comprise any body scanning software for processing raw scan data to create 3D meshes or avatars. Alternatively, stereophotogrammetry system 150 may include body scanning software 154 described below. For example, companies that produce some of these types of prior art scanners include Unique, 133 Troop Avenue, Dartmouth, NS, B3B 2A7, Canada; TC2/Imagetwin, located at 5651 Dillard Dr., Cary, N.C. 27518; Telmat Industrie, 6, rue de l'Industrie—B. P. 130—Soultz, 68503 GUEBWILLER Cedex (France); and Human Solutions, GmbH, Europaallee 10, 67657 Kaiserslautern, Germany. - However, in one embodiment of the presently described system, stereophotogrammetry may be applied. Photogrammetry is the practice of determining the geometric properties of objects from photographic images. In the simplest example, the distance between two points that lie on a plane parallel to the photographic image plane can be determined by measuring their distance on the image, if the scale of the image is known.
- A more sophisticated technique, called stereophotogrammetry, involves estimating the three-dimensional coordinates of points on an object. These are determined by measurements made in two or more photographic images taken from different positions. Common points are identified on each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the three-dimensional location of the point. More sophisticated algorithms can exploit other information about the scene that is known a priori, for example symmetries, in some cases allowing reconstructions of 3D coordinates from only one camera position.
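The ray-intersection (triangulation) step described above can be illustrated with a short sketch. This is hypothetical example code, not part of the patent: given two rays, each defined by a camera position and a unit direction toward the common image point, the reconstructed 3D point may be taken as the midpoint of the shortest segment connecting the two rays, which coincides with their intersection when the rays actually meet.

```python
def triangulate(o1, d1, o2, d2):
    """Estimate the 3D point seen by two cameras: midpoint of the shortest
    segment between rays p1(t) = o1 + t*d1 and p2(s) = o2 + s*d2."""
    sub = lambda u, v: [a - b for a, b in zip(u, v)]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b               # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    p1 = [o + t * x for o, x in zip(o1, d1)]
    p2 = [o + s * x for o, x in zip(o2, d2)]
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]
```

With more than two cameras, the same idea generalizes to a least-squares problem over all rays, which is where bundle adjustment comes in.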
- Algorithms for photogrammetry typically express the problem as that of minimizing the sum of the squares of a set of errors. This minimization is known as bundle adjustment and is often performed using the Levenberg-Marquardt algorithm.
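As a toy illustration of this minimization (an illustrative sketch, not the patent's implementation; real bundle adjustment optimizes thousands of camera and point parameters jointly), a one-parameter Levenberg-Marquardt loop fitting y ≈ k·x might look like:

```python
def levenberg_marquardt_1d(xs, ys, k0=0.0, lam=1e-3, iters=50):
    """Minimize sum((y - k*x)^2) over k with damped Gauss-Newton steps."""
    def sq_error(k):
        return sum((y - k * x) ** 2 for x, y in zip(xs, ys))

    k = k0
    for _ in range(iters):
        residuals = [y - k * x for x, y in zip(xs, ys)]
        jac = [-x for x in xs]                      # d(residual)/dk
        jtj = sum(j * j for j in jac)
        jtr = sum(j * r for j, r in zip(jac, residuals))
        step = -jtr / (jtj + lam)                   # lam is the damping term
        if sq_error(k + step) < sq_error(k):
            k, lam = k + step, lam * 0.5            # accept step, relax damping
        else:
            lam *= 10.0                             # reject step, raise damping
    return k
```

The damping term lam interpolates between Gauss-Newton (small lam) and gradient descent (large lam), which is the essential Levenberg-Marquardt idea.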
- The stereophotogrammetry method may have advantages in cost and features that other methods cannot achieve. With reference to
FIG. 13 , a diagrammatic right perspective view of a stereophotogrammetry body scan booth 152, and scan booth computing device 153 with body scanning software 154, is shown according to one embodiment. Briefly, using stereophotogrammetry, several cameras 800, for example twenty, may be positioned around the human body and then simultaneously triggered to acquire multiple digital photographs. The resultant photographs may then be transmitted to scan booth computing device 153, which contains body scanner software 154. In other words, body scanner software 154 may trigger cameras 800 and acquire photographs from cameras 800. Body scanner software 154 may be used to mask and remove background colors, and may further implement a process called segmentation to remove any objects other than the subject of interest. Body scanner software 154 performs many of the previously mentioned steps using a program originally written using MATLAB software, available from MathWorks, Inc., 3 Apple Hill Drive, Natick, Mass. 01760-2098. However, those skilled in the art would recognize that many different software applications may perform similar functions. For example, the software may be written using the C++ programming language to perform the same functions implemented in the MATLAB software. - Furthermore, the refined photographs are then sent as inputs to 3DSOM PRO software available from Creative Dimension Software, Ltd., Wey Court West, Union Road, Farnham, Surrey GU9 7PT, United Kingdom. This software then uses these photographs to create
3D mesh 159. However, those skilled in the art would recognize that many different software applications may perform similar functions. 3D mesh 159 is then imported into 3DS MAX, wherein the base avatar 158 is morphed to the dimensions and shape of 3D mesh 159. - With reference to
FIG. 14 , a flow diagram illustrates steps performed for scanning consumer body 22 or fit model body 151 using the stereophotogrammetry method of body scanning, as well as the steps for converting the output of this body scanning method into a 3D mesh. - In
step 400, the camera 800 is assembled. Any standard charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) camera 800 can be used. In one embodiment, a 2-megapixel CMOS chip is used in order to maximize resolution while minimizing cost, such as that provided in the QUICKCAM 600 available from Logitech, Inc., 6505 Kaiser Dr., Fremont, Calif. 94555 USA. However, any commercially available CCD or CMOS digital camera, webcam, professional camera, industrial camera, or security camera could be used. The aforementioned QUICKCAM 600 has a 2-megapixel CMOS chip providing 30 frames/second over a universal serial bus (USB) 2.0 connection. The camera 800 may be disassembled to retrieve only the circuit board with the CMOS chip attached and USB still connected. However, any chip of any megapixel size, with any frame rate and other connection types (e.g., Firewire), could also be used. Moreover, additional cameras could be added, a slightly rotating pedestal could be used, and/or mirrors could be used in place of some cameras. However, the method described herein was selected due to accuracy and cost-effectiveness. - In
step 402, a wide-angle lens may be attached to a spacer, which is attached to a camera enclosure that encloses the circuit board to which the CMOS chip is attached. A wide field-of-view lens may be used in this embodiment so that the camera 800 can be positioned as close to the consumer body 22 or fit model body 151 as possible while keeping the subject within the field of view. Any distortion due to the lens may be corrected for in 3D SOM PRO software using its lens calibration tools. In one embodiment, a 2.9-8.2 mm lens, provided by Computar, Inc., 55 Mall Drive, Commack, N.Y. 11725, may be used. - In
step 404, a plastic project enclosure (for example, 3×2×1 inches), provided by RadioShack, Inc, may be used to house thecamera 800. A 3-5 mm hole may then be cut open to make the CMOS chip visible. A 5 mm spacer with threads may be attached over the hole and the lens is screwed into the spacer. - Steps 400-404 may be repeated for each camera to be used.
- In
step 406, stereophotogrammetry body scan booth 152 is assembled. Standard zero structures 910 may be used to assemble the structure, for example, a 7 ft×7 ft×7 ft stereophotogrammetry body scan booth 152. A matte 920 with a specific pattern, which may be provided by 3D SOM, Inc., may be placed in the center of the floor 915. This is where the consumer body 22 or fit model body 151 stands. Cameras 800 and lights may be fixed to crossbeams 912 that attach to the four pillars of the structure 910 along the perimeter. Electrical pipe may be built around the structure, on the inside and outside of the zero pillars, at the very top of the body scanning booth 152. Fabric may be hooked to the pipes to create drapes that enclose the structure from outside light, and to provide a fixed color background behind the subject from all angles. Pre-fabricated structures could be used in a similar manner, with modifications made depending on the type of structure. - Referring again to
FIG. 14 , in step 408, the camera array may be created. 20-50 cameras 800 may be positioned along the walls of the stereophotogrammetry body scan booth 152. At least fifteen cameras 800 may be positioned at approximately eye level and distributed equally around the consumer body 22 or fit model body 151. However, any configuration could be used. At least an additional four cameras may be positioned two feet higher than eye level and distributed around consumer body 22 or fit model body 151. The last camera 800 may be positioned in an aerial view above the head of consumer body 22 or fit model body 151. The positioning of all 20-50 cameras can vary depending on the user's choice, and is not limited to this configuration. In one embodiment, the matte and the entire subject may be visible in the field of view in all configurations, so as to take advantage of the features of 3D SOM PRO software. - In
step 410, the cameras 800 are connected in an array. Cameras 800 may be connected to USB-powered hubs in one embodiment. All hubs may be connected to a computer with USB ports. In other embodiments, the cameras may be connected via Bluetooth, Ethernet, wifi, or the like. - In one embodiment, stereophotogrammetry
body scanning software 154, which may interface with or include other software components, may also contain executable instructions to perform one or more of steps 412-418 described below. In step 412, the video streams of consumer body 22 or fit model body 151 are acquired. MATLAB software, which may be one of the software components of stereophotogrammetry body scanning software 154, is available from MathWorks, Inc., 3 Apple Hill Drive, Natick, Mass. 01760-2098, and may be used to read the video streams from the cameras. Specifically, the image acquisition toolbox of MATLAB may be used to start and view all 20 video streams. Those skilled in the art would recognize that a variety of software programs may be used to perform the functions carried out by MATLAB. - In
step 414, the images are acquired from the video streams. The main subject, consumer body 22 or fit model body 151, may be placed in the middle of the stereophotogrammetry body scan booth 152, standing on the matte such that their body is in the field of view of the cameras. The cameras are triggered to acquire images or single frames from each camera 800. In one embodiment, a manual trigger may be used with cameras that do not support hardware triggering. However, hardware triggering can be used to speed up image acquisition and prevent any lag time between cameras. - In
step 416, MATLAB's image processing toolbox may be used to mask images, save them in any format that can be read by 3D SOM PRO, and send them to 3D SOM PRO Software. Software written using MATLAB may be compiled into a standalone executable file to perform this step. - In
step 418, 3D mesh 159 is created using 3D SOM's software. - In one embodiment, the number of
cameras 800 may be arbitrary. By way of example, and not by way of limitation, 20 cameras 800, or more, or fewer, may be used. Further, the position of the cameras 800 may be more or less arbitrary in one embodiment. A position calibration map 820 may be used to help the 3D SOM PRO software determine the position of the cameras 800 in three dimensional space. In one embodiment, the position calibration map 820 may comprise a flat annular component having radially spaced black circles 822 printed thereon. Depending on the position of each camera 800, the black circles 822 are captured by each camera 800 with a different distortion, which 3D SOM PRO, or other software used to calibrate position, is capable of interpreting to indicate the position of each camera 800. In one embodiment, the black circles 822 may preferably be of varying sizes. - Further, any number of various types of
cameras 800 or sensors may be used. In one embodiment, webcams may be used because they are less expensive and may provide relatively higher resolution with CMOS sensors at the same price. However, more expensive digital cameras with CCD sensors and a broader color range may be used. Further, any type of lens may be used with the cameras 800. For example, the lenses may have various focal lengths; the types of lenses may be defined by variations in focal length, diameter, and/or magnification. - In order to calibrate the cameras for such variations in lens types, for example, a
lens calibration map 830 having black circles 832 similar to those on the position calibration map 820 may be used. Each camera 800 may be calibrated for its type of lens by pointing the camera at the lens calibration map 830 at a constant distance and angle, taking pictures at various zooms. The 3D SOM PRO software then takes these calibration images and corrects for the varying cameras 800 and/or lens types. - With the above description of the stereophotogrammetry system 152, those of skill in the art would recognize that the stereophotogrammetry system 152 may comprise: an arbitrary number of two or more cameras 800 for taking independent photographs of a physical object; a position calibration map 820 for providing three dimensional position data for the two or more cameras 800; each camera 800 having a lens, wherein each lens has a type, and two or more of the lenses are capable of being the same type; a lens calibration map 830 for each type of lens, wherein the lens calibration map is capable of correcting for non-linearity within the lens; a first set of instructions capable of execution on a processor 153 to acquire a set of video streams from the two or more cameras 800; a second set of instructions capable of execution on a processor 153 to trigger the two or more cameras 800 substantially simultaneously to produce an image from each camera 800; a third set of instructions capable of execution on a processor 153 to download and save the image from each camera 800; a fourth set of instructions capable of execution on a processor 153 to mask the image from each camera 800 to produce a set of masked images; a fifth set of instructions capable of execution on a processor 153 to process three dimensional positional data from the position calibration map for the set of masked images; and a sixth set of instructions capable of execution on a processor 153 to
process a three dimensional mesh from the set of masked images. The
system 152 may have a variable number of cameras 800. The system 152 may include variable positions of the cameras 800. The position calibration map 820 may be modifiable according to the number and position of the cameras 800. Further, the lens calibration map 830 may be modifiable according to the types of lenses on the cameras 800. The size of the whole stereophotogrammetry system 152 may also be adjustable. The first, second, third and fourth software instructions may also comprise image acquisition and processing software instructions, which may all be embodied in the body scanner software 154. The image acquisition and processing software instructions may comprise MATLAB software instructions in one embodiment. The image acquisition and processing software instructions may comprise LABVIEW software instructions in another embodiment. - In one embodiment, the download of the images from the
cameras 800 may occur using universal serial bus (USB), Firewire or wifi network devices. - The fifth and sixth software instructions may comprise three dimensional modelling software. In one embodiment, the three dimensional modelling software may comprise 3DSOM PRO. In another embodiment, the three dimensional modelling software may comprise compiled object oriented software instructions.
-
Lights 840 may be a part of the system 152, and may be used to create uniform lighting conditions with the least amount of shadows. Reflectors may be used to further achieve ambient light conditions within the booth 152. A uniform background may be used within the walls of the booth to aid in the masking process; those skilled in the art may find, for example, that a green background generally aids in the masking process. - Finally, the size of the stereophotogrammetry
body scan booth 152 may be variable or adjustable, generally having little effect on the operation of the booth 152. This allows the booth 152 to be adjusted for use in different spatial arrangements as space may provide. - With reference to
FIG. 15 , a flow diagram illustrates further steps performed by avatar software application 904. In one embodiment, 3D mesh 159, previously created in stereophotogrammetry system 150, may be sent to the avatar software application 904. The initial step performed by avatar software application 904 is then step 427, importing the 3D mesh 159. - In another embodiment, a prior art
body scanner system 149 may be used in place of stereophotogrammetry system 150, where prior art body scanner 149 may refer to all currently existing forms of body scanners described in the prior art, or alternatively to other body scanners contemplated by future technologies. Prior art body scanner system 149 may also provide a 3D mesh as an output. In this case, the initial step performed by avatar software application 904 is similarly step 427, importing the 3D mesh 159. - However, in another embodiment, output data from prior-
art body scanner 149 may only provide raw scan data as input in step 425, and not a 3D mesh. Thus, in step 426, 3D mesh 159 may be created from the prior-art scanner system's 149 raw scan data using MESHLAB software, a widely available open source application available from http://meshlab.sourceforge.net/, 3DS MAX, and/or any 3D software able to perform such a function with raw scan data. - In
step 427, 3D mesh 159 is imported into 3DS MAX software. - In
step 428, scaling and alignment of 3D mesh 159 with base avatar 158 may take place. Within 3DS MAX, the base avatar 158 may be superimposed on top of the 3D mesh 159. The base avatar 158 may then be scaled in size such that its height aligns with the height of the 3D mesh 159. When scaling up and down, the shape and proportion of the base avatar 158 may not change. In other words, the system grows or shrinks base avatar 158 so that 3D mesh 159 and base avatar 158 occupy a similar volume. Furthermore, the limbs of base avatar 158 may also be adjusted to align with the limbs of 3D mesh 159. - In
step 430, the head, hands, and feet are detached from base avatar 158 in order to complete the next step. - In
step 432, the torso of base avatar 158 is conformed to the torso of 3D mesh 159. MAXSCRIPT code, written in the scripting language provided by 3DS MAX, may be run within 3DS MAX. This script moves vertices of the torso of base avatar 158 to the torso of 3D mesh 159, such that their shapes and proportions are the same and they occupy the same volume. In running this script, the skinning may be lost, and can later be reproduced. - In
step 434, the hands, feet and head of base avatar 158 are re-attached to the newly conformed mesh. - In
step 436, the conformed mesh is re-skinned using saved data stored in avatar data storage 170. - In
step 438, animation is applied. This step may store a standard point-cache file containing the animation components of consumer avatar 171 or fit model avatar 173. - If the subject was
consumer body 22, then the conformed mesh may now be referred to as consumer avatar 171. Otherwise, if the subject was fit model body 151, then the conformed mesh may now be referred to as fit model avatar 173. - In
step 440, consumer avatar 171 or fit model avatar 173 is exported from 3DS MAX and stored in avatar data storage 170. - In one embodiment,
consumer avatar 171 or fit model avatar 173 may be derived directly from body measurements 176 instead of 3D mesh 159, where body measurements and other shape data 176 may have been extracted from raw scan data 172, or from user data 177 (e.g., demographics) using avatar software application 904. Further quantitative information may include data originating from statistical analysis of historical body scans (sizing survey data 178) and/or avatar statistical data 179. If the consumer provides these measurements, they may do so by entering them on computing device 24, which then stores them in user data 177. The computing device 24 may comprise any type of processing device, such as a personal computer (desktop or laptop), smartphone, iPHONE®, iPAD®, tablet PC, mobile computing device, kiosk, gaming device, media center (at home or elsewhere), or the like. For example, but not by way of limitation, the consumer may enter body measurements and/or select other avatar features using an html form or a client-side software application 28 running on computing device 24. The user's selections and entered data are then sent to ASP 100's avatar software application 904 running in avatar processing system 160. - With reference to
FIG. 16 , a flow chart illustrates the steps for creating an avatar from any combination of the data entities described above. In step 500, the consumer body measurements and other shape data 176 are gathered. In one embodiment, by way of example, and not by way of limitation, there can be approximately between 5 and 50 points of measurement corresponding to consumer body 22. - Since the data type is body measurements and other shape data,
base avatar 158 may be morphed to create the shape of consumer avatar 171 or fit model avatar 173. - One skilled in the art would recognize that in order to morph a mesh, one may require morph targets. Therefore,
base avatars 158 may have morph targets, allowing them to be morphed. For extremely large and small human bodies,additional base avatars 158 may created with additional morph targets. A morph (sometimes called a control) is applied to thebase avatar 158 that links to the morph target, and can be used to interpolate between the two objects, changing the size/shape of the base object to match the morph target's geometry either partially or completely. In other words, by adjusting the morph target, one can approximate the shape of a new avatar. When several morphs are adjusted such that the new avatar similarly match theconsumer body 22's orfit model body 151's body shape and or measurements, then one has arrived atconsumer avatar 171 orfit model avatar 173 respectively. - Each morph target may correspond to one or many points of measure. Points of measure are control points for a specific body measurement from body measurements and other shape data 176 (e.g. the circumferential waist measurement may have a control point). Therefore, when the point of measure needs to be changed to a specific body measurement value (given by the user, extracted from raw scan data, or derived by some other means), the morph target is adjusted.
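The morph control described above amounts to a per-vertex linear interpolation between the base mesh and a morph target. As a minimal sketch (hypothetical code, not the patent's 3DS MAX implementation; meshes are reduced to parallel vertex lists):

```python
def apply_morph(base_vertices, target_vertices, weight):
    """Blend each base vertex toward its morph-target counterpart.
    A weight of 0.0 leaves the base mesh unchanged; 1.0 matches the
    morph target's geometry completely; intermediate values interpolate."""
    return [[b + weight * (t - b) for b, t in zip(bv, tv)]
            for bv, tv in zip(base_vertices, target_vertices)]
```

Adjusting several such weights until the mesh's points of measure agree with the consumer's body measurements corresponds to the morph adjustment described in this section.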
- With reference to
FIG. 17 , a graphic slide show illustrates an exemplary flow of the morphing process described above. For example, in slide 2000, the base avatar 158 is shown in its original shape. As shown in slide 2002, the morph targets are adjusted closer to the consumer measurement data. Finally, in slide 2004, the morph targets are reached, and the consumer avatar 171 is therefore created. - In
step 502, base avatar 158 may be morphed as described above. - Another embodiment includes supplementing
body measurement 176, user data 177, sizing survey data 178, or avatar statistical data 179 with digital images 174. Digital images 174 from a single camera may further enhance the process of creating consumer avatar 171 or fit model avatar 173. Multiple digital photographs may be used as references for sculpting the mesh of base avatar 158 within avatar software application 904, wherein sculpting refers to the process of adjusting the morph targets to match a visual contour of consumer body 22 or fit model body 151 given in a digital photograph. - With reference to
FIG. 18 , a flow diagram illustrates the steps for creating an avatar according to one embodiment. In step 510, digital photographs can be taken of a consumer body via a webcam or any digital camera. To create an avatar from multiple photographs, at least three photographs may be used (front, back and side), along with a height measurement. The digital photographs may be sent to the avatar software application 904. In step 512, the digital photographs can be masked such that everything besides the consumer body is removed from the image. This can be accomplished using MATLAB software; PHOTOSHOP by Adobe Systems Incorporated, 345 Park Avenue, San Jose, Calif. 95110-2704; or any image editing software. - In
step 514, the base avatar mesh is sculpted. The digital photographs may be used as references to match the shape of the avatar to the real person. First, the photographs may be mapped to planes in a 3D scene in 3DS MAX and placed around the base avatar's mesh. This makes it possible to use the photographs as references for the shape of the body that is being reproduced digitally. For example, if the photograph is front-facing, then the base avatar's mesh is also front-facing in the scene. Second, the base avatar's morph targets are adjusted to get the shape close to where it should be to match the silhouette of the reference image. Then, vertices in the base avatar's mesh are adjusted using soft selection methods to correct the avatar to match the references and the measurements. When using photographs as references, photographs of the front, side and back of the body are adjusted digitally to correct errors in the photography as much as possible. - In yet another embodiment, the above methods described with respect to creating a
consumer avatar 171 may be mixed, matched, and/or combined. For example, body measurements 176 can be further enhanced by adding images from a single camera of the body and face of consumer body 22 or fit model body 151. - With reference to
FIG. 19 , a flow diagram illustrates a method for modelling the face of consumer body 22 or fit model body 151. Whichever method described above is used to create consumer avatar 171 or fit model avatar 173, the face of consumer body 22 or fit model body 151 can be modelled using digital photographs from a webcam or digital camera. In step 550, three close-up images of the front profile, left profile, and right profile of the face of consumer body 22 or fit model body 151 may be taken and sent to the avatar software application 904. In step 552, FACEGEN software, provided by Singular Inversions, 2191 Yonge Street, Suite 3412, Toronto, ON, M4S 3H8, Canada, can be used to create a 3D mesh of the head. In step 554, the 3D mesh of the head can then be added to consumer avatar 171 or fit model avatar 173. - The next process may include draping the 3D
virtual garment 183 on a consumer avatar 171 in an automated process on the web or computing device 24, resulting in consumer drape 1102. The process begins when the consumer chooses to virtually try on 3D virtual garment 183. The consumer can request to virtually try on 3D virtual garment 183 by way of a graphical user interface (GUI) on computing device 24, or by sending a request over the internet through a website. - In one embodiment, the consumer may send a request on the internet to virtually try on a garment by clicking
hyperlink 81 , which may reside in the retailer's online store 80, a third-party online store, or an online store running ASP 100. Hyperlink 81 may be positioned next to a display of a 3D virtual garment 183, or a digital representation of production sample garment 59 available for virtual fitting. When a user presses hyperlink 81 using computing device 24, a sequence of events is started. With reference to FIG. 20 , a flow chart describes the events that occur when a user decides to try on a virtual garment. In step 601, in this embodiment, the user may select hyperlink 81 or press the button next to 3D virtual garment 183 or a digital representation of production sample garment 59 on a website. The button or hyperlink 81 provides access to application service provider (ASP) 100 in step 602. The ASP 100 may communicate directly with retailer online store 80 or computing device 24 and may run 3D draping software application 900. With each request, data that signifies the user is included. In the ASP model, if the user is not known, then the user is prompted to sign in or create a user profile with the ASP 100. - In another embodiment, referring to step 600, a user may run 3D
draping software application 900 locally on computing device 24, enabling the user to virtually try on garments. This embodiment may require the user to sign in and exchange data with ASP 100 or the retailer system. 3D draping software application 900 may run on computer device 24, or may run online in ASP 100 as an online service for retailers or consumers over a wide area network through a network connection. 3D virtual try-on processing system 1200 may exist at the retailer or may be hosted by a third party web server. In another embodiment, 3D draping software application 900 may run on kiosk 130. The user may click on a link or a button with a mouse, or interact with a touch screen on the display of computer device 131. The user may see the resultant output of the 3D virtual try-on process in 3D viewer application 132. - In
step 604, it is determined whether the appropriate size for the consumer has already been determined. If so, processing moves to step 614. Otherwise, processing moves to step 608 to conduct size prediction algorithm 908. - In
step 608, the consumer's body measurements and other shape data 176 are queried from avatar processing system 160 and compared against 3D virtual garment measurements 184 of 3D virtual garment 183 at corresponding points of measure. The root mean square (rms) of the deviations between these two sets of measurements (body measurements 176 vs. 3D virtual garment measurements 184) is calculated for each size available for production sample garment 59. Ease added to digital pattern 180 may be added to the shape of the avatar to better assist in attaining a solution. - In
step 610, it is determined whether the size that results in the lowest rms is sufficient for an initial guess. Those skilled in the art of statistical analysis may use chi-squared or other statistical tests to assess the strength of the initial guess, which may depend on how accurately the consumer avatar 161 duplicates the size, shape and proportion of consumer body 22. Moreover, the user may determine if the initial guess is sufficient. If it is determined that the size is sufficient to serve as the initial guess for draping, then processing moves to step 614, wherein the initial guess of the 3D virtual garment 183 is queued for draping on the consumer avatar 161. Otherwise, processing moves to step 612, wherein multiple sizes of 3D virtual garment 183 are queued for draping on the consumer avatar 161. - In both
steps 612 and 614, the simulation request is placed in queue system 903, which is capable of maintaining lists of multiple simulation requests from multiple users. - It is also possible that the user may want to virtually try on one garment with one or more other garments that they have previously tried on. If the user has selected to try on multiple garments,
step 618, then processing moves to step 620, where the system retrieves the consumer drape 1102 that corresponds to the garment that the user wishes already displayed on their avatar before draping additional clothing. - In
step 622, associated files for the simulation that are queued are then retrieved from the relevant data storages. - In
step 624, node polling system 912 is initiated. When the simulation request is read and all file locations have been verified, in step 626, the software running the queue system 903 checks the node polling system 912 to find an available GPU 1002. In one embodiment, GPU 1002 may reside in a GPU cloud computing center 1000. - In
step 628, the polling system 912 is updated to reflect that the selected GPU 1002 is in use for the simulation request and not available for other simulations. - In
the next step, 3D draping software application 900 then continues by processing the simulation on the selected GPU 1002. - The 3D
draping software application 900 may be EFIT with slight modifications. For example, but not by way of limitation, 3D draping software application 900 may run EFIT without a GUI and without user action. In other words, in one embodiment, 3D draping software application 900 is simply EFIT software that has been modified to run automatically by accepting simulation requests from the queue, loading the appropriate files, processing the simulation by draping the garment on one or more CPUs or GPUs, and then exporting the required output files. - Processing involves draping 3D
virtual garment 183 on consumer avatar 161. The existing fit model drape 186 on fit model avatar 173 may be loaded onto consumer avatar 161. Then, the drape process may be continued, readjusting to account for the differences between the two avatars. The resultant output is consumer drape 1102. Processing of cloth simulations in a 3D environment may be hardware-intensive. As known to those skilled in the art, GPUs 1002 are preferred for simulation of 3D graphics. However, when GPUs 1002 are not available, more traditional CPUs may be used in their place. In one embodiment, GPUs 1002 or CPUs can be run in parallel to increase simulation processing speed through multi-threading, so long as the selected processor supports it. - Moreover, processing may include simulating for animation. In such a case, an animation file is loaded. The animation file may be of consumer avatar 161 walking, running, dancing, sitting, or performing any human motion. Draping is performed on each frame of animation of consumer avatar 161 and then stored in
consumer drape 1102. - With reference to
FIG. 21, a diagram shows an example of what the above simulation and animation may look like on computing device (24 in FIG. 1) in the context of a virtual fitting room according to one embodiment. In this embodiment, browser 26 is used as the interface. - Returning to
FIG. 20, in step 634, data resulting from the previous steps of FIG. 19 is exported. In one embodiment, the following data files may be exported and added to avatar data storage 170 and/or 3D virtual try-on data storage 1100 for later retrieval, by way of example, and not by way of limitation: consumer drape file 1102; 3D viewer data 1112; fit data 1104; and rendered media 1108. - In
step 636, the node polling system 912 is updated to reflect that the selected GPU 1002 is now available. - In
step 638, a fit analysis algorithm 906 may be executed to determine qualitative and quantitative data with respect to the outcome of the simulation (the 3D virtual try-on process). A fit analysis object may be created to store this qualitative and quantitative data. The output of fit analysis algorithm 906 may also be fit data 1104 and/or rendered media 1108. Fit analysis may include deriving qualitative and quantitative data from a consumer drape 1102 for multiple sizes of a specific garment, or just one single size. -
Fit analysis algorithm 906 may perform a stretch test to determine how much the virtual fabric is stretching in consumer drape 1102. Positive stretch values may indicate tighter fit areas; zero or a small stretch value may indicate areas of good fit, or simply no stretch; negative stretch values may indicate areas of compression. In one embodiment, stretch values may be used to determine how well or how poorly a garment fits an avatar. This data can then additionally be stored as fit data 1104. - Stretch can be calculated in many ways. For example, but not by way of limitation, stretch may be calculated by measuring the percent difference in a specific measurement before and after the drape. In other words, an initial garment measurement might yield one length. After draping the garment on an avatar, the draped garment measurement at the same location might have a length that has increased or decreased. In one embodiment, the percent difference in length for that specific measurement may be defined as the stretch value. In another embodiment, the stretch value may be calculated for many garment measurements, and the stretch value may refer to the total stretch of all garment measurements, or the average stretch value of all garment measurements.
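The percent-difference stretch calculation described above can be sketched as follows. This is an illustrative sketch only; the measurement names and lengths are hypothetical values, not data from the specification:

```python
def stretch_value(initial_length, draped_length):
    """Percent difference between a garment measurement before and after
    draping: positive = stretched (tighter fit), negative = compressed."""
    return 100.0 * (draped_length - initial_length) / initial_length

def garment_stretch(initial, draped):
    """Per-measurement stretch values, plus the total and average variants
    mentioned above. `initial` and `draped` map measurement names to lengths."""
    per_point = {name: stretch_value(initial[name], draped[name])
                 for name in initial}
    total = sum(per_point.values())
    average = total / len(per_point)
    return per_point, total, average

# Hypothetical chest/waist/hip measurements (cm) before and after draping
per_point, total, average = garment_stretch(
    {"chest": 100.0, "waist": 90.0, "hip": 104.0},
    {"chest": 105.0, "waist": 90.0, "hip": 102.96},
)
# chest stretched (tighter area), waist neutral, hip slightly compressed
```

With per-measurement stretch in hand, the tight (positive) and compressed (negative) areas described above fall out of the sign of each value.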
- Quantitative data may also include calculating the change in stretch in a fashion similar to that described above, but with the initial value set to the stretch value of the base size and the final value being the stretch value of the selected size (if other than the base size). Furthermore, quantitative data may also include calculating the stretch value for specific points of measure, rather than for the entire garment, and then comparing those values with the initial 3D virtual garment measurements from
fit model drape 186. Moreover, quantitative data may also include calculating the total volume of space between the garment and the body and assessing how that total volume may increase or decrease from size to size. All data may be used together, or in pieces in a decision engine to establish a prediction of size. The decision engine may consider the total volume between the garment and the body, from size to size, versus the total stretch value, from size to size, and weight the two data types to arrive at the best fit of the garment to the body. It is well known to those skilled in the art that common procedures are available to determine how a garment is fitting using specific points of measure. - With reference to
FIG. 22, an example web page produced by the system illustrates how stretch values may be visually displayed using a color tension map. These color tension maps can be viewed in any image format, on the web, or in any standard image viewing software. The color maps may also be viewable using 3D Viewer Application 82. The color tension map displays high stretch values in red, low stretch values in green, and negative stretch values in blue. Qualitative data may include visual images of consumer drape 1102, such as a visual representation or image of the consumer drape using a color tension map to show the parts of the garment that are fitting tight, loose, or well. The color tension maps may be configured to show stretch values in certain directions with respect to the grain line of the fabric. For instance, a color tension map which displays stretch values along the warp direction may be very different from a color tension map which displays stretch values along the weft or bias directions. Those skilled in the art may recognize different ways to present fit analysis data, including, by way of example, and not by way of limitation, a color map showing shear, pressure on a body, pressure from air, drag force, tension, or compression, or a gray scale map showing any of the same. - With reference to
FIG. 23, another web page produced by the system illustrates how another form of visual representation of consumer drape 1102 may show the 3D virtual garment as partially transparent. This technique is referred to as see-through mode: the garment is rendered partially transparent so that the user can see through it to the avatar, aiding the consumer in assessing how much space there is between the body and the garment. The opacity or transparency of the garment may also be adjusted. - Yet another form of visual representation of the consumer drape can be replacing the existing material texture of 3D
virtual garment 183 with a one-inch-by-one-inch grid pattern, applied as a material texture, which reveals the slope or curvature of the garment along the body. Fit analysis algorithm 906 may perform many other types of calculations. For example, but not by way of limitation, fit analysis algorithm 906 may calculate the total volume of space, using methods in calculus, between 3D virtual garment 183 and consumer avatar 161 for all sizes of consumer drape 1102. This volume may aid in interpreting the correct size of the garment. Moreover, this calculation may aid in interpreting the fit of a garment. - The data gathered from the fit analysis algorithm, whether it be quantitative or qualitative or both, stored as
fit data 1104, becomes extremely useful information to retailer system 50 and consumer system 50. More about this fit data will be discussed later. - Referring back to
FIG. 20, in step 640, the output data may be sent to the consumer's computing device 24 by way of either a browser 26 or software application 28. - In
this step, 3D viewer data 1112 and fit data 1104 are displayed in 3D viewer application 82. 3D viewer application 82 may be embedded in a webpage viewed on browser 26 or may be an application on consumer computing device 24. In another embodiment, 3D viewer application 82 may run in ASP 100 and may be viewable in browser 26. - In one embodiment,
3D viewing application 82 may be built using the Java and Java 3D libraries, each available from Oracle/Sun, 500 Oracle Parkway, Redwood Shores, Calif. 94065, with built-in functionality to rotate, pan, zoom, and animate virtual garment 183 on consumer avatar 171. The user may also view the drape of one size larger or smaller than the estimated size. The user can also select to view the current virtual garment 183 with a color tension map, in x-ray mode, play back animation of the drape, or view the garment with the avatar hidden from view. Moreover, the user can render an image to save in common image formats. - Discussed above was an embodiment of 3D viewer application 82 created using
3D viewer application Java 3D. However, it is important to note that3D viewer application 82 may be an interactive renderer created using c++, python, or any programming language capable of creating 3D web applications. - In one embodiment, in
step 644, the user can rate and/or review the fit of the garment by giving a thumbs-up or thumbs-down. In another embodiment, the user can rate and/or review the garment on a numeric scale. In yet another embodiment, the user can rate the garment as "fits well," "too tight," or "too loose." Other rating systems known to those skilled in the art can be used. All such reviews described above can be stored in 3D virtual try-on data storage 1100 as user reviews 1106. - In
step 646, the user is given the option of saving consumer drape 1102 of 3D virtual garment 183 for future viewing or mixing with other garments for viewing (e.g., shirt and pants). If saved, virtual garment 183 appears in the user's virtual closet 290, where the collection of consumer drapes 1102 is available for the user to view again. The user's subsequent action(s) are tracked within the application and/or webpage to determine whether they purchase the garment. If the user chooses to purchase the garment, an email notification may automatically be generated to the user notifying them that the virtual garment 183 has been saved in their user profile and can be viewed at any time by logging into the ASP 100's web portal using computing device 24. -
Virtual closet 290 may be accessed when the user is logged into ASP 100. Virtual closet 290 may store consumer drapes 1102 of 3D virtual garments 183 that have been purchased and recently viewed. In one embodiment, virtual closet 290 may display these garments 183 as visual images of drapes that do not include the model. - Items in the closet may be viewed in
3D viewing application 30 and can be viewed with other 3D virtual garments 183, for example, from the same retailer or a different retailer, or mixed and matched in other ways. - In some embodiments, the
virtual closet 290 may also provide for sharing between users. With social media integration, a user may share the results of their fit with contacts on Facebook, MySpace, Yelp, and other social media sites, as well as on personal websites, or for viewing in applications on any computing device. The user may select a save-image function that allows the user to take a picture or snapshot of the consumer drape 1102 of 3D virtual garment 183 on the avatar and then upload it to their profile on a social media site. - With the data collection (
consumer drape 1102, fit data 1104, user reviews 1106, rendered media 1108, and consumer avatar 171) that is accomplished by system 10 described herein, such data may be analyzed to discover trends and draw conclusions, which can, for example, provide feedback into the system and provide further granular analysis (step 306 in FIG. 3). For example, fit analyses for consumers may be performed on the collected data. In this regard, there is tremendous value in data analyses of the production garments 59 consumers have purchased and not returned. The production garments 59 are a reflection of the consumers' buying behaviour. Tracking and studying buying behaviour is known to provide valuable information to those skilled in the art. However, in the past, such analyses have been limited to color, size, and fabric information for apparel goods. For the first time, using the presently described system, consumer buying behaviour can now include fit. -
FIG. 24 is a flowchart that describes a process of analyzing the fit data according to one embodiment. In step 700, data collection is performed. When a garment is purchased, a copy of the related consumer drape 1102 of 3D virtual garment 183 is stored in virtual closet 290. Fit data 1104, user reviews 1106, rendered media 1108, and consumer avatar 171 may also be stored as part of the user profile 190 on ASP 100. All of this information can be gathered in step 700 for a single user, or for many users together, as in step 702. - Then, in one embodiment, in
step 704, the data can be mined to find trends in buying behaviour, trends in consumer drapes from one garment to another, and/or trends in body shapes with particular garments or particular retailers. For example, but not by way of limitation, stretch factor calculations for relevant points of measure calculated for the virtual garment 183 could be analyzed across multiple garments for a single user, or for multiple users. - Moreover, in
step 704, trends in stretch factor or other fit data may be correlated with demographics, retailers, fit models, sizes, and fabric types, revealing valuable information. For example, but not by way of limitation, such analysis may reveal that a consumer fits better with a certain set of brands than with another set of brands. Such information becomes useful in step 706. Moreover, such correlations may be easily recognized by those skilled in the art given the data the present system makes available, since brands often have fit models with distinctively different body shapes. - In
step 706, the trends discovered in step 704 may be used to better predict the outcome of fits with virtual garments in system 10 and can be used as size prediction algorithm 908. Furthermore, fit may be a very subjective personal choice for consumers. For instance, two people of very similar body types may have dramatically different viewpoints on fit, where one person may prefer a tighter fit, or a size larger than the other. Therefore, by studying variables that measure stretch across multiple garments for groups of similar bodies, and discovering trends, those trends may now be applied to predict other garments that may fit a user. - In
step 708, a product recommendation engine is built to interpret the garments predicted in step 706 and then suggest those garments to the user in ASP 100. - Finally, data collected can be used directly to make custom patterns, and therefore custom garments, for the consumer. The data may be used to develop block patterns, or to customize the patterns of garments available from the retailer.
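The size-prediction idea above, a decision engine weighting total stretch against garment-to-body volume across sizes, can be pictured with the following sketch. The weighting scheme, field names, and numbers here are hypothetical illustrations, not values or an algorithm taken from the specification:

```python
def predict_size(candidates, stretch_weight=0.6, volume_weight=0.4):
    """Score each candidate size by penalizing deviation from zero total
    stretch (neither tight nor loose) and excess garment-to-body volume,
    then return the lowest-penalty size."""
    def penalty(c):
        return (stretch_weight * abs(c["total_stretch"])
                + volume_weight * c["garment_body_volume"])
    return min(candidates, key=penalty)["size"]

# Hypothetical per-size fit data for one garment draped on one avatar
candidates = [
    {"size": "S", "total_stretch": 9.0,  "garment_body_volume": 1.5},  # tight
    {"size": "M", "total_stretch": 1.0,  "garment_body_volume": 3.0},
    {"size": "L", "total_stretch": -2.0, "garment_body_volume": 6.5},  # loose
]
best = predict_size(candidates)  # size "M" has the lowest combined penalty
```

Adjusting the two weights shifts the engine between stretch-driven and volume-driven predictions, which is one way the per-user fit preferences discussed above could be accommodated.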
Custom 3D garments and patterns may be sent to the retailer based on the analysis. - Conversely,
consumer drape 1102, fit data 1104, user reviews 1106, and rendered media 1108 may all contain extremely valuable information, not only for aiding consumers in buying clothing online, but also for apparel manufacturers and retailers. Retailers can use such information to better understand their target market and make necessary adjustments to product development, distribution, production, merchandising, and other key decisions in the supply chain and sales processes referred to above. Currently, retailers have no immediately perceivable method of determining how a garment truly fits each of their customers. Oftentimes, retailers depend on statistical studies to determine the body shape(s) of their target market. Moreover, they rely on third-party research organizations that study body shapes in certain populations. However, the shapes of human bodies are difficult to standardize and are constantly changing. In consequence, most retailers fall short in reaching the broad target market they were designing for. - With reference to
FIG. 25, a flow diagram illustrates steps to relate fit data and how retailers may interpret such relations. In step 740, data collection is performed. For example, the following data may be collected after each fit is performed on a consumer: (1) the number of fits a consumer has in a set period of time; (2) the percentage of fits that result in a sale; (3) the number of times a consumer tries on a specific garment; (4) the average stretch factor for each fit; and (5) each consumer's fit history and measurement chart. In step 742, a data analysis may be performed on this data. This data can be used to determine which garments are fitting which body types. Correlations between body measurements, or sets of body measurements, and purchases can be determined. Such correlations can be used to predict the probability that a certain consumer, based on their body shape, will or will not buy a specific garment. Additionally, a point-to-fit analysis may give retailers access to measure, in real time, the fitting process with each of its site's visitors. Such information can be used to determine how garments are performing in the virtual fitting room. Furthermore, those results can help retailers determine whether changes to the construction of the garment may or may not increase sales. In another embodiment, retailers may access consumer drape 1102 and derive their own fit data from the actual draped virtual fabric. Furthermore, retailers may compare these drapes with fit model drape 186. - In
step 744, a web interface may be made available to retailers. By logging on, retailers may have access to daily, weekly, monthly, quarterly, or yearly statistics on user data, which can be manipulated and searched. - Range cameras may include, for example, the
Microsoft 3D Kinect device. With reference to FIG. 26, a diagram illustrates a prior art range camera device 2600 that could be used in one embodiment. A range camera device 2600 of this type may include, for example, a small shoebox-sized attachment used for motion capture for video game consoles, or the like. This type of range camera device 2600 may include an infrared (IR) light emitter 2602 that emits structured infrared light, a red-green-blue (RGB) camera 2606, and a CMOS IR sensor 2604 for reading reflected IR light. The RGB camera 2606 is used to take visual images, whereas the IR emitter 2602 and CMOS sensor 2604 are used in conjunction to measure the depth of objects within the field of view. - In one embodiment, the system described herein may use the depth images attained by the
CMOS sensor 2604 to create a 3D model of a subject or object within the field of view. Further, a process of capturing depth images of a human subject and creating a 3D model or avatar of the subject may be performed in one embodiment. - With reference to
FIG. 27, a flow diagram illustrates steps that may be performed in one embodiment for scanning consumer body 22 using range camera device 2600. In step 2700, a set of computer instructions, written and available from OpenNI™, may be used to capture one of several depth images by sampling consumer body 22 in an interval of time and in a fixed position in space with respect to the range camera device 2600. OpenNI™ is middleware that is part of the free software development kit (SDK) provided by PrimeSense, located at 28 Habarzel St., 4th floor, Tel-Aviv, Israel, 69710. - Each depth image may contain the depth or distance to the body, as well as the xy position of each part of the body, also called 3D position data.
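The capture-and-merge flow of FIG. 27 can be sketched as a standard pinhole back-projection of each depth image into 3D points, followed by a yaw rotation per capture angle so the sets share one frame. This is a simplified stand-in for the OpenNI™ library routines; the camera intrinsics and depth values below are hypothetical:

```python
import math

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters, list of rows) into camera-space
    3D points with a pinhole model; zero depth means no reading."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

def rotate_yaw(points, degrees):
    """Rotate a point set about the vertical axis so that captures taken
    after the subject turns can be merged into one common frame."""
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

# Four hypothetical single-pixel captures at 90-degree rotation stops
captures = {0: [[1.00]], 90: [[1.20]], 180: [[1.10]], 270: [[1.30]]}
cloud = []
for angle, depth in captures.items():
    cloud += rotate_yaw(depth_to_points(depth, fx=525.0, fy=525.0,
                                        cx=0.0, cy=0.0), angle)
```

A production pipeline would refine this alignment (for example, with an iterative-closest-point registration) rather than trusting the nominal rotation angle alone.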
- In
step 2701, a library routine of OpenNI™ may be called to calculate actual 3D points from the depth images captured in step 2700. In step 2702, consumer body 22 may next be rotated, or rotate itself, to a secondary position, by way of example, and not by way of limitation, 90 degrees. - Next, in
step 2704, a second series of one or more images may be captured in a second interval of time. In step 2705, the library routine of OpenNI™ may be called to calculate actual 3D points from the depth images captured in step 2704. - The process is repeated until the subject has rotated 360 degrees, as indicated by
decision diamond 2706. The result is a series of 3D points, one set for each capture of images at a rotation stop point as described above. - In
step 2708, each set of 3D points corresponding to a rotation of consumer body 22 is rotated and translated such that the sets all fit together to form a final set of 3D points representing the entire consumer body 22. This final set of 3D points is stored in step 2710. - Next, in
step 2712, measurements may be extracted. This may be performed using various convex-hull algorithms, for example, the Graham scan algorithm or Andrew's monotone chain convex-hull algorithm. - In
step 2714, a 3D mesh is created from the 3D points. This can be performed by various methods that are commonly used to convert 3D points to a 3D mesh. For example, ball pivoting algorithms, Poisson surface reconstruction, or the like, may be used for this step. - In
step 2716, the mesh may be converted into 3D consumer avatar 171 as described above. For example, the mesh could be rigged, skinned, and have a texture applied so that it could be animated and customized to look like consumer body 22. In step 2722, the consumer 22 could then use this consumer avatar 171 for an online fitting room as described above. As described above, clothing could be modelled as a 3D mesh, as in the case with digital patterns, and then, using the cloth simulation algorithms described above, clothing may be simulated on the avatar in 3D, allowing consumer 22 to view in real time how a garment will look and fit their own body. - In some embodiments, another sensor could be put behind the
consumer 22, or several at different angles. However, to keep hardware costs down and to make the system more practical for in-home use, consumer 22 may alternatively be asked to rotate their body so that it can be captured from multiple angles as described above. - In
step 2714, corrections for changes in posture may be made using a pose tracker library from OpenNI. The OpenNI library contains functions for tracking poses by assigning a skeleton to consumer body 22. For example, if the arm position has changed from the first series of images to the next series of images after the body was rotated, then, using the pose tracker, the new position of the arm can be used to translate the 3D points associated with the arm to the old position of the arm in 3D space, thereby correcting for movement by the user. - Alternatively, the
consumer avatar 171 could also be drawn on a monitor or flat-panel display connected to a computer or gaming system, and then be synced with the consumer's movements, such that the consumer could control its movements. - Using a technique known as augmented reality, one skilled in the art of augmented reality systems would recognize that 3D graphics could be displayed on a live video stream from
RGB camera 2606. Those 3D graphics could be consumer avatar 171. - 3D
virtual garment 183 draped on consumer avatar 171 could also be displayed using augmented reality and dynamically draped using GPU cloth simulation. In this respect, 3D virtual garment 183 may be simulated with animation in real time on consumer avatar 171, no matter what position or posture consumer avatar 171 takes in real time. - Moreover,
consumer avatar 171 could be hidden from view such that it would appear to consumer 22 that the 3D virtual garment 183 were actually on consumer body 22 as they see it in real time on the monitor. - For example,
consumer body 22 may change posture, wherein the arm may change position in 3D space; using the pose-tracking algorithm developed in OpenNI™, consumer avatar 171 may adjust its position to match the new position of consumer body 22. Since consumer avatar 171 is hidden, this will cause 3D virtual garment 183 to re-simulate using the cloth simulation algorithm, resulting in a new drape consistent with consumer body 22's new posture. - The various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Those skilled in the art will readily recognize various modifications and changes that may be made to the claimed invention without following the example embodiments and applications illustrated and described herein, and without departing from the true spirit and scope of the claimed invention, which is set forth in the following claims.
Claims (17)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/853,167 US20200380333A1 (en) | 2010-06-08 | 2020-04-20 | System and method for body scanning and avatar creation |
US16/880,957 US11244223B2 (en) | 2010-06-08 | 2020-05-21 | Online garment design and collaboration system and method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US35239010P | 2010-06-08 | 2010-06-08 | |
US13/008,906 US20110298897A1 (en) | 2010-06-08 | 2011-01-19 | System and method for 3d virtual try-on of apparel on an avatar |
US13/159,401 US10628729B2 (en) | 2010-06-08 | 2011-06-13 | System and method for body scanning and avatar creation |
US16/853,167 US20200380333A1 (en) | 2010-06-08 | 2020-04-20 | System and method for body scanning and avatar creation |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/159,401 Continuation US10628729B2 (en) | 2010-06-08 | 2011-06-13 | System and method for body scanning and avatar creation |
US16/016,351 Continuation-In-Part US20180374137A1 (en) | 2010-06-08 | 2018-06-22 | Online garment design and collaboration system and method |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/016,351 Continuation-In-Part US20180374137A1 (en) | 2010-06-08 | 2018-06-22 | Online garment design and collaboration system and method |
US16/880,957 Continuation-In-Part US11244223B2 (en) | 2010-06-08 | 2020-05-21 | Online garment design and collaboration system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200380333A1 true US20200380333A1 (en) | 2020-12-03 |
Family
ID=45924816
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/159,401 Active 2033-02-11 US10628729B2 (en) | 2010-06-08 | 2011-06-13 | System and method for body scanning and avatar creation |
US15/863,848 Abandoned US20180144237A1 (en) | 2010-06-08 | 2018-01-05 | System and method for body scanning and avatar creation |
US16/853,167 Abandoned US20200380333A1 (en) | 2010-06-08 | 2020-04-20 | System and method for body scanning and avatar creation |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/159,401 Active 2033-02-11 US10628729B2 (en) | 2010-06-08 | 2011-06-13 | System and method for body scanning and avatar creation |
US15/863,848 Abandoned US20180144237A1 (en) | 2010-06-08 | 2018-01-05 | System and method for body scanning and avatar creation |
Country Status (1)
Country | Link |
---|---|
US (3) | US10628729B2 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210343083A1 (en) * | 2020-04-30 | 2021-11-04 | Clothing Tech LLC | Computer implemented methods for generating 3d garment models |
GB2608885A (en) * | 2021-05-05 | 2023-01-18 | Retail Social Ltd | Systems and methods for the display of virtual clothing |
KR20240000761A (en) * | 2022-06-24 | 2024-01-03 | 한양대학교 산학협력단 | Method and system for creating non-fungible token based virtual human |
Families Citing this family (110)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10127480B1 (en) | 2007-03-09 | 2018-11-13 | R. B. III Associates, Inc. | System for automated decoration |
US9646340B2 (en) * | 2010-04-01 | 2017-05-09 | Microsoft Technology Licensing, Llc | Avatar-based virtual dressing room |
US10332176B2 (en) | 2014-08-28 | 2019-06-25 | Ebay Inc. | Methods and systems for virtual fitting rooms or hybrid stores |
JP5116826B2 (en) * | 2010-10-26 | 2013-01-09 | 株式会社マリネックス | Body measuring device |
KR20120051342A (en) * | 2010-11-12 | 2012-05-22 | 한국전자통신연구원 | System and method for recommending sensitive make-up based on user color sense |
US8711175B2 (en) * | 2010-11-24 | 2014-04-29 | Modiface Inc. | Method and system for simulating superimposition of a non-linearly stretchable object upon a base object using representative images |
US20130088490A1 (en) * | 2011-04-04 | 2013-04-11 | Aaron Rasmussen | Method for eyewear fitting, recommendation, and customization using collision detection |
US20130097194A1 (en) * | 2011-08-05 | 2013-04-18 | New York University | Apparatus, method, and computer-accessible medium for displaying visual information |
US20130132898A1 (en) * | 2011-11-17 | 2013-05-23 | Michael F. Cuento | System, Method and Software Product in Eyewear Marketing, Fitting Out and Retailing |
US8970693B1 (en) * | 2011-12-15 | 2015-03-03 | Rawles Llc | Surface modeling with structured light |
US20130170715A1 (en) * | 2012-01-03 | 2013-07-04 | Waymon B. Reed | Garment modeling simulation system and process |
US20130173226A1 (en) * | 2012-01-03 | 2013-07-04 | Waymon B. Reed | Garment modeling simulation system and process |
US8782565B2 (en) * | 2012-01-12 | 2014-07-15 | Cisco Technology, Inc. | System for selecting objects on display |
US9367124B2 (en) * | 2012-03-20 | 2016-06-14 | A9.Com, Inc. | Multi-application content interactions |
US9373025B2 (en) | 2012-03-20 | 2016-06-21 | A9.Com, Inc. | Structured lighting-based content interactions in multiple environments |
US10702773B2 (en) * | 2012-03-30 | 2020-07-07 | Videx, Inc. | Systems and methods for providing an interactive avatar |
GB2502634A (en) * | 2012-05-31 | 2013-12-04 | Khalil Abu Al-Rubb | Methods of determining the fit of items of clothing using a computer |
US9263084B1 (en) | 2012-06-15 | 2016-02-16 | A9.Com, Inc. | Selective sharing of body data |
WO2014028714A2 (en) * | 2012-08-15 | 2014-02-20 | Fashpose, Llc | Garment modeling simulation system and process |
US9665981B2 (en) * | 2013-01-07 | 2017-05-30 | R.B. Iii Associates Inc | System and method for generating 3-D models from 2-D views |
WO2014034188A1 (en) * | 2012-08-30 | 2014-03-06 | 楽天株式会社 | Clothing image-processing device, clothing image display method and program |
CN103020961B (en) * | 2012-11-26 | 2015-08-05 | 谭平 | Based on the method and apparatus of the virtual costume matching of image |
US9607419B2 (en) * | 2012-12-14 | 2017-03-28 | Electronics And Telecommunications Research Institute | Method of fitting virtual item using human body model and system for providing fitting service of virtual item |
US9070220B2 (en) * | 2012-12-19 | 2015-06-30 | Nvidia Corporation | Method of simulating clothing using long range attachments |
CN103886115B (en) * | 2012-12-20 | 2016-12-28 | 上海工程技术大学 | The foundation of a kind of three-dimensional virtual human platform based on different building shape and call method thereof |
US20140192182A1 (en) * | 2013-01-10 | 2014-07-10 | General Electric Company | Method for viewing virtual objects within an appliance |
US10708545B2 (en) | 2018-01-17 | 2020-07-07 | Duelight Llc | System, method, and computer program for transmitting face models based on face data points |
CN104021590A (en) * | 2013-02-28 | 2014-09-03 | 北京三星通信技术研究有限公司 | Virtual try-on system and virtual try-on method |
US20140240354A1 (en) * | 2013-02-28 | 2014-08-28 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
DE102013203667B4 (en) * | 2013-03-04 | 2024-02-22 | Adidas Ag | Cabin for trying out one or more items of clothing |
US9526442B2 (en) | 2013-05-03 | 2016-12-27 | Fit3D, Inc. | System and method to capture and process body measurements |
KR20160021118A (en) * | 2013-05-13 | 2016-02-24 | 엠포트 피티와이 엘티디 | Devices, frameworks and methodologies for enabling user-driven determination of body size and shape information and utilisation of such information across a networked environment |
US12056749B1 (en) | 2013-05-20 | 2024-08-06 | Sheila E. Pesarchick | Customized apparel procurement method |
GB2518589B (en) * | 2013-07-30 | 2019-12-11 | Holition Ltd | Image processing |
USD746842S1 (en) * | 2013-08-01 | 2016-01-05 | Sears Brands, L.L.C. | Display screen or portion thereof with graphical user interface |
USD746843S1 (en) * | 2013-08-01 | 2016-01-05 | Sears Brands, L.L.C. | Display screen or portion thereof with graphical user interface |
USD746299S1 (en) * | 2013-08-01 | 2015-12-29 | Sears Brands, L.L.C. | Display screen or portion thereof with graphical user interface |
USD746298S1 (en) * | 2013-08-01 | 2015-12-29 | Sears Brands, L.L.C. | Display screen or portion thereof with graphical user interface |
USD750657S1 (en) * | 2013-08-01 | 2016-03-01 | Sears Brands, L.L.C. | Display screen or portion thereof with graphical user interface |
US9460342B1 (en) | 2013-08-05 | 2016-10-04 | Google Inc. | Determining body measurements |
US9635895B1 (en) | 2013-10-29 | 2017-05-02 | Vf Imagewear, Inc. | System and method for mapping wearer mobility for clothing design |
DE102013226558A1 (en) | 2013-12-19 | 2015-06-25 | Bayerische Motoren Werke Aktiengesellschaft | Adaptive determination of the setting of vehicle components |
KR101728588B1 (en) * | 2014-03-27 | 2017-05-02 | 한국전자통신연구원 | Smart device and virtual experience providing server provide virtual experience service method using digital clothes |
US20150302597A1 (en) * | 2014-04-18 | 2015-10-22 | Avyn, Inc. | Systems and methods for preparing custom clothing patterns |
US20150304629A1 (en) * | 2014-04-21 | 2015-10-22 | Xiuchuan Zhang | System and method for stereophotogrammetry |
US11195318B2 (en) | 2014-04-23 | 2021-12-07 | University Of Southern California | Rapid avatar capture and simulation using commodity depth sensors |
US9507649B2 (en) * | 2014-05-29 | 2016-11-29 | Apple Inc. | Web browser for spoofing supported features |
US10529009B2 (en) * | 2014-06-25 | 2020-01-07 | Ebay Inc. | Digital avatars in online marketplaces |
CN104091269A (en) * | 2014-06-30 | 2014-10-08 | 京东方科技集团股份有限公司 | Virtual fitting method and virtual fitting system |
US9626808B2 (en) * | 2014-08-01 | 2017-04-18 | Electronic Arts Inc. | Image-based deformation of simulated characters of varied topology |
US10653962B2 (en) | 2014-08-01 | 2020-05-19 | Ebay Inc. | Generating and utilizing digital avatar data for online marketplaces |
JP2016038811A (en) | 2014-08-08 | 2016-03-22 | 株式会社東芝 | Virtual try-on apparatus, virtual try-on method and program |
JP6320237B2 (en) * | 2014-08-08 | 2018-05-09 | 株式会社東芝 | Virtual try-on device, virtual try-on method, and program |
US20160063320A1 (en) * | 2014-08-29 | 2016-03-03 | Susan Liu | Virtual body scanner application for use with portable device |
US10366447B2 (en) | 2014-08-30 | 2019-07-30 | Ebay Inc. | Providing a virtual shopping environment for an item |
US11205305B2 (en) | 2014-09-22 | 2021-12-21 | Samsung Electronics Company, Ltd. | Presentation of three-dimensional video |
US10257494B2 (en) | 2014-09-22 | 2019-04-09 | Samsung Electronics Co., Ltd. | Reconstruction of three-dimensional video |
KR102240302B1 (en) * | 2014-10-21 | 2021-04-14 | 삼성전자주식회사 | Apparatus and Method for virtual fitting thereof |
KR101671649B1 (en) * | 2014-12-22 | 2016-11-01 | 장석준 | Method and System for 3D manipulated image combined physical data and clothing data |
US9307360B1 (en) | 2015-01-09 | 2016-04-05 | NinthDecimal, Inc. | Systems and methods to identify a predefined geographical region in which a mobile device is located |
US20160284135A1 (en) * | 2015-03-25 | 2016-09-29 | Gila Kamhi | Reality Animation Mechanism |
US10388050B2 (en) * | 2015-03-31 | 2019-08-20 | Seiren Co., Ltd. | System and method for creating customized clothing that combines clothing shapes, clothing patterns, dolls and the user's head image |
US12008624B1 (en) * | 2015-05-05 | 2024-06-11 | Centric Software, Inc. | Method for product recommendations using three-dimensional modeling and points of measure |
EP3115971B1 (en) * | 2015-06-02 | 2020-06-03 | Samsung Electronics Co., Ltd. | Method and apparatus for providing three-dimensional data of cloth |
GB2541642A (en) * | 2015-07-28 | 2017-03-01 | Endura Ltd | Method of and system for providing a low drag garment |
CN105069837B (en) * | 2015-07-30 | 2018-05-29 | 武汉变色龙数据科技有限公司 | A kind of clothes trying analogy method and device |
US9905019B2 (en) * | 2015-08-07 | 2018-02-27 | Selfiestyler Inc. | Virtual apparel fitting systems and methods |
US10430867B2 (en) | 2015-08-07 | 2019-10-01 | SelfieStyler, Inc. | Virtual garment carousel |
CN105184584A (en) * | 2015-09-17 | 2015-12-23 | 北京京东方多媒体科技有限公司 | Virtual fitting system and method |
WO2017056090A2 (en) * | 2015-09-28 | 2017-04-06 | Infime Development Ltd. | Method and system utilizing texture mapping |
WO2017106934A1 (en) * | 2015-12-24 | 2017-06-29 | Mport Pty Ltd | Computer implemented frameworks and methodologies configured to enable the generation, processing and management of 3d body scan data, including shared data access protocols and collaborative data utilisation, and identify verification for 3d environments |
EP3423244B1 (en) * | 2016-03-01 | 2019-10-09 | Koninklijke Philips N.V. | System and method for automated hairstyle processing and hair cutting device |
US10262440B2 (en) * | 2016-03-25 | 2019-04-16 | Ebay Inc. | Publication modification using body coordinates |
US10469803B2 (en) | 2016-04-08 | 2019-11-05 | Maxx Media Group, LLC | System and method for producing three-dimensional images from a live video production that appear to project forward of or vertically above an electronic display |
US10230939B2 (en) | 2016-04-08 | 2019-03-12 | Maxx Media Group, LLC | System, method and software for producing live video containing three-dimensional images that appear to project forward of or vertically above a display |
CN106204533A (en) * | 2016-06-28 | 2016-12-07 | 王凌峰 | A kind of three-dimensional scanner |
US10134200B2 (en) * | 2016-07-06 | 2018-11-20 | Physan, Inc. | Method and system for CIG-mode rendering |
US10929913B2 (en) | 2016-07-12 | 2021-02-23 | United Parcel Service Of America, Inc. | Systems, methods, and computer program products for intelligently processing and manipulating a subject image according to consumer data |
CN107993111A (en) * | 2016-10-26 | 2018-05-04 | 深圳市衣锦未来科技有限公司 | A kind of online human body dimension measurement system |
US10699461B2 (en) | 2016-12-20 | 2020-06-30 | Sony Interactive Entertainment LLC | Telepresence of multiple users in interactive virtual space |
US20180315117A1 (en) * | 2017-04-26 | 2018-11-01 | David Lynton Jephcott | On-Line Retail |
US11094136B2 (en) | 2017-04-28 | 2021-08-17 | Linden Research, Inc. | Virtual reality presentation of clothing fitted on avatars |
US11145138B2 (en) * | 2017-04-28 | 2021-10-12 | Linden Research, Inc. | Virtual reality presentation of layers of clothing on avatars |
US10665022B2 (en) * | 2017-06-06 | 2020-05-26 | PerfectFit Systems Pvt. Ltd. | Augmented reality display system for overlaying apparel and fitness information |
WO2019000464A1 (en) * | 2017-06-30 | 2019-01-03 | 广东欧珀移动通信有限公司 | Image display method and device, storage medium, and terminal |
US10657709B2 (en) | 2017-10-23 | 2020-05-19 | Fit3D, Inc. | Generation of body models and measurements |
US10453265B1 (en) * | 2018-04-05 | 2019-10-22 | Page International—FZ—LLC | Method and device for the virtual try-on of garments based on augmented reality with multi-detection |
JP7224112B2 (en) * | 2018-05-21 | 2023-02-17 | Juki株式会社 | sewing system |
HK1253750A2 (en) * | 2018-07-04 | 2019-06-28 | Bun Kwai | Method and apparatus for converting 3d scanned objects to avatars |
HK1253751A2 (en) * | 2018-07-04 | 2019-06-28 | Bun Kwai | Method and apparatus for fitting an accessory object to an avatar |
US10832472B2 (en) | 2018-10-22 | 2020-11-10 | The Hong Kong Polytechnic University | Method and/or system for reconstructing from images a personalized 3D human body model and thereof |
CN109829785B (en) * | 2019-01-21 | 2021-07-09 | 深圳市云之梦科技有限公司 | Virtual fitting method and device, electronic equipment and storage medium |
US11113859B1 (en) * | 2019-07-10 | 2021-09-07 | Facebook Technologies, Llc | System and method for rendering three dimensional face model based on audio stream and image data |
KR102228098B1 (en) * | 2019-08-19 | 2021-03-17 | (주)클로버추얼패션 | Methode and apparatus of measuring measurement of two dimensional pattern corresponding to three dimensional virtual clothing |
US11380037B2 (en) * | 2019-10-30 | 2022-07-05 | Tencent Technology (Shenzhen) Company Limited | Method and apparatus for generating virtual operating object, storage medium, and electronic device |
US11507056B1 (en) | 2020-04-06 | 2022-11-22 | Lockheed Martin Corporation | System and method for three-dimensional (3D) computer-aided manufacturing (CAM) of an ensemble of pilot equipment and garments |
US11601589B2 (en) | 2020-09-15 | 2023-03-07 | Micron Technology, Inc. | Actuating an image sensor |
EP3996047A1 (en) * | 2020-11-04 | 2022-05-11 | Tata Consultancy Services Limited | Method and system for generating face animations from speech signal input |
US11219371B1 (en) | 2020-11-09 | 2022-01-11 | Micron Technology, Inc. | Determining biometric data using an array of infrared illuminators |
US11321916B1 (en) * | 2020-12-30 | 2022-05-03 | Beijing Wodong Tianjun Information Technology Co., Ltd. | System and method for virtual fitting |
US11848095B2 (en) | 2021-04-29 | 2023-12-19 | Lymphatech, Inc. | Identifying body part or body area anatomical landmarks from digital imagery for the fitting of compression garments for a person in need thereof |
US11908083B2 (en) | 2021-08-31 | 2024-02-20 | Snap Inc. | Deforming custom mesh based on body mesh |
US11663792B2 (en) | 2021-09-08 | 2023-05-30 | Snap Inc. | Body fitted accessory with physics simulation |
US11798238B2 (en) | 2021-09-14 | 2023-10-24 | Snap Inc. | Blending body mesh into external mesh |
US11836866B2 (en) | 2021-09-20 | 2023-12-05 | Snap Inc. | Deforming real-world object using an external mesh |
US11836862B2 (en) | 2021-10-11 | 2023-12-05 | Snap Inc. | External mesh with vertex attributes |
US11790614B2 (en) | 2021-10-11 | 2023-10-17 | Snap Inc. | Inferring intent from pose and speech input |
US11763481B2 (en) | 2021-10-20 | 2023-09-19 | Snap Inc. | Mirror-based augmented reality experience |
US12062146B2 (en) | 2022-07-28 | 2024-08-13 | Snap Inc. | Virtual wardrobe AR experience |
CN116797699B (en) * | 2023-08-28 | 2023-12-15 | 武汉博润通文化科技股份有限公司 | Intelligent animation modeling method and system based on three-dimensional technology |
Family Cites Families (84)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3902182A (en) | 1973-10-18 | 1975-08-26 | Lars Evert Bernhard Hillborg | Process and device for determining photographically dimensions of persons and objects |
US4149246A (en) | 1978-06-12 | 1979-04-10 | Goldman Robert N | System for specifying custom garments |
US4885844A (en) | 1988-11-14 | 1989-12-12 | Chun Joong H | Computer aided custom tailoring with disposable measurement clothing |
WO1991015998A1 (en) | 1990-04-16 | 1991-10-31 | Marras William S | Apparatus for monitoring the motion of the lumbar spine |
US5333245A (en) | 1990-09-07 | 1994-07-26 | Modacad, Inc. | Method and apparatus for mapping surface texture |
US5163007A (en) | 1990-11-13 | 1992-11-10 | Halim Slilaty | System for measuring custom garments |
JP2614691B2 (en) | 1992-01-23 | 1997-05-28 | 旭化成工業株式会社 | Method and apparatus for visualizing assembled shape of paper pattern |
JPH0696100A (en) | 1992-09-09 | 1994-04-08 | Mitsubishi Electric Corp | Remote transaction system |
FR2707120B1 (en) | 1993-07-02 | 1995-09-22 | Lectra Systemes Sa | Clothes grading system. |
US5530652A (en) | 1993-08-11 | 1996-06-25 | Levi Strauss & Co. | Automatic garment inspection and measurement system |
US5680528A (en) | 1994-05-24 | 1997-10-21 | Korszun; Henry A. | Digital dressing room |
US5548519A (en) | 1994-08-12 | 1996-08-20 | Custom Clothing Technology Corporation | Custom apparel manufacturing apparatus and method |
US6057909A (en) | 1995-06-22 | 2000-05-02 | 3Dv Systems Ltd. | Optical ranging camera |
US5680314A (en) | 1995-08-25 | 1997-10-21 | Patterson; Douglas R. | Garment sizing system |
US5678555A (en) | 1996-04-08 | 1997-10-21 | O'connell; Peter | Method of locating and marking veins |
IL119082A (en) | 1996-08-16 | 2001-04-30 | Virtue Ltd | Method for creating graphic images |
US5930769A (en) | 1996-10-07 | 1999-07-27 | Rose; Andrea | System and method for fashion shopping |
US5956525A (en) | 1997-08-11 | 1999-09-21 | Minsky; Jacob | Method of measuring body measurements for custom apparel manufacturing |
US6144388A (en) | 1998-03-06 | 2000-11-07 | Bornstein; Raanan | Process for displaying articles of clothing on an image of a person |
US6167159A (en) | 1998-04-30 | 2000-12-26 | Virtue Ltd. | Triangle mesh compression |
US6307568B1 (en) | 1998-10-28 | 2001-10-23 | Imaginarix Ltd. | Virtual dressing over the internet |
US6456287B1 (en) | 1999-02-03 | 2002-09-24 | Isurftv | Method and apparatus for 3D model creation based on 2D images |
US6404426B1 (en) | 1999-06-11 | 2002-06-11 | Zenimax Media, Inc. | Method and system for a computer-rendered three-dimensional mannequin |
AU5889500A (en) | 1999-06-25 | 2001-01-31 | Tara Chand Singhal | System and method for simulating how an article of wear will appear and feel on an individual |
US6438853B1 (en) | 1999-08-26 | 2002-08-27 | The United States Of America As Represented By The Secretary Of The Army | Set of human torso manikins for use in fabrication and evaluation of body wear for a group of human beings |
FR2799556B1 (en) | 1999-10-08 | 2002-01-25 | Lectra Systemes Sa | METHOD AND DEVICE FOR SIMULATING AND REPRESENTING THE DRESSING OF A MANNEQUIN |
US7663648B1 (en) | 1999-11-12 | 2010-02-16 | My Virtual Model Inc. | System and method for displaying selected garments on a computer-simulated mannequin |
US6488202B1 (en) | 2000-03-31 | 2002-12-03 | The Procter & Gamble Company | Device and method for identifying a size of an absorbent article which is fit-appropriate for a potential wearer |
US7149665B2 (en) | 2000-04-03 | 2006-12-12 | Browzwear International Ltd | System and method for simulation of virtual wear articles on virtual models |
US6968075B1 (en) | 2000-05-09 | 2005-11-22 | Chang Kurt C | System and method for three-dimensional shape and size measurement |
US6546309B1 (en) | 2000-06-29 | 2003-04-08 | Kinney & Lange, P.A. | Virtual fitting room |
US6901379B1 (en) | 2000-07-07 | 2005-05-31 | 4-D Networks, Inc. | Online shopping with virtual modeling and peer review |
US6725124B2 (en) | 2000-09-11 | 2004-04-20 | He Yan | System and method for texture mapping 3-D computer modeled prototype garments |
US6701207B1 (en) | 2000-11-02 | 2004-03-02 | Kinney & Lange, P.A. | Method for integrating information relating to apparel fit, apparel sizing and body form variance |
US6665577B2 (en) * | 2000-12-20 | 2003-12-16 | My Virtual Model Inc. | System, method and article of manufacture for automated fit and size predictions |
GB0101371D0 (en) | 2001-01-19 | 2001-03-07 | Virtual Mirrors Ltd | Production and visualisation of garments |
US7194428B2 (en) | 2001-03-02 | 2007-03-20 | Accenture Global Services Gmbh | Online wardrobe |
WO2002076251A2 (en) | 2001-03-08 | 2002-10-03 | Saint Laurie, Ltd. | A system and method for fitting clothing |
US20020188372A1 (en) | 2001-05-10 | 2002-12-12 | Lane Kenneth M. | Method and system for computer aided garment selection |
US7242999B2 (en) | 2001-05-11 | 2007-07-10 | Kenneth Kuk-Kei Wang | Method and apparatus for identifying virtual body profiles |
US7953648B2 (en) | 2001-11-26 | 2011-05-31 | Vock Curtis A | System and methods for generating virtual clothing experiences |
CA2369710C (en) * | 2002-01-30 | 2006-09-19 | Anup Basu | Method and apparatus for high resolution 3d scanning of objects having voids |
EP1345179A3 (en) | 2002-03-13 | 2004-01-21 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for computer graphics animation |
US7194327B2 (en) | 2002-07-12 | 2007-03-20 | Peter Ar-Fu Lam | Body profile coding method and apparatus useful for assisting users to select wearing apparel |
US8013852B2 (en) | 2002-08-02 | 2011-09-06 | Honda Giken Kogyo Kabushiki Kaisha | Anthropometry-based skeleton fitting |
KR100511228B1 (en) | 2003-01-10 | 2005-08-31 | 최광진 | Cloth simulation method and computer-readable media storing the program executing the method |
US7020538B2 (en) | 2003-03-06 | 2006-03-28 | Jeffrey Luhnow | Look-up table method for custom fitting of apparel |
WO2004081853A1 (en) * | 2003-03-06 | 2004-09-23 | Animetrics, Inc. | Viewpoint-invariant image matching and generation of three-dimensional models from two-dimensional imagery |
EP1615513A4 (en) | 2003-03-20 | 2010-04-07 | Mbrio Llc | Systems and methods for improved apparel fit |
US20040236552A1 (en) | 2003-05-22 | 2004-11-25 | Kimberly-Clark Worldwide, Inc. | Method of evaluating products using a virtual environment |
US20040236455A1 (en) | 2003-05-22 | 2004-11-25 | Kimberly-Clark Worldwide, Inc. | Method of designing a product in a virtual environment |
US7099734B2 (en) | 2003-05-22 | 2006-08-29 | Kimberly-Clark Worldwide, Inc. | Method of evaluating the performance of a product using a virtual environment |
JP4355341B2 (en) | 2003-05-29 | 2009-10-28 | 本田技研工業株式会社 | Visual tracking using depth data |
US7426302B2 (en) | 2003-11-28 | 2008-09-16 | John Amico | System and method for digitizing a pattern |
JP4503312B2 (en) | 2004-02-26 | 2010-07-14 | 株式会社島精機製作所 | Knit garment wearing simulation method and apparatus, and program thereof |
US20050267614A1 (en) | 2004-03-05 | 2005-12-01 | Looney Michael T | System and method of virtual modeling of thin materials |
US7373284B2 (en) | 2004-05-11 | 2008-05-13 | Kimberly-Clark Worldwide, Inc. | Method of evaluating the performance of a product using a virtual environment |
US7385601B2 (en) | 2004-06-15 | 2008-06-10 | Hbi Branded Apparel Enterprises, Llc | Systems and methods of generating integrated garment-model simulations |
US8660902B2 (en) | 2004-07-23 | 2014-02-25 | Lori Coulter, Llc | Methods and systems for selling apparel |
US7421306B2 (en) | 2004-09-16 | 2008-09-02 | Sanghati, Llc | Apparel size service |
US7253766B2 (en) * | 2004-09-24 | 2007-08-07 | Battelle Memorial Institute | Three-dimensional surface/contour processing based on electromagnetic radiation interrogation |
US7386150B2 (en) | 2004-11-12 | 2008-06-10 | Safeview, Inc. | Active subject imaging with body identification |
US20060130679A1 (en) | 2004-12-20 | 2006-06-22 | Dubois Radford E Iii | Automated cutting system for customized field stencils |
KR100511210B1 (en) | 2004-12-27 | 2005-08-30 | 주식회사지앤지커머스 | Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service besiness method thereof |
WO2006071006A1 (en) | 2004-12-30 | 2006-07-06 | G & G Cormerce Ltd | Method for converting 2d image into pseudo 3d image and user-adapted total coordination method in use artificial intelligence, and service business method thereof |
JP4473754B2 (en) | 2005-03-11 | 2010-06-02 | 株式会社東芝 | Virtual fitting device |
US7617016B2 (en) | 2005-04-27 | 2009-11-10 | Myshape, Inc. | Computer system for rule-based clothing matching and filtering considering fit rules and fashion rules |
US7398133B2 (en) | 2005-04-27 | 2008-07-08 | Myshape, Inc. | Matching the fit of individual garments to individual consumers |
CA2620360A1 (en) | 2005-09-01 | 2007-03-08 | G&K Services, Inc. | Virtual sizing system and method |
US7487116B2 (en) | 2005-12-01 | 2009-02-03 | International Business Machines Corporation | Consumer representation rendering with selected merchandise |
US7657341B2 (en) * | 2006-01-31 | 2010-02-02 | Dragon & Phoenix Software, Inc. | System, apparatus and method for facilitating pattern-based clothing design activities |
US7657340B2 (en) | 2006-01-31 | 2010-02-02 | Dragon & Phoenix Software, Inc. | System, apparatus and method for facilitating pattern-based clothing design activities |
US7630522B2 (en) | 2006-03-08 | 2009-12-08 | Microsoft Corporation | Biometric measurement using interactive display systems |
AU2006348151A1 (en) | 2006-09-14 | 2008-03-20 | Myshape, Inc. | Matching the fit of garments to consumers |
US7714912B2 (en) | 2007-01-24 | 2010-05-11 | International Business Machines Corporation | Intelligent mirror |
US20090276291A1 (en) | 2008-05-01 | 2009-11-05 | Myshape, Inc. | System and method for networking shops online and offline |
US20100030620A1 (en) | 2008-06-30 | 2010-02-04 | Myshape, Inc. | System and method for networking shops online and offline |
US20100030663A1 (en) | 2008-06-30 | 2010-02-04 | Myshape, Inc. | System and method for networking shops online and offline |
US20100023426A1 (en) | 2008-07-28 | 2010-01-28 | Myshape, Inc. | Distributed matching system for comparing garment information and buyer information embedded in object metadata at distributed computing locations |
US20100049633A1 (en) | 2008-08-22 | 2010-02-25 | Myshape, Inc. | System and method to identify and visually distinguish personally relevant items |
US20100076819A1 (en) | 2008-09-25 | 2010-03-25 | Myshape, Inc. | System and Method for Distilling Data and Feedback From Customers to Identify Fashion Market Information |
US20100195867A1 (en) | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Visual target tracking using model fitting and exemplar |
US8294767B2 (en) | 2009-01-30 | 2012-10-23 | Microsoft Corporation | Body scan |
US8659658B2 (en) * | 2010-02-09 | 2014-02-25 | Microsoft Corporation | Physical interaction zone for gesture-based user interfaces |
- 2011-06-13 US US13/159,401 patent/US10628729B2/en active Active
- 2018-01-05 US US15/863,848 patent/US20180144237A1/en not_active Abandoned
- 2020-04-20 US US16/853,167 patent/US20200380333A1/en not_active Abandoned
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210343083A1 (en) * | 2020-04-30 | 2021-11-04 | Clothing Tech LLC | Computer implemented methods for generating 3d garment models |
US11676341B2 (en) * | 2020-04-30 | 2023-06-13 | Clothing Tech LLC | Computer implemented methods for generating 3D garment models |
GB2608885A (en) * | 2021-05-05 | 2023-01-18 | Retail Social Ltd | Systems and methods for the display of virtual clothing |
KR20240000761A (en) * | 2022-06-24 | 2024-01-03 | 한양대학교 산학협력단 | Method and system for creating non-fungible token based virtual human |
KR102695909B1 (en) * | 2022-06-24 | 2024-08-16 | 한양대학교 산학협력단 | Method and system for creating non-fungible token based virtual human |
Also Published As
Publication number | Publication date |
---|---|
US10628729B2 (en) | 2020-04-21 |
US20120086783A1 (en) | 2012-04-12 |
US20180144237A1 (en) | 2018-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200380333A1 (en) | System and method for body scanning and avatar creation | |
US11244223B2 (en) | Online garment design and collaboration system and method | |
US11640672B2 (en) | Method and system for wireless ultra-low footprint body scanning | |
US10628666B2 (en) | Cloud server body scan data system | |
US20110298897A1 (en) | System and method for 3d virtual try-on of apparel on an avatar | |
US10702216B2 (en) | Method and system for body scanning and display of biometric data | |
CN111602165B (en) | Garment model generation and display system | |
US20160088284A1 (en) | Method and system for determining biometrics from body surface imaging technology | |
US11393163B2 (en) | Method and system for remote clothing selection | |
US11948057B2 (en) | Online garment design and collaboration system and method | |
US8976230B1 (en) | User interface and methods to adapt images for approximating torso dimensions to simulate the appearance of various states of dress | |
CN106373178B (en) | Apparatus and method for generating artificial image | |
KR102130709B1 (en) | Method for providing digitial fashion based custom clothing making service using product preview | |
Hauswiesner et al. | Free viewpoint virtual try-on with commodity depth cameras | |
CN107251026A (en) | System and method for generating fictitious situation | |
AU2017260525B2 (en) | Method and system for body scanning and display of biometric data | |
Kusumaningsih et al. | User experience measurement on virtual dressing room of Madura batik clothes | |
WO2018182938A1 (en) | Method and system for wireless ultra-low footprint body scanning | |
Sundaram et al. | Plane detection and product trail using augmented reality | |
Wacker et al. | Simulation and visualisation of virtual textiles for virtual try-on | |
Dvořák et al. | Presentation of historical clothing digital replicas in motion | |
WO2021237169A1 (en) | Online garment design and collaboration and virtual try-on system and method | |
Kenkare | Three dimensional modeling of garment drape | |
Li et al. | Intelligent clothing size and fit recommendations based on human model customisation technology | |
Yoon et al. | Image-based dress-up system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION
AS | Assignment | Owner name: STYKU LLC, CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SAREEN, RAJ;REEL/FRAME:062333/0571; Effective date: 20221220