US20130170715A1 - Garment modeling simulation system and process - Google Patents
- Publication number
- US20130170715A1 (application US13/733,865)
- Authority
- US
- United States
- Prior art keywords
- user
- garment
- color
- frameworks
- framework
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G06K9/62—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/06—Buying, selling or leasing transactions
- G06Q30/0601—Electronic shopping [e-shopping]
- G06Q30/0641—Shopping interfaces
- G06Q30/0643—Graphical representation of items or shoppers
-
- A—HUMAN NECESSITIES
- A41—WEARING APPAREL
- A41H—APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
- A41H3/00—Patterns for cutting-out; Methods of drafting or marking-out such patterns, e.g. on the cloth
- A41H3/007—Methods of drafting or marking-out patterns using computers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
Definitions
- the present invention claims priority to provisional application 61/631,318, which has a filing date of Jan. 3, 2012, which is hereby incorporated by reference.
- the present invention claims priority to nonprovisional application Ser. No. 13/586,845, which has a filing date of Aug. 15, 2012, which is hereby incorporated by reference.
- the present invention relates to a simulation system, more specifically to a garment modeling simulation system.
- Clothing consumers seek to know how a particular garment will fit them and appear on them prior to purchase.
- At a physical retail location, that consumer may try on the clothing.
- the consumer enters a dressing room, takes off their current clothing, tries on the desired garment, observes himself or herself in a mirror, takes off the desired garment, and then puts their current clothing back on. Trying on different garments at a physical location can be tiresome, time consuming, and a concern to privacy.
- For online clothing purchases, it is not possible to try on any particular garment.
- the problem of determining fit in online purchases is exacerbated by inconsistency in size definitions.
- a medium size of one brand may differ from the medium size of another brand.
- Augmented reality offers possible solutions. It would be desirable to simulate a “likeness” or model of the consumer wearing a desired garment.
- augmented reality systems can still require substantial local computing power, special cameras, and/or travel to a physical location.
- an augmented dressing room system to Kjaerside et al in 2005 discloses a camera, a projection surface, and visual tags. For that system, the consumer must travel to, and be physically present at, the installation in order to interact with it.
- a second augmented dressing room system to Hauswiesner et al in 2011 discloses using a plurality of depth cameras communicatively coupled to a system which is used to form a model with virtual clothes. Again, that second system requires a consumer to have specialized equipment, follow a complex process, or travel to a location.
- the present invention is directed to a system and method of simulating modeling a garment, comprising the steps of providing a dictionary having a plurality of figure frameworks, the plurality of figure frameworks comprising varying body characteristics and measurements, with each of the figure frameworks comprising at least one image and body reference data.
- the system provides a garment database comprising images and pairing data for a plurality of garments. It receives a user image and a garment selection and selects a figure framework in response to the user input and garment selection. It extracts the facial region and determines a skin tone identifier from the user image. It renders a three dimensional user model from the user image and the selected figure framework, shading based on the skin tone identifier. It then scales and overlays the selected garment on the user model, whereby the user model simulates the user wearing the selected garment.
- FIG. 1 depicts a block diagram of an embodiment of the current invention
- FIG. 2 depicts a flowchart for a process implemented on the system of FIG. 1 ;
- FIG. 3 depicts a flowchart for the process of user model creation of FIG. 2 ;
- FIG. 4 depicts a flowchart for the process of garment data creation of FIG. 2 ;
- FIG. 5 depicts a flowchart for the process of garment modeling simulation of FIG. 2 .
- FIG. 6 depicts a series of two dimensional figure frameworks
- FIG. 7 depicts a stage of the output of FIG. 5 ;
- FIG. 8 depicts a series of three dimensional figure frameworks
- FIG. 9 depicts one state of a system presented interface.
- FIG. 1 depicts a block diagram of an embodiment of the system in operation. It depicts a handheld computer 20 with an integrated camera 22 , a communication network 30 , a server 32 , a user model database 34 , and a garment database 36 .
- the user 08 records an image with the camera 22 which is transmitted to the server 32 via the network 30 .
- the server 32 processes the transmitted image and stores the processed image in the user model database 34 .
- the server augments the image with a selected garment from the garment database 36 and renders a user model for display and interaction on the video screen 24 of the computer 20 .
- a computer 20 or server 32 generally refers to a system which includes a central processing unit (CPU), memory, a screen, a network interface, and input/output (I/O) components connected by way of a data bus.
- the I/O components may include for example, a mouse, keyboard, buttons, or a touchscreen.
- the network interface enables data communications with the computer network 30 .
- a server contains various server software programs and preferably contains application server software.
- the preferred computer 20 is a portable handheld computer, smartphone, or tablet computer, such as an iPhone, iPod Touch, iPad, Blackberry, or Android based device.
- the computer is preferably configured with a touch screen 26 and integrated camera 22 elements.
- the computer 20 or servers 32 can take a variety of configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based electronics, network PCs, minicomputers, mainframe computers, and the like. Additionally, the computer 20 or servers 32 may be part of a distributed computer environment where tasks are performed by local and remote processing devices that are linked. Although shown as separate devices, one skilled in the art can understand that the structure of and functionality associated with the aforementioned elements can be optionally partially or completely incorporated within one or the other, such as within one or more processors.
- Camera 22 is preferably a color digital camera integrated with the handheld computer 20 .
- a suitable camera for producing image input for the system is a simple optical camera, that is to say a camera without range functionality, depth functionality, a plural vantage point camera array, or the like.
- the communication network 30 includes a computer network and a telephone system.
- the communication network 30 includes a variety of network components and protocols known in the art which enable computers to communicate.
- the computer network may be a local area network or wide area network such as the internet.
- the network may include modem lines, high speed dedicated lines, packet switches, etc.
- the network protocols used may include those known in the art such as UDP, TCP, IP, IPX, or the like. Additional communication protocols may be used to facilitate communication over the computer network 30 , such as the published HTTP protocol used on the world wide web or other application protocols.
- the user model database includes base figure frameworks and stored user models, which are composites of user provided images joined with one or more base figure frameworks, as will be disclosed further in the specification.
- the base figure frameworks are a plurality of system created frameworks, each framework representing a major portion or all of the human body. In the current embodiment, each framework represents the human body from a portion of the neck downward.
- the base figure frameworks are of varying body measurements and characteristics. That is to say, the base figures are generated with a relative height, weight, body type, chest measurement, band measurement, waist measurement, hip measurement, inseam, rise, thigh measurement, arm length, sleeve length, upper arm measurement, skin tone, eye color, hair color, hair length, and other characteristics.
- the user model database 34 also stores user information such as pant size, shirt size, or dress size.
- the user model database 34 includes sufficient base figure frameworks to form a dictionary of frameworks of differing body measurements and characteristics to represent cross-sections of the population.
- the system 10 divides the chest measurement into simulated one inch ranges. Each chest measurement range is paired with a given category or range of other characteristics.
- one figure framework may represent, for example, a 42 inch chest measurement, a first given waist measurement or range, a first given hip measurement or range, and so on for the other characteristics.
- the figure framework dictionary is completed by varying the options for the figure frameworks while maintaining one static value for the isolated characteristic in order to represent sufficient cross-sections of the population.
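The dictionary construction described above can be sketched as follows. The specific one-inch chest ranges and the categories for the other characteristics are illustrative assumptions, not values from the specification.

```python
from itertools import product

def build_framework_dictionary():
    """Enumerate base figure frameworks over coarse measurement ranges.

    Chest is divided into simulated one-inch ranges; each range is paired
    with categories for the other characteristics. The ranges and category
    labels below are hypothetical placeholders.
    """
    chest_ranges = range(32, 45)                    # 13 one-inch chest ranges
    waist_categories = ("slim", "average", "full")
    hip_categories = ("narrow", "average", "wide")
    dictionary = []
    for chest, waist, hip in product(chest_ranges, waist_categories, hip_categories):
        dictionary.append({"chest_in": chest, "waist": waist, "hip": hip})
    return dictionary

frameworks = build_framework_dictionary()
# 13 chest ranges x 3 waist categories x 3 hip categories = 117 entries
print(len(frameworks))
```

In a real dictionary, each entry would also carry the framework's images and body reference data.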
- a set of body reference coordinates is stored.
- the body reference coordinates map a particular location or set of locations within the figure framework.
- the body reference coordinates can define one or more regions of the body or body parts.
- the body reference coordinates may map to the waistline region.
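One plausible representation of the stored body reference coordinates; the region names and normalized coordinates here are invented for illustration.

```python
# Hypothetical body reference data for one figure framework: each named
# region maps to normalized (x, y) coordinates within the framework image.
body_reference = {
    "waistline": [(0.35, 0.55), (0.65, 0.55)],          # left/right waist points
    "hip_line": [(0.32, 0.62), (0.68, 0.62)],
    "inframammary_fold": [(0.40, 0.42), (0.60, 0.42)],
}

def region_center(region):
    """Return the centroid of a named body region."""
    points = body_reference[region]
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(region_center("waistline"))  # (0.5, 0.55)
```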
- the base figure framework may include two dimensional (2D) data or three dimensional (3D) data.
- Each 2D figure framework may include an associated set of images for a given framework for a particular set of body measurements and characteristics.
- FIG. 6 depicts an associated set of representative 2D figure frameworks 40, 40′, 40″, 40‴ for a particular set of body measurements and characteristics.
- Each of the images 40, 40′, 40″, 40‴ shows the particular set of body measurements and characteristics from a different vantage point or in different positions, postures, or “poses.”
- FIG. 8 depicts a subset of the 3D figure frameworks in the dictionary, each having a different particular set of body measurements and characteristics.
- the garment database 36 includes data for a plurality of garments.
- the garment data includes, but is not limited to, the garment type, color, pattern, size, images, and region reference coordinates.
- Each garment entry represents a specific article of clothing that a user may virtually model.
- the garment type is input. For example, a bra, a shirt, pants, dress, coat, or other article of clothing may be selected. Additionally, at least one associated image is input into the garment entry. Preferably multiple images from different vantage points are input and associated with the garment type.
- Each garment image has associated pairing data.
- the pairing data includes data which signals that a region of a particular garment should be associated with a region of the body.
- the coordinates representing the lower edge of a bra may be associated with the band, or inframammary fold.
- the coordinates representing the lower edge of a shirt may be associated with the hip line.
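The pairing data can be represented as a lookup from garment regions to body regions. The table below uses the bra and shirt examples from the text; the additional entries ("straps", "collar") are hypothetical.

```python
# Hypothetical pairing data: garment regions keyed to the body regions
# they should be anchored to on the user model.
PAIRING = {
    "bra":   {"lower_edge": "inframammary_fold", "straps": "shoulder_line"},
    "shirt": {"lower_edge": "hip_line", "collar": "neck_line"},
}

def body_anchor(garment_type, garment_region):
    """Look up the body region a garment region should be paired with."""
    return PAIRING[garment_type][garment_region]

print(body_anchor("bra", "lower_edge"))    # inframammary_fold
print(body_anchor("shirt", "lower_edge"))  # hip_line
```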
- FIG. 2 shows an embodiment of the process implemented on the system of FIG. 1 .
- the user model is generated 100 .
- using the input garment data 200 , the system generates a simulated model 300 , with which the user may interact 400 .
- garment data is input 200 .
- the garment type is input.
- Auxiliary associated garment data, such as a product identifier, size, or color, is input 210 .
- one or more images of the garment are uploaded 215 . Suitable images include those captured from a simple optical camera. The preferred vantage point of the garment images is from the front of the garment, with supplemental images from the sides and rear of the garment.
- the garment's information is stored in the garment database 36 .
- the product identifier is optionally associated with a bar code.
- the user captures an image of a portion of himself or herself 105 using an optical camera, preferably the upper body, more specifically the region above the shoulders.
- a suitable camera includes a simple optical camera.
- the preferred vantage point is from the front of the user.
- the user may supplement the input with additional images from different vantage points.
- the system extracts the facial region 56 of the image, removing the background using systems and processes known in the art. Representative systems and processes include U.S. Pat. Nos. 6,611,613 to Kang et al., 7,123,754 to Matsuo et al., and 6,885,760 to Yamada et al., which are incorporated by reference.
- the system provides an interface to the user in order to facilitate automated system extraction of the facial region 56 from the image.
- the system provides at least one guide 54 overlaying the image.
- the guides are shaped to enable coarse indication of the facial region 56 to the system. Suitable guide shapes for encompassing a portion of the facial region 56 include ellipses, quadrilaterals, or other polygons. Other suitable guide shapes permit the user to signal specific points within the facial region 56 to the system.
- Such a representative shape includes a cross-hair guide 54 .
- in FIG. 9 , a state of one configuration of the interface is shown.
- a first elliptical guide 54 is presented to the user for coarse signaling of the outer boundary of the facial region 56 .
- a second cross-hair guide 54 is presented to the user for coarse signaling of the center of the facial region 56 .
- a third circular guide 54 signals image area outside the facial region 56 .
- the system presents two guides, preferably of the same shape and as simple polygons, such as ellipses or quadrilaterals.
- a first guide is nested inside a second guide and presented to the user for coarse placement inside the facial region 56 , providing a basis for foreground color information.
- the outer guide is presented to the user for coarse placement outside the facial region 56 , providing a basis for background color information.
- the system pre-calculates triangulations for each of the two guides and determines the boundary colors at each of the respective guides using mean value coordinates, preferably at the vertices of the triangles.
- the system calculates a foreground image (F) and a background image (B). To arrive at the facial region 56 with the background removed, the system interpolates colors in the triangles using barycentric coordinates based on the user provided image (I).
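A minimal sketch of barycentric color interpolation inside a triangle, the building block of the matting step described above; the triangle coordinates and vertex colors are invented placeholders.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w_a = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    w_b = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return w_a, w_b, 1.0 - w_a - w_b

def interpolate_color(p, tri, colors):
    """Blend the RGB colors at the triangle vertices using the weights."""
    w = barycentric_weights(p, *tri)
    return tuple(sum(wi * ci[k] for wi, ci in zip(w, colors)) for k in range(3))

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
colors = ((255, 0, 0), (0, 255, 0), (0, 0, 255))
# The centroid weights each vertex equally, so each channel comes out near 85.
print(interpolate_color((1 / 3, 1 / 3), tri, colors))
```

In the full pipeline, the same blending would be evaluated per pixel against the pre-calculated foreground and background triangulations.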
- the system 10 stores the transformed user images in the user model database 34 . Additional disclosure on interpolation is in the annexed Lipman document, which is hereby incorporated by reference.
- the system determines a skin tone identifier from the facial region 56 of the user provided image for configuration of the candidate figure framework to which the facial region 56 will be joined.
- the skin tone identifier includes four components: a primary diffuse color, a secondary diffuse color, a shadow color, and a highlight color.
- the system selects an area or areas to sample that is likely to represent the change in skin color.
- the exemplary configuration samples a circular area around the chin. A table based on the sample area and the color distribution therein is created, and the system selects the four components based on the relative frequency of colors in the sample.
- the exemplary system selects the most frequent color as the primary diffuse color, the most frequent dark color as the shadow color, the most frequent bright color as the highlight color, and the color with the greatest difference in hue from the primary diffuse color as the secondary diffuse color.
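A hedged sketch of the skin tone identifier heuristic just described; the brightness cutoffs and the sampled colors are invented for illustration.

```python
import colorsys
from collections import Counter

def skin_tone_identifier(samples, dark_cut=0.35, bright_cut=0.85):
    """Derive the four skin tone components from sampled pixel colors.

    Follows the heuristic described above: most frequent color overall,
    most frequent dark color, most frequent bright color, and the color
    whose hue differs most from the primary diffuse color. The cutoffs
    separating "dark" and "bright" are illustrative assumptions.
    """
    counts = Counter(samples)

    def value(c):   # HSV value (brightness) of an RGB triple in 0..255
        return max(c) / 255.0

    def hue(c):
        return colorsys.rgb_to_hsv(*(ch / 255.0 for ch in c))[0]

    primary = counts.most_common(1)[0][0]
    darks = [c for c in counts if value(c) < dark_cut]
    brights = [c for c in counts if value(c) > bright_cut]
    shadow = max(darks, key=lambda c: counts[c]) if darks else primary
    highlight = max(brights, key=lambda c: counts[c]) if brights else primary

    def hue_dist(c):
        d = abs(hue(c) - hue(primary))
        return min(d, 1.0 - d)   # hue is circular

    secondary = max(counts, key=hue_dist)
    return {"primary": primary, "shadow": shadow,
            "highlight": highlight, "secondary": secondary}

samples = ([(200, 160, 140)] * 5 + [(60, 40, 30)] * 2
           + [(230, 210, 200)] * 2 + [(150, 160, 120)])
tones = skin_tone_identifier(samples)
print(tones["primary"])  # (200, 160, 140)
```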
- the system 10 presents an interface to the user.
- the user can input characteristics, such as height, weight, chest measurement, waist measurement, hip measurement, inseam, sleeve length, skin tone, eye color, hair color, and clothing sizes.
- the interface may also present simplified or derived options to the user. For example, the system may present “banana”, “apple”, “pear”, “hourglass”, or “athletic” as “body type” options. This signals the system to apply certain body characteristics, such as certain bust-hip ratios, waist-hip ratios, or torso length to leg length ratios.
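The simplified "body type" options can be modeled as presets of derived ratios merged into the user profile; the numeric ratios below are invented placeholders, not values from the specification.

```python
# Hypothetical presets: each simplified "body type" option maps to the
# derived ratios the system applies, as described above.
BODY_TYPE_PRESETS = {
    "banana":    {"bust_hip": 1.00, "waist_hip": 0.80},
    "apple":     {"bust_hip": 1.05, "waist_hip": 0.90},
    "pear":      {"bust_hip": 0.90, "waist_hip": 0.75},
    "hourglass": {"bust_hip": 1.00, "waist_hip": 0.70},
}

def apply_body_type(profile, body_type):
    """Merge the preset ratios for the chosen body type into a user profile."""
    merged = dict(profile)
    merged.update(BODY_TYPE_PRESETS[body_type])
    return merged

profile = apply_body_type({"height_cm": 170}, "hourglass")
print(profile["waist_hip"])  # 0.7
```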
- the user information is stored as a profile in the user model database 34 .
- the system 10 selects a figure framework based upon the user input.
- the user model database 34 includes a dictionary of figure frameworks of varying body measurements and characteristics representing different cross-sections of the population.
- the system 10 selects the figure framework which most closely matches the user based on the user image and user profile data.
- the system determines the degree of correlation to other 2D figure frameworks for other user inputs and information derived from user input.
- the system selects the 2D figure framework with the highest aggregate correlation.
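Selecting the framework with the highest aggregate correlation can be sketched as a weighted scoring pass over the dictionary. The scoring function below, an inverse-difference score, is an invented stand-in, since the specification does not give the actual correlation measure.

```python
def select_framework(user, frameworks, weights=None):
    """Pick the framework with the highest aggregate correlation score.

    Each characteristic contributes a score that grows as the framework's
    measurement approaches the user's; weights allow some characteristics
    to matter more. Purely illustrative.
    """
    weights = weights or {"chest": 1.0, "waist": 1.0, "hip": 1.0}

    def score(fw):
        total = 0.0
        for key, w in weights.items():
            diff = abs(user[key] - fw[key])
            total += w / (1.0 + diff)   # closer measurement -> higher score
        return total

    return max(frameworks, key=score)

frameworks = [
    {"id": "A", "chest": 38, "waist": 32, "hip": 38},
    {"id": "B", "chest": 42, "waist": 36, "hip": 42},
    {"id": "C", "chest": 46, "waist": 40, "hip": 46},
]
user = {"chest": 41, "waist": 35, "hip": 41}
print(select_framework(user, frameworks)["id"])  # B
```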
- the framework selector module is configured to retrieve a 2D figure framework representative of the user having an altered weight facade. That is to say, the framework selector module can select a base 2D figure which may represent a user if that user gains or loses weight.
- the system selects the 2D figure framework as disclosed.
- the framework selector module combines user input with predictive weight change attributes to select a 2D figure framework. For example, people with a lower torso length to leg length ratio may have a higher tendency to initially expand at the hip in weight gain.
- the system preferably employs such tendencies to aid 2D figure framework selection.
- the 2D figure framework base is converted to a 3D figure framework by meshing and rigging 120 , using those means known in the art.
- MakeHuman™ is employed in the meshing and Autodesk™ Maya is employed in the rigging.
- Representative meshing systems and processes include U.S. Pat. Nos. 8,089,480 to Chang et al., 6,259,453 to Itoh et al., and 6,262,737 to Li et al., which are incorporated by reference.
- Representative rigging systems and processes include U.S. Pat. No. 8,026,917 to Rogers et al. and U.S. Pat. App. No. 20070146360 to Clatworthy, which are incorporated by reference.
- the system 10 selects the figure framework which most closely matches the user based on the user image and user profile data.
- the system determines the degree of correlation to other 3D figure frameworks for other user inputs and information derived from user input.
- the system selects the 3D figure framework with the highest aggregate correlation.
- the system 10 morphs the 3D figure framework based on the input or, as noted above, the user choosing to have an altered weight facade. Additional disclosure on morphing the 3D figure frameworks is in Allen et al., which is annexed and incorporated by reference.
- the user image of step 105 is stitched to the 3D figure framework 125 to form the user model.
- the user images and figure framework are preferably registered, calibrated, and blended in the stitching process.
- a shader is applied 130 to match the tones of the user image with those of the 3D figure framework.
- Tools of the art such as OpenGL, Direct3D, or Renderman can be employed in the shading.
- the system 10 employs the aforementioned skin tone identifier components in shading the skin, namely the primary diffuse color, the shadow color, the highlight color, and the secondary diffuse color calculated from the user supplied image.
- the rendered user model is stored in the user model database 34 .
- the process of a user simulating modeling or “trying on” a garment is shown.
- the rendered user model is received 305 .
- the user selects a garment 310 .
- the system maps the garment to the user model 315 , using the pairing data and body reference data to associate regions of the selected garment to regions of the user model.
- the user selected garment is scaled and overlaid on the user model according to the system generated user model and the user selected garment, correlating garment regions to user model regions.
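The scaling and overlay step, correlating garment regions to user model regions, can be reduced to computing a uniform scale and offset from paired anchor points. A minimal sketch with invented pixel coordinates:

```python
def scale_and_overlay(garment_anchors, model_anchors):
    """Compute the scale and offset aligning garment reference points
    with the matching body reference points on the user model.

    Anchors are (x, y) pixel coordinates; two paired points (e.g. the
    garment's lower edge endpoints and the body's hip line endpoints)
    fix a uniform scale and a translation. Purely illustrative.
    """
    (g1, g2), (m1, m2) = garment_anchors, model_anchors
    g_span = ((g2[0] - g1[0]) ** 2 + (g2[1] - g1[1]) ** 2) ** 0.5
    m_span = ((m2[0] - m1[0]) ** 2 + (m2[1] - m1[1]) ** 2) ** 0.5
    scale = m_span / g_span
    offset = (m1[0] - g1[0] * scale, m1[1] - g1[1] * scale)
    return scale, offset

# The garment's lower edge spans 100 px; the model's hip line spans 150 px.
scale, offset = scale_and_overlay(((0, 100), (100, 100)), ((40, 300), (190, 300)))
print(scale)   # 1.5
print(offset)  # (40.0, 150.0)
```

The garment image would then be resized by `scale` and drawn at `offset` over the user model.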
- the simulated model is displayed to the video screen 24 , as shown in FIG. 7 .
- the user is presented the option to change the background 320 or to change the simulated model's “pose” 325 .
Abstract
The present invention is directed to a system and method of simulating modeling a garment, comprising the steps of providing a dictionary having a plurality of figure frameworks, the plurality of figure frameworks comprising varying body characteristics and measurements, with each of the figure frameworks comprising at least one image and body reference data. The system provides a garment database comprising images and pairing data for a plurality of garments. It receives a user image and a garment selection and selects a figure framework in response to the user input and garment selection. It extracts the facial region and determines a skin tone identifier from the user image. It renders a three dimensional user model from the user image and the selected figure framework, shading based on the skin tone identifier. It then scales and overlays the selected garment on the user model, whereby the user model simulates the user wearing the selected garment.
Description
- The present invention claims priority to provisional application 61/631,318, which has a filing date of Jan. 3, 2012, which is hereby incorporated by reference. The present invention claims priority to nonprovisional application Ser. No. 13/586,845, which has a filing date of Aug. 15, 2012, which is hereby incorporated by reference.
- 1. Field of the Invention
- The present invention relates to a simulation system, more specifically to a garment modeling simulation system.
- 2. Description of the Related Art
- Clothing consumers seek to know how a particular garment will fit them and appear on them prior to purchase. At a physical retail location, that consumer may try on the clothing. The consumer enters a dressing room, takes off their current clothing, tries on the desired garment, observes himself or herself in a mirror, takes off the desired garment, and then put their current clothing back on. That can be tiresome, time consuming, or concerning to privacy to try on different garments at a physical location. For online clothing purchases, it is not possible to try on any particular garments. The problem of determining fit in online purchases is exacerbated by inconsistency in size definitions. A medium size of one brand may differ from the medium size of another brand.
- It would be preferable to see how a garment fits and looks without having to physically try it on. Augmented reality offer possible solutions. It would be desirable to simulate a “likeness” or model of the consumer simulating him or her wearing a desired garment. However, augmented reality systems can still require substantial local computing power, special cameras, and/or travel to a physical location. For example, an augmented dressing room system to Kjaerside et al in 2005 discloses a camera, a projection surface, and visual tags. For that system, the consumer must travel to and be physically present in order to interact with that system. A second augmented dressing room system to Hauswiesner et al in 2011 discloses using a plurality of depth cameras communicately couple to a system which is used to form a model with virtual clothes. Again, that second system requires a consumer to have specialized equipment, follow a complex process, or travel to a location.
- For the above reasons, it would be advantageous for a system which enables a user to employ commonly available equipment to simulate himself or herself modeling selected garments.
- The present invention is directed to a system and method of simulating modeling a garment, comprising the steps of providing a dictionary having a plurality of figure frameworks, the plurality of figure frameworks comprising varying body characteristics and measurements, with each of the figure frameworks comprising at least one image and body reference data. The system providing a garment database comprising images and pairing data for a plurality of garments. It receives a user image and a garment selection and selects a figure framework in response to user input and garment selection. It extracts the facial region and determines a skin tone identifier from the user image. It renders a three dimensional user model from the user image and the selected figure framework to form a user model, shading based on the skin tone identifier. It the overlays and scales the selected garment on the user model, whereby the user model simulates the user wearing the selected garment.
- These and other features, aspects, and advantages of the invention will become better understood with reference to the following description, and accompanying drawings.
-
FIG. 1 depicts a block diagram of an embodiment of the current invention; -
FIG. 2 depicts a flowchart for a process implemented to the system ofFIG. 1 ; -
FIG. 3 depicts a flowchart for the process of user model creation ofFIG. 2 ; -
FIG. 4 depicts a flowchart for the process of garment data creation ofFIG. 2 ; -
FIG. 5 depicts a flowchart for the process of garment modeling simulation ofFIG. 2 . -
FIG. 6 depicts a series of two dimensional figure frameworks; -
FIG. 7 depicts a stage of the output ofFIG. 5 ; -
FIG. 8 depicts a series of three dimensional figure frameworks; and -
FIG. 9 depicts one state of a system presented interface. - Detailed descriptions of the preferred embodiment are provided herein. It is to be understood, however, that the present invention may be embodied in various forms. Therefore, specific details disclosed herein are not to be interpreted as limiting, but rather as a basis for the claims and as a representative basis for teaching one skilled in the art to employ the present invention in virtually any appropriately detailed system, structure or manner.
- The present invention is directed to a system and process for approximated three dimensional (3D) simulation of a user modeling a garment based on two dimensional images of both the user and the garment.
FIG. 1 depicts a block diagram of an embodiment of the system in operation. It depicts ahandheld computer 20 with an integratedcamera 22, acommunication network 30, aserver 32, auser model database 34, and agarment database 36. In use, the user 08 records an image with thecamera 22 which is transmitted to theserver 32 via thenetwork 30. Theserver 32 processes the transmitted image and stores the processed image in theuser model database 34. The server augments the image with a selected garment from thegarment database 36 and renders a user model for display and interaction on the video screen 24 of thecomputer 20. - A
computer 20 orserver 32, as referred to in this specification, generally refers to a system which includes a central processing unit (CPU), memory, a screen, a network interface, and input/output (I/O) components connected by way of a data bus. The I/O components may include for example, a mouse, keyboard, buttons, or a touchscreen. The network interface enables data communications with thecomputer network 40. A server contains various server software programs and preferably contains application server software. Thepreferred computer 20 is a portable handheld computer, smartphone, or tablet computer, such as an iPhone, iPod Touch, iPad, Blackberry, or Android based device. The computer is preferably configured with atouch screen 26 and integratedcamera 22 elements. Those skilled in the art will appreciate that thecomputer 20 orservers 32 can take a variety of configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based electronics, network PCs, minicomputers, mainframe computers, and the like. Additionally, thecomputer 20 orservers 32 may be part of a distributed computer environment where tasks are performed by local and remote processing devices that are linked. Although shown as separate devices, one skilled in the art can understand that the structure of and functionality associated with the aforementioned elements can be optionally partially or completely incorporated within one or the other, such as within one or more processors. - Camera 22 is preferably a color digital camera integrated with the
handheld computer 20. A suitable camera for input producing image input for the system includes a simple optical camera, that is to say a camera without associated range functionality, without depth functionality, with plural vantage point camera array, or the like. - The
communication network 30 includes a computer network and a telephone system. Thecommunication network 30 includes of a variety of network components and protocols known in the art which enable computers to communicate. The computer network may be a local area network or wide area network such as the internet. The network may include modem lines, high speed dedicated lines, packet switches, etc. The network protocols used may include those known in the art such as UDP, TCP, IP, IPX, or the like. Additional communication protocols may be used to facilitate communication over thecomputer network 30, such as the published HTTP protocol used on the world wide web or other application protocols. - The user model database includes base figure frameworks and stored user models, which are composites of user provided images joined with one or more base figure frameworks, as will be disclosed further in the specification. The base figure frameworks are a plurality of system created frameworks, each framework representing a major portion or all of the human body. In the current embodiment, each framework represents the human body, including a portion of the neck and below. The base figures frameworks are of varying body measurements and characteristics. That is to say the base figures are generated with a relative height, weight, body type, chest measurement, band measurement, waist measurement, hip measurement, inseam, rise, thigh measurement, arm length, sleeve length, upper arm measurement, skin tone, eye color, hair color, hair length, and other characteristics. The
user model database 34 also stores user information such as pant size, shirt size, or dress size. The user model database 34 includes sufficient base figure frameworks to form a dictionary of frameworks of differing body measurements and characteristics to represent cross-sections of the population. In one aspect, the system 10 divides the chest measurement into simulated one inch ranges. Each chest measurement range is paired with a given category or range of other characteristics. Thus, one figure framework may represent, for example, a 42 inch chest measurement, a first given waist measurement or range, a first given hip measurement or range, and so on for the other characteristics. The figure framework dictionary is completed by varying the options for the figure frameworks while maintaining one static value for the isolated characteristic, in order to represent sufficient cross-sections of the population. - For each figure framework, a set of body reference coordinates is stored. The body reference coordinates map a particular location or set of locations within the figure framework. The body reference coordinates can define one or more regions of the body or body parts. For example, the body reference coordinates may map to the waistline region.
- The base figure framework may include two dimensional (2D) data or three dimensional (3D) data. Each 2D figure framework may include an associated set of images for a particular set of body measurements and characteristics.
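As a concrete illustration of the framework records described above, a dictionary entry could pair measurements, per-pose images, and body reference coordinates. This is a minimal sketch; all field names and values are illustrative assumptions, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class FigureFramework:
    """One dictionary entry: a base figure with fixed measurements (illustrative)."""
    chest: int          # inches; the dictionary varies this in one-inch ranges
    waist: int
    hip: int
    skin_tone: str
    pose_images: dict   # pose/vantage name -> image path, one per pose
    body_refs: dict     # body region name -> (x, y) coordinates in the image

fw = FigureFramework(
    chest=42, waist=34, hip=40, skin_tone="medium",
    pose_images={"front": "fw42_front.png", "side": "fw42_side.png"},
    body_refs={"waistline": (120, 310), "hip_line": (120, 360)},
)
```

A record of this shape supports both the pose-image lookup of FIG. 6 and the region pairing used later when a garment is overlaid.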
FIG. 6 depicts a series of associated representative 2D figure frameworks 40, 40′, 40″, 40′″ for a particular set of body measurements and characteristics. Each of the images 40, 40′, 40″, 40′″ shows the particular set of body measurements and characteristics from a different vantage point or in different positions, postures, or "poses." FIG. 8 depicts a subset of the 3D figure frameworks in the dictionary, each having a different particular set of body measurements and characteristics. - The
garment database 36 includes data for a plurality of garments. The garment data includes, but is not limited to, the garment type, color, pattern, size, images, and region reference coordinates. Each garment entry represents a specific article of clothing that a user may virtually model. The garment type is input. For example, a bra, a shirt, pants, a dress, a coat, or other article of clothing may be selected. Additionally, at least one associated image is input into the garment entry. Preferably, multiple images from different vantage points are input and associated with the garment type. Each garment image has associated pairing data. The pairing data includes data which signals that a region of a particular garment should be associated with a region of the body. By way of example with a bra, the coordinates representing the lower edge of the bra may be associated with the band, or inframammary fold. Likewise, the coordinates representing the lower edge of a shirt may be associated with the hip line. -
FIG. 2 shows an embodiment of the process implemented by the system of FIG. 1. The user model is generated 100. Using input garment data 200, the system generates a simulated model 300, with which the user may interact 400. - Referring to
FIG. 4, garment data is input 200. At step 205, the garment type is input. Auxiliary associated garment data, such as a product identifier, size, and color, is input 210. Next, one or more images of the garment are uploaded 215. Suitable images include those captured from a simple optical camera. The preferred vantage point of the garment images is from the front of the garment, with supplemental images from the sides and rear of the garment. The garment's information is stored in the garment database 36. The product identifier is optionally associated with a bar code. - Referring to
FIG. 3, the user captures an image of a portion of himself or herself 105 using an optical camera, preferably the upper body, more specifically the region above the shoulders. A suitable camera includes a simple optical camera. The preferred vantage point is from the front of the user. The user may supplement the input with additional images from different vantage points. In the current embodiment, the system extracts the facial region 56 of the image, removing the background using systems and processes known in the art. Representative systems and processes include U.S. Pat. Nos. 6,611,613 to Kang et al., 7,123,754 to Matsuo et al., and 6,885,760 to Yamada et al., which are incorporated by reference. - Optionally, the system provides an interface to the user in order to facilitate automated system extraction of the
facial region 56 from the image. The system provides at least one guide 54 overlaying the image. The guides are shaped to enable coarse indication of the facial region 56 to the system. Suitable guide shapes for encompassing a portion of the facial region 56 include ellipses, quadrilaterals, or other polygons. Other suitable guide shapes permit the user to signal specific points within the facial region 56 to the system. One such representative shape is a cross-hair guide 54. With reference to FIG. 9, a state of one configuration of the interface is shown. A first elliptical guide 54 is presented to the user for coarse signaling of the outer boundary of the facial region 56. A second cross-hair guide 54 is presented to the user for coarse signaling of the center of the facial region 56. A third circular guide 54 signals image area outside the facial region 56. - In a second configuration, the system presents two guides, preferably of the same shape and as simple polygons, such as ellipses or quadrilaterals. A first guide is nested inside a second guide and presented to the user for coarse placement inside the
facial region 56, providing a basis for foreground color information. The outer guide is presented to the user for coarse placement outside the facial region 56, providing a basis for background color information. The system pre-calculates triangulations for each of the two guides and determines the boundary colors at each of the respective guides using mean value coordinates, preferably at the vertices of the triangles. Next, the system calculates a foreground image (F) and a background image (B). To arrive at the facial region 56 with the background removed, the system interpolates colors in the triangles using barycentric coordinates based on the user provided image (I) according to the following equation: -
transparency α = (I − B) / (F − B)
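Interpreted per pixel and per channel, the transparency equation can be applied directly once F and B have been interpolated. A minimal scalar sketch (the triangulation and mean value coordinate steps are omitted, and the epsilon guard is an added assumption):

```python
def alpha_matte(i, f, b, eps=1e-9):
    """Per-pixel transparency a = (I - B) / (F - B), clamped to [0, 1]."""
    a = (i - b) / (f - b + eps)   # eps guards against F == B
    return max(0.0, min(1.0, a))

def composite(a, f, b):
    """Recompose I = a*F + (1-a)*B as a sanity check of the matte."""
    return a * f + (1 - a) * b

# One pixel, one channel: background 0.1, foreground 0.9, observed 0.5
a = alpha_matte(0.5, 0.9, 0.1)
print(round(a, 3))   # 0.5
```

Clamping reflects that transparency outside [0, 1] has no physical meaning; pixels far from both F and B would in practice be handled by the surrounding triangulation.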
user model database 34. Additional disclosure on interpolation is in the annexed Lipman document, which is hereby incorporated by reference. - The system also determines a skin tone identifier from the
facial region 56 of the user provided image for configuration of the candidate figure framework to which the facial region 56 will be joined. The skin tone identifier includes four components: a primary diffuse color, a secondary diffuse color, a shadow color, and a highlight color. The system selects an area or areas to sample that are likely to represent the variation in skin color. The exemplary configuration samples a circular area around the chin. A table based on the sample area and the color distribution therein is created, and the system selects the four components based on the relative frequency of colors in the sample. The exemplary system selects the most frequent color as the primary diffuse color, the most frequent dark color as the shadow color, the most frequent bright color as the highlight color, and the color with the greatest difference in hue from the primary diffuse color as the secondary diffuse color. - At
step 110, the system 10 presents an interface to the user. The user can input characteristics, such as height, weight, chest measurement, waist measurement, hip measurement, inseam, sleeve length, skin tone, eye color, hair color, and clothing sizes. The interface may also present simplified or derived options to the user. For example, the system may present "banana", "apple", "pear", "hourglass", or "athletic" as "body type" options. This signals the system to apply certain body characteristics, such as certain bust-hip ratios, waist-hip ratios, or torso length to leg length ratios. The user information is stored as a profile in the user model database 34. - At
step 115, the system 10 selects a figure framework based upon the user input. As mentioned, the user model database 34 includes a dictionary of figure frameworks of varying body measurements and characteristics representing different cross-sections of the population. Where the system 10 is configured with 2D base figure frameworks, the system 10 selects the figure framework which most closely matches the user based on the user image and user profile data. The system determines the degree of correlation to other 2D figure frameworks for other user inputs and information derived from user input. The system selects the 2D figure framework with the highest aggregate correlation. - Optionally, the framework selector module is configured to retrieve a 2D figure framework representative of the user having an altered weight facade. That is to say, the framework selector module can select a
base 2D figure which may represent the user if that user gains or loses weight. In this optional approach, the system selects the 2D figure framework as disclosed. Then the framework selector module combines user input with predictive weight change attributes to select a 2D figure framework. For example, people with lower torso length to leg length ratios may have a higher tendency to initially expand at the hip during weight gain. The system preferably employs such tendencies to aid 2D figure framework selection. - After selection of the
2D figure framework 115, the 2D figure framework base is converted to a 3D figure framework by meshing and rigging 120, using those means known in the art. In one configuration, MakeHuman™ is employed in the meshing and Autodesk™ Maya is employed in the rigging. Representative meshing systems and processes include U.S. Pat. Nos. 8,089,480 to Chang et al., 6,259,453 to Itoh et al., and 6,262,737 to Li et al., which are incorporated by reference. Representative rigging systems and processes include U.S. Pat. No. 8,026,917 to Rogers et al. and U.S. Pat. App. No. 20070146360 to Clatworthy, which are incorporated by reference. - Where the system 10 is configured with 3D base figure frameworks, the system 10 selects the figure framework which most closely matches the user based on the user image and user profile data. The system determines the degree of correlation to other 3D figure frameworks for other user inputs and information derived from user input. The system selects the 3D figure framework with the highest aggregate correlation. Optionally, the system 10 morphs the 3D figure framework based on the user input or, as noted above, the user choosing to have an altered weight facade. Additional disclosure on morphing the 3D figure frameworks is in Allen et al., which is annexed and incorporated by reference.
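The highest-correlation selection over the framework dictionary can be sketched as a nearest-match score across the user's characteristics. The similarity function and optional per-characteristic weights below are illustrative assumptions, not taken from the specification.

```python
def select_framework(user, frameworks, weights=None):
    """Pick the framework whose characteristics best match the user's.
    user: dict of numeric characteristics, e.g. {"chest": 42, "waist": 34}.
    frameworks: list of dicts with the same keys (plus an "id")."""
    weights = weights or {}

    def score(fw):
        total = 0.0
        for key, value in user.items():
            if key in fw:
                w = weights.get(key, 1.0)
                total += w / (1.0 + abs(fw[key] - value))  # closer value -> higher score
        return total

    return max(frameworks, key=score)

dictionary = [
    {"id": "F1", "chest": 40, "waist": 32},
    {"id": "F2", "chest": 42, "waist": 34},
]
best = select_framework({"chest": 42, "waist": 33}, dictionary)
print(best["id"])  # F2
```

The same scoring applies whether the dictionary holds 2D or 3D frameworks; only the record retrieved differs.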
- The user image of
step 105 is stitched to the 3D figure framework 125 to form the user model. The user images and figure framework are preferably registered, calibrated, and blended in the stitching process. - Finally, a shader is applied 130 to match the tones of the user image with those of the 3D figure framework. Tools of the art such as OpenGL, Direct3D, or RenderMan can be employed in the shading. The system 10 employs the aforementioned skin tone identifier components in shading the skin, namely the primary diffuse color, the shadow color, the highlight color, and the secondary diffuse color calculated from the user supplied image.
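The four components consumed by the shader were selected earlier by color frequency in the sampled chin area (most frequent color, most frequent dark and bright colors, greatest hue difference). A sketch of that selection rule follows; the luminance formula, the dark/bright thresholds, and the `colorsys`-based hue distance are illustrative assumptions, not taken from the specification.

```python
import colorsys
from collections import Counter

def skin_tone_identifier(pixels, dark=0.35, bright=0.75):
    """pixels: iterable of (r, g, b) tuples in [0, 1] sampled near the chin.
    Returns (primary_diffuse, shadow, highlight, secondary_diffuse)."""
    freq = Counter(pixels)

    def luma(c):
        r, g, b = c
        return 0.299 * r + 0.587 * g + 0.114 * b   # perceptual brightness

    primary = freq.most_common(1)[0][0]            # most frequent color overall
    darks = [c for c in freq if luma(c) < dark]
    brights = [c for c in freq if luma(c) > bright]
    shadow = max(darks, key=freq.__getitem__) if darks else primary
    highlight = max(brights, key=freq.__getitem__) if brights else primary

    def hue_dist(c):                               # circular distance in hue
        d = abs(colorsys.rgb_to_hsv(*c)[0] - colorsys.rgb_to_hsv(*primary)[0])
        return min(d, 1.0 - d)

    secondary = max(freq, key=hue_dist)            # greatest hue difference from primary
    return primary, shadow, highlight, secondary

pixels = ([(0.8, 0.6, 0.5)] * 10 + [(0.2, 0.1, 0.1)] * 3
          + [(0.95, 0.9, 0.85)] * 2 + [(0.5, 0.6, 0.8)])
primary, shadow, highlight, secondary = skin_tone_identifier(pixels)
print(secondary)   # the cool outlier, most hue-distant from the warm primary
```
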
- The rendered user model is stored in the
user model database 34. - Referring to
FIG. 5, the process of a user simulating modeling or "trying on" a garment is shown. First, the rendered user model is received 305. The user selects a garment 310. The system maps the garment to the user model 315, using the pairing data and body reference data to associate regions of the selected garment with regions of the user model. The user selected garment is scaled and overlaid on the user model according to the system generated user model and the user selected garment, correlating garment regions to user model regions. At step 315, the simulated model is displayed on the video screen 24, as shown in FIG. 7. The user is presented the option to change the background 320 or to change the simulated model's "pose" 325. - Insofar as the description above and the accompanying drawings disclose any additional subject matter, the inventions are not dedicated to the public and the right to file one or more applications to claim such additional inventions is reserved.
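Correlating garment regions to user model regions via the pairing data amounts to solving for a scale and offset that align paired garment and body reference points. A one-dimensional sketch along the vertical image axis, with illustrative point names not taken from the specification:

```python
def fit_garment(garment_pts, body_pts):
    """Given two paired vertical coordinates (e.g. collar line and lower edge)
    on the garment image and the matching body reference lines on the user
    model, return (scale, offset) such that y_model = scale * y_garment + offset."""
    (g1, g2), (b1, b2) = garment_pts, body_pts
    scale = (b2 - b1) / (g2 - g1)
    offset = b1 - scale * g1
    return scale, offset

# Garment image: collar at y=10, lower edge at y=110.
# User model: shoulder line at y=250, hip line at y=450.
scale, offset = fit_garment((10, 110), (250, 450))
print(scale, offset)   # 2.0 230.0
```

With more than two paired points, a least-squares fit over the same form would serve the same purpose.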
Claims (34)
1. A method of simulating modeling a garment comprising the steps of:
providing a dictionary having a plurality of figure frameworks, said plurality of figure frameworks comprising varying body characteristics and measurements, each of said figure frameworks comprising at least one image and body reference data;
providing a garment database comprising garment images and pairing data for a plurality of garments;
receiving a user image and a garment selection;
extracting the facial region from said user image;
determining a skin tone identifier based on said user image;
selecting a figure framework in response to user input and garment selection;
rendering a three dimensional model from said selected framework, shading said model based on said skin tone identifier;
stitching said facial region to said rendered model to form a user model; and
overlaying and scaling said selected garment on said user model, whereby the system simulates said user wearing said selected garment.
2. The process according to claim 1 wherein said figure frameworks represent two dimensional data.
3. The process of claim 2 , wherein said dictionary includes a series of associated images in different postures for a set of two dimensional figure frameworks of like body characteristics and measurements.
4. The process according to claim 1 wherein said figure frameworks represent three dimensional data.
5. The process according to claim 1 , wherein said figure frameworks include a neck portion, torso, and legs.
6. The process of claim 1 , wherein said varied characteristics and measurements of said plurality of figure frameworks include relative weight and height.
7. The process of claim 1 , wherein said varied characteristics and measurements of said plurality of figure frameworks are selected from the following: relative weight, height, band, waist, hip, inseam, rise, thigh, arm length, sleeve length, and upper arm length.
8. The process of claim 1 , wherein said user input includes weight and height.
9. The process of claim 1 , wherein said user input includes pant size and shirt size.
10. The process of claim 1 , wherein said user input includes options selected from the following:
weight, height, band, waist, hip, inseam, rise, thigh, arm length, sleeve length, and upper arm length.
11. The process of claim 1 , wherein said user image comprises simple optical camera data.
12. The process of claim 1 , wherein said garment images comprise simple optical camera data.
13. The process of claim 1 , wherein the system provides an interface with guides for user facilitated facial region detection.
14. The process of claim 1 , wherein the system extracts relative color frequency from an area of the facial region to determine said skin tone identifier, said skin tone identifier calculated from a primary diffuse color, a shadow color, a highlight color, and a secondary diffuse color.
15. The process of claim 14 , wherein said primary diffuse color comprises the most frequent color in said area, said shadow color comprises the most frequent dark color in said area, said highlight color comprises the most frequent bright color in said area, and said secondary diffuse color comprises the color with the most difference in hue from said primary diffuse color.
16. The process of claim 14 , wherein the system selects the chin area of the facial region for sampling.
17. The process of claim 1 , wherein said framework selector module is configured to retrieve a figure framework representative of the user having an altered weight facade.
18. A system for simulating modeling a garment comprising:
a dictionary having a plurality of figure frameworks, said plurality of figure frameworks comprising varying body characteristics and measurements, each of said figure frameworks comprising at least one image and body reference data;
a garment database comprising garment images and pairing data for a plurality of garments;
an interface configured to receive a user image and a garment selection;
a facial extraction module configured to extract the facial region from said user image and determine a skin tone identifier based on said user image;
a framework selector module configured to select a figure framework in response to user input and garment selection;
a rendering engine configured to render a three dimensional model from said selected framework, shading said model based on said skin tone identifier and stitch said facial region to said rendered model to form a user model; and
said rendering engine overlaying and scaling said selected garment on said user model, whereby the system simulates said user wearing said selected garment.
19. The system of claim 18 , wherein said figure frameworks represent two dimensional data.
20. The system of claim 19 , wherein said dictionary includes a series of associated images in different postures for a set of two dimensional figure frameworks of like body characteristics and measurements.
21. The system of claim 18 wherein said figure frameworks represent three dimensional data.
22. The system of claim 18 , wherein said figure frameworks include a neck portion, torso, and legs.
23. The system of claim 18 , wherein said varied characteristics and measurements of said plurality of figure frameworks include relative weight and height.
24. The system of claim 18 , wherein said varied characteristics and measurements of said plurality of figure frameworks are selected from the following: relative weight, height, band, waist, hip, inseam, rise, thigh, arm length, sleeve length, and upper arm length.
25. The system of claim 18 , wherein said user input includes weight and height.
26. The system of claim 18 , wherein said user input includes pant size and shirt size.
27. The system of claim 18 , wherein said user input includes options selected from the following: weight, height, band, waist, hip, inseam, rise, thigh, arm length, sleeve length, and upper arm length.
28. The system of claim 18 , wherein said user image comprises simple optical camera data.
29. The system of claim 18 , wherein said garment images comprise simple optical camera data.
30. The system of claim 18 , wherein the system provides an interface with guides for user facilitated facial region detection.
31. The system of claim 18 , wherein the system extracts relative color frequency from an area of the facial region to determine said skin tone identifier, said skin tone identifier calculated from a primary diffuse color, a shadow color, a highlight color, and a secondary diffuse color.
32. The system of claim 31 , wherein said primary diffuse color comprises the most frequent color in said area, said shadow color comprises the most frequent dark color in said area, said highlight color comprises the most frequent bright color in said area, and said secondary diffuse color comprises the color with the most difference in hue from said primary diffuse color.
33. The system of claim 31 , wherein the system selects the chin area of the facial region for sampling.
34. The system of claim 18 , wherein said framework selector module is configured to retrieve a figure framework representative of the user having an altered weight facade.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/733,865 US20130170715A1 (en) | 2012-01-03 | 2013-01-03 | Garment modeling simulation system and process |
PCT/US2013/055103 WO2014028714A2 (en) | 2012-08-15 | 2013-08-15 | Garment modeling simulation system and process |
US14/421,836 US10311508B2 (en) | 2012-08-15 | 2013-08-15 | Garment modeling simulation system and process |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261631318P | 2012-01-03 | 2012-01-03 | |
US13/586,845 US20130173226A1 (en) | 2012-01-03 | 2012-08-15 | Garment modeling simulation system and process |
US13/733,865 US20130170715A1 (en) | 2012-01-03 | 2013-01-03 | Garment modeling simulation system and process |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/586,845 Continuation US20130173226A1 (en) | 2012-01-03 | 2012-08-15 | Garment modeling simulation system and process |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/586,845 Continuation US20130173226A1 (en) | 2012-01-03 | 2012-08-15 | Garment modeling simulation system and process |
US14/421,836 Continuation US10311508B2 (en) | 2012-08-15 | 2013-08-15 | Garment modeling simulation system and process |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130170715A1 true US20130170715A1 (en) | 2013-07-04 |
Family
ID=50102287
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/733,865 Abandoned US20130170715A1 (en) | 2012-01-03 | 2013-01-03 | Garment modeling simulation system and process |
US14/421,836 Active 2034-10-14 US10311508B2 (en) | 2012-08-15 | 2013-08-15 | Garment modeling simulation system and process |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/421,836 Active 2034-10-14 US10311508B2 (en) | 2012-08-15 | 2013-08-15 | Garment modeling simulation system and process |
Country Status (1)
Country | Link |
---|---|
US (2) | US20130170715A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10332176B2 (en) | 2014-08-28 | 2019-06-25 | Ebay Inc. | Methods and systems for virtual fitting rooms or hybrid stores |
US20140368499A1 (en) * | 2013-06-15 | 2014-12-18 | Rajdeep Kaur | Virtual Fitting Room |
US10529009B2 (en) | 2014-06-25 | 2020-01-07 | Ebay Inc. | Digital avatars in online marketplaces |
US10653962B2 (en) | 2014-08-01 | 2020-05-19 | Ebay Inc. | Generating and utilizing digital avatar data for online marketplaces |
JP6800676B2 (en) * | 2016-09-27 | 2020-12-16 | キヤノン株式会社 | Image processing equipment, image processing methods and programs |
CN109427083B (en) * | 2017-08-17 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Method, device, terminal and storage medium for displaying three-dimensional virtual image |
US11140936B2 (en) | 2018-02-27 | 2021-10-12 | Levi Strauss & Co. | Guided allocation in an apparel management system |
US20210073886A1 (en) | 2019-08-29 | 2021-03-11 | Levi Strauss & Co. | Digital Showroom with Virtual Previews of Garments and Finishes |
WO2022081745A1 (en) * | 2020-10-13 | 2022-04-21 | Maze Ar Llc | Real-time rendering of 3d wearable articles on human bodies for camera-supported computing devices |
JP6960714B1 (en) * | 2021-03-31 | 2021-11-05 | 功憲 末次 | Display system |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020052805A1 (en) * | 2000-10-31 | 2002-05-02 | Junji Seki | Sales transaction support method, sales transaction support apparatus |
US6546309B1 (en) * | 2000-06-29 | 2003-04-08 | Kinney & Lange, P.A. | Virtual fitting room |
US20030101105A1 (en) * | 2001-11-26 | 2003-05-29 | Vock Curtis A. | System and methods for generating virtual clothing experiences |
US20050234782A1 (en) * | 2004-04-14 | 2005-10-20 | Schackne Raney J | Clothing and model image generation, combination, display, and selection |
US20060080182A1 (en) * | 2003-10-21 | 2006-04-13 | Thompson Robert J | Web-based size information system and method |
US20070220540A1 (en) * | 2000-06-12 | 2007-09-20 | Walker Jay S | Methods and systems for facilitating the provision of opinions to a shopper from a panel of peers |
US20080163344A1 (en) * | 2006-12-29 | 2008-07-03 | Cheng-Hsien Yang | Terminal try-on simulation system and operating and applying method thereof |
US20080255920A1 (en) * | 2005-09-01 | 2008-10-16 | G & K Services,Inc. | Virtual Sizing System and Method |
US20100030578A1 (en) * | 2008-03-21 | 2010-02-04 | Siddique M A Sami | System and method for collaborative shopping, business and entertainment |
US7844076B2 (en) * | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US20100306082A1 (en) * | 2009-05-26 | 2010-12-02 | Wolper Andre E | Garment fit portrayal system and method |
US20120086783A1 (en) * | 2010-06-08 | 2012-04-12 | Raj Sareen | System and method for body scanning and avatar creation |
US8275590B2 (en) * | 2009-08-12 | 2012-09-25 | Zugara, Inc. | Providing a simulation of wearing items such as garments and/or accessories |
US8525847B2 (en) * | 2009-06-01 | 2013-09-03 | Apple Inc. | Enhancing images using known characteristics of image subjects |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2837593B1 (en) * | 2002-03-22 | 2004-05-28 | Kenneth Kuk Kei Wang | METHOD AND DEVICE FOR VIEWING, ARCHIVING AND TRANSMISSION ON A NETWORK OF COMPUTERS OF A CLOTHING MODEL |
JP4104904B2 (en) * | 2002-05-29 | 2008-06-18 | 富士フイルム株式会社 | Image processing method, apparatus, and program |
US7471846B2 (en) * | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US20110141101A1 (en) * | 2009-12-11 | 2011-06-16 | Two Loons Trading Company, Inc. | Method for producing a head apparatus |
US9959453B2 (en) * | 2010-03-28 | 2018-05-01 | AR (ES) Technologies Ltd. | Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature |
US9098873B2 (en) | 2010-04-01 | 2015-08-04 | Microsoft Technology Licensing, Llc | Motion-based interactive shopping environment |
GB201104312D0 (en) * | 2011-03-14 | 2011-04-27 | Bell Alexandra | Improved virtual try on simulation service |
US8824808B2 (en) * | 2011-08-19 | 2014-09-02 | Adobe Systems Incorporated | Methods and apparatus for automated facial feature localization |
Non-Patent Citations (8)
Title |
---|
"HSL and HSV" (downloaded from www.wikipedia.org on 4/28/2015) *
Begole et al, "Designed to Fit: Challenges of Interaction Design for Clothes Fitting Room Technologies", Human-Computer Interaction, Part IV, HCII 2009, LNCS 5613, pp. 448-457, 2009 * |
Bodhani, Aasha, "Shops Offer the E-Tail Experience", Engineering&Technology, June 2012 * |
Divivier et al, "Virtual Try On, Topics in Realistic, Individualized Dressing in Virtual Reality", Proceedings of the Virtual and Augmented Reality Status Conference, Germany, 2004 * |
Fretwell, Lisa, "Cisco StyleME Virtual Fashion Mirror", December 2011 * |
Laird, Sam, "Clothes Shopping with Bodymetrics Lets You Try It On For Virtual Size", Mashable, January 9, 2012 * |
Sobottka et al, "Looking for Facial Features in Color Images", Pattern Recognition and Image Analysis: Advantages in Mathematical Theory and Applications, Russian Academy of Sciences, Vol. 7, No. 1, 1997 * |
Sterling, Bruce, "GoldRun Revolutionizes Mobile Marketing" (Augmented Reality: GoldRun|Beyond the Beyond, November 2010, downloaded from www.wired.com) *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8843402B2 (en) * | 2001-11-26 | 2014-09-23 | Curtis A. Vock | System for generating virtual clothing experiences |
US20130128023A1 (en) * | 2001-11-26 | 2013-05-23 | Curtis A. Vock | System for generating virtual clothing experiences |
CN104156912A (en) * | 2014-08-18 | 2014-11-19 | 厦门美图之家科技有限公司 | Portrait heightening image processing method |
US11042919B2 (en) * | 2014-12-23 | 2021-06-22 | Bit Body, Inc. | Methods of capturing images and making garments |
CN107430542A (en) * | 2014-12-23 | 2017-12-01 | 彼博迪公司 | Obtain image and the method for making clothes |
US20170372395A1 (en) * | 2014-12-23 | 2017-12-28 | Bit Body, Inc. | Methods of capturing images and making garments |
US20190172114A1 (en) * | 2014-12-23 | 2019-06-06 | Bit Body, Inc. | Methods of capturing images and making garments |
US20170263031A1 (en) * | 2016-03-09 | 2017-09-14 | Trendage, Inc. | Body visualization system |
US10573077B2 (en) * | 2018-03-02 | 2020-02-25 | The Matilda Hotel, LLC | Smart mirror for location-based augmented reality |
US20190272675A1 (en) * | 2018-03-02 | 2019-09-05 | The Matilda Hotel, LLC | Smart Mirror For Location-Based Augmented Reality |
CN113272852A (en) * | 2019-01-03 | 2021-08-17 | 株式会社艾迪讯 | Method for acquiring photograph for measuring body size, and body size measuring method, server, and program using same |
US20220078339A1 (en) * | 2019-01-03 | 2022-03-10 | Idiction Co., Ltd. | Method for obtaining picture for measuring body size and body size measurement method, server, and program using same |
CN111783182A (en) * | 2020-07-07 | 2020-10-16 | 恒信东方文化股份有限公司 | Modeling method and system for a three-dimensional virtual mannequin |
Also Published As
Publication number | Publication date |
---|---|
US20150235305A1 (en) | 2015-08-20 |
US10311508B2 (en) | 2019-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20130170715A1 (en) | Garment modeling simulation system and process | |
US20130173226A1 (en) | Garment modeling simulation system and process | |
US8976230B1 (en) | User interface and methods to adapt images for approximating torso dimensions to simulate the appearance of various states of dress | |
US9928411B2 (en) | Image processing apparatus, image processing system, image processing method, and computer program product | |
US9928412B2 (en) | Method, medium, and system for fast 3D model fitting and anthropometrics | |
US11640672B2 (en) | Method and system for wireless ultra-low footprint body scanning | |
CN103106604B (en) | Based on the 3D virtual fit method of body sense technology | |
CN107251026B (en) | System and method for generating virtual context | |
US9147207B2 (en) | System and method for generating image data for on-line shopping | |
EP3479296A1 (en) | System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision | |
WO2019167063A1 (en) | Virtual representation creation of user for fit and style of apparel and accessories | |
CN108986159A (en) | Method and apparatus for reconstructing and measuring a three-dimensional (3D) human body model
CN113255025A (en) | System and method for generating virtual content from three-dimensional models | |
WO2020203656A1 (en) | Information processing device, information processing method, and program | |
US20150269759A1 (en) | Image processing apparatus, image processing system, and image processing method | |
CN105069837B (en) | Garment try-on simulation method and device
WO2020104990A1 (en) | Virtually trying cloths & accessories on body model | |
Masri et al. | Virtual dressing room application | |
KR101158453B1 (en) | Apparatus and Method for coordinating a simulated clothes with the three dimensional effect at plane using the two dimensions image data | |
KR101508161B1 (en) | Virtual fitting apparatus and method using digital surrogate | |
WO2018182938A1 (en) | Method and system for wireless ultra-low footprint body scanning | |
WO2014028714A2 (en) | Garment modeling simulation system and process | |
CN116246041A (en) | AR-based mobile phone virtual fitting system and method | |
KR20210130420A (en) | System for smart three dimensional garment fitting and the method for providing garment fitting service using there of | |
Wang et al. | Application Performance Experiment of Three-dimensional Anthropometric Virtual Fitting System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |