GB2503331A - Aligning garment image with image of a person, locating an object in an image and searching for an image containing an object
- Publication number: GB2503331A
- Application number: GB1307246.7A (GB201307246A)
- Authority: GB (United Kingdom)
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/70 — Image analysis: determining position or orientation of objects or cameras
- G06T11/60 — 2D image generation: editing figures and text; combining figures or text
- G06F16/5838 — Still image retrieval using metadata automatically derived from the content, using colour
- G06T19/20 — Manipulating 3D models or images: editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06T3/14 — Geometric image transformations in the plane of the image: transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T7/30 — Image analysis: determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33 — Image registration using feature-based methods
Abstract
A method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image. The method includes allocating nodes to predetermined key points on the garment, analysing the shape of the person to find the outline of the person, and allocating nodes to predetermined points on the person, such as the wrists, elbows, shoulders and chest. The first and second images are manipulated to align the predetermined points, and the first image is scaled and overlaid onto the second image. A method for analysing an image to locate and isolate an object within the image from a background includes classifying the colour of each pixel of the image, and estimating a colour descriptor of the object and of the background to isolate the object in the image and identify a shape descriptor of the object. A method of searching for an image containing an object includes determining the complexity of the image and the bounds, shape descriptor and colour descriptor of the object, and returning images based on comparison of the colour and shape descriptors.
Description
Title: A System and Method for Image Analysis
Description of Invention
The present invention relates to a system and method for the identification of an item in an image, the analysis of that item, and the matching of the image with other images including similar items. Additionally, the present invention relates to the fitting of an item in a first image to a particular shape stored in a second image. More specifically, the present invention relates to the identification of a garment in an image, the matching of a garment in the image with other, similar garments in images, and the virtual fitment of a garment shown in an image on a representation (or image) of a person.
Particularly, it is desirable to be able to search for a garment or item of clothing which is similar to another, particular type or style of garment or item of clothing, view similar garments or items of clothing and virtually 'fit' the garment or item of clothing to an image of a person.
Previous systems have been suggested which allow limited searching of images by using an image as the basis of the search (or the 'search parameters'). The tools which exist presently, such as Google's "Google Goggles"®, use analysis of the shape and colour contained in an image to provide similar images. Google Goggles uses object recognition algorithms, and is reliant on the extensive library of images that Google already possesses in its databases (as evidenced by Google's image search function). It is known that Goggles does not work well with images of 'soft' or 'organic' shapes, such as clothing, furniture or animals.
Additionally, there are known services which take input in the form of uploaded images containing garments, and seek to match the garment in the image with similar garments in the database. These systems, however, require user interaction to identify which type of garment is present in the image. Such services are limited, because they are heavily reliant upon user interaction.
WO01/11886 discloses a system for the virtual fitting of clothing which analyses only the 'critical' points of a garment. Additionally, systems are known which require the preparation of time-consuming 3D scans of garments to be 'fitted' prior to the garments being available for online virtual fitting (on a 3D torso based on measurements). Additionally, other known systems utilise pre-acquired images of clothing on an adjustable model which has been pre-set to model particular body shapes.
It is an object of the present invention to provide a system and method which accurately determines a garment type in an image, provides image results from a database which contain similar garments to those in the source image, and allows a user to virtually 'fit' the garments in an image to a representation of themselves.
Accordingly, one aspect of the present invention provides a method for analysing an image to locate and isolate an object within the image, the image containing at least an object against a background, the method comprising classifying the colour of each pixel of the image, estimating, based on the colour of each pixel, the colour descriptor of the object and the colour descriptor of the background of the image, determining, based on the colour of the object and the colour of the background, the locations in the image of the object and the background, isolating the object in the image, and identifying the shape descriptor of the object.
Preferably, classifying the colour of each pixel comprises first determining whether the pixel is white, grey or black, and if the pixel is not white, grey or black, determining whether the pixel is red, orange, yellow, green, cyan, blue, purple or pink.
Conveniently, after the colour classification step, the method further includes the step of transferring the pixel colour space from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity).
Advantageously, estimating the respective colour descriptors of the object and the background includes creating a colour histogram based on the pixel colours of the object and background.
Preferably, estimating the respective colour descriptors of the object and background includes determining whether the colour of the object and the colour of substantially all of the background are similar.
Conveniently, the method includes the step of calculating the ratio of the number of pixels in the image having one colour to the total number of pixels in the image.
Advantageously, if the ratio is calculated as being 0.8 or higher, concluding that the background and object are the same colour.
Preferably, if the estimated colour descriptors of the object and the background are similar, the step of determining the location of the object and the background comprises using analysis of the edges of the object.
Conveniently if the colour descriptors of the object and background are not similar, the step of determining the location of the object and the background comprises discarding the pixel data relating to the background.
Advantageously, estimating the colour descriptors of the object and background further includes determining whether the background includes regions of a colour similar to the colour of the object.
Preferably, if it is determined that the background includes regions of a colour similar to the colour of the object, the method further comprises clustering pixels forming the image and analysing the clusters by way of a k-means algorithm, separating the regions of similar colour in the background from the object.
Conveniently, the method includes the further steps of analysing the isolated object to identify areas of the object of a similar colour to the background, and if there are areas of a similar colour present in the object, applying a morphological operation to the image of the object to remove these areas.
Advantageously, the step of determining the locations of the object and background in the image includes assuming that the object is in a central region of the image.
Preferably, the step of determining the locations of the object and the background in the image includes making an assumption regarding the location of the object with respect to the background, and comparing the estimated colours of the object and background to determine which is the object and which is the background.
Conveniently, the step of identifying the shape descriptor of the object includes comparing the object in the image with a selection of pre-determined objects having known shapes.
Preferably, the method further includes the steps of comparing the shape descriptor of the object in the image with other images containing a similar object, and using the data obtained from the comparison to improve the shape descriptor identification.
Conveniently, the method further includes the step of identifying the pattern descriptor of the object in the image.
Advantageously, the step of identifying the pattern descriptor comprises using a k-means algorithm.
Preferably, using the k-means algorithm clusters similar patterns on the object, and the dominant pattern is determined to identify the pattern descriptor.
Another aspect of the present invention provides a method of searching for an image containing an object, the method comprising the steps of identifying an image to be used as the basis for the search, determining the complexity of the image, determining the bounds of the object within the image, identifying the shape descriptor of the object within the image based on the identified bounds of the object, determining the colour descriptor of the object within the image, comparing the object in the image with the content of other, predetermined images, based upon the colour and shape descriptors of the object, and returning images which, based on the comparison of the object of the image and the predetermined images, include content similar to the basis of the search.
Preferably, the step of identifying the image to be used as the basis for the search includes receiving an image from a user.
Conveniently, the step of determining the complexity of the image includes performing pixel colour analysis of the image.
Advantageously, the step of determining the complexity includes using a human detection algorithm.
Preferably, the step of determining the complexity further includes analysis of the background of the image.
Conveniently, the step of determining the complexity of the image includes analysing shapes present in the image.
Advantageously, the step of determining the bounds of the image includes performing edge analysis.
Preferably, the step of determining the bounds of the image includes performing colour and texture analysis.
Conveniently, the step of determining the bounds of the image comprises manually determining the bounds.
Advantageously, the step of identifying the shape descriptor of the object within the image includes comparing the determined image bounds with a selection of pre-determined objects having known shape descriptors.
Preferably, the step of determining the colour descriptor of the object includes analysing the colours of the pixels within the image.
Conveniently, the method further includes the step of creating a colour histogram based upon the pixel data.
Advantageously, the step of comparing the object in the image with predetermined images includes analysis of a database of pre-processed images.
Preferably, the step of returning the results includes providing images and data relating to the images.
Conveniently, the data relating to the images includes price, size, colour, pattern, texture, textile and/or supply data.
Advantageously, the basis for the search further includes providing data relating to price, size, colour, pattern, texture, textile and/or supply data.
Preferably, further including analysing and comparing the data with predetermined catalogued data associated with the predetermined images.
Conveniently, the method further includes the step of identifying the pattern descriptor of the object in the image.
Advantageously, the step of identifying the pattern descriptor comprises using a k-means algorithm.
A yet further embodiment of the present invention provides a method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image, the method including the steps of identifying the garment in the first image and the person in the second image, analysing the shape of the garment in the first image, allocating nodes to predetermined points on the garment, associated with the garment shape and predetermined shape data, analysing the shape of the person in the second image to find the outline of the person in the second image, allocating nodes to predetermined points on the person, associated with the shape of the person and predetermined shape data, analysing the predetermined points of both the garment and the person and determining alignment information, manipulating the first and second images to align the predetermined points, scaling the first image based upon the dimensions of the second image, and overlaying the first image onto the second image.
Preferably, identifying the garment in the first image includes comparing the shape of the garment with a selection of pre-determined objects having known shapes.
Conveniently, the method further includes the step of isolating the garment in the first image from the background of the image.
Advantageously, the step of isolating is carried out automatically.
Preferably, the step of isolating is carried out manually.
Conveniently, analysing the shape of the garment includes analysing the outline of the garment and creating a map of the shape of the garment.
Advantageously, the step of allocating nodes to predetermined points on the garment includes performing shape analysis of the outline found for the garment and subsequently identifying the points based on predetermined criteria.
Preferably, analysing the shape of the person includes analysing the outline of the person and creating a map of the shape of the person.
Conveniently, the method further includes the step of displaying the map of the shape of the person as an estimation of the outline of the person.
Advantageously, the step of allocating nodes to predetermined points on the person includes analysing the shape of the outline found for the person and subsequently identifying the points based on predetermined criteria.
Preferably, the method further includes the step of placing the predetermined points of the garment in the first image on at least one of neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees or ankles.
Conveniently, the method further includes the step of placing the predetermined points of the person in the second image on at least one of head, neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees, ankles or feet.
Advantageously, the step of analysing the predetermined points on both the garment and person to determine alignment information includes forming correspondences between the predetermined points on the garment in particular locations and predetermined points on the person in particular locations.
Preferably, the particular locations on the person include the joints of the person.
Conveniently, the particular locations on the garment include the areas of the garment which correspond to joints of a person.
Advantageously, the step of manipulating the first and second images uses one-to-one mapping of the predetermined points.
Preferably, the step of scaling the first image based on the dimensions of the second image includes analysing the distances between the predetermined points in both the first and second images and subsequently scaling the first image based upon the distances in the second image.
Conveniently, the step of scaling the first image further takes into account the outline determined for the person in the second image and includes scaling the garment accordingly.
Advantageously, the scaling takes into account the height and weight of the person in the second image.
Preferably, the method further includes the step of analysing the lighting of the person in the second image and applying simulated lighting to the garment in the first image based on the lighting in the second image.
Embodiments of the present invention will now be described, by way of example, with reference to the accompanying figures, in which:
FIGURE 1 shows a flow diagram incorporating both image search and garment fitting;
FIGURE 2 shows examples of images of garments to be isolated from the background;
FIGURE 3 shows examples of the isolation of the shapes of two of the garments shown in Figure 2;
FIGURE 4 shows a schematic view of the garment isolation process;
FIGURE 5 shows a schematic view of the garment fitting process;
FIGURE 6 shows a flow diagram of the fitment process;
FIGURE 7 shows stages of the virtual fitting process;
FIGURE 8 shows further stages of the virtual fitting process; and
FIGURE 9 shows the relative location of a vertex during a fitting process.
Each aspect of the present invention will be discussed in turn, and it is to be understood that each of the aspects may, if desired, be used in isolation from the others. Figure 1 shows a flow diagram setting out some of the steps associated with the image search and garment fitting aspects of the invention.
Image Search

The image search tool allows a user to search for images of garments, based upon matches to an existing image rather than keywords, and comprises two portions, which will be discussed in turn.
Analysis and Cataloguing

The first portion of the search tool is the analysis and cataloguing of images which are to be made available to be searched. This analysis process is, in general, carried out as an offline cataloguing operation as new garment images are incorporated into the database. These images may be provided by a feed from a fashion house or fashion retailer, or may be manually incorporated. The images may be analysed 'on-the-fly' as they are imported into the database, or alternatively the images may be imported and then analysed.
Image Isolation

In the analysis, the database images may first be segmented to isolate the garment in the image, and then their colour, shape and pattern descriptors extracted. The analysis must locate and identify varying garment types which are pictured on varying backgrounds, because different fashion houses and clothing retailers use different types and configurations of images. Figure 2 shows three such examples of garments to be analysed, with backgrounds of varying complexity.
For instance, some retailers use pictures of garments on a white or neutral background, some use images of garments worn by models, some use images with a shaded background, and some use images which include objects in the background or are more complex. Further, some images include a combination of some or all of the above.
The method employed in this analysis is a 'pixel colour'-based method. When analysing images to extract the garment contained therein, there are a number of issues which must be overcome. These include situations in which the garment is a similar colour to the skin of the model (making it difficult to ascertain easily and with confidence where the garment ends and the model begins) and situations where the garment is a similar colour to the background of the picture (making it difficult to locate the garment against the background).
Colour Classification

In carrying out the analysis on an image, each pixel is classified as a colour. This colour is, in general, chosen from a set which includes white, black, gray, red, orange, yellow, green, cyan, blue, purple and pink. As a first step, each pixel is analysed to establish if the pixel colour is white, black or gray. If not, as a second step, one of the other colours is assigned, as will be explained in detail later.
Once the initial analysis is completed, the image is transferred from an RGB (Red, Green, Blue) colour space to an HSI (Hue, Saturation, Intensity) colour space, where three colour channels are extracted from the or each image.
Additionally, a fourth channel, V, may be used which represents the minimum value of RGB components in each pixel.
Next, the values of all channels are normalized to the range [0, 1]. For each pixel, a process may be carried out to establish whether the pixel is white, black or gray, using the following conditions:

white if 0.75 < V and S < 0.2 + 3(0.2 − (1 − V))
black if I < 0.25 and S < 0.25 + 3(0.25 − I)
gray if 0.25 < I < 0.75 and S < 0.2

If a pixel is determined to be white, grey or black, the pixel colour information is recorded. However, if a pixel is not identified as being white, black or grey, further analysis is required. The colour of each remaining pixel may be determined from its hue using the following conditions:

red if 0.96 < H or H ≤ 0.03
orange if 0.03 < H and H ≤ 0.10
yellow if 0.10 < H and H ≤ 0.21
green if 0.21 < H and H ≤ 0.44
cyan if 0.44 < H and H ≤ 0.56
blue if 0.56 < H and H ≤ 0.78
purple if 0.78 < H and H ≤ 0.88
pink if 0.88 < H and H ≤ 0.96

This cataloguing method assigns a colour to all pixels of the image, including background pixels. Once this has been completed, it is necessary to isolate the garment from the remainder (the background) of the image so as to obtain the information regarding the garment.
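By way of illustration, the classification step may be sketched as follows. This is a minimal sketch assuming the reconstructed thresholds above; the function and constant names are illustrative and not part of the disclosed method.

```python
# Illustrative per-pixel colour classification (thresholds as reconstructed above).
HUE_BINS = [("orange", 0.03, 0.10), ("yellow", 0.10, 0.21),
            ("green", 0.21, 0.44), ("cyan", 0.44, 0.56),
            ("blue", 0.56, 0.78), ("purple", 0.78, 0.88),
            ("pink", 0.88, 0.96)]  # red wraps around: H > 0.96 or H <= 0.03

def classify_pixel(r, g, b):
    """Classify one RGB pixel (components normalised to [0, 1])."""
    i = (r + g + b) / 3.0               # HSI intensity
    v = min(r, g, b)                    # fourth channel V: minimum of RGB
    s = 1.0 - v / i if i > 0 else 0.0   # HSI saturation
    # achromatic tests come first
    if v > 0.75 and s < 0.2 + 3 * (0.2 - (1 - v)):
        return "white"
    if i < 0.25 and s < 0.25 + 3 * (0.25 - i):
        return "black"
    if 0.25 <= i <= 0.75 and s < 0.2:
        return "gray"
    # otherwise bin by hue (standard RGB-to-hue formula, scaled to [0, 1])
    mx = max(r, g, b)
    c = mx - v
    if c == 0:
        return "gray"
    if mx == r:
        h = ((g - b) / c) % 6
    elif mx == g:
        h = (b - r) / c + 2
    else:
        h = (r - g) / c + 4
    h /= 6.0
    if h > 0.96 or h <= 0.03:
        return "red"
    for name, lo, hi in HUE_BINS:
        if lo < h <= hi:
            return name
    return "red"
```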
Background Removal and Isolation

Figures 3a and 3b show examples of isolation: two of the garments shown in Figure 2 are shown in isolation, with the shape of the garment accurately isolated in the image.
Most often, images of garments which are to be catalogued into the database include the or each garment presented against a white or monotonic background. If the background is white (or a monotonic colour), it is possible to discard parts of the image which are white (or the monotonic colour). However, in the case of a white garment on a white background (or indeed a monotonic garment on a monotonic background of the same colour), purely discarding the white (or monotonic) pixels would remove the background and the garment, and the process would fail.
Additionally, it is necessary to detect body parts present in the image, for instance hair, skin, and face (if the image includes a model wearing the garment). This is discussed in more detail later.
If the garment shown in an image is white, the majority of the pixels in the image will be white in colour. If the ratio of white pixels to all pixels in the image is larger than a threshold ratio, preferably a ratio of 0.8, it can be said that the garment in the image is white. However, in a situation where there are other objects in the background of the image, the threshold ratio will not be reached. Therefore, it is also necessary to consider one third of the width of the image (in the horizontal direction) and the full height of the image (in the vertical direction) and calculate the ratio of white pixels in that section.
However, since garments are not always presented in the centre of the image, this second, 'partial area' approach may fail in cases where the first approach succeeds. Therefore, it may be necessary to consider both criteria to establish whether or not the garment in the image is white. If it is determined that the garment in the image is white, edge information is used to obtain an approximate boundary for the outline and shape of the garment.
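A minimal sketch of these two criteria, assuming a 2-D array of per-pixel colour labels produced by the classifier; the helper name and the use of NumPy are illustrative:

```python
import numpy as np

def is_white_garment(labels, threshold=0.8):
    """labels: 2-D array of colour names; threshold as given in the description."""
    white = (labels == "white")
    global_ratio = white.mean()                           # criterion 1: whole image
    h, w = labels.shape
    central_ratio = white[:, w // 3: 2 * w // 3].mean()   # criterion 2: central third, full height
    # either criterion may succeed where the other fails, so both are considered
    return global_ratio > threshold or central_ratio > threshold
```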
If the garment is determined as not being white, the background may be removed, with the foreground of the image (the area which contains the garment) being retained. However, there is a possibility that when removing the white pixels, some pixels from the foreground (where the garment has small regions of white colour among other colours) may also be removed.
To prevent this from occurring, the non-white pixels of the garment may be considered as being '1' and the white pixels located in the garment may be considered as '0'. Therefore, any white colour regions in the texture of a garment appear as small 'holes'. These 'holes' may then be removed using morphological methods, without affecting the white pixels of the background. This makes it possible to distinguish the white pixels within the garment from the white pixels of the background, allowing removal of the white background.
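For example, the 'holes' clean-up might be realised with standard morphological tools; this sketch assumes SciPy, and the maximum hole size is an arbitrary illustrative parameter:

```python
import numpy as np
from scipy import ndimage

def remove_white_holes(labels, max_hole_px=500):
    """Return a boolean foreground mask with small white 'holes' filled."""
    fg = (labels != "white")                # non-white garment pixels -> '1'
    filled = ndimage.binary_fill_holes(fg)  # fill enclosed white regions
    holes = filled & ~fg                    # candidate 'holes'
    lbl, n = ndimage.label(holes)
    sizes = ndimage.sum(holes, lbl, range(1, n + 1))
    # keep only small filled regions; large white areas remain background
    small = np.isin(lbl, np.nonzero(sizes <= max_hole_px)[0] + 1)
    return fg | small
```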
Another difficulty arises when the garment in the image is being worn by a model, and the garment is of a colour similar to the model's skin or hair. The present invention includes a skin and hair detection algorithm to remove the skin and hair from an image. In situations where the garment has a colour similar to the skin or hair of a model, simple removal of pixels having the same colour as the skin or hair of the model also results in removal of the garment.
Two situations in which the garment shares the colour of hair or skin arise in general: the garment either has small regions of skin and/or hair colour among other colours, or it has a uniform, dominant colour of skin and/or hair.
In the case where the garment includes small regions of skin and/or hair colour, these small regions may be detected using morphological operations.
After this morphological step, if the ratio of skin and/or hair pixels to the whole number of foreground pixels is larger than a threshold, preferably 0.85, it may indicate that the garment's colour is similar to skin and/or hair. To distinguish between pixels which form the garment and those which are skin and/or hair, pixels that have this colour associated are clustered together, optimally into two or three clusters, by way of a k-means algorithm. The k-means algorithm partitions a number, n, of observations into k clusters, with each observation belonging to the cluster with the nearest mean, through an iterative refinement process. Taking into account that skin and/or hair are usually present in the outer part of the foreground, the algorithm may then determine which cluster belongs to the garment and which belongs to skin and/or hair.
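A sketch of that clustering step: the use of k-means and the 'outer part of the foreground' heuristic follow the description, while the position-plus-colour feature vector and the use of scikit-learn are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_garment_skin(pixels_xy, pixels_rgb, k=3):
    """pixels_xy: (n, 2) coordinates; pixels_rgb: (n, 3) colours in [0, 1]."""
    features = np.hstack([pixels_xy / pixels_xy.max(axis=0), pixels_rgb])
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    centre = pixels_xy.mean(axis=0)
    # skin/hair tends to lie in the outer part of the foreground, so the
    # cluster sitting closest to the centroid is taken to be the garment
    spread = [np.linalg.norm(pixels_xy[km.labels_ == c] - centre, axis=1).mean()
              for c in range(k)]
    return km.labels_ == int(np.argmin(spread))   # True for garment pixels
```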
Alternatively, if the background cannot be removed and the object or garment isolated automatically, the background may be removed semi-automatically: a user may be prompted to identify the region of the image which contains the garment and the background region of the image, by way of clicking, or drawing a line or number of lines in part of the object or garment, along with a line or number of lines in part of the background. Alternatively, a user may be asked to draw a bounding box around the object or garment in the image to aid in the isolation. The line-drawing or box-drawing process may be repeated in an iterative fashion to improve the accuracy of the isolation of the image or garment. The background may also be removed manually.
Figure 4 shows a schematic view of the isolation process, including a dotted-line indication of the background of the image and a solid line indication of the garment.
After the background has been removed, the object or garment in the foreground may then be isolated. Once isolated, or 'segmented', the colour, pattern and shape descriptors may be extracted from the image. To extract the colour descriptor, the colour information obtained in the analysis step may be used to generate a colour histogram for the garment. This allows the estimation of the colour of the garment.
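For instance, the colour descriptor might be computed as a normalised histogram over the colour classes, restricted to the isolated garment pixels; this is an illustrative sketch rather than the exact descriptor used:

```python
import numpy as np

COLOUR_NAMES = ["white", "black", "gray", "red", "orange", "yellow",
                "green", "cyan", "blue", "purple", "pink"]

def colour_descriptor(labels, garment_mask):
    """Histogram of colour classes over the garment pixels only."""
    garment = labels[garment_mask]
    hist = np.array([(garment == c).sum() for c in COLOUR_NAMES], dtype=float)
    hist /= max(hist.sum(), 1.0)                   # normalise to a distribution
    return hist, COLOUR_NAMES[int(hist.argmax())]  # descriptor, dominant colour
```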
Colour, Shape and Pattern Descriptors

For pattern descriptors, the edge information in areas of the garment is analysed. This analysis is carried out using a k-means algorithm to cluster the similar patterns in the garment (one garment may include different patterns).
Then, the most dominant clusters determined using the k-means algorithm are used to create the pattern descriptor for the garment.
For shape descriptors, an effective shape descriptor may be used. If the image being analysed shows the garment worn by a model, the garment silhouette is not the same as the original shape of the garment. The shape of the garment in the image is dependent, in general, on the pose adopted by the model in the image, and the way in which some parts of the garment may be hidden by the model's limbs.
The present invention addresses the problems associated with this shape difference. In some cases, retailers provide both 'flat' images of a garment and images of the garment being worn by a model. In such cases, the two garment images are analysed using the above-discussed methods, and subsequently compared, with the relationship between the corresponding images used to determine the shape descriptor of the garment.
In the case where the only images of a garment available are those of a model wearing the garment, the images may be compared with images of other, similar garments in the database which are shown in both 'flat' and 'worn' configurations to find the most similar picture and therefore to identify the shape descriptor of the unworn (or 'flat') garment. The most similar picture may be determined based on the pose of the model and garment silhouette, and the shape descriptor of the garment is assigned based upon this data.
When images have been analysed, the data is then stored in a database, made available to be searched. Retailers may provide a stream of images of garments to be included in the database, with the above method being carried out in an automated fashion, or in batches.
Searching with Images

The second step is the upload and processing step for searching. To enable the search to occur, the user may upload an image to a host or into a database accessible by the searching algorithm in a known fashion, or alternatively, link to an image which is already available online. This image may then be processed to be used as the basis for the image search.
The processing step may then be carried out on the image.
Image Complexity

After the image is uploaded (or linked to), the complexity of the image may be determined. This is preferably an automated step, and the complexity is determined using details extracted from the image. In general, the images used will contain a human; therefore, human detection algorithms may be used to detect whether any human parts (face, torso, legs, arms, body and the like) are present in the image. Secondly, the colours clustered in the image borders and/or background are analysed to determine the complexity of the image based upon the number of colours present in the borders and/or background.
The human detection coupled with the analysis of the colours may then be used to determine the complexity of the background.
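As an illustration, the complexity estimate might combine OpenCV's stock HOG person detector with a count of distinct colour classes along the image border; the scoring rule below is an assumption, not the patent's formula:

```python
import cv2
import numpy as np

def image_complexity(image_bgr, labels, border=10):
    """Estimate background complexity from human detection and border colours."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    people, _ = hog.detectMultiScale(image_bgr)     # human part detection

    # count distinct colour classes in a ring around the image border
    ring = np.concatenate([labels[:border].ravel(), labels[-border:].ravel(),
                           labels[:, :border].ravel(), labels[:, -border:].ravel()])
    border_colours = len(set(ring.tolist()))
    return {"humans": len(people),
            "border_colours": border_colours,
            "complex": len(people) > 0 or border_colours > 3}
```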
Once the complexity has been determined, the bounds of the image may be determined. To determine the bounds, a user may be asked to identify the bounds of the garment in the image that is to be used as the search parameter, by indication of e.g. a bounding box or indication line. Alternatively, the bounds of the garment may be determined automatically, or using the methods of identification discussed above.
Then, characteristics of the garment such as shape, colour and the like may be extracted from the image using the above algorithms and image processing techniques.
Based on these characteristics the garment class (e.g. tops, trousers, dresses, shirts, etc.) may also be determined, as discussed in more detail below.
Search characteristics may then be compared with predetermined characteristics extracted from images of garments in a database, to provide the user with garment images and details which are similar to the garment in the uploaded image, also discussed in more detail below.
Garment Classes

The analysis engine supports various garment classes, such that when an image is uploaded, and the image is analysed, no user input is required to identify the garment class. The analysis engine may include both a selection of pre-set 'template' garment classes, which may include exaggerated or 'stylised' garments, and a selection of real garment images which represent each of the garment classes.
The processing of the uploaded image assesses the content of the image, and as a first step, may analyse the intensity of the regions of the image, clustering them together to form areas of the image. The areas may then be merged, based upon assumptions made about the foreground and background of the image. This assumption regarding the foreground and background may be based upon input from a user on image upload, but if no user input is available, the system may assume that the corners of the image are background, with the rest of the image being the foreground. This may be carried out in the same way as for the image cataloguing analysis discussed above.
Then, an iterative process may be undertaken, using regression techniques, to extract the block shape of the garment in the image. If the outline of the garment in the image is particularly 'jagged', a final step may be undertaken, which smooths out the outline of the garment. The class of the garment may then be detected. This detection method may be applied to images of garments provided by retailers. Alternatively, the isolation process may be carried out in line with the above discussion.
Given that the detectable classes of garments are pre-set, there are only a limited number of garment classes which may be identified, and the analysed image may therefore be uploaded by a user, analysed by the system without input from the user, and then passed to the search engine. The class of garment is deduced from the outline of the garment by comparison with existing data in the database as discussed above, and the garment class which returns the most likely match is the garment class that may be assigned.
Alternatively, the garment class may be identified by the user.
The data extracted from the garment is then compared against the images of garments in the database, to find garments which 'match' the image uploaded by the user, and the results of the search are displayed to the user.
The search may also include parameters which are associated with the image of the garment, such as (but not limited to) price, size, colour, pattern, texture, textile and/or supply data. These parameters may be input by the user when the image which is to be the basis of the search is identified, and may also be stored in the database with the analysed images as meta-data to improve the search experience and usefulness of the search.
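A sketch of how such a comparison might rank catalogue entries by a weighted combination of colour and shape descriptor distances; the weights, the Euclidean distance and the catalogue layout are illustrative assumptions:

```python
import numpy as np

def search(query_colour, query_shape, catalogue, w_colour=0.5, w_shape=0.5, top=10):
    """catalogue: list of dicts with 'colour' and 'shape' descriptor arrays."""
    scores = [w_colour * np.linalg.norm(query_colour - e["colour"]) +
              w_shape * np.linalg.norm(query_shape - e["shape"])
              for e in catalogue]
    return [catalogue[i] for i in np.argsort(scores)[:top]]  # best matches first
```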
Garment Fitting

Images uploaded by a user and/or images from the database may also be virtually 'fitted' on to an image of a person. Figure 5 shows a schematic view of the virtual garment fitting process.
The virtual fitting aspect of the present invention allows for the alignment of a representation of a garment contained in an image with a representation of a person stored in a second image. The process will now be discussed in detail.
The virtual fitting is the alignment of an image of a garment with an image of a person, such that the garment is scaled and deformed to 'fit' onto the image of the person.
To enable virtual fitting, images of garments may be processed to analyse the shape and fit of the garments. This analysis may follow on from the image processing steps set out above. The shape and fit analysis process involves analysis of the shape characteristics of the garments, and places a number of 'nodes' at points of the garment to enable accurate and effective scaling and fitment of the garment.
Nodes and Predetermined Points

The 'nodes' are, in general, placed at predetermined 'key' points of the garment; for instance, with a jumper, the nodes may be placed at the portions of the garment which correspond to wrists, elbows, shoulders and chest, and in the case of trousers, the nodes may be placed at ankles, knees, hips and waist. It is to be understood that the nodes may be placed in positions which, dependent on the shape and fit of the garment, will allow accurate virtual fitment.
To enable an image of a person to be used for virtual fitting, it is necessary to analyse the image of a person. In general, as a first step, the system identifies the outline of the shape of the person in the image. If this cannot be achieved automatically, the user may be asked to position an 'outline' shape of a person on the image of the person. The outline shape may be scaled and deformed to fit with the shape of the person in the picture. The outline may, alternatively, be obtained using the background isolation steps above.
Once the outline has been determined, the predetermined 'key' points on the body of the person may then be identified using image analysis techniques.
Alternatively, the predetermined points may be indicated by a user.
Then, nodes may be identified which correspond to the shape and position of the person. These nodes may be joined together to form an outline of a rough 'skeleton' of the person, within the outline of the person determined previously.
The user may be asked to confirm the location of 'key nodes', such as those placed at elbows, knees and head. Once the nodes have been positioned, the image and associated spatial data is stored, and the image is ready for the subsequent fitment of a garment.
Garment Scaling

The user may then select an image of a garment (which may be an image returned by the search discussed above) and 'fit' it to their image, virtually. The image of the garment will have been analysed as above, and will then be manipulated and scaled, using image transformation algorithms, to 'fit' with the image of the person, using the nodes at the predetermined points in both the garment and person images.
Further, in the case of a garment image which has been provided by a retailer, size data may be associated with the image, to allow a user to see how different garment sizes would fit them. This requires the measurements of the person in the image to be associated with their analysed image. The present invention may further include a measurement module which may be used to measure the sizes of portions of a person in an image, based on the height and weight of the person in the image. This measurement module can be used to provide more accurate scaling and fitment of garments.
In order to create a more realistic result for the garment fitting, the lighting conditions in the picture of the user may be analysed, and similar lighting is applied to the virtually fitted garment to improve the realism and quality of the virtual fitting. Additionally, the virtual fitting system allows for the recognition and overlaying of a user's hair in the uploaded picture. In a situation where the user has, for instance, shoulder length hair or longer, the hair in the uploaded image may be identified using the processing techniques detailed above, with the hair placed over the fitted garment where necessary.
The virtual fitting algorithm requires correspondence between the garment silhouette and the model in the image of the user (the 'target image'). To provide optimal fitting results, the algorithm serves to preserve the characteristic features of the garment, i.e. the general fit of the garment and its specific shape. Also, the algorithm reflects the specific body shape of the model and deforms the garment accordingly, and minimises deviations from the original garment shape in order not to introduce unnecessary visual artefacts.
The virtual fitting algorithm consists of three main phases, which are shown in Figure 6 and are outlined in the following sections. The algorithm requires three inputs, which are:
- a garment binary mask, showing the areas of the garment image that are part of the garment itself, which may be extracted using the image processing techniques discussed above;
- the garment image itself, together with alignment information from a standard silhouette template; and
- the model image, together with alignment information from the same silhouette template used for the garment image.
While the silhouette template may differ in appearance to reflect garment and model body shape characteristics, both templates share the same number of points which are ordered in a similar fashion. Therefore, there is a specific one-to-one mapping between points at key positions along the silhouette on both images. For the rest of the algorithm, it may be assumed that the input silhouettes are generally correctly aligned to the garment and model respectively.
The contour selection phase, shown schematically in Figure 7, guarantees the best compromise between conformance to the model silhouette and garment shape fidelity. Selecting contour points exclusively from the edge of the garment binary mask perfectly preserves the garment shape but does not provide any information about the body position in relation to the garment itself. Conversely, selecting points from the silhouette template would cause the garment shape to be significantly altered during morphing and fitment, which may result, for example, in loose dresses being fitted tightly to the body.
The algorithm proceeds by automatically detecting the garment contour from the binary mask. This contour g(i) is then checked against the silhouette template on the garment image s_g(j) for all point pairs (i, j), and a decision is made according to:

c(i) = s_g(j*), where j* = argmin_j ||g(i) − s_g(j)||, if min_j ||g(i) − s_g(j)|| ≤ α; otherwise c(i) = g(i).

Therefore, the final contour c(i) is obtained by selecting a point from the body silhouette template s_g(j) if this is close to the garment contour, thus providing body alignment information, while the other points are taken from the extracted garment contour g(i), thus preserving the original garment shape.
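The decision rule may be sketched as follows, assuming contours are arrays of 2-D points; the value of the threshold α is not given in the description and is illustrative here:

```python
import numpy as np

def select_contour(g, s_g, alpha=5.0):
    """g: (n, 2) garment contour; s_g: (m, 2) silhouette template points."""
    c = np.array(g, dtype=float)
    for i in range(len(g)):
        dists = np.linalg.norm(s_g - g[i], axis=1)
        j = int(dists.argmin())
        if dists[j] <= alpha:
            c[i] = s_g[j]   # close to the template: take the body alignment point
    return c                # remaining points preserve the garment shape
```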
Given the specific one-to-one mapping between garment silhouettes and model images, it is possible to calculate an initial coarse registration to transfer the contour c(i) to the model image. The geometric relationship between the two silhouettes is defined by the general affine transform:

s_m(i) = A s_g(i) + b

where s_m(i) is the silhouette template in the model image, A is a 2×2 matrix and b is a translation vector.
The free parameters (A, b) of the general affine transform are computed by minimising:

(A, b) = argmin over (A, b) of Σ_i ||s_m(i) − A s_g(i) − b||²

This is computed using all points from the two templates, regardless of which ones have been used in the final contour c(i). This ensures that as much information as possible about the geometric relationship between the two templates in all areas of the image is provided for the estimated transform.
However, as there is a specific mapping between the silhouettes, the computed transform is only applied to points in c(i) chosen from the garment contour, whereas those originally belonging to s_g(i) are replaced with their matches from s_m(i).
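The least-squares estimate of (A, b) may, for instance, be computed as follows; the helper names are illustrative:

```python
import numpy as np

def fit_affine(s_g, s_m):
    """Solve s_m(i) ~= A @ s_g(i) + b in the least-squares sense."""
    design = np.hstack([s_g, np.ones((len(s_g), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(design, s_m, rcond=None)
    A, b = params[:2].T, params[2]                      # 2x2 matrix, translation
    return A, b

def apply_affine(points, A, b):
    return points @ A.T + b
```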
The result from this step of the algorithm is shown in Figure 7, where the dotted-line outline of the garment contour on the model image is obtained by transforming the estimated c(i), whereas the chained-line segments are taken directly from the silhouette template s_m(i).
The final phase of the transfer algorithm is local refinement of the transferred contour. Local inaccuracies in the transform estimate can arise from differences between the two silhouette templates that may have been locally altered to better fit either the garment or the model body characteristics.
In order to account for these local differences, the algorithm iteratively identifies segments originating from the garment contour that are discontinuous with respect to their surrounding segments. The discontinuities can fall under two possible cases. The first is segments obtained from the transformed g(i) that are sandwiched between others obtained from s_m(i), which are fixed; the second is segments whose end points are separated by a distance that exceeds a predefined threshold.
The algorithm proceeds by recursively finding the translation aligning the end points of two consecutive misaligned segments. The recursion is iterated until a fixed segment is found. When the algorithm terminates, two translations t1 and t2 are found for the two endpoints of each segment p_k(m), m ∈ [0..M], where k is the index of the segment and m is the index of the point within the k-th segment. The translation is then propagated to all other points in the segment by the weighted average:

t(m) = ((M − m)·t1 + m·t2) / M
The propagation of the translation concludes the algorithm, whose output is a set of two contours in the garment and model images with known correspondences. This then constitutes the input of the morphing stage for the final phase of the virtual fitting algorithm.
After transferring the key points to the coordinate system of the model, the relationship between these corresponding key points is used to obtain the way in which the garment should be deformed to fit on the model. The corresponding key points form a relationship between the picture of the model and the picture of the garment, as shown in Figure 8. This relationship is used, along with a regular rectangular grid to deform the garment based on the relationship between corresponding points to fit it on the model.
A rectangular grid is superimposed on to the model, as can be seen in Figure 8. For each vertex of the grid, its relative location with respect to the key points in close proximity to it is calculated. For each vertex of the grid, those key points that lie inside the cells containing that vertex are considered neighbours of that vertex. The relative location of each vertex is obtained with respect to its neighbouring key points. For each pair of neighbouring key points, the relative location of the vertex is obtained and saved. These relative locations are applied to the corresponding key points on the garment to obtain the corresponding vertex on the garment. Doing this for all vertices of the grid, the corresponding grid for the garment is obtained.
On the picture of the model in Figure 9, the relative location of a vertex with respect to key points is obtained as follows: v is taken to be a vertex point and p1 and p2 two neighbouring key points of v. The equation

v = p1 + a·(p2 − p1) + b·R90·(p2 − p1)

may then be used, where R90 is a 90-degree rotation matrix and a and b are local coordinates of v with respect to p1 and p2. (a, b) is the relative location of v with respect to p1 and p2, which can be obtained by solving a linear system of equations.
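A sketch of solving for, and re-applying, the local coordinates; R90 and the vertex equation follow the reconstruction above, while the function names are illustrative:

```python
import numpy as np

R90 = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation matrix

def local_coords(v, p1, p2):
    """Solve v = p1 + a*(p2 - p1) + b*R90@(p2 - p1) for (a, b)."""
    d = p2 - p1
    m = np.column_stack([d, R90 @ d])        # 2x2 linear system
    a, b = np.linalg.solve(m, v - p1)
    return a, b

def apply_coords(a, b, q1, q2):
    """Place the corresponding vertex from garment key points q1, q2."""
    d = q2 - q1
    return q1 + a * d + b * (R90 @ d)
```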
Referring again to Figure 8, after applying the relative coordinates to the corresponding key points on the picture of the garment, the corresponding vertices for those vertices that have neighbouring key points are obtained. The corresponding vertices of the vertices inside the convex hull formed by the key points are found from the correspondences determined in the previous step.
The key points considered in the first step are the first set and those considered in the second step are the second set.
For each vertex in the set, its local coordinates with respect to points that are in the same row as well as the points that are in the same column are obtained and applied on the corresponding points on the garment. Doing so, the corresponding vertices of the set on the garment are obtained.
Since the correspondence of points may not cover boundary regions of the garment, the first set is extended and further vertices of the grid in a margin of the second set are considered. The relative coordinate of a point is obtained with respect to the two nearest key points in the same line, and these coordinates are applied to the corresponding key points on the garment to obtain corresponding key points on the garment.
After obtaining the corresponding grid on the garment, the texture inside each cell of the grid may be mapped on the garment to its corresponding cell on the model to synthesize the deformed picture of the garment on the model.
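As an illustration, the per-cell texture mapping might be realised with a perspective warp of each grid cell; the (rows, cols, 2) grid layout and the masking strategy are assumptions made for this sketch:

```python
import cv2
import numpy as np

def warp_grid(garment_img, garment_grid, model_grid, out_shape):
    """Map the texture of each garment grid cell onto the model grid."""
    out = np.zeros(out_shape, dtype=garment_img.dtype)
    rows, cols = garment_grid.shape[0] - 1, garment_grid.shape[1] - 1
    for r in range(rows):
        for c in range(cols):
            src = np.float32([garment_grid[r, c], garment_grid[r, c + 1],
                              garment_grid[r + 1, c + 1], garment_grid[r + 1, c]])
            dst = np.float32([model_grid[r, c], model_grid[r, c + 1],
                              model_grid[r + 1, c + 1], model_grid[r + 1, c]])
            m = cv2.getPerspectiveTransform(src, dst)
            warped = cv2.warpPerspective(garment_img, m,
                                         (out_shape[1], out_shape[0]))
            mask = np.zeros(out_shape[:2], dtype=np.uint8)
            cv2.fillConvexPoly(mask, dst.astype(np.int32), 1)
            out[mask.astype(bool)] = warped[mask.astype(bool)]
    return out
```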
When used in this specification and claims, the terms "comprises" and "comprising" and variations thereof mean that the specified features, steps or integers are included. The terms are not to be interpreted to exclude the presence of other features, steps or components.
The features disclosed in the foregoing description, or the following claims, or the accompanying drawings, expressed in their specific forms or in terms of a means for performing the disclosed function, or a method or process for attaining the disclosed result, as appropriate, may, separately, or in any combination of such features, be utilised for realising the invention in diverse forms thereof.
Claims (57)
- 1. A method of aligning a representation of a garment contained in a first image with a representation of a person contained in a second image, the method including the steps of: identifying the garment in the first image and the person in the second image; analysing the shape of the garment in the first image, allocating nodes to predetermined points on the garment, associated with the garment shape and predetermined shape data; analysing the shape of the person in the second image to find the outline of the person in the second image; allocating nodes to predetermined points on the person, associated with the shape of the person and predetermined shape data; analysing the predetermined points of both the garment and the person and determining alignment information; manipulating the first and second images to align the predetermined points; scaling the first image based upon the dimensions of the second image; and overlaying the first image onto the second image.
- 2. The method of claim 1, wherein identifying the garment in the first image includes comparing the shape of the garment with a selection of pre-determined objects having known shapes.
- 3. The method of claim 2, further including the step of isolating the garment in the first image from the background of the image.
- 4. The method of claim 3, wherein the step of isolating is carried out automatically.
- 5. The method of claim 3, wherein the step of isolating is carried out manually.
- 6. The method of any one of claims 1-5, wherein analysing the shape of the garment includes analysing the outline of the garment and creating a map of the shape of the garment.
- 7. The method of any one of claims 1-6, wherein the step of allocating nodes to predetermined points on the garment includes performing shape analysis of the outline found for the garment and subsequently identifying the points based on predetermined criteria.
- 8. The method of any one of claims 1-7, wherein analysing the shape of the person includes analysing the outline of the person and creating a map of the shape of the person.
- 9. The method of claim 8, further including the step of displaying the map of the shape of the person as an estimation of the outline of the person.
- 10. The method of any one of claims 1-9, wherein the step of allocating nodes to predetermined points on the person includes analysing the shape of the outline found for the person and subsequently identifying the points based on predetermined criteria.
- 11. The method of any one of claims 1-10, further including the step of placing the predetermined points of the garment in the first image on at least one of neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees or ankles.
- 12. The method of any one of claims 1-11, further including the step of placing the predetermined points of the person in the second image on at least one of head, neck, shoulders, elbows, wrists, chest, navel, waist, hips, knees, ankles or feet.
- 13. The method of any one of claims 1-12, wherein the step of analysing the predetermined points on both the garment and person to determine alignment information includes forming correspondences between the predetermined points on the garment in particular locations and predetermined points on the person in particular locations.
- 14. The method of claim 13, wherein the particular locations on the person include the joints of the person.
- 15. The method of claim 13 or 14, wherein the particular locations on the garment include the areas of the garment which correspond to joints of a person.
- 15. The method of any one of claims 1-14, wherein the step of manipulating the first and second images uses one-to-one mapping of the predetermined points.
- 16. The method of any one of claims 1-15, wherein the step of scaling the first image based on the dimensions of the second image include analysing the distances between the predetermined points in both the first and second images and subsequently scaling the first image based upon the distances in the second image.
- 17. The method of claim 16, wherein the step of scaling the first image further takes into account the outline determined for the person in the second image and includes scaling the garment accordingly.
- 18. The method of claim 16 or 17, wherein the scaling takes into account the height and weight of the person in the second image.
- 20. The method of any one of claims 1-19, further including the step of analysing the lighting of the person in the second image and applying simulated lighting to the garment in the first image based on the lighting in the second image.
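For the lighting step of claim 20, a minimal sketch under the assumed simplification that lighting is matched with a single global brightness gain; the claim itself leaves the lighting model open.

```python
# Hedged sketch of claim 20's lighting simulation: estimate the person's mean
# brightness and pull the (already isolated) garment's brightness toward it.
# A single global gain is an assumption, not the claimed method.
import numpy as np

def match_lighting(garment_rgb, person_rgb, person_mask):
    """garment_rgb, person_rgb: HxWx3 float arrays in [0, 1];
    person_mask: HxW boolean array marking person pixels."""
    person_brightness = person_rgb[person_mask].mean()
    garment_brightness = garment_rgb.mean()
    gain = person_brightness / max(garment_brightness, 1e-6)
    return np.clip(garment_rgb * gain, 0.0, 1.0)
```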
- 21. A method for analysing an image to locate and isolate an object within the image, the image containing at least an object against a background, the method comprising: classifying the colour of each pixel of the image; estimating, based on the colour of each pixel, the colour descriptor of the object and the colour descriptor of the background of the image; determining, based on the colour of the object and the colour of the background, the locations in the image of the object and the background; isolating the object in the image; and identifying the shape descriptor of the object.
- 22. The method of claim 21, wherein classifying the colour of each pixel comprises first determining whether the pixel is white, grey or black; and, if the pixel is not white, grey or black, determining whether the pixel is red, orange, yellow, green, cyan, blue, purple or pink.
- 23. The method of claim 21 or 22, wherein, after the colour classification step, the method further includes the step of converting the pixel colour space from RGB (Red, Green, Blue) to HSI (Hue, Saturation, Intensity).
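A minimal sketch of the classification order of claims 22 and 23 — achromatic test first, then hue binning in HSI space. The saturation/intensity thresholds and hue boundaries are assumptions rather than values taken from the specification.

```python
# Sketch of claims 22-23: convert RGB to HSI, then bin the pixel into a named
# colour. Thresholds and hue bin edges are illustrative assumptions.
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion; r, g, b in [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

def classify_pixel(r, g, b):
    h, s, i = rgb_to_hsi(r, g, b)
    # Achromatic test first, as in claim 22 (white / grey / black).
    if s < 0.15:
        return 'black' if i < 0.2 else 'white' if i > 0.8 else 'grey'
    # Chromatic hue bins for the colours named in claim 22.
    bins = [(15, 'red'), (45, 'orange'), (70, 'yellow'), (160, 'green'),
            (200, 'cyan'), (260, 'blue'), (300, 'purple'), (345, 'pink')]
    for upper, name in bins:
        if h < upper:
            return name
    return 'red'  # hue wraps back around to red

print(classify_pixel(0.9, 0.1, 0.1))  # 'red'
```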
- 24. The method of any one of claims 21-23, wherein estimating the respective colour descriptors of the object and the background includes creating a colour histogram based on the pixel colours of the object and background.
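The colour histogram of claim 24 can be sketched as a normalised count of named colours; the classifier is injected as a parameter so the block stands alone.

```python
# Sketch of claim 24: build a named-colour histogram over a pixel region.
# Counts are normalised so regions of different sizes can be compared.
from collections import Counter

def colour_histogram(pixels, classify):
    """pixels: iterable of (r, g, b) tuples in [0, 1];
    classify: function (r, g, b) -> colour name, e.g. classify_pixel above."""
    counts = Counter(classify(r, g, b) for r, g, b in pixels)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

# With the classify_pixel sketch above:
# colour_histogram(object_pixels, classify_pixel) -> e.g. {'red': 0.94, 'grey': 0.06}
```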
- 25. The method of any one of claims 21-24, wherein estimating the respective colour descriptors of the object and background includes determining whether the colour of the object and the colour of substantially all of the background are similar.
- 26. The method of any one of claims 21-25, further comprising the step of calculating the ratio of the number of pixels in the image having one colour to the total number of pixels in the image.
- 27. The method of claim 26, wherein, if the ratio is calculated as being 0.8 or higher, it is concluded that the background and object are the same colour.
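Claims 26 and 27 reduce to a dominant-colour ratio test; a short sketch follows, using the 0.8 threshold given in claim 27.

```python
# Sketch of claims 26-27: if one colour accounts for at least 80% of all pixels,
# treat object and background as sharing a colour (so edge analysis is needed).
def dominant_colour_ratio(histogram):
    """histogram: colour name -> fraction, as produced by colour_histogram."""
    name, fraction = max(histogram.items(), key=lambda kv: kv[1])
    return name, fraction

hist = {'red': 0.85, 'blue': 0.10, 'grey': 0.05}
name, fraction = dominant_colour_ratio(hist)
if fraction >= 0.8:
    print(f'object and background likely share colour {name!r}; fall back to edge analysis')
```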
- 28. The method of any one of claims 25-27, wherein, if the estimated colour descriptors of the object and the background are similar, the step of determining the location of the object and the background comprises using analysis of the edges of the object.
- 29. The method of any one of claims 25-27, wherein, if the colour descriptors of the object and background are not similar, the step of determining the location of the object and the background comprises discarding the pixel data relating to the background.
- 30. The method of claim 25, wherein estimating the colour descriptors of the object and background further includes determining whether the background includes regions of a colour similar to the colour of the object.
- 31. The method of claim 30, wherein, if it is determined that the background includes regions of a colour similar to the colour of the object, the method further comprises clustering pixels forming the image and analysing the clusters by way of a k-means algorithm, thereby separating the regions of similar colour in the background from the object.
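One plausible reading of claim 31's clustering step, sketched with scikit-learn's KMeans (an assumed dependency): clustering on joint colour-and-position features lets background regions that share the object's colour fall into separate clusters.

```python
# Sketch of claim 31: k-means over colour + normalised position features.
# k=3 and the position weight are arbitrary illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(image, k=3, position_weight=0.5):
    """image: HxWx3 float array in [0, 1]; returns HxW array of cluster labels."""
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Normalised (x, y) coordinates let spatially distant same-colour
    # regions end up in different clusters.
    features = np.column_stack([
        image.reshape(-1, 3),
        position_weight * (xs.reshape(-1, 1) / w),
        position_weight * (ys.reshape(-1, 1) / h),
    ])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(h, w)
```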
- 32. The method of any one of claims 25-31, wherein the method includes the further steps of: analysing the isolated object to identify areas of the object of a similar colour to the background; and, if there are areas of a similar colour present in the object, applying a morphological operation to the image of the object to remove these areas.
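The morphological clean-up of claim 32 could look like the following, assuming SciPy is available; the structuring-element size is arbitrary.

```python
# Sketch of claim 32: a morphological opening removes small background-coloured
# speckles from the object mask, and a closing fills small holes left inside it.
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def clean_object_mask(mask, size=3):
    """mask: HxW boolean array, True where the object was detected."""
    structure = np.ones((size, size), dtype=bool)
    # Opening removes isolated false positives; closing fills small
    # background-coloured areas inside the object.
    return binary_closing(binary_opening(mask, structure), structure)
```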
- 33. The method of any one of claims 21-32, wherein the step of determining the locations of the object and background in the image includes assuming that the object is in a central region of the image.
- 34. The method of any one of claims 21-33, wherein the step of determining the locations of the object and the background in the image includes: making an assumption regarding the location of the object with respect to the background; and comparing the estimated colours of the object and background to determine which is the object and which is the background.
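Claims 33 and 34 combine a central-object assumption with a colour comparison; a sketch in which the border margin fraction is an assumption:

```python
# Sketch of claims 33-34: treat the image border as background and the central
# window as object, then compare the two regions' mean colours to label pixels.
import numpy as np

def centre_border_split(image, margin=0.15):
    """image: HxWx3 float array; returns (centre_mean, border_mean) colours.

    The margin fraction is an illustrative choice and assumes the image is
    large enough that the border strip is non-empty.
    """
    h, w, _ = image.shape
    my, mx = int(h * margin), int(w * margin)
    border_mask = np.ones((h, w), dtype=bool)
    border_mask[my:h - my, mx:w - mx] = False  # False inside the central window
    centre_mean = image[~border_mask].mean(axis=0)
    border_mean = image[border_mask].mean(axis=0)
    return centre_mean, border_mean
```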
- 35. The method of any one of claims 21-34, wherein the step of identifying the shape descriptor of the object includes comparing the object in the image with a selection of pre-determined objects having known shapes.
- 36. The method of any one of claims 21-35, further including the steps of: comparing the shape descriptor of the object in the image with other images containing a similar object; and using the data obtained from the comparison to improve the shape descriptor identification.
- 37. The method of any one of claims 21-36, further including the step of identifying the pattern descriptor of the object in the image.
- 38. The method of claim 37, wherein the step of identifying the pattern descriptor comprises using a k-means algorithm.
- 39. The method of claim 38, wherein the k-means algorithm clusters similar patterns on the object, and the dominant pattern is determined to identify the pattern descriptor.
- 40. A method of searching for an image containing an object, the method comprising the steps of: identifying an image to be used as the basis for the search; determining the complexity of the image; determining the bounds of the object within the image; identifying the shape descriptor of the object within the image based on the identified bounds of the object; determining the colour descriptor of the object within the image; comparing the object in the image with the content of other, predetermined images, based upon the colour and shape descriptors of the object; and returning images which, based on the comparison of the object of the image and the predetermined images, include content similar to the basis of the search.
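The comparison-and-ranking step of claim 40 might be sketched as shape-gated colour-histogram matching; the descriptor fields, the intersection score, and the catalogue layout are all illustrative assumptions.

```python
# Sketch of claim 40's comparison step: keep catalogue entries whose shape
# descriptor matches the query, then rank by colour-histogram intersection.
def histogram_intersection(h1, h2):
    return sum(min(h1.get(c, 0.0), h2.get(c, 0.0)) for c in set(h1) | set(h2))

def search(query, catalogue, top_k=5):
    """query: {'shape': str, 'colours': name -> fraction};
    catalogue: list of dicts with the same fields plus arbitrary metadata."""
    candidates = [item for item in catalogue if item['shape'] == query['shape']]
    ranked = sorted(candidates,
                    key=lambda item: histogram_intersection(query['colours'], item['colours']),
                    reverse=True)
    return ranked[:top_k]

catalogue = [
    {'shape': 'dress', 'colours': {'red': 0.9, 'grey': 0.1}, 'id': 'A'},
    {'shape': 'dress', 'colours': {'blue': 0.8, 'white': 0.2}, 'id': 'B'},
    {'shape': 'shirt', 'colours': {'red': 1.0}, 'id': 'C'},
]
print([item['id'] for item in search({'shape': 'dress', 'colours': {'red': 1.0}}, catalogue)])
# ['A', 'B'] -- the red dress ranks above the blue one; the shirt is filtered out
```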
- 41. The method of claim 40, wherein the step of identifying the image to be used as the basis for the search includes receiving an image from a user.
- 42. The method of claim 40 or 41, wherein the step of determining the complexity of the image includes performing pixel colour analysis of the image.
- 43. The method of any one of claims 40-42, wherein the step of determining the complexity includes using a human detection algorithm.
- 44. The method of any one of claims 40-43, wherein the step of determining the complexity further includes analysis of the background of the image.
- 45. The method of any one of claims 40-44, wherein the step of determining the complexity of the image includes analysing shapes present in the image.
- 46. The method of any one of claims 40-45, wherein the step of determining the bounds of the object includes performing edge analysis.
- 47. The method of any one of claims 40-46, wherein the step of determining the bounds of the object includes performing colour and texture analysis.
- 48. The method of any one of claims 40-47, wherein the step of determining the bounds of the object comprises manually determining the bounds.
- 49. The method of any one of claims 40-48, wherein the step of identifying the shape descriptor of the object within the image includes comparing the determined bounds of the object with a selection of pre-determined objects having known shape descriptors.
- 50. The method of any one of claims 40-49, wherein the step of determining the colour descriptor of the object includes analysing the colours of the pixels within the image.
- 51. The method of claim 50, further including the step of creating a colour histogram based upon the pixel data.
- 52. The method of any one of claims 40-51, wherein the step of comparing the object in the image with predetermined images includes analysis of a database of pre-processed images.
- 53. The method of any one of claims 40-52, wherein the step of returning the results includes providing images and data relating to the images.
- 54. The method of claim 53, wherein the data relating to the images includes price, size, colour, pattern, texture, textile and/or supply data.
- 55. The method of any one of claims 40-54, wherein the basis for the search further includes price, size, colour, pattern, texture, textile and/or supply data.
- 56. The method of claim 55, further including the steps of analysing the data and comparing it with predetermined catalogued data associated with the predetermined images.
- 57. The method of any one of claims 40-56, further including the step of identifying the pattern descriptor of the object in the image.
- 58. The method of claim 57, wherein the step of identifying the pattern descriptor comprises using a k-means algorithm.
- 59. A method for analysing an image as hereinbefore described with reference to the accompanying drawings.
- 60. A method of searching for an image as hereinbefore described with reference to the accompanying drawings.
- 61. A method of aligning a representation of a garment as hereinbefore described with reference to the accompanying drawings.
- 62. Any novel feature or combination of features as described herein.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1207040.5A GB2501473A (en) | 2012-04-23 | 2012-04-23 | Image based clothing search and virtual fitting |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201307246D0 GB201307246D0 (en) | 2013-05-29 |
GB2503331A true GB2503331A (en) | 2013-12-25 |
Family
ID=46261680
Family Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1207040.5A Withdrawn GB2501473A (en) | 2012-04-23 | 2012-04-23 | Image based clothing search and virtual fitting |
GB1307246.7A Withdrawn GB2503331A (en) | 2012-04-23 | 2013-04-22 | Aligning garment image with image of a person, locating an object in an image and searching for an image containing an object |
Family Applications Before (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1207040.5A Withdrawn GB2501473A (en) | 2012-04-23 | 2012-04-23 | Image based clothing search and virtual fitting |
Country Status (2)
Country | Link |
---|---|
GB (2) | GB2501473A (en) |
WO (1) | WO2013160663A2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105373929B (en) * | 2014-09-02 | 2020-06-02 | 阿里巴巴集团控股有限公司 | Method and device for providing photographing recommendation information |
CN105447047B (en) * | 2014-09-02 | 2019-03-15 | 阿里巴巴集团控股有限公司 | It establishes template database of taking pictures, the method and device for recommendation information of taking pictures is provided |
EP3115971B1 (en) * | 2015-06-02 | 2020-06-03 | Samsung Electronics Co., Ltd. | Method and apparatus for providing three-dimensional data of cloth |
WO2017203705A1 (en) * | 2016-05-27 | 2017-11-30 | 楽天株式会社 | Image processing device, image processing method, and image processing program |
JP7100139B2 (en) * | 2018-09-06 | 2022-07-12 | 富士フイルム株式会社 | Image processing equipment, methods and programs |
CN114187588B (en) * | 2021-12-08 | 2023-01-24 | 贝壳技术有限公司 | Data processing method, device and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6546309B1 (en) * | 2000-06-29 | 2003-04-08 | Kinney & Lange, P.A. | Virtual fitting room |
US20020130890A1 (en) * | 2001-02-09 | 2002-09-19 | Harry Karatassos | Programmatic fitting algorithm in garment simulations |
US20050264562A1 (en) * | 2004-03-05 | 2005-12-01 | Macura Matthew J | System and method of virtual representation of thin flexible materials |
KR20120040565A (en) * | 2010-10-19 | 2012-04-27 | (주)피센 | 3-d virtual fitting system and method using mobile device |
CN102044038A (en) * | 2010-12-27 | 2011-05-04 | 上海工程技术大学 | Three-dimensional virtual dressing method of clothes for real person |
- 2012-04-23: GB application GB1207040.5A filed; published as GB2501473A (en); status: not active, withdrawn
- 2013-04-22: GB application GB1307246.7A filed; published as GB2503331A (en); status: not active, withdrawn
- 2013-04-22: international application PCT/GB2013/051011 filed; published as WO2013160663A2 (en); status: active, application filing
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001011886A1 (en) * | 1998-10-28 | 2001-02-15 | Hi-Pic Ltd. | Virtual dressing over the internet |
JP2003055826A (en) * | 2001-08-17 | 2003-02-26 | Minolta Co Ltd | Server and method of virtual try-on data management |
JP2004086662A (en) * | 2002-08-28 | 2004-03-18 | Univ Waseda | Clothes try-on service providing method and clothes try-on system, user terminal device, program, program for mounting cellphone, and control server |
EP1959394A2 (en) * | 2005-11-15 | 2008-08-20 | Reyes Infografica, S.L. | Method of generating and using a virtual fitting room and corresponding system |
GB2473503A (en) * | 2009-09-15 | 2011-03-16 | Metail Ltd | Image processing with keying of foreground objects |
GB2488237A (en) * | 2011-02-17 | 2012-08-22 | Metail Ltd | Using a body model of a user to show fit of clothing |
Also Published As
Publication number | Publication date |
---|---|
GB2501473A (en) | 2013-10-30 |
WO2013160663A3 (en) | 2014-02-06 |
WO2013160663A2 (en) | 2013-10-31 |
GB201307246D0 (en) | 2013-05-29 |
GB201207040D0 (en) | 2012-06-06 |
Similar Documents
Publication | Title
---|---
Bartol et al. | A review of body measurement using 3D scanning
US9147207B2 (en) | System and method for generating image data for on-line shopping
Yang et al. | Physics-inspired garment recovery from a single-view image
US9728012B2 (en) | Silhouette-based object and texture alignment, systems and methods
US9478035B2 (en) | 2D/3D localization and pose estimation of harness cables using a configurable structure representation for robot operations
GB2503331A (en) | Aligning garment image with image of a person, locating an object in an image and searching for an image containing an object
US20170018117A1 (en) | Method and system for generating three-dimensional garment model
TW202022782A (en) | Method and image matching method for neural network training and device thereof
Sundaresan et al. | Model driven segmentation of articulating humans in Laplacian Eigenspace
Aldoma et al. | Automation of "ground truth" annotation for multi-view RGB-D object instance recognition datasets
US20210375045A1 (en) | System and method for reconstructing a 3d human body under clothing
EP2580708A2 (en) | Parameterized model of 2d articulated human shape
CN111325806A (en) | Clothing color recognition method, device and system based on semantic segmentation
US11922593B2 (en) | Methods of estimating a bare body shape from a concealed scan of the body
CN105869217B (en) | A kind of virtual real fit method
Werghi et al. | A functional-based segmentation of human body scans in arbitrary postures
Xu et al. | 3d virtual garment modeling from rgb images
Bang et al. | Estimating garment patterns from static scan data
Buxton et al. | Reconstruction and interpretation of 3D whole body surface images
Liu et al. | Extract feature curves on noisy triangular meshes
Senanayake et al. | Automated human body measurement extraction: single digital camera (webcam) method–phase 1
Chen et al. | Optimizing human model reconstruction from RGB-D images based on skin detection
Vasconcelos et al. | Methodologies to build automatic point distribution models for faces represented in images
Huang et al. | Automatic realistic 3D garment generation based on two images
Le et al. | Overlay upper clothing textures to still images based on human pose estimation
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |