SE543031C2 - Collapsible wall device and method for determining body measurements of an individual - Google Patents

Collapsible wall device and method for determining body measurements of an individual

Info

Publication number
SE543031C2
Authority
SE
Sweden
Prior art keywords
individual
images
measurements
implemented method
computer implemented
Prior art date
Application number
SE1830233A
Other languages
Swedish (sv)
Other versions
SE1830233A1 (en)
Inventor
Magnus Jansson
Peter Söderström
Original Assignee
Sizewall Ab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sizewall Ab filed Critical Sizewall Ab
Priority to SE1830233A priority Critical patent/SE543031C2/en
Priority to PCT/SE2019/000012 priority patent/WO2020032843A1/en
Publication of SE1830233A1 publication Critical patent/SE1830233A1/en
Publication of SE543031C2 publication Critical patent/SE543031C2/en

Classifications

    • A: HUMAN NECESSITIES
    • A41: WEARING APPAREL
    • A41H: APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
    • A41H1/00: Measuring aids or methods
    • A41H1/02: Devices for taking measurements on the human body
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107: Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/16: Cloth

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Dentistry (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Textile Engineering (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present disclosure generally relates to the field of body measurements, and in particular to methods and devices for determining body measurements of an individual in order to, for example, remotely try out clothes. According to a first aspect, the disclosure relates to a collapsible wall device (1) designed to enable determination of body measurements of an individual positioned on the collapsible wall device. According to a second aspect, the disclosure relates to a computer implemented method for determining body measurements of an individual. The method comprises obtaining (S1) at least two images of the individual and analyzing (S2) a predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. The computer implemented method further comprises constructing (S3) a model of the body of the individual and determining (S4) the body measurements based on the constructed model, the detected body parts and the determined reference points.

Description

Collapsible wall device and method for determining body measurements of an individual

Technical field

The present disclosure generally relates to the field of body measurements, and in particular to methods and devices for determining body measurements of an individual in order to, for example, remotely try out clothes.
Background

As technology progresses rapidly, daily lifestyles are changing. For example, online shopping markets are growing strongly all over the world.
When a customer wants to purchase a garment from an online store, he or she contacts the online store through an internet application using an electronic device (e.g. a computer or a smartphone). When the consumer contacts the online store using the electronic device, the online store presents its products on the electronic device. The consumer then searches for a product, considering e.g. its specification, function, price and conditions of sale, in a database maintained by the online store.
Online stores selling clothes may provide an easy way to purchase. However, information regarding the sizes of clothes and shoes may be insufficient in online shopping stores. One problem is that a certain size (e.g. 10) does not correspond to the same body size for all manufacturers. Hence, one individual might need size 8 in some clothes, size 10 in others and sometimes even size 12. This is in most cases impossible for the customer to know without trying the clothes on. Hence, a person shopping from online stores will in the worst case have to return more than 50 percent of the purchases. Consequently, companies selling clothes over the internet spend a lot of money handling returned clothes. In the worst case, the goods then need to be shipped back and replaced several times before the customer finds the size and model that suits, if ever. Furthermore, the difficulty that the customer experiences in finding the right size and model may prevent the customer from making more purchases in the future.
Hence, there is a need for a way to select clothes that suit the customer that does not require the presence of the customer in the store. Thus, there is a need for a simple and user-friendly way of determining body measurements of an individual. A plurality of methods for solving this have been proposed. However, they are typically either not accurate enough or very complicated.
Summary

It is an object of the disclosure to alleviate at least some of the drawbacks of the prior art. Thus, it is an object to provide a simple and robust method for determining body measurements. It is a further object to provide a method that is easy to perform, such that a customer can determine the body measurements on his or her own at home, without too much effort. It is a further object to provide a method that can be used for different manufacturers.
According to a first aspect, the disclosure relates to a collapsible wall device designed to enable determination of body measurements of an individual positioned on the collapsible wall device. The collapsible wall device comprises a floor part comprising a front side with at least one marking thereon, the at least one marking indicating where the individual shall stand. Furthermore, the collapsible wall device comprises a wall part comprising a lower edge foldably attached to one edge of the floor part and a front side with a predetermined pattern thereon, the predetermined pattern being recognizable by an image processing algorithm. The collapsible wall device also comprises an erecting device configured to hold the wall part in an upright position in relation to the floor part, whereby the predetermined pattern on the front side of the wall part faces the individual standing on the at least one marking on the front side of the floor part. The collapsible wall device will facilitate for a user to take images that can be used for determining body measurements. The collapsible wall device provides a background in the images that will make the image processing needed to determine the body measurements less complex and more accurate. This makes the determination of body measurements user-friendly and more correct than when performed on images captured with a random background.
Furthermore, the collapsible wall device will be small when in a collapsed state. Hence, it is easy to deliver and to store. In addition, one single collapsible wall device can then be used by family and friends. It is also not dependent on the size of the body. Thus, an individual that gains (or loses) weight or grows can still use the same collapsible wall device.

In some embodiments, the erecting device is configured to raise the wall part to the upright position. Thus, the collapsible wall device is easy to erect and does not require any hooks or similar to stand up.

In some embodiments, the erecting device comprises springs arranged at the outer edges of the wall part and/or the floor part. Thereby, a solid and user-friendly construction is provided in a simple way.

In some embodiments, the wall part and the floor part are formed by one single piece of material. Hence, the collapsible wall device is easy to manufacture. In some embodiments, the wall part and the floor part are made of nylon, plastics, textile, paper, etc.

In some embodiments, the at least one marking comprises a pair of footprints facing away from the wall part. In some embodiments, the at least one marking comprises markings for different foot sizes. In some embodiments, the at least one marking indicates a plurality of directions for the individual to face when capturing images for use in the determination of body measurements. Thus, it is easy for a user to understand where to stand when capturing images for use when determining body measurements. Hence, the likelihood that the individual will stand in a way suitable for determining measurements is increased.

In some embodiments, the predetermined pattern comprises regularly positioned and/or sized objects. Thus, the distance between the camera and the body to be measured does not need to be fixed or known, as the predetermined pattern can be used to estimate the distance.
According to a second aspect, the disclosure relates to a computer implemented method for determining body measurements of an individual. The method comprises obtaining at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, and analyzing the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. The computer implemented method further comprises constructing a model of the body of the individual based on the detected parts of the image comprising the body of the individual, detecting body parts in the constructed model and/or in the obtained images, and determining the body measurements based on the constructed model, the detected body parts and the determined reference points. The proposed method enables determining body measurements in an accurate and user-friendly way. The body measurements may be determined with limited processing effort, as the predetermined pattern may be used to filter out the individual from the background. The pattern also makes the distance between the camera and the individual less important.

In some embodiments, the computer implemented method comprises providing user output instructing an individual to stand in front of a wall having a predetermined pattern thereon. Thereby, the individual is more likely to be correctly positioned.

In some embodiments, the computer implemented method comprises providing user output indicating that one or more of the obtained images has insufficient quality. Hence, images with insufficient quality may be re-captured.

In some embodiments, the computer implemented method comprises determining a size of a garment by comparing the estimated body measurements with a size table. Thus, the correct size of a garment may be selected without the individual trying it on.
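The core of the method can be sketched in a few lines. The following is a minimal illustration, not the patented implementation: it assumes the background pattern is rendered in two known colours (pure blue and green here), and all function names and the colour tolerance are hypothetical.

```python
import numpy as np

# Hypothetical pattern colours (RGB); real patterns and thresholds may differ.
BACKGROUND_COLOURS = np.array([[0, 0, 255], [0, 255, 0]])  # blue, green

def segment_body(image: np.ndarray, tol: int = 40) -> np.ndarray:
    """Analyze the pattern: pixels that do NOT match a pattern colour are
    treated as belonging to the body. Returns a binary silhouette, which
    also serves as a simple per-image body model."""
    h, w, _ = image.shape
    px = image.reshape(-1, 3).astype(int)
    # Manhattan distance from every pixel to each background colour.
    dists = np.abs(px[:, None, :] - BACKGROUND_COLOURS[None, :, :]).sum(axis=2)
    is_background = (dists < tol).any(axis=1)
    return (~is_background).reshape(h, w)

def body_width_cm(silhouette: np.ndarray, row: int, cm_per_px: float) -> float:
    """Determine one metric: the body width along a given image row,
    converted to centimetres using a known pixel scale."""
    cols = np.nonzero(silhouette[row])[0]
    return 0.0 if cols.size == 0 else (cols[-1] - cols[0] + 1) * cm_per_px
```

In this toy form the "model" is just a binary mask per image; a real system would repeat this per obtained image and combine the per-image metrics, as the embodiments below describe.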
In some embodiments, the constructed model comprises a set of binary two-dimensional images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images.

In some embodiments, the determining comprises extracting metrics from the individual binary two-dimensional images and combining the metrics to obtain the body measurements. In some embodiments, the combining comprises averaging distance measures from the individual binary two-dimensional images to produce a distance measurement associated with the body. In some embodiments, the combining comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part.

In some embodiments, the constructed model is a three-dimensional model of the body of the individual. In some embodiments, the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm. In some embodiments, the determining of body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts. In some embodiments, the extracted measurements comprise at least one circumference measurement of a body part and/or a distance measurement associated with the body.

In some embodiments, the computer implemented method comprises using the predetermined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated camera. Hence, the method may be performed even if the camera is imperfect or uncalibrated, as the predetermined pattern may be used to correct deficiencies.

In some embodiments, the detecting comprises using a machine learning model trained to detect the body parts.

In some embodiments, the obtained images picture the individual from predetermined angles and/or in predetermined poses.
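One common way to combine cross-section measurements from two views into a circumference, which we assume here for illustration (the patent does not prescribe a particular formula), is to model the cross-section as an ellipse whose axes are the width from the frontal image and the depth from the side image, and then approximate the ellipse perimeter:

```python
import math

def ellipse_circumference(width: float, depth: float) -> float:
    """Approximate a limb/torso cross-section as an ellipse with the width
    seen in the frontal image and the depth seen in the side image as its
    axes, and apply Ramanujan's first approximation for the perimeter.
    (This elliptical modelling choice is an assumption for illustration.)"""
    a, b = width / 2.0, depth / 2.0
    h = ((a - b) ** 2) / ((a + b) ** 2) if (a + b) else 0.0
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))
```

For a circular cross-section (width equal to depth) this reduces exactly to the circle circumference, which is a useful sanity check.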
In some embodiments, the predetermined pattern comprises regularly spaced shapes with predetermined dimensions. Thus, the distance between the camera and the body to be measured does not need to be fixed or known, as the predetermined pattern can be used to estimate the distance.
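Because the pattern shapes have known physical dimensions, the pixel scale at the wall plane follows directly from their spacing in the image. A minimal sketch, assuming reference points have been detected along one row of the pattern (the function name is hypothetical):

```python
import numpy as np

def pattern_scale(corner_xs: np.ndarray, square_cm: float) -> float:
    """Estimate the centimetre-per-pixel scale at the wall plane from the
    x-coordinates of detected pattern reference points along one row.
    The median spacing is robust to a few missed or spurious detections."""
    pitch_px = float(np.median(np.diff(np.sort(corner_xs))))
    return square_cm / pitch_px
```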
According to a third aspect, the disclosure relates to an electronic device comprising a camera assembly configured to capture images and a control unit. The control unit is configured to capture, using the camera assembly, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, and to analyze the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. The control unit is further configured to construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual, to detect body parts in the constructed model and/or in the obtained images, and to determine the body measurements based on the constructed model, the detected body parts and the determined reference points.
According to a fourth aspect, the disclosure relates to a system comprising the collapsible wall device according to the first aspect and a mobile device configured to execute the method according to the second aspect.
According to a fifth aspect, the disclosure relates to a control unit configured to perform the method according to the second aspect.
According to a sixth aspect, the disclosure relates to a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method according to the second aspect.

According to a seventh aspect, the disclosure relates to a computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method according to the second aspect.
Brief description of the drawings

Fig. 1 illustrates an individual standing on a collapsible wall device.
Fig. 2 illustrates the collapsible wall device when rolled.
Fig. 3a-3c illustrate the collapsible wall device seen from the front side.
Fig. 4a-4c illustrate examples of rods that can be used in an erecting device of the collapsible wall device.
Fig. 5a-5f illustrate an erecting device according to a first embodiment.
Fig. 6a-6e illustrate an erecting device according to a second embodiment.
Fig. 7 illustrates an erecting device according to a third embodiment.
Fig. 8a-8d illustrate examples of markings indicating where the individual shall stand.
Fig. 9 is a flowchart of the proposed method for determining body measurements of an individual.
Fig. 10 illustrates step S6 of the method of Fig. 9 according to one example embodiment.
Fig. 11 and Fig. 12 illustrate cross-section and circumference measurements according to the example embodiment of Fig. 10.
Fig. 13a and 13b illustrate circumference measurements on a limb cross-section according to the second example embodiment.
Fig. 14 illustrates an electronic device according to some embodiments.
Fig. 15 illustrates a control unit of the electronic device according to some embodiments.
Detailed description

This disclosure proposes a simple way of determining body measurements using a background with a graphical pattern thereon and image processing. More specifically, this disclosure proposes a system comprising a collapsible wall device and a computer implemented method performed by an electronic device, e.g. a smartphone.
To determine body measurements, a user first erects the collapsible wall device. The collapsible wall device is typically so small (when in a collapsed state) that it can be sent to the customer e.g. by post. In the erected state, one part of the collapsible wall device is placed on the floor and one part stands upright in relation to the floor. The upright part, and possibly also the floor part, will serve as a background when capturing images for use when determining body measurements. The user is then instructed about where on the collapsible wall device the individual to be measured shall stand, either by the software application or by an instruction printed on the collapsible wall device. The individual to be measured (e.g. the user) then stands directly on the part of the collapsible wall device that lies on the floor, on a marked position. Because of the predetermined pattern on the wall part of the collapsible wall device, the distance between the individual to be measured and the camera is not so important.
Images or video of the body are then captured from different angles, for example by slowly rotating the body or by capturing from different angles. The software application then determines the individual's specific measurements by performing image processing.
The proposed technique can be used in different situations where body measurements are needed, e.g. to try out clothes or other wearable equipment. The method might also be used in medical applications or for any application where body measurements are needed.
Fig. 1 illustrates an individual 100 standing on a collapsible wall device 1 designed to enable determination of body measurements of an individual positioned on the collapsible wall device 1. In Fig. 1 the collapsible wall device 1 is raised. In other words, the collapsible wall device 1 is in an erected state, in which it can be used for determining body measurements. An electronic device 2, here a camera, is held in front of the individual 100 in order to capture images (for use when determining body measurements of the individual) with the collapsible wall device 1 serving as a background.
The collapsible wall device 1 comprises a floor part 11, a wall part 12 and an erecting device 13. The wall part 12 and the floor part 11 are for example made of nylon, plastics, textile, paper, etc. The floor part 11 and the wall part 12 may be designed as one foldable sheet. In other words, according to some embodiments the wall part 12 and the floor part 11 are formed by one single piece of material. The collapsible wall device 1 needs to be big enough to be usable for an individual of any size. For example, the wall part is about 2x2 metres and the floor part is about 1x2 metres.
The collapsible wall device 1 can be collapsed. For example, the collapsible wall device 1 can be rolled or folded. Fig. 2 illustrates the collapsible wall device 1 when rolled. Fig. 3 illustrates a collapsible wall device seen from the front side, i.e. from above, when unrolled on a surface (but not raised).
The floor part 11 comprises a front side 112, in other words, a side that will face the individual 100 when using the collapsible wall device 1. On the front side 112 there is at least one marking 14 that indicates where the individual shall stand.
The wall part 12 comprises a lower edge 121 and an upper edge 123. When in use, the lower edge 121 of the wall part 12 is foldably (or articulately) attached to one edge of the floor part 11, herein referred to as the back edge 111. The edge opposite the back edge 111 is herein referred to as a front edge 113. In other words, the floor part 11 and the wall part 12 may be assembled in a way such that it is possible to raise the wall part 12 in relation to the floor part 11 to an erected state. In the erected state, the angle between the floor part 11 and the wall part 12 is approximately 90 degrees, e.g. between 85 and 95 degrees.
The wall part 12 also comprises a front side 122. The front side 122 has a predetermined pattern thereon. The predetermined pattern covers the entire front side 122 of the wall part 12 or a major part of the front side 122. The predetermined pattern comprises e.g. squares, dots or other geometric shapes.
The predetermined pattern should typically have a colour that contrasts with the colour of the individual, i.e. it should differ from the individual's skin and clothes. For example, a squared pattern in blue and green (like blue-screen/green-screen technology) may be used. The size of the squares or dots is typically around 1-2 cm in cross-section. In other words, the predetermined pattern comprises regularly positioned and/or sized objects. The predetermined pattern 15 is recognizable by an image processing algorithm. In some embodiments, the floor part 11 also has a predetermined pattern thereon. The predetermined pattern covers the entire front side 112 of the floor part 11 or a major part of the front side 112. For example, the same predetermined pattern covers the floor part 11 and the wall part 12. The predetermined patterns may then be aligned at the transition between the floor part 11 and the wall part 12.
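A pattern of this kind is straightforward to generate programmatically, e.g. for printing. The sketch below renders a blue/green checkerboard as described; the exact colour values and cell size are illustrative assumptions:

```python
import numpy as np

def checkerboard(rows: int, cols: int, cell_px: int,
                 colour_a=(0, 0, 255), colour_b=(0, 255, 0)) -> np.ndarray:
    """Render a checkerboard of regularly positioned, fixed-size squares in
    two contrasting colours (here blue and green, as RGB tuples), in the
    spirit of blue-screen/green-screen backgrounds."""
    board = np.zeros((rows * cell_px, cols * cell_px, 3), dtype=np.uint8)
    for r in range(rows):
        for c in range(cols):
            colour = colour_a if (r + c) % 2 == 0 else colour_b
            board[r * cell_px:(r + 1) * cell_px,
                  c * cell_px:(c + 1) * cell_px] = colour
    return board
```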
The erecting device 13 is configured to hold the wall part 12 in an upright position in relation to the floor part. Thereby, the predetermined pattern 15 on the front side of the wall part 12 faces the individual 100 standing on the at least one marking 14 on the front side of the floor part 11, as illustrated in Fig. 1.
The collapsible wall device 1 facilitates determination of body measurements in different ways. Firstly, it properly shields the body of the individual 100 from disturbing objects in the background. Thus, it facilitates detection of different parts of the body. More specifically, when picturing the individual 100 standing on the collapsible wall device 1, it is possible to determine how many objects are fully or partially covered by the body. An image processing algorithm can then be used to determine body measurements of the individual 100, as will be further explained in Fig. 9.
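Counting covered pattern objects can be sketched as follows, given a binary body silhouette and the pattern cell size in pixels. This is an illustrative simplification: it assumes the pattern grid is axis-aligned and starts at the image origin, which a real system would establish from the detected reference points.

```python
import numpy as np

def covered_cells(silhouette: np.ndarray, cell_px: int) -> int:
    """Count pattern squares fully or partially covered by the body: a cell
    counts as covered if any of its pixels belongs to the silhouette."""
    h, w = silhouette.shape
    count = 0
    for r in range(0, h, cell_px):
        for c in range(0, w, cell_px):
            if silhouette[r:r + cell_px, c:c + cell_px].any():
                count += 1
    return count
```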
The at least one marking 14 on the floor part 11 will also guide the individual to face in different directions that are beneficial for determining body parts. Hence, by using the collapsible wall device, the user is guided to capture images that are well suited for body part determination.

Furthermore, because the body will be in a fixed position against markings on the floor part 11 of the collapsible wall device 1 (which an application software or other readable instruction clearly describes), the distance between the individual and the front side 122 is optimized and known. Typically, it is desirable if the individual is positioned as close as possible to the wall. The known distance between the individual 100 and the wall part 12 may also be used by an image processing algorithm, e.g. to calibrate extracted measurements. For example, objects closer to the camera may appear to be larger in the images due to a perspective effect. This means that if the individual is standing a certain distance from the wall part 12, the covered parts of the predetermined pattern on the wall part 12 will be larger than the actual individual, due to this perspective effect. Knowing the distance between the individual and the wall, and a set of known distances between points on the predetermined pattern on both the floor part 11 and the wall part, it is possible to compute a correction factor that translates measurements extracted from the covered parts of the predetermined patterns to measurements on the individual.
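Under a pinhole-camera assumption (an illustrative simplification; the patent only requires that a correction factor be computable), the correction factor follows from similar triangles: a feature of the body projects onto the wall plane enlarged by the ratio of the camera-to-wall and camera-to-body distances.

```python
def correct_for_perspective(size_at_wall_cm: float,
                            camera_to_wall_cm: float,
                            body_to_wall_cm: float) -> float:
    """Translate a size measured in the wall plane (e.g. from covered
    pattern squares) to the corresponding size at the body. By similar
    triangles, the scale factor is
    (camera_to_wall - body_to_wall) / camera_to_wall."""
    scale = (camera_to_wall_cm - body_to_wall_cm) / camera_to_wall_cm
    return size_at_wall_cm * scale
```

For example, a 110 cm shadow on a wall 3 m from the camera, cast by a body 30 cm in front of the wall, corresponds to a 99 cm body width, which also shows why standing close to the wall keeps the correction small.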
Hence, there are several advantages achieved by using the collapsible wall device in comparison to other types of reference objects, such as a reference object (e.g. a ruler or disc) positioned in front of the body or in other parts of the image. Such a reference object may in addition cause the arms of the individual to disappear or parts of the body to be occluded, e.g. due to the individual holding the reference object. This problem is typically even more significant when picturing the individual 100 from the side or similar. The benefits of using the collapsible wall device 1 for determining body measurements will be described in further detail in relation to Fig. 9, which describes a method for determining body measurements that can be used together with the collapsible wall device 1.
Hence, the collapsible wall device 1 may have different forms and be designed and constructed in different ways. In some embodiments, the floor part 11 and the wall part 12 are two (or more) separate pieces, as illustrated in Fig. 3b and 3c. Then, the floor part 11 and the wall part 12 may be disconnectable from each other. Alternatively, the floor part 11 and the wall part 12 are two pieces that are non-releasably fixated to each other. The collapsible wall device 1 or the separate pieces may have different shapes. For example, they may be rectangular (Fig. 3a), rounded (Fig. 3b) or elliptic (Fig. 3c). In some embodiments, a wedge part 17 is arranged between the parts to assure that the predetermined pattern covers the entire background of the individual 100 standing on the collapsible wall device 1. In some embodiments, the predetermined pattern is aligned at the transitions between the wall part 12, the floor part 11 and (if present) the wedge part 17.
The erecting device 13 typically comprises rods configured to erect the wall part 12 in an upright position and possibly also to tense the floor part 11. In some embodiments, the collapsible wall device 1 is self-standing, i.e. independent of walls and hooks.
Fig. 4a-4c illustrate examples of rods 32 that can be used in the erecting device 13. The rods 32 can be made of different materials, e.g. plastic, fiberglass, metal, wood or other suitable material. In some embodiments the rods 32 are bendable or flexible. In some embodiments the rods are made from rod parts. If the rods 32 are arranged in channels, the channels may have openings to facilitate assembling the parts of the rods. Fig. 4a shows a spring rod 32', e.g. a steel spring strip. Fig. 4b illustrates a telescopic rod 32''. Fig. 4c illustrates a rod 32''' formed by a plurality of hollow parts that are held together by elastic twists. The hollow parts are e.g. assembled using sleeves, e.g. metallic sleeves.
The erecting device 13 may be constructed in different ways. For example, the rods 32 are arranged as a frame or as a cross. Some possible designs will now be described with reference to Fig. 5 to 7. For simplicity, the predetermined pattern and the at least one marking 14 are not shown in Fig. 5-7.
Fig. 5a-5e illustrate an erecting device 13 according to a first embodiment. In this embodiment the wall part 12 and the floor part 11 are designed as one sheet. In Fig. 5a-5e the sheet is rectangular. However, it must be appreciated that it might also have other shapes, e.g. it may be elliptic or have rounded corners.

Fig. 5a illustrates the collapsible wall device 1 in an erected state. The erecting device 13 comprises springs arranged at the outer edges of the wall part 12 and the floor part 11. More specifically, in this embodiment the erecting device comprises a rod pocket and at least one rod 532, e.g. a spring steel strip. The rod pocket defines a channel 531 that extends along the edges of the rectangular sheet. The at least one rod 532 is positioned in the channel 531 along the edge of the rectangular sheet, as illustrated in Fig. 5b, which shows the channel 531 seen from the side along the channel 531. In some embodiments the at least one rod 532 is bendable or flexible. Thus, the collapsible wall device 1 may be rolled without removing the at least one rod 532.

In the erected state, the wall part 12 is raised to an upright position in relation to the floor part 11. This is achieved by folding the rectangular sheet such that the angle α between the floor part 11 and the wall part 12 is about 90°. An angle of 90° is usually desirable. However, in reality the angle α might deviate a few degrees. The at least one rod 532 is angled in the transition 533 between the floor part 11 and the wall part 12. In other words, the at least one rod 532 is shaped like a corner in the transition between the wall part 12 and the floor part 11. In some embodiments, the at least one rod 532 is also angled in the corners 534 of the rectangular sheet. Another possibility is that the corners of the rectangular sheet are rounded and that the at least one rod is simply bent in the corners.

If the at least one rod 532 is stiff enough, the corners can be formed by bending the steel spring, as illustrated in Fig. 5c. Another possibility is to use a sleeve 537 made of a solid material to create the corner (Fig. 5d).

In some embodiments, illustrated in Fig. 5e, the erecting device 13 comprises a plurality of rods 532 that can be assembled by coupling means. For example, the rod parts are connected by angle coupling means 535, e.g. at the transitions 533 between the floor part 11 and the wall part 12. In some embodiments, openings 536 are formed in the channels 531 at the transitions, such that a user can easily connect the parts to each other when erecting the collapsible wall device 1. The angle coupling means 535 are e.g. plastic blocks with holes on two adjacent sides where the springs may be inserted and fixated, such that the angle of about 90° is formed between the rods 532 of the wall part 12 and the floor part 11 when the collapsible wall device 1 is erected.

In some embodiments, the rods 532 are also split into rod parts along the sides of the collapsible wall device 1. For example, the rods 532 are split in the middle of the upper edge 123 (Fig. 3a) of the wall part 12 and in the middle of the front edge 113 of the floor part 11. In this way, the collapsible wall device 1 may be folded along a middle 51 before rolling it, which means that the collapsible wall device 1 will be shorter (about half length) in the collapsed state. Coupling means 535 are then arranged to fasten the rod parts to each other when erecting the collapsible wall device 1. The coupling means 535 are e.g. plastic blocks or sleeves with holes on opposite sides where the rods can be inserted and fixated. The coupling means 535 and the angular coupling means 535 may be fixed, e.g. stitched, to the collapsible wall device 1. In some embodiments, openings 536 are formed in the channels 531 at the couplings, such that a user can connect the parts to each other. Alternatively, the coupling means 535 are hinges (Fig. 5f) that can be locked in a desired angle with e.g. a "click" mechanism.
Fig 6a illustrates a collapsible wall device 1 with an erecting device 13 according to a second embodiment. In this embodiment the floor part 11 and the wall part 12 have rounded corners. However, they might as well be elliptic. The erecting device 13 comprises springs 632 arranged at the outer edges of the wall part 12 and the floor part 11. In other words, a spring 632, e.g. a steel spring strip, is arranged in a channel 631 along the wall part 12 (Fig 6b). In the same way another steel spring strip is arranged in a channel along the floor part 11. In the erected state the floor part 11 is attached to the wall part 12 by a coupling device 16, which holds the wall part 12 in an upright position in relation to the floor part 11, see Fig. 6c, which pictures the coupling device (and the collapsible wall device 1) seen from the side. The coupling device 16 is e.g. a hinge that can be locked at an angle of 90°, or it is a fixed coupling. In some embodiments, the wall part 12 may be detached from the floor part 11 when the collapsible wall device 1 is in a collapsed state.
The collapsible wall device 1 can then be folded by twisting the springs 632 (Fig. 6d), such that the flexible rod forms two or more loops (Fig. 6e) instead of one. The collapsible wall device 1 can then fit in a flat case or box. When taking the collapsible wall device 1 out of the case or box, the steel spring strip will force the collapsible wall device 1 to automatically "pop up". In other words, in some embodiments, the erecting device 13 is configured to raise the wall part 12 to the upright position.
Fig. 7 illustrates an erecting device 13 according to a third embodiment. In this embodiment the erecting device 13 comprises (stiff) rods 731 that are arranged as a cross at the back of the wall part and attached to the wall part 12 in the corners 734. Click joints 733 are arranged along the rods 732 such that the rods can be collapsed (leftmost drawing). The click joints make it possible to unfold the collapsible wall device 1 (like an umbrella). The drawing second closest to the left illustrates the collapsible wall device 1 when erected, and the two right figures illustrate the click joint 733 in further detail, when open (second rightmost) and closed (rightmost). In Fig 7, the floor part is only supported by the floor. However, it may alternatively be supported by at least one rod.

It must be appreciated that the collapsible wall device 1 and the erecting device 13 are not limited to the examples above. In principle, any suitable construction may be used as long as the wall is erectable to a state where the at least one marking 14 and the predetermined pattern 15 are arranged as described above and as long as it is also collapsible and portable. Even though a stand-alone construction is generally desirable, a wall-mounted design is not to be excluded.

In other words, in some embodiments, the floor part is only supported by the surface. In some embodiments the floor part is also supported by rods, in order to assure that it is correctly unfolded.
Fig 8a-8d illustrate examples of markings indicating where the individual shall stand. The markings indicate to a user an approximate position for his or her feet when capturing images for use when determining body measurements.

The markings are for example footprints facing away from the wall part 12, as illustrated in Fig. 8a. The footprints show where the individual shall put his or her feet. The markings are typically matched to an image processing algorithm, such that the images of the individual are captured from angles beneficial for determining body measurements.
For simplicity, Fig. 8a to 8d only illustrate two pairs of markings: one marking (here denoted 14a) positioned close to the wall part 12 that indicates a direction facing straight out from the wall part 12, and one marking (here denoted 14b) a bit further away from the wall part 12 that indicates an angle facing along the wall. However, the markings may indicate further directions, such that the individual may be pictured from different angles. For example, the markings may comprise 6 pairs of footprints with 15° in between, such that the individual may be pictured from 6 different angles. In other words, in some embodiments, the at least one marking indicates a plurality of directions for the individual to face when capturing images for use in the determination of body measurements.

In some embodiments the markings comprise lines 141 with different styles (e.g. colour or pattern) for different foot sizes. In other words, the at least one marking 14 may also comprise markings for different foot sizes, for example different colours or styles that correspond to different shoe sizes (for instance size 8, 12, 14 etc.). Then it will be even more clear to the user where to put his or her feet.
The markings may be designed in other ways, as illustrated in Fig. 8b to 8d. In some embodiments, the markings are shaped as arrows (Fig. 8b, Fig. 8c) indicating where the individual shall place his or her feet and in which direction. In some embodiments, the markings comprise circles (Fig. 8d) showing a centre and an area where the user shall place his or her feet.
All the different types of markings can be numbered, such that a user may be instructed (e.g. via a software application used for determining the body measurements) about the position and direction in which he or she should place his or her feet in one particular image.

Fig. 9 is a flowchart of a computer implemented method for determining body measurements of an individual 100. The method may be implemented as a computer program comprising instructions which, when the program is executed by a computer (e.g. a processor in an electronic device 2), cause the computer to carry out the method. According to some embodiments the computer program is stored in a computer-readable medium (e.g. a memory or a compact disc) that comprises instructions which, when executed by a computer, cause the computer to carry out the method.
The proposed method may be performed by an individual 100 being a customer who is shopping in an internet store. The individual 100 may be at home or in principle anywhere else. The computer implemented method may be executed locally, i.e. where the individual is. For example, the computer implemented method may be a software application installed on the customer's smartphone. Alternatively, the computer implemented method may be executed remotely in a server. In some embodiments the method is performed jointly by a local device and a remote device that communicate with each other.
The method is typically performed using the collapsible wall device 1 described in Fig. 1 to Fig. 8. However, another background may also be used, as long as it has a predetermined pattern thereon. In other words, in some embodiments, the predetermined pattern comprises regularly spaced shapes with predetermined dimensions.
The user, e.g. the individual, initiates the method for example by starting a software application installed on a smartphone. The user may then be instructed to fold out the collapsible wall device 1 and to take a series of images while standing in different directions indicated e.g. by the at least one marking 14. In other words, in some embodiments, the method comprises providing S0 user output instructing an individual to stand in front of a wall having a predetermined pattern thereon. The instruction may also instruct the user how to pose. For example, the individual may be instructed to hold his/her hands on his/her hips or straight out from the body, while standing on the at least one marking 14 on the collapsible wall device 1.
The proposed method comprises obtaining S1 at least two images of the individual, picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon. In other words, the individual is pictured from different angles or directions in the at least two images. If the method is performed using a mobile device comprising a camera (e.g. a smartphone), the camera may be used to capture the images. In other words, in some embodiments the obtaining S1 comprises capturing the images or reading the captured images from a memory of the electronic device performing the method. If the method is implemented in a server, the obtaining S1 typically comprises receiving the images from a camera, e.g. via a web interface or via email.

It is typically important that the individual 100 stands in a desirable way in relation to the wall. For example, the individual shall stand close to the wall and face different predetermined directions in the different images. This might easily be achieved using the collapsible wall device 1, as it will guide the user to the right positions. For example, 3 to 5 images are captured. Between the capturing of the images the individual typically turns slightly, such that the next image pictures the user from a different angle. The images are typically captured straight from the front in relation to the wall. In some embodiments, the individual has a predefined pose in the images. For example, the user holds his/her hands at his/her hips or straight out from the body. In other words, in some embodiments, the obtained images picture the individual from predetermined angles and/or in predetermined poses.
The shooting can be done as follows. The individual 100 to be measured stands on the at least one marking 14 on the floor part 11 of the collapsible wall device 1, with his back facing the front side 122 of the wall part 12. Another person may need to assist in capturing the images using e.g. a smartphone comprising a camera. Alternatively, a self-timer is used. A first image is captured straight from the front. The individual then turns e.g. 20 degrees, whereby a second image is captured, and so on. This continues until a certain number of images have been captured. The images are then stored in the smartphone for further processing. Alternatively, the images are sent to a server for storage.

In some embodiments, a video is captured, and the obtaining then comprises selecting the images from the captured video. The video may picture the individual rotating, while standing on the collapsible wall device 1.

In some embodiments, the individual stands still during the shooting, while the camera is moved around the individual to picture the individual from different directions. In some embodiments, there are further markings (e.g. arrows) on the floor part indicating such angles. Then it can be assured that the individual stands at the same position and with the same pose in all images. This may be beneficial for some types of image processing.
A combination of rotating the individual and moving the camera is also possible. For example, images from 3 different angles are captured where the individual faces the camera (i.e. standing with his or her feet at an angle from the wall). Then the individual turns 180 degrees, and three images are captured from the same angles, to picture the individual from the back. In some embodiments, the floor part comprises a rotating part, e.g. a pedestal, such that the individual can be automatically rotated.

In some embodiments, at least one image is taken from above to picture the individual's feet. Such an image enables determining measurements of the individual's foot and may consequently be used to try out shoes or similar.

In some embodiments, at least one image of the background pattern without the individual present is also obtained. Such an image can be used to calibrate optical parameters of the camera such as lens distortion, focal length and principal point, or to compensate for angular distortion or imperfection in the installation of the collapsible wall device 1.
The method further comprises analysing S2 the predetermined pattern pictured in the obtained images to detect the parts of the images that comprise the body of the individual and to determine reference points of the predetermined pattern. More specifically, the predetermined pattern is analysed to (i) find which parts of the images contain the background and the individual respectively, and (ii) determine the correspondence between pixel measurements and physical metric measurements. This is done by analysing the predetermined pattern 15, which is basically a background reference pattern. In some embodiments, the pattern comprises a plurality of regularly spaced squares with predetermined dimensions. The system then first detects these squares. This can be done in a number of ways. One possible approach is (i) to apply an image segmentation algorithm to detect connected regions, (ii) to analyse each region to determine whether it is a square of the expected colour, (iii) to create a list of the sizes of each detected square, and (iv) to determine the square size that fits best with a majority of the detected squares. The determination of whether a segmented region is a square can be done in a number of ways, e.g. by fitting a polygon to the square boundary. The comparison with the expected reference colour should typically be done in a liberal way, to allow for photographic differences due to lighting conditions and individual differences between cameras. The determination of a majority square size can be done e.g. by (i) taking the median of all square sizes or (ii) selecting the square size S where the number of squares with a size within [S - s, S + s] is as large as possible, using a predetermined parameter s. Once the majority square size has been decided, detected squares with a size that differs too much from the majority size are rejected, and a list of accepted squares is kept.
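The majority-size selection and outlier rejection described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation; it assumes an earlier segmentation step has already produced candidate square side lengths in pixels, and the tolerance value is an arbitrary example.

```python
def majority_square_size(sizes, s=2.0):
    """Return the size S among the candidates with the most
    candidates inside [S - s, S + s]."""
    best_size, best_count = None, -1
    for candidate in sizes:
        count = sum(1 for x in sizes if abs(x - candidate) <= s)
        if count > best_count:
            best_size, best_count = candidate, count
    return best_size

def accept_squares(sizes, s=2.0):
    """Keep only squares whose size is close to the majority size."""
    majority = majority_square_size(sizes, s)
    return [x for x in sizes if abs(x - majority) <= s]
```

For example, with candidate sizes `[20, 21, 19, 20, 35, 20]`, the majority size is 20 pixels and the spurious 35-pixel region is rejected.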
The precise colours of the predetermined pattern, as manifested in the current photographic setting, can then be determined by computing a mean value of the colours within and in between the reference squares.
The detected predetermined pattern 15 is then used to determine known reference points of the predetermined pattern. If the pattern comprises squares, the reference points are for example corners of one or more of the squares. In this embodiment, the determined reference points are typically used to compute a pixels-to-meter ratio for each image, which is used in later processing to convert body measurements to physical, metric measurements.
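The pixels-to-meter conversion can be sketched as below. The 0.05 m square side is an assumed pattern dimension for illustration only; the patent does not specify the physical size of the squares.

```python
SQUARE_SIDE_M = 0.05  # assumed physical side of one pattern square, in metres

def pixels_per_meter(square_side_px):
    """Ratio derived from the detected square side in pixels."""
    return square_side_px / SQUARE_SIDE_M

def to_metric(distance_px, square_side_px):
    """Convert a pixel distance in the image plane to metres at the wall."""
    return distance_px / pixels_per_meter(square_side_px)
```

With a detected square side of 50 pixels, the ratio is 1000 pixels per metre, so a 500-pixel distance corresponds to 0.5 m at the wall.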
The method further comprises constructing S3 a model of the body of the individual based on the detected parts of the image comprising the body of the individual. There are different ways of implementing this.

In some embodiments, the constructed model comprises a set of binary two-dimensional (also referred to as 2D) images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images. More specifically, for each image, a foreground/background mask is obtained. The rest of the method will now be explained with reference to this embodiment.
The foreground/background mask is for example obtained using a "green-screen-style" analysis, which comprises comparing all pixel values with the predetermined pattern and applying a threshold on the colour difference. The output from this step is a set of binary images with a predetermined value for pixels belonging to the predetermined pattern and another predetermined value for pixels belonging to the body of the individual.
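A minimal sketch of such a threshold on colour difference is given below; the reference colour and threshold are illustrative assumptions, and in practice the pattern colours estimated in the previous step would be used, possibly with per-region references.

```python
def colour_distance(c1, c2):
    """Euclidean distance between two RGB tuples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def foreground_mask(image, pattern_colour, threshold=60.0):
    """image: 2-D list of RGB tuples. Returns a 2-D list where 0 marks
    background (close to the pattern colour) and 1 marks foreground."""
    return [[0 if colour_distance(px, pattern_colour) <= threshold else 1
             for px in row] for row in image]
```

A near-white pixel in front of a white pattern is classified as background, while a dark skin or clothing pixel falls outside the threshold and is classified as foreground.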
The predetermined pattern brings about further advantages that may be used e.g. when constructing S3 the model. For example, the predetermined pattern can be used to compensate for optical or perspective distortion in the obtained images. If the predetermined pattern continues on the floor part, then it may be used for further correction and calibration of the images. Furthermore, the predetermined pattern on the floor part may be used to verify the camera's position in relation to the wall.
The method further comprises detecting S4 body parts in the constructed model and/or in the obtained images. In other words, the set of binary images (i.e. the two-dimensional model) is analysed to detect key body parts such as head, torso, legs, feet, arms, knees, shoulders, etc. The output is e.g. a list of key body part positions, expressed as image pixel coordinates. There are several published methods for human pose estimation in the scientific computer vision literature that can be used, and this invention is not restricted to any specific such method. One possibility is to use deep Convolutional Neural Networks (deep CNNs). In other words, in some embodiments, the detecting S4 comprises using a machine learning model trained to detect the body parts.
As an example, the method described in [Newell, Yang, Deng. "Stacked hourglass networks for human pose estimation". ECCV 2016] can be used. In this approach, a deep CNN is trained on images of humans in different environments. At runtime, the CNN produces one heatmap for each key point of the human body (knee, elbow, ankle, etc.). By finding maxima in these heatmaps, the location of each such key point in the image can be obtained. In the context of this disclosure, better results can be achieved by constructing a deep CNN that is trained only on images representative of the intended use case. A training dataset can be constructed containing images of different individuals standing at the designated position in front of the predetermined pattern. Ground truth can be provided by manual annotation, where the precise location of each key body part is annotated by a human. A neural network can then be trained on this dataset, or on a combination of this dataset and publicly available larger and more general datasets. The neural network layout can be constructed in a number of ways, e.g. according to [Newell, Yang, Deng] as mentioned above. The binary foreground/background mask can also be used as a separate input to the neural network, included in both training and runtime. At runtime, the foreground/background mask can be used to cut out an image patch, where the individual is well centred, to feed to the network.

If the analysing S2, constructing S3 or detecting S4 fails in one or more of the images, then the user might need to recapture one or more of the images. Hence, in some embodiments, the method comprises providing S5 user output indicating that one or more of the obtained images has insufficient quality. The instruction is e.g. provided via a user interface in the smartphone.
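The heatmap read-out step mentioned above (one maximum per key point) can be sketched as follows; a plain nested list stands in for the CNN output here, purely for illustration.

```python
def keypoint_from_heatmap(heatmap):
    """Return the (row, col) location of the largest value in a 2-D heatmap,
    i.e. the estimated pixel position of one body key point."""
    best, best_rc = float("-inf"), (0, 0)
    for r, row in enumerate(heatmap):
        for c, v in enumerate(row):
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc
```

In a real pipeline this would be applied once per key-point channel of the network output, possibly with sub-pixel refinement around the maximum.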
The method further comprises determining S6 the body measurements based on the constructed model, the detected body parts and the determined reference points. More specifically, the body measurements are determined based on the constructed model and a relationship between image coordinates of the reference points in the constructed model and known metric distances between the determined reference points of the predetermined pattern. For the two-dimensional model this is done by extracting S6a metrics from the individual binary two-dimensional images and combining S6b the metrics to obtain the body measurements, see Fig. 10. In other words, the measured key body part positions are used to guide the extraction of the desired output body measurements.
There are two different types of measurements: distance measurements and cross-section measurements. Distance measurements are for example body height or the distance between shoulders. Cross-section measurements are e.g. the circumference of an arm, a leg or a head.

In one example embodiment, to extract S6a a measurement expressing the circumference of a certain limb 104, two key points 101 representing the beginning and end of that limb 104 are used to construct a limb axis 103. Across this limb axis 103, a number of cross-section measurements 102 are made, excluding cross-sections that are closer to the key points 101 than a predetermined margin. The largest cross-section measurement is then used as the cross-section measurement. This procedure is illustrated for the upper arm cross-section measurement in Fig. 11.
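The cross-section search can be sketched as below under a simplifying assumption that the limb axis is vertical in the binary mask, so each cross-section is a mask row; real code would sample perpendicular to an arbitrary limb axis between the two key points. The margin value is illustrative.

```python
def max_cross_section(mask, top, bottom, margin=1):
    """mask: 2-D list of 0/1 foreground values; top/bottom: row indices of
    the two limb key points. Measures the foreground width on each row
    between the key points, skipping rows within `margin` of either key
    point, and returns the largest width in pixels."""
    widths = []
    for r in range(top + margin, bottom - margin + 1):
        widths.append(sum(mask[r]))
    return max(widths)
```

For a limb mask that widens in the middle, the widest row between the (margin-excluded) key points is returned as the cross-section measurement.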
For distance measurements expressing lengths (e.g. shoulder-to-shoulder), the measurements can be extracted S6a by measuring the distance between key pose points directly.
Then the measurements from the individual images are combined S6b. First, all pixel measurements are converted to metric measurements using the reference points (or rather the pixels-to-meter ratio) computed in the analysing S2 step. In some embodiments the distance between the wall and the individual 100 is known. Then the measurements may also be calibrated to compensate for this. To determine this compensation, the camera focal length can be used. The focal length can either be preconfigured to a nominal value related to the specific camera model or be computed in a camera calibration with the predetermined pattern.
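One way the compensation could work, under a simple pinhole-camera assumption not spelled out in the text: the pixels-to-meter ratio derived from the pattern holds at the wall, so a measurement on a person standing some offset in front of the wall must be scaled down by the ratio of camera distances. The distances below are illustrative values.

```python
def compensated_metric(measure_px, ppm_at_wall, wall_dist_m, offset_m):
    """measure_px: a pixel measurement on the individual.
    ppm_at_wall: pixels-per-metre ratio valid at the wall plane.
    wall_dist_m: camera-to-wall distance; offset_m: how far the
    individual stands in front of the wall."""
    naive_m = measure_px / ppm_at_wall           # assumes subject at the wall
    return naive_m * (wall_dist_m - offset_m) / wall_dist_m
```

For example, a 1000-pixel measurement at 1000 px/m reads naively as 1.0 m; if the individual stands 0.2 m in front of a wall 2.0 m from the camera, the compensated value is 0.9 m.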
The distance measurements from the different input images are simply combined S6b by averaging the distance measurements, to minimize the measurement error. In other words, in some embodiments the combining S6b comprises averaging distance measures from the individual binary two-dimensional images to produce a distance measurement associated with the body. In some embodiments, any individual measurement that stands out from the other measurements is disregarded.
For circumference measurements, the combining S6b comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part. This can be done in a number of ways, e.g. by fitting a smooth curve consistent with the cross-section measurements and measuring the length of this curve. Prior knowledge about the typical shape of different body part cross-sections can be built in by assuming an initial outline curve and finding a new curve that is consistent with the measurements while being as close as possible to the original initial curve. An alternative, simpler method is to compute the circumference c as

c = k · (d1 + d2)

where d1 and d2 are orthogonal cross-section measurements, and k is a predetermined constant.
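Assuming the formula has the constant-times-sum form c = k · (d1 + d2) (consistent with the claim that different k cover ellipses, rectangles and shapes in between), it can be sketched as:

```python
import math

def circumference(d1, d2, k):
    """Approximate circumference from two orthogonal cross-section
    measurements d1 and d2 and a shape constant k."""
    return k * (d1 + d2)

# Illustrative shape constants (assumptions, not values from the patent):
K_ELLIPSE = math.pi / 2    # perimeter approximation pi*(d1 + d2)/2 for an ellipse
K_RECTANGLE = 2.0          # exact perimeter 2*(d1 + d2) for a rectangle
```

For a circular limb cross-section with d1 = d2 = d, the ellipse constant reproduces the exact circle circumference π·d; body parts would use intermediate, empirically chosen constants.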
Note that by selecting k differently, this formula can be used for computing the circumference of ellipses, rectangles, and everything in between. A constant k can be selected that best captures the typical shape of each body part, and different constants k can be used for different body parts. This is illustrated in Fig. 12.

If the individual stands on a surface that also has a predetermined pattern thereon, then measurements may also be performed on the individual's feet. This may be done in a similar manner as for the rest of the body. For example, the feet's contour may be determined by analysing the part of the predetermined pattern 15 that is covered by the feet. However, this may require that further images are captured, e.g. from above.

In some embodiments, the method comprises determining S7 a size of a garment by comparing the estimated body measurements with a size table. Manufacturers typically provide tables with arm length, forearm circumference, size of the collar, etc. for specific sizes. When the body measurements have been determined, the body measurements may be compared to the tables to find the right size for the individual. It is also possible to determine which body measurements differ most from the standard sizes, e.g. if an individual has relatively large forearms.
A plurality of size tables provided by different manufacturers may be stored in a database, e.g. in a server. Each manufacturer may provide one or more size tables. If a customer has selected a particular garment, the corresponding size table is retrieved from the server and the matching S7 is made for that particular size table. Hence, one set of body measurements may typically result in different sizes for different garments. Thus, a person that would normally buy size 10 for all garments would in some cases get an 8 or 12.

In general, it is desirable to select the garment such that all body measurements are smaller than the numbers in the table. However, in particular for garments with a loose fit, a small deviation (<1-2 cm) above the value in the size table may be acceptable.
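The matching rule described above (smallest size whose table values all exceed the body measurements, with an optional overshoot tolerance for loose fits) can be sketched as follows; the size table values are invented purely for illustration.

```python
SIZE_TABLE = {                       # measurements in cm, per size (illustrative)
    "S": {"chest": 92, "arm": 60},
    "M": {"chest": 100, "arm": 63},
    "L": {"chest": 108, "arm": 66},
}
SIZE_ORDER = ["S", "M", "L"]

def match_size(body, table=SIZE_TABLE, order=SIZE_ORDER, tolerance=0.0):
    """Return the smallest size whose table values cover all body
    measurements, allowing `tolerance` cm of overshoot (loose fit)."""
    for size in order:
        if all(body[m] <= table[size][m] + tolerance for m in body):
            return size
    return order[-1]  # fall back to the largest available size
```

For example, a 98 cm chest and 61 cm arm match "M" against this table, while a 1 cm tolerance lets a 93 cm chest still match "S".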
The matching may also take other parameters into account, such as desirespresented by the customer. The customer might e.g. want a tight fit.
The proposed method has now been described with reference to a model comprising two-dimensional images. Another approach is to instead use a full three-dimensional reconstruction, 3D reconstruction, of the human body to extract the measurements. In other words, in some embodiments, the constructed model is a three-dimensional model of the body of the individual. The steps of obtaining S1 and analysing S2 images work basically in the same way as in the embodiment using 2D imaging, and the output from the analysing S2 is thus a plurality of reference points of the predetermined pattern and a binary mask defining which parts of the obtained images contain the individual. However, in this embodiment it may be beneficial to let the individual stand still and to instead rotate the camera around the individual while capturing the images. Then it can be assured that the individual has the same pose in all images. In some embodiments, the analysing S2 step is performed for only one of the images, e.g. an image comprising a frontal view. However, it is generally good to perform the analysing for several images, to increase robustness and average out errors.
The following steps (S3 to S6) are slightly different when using a three-dimensional reconstruction. In this embodiment the constructing step takes multiple images picturing the individual as input and produces a 3D model of the body of the individual as output. The binary mask is used to determine where in the images the individual is located, so that the 3D reconstruction can focus on reconstructing the individual and not irrelevant objects around it (furniture, etc.). The predetermined pattern makes the 3D reconstruction more robust, as it ensures a high contrast between the individual and the background, as no details of the background have a similar colour and pattern as the individual, etc.

In some embodiments, the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm. The 3D model can be represented as (i) a depth map defined on a pixel grid, (ii) a point cloud, (iii) a polygonal mesh, or (iv) a voxel grid. Such methods are very well studied in the scientific computer vision literature, and any combination of algorithms found in the literature can be used. One example is described in [Vu, Labatut, Pons, Keriven. "High Accuracy and Visibility-Consistent Dense Multiview Stereo". PAMI 2012]. Most basic theory is well described in the book [Hartley, Zisserman. "Multiple View Geometry in Computer Vision". Cambridge University Press 2000]. If the individual stands close to the wall, the 3D reconstruction will only be able to accurately reconstruct the frontal side of the individual (i.e. the side facing the camera). To get accurate measurements, two sets of images can be obtained: one set where the individual is facing the camera, and one set where the individual faces the wall. For each of these image sets, a reconstruction can be produced that accurately reconstructs half of the body.
The two halves can then be combined, either by producing a joint 3D model of the entire body, or by producing half-circumference measurements from each partial body model and combining these measurements. The reference points of the predetermined pattern obtained by the reference pattern analysis are used to get an absolute metric scale of the 3D body model.

In some embodiments the distance between the wall and the individual 100 is known. Then this distance can be included as a metric constraint in the 3D reconstruction, contributing to resolving projective reconstruction errors and thereby increasing the accuracy of the metric reconstruction.

In the detecting S4 step, key body parts are then localized, in a manner similar to the embodiment using 2D imaging. However, in contrast to the embodiment using 2D imaging, in this embodiment 3D information can be included in the body part localization. For example, the 3D information can be represented as a depth map which is fed to a deep learning model as an additional input channel. The output from the detecting S4 is a list of positions of key body parts, relative to the 3D body model.

In the determining S6 step, the actual body measurements are then extracted from the 3D model. In other words, in some embodiments, the determining S6 body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts. As in the embodiment using 2D imaging, the extracted measurements may comprise circumference measurements of a body part and/or distance measurements associated with the body.

For circumference measurements, the procedure is illustrated in Fig. 13a and Fig. 13b. The 3D model is analysed along an axis 1301 going through two key body parts, e.g. shoulder 1302 and elbow 1303. A number of cross-sections 1304 of the model orthogonal to this axis are analysed. For each such cross-section 1304, the limb (e.g.
arm) contour 1305 is extracted by computing the intersection between the 3D model and the cross-section plane. The circumference of each such contour is measured, and the largest circumference over a certain number of cross-sections is taken as the output circumference measurement. If only a partial body contour is available, e.g. due to occlusion of the back side of the limb, the contour can be approximated using a minimum-energy continuation of the contour curve 1306, using an optimization that minimizes an energy function expressed by an integral over the (first and n-th order) curve derivatives.
For distance measurements, i.e. measurements representing lengths (e.g. shoulder-to-shoulder), the distance measurements can be made by directly measuring the distance between the points on the 3D model that are closest to the key pose positions.
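A sketch of this nearest-model-point distance measurement, assuming the 3D model is represented as a plain point cloud (one of the representations listed earlier); a real implementation would use a spatial index rather than a linear scan.

```python
def _dist(p, q):
    """Euclidean distance between two 3-D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def nearest_point(cloud, target):
    """Model point closest to a detected key pose position."""
    return min(cloud, key=lambda p: _dist(p, target))

def body_distance(cloud, key_a, key_b):
    """Distance between the model points nearest to two key poses."""
    return _dist(nearest_point(cloud, key_a), nearest_point(cloud, key_b))
```

For instance, key pose estimates that lie slightly off the model surface are snapped to their nearest cloud points before the length is measured.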
The predetermined pattern may also be used in other ways. For example, if the intrinsic calibration of the camera is not known or only known approximately, the background pattern can be used to resolve the projective ambiguity resulting from a 3D reconstruction using an uncalibrated camera. A reference coordinate system can be constructed using the pattern, and the reconstruction can be constrained to respect the known coordinates of points on the background pattern. Methods for handling projective ambiguities using additional constraints are well known in the computer vision literature, see e.g. [Hartley, Zisserman] as referenced above. In other words, in some embodiments, the computer implemented method comprises using the predetermined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated camera.
Fig. 14 illustrates an electronic device 2 configured to implement the proposed method. The illustrated electronic device 2 is a mobile telephone. The electronic device 2 includes a camera assembly 21 for capturing digital still pictures and/or digital video clips. It is emphasized that the electronic device 2 need not be a mobile telephone but could alternatively be a dedicated camera or some other device. Other exemplary types of electronic devices 2 include, but are not limited to, a camera, a tablet computing device, a PDA and a personal computer.

In some embodiments, the electronic device is a server arrangement.

In some embodiments, the electronic device 2 comprises a communication interface, e.g. a wireless communication interface, configured for communicating with a backend server.
The electronic device 2 also comprises a control unit 22 configured to implement the method described in connection with Fig. 9 to 13. Fig. 15 illustrates the control unit 22 in more detail. The control unit 22 comprises hardware and software. The hardware comprises, for example, various electronic components on, for example, a Printed Circuit Board, PCB. The most important of those components is typically a processor 221, e.g. a microprocessor, along with a memory 222, e.g. an EPROM or a Flash memory chip. The software (also called firmware) is typically lower-level software code that runs in the microcontroller.
The control unit 22, or more specifically a processor 221 of the control unit 22, is configured to cause the control unit 22 to perform any or all of the aspects of the method illustrated in Fig. 9 and described in connection thereto. For example, the determination of body measurements, i.e. steps S1 to S5, is performed in the electronic device 2. The obtained images are then stored in the memory 222 and need not be exposed outside the electronic device 2, which protects the privacy of the individual. Size tables are typically stored in a backend server. Hence, it might be desirable to perform the size matching (step S7) in the backend server. Alternatively, relevant size tables could be downloaded to the electronic device 2. Then the entire method may be performed in the electronic device 2. In particular, the control unit 22 is configured to capture, using the camera assembly 21, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon, and to analyse the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the pre-determined pattern.
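The size-matching step (S7) can be as simple as selecting the size-table entry that deviates least from the estimated body measurements. The sketch below illustrates this with a hypothetical size table and measurement keys; the real tables are garment-specific and, as noted above, typically reside in a backend server or are downloaded to the device.

```python
# Hypothetical size table (centimetres); real tables are garment-specific.
SIZE_TABLE = {
    "S": {"chest": 90, "waist": 76},
    "M": {"chest": 98, "waist": 84},
    "L": {"chest": 106, "waist": 92},
}

def match_size(measurements, size_table=SIZE_TABLE):
    """Return the size whose nominal measurements deviate least,
    summed over all measured quantities, from the estimate."""
    def deviation(nominal):
        return sum(abs(nominal[k] - measurements[k]) for k in measurements)
    return min(size_table, key=lambda s: deviation(size_table[s]))
```

For example, an individual with an estimated 97 cm chest and 83 cm waist would be matched to size "M" under this table.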
The control unit 22 is also configured to construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual, to detect body parts in the constructed model and/or in the obtained images, and to determine the body measurements based on the constructed model, the detected body parts and the determined reference points.
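One way the per-view cross-section measurements could be combined into a circumference (cf. the extracting and combining steps S6a/S6b discussed in the claims) is an elliptical approximation: the front-view width and side-view depth of, say, the chest give the two axes of an ellipse whose perimeter approximates the circumference. The sketch below uses Ramanujan's perimeter formula; treating the cross-section as an ellipse is an illustrative assumption, not the prescribed model of this disclosure.

```python
import math

def ellipse_perimeter(a, b):
    """Ramanujan's approximation to the perimeter of an ellipse
    with semi-axes a and b."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

def circumference_from_views(front_width, side_depth):
    """Combine a front-view width and a side-view depth (same units)
    into an approximate circumference of the body cross-section."""
    return ellipse_perimeter(front_width / 2, side_depth / 2)
```

In the degenerate circular case (width equal to depth) the formula reduces exactly to the circle circumference, which provides a quick sanity check.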
The disclosure also relates to a system comprising the collapsible wall described in Fig. 1 to 9 and the electronic device 2 described in relation to Fig. 14 and 15.
The terminology used in the description of the embodiments as illustrated in the accompanying drawings is not intended to be limiting of the described method, control arrangement or computer program. Various changes, substitutions and/or alterations may be made without departing from the embodiments of the disclosure as defined by the appended claims.
The term "or" as used herein is to be interpreted as a mathematical OR, i.e., as an inclusive disjunction, not as a mathematical exclusive OR (XOR), unless expressly stated otherwise. In addition, the singular forms "a", "an" and "the" are to be interpreted as "at least one", thus also possibly comprising a plurality of entities of the same kind, unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including" and/or "comprising" specify the presence of stated features, actions, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, actions, integers, steps, operations, elements, components and/or groups thereof. A single unit, such as e.g. a processor, may fulfil the functions of several items recited in the claims.

Claims (31)

Claims
1. A collapsible wall device (1) designed to enable determination of body measurements of an individual positioned on the collapsible wall device (1), the collapsible wall device (1) comprising:
- a floor part (11) comprising a front side with at least one marking (14) thereon, the at least one marking (14) indicating where the individual shall stand,
- a wall part (12) comprising a lower edge (121) foldably attached to one edge (111) of the floor part (11) and a front side with a predetermined pattern (15) thereon, the predetermined pattern (15) being recognizable by an image processing algorithm, and
- an erecting device (13) configured to hold the wall part (12) in an upright position in relation to the floor part, whereby the predetermined pattern (15) on the front side of the wall part (12) faces the individual (100) standing on the at least one marking (14) on the front side of the floor part (11).
2. The collapsible wall device (1) according to claim 1, wherein the erecting device (13) is configured to raise the wall part (12) to the upright position.
3. The collapsible wall device (1) according to claim 1 or 2, wherein the erecting device (13) comprises springs arranged at the outer edges of the wall part (12) and/or the floor part (11).
4. The collapsible wall device (1) according to any of the preceding claims, wherein the wall part (12) and the floor part (11) are formed by one single piece of material.
5. The collapsible wall device (1) according to any of the preceding claims, wherein the wall part (12) and the floor part (11) can be made of nylon, plastics, textile, paper or metal.
6. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking (14) comprises a pair of footprints facing away from the wall part (12).
7. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking (14) comprises markings for different foot sizes.
8. The collapsible wall device (1) according to any of the preceding claims, wherein the at least one marking indicates a plurality of directions for the individual to face, when capturing images for use in the determination of body measurements.
9. The collapsible wall device (1) according to any of the preceding claims, wherein the predetermined pattern comprises regularly positioned and/or sized objects.
10. A computer implemented method for determining body measurements of an individual, positioned on a collapsible wall device according to any of the preceding claims 1-9, the method comprising:
- obtaining (S1) at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon,
- analyzing (S2) the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the pre-determined pattern,
- constructing (S3) a model of the body of the individual based on the detected parts of the image comprising the body of the individual,
- detecting (S4) body parts in the constructed model and/or in the obtained images,
- determining (S6) the body measurements based on the constructed model, the detected body parts and the determined reference points, and
- recognizing the collapsible wall through an imprinted code on the collapsible wall/reference, in order to initiate detection of the individual and start measuring the body.
11. The computer implemented method according to claim 10, comprising:
- providing (S0) user output instructing an individual to stand in front of a wall having a predetermined pattern thereon.
12. The computer implemented method according to claim 10 or 11, comprising:
- providing (S5) user output indicating that one or more of the obtained images has insufficient quality.
13. The computer implemented method according to any one of claims 10 to 12, comprising:
- determining (S7) a size of a garment by comparing the estimated body measurements with a size table.
14. The computer implemented method according to any one of claims 10 to 13, wherein the constructed model comprises a set of binary two-dimensional images of the body of the individual, wherein each binary two-dimensional image corresponds to one of the obtained images.
15. The computer implemented method according to claim 14, wherein the determining (S6) comprises extracting (S6a) metrics from the individual binary two-dimensional images and combining (S6b) the metrics to obtain the body measurements.
16. The computer implemented method according to claim 15, wherein the combining (S6b) comprises averaging distance measures from the individual binary two-dimensional images to produce a distance measurement associated with the body.
17. The computer implemented method according to claim 15 or 16, wherein the combining (S6b) comprises combining individual cross-section measurements from the individual binary two-dimensional images to produce a circumference measurement of a body part.
18. The computer implemented method according to any one of claims 10 to 13, wherein the constructed model is a three-dimensional model of the body of the individual.
19. The computer implemented method according to claim 18, wherein the three-dimensional model of the body of the individual is constructed using a multiple-view 3D reconstruction algorithm or a structure-from-motion algorithm.
20. The computer implemented method according to claim 18 or 19, wherein the determining (S6) of the body measurements comprises extracting the body measurements from the three-dimensional model based on the detected body parts.
21. The computer implemented method according to claim 20, wherein the extracted measurements comprise at least one circumference measurement of a body part and/or a distance measurement associated with the body.
22. The computer implemented method according to any one of claims 18 to 21, further comprising using the predetermined pattern to resolve projective ambiguity resulting from constructing the three-dimensional model using an uncalibrated camera or an only approximately calibrated camera.
23. The computer implemented method according to any one of claims 10 to 22, wherein the detecting (S4) comprises using a machine learning model trained to detect the body parts.
24. The computer implemented method according to any one of claims 10 to 23, wherein the obtained images picture the individual from predetermined angles and/or in predetermined poses.
25. The computer implemented method according to any one of claims 10 to 24, wherein the predetermined pattern comprises regularly spaced shapes with predetermined dimensions.
26. The computer implemented method according to any one of claims 10 to 25, wherein the constructing (S3) comprises using the predetermined pattern to compensate for optical or perspective distortion in the obtained images.
27. A computer program comprising instructions which, when the program is executed by a control unit, cause the control unit to carry out the method of any one of claims 10 to 26.
28. A computer-readable storage medium comprising instructions which, when executed by a control unit, cause the control unit to carry out the method of any one of claims 10 to 26.
29. An electronic device (2) for determining body measurements of an individual, positioned on a collapsible wall device according to any of the preceding claims 1-9, comprising:
- a camera assembly (21) configured to capture images,
- a control unit (22) configured to:
- capture, using the camera assembly, at least two images of the individual, the images picturing the individual facing in different directions while standing in front of a wall having a predetermined pattern thereon,
- analyze the predetermined pattern pictured in the obtained images to detect parts of the images that comprise the body of the individual and to determine reference points of the pre-determined pattern,
- construct a model of the body of the individual based on the detected parts of the image comprising the body of the individual,
- detect body parts in the constructed model and/or in the obtained images,
- determine the body measurements based on the constructed model, the detected body parts and the determined reference points, and
- recognize the collapsible wall through an imprinted code on the collapsible wall/reference, in order to initiate detection of the individual and start measuring the body.
30. The electronic device of claim 29, wherein the control unit is configured to execute the method according to any one of claims 10 to 26.
31. A system comprising the collapsible wall according to any one of claims 1-9 and the electronic device according to claim 29 or 30.
SE1830233A 2018-08-08 2018-08-08 Collapsible wall device and method for determining body measurements of an individual SE543031C2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SE1830233A SE543031C2 (en) 2018-08-08 2018-08-08 Collapsible wall device and method for determining body measurements of an individual
PCT/SE2019/000012 WO2020032843A1 (en) 2018-08-08 2019-08-07 Collapsible wall device and method for determining body measurements of an individual

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SE1830233A SE543031C2 (en) 2018-08-08 2018-08-08 Collapsible wall device and method for determining body measurements of an individual

Publications (2)

Publication Number Publication Date
SE1830233A1 SE1830233A1 (en) 2020-02-09
SE543031C2 true SE543031C2 (en) 2020-09-29

Family

ID=69415642

Family Applications (1)

Application Number Title Priority Date Filing Date
SE1830233A SE543031C2 (en) 2018-08-08 2018-08-08 Collapsible wall device and method for determining body measurements of an individual

Country Status (2)

Country Link
SE (1) SE543031C2 (en)
WO (1) WO2020032843A1 (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB855101A (en) * 1957-01-04 1960-11-30 Nikolaus Muller Improvements in and relating to the preparation of tailors' patterns
US3902182A (en) * 1973-10-18 1975-08-26 Lars Evert Bernhard Hillborg Process and device for determining photographically dimensions of persons and objects
US5956525A (en) * 1997-08-11 1999-09-21 Minsky; Jacob Method of measuring body measurements for custom apparel manufacturing
US20070083384A1 (en) * 2005-09-28 2007-04-12 Right-Fit Education Llc Method and system for posture awareness training
US20090062693A1 (en) * 2007-08-29 2009-03-05 Lancastria Limited System for determining individual user anthropometric characteristics related to mattress preference
NL1037949C2 (en) * 2010-05-10 2011-11-14 Suitsupply B V METHOD FOR DETERMINING REMOTE SIZES.
US8721567B2 (en) * 2010-12-27 2014-05-13 Joseph Ralph Ferrantelli Mobile postural screening method and system
GB201317658D0 (en) * 2013-10-07 2013-11-20 Laurence Daniel isizeme

Also Published As

Publication number Publication date
SE1830233A1 (en) 2020-02-09
WO2020032843A1 (en) 2020-02-13

Similar Documents

Publication Publication Date Title
JP7249390B2 (en) Method and system for real-time 3D capture and live feedback using a monocular camera
US10460517B2 (en) Mobile device human body scanning and 3D model creation and analysis
CN108389212B (en) Method for measuring foot size and computer readable medium
US9905019B2 (en) Virtual apparel fitting systems and methods
JP6392756B2 (en) System and method for obtaining accurate body size measurements from a two-dimensional image sequence
US10813715B1 (en) Single image mobile device human body scanning and 3D model creation and analysis
JP5791812B2 (en) Clothes image processing device, clothes image display method, and program
US10420397B2 (en) Foot measuring and sizing application
TR201815349T4 (en) Improved virtual trial simulation service.
US20180247426A1 (en) System for accurate remote measurement
CN107481082A (en) A kind of virtual fit method and its device, electronic equipment and virtual fitting system
WO2018133691A1 (en) Method and device for obtaining figure parameter of user
KR101557492B1 (en) Apparatus and Method for generating user's three dimensional body model based on depth information
US20130069939A1 (en) Character image processing apparatus and method for footskate cleanup in real time animation
KR101499699B1 (en) Apparatus and Method for generating user's three dimensional body model based on depth information
Desai et al. Skeleton-based continuous extrinsic calibration of multiple RGB-D kinect cameras
CN107430542A (en) Obtain image and the method for making clothes
EP3801201A1 (en) Measuring surface distances on human bodies
US11544872B2 (en) Camera calibration method using human joint points
SE543031C2 (en) Collapsible wall device and method for determining body measurements of an individual
Senanayake et al. Automated human body measurement extraction: single digital camera (webcam) method–phase 1
Wu et al. A study on improving the calibration of body scanner built on multiple RGB-Depth cameras
JP7447956B2 (en) Processing device, attitude analysis system, and program
US20220222887A1 (en) System and method for rendering clothing on a two-dimensional image
US20210069580A1 (en) Methods, Devices, Systems, and Computer Program Products for Creating Three-Dimensional Puzzles