GB2504711A - Pose-dependent generation of 3d subject models - Google Patents

Pose-dependent generation of 3d subject models

Info

Publication number
GB2504711A
GB2504711A GB1214042.2A GB201214042A
Authority
GB
United Kingdom
Prior art keywords
subject
pose
sample space
dimensional representation
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1214042.2A
Other versions
GB2504711B (en)
GB201214042D0 (en)
Inventor
Frank Perbet
Minh-Tri Pham
Oliver Woodford
Ricardo Gherardi
Atsuto Maki
Bjorn Stenger
Roberto Cipolla
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1214042.2A priority Critical patent/GB2504711B/en
Publication of GB201214042D0 publication Critical patent/GB201214042D0/en
Publication of GB2504711A publication Critical patent/GB2504711A/en
Application granted granted Critical
Publication of GB2504711B publication Critical patent/GB2504711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1079Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1071Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring angles, e.g. using goniometers
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1072Measuring physical dimensions, e.g. size of the entire body or parts thereof measuring distances on the body, e.g. measuring length, height or thickness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/24Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Abstract

A method of generating a three dimensional representation of a subject comprises determining that a part of the subject is located within a sample space 130 and determining if the pose of the subject is within a range of poses, the range of poses including a target pose. If the subject pose is within the range of poses, depth and image information of the subject in the sample space are captured using an RGB-D sensor 120/220 capable of capturing colour image and range information. A three dimensional representation of the subject is generated by fitting a three dimensional model for the subject in the target pose to the captured depth and image information. A location indicator may be superimposed upon an image of the sample space and displayed upon display 110 to help the subject (e.g. a person/human body) to stand in an optimum position. Alternatively, weight scale 230 provides the required subject position and additionally supplies subject weight/mass information. The display may also display a guide pose representing a desired target pose which the subject is required to adopt. By performing matching between the guide pose and the subject, a determination is made about the subject's position. If a suitable match is determined, depth and image information are acquired to enable model fitting. The method may further include receiving a user input indicative of measurements of the subject to assist with the model fitting. An apparatus and computer program for performing the method are also described. The method enables an accurate 3D model of a subject to be generated, from which direct measurements (height etc.) may be determined. Furthermore, the model may also facilitate an estimation of other measurements such as weight, fitness, gender or BMI.

Description

Methods and systems for generating a 3D representation of a subject
FIELD
Embodiments of the present invention relate generally to methods and systems for generating a three dimensional representation of a subject such as a human.
BACKGROUND
Three dimensional representations of subjects such as the human body have a wide variety of applications. These include: health, for example the monitoring of body shape and changes in body shape over time; internet shopping, where a 3D representation can be used in a 'virtual dressing room', in the production of made-to-measure clothing, or to determine the correct size of clothing for a person; and gaming and social networking, where 3D representations are used to create accurate avatars.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments will be described with reference to the drawings, in which:
Figure 1 shows an apparatus for generating a 3D representation of a subject;
Figure 2 shows an apparatus including a scale for generating a 3D representation of a subject;
Figure 3 is a flowchart showing a method of generating a 3D representation;
Figure 4 is a flowchart showing a method of generating a 3D representation;
Figure 5 is a flowchart showing a method of generating a 3D representation;
Figure 6 shows an apparatus for generating a 3D representation of a subject;
Figure 7 is a flowchart showing a method of determining a location for a subject in order to generate a 3D representation;
Figure 8 is a flowchart showing a method of generating a 3D representation of a subject;
Figure 9A shows an example of a 3D reconstruction of a subject;
Figure 9B shows an estimation of normal vectors for the scene shown in Figure 9A;
Figures 10A and 10B show a representation of a flatness term for a scene;
Figures 11A-C illustrate an example of the steps involved in determining a ground plane;
Figure 12 shows an indicator to guide a user to a location;
Figures 13A-C illustrate a set of indicators for guiding a user's feet;
Figure 14 shows an example of determining if a user is correctly located;
Figures 15A-E show an example of the steps involved in segmenting a user from the background;
Figures 16A-C show indicators used to guide the pose of a user;
Figure 17 shows an indication of the steps involved in generating a 3D representation of a subject;
Figure 18 shows an indication of the steps involved in generating a 3D representation of a subject;
Figure 19 shows a mesh which models the 3D shape of a human fitted to a subject; and
Figure 20 shows an example of a 3D representation of a human and measurements calculated from the 3D representation.
DETAILED DESCRIPTION
In an embodiment a method of generating a three dimensional representation of a subject comprises determining that a part of the subject is located within a sample space; determining if the pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, capturing depth and image information from the sample space; and generating a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
In an embodiment the method further comprises capturing an image of a sample space; and displaying a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.
In an embodiment the method further comprises determining a ground plane in the sample space and selecting a region on the ground plane as the first location.
In an embodiment displaying a location indicator superimposed on an image of the sample space comprises using depth information to generate a top down view of the first location of the sample space.
In an embodiment the method further comprises displaying a guide pose, the guide pose comprising a representation of the target pose.
In an embodiment the method further comprises determining an attribute of the subject and matching an attribute of the guide pose with the attribute of the subject.
In an embodiment determining that a part of the subject is located within a region of the sample space comprises receiving a signal from a scale located in the region of the sample space, the signal being indicative of the weight of the subject.
In an embodiment the three dimensional representation of the subject is generated by fitting the three dimensional model for the subject in the guide pose to the captured depth, the captured image information and the weight of the subject.
In an embodiment the method further comprises receiving an input indicative of a measurement of said subject, and wherein the three dimensional model is fitted to the measurement in addition to the captured depth and image information.
In an embodiment the method further comprises calculating a value for a measurement of the subject from the three dimensional representation.
In an embodiment the method further comprises displaying the three dimensional representation of the subject.
In an embodiment the method further comprises receiving an input of an alternative value to be modelled of a measurement of the subject; adjusting the three dimensional representation of the subject to the alternative value and displaying the adjusted three dimensional representation of the subject.
In an embodiment the method further comprises displaying a further three dimensional representation of the subject, the further three dimensional representation of the subject being obtained at an earlier time than the three dimensional representation of the subject.
In an embodiment generating a three dimensional representation of the subject by fitting a three dimensional model for the subject in the guide pose to the captured depth and image information comprises forcing the three dimensional model to lie within the captured data.
In an embodiment an apparatus for generating a three dimensional representation of a subject comprises a depth sensor operable to capture depth information from a sample space; an image sensor operable to capture image information from the sample space; a processor configured to determine that a part of the subject is located within the sample space; determine if the pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, cause the depth sensor and image sensor to capture depth and image information from the sample space; and generate a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
In an embodiment the apparatus further comprises: a scale operable to measure the weight of the subject, wherein the processor is configured to determine that a part of the subject is within the sample space by receiving a signal from the scale and the processor is configured to generate a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
In an embodiment the apparatus further comprises a display configured to display the three dimensional representation of the subject.
In an embodiment the display is configured to display a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.
In an embodiment the display is configured to display an indication of a guide pose, the guide pose comprising a representation of the target pose.
In an embodiment a computer readable medium carries processor executable instructions which when executed on a processor cause the processor to carry out a method of generating a three dimensional representation of a subject.
Embodiments of the present invention can be implemented either in hardware or in software on a general purpose computer. Further embodiments of the present invention can be implemented in a combination of hardware and software. Embodiments of the present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatus.
Since the embodiments of the present invention can be implemented by software, embodiments of the present invention encompass computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
Figure 1 shows an apparatus for generating a 3D representation of a subject such as a person according to an embodiment. The apparatus 100 comprises a display 110, such as a television set. An RGB-D sensor 120 is mounted on the display 110. The RGB-D sensor is capable of generating an RGB image and a depth image of the scene in front of the display 110. An area of available space 130 is located on the floor in front of the display 110.
In an embodiment, the display 110 is configured to display indications to guide a subject to the area of available space 130. Once the subject is standing in the correct location, the display is configured to display indications to guide the subject to a pose.
When the subject is in the guide pose, a 3D representation of the subject is generated using information obtained by the RGB-D sensor 120.
In an embodiment, the apparatus is configured to determine that a part of the subject is located within the sample space and to determine if the pose of the subject is within a range of poses including a target pose. In an embodiment, when the apparatus detects that the subject is within the range of poses, the method to capture a three dimensional representation of the subject is automatically initiated.
Figure 2 shows an apparatus for generating a 3D representation of a subject such as a person according to an embodiment. The apparatus 200 comprises a display 210 and a RGB-D sensor 220. The apparatus further comprises a scale 230.
Figure 3 is a flowchart showing the method carried out by the apparatus 100 described above. In step S302, the user starts the scanner and the floor is detected. If the floor is successfully detected in step S302, the method moves on to step S304. If the floor is not successfully detected in step S302, the method moves to step S303. In step S303 the display displays an indication instructing the user to clear a space on the floor and restart the scanner.
In step S304, the display shows a real-time RGB image of the floor obtained by the RGB-D sensor, with two virtual shoes superimposed on the image in the available space on the floor.
In step S305, the user steps into the correct location using the shoes as a guide, and it is checked whether the user is correctly located. If the user is not correctly located, then in step S306 the method returns to step S305 and the user must step more precisely into the indicated location. If the location of the user closely matches the correct location, then the method moves to step S307.
In step S307, a silhouette guide appears on the display. The user then mimics the guide silhouette in step S308. It is checked in step S308 whether the user's pose closely matches the required pose. If the user's pose does not closely match the guide pose, then in step S309 the method returns to step S308.
If the user's pose closely matches the guide pose, then in step S310 the RGB-D scanner captures data in a scan which takes approximately 10 seconds. In step S311 a 3D representation of the user is generated and output.
Figure 4 shows a flowchart illustrating the steps carried out by the apparatus 100 without a scale or the apparatus 200 with a scale. The steps S302 to S311 carried out by the apparatus 100 without a scale are as described above in relation to Figure 3.
When using the apparatus 200 with a scale, the user steps onto the foot track of the scale in step S402. Once the user has stepped onto the foot track, the display displays a silhouette to guide the user to the correct pose as in step S307 described above. The remaining steps of the method are as described above in relation to Figure 3.
Processes described allow a low dimensional human model to be registered with the depth-map measured by a depth sensor. The method is lightweight and can easily be installed at home in a living room or in a doctor's office, for instance. The 3D human shape of a given person is inferred using the front view only; the back view is estimated. Additional constraints such as waist size or weight can be added for increased accuracy.
Figure 5 shows a flow chart illustrating the main steps involved in generating a 3D representation of a human subject according to an embodiment.
In an offline step S502, a principal component analysis model of a 3D human shape is learnt from a database of human shapes. In the later steps, this model is used to generate a 3D representation of a human subject. In step S504, a segmented depth map is received and from the depth map, the PCA parameters of the human subject scanned are determined. In step S506, further measurements may be given to improve accuracy.
Figure 6 shows a block diagram of an apparatus for generating a 3D representation of a subject according to an embodiment. The apparatus 600 includes a measurement device 610. The measurement device 610 has an RGB capture device 612, such as a digital camera, and a depth sensor 614. The depth sensor 614 or the measurement device 610 may include a PrimeSense PS1080 sensor. The apparatus further comprises a processor 620 and a memory 630. The processor 620 executes a program stored in the memory to process the data captured by the measurement device 610.
The apparatus 600 includes a display 640 which displays images to guide the subject.
The apparatus also includes a network interface 660 and a user interface 650.
Figure 7 is a flowchart illustrating a method of calibrating a 3D data processing apparatus such as that shown in Figure 6. The method shown in Figure 7 involves determining a suitable location for the user to be guided to in order to capture RGB-D information for use in calculating a 3-D representation of the user.
Once the method shown in Figure 7 has taken place, the subject is guided to the location and the subject is guided to a suitable pose for generation of a 3D representation.
In step S702 up-vectors are detected in an RGB-D image from the measurement device 610. An estimation of the vertical direction is computed using the so-called Manhattan assumption. In the Manhattan assumption, the world is assumed to consist of vertical and horizontal planes.
In step S704 the ground plane is detected: the floor is detected and segmented.
In step S706 a suitable position for the foot location is determined. This involves finding a proper place where the user can stand and perform the standard pose. This location is shown on top of the RGB image using augmented-reality rendering.
Figure 8 shows a flowchart illustrating the steps involved in calculating a 3D representation of a subject.
In step S802, the feet of the subject are detected and it is determined whether the subject is correctly located in the sample space.
If the subject is correctly located, the method moves to step S804. In step S804, the subject is segmented: the depth map captured by the depth sensor is used to segment the subject from the background.
In step S806, a shape guide is shown on the display to guide the subject to a pose. A guide is shown on the screen, indicating a specific pose that the user must mimic. To ease the process, the guides roughly adapt to the size of the current user.
Once the subject is in the correct pose, a 3D scan of the person is computed in step S808.
The method of Figure 7 will now be described in more detail with reference to Figures 9 to 12.
Figure 9A shows a 3D reconstruction from a calibrated depth sensor. For the depth sensor, the focal length and other optical parameters of the sensor are known. For example, the following parameters may be used for the camera model: focal length: the focal length in pixels, principal point: the principal point coordinates, and distortions: the image distortion coefficients (radial and tangential distortions).
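As an illustration of how these intrinsic parameters are used, the following sketch back-projects a calibrated depth map into a 3D point cloud in camera space using the standard pinhole model; the function and parameter names are illustrative, and lens distortion is assumed to have already been removed.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a calibrated depth map (in metres) to an HxWx3 point
    cloud in camera space using the pinhole model; distortion is assumed
    to have been corrected beforehand."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])
```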
As mentioned above, in step S702 up-vectors are computed using the Manhattan assumption. The up-vector is the direction which is either perpendicular or parallel to the flat surfaces in the scene. To achieve this, normals are first computed over the depth-map given by the depth sensor. This is done by fitting a local plane using principal component analysis (PCA) for each pixel neighbourhood on a blurred depth map.
Figure 9B shows an estimation of the normal vectors for the image in Figure 9A. In the calculation of the normal vectors shown in Figure 9B pixel neighbourhoods having a radius of 5 pixels were used and the depth map was blurred using a value of 4 for sigma. The eigen-vector corresponding to the smallest eigen-value is considered to be the normal to the surface.
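A minimal sketch of this per-pixel normal estimation, assuming a point cloud obtained from the depth map (for example via the back-projection sketch above); the 5-pixel radius and the blur sigma of 4 follow the values quoted in the text, while the loop-based implementation is illustrative rather than optimised.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_normals(points, radius=5, sigma=4.0):
    """Fit a local plane by PCA in each pixel neighbourhood of a blurred
    point cloud; the eigenvector with the smallest eigenvalue is the normal."""
    blurred = np.dstack([gaussian_filter(points[..., c], sigma) for c in range(3)])
    h, w, _ = blurred.shape
    normals = np.zeros_like(blurred)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nb = blurred[y - radius:y + radius + 1,
                         x - radius:x + radius + 1].reshape(-1, 3)
            nb = nb - nb.mean(axis=0)
            _, _, vt = np.linalg.svd(nb, full_matrices=False)
            n = vt[-1]                              # direction of smallest variance
            normals[y, x] = -n if n[2] > 0 else n   # orient towards the camera
    return normals
```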
Using planes defined by these local normals, a flatness value is calculated which estimates how flat the surface is at each pixel of the blurred depth map. Figure 10A shows an RGB image of a scene and Figure 10B shows the calculated flatness value for the depth image of the scene. In Figure 10B, a high flatness term is illustrated by the light areas and a lower flatness term is illustrated by the darker areas. As shown in Figure 10B, the higher the projection of neighbouring points onto the fitted plane, the smaller the flatness.
The up-vector is found by minimising
E_up(u) = Σ_i f_i · (1 − cos(4·angle(u, n_i))) / 2
where n_i is the normal at pixel i and f_i is the flatness at pixel i. Using u·n_i = cos(angle(u, n_i)) and other basic trigonometric identities, an optimised expression is found to be
E_up(u) = Σ_i f_i · (1 − (2(u·n_i)² − 1)²).
This is solved using the Levenberg-Marquardt algorithm. So that a good local minimum is found, the up-vector in camera space is used as the initial solution. This assumes that the camera is correctly orientated with respect to the surroundings.
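The following sketch estimates the up-vector with a Levenberg-Marquardt solver, assuming the flatness-weighted cos(4·angle) energy as reconstructed above; the azimuth/elevation parameterisation and the flatness threshold are illustrative choices.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_up_vector(normals, flatness, u_init=(0.0, -1.0, 0.0)):
    """Minimise sum_i f_i * (1 - cos(4*angle(u, n_i))) over unit vectors u,
    starting from the camera-space up-vector u_init."""
    n = normals.reshape(-1, 3)
    f = flatness.reshape(-1)
    keep = f > 1e-3                       # ignore clearly non-flat pixels (illustrative threshold)
    n, f = n[keep], f[keep]

    def to_unit(ang):                     # azimuth/elevation parameterisation of u
        az, el = ang
        return np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])

    def residuals(ang):
        c = n @ to_unit(ang)              # cos(angle(u, n_i))
        # residual^2 is proportional to f_i * (1 - cos(4*angle)), small at 0/90/180 degrees
        return np.sqrt(f) * 2.0 * np.abs(c) * np.sqrt(np.clip(1.0 - c * c, 0.0, None))

    u0 = np.asarray(u_init, dtype=float)
    x0 = [np.arctan2(u0[0], u0[2]), np.arcsin(np.clip(u0[1], -1.0, 1.0))]
    sol = least_squares(residuals, x0=x0, method='lm')
    return to_unit(sol.x)
```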
In step S704, the ground plane is found by fitting a plane to the depth-map using iterative closest point (ICP). The cost function has multiple local minima, and the global minimum is not necessarily the correct one. In order to find the correct plane, an iterative solver is used and it is given a proper initial solution. This initial solution is given by computing the histogram of point heights along the up-vector previously computed. The initial ground plane is perpendicular to the up-vector and its height is at the lowest maximum of the height histogram. The ground plane is then found by minimising the expression
E_gp(r_x, r_y, t_z) = Σ_i K_gp( proj_z( M(r_x, r_y, t_z) · p_i ) )
where p_i is the 3D point in camera space and proj_z(p) returns the height of the point p. M(r_x, r_y, t_z) is the affine transform between camera space and the ground space; it is expressed relative to the initial solution, r_x and r_y are the pitch and roll of the rotation matrix respectively and t_z is the height component along the up-vector. The kernel is chosen to be robust and smooth:
K_gp(h) = K_smooth(h / σ_gp)
with
K_smooth(x) = 2x² if |x| < 1/2,
K_smooth(x) = 1 − 2(|x| − 1)² if 1/2 ≤ |x| < 1,
K_smooth(x) = 1 if 1 ≤ |x|.
The cost function E_gp is minimised using the Levenberg-Marquardt algorithm in three steps, using three different σ_gp, namely 0.2, 0.1 and 0.05. This multi-resolution approach ensures both that the final result is accurate, that is that outliers do not affect the result too much, and that the final result is the desired one, that is the desired local minimum.
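A sketch of the initialisation described above: the heights of the 3D points along the estimated up-vector are histogrammed and the lowest local maximum is taken as the initial floor height. The bin count and the peak test are illustrative; the subsequent robust ICP refinement with the K_smooth kernel is not shown.

```python
import numpy as np

def initial_ground_height(points, up, n_bins=200):
    """Histogram point heights along the up-vector and return the height of
    the lowest well-populated local maximum (assumed to be the floor)."""
    heights = points.reshape(-1, 3) @ np.asarray(up, dtype=float)
    hist, edges = np.histogram(heights, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    peaks = [i for i in range(1, n_bins - 1)
             if hist[i] > 0 and hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    return centres[min(peaks)] if peaks else centres[int(np.argmax(hist))]
```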
An example of ground plane detection is shown in Figures 11A, 11B and 11C. Figure 11A shows an RGB image of a scene, Figure 11B shows the final cost, with black being zero and white being one. Figure 11C shows the fitted plane 1110.
In step S706 a suitable position for the subject during the scan is determined. A good place for the user to be during the scan is (i) close to the camera, so that the depth precision is higher; and (ii) at a place where the ground is uncluttered.
This optimal place is found very simply by accumulating the points belonging to the ground in two histograms: 1. Width histogram: the x-coordinates are histogrammed and the median is chosen to be the x-coordinate of the centre.
2. Depth histogram: the z-coordinates are histogrammed and the closest possible depth is chosen to be the z-coordinate of the centre.
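A sketch of this selection, assuming the points already labelled as ground are available in camera coordinates; the bin count and the population threshold are illustrative.

```python
import numpy as np

def choose_foot_location(ground_points, n_bins=100, min_count=50):
    """Median x of the ground points gives the lateral centre; the nearest
    well-populated depth bin gives the z-coordinate of the standing spot."""
    xs, zs = ground_points[:, 0], ground_points[:, 2]
    x_centre = float(np.median(xs))
    hist, edges = np.histogram(zs, bins=n_bins)
    populated = np.nonzero(hist >= min_count)[0]
    z_centre = float(edges[populated[0] + 1]) if populated.size else float(np.median(zs))
    return x_centre, z_centre
```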
The foot location is rendered using augmented reality, that is, superimposed on the RGB image. A foot location is shown in Figure 12: the foot location 1210 is rendered as a circle 1212 on the ground plane enclosing two rendered shoes 1214 and 1216.
After the foot location 1210 is displayed, the user is asked to walk to the indicated location and perform a standard pose.
To guide the user toward the correct position, two shoes 1214 and 1216 are rendered on top of the RGB image, and a green circle 1212 is shown around them. The inventors have noted that users sometimes have difficulty aligning their shoes with the virtual shoes 1214 and 1216. To assist the user, a view from above is generated using the depth map, as shown in Figures 13A, 13B and 13C. As shown in Figure 13A, the location is displayed in a top-down view as two footprints 1312 and 1314 located within a circle. When the subject's feet are incorrectly located, as is the case in the view shown in Figure 13A, both of the footprints are shown in red. As shown in Figure 13B, when one of the subject's feet is correctly located, the colour of the footprint 1314 corresponding to that foot changes to green to indicate to the subject that one foot is correctly located. When both feet are correctly located, as shown in Figure 13C, both of the footprints are coloured green to indicate that both of the subject's feet are correctly located.
Detecting that the subject's feet are in the correct location is done by looking at the number of points within a bounding box located on top of each shoe. This is shown in Figure 14, which shows a subject 1410 located within the circle 1412.
Bounding boxes 1420 and 1422 are located above the locations of the shoes shown in Figure 12. To determine if a foot is present, for each bounding box a ratio is calculated of the number of depth-map pixels that fall inside the bounding box to the number of depth-map pixels on the front face of the bounding box that are not located within the bounding box. In the example shown in Figure 14, a ratio of 0.3 is used as the threshold to determine that a foot is present. When both bounding boxes are occupied, it is assumed that a person is detected.
In an alternative embodiment, to determine if a foot is present, the sum of the number of pixels of the depth map which falls into the corresponding bounding box is computed. If this sum is higher than a given threshold, it is assumed that the foot is present.
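A sketch of the simpler alternative test described in the last paragraph: count the depth-map points falling inside the bounding box above each rendered shoe and compare against a threshold. The threshold value is illustrative; the ratio-based test of the main embodiment would instead divide by the number of pixels covering the front face of the box.

```python
import numpy as np

def foot_present(points, box_min, box_max, min_points=500):
    """Return True if enough 3D points fall inside the axis-aligned bounding
    box (box_min, box_max) placed above one of the rendered shoes."""
    pts = points.reshape(-1, 3)
    inside = np.all((pts >= box_min) & (pts <= box_max), axis=1)
    return int(inside.sum()) >= min_points

# Both boxes occupied -> a person is assumed to be standing on the target:
# detected = foot_present(pts, left_min, left_max) and foot_present(pts, right_min, right_max)
```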
Once a person is detected as the subject, the method moves to step S804, in which the body of the subject is segmented from the background.
The user's silhouette is segmented by modelling the image using a grid-based Markov random field (MRF) with two labels, foreground and background. The MRF is solved using graph-cut.
The unary terms are:
U(i) = U_floor(i) · U_bbox(i)
where i is a pixel (pointing to a colour and a depth), U_floor is for selecting out the floor (forcing a zero probability at ground level) and U_bbox is one if the 3D point belongs to a bounding box supposed to surround the whole body.
The binary terms are:
B(i,j) = B_depth(i,j) · B_rgb(i,j)
where B_rgb is a smoothness constraint saying a boundary is likely if there is an edge in the RGB image, and B_depth is a smoothness constraint saying a boundary is likely if there is an edge in the depth image.
The graph-cut solver does not enforce connectedness, so the result is taken as the biggest connected region. The segmentation is computed at an interactive frame rate, and the results are robust and precise enough for the present purpose.
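The sketch below builds illustrative versions of the unary and pairwise terms described above using only NumPy; the weightings and tolerances are assumptions, and the resulting grid MRF would then be solved with an off-the-shelf graph-cut library before keeping the largest connected foreground component.

```python
import numpy as np

def segmentation_terms(depth, rgb, points, up, ground_height, body_box):
    """Two-label MRF terms: unary foreground preference (zero at ground level,
    one inside a body bounding box) and pairwise costs that are low across
    RGB or depth edges. Scales and tolerances are illustrative."""
    heights = points @ np.asarray(up, dtype=float)                  # HxW heights along the up-vector
    above_floor = (heights > ground_height + 0.02).astype(float)    # 2 cm tolerance above the floor
    in_box = np.all((points >= body_box[0]) & (points <= body_box[1]), axis=-1)
    unary_fg = above_floor * in_box.astype(float)                   # analogue of U(i) = U_floor(i) * U_bbox(i)

    def edge_strength(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    b_rgb = np.exp(-edge_strength(rgb.mean(axis=-1)) / 20.0)        # cheap to cut at colour edges
    b_depth = np.exp(-edge_strength(depth) / 0.05)                  # cheap to cut at depth edges
    pairwise = b_rgb * b_depth                                      # analogue of B(i,j) on the pixel grid
    return unary_fg, pairwise
```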
Figures 15A-E show an example of the segmentation of a subject. Figure 15A shows an RGB image and Figure 15B shows the corresponding depth image. Figure 15C shows the unary term and Figure 15D shows the binary term. Finally, Figure 15E shows the segmented subject.
Once the subject has been detected and segmented, the subject is guided to a pose for the scan. In order to guide the subject to the correct pose, a semi-transparent guide is superimposed on top of the subject; a red colour indicates that the user is not performing the correct pose, and green indicates that the pose is close enough to proceed.
The display of a guide pose is shown in Figures 16A-C. Figure 16A shows a display analogous to that shown in Figure 12. In order to guide the subject to stand in the correct position, an indicator 1610 is displayed. The indicator 1610 is displayed as a circle 1612 located on the ground plane. Two foot or shoe indicators 1614 and 1616 are located within the circle 1612 to indicate to the subject the correct position to stand.
As shown in Figure 16B, when the subject's feet are determined to be located in the correct regions, a pose indicator 1620 is displayed. As shown in Figure 16B, to indicate that the subject 1630 is not standing in the correct pose, the pose indicator is displayed in red.
The subject then adjusts their pose to mimic the pose indicator 1620. When the subject is determined to be in the correct pose, as shown in Figure 16C, the pose indicator is displayed in green to indicate that the subject's pose is correct.
The inventors have found that displaying a generic template which does not take into account the variations in height between subjects caused difficulties for subjects in mimicking the pose. It was noted that some users tend to try to compensate by raising or lowering their arms. Therefore, in order to obtain a good angle between the arms and the torso, an approximately matching guide was found to be a better option for guiding a subject's pose. To generate a height-matched guide, a database of guides or silhouettes was generated by randomly sampling the human model. In order to keep this fast and allow an interactive frame rate, the images in the database are small, for example an image size of 20x10 pixels. The images are blurred and sub-sampled before matching, and the L2 norm is used. When the pose performed by the user is sufficiently close to the guide, the body scan is performed.
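A sketch of the silhouette matching step, assuming binary masks for the user's segmentation and for each guide in the database; the 20x10 target size follows the text, while the blur sigma is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def best_matching_guide(user_mask, guide_masks, size=(10, 20), sigma=1.0):
    """Blur and sub-sample each silhouette to a small common size, then pick
    the guide with the smallest L2 distance to the user's silhouette."""
    def prep(mask):
        m = gaussian_filter(mask.astype(float), sigma)
        return zoom(m, (size[0] / m.shape[0], size[1] / m.shape[1]), order=1)
    u = prep(user_mask)
    dists = [float(np.linalg.norm(u - prep(g))) for g in guide_masks]
    best = int(np.argmin(dists))
    return best, dists[best]
```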
In the example described above, the guide pose displayed to the subject is a silhouette. In embodiments the guide pose may take the form of a stick figure, or a stylized or realistic depiction of the human shape. Further, in the example described above, the height of the guide pose is matched to the height of the subject; in embodiments different attributes of the guide pose may be matched to the subject. For example, the whole shape of the subject is matched in an embodiment. In embodiments several different metrics may be used to match the guide pose to the subject, for example silhouette overlap and/or difference from keypoints.
To generate a 3D representation of the subject, the human shape is modelled as a principal component analysis (PCA) representation of a mesh. The input parameters are the PCA coefficients. The mesh is fitted to the depth-map by minimising the following function:
E_registration = E_depth + E_silhouette + E_inside + E_pca + E_extra
where:
E_depth is the depth difference between the model and the data;
E_silhouette is the difference between the silhouette of the model and the data;
E_inside is a term included to make sure that the model lies inside the data;
E_pca is a term that keeps the model shape close to an average human shape by keeping the PCA coefficients near zero;
E_extra allows additional constraints to be imposed, like weight or waist size.
The input parameters of E_registration are:
pca_j, the j-th PCA component. In an example, 50 PCA coefficients were used.
R, t, which define the pose of the human model. There are 5 degrees of freedom as the feet are constrained to touch the ground. R and t define the pose (i.e. position) of the shape model relative to the shoes. R and t define rigid transformations in three dimensions; R is the rotation component and t is the translation component.
It is noted that the shoes give an approximate initial pose. The minimisation of E_registration will result in a more accurate pose. Note that R and t influence the rendering of the shape model, which impacts the cost components E_depth and E_silhouette.
The term E_inside is included for the following two reasons. In an embodiment, a PrimeSense depth sensor was used; such a depth sensor has a tendency to inflate objects at the front. Secondly, to give an estimate of the naked shape, clothes must be ignored. Forcing the model to lie within the observed data improves the estimate of the naked shape.
In more detail, the terms in the cost function are calculated as follows:
E_depth = w_depth · Σ_i sM_i · sD_i · (dM_i − dD_i)²
E_silhouette = w_silhouette · Σ_i (sM_i − sD_i)²
E_inside = w_inside · Σ_i sM_i · (1 − sD_i)
E_pca = w_pca · Σ_j (pca_j)²
E_extra = w_extra · Σ_k (desired_k − current_k)²
where:
dM_i is the model depth-map at pixel i,
dD_i is the data depth-map at pixel i,
sM_i is the foreground mask of the model segmentation, and sD_i is the foreground mask of the data segmentation, at pixel i:
sD_i == 1 corresponds to foreground, sD_i == 0 corresponds to background,
sM_i == 1 corresponds to foreground, sM_i == 0 corresponds to background,
desired_k is a desired additional measure (for instance a hips circumference of 80 cm),
current_k is the model's current additional measure, and
the weights w are coefficients set so that all the different energy terms are within the same order of magnitude and have a chance to compete with each other.
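For illustration, the reconstructed terms above can be evaluated as follows; the rendering that produces the model depth map dM and model mask sM, and the weight values in w, are assumed to exist elsewhere, and the exact masking of the depth term is an assumption.

```python
import numpy as np

def registration_cost(dM, dD, sM, sD, pca, desired, current, w):
    """Evaluate the registration energy from the reconstructed terms; all
    inputs are NumPy arrays and w is a dict of illustrative weights."""
    e_depth = w['depth'] * np.sum(sM * sD * (dM - dD) ** 2)   # depths compared where both are foreground
    e_sil = w['sil'] * np.sum((sM - sD) ** 2)                  # silhouette disagreement
    e_inside = w['inside'] * np.sum(sM * (1.0 - sD))           # model sticking outside the data
    e_pca = w['pca'] * np.sum(np.asarray(pca) ** 2)            # stay close to the mean shape
    e_extra = w['extra'] * np.sum((np.asarray(desired) - np.asarray(current)) ** 2)
    return e_depth + e_sil + e_inside + e_pca + e_extra
```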
The cost E_registration is minimised using the Levenberg-Marquardt algorithm in a multiresolution framework. At each step, the image resolution increases, the blur decreases and the number of PCA coefficients used increases. This multiresolution framework is useful for (i) speeding up the convergence time and (ii) increasing the chance of finding the global minimum.
It was also found that jittering the solution helped to prevent the solution from falling into a local minimum: at each step, the current solution was randomly disturbed.
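A sketch of the coarse-to-fine loop with jitter; the per-level settings and the jitter scale are assumptions, and solve_lm stands in for one Levenberg-Marquardt run of the registration described above.

```python
import numpy as np

def multiresolution_fit(solve_lm, params0,
                        levels=({'scale': 0.25, 'blur': 4.0, 'n_pca': 10},
                                {'scale': 0.5,  'blur': 2.0, 'n_pca': 25},
                                {'scale': 1.0,  'blur': 0.0, 'n_pca': 50}),
                        jitter=0.01, rng=np.random.default_rng(0)):
    """Run the fit at increasing resolution, decreasing blur and an increasing
    number of PCA coefficients, randomly disturbing the solution between levels."""
    params = np.asarray(params0, dtype=float)
    for i, level in enumerate(levels):
        if i > 0:
            params = params + rng.normal(scale=jitter, size=params.shape)  # jitter to escape local minima
        params = solve_lm(params, **level)
    return params
```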
Figures 17A-C show the different steps of the multiresolution process. Figure 17A shows the first stage, Figure 17B shows the middle stage and Figure 17C shows the final stage. In Figures 17A-C the cyan, magenta and yellow overlays indicate different terms of the cost function. Figures 18A-C show the registration; the mesh is shown along with the 3D points of the depth sensor. Figure 18A shows the first fitting stage, Figure 18B shows the middle fitting stage and Figure 18C shows the final fitting stage.
Figure 19 shows a mesh fitted to the 3D points of the depth sensor.
Additional constraints can be added to the model to improve the accuracy. The estimation may be inaccurate for the following reasons: the depth sensor may be quite noisy; the naked body is covered by clothes; and the depth sensor can only see the front view. In order to compensate for this, it is possible to enter additional constraints into the solver, for example weight or waist size.
The waist size can be directly measured from the registered mesh. Constraints like the weight require the use of a regressor.
Once the mesh has been registered, length measurements can be directly computed.
Various paths along the mesh have been parameterized on the original template, allowing them to be computed again once the mesh has been fitted.
While certain measurements are directly estimable based on the appearance of the human shape, other measurements are not. Examples of such measurements are weight, fitness, gender, and BMI. In this case, machine learning was used to infer the measurements. Using the CAESAR database, in which body shapes and various body measurements of over 4,000 people are known, a regressor or a classifier was trained to estimate each measurement based on the PCA coefficients of the human shape.
For regression-type measurements (e.g. weight, fitness, BMI), a linear regression model was used:
y_i = w_i·x + b_i
where y_i is the i-th measurement, x is the vector of PCA coefficients, and w_i and b_i are the model parameters. The model parameters were learned using Support Vector Regression (SVR).
Similarly, for a classification-type measurement like gender, a linear classification model was used:
c_i = sign(w_i·x + b_i)
where c_i is the i-th measurement, x is the vector of PCA coefficients, and w_i and b_i are the model parameters. Since c_1 represents gender, c_1 = 1 means the human is male and c_1 = -1 means the human is female. The model parameters were learned using Support Vector Machines (SVM).
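The following sketch mirrors the training described above using scikit-learn's support vector implementations; the database arrays, feature scaling and model selection are omitted, so it is an outline under stated assumptions rather than the authors' pipeline.

```python
import numpy as np
from sklearn.svm import SVR, LinearSVC

def train_measurement_models(pca_coeffs, weights, genders):
    """pca_coeffs: (n_people, n_pca) shape coefficients; weights: (n_people,)
    known weights; genders: (n_people,) labels, +1 male / -1 female."""
    weight_model = SVR(kernel='linear').fit(pca_coeffs, weights)   # regression y = w.x + b via SVR
    gender_model = LinearSVC().fit(pca_coeffs, genders)            # classifier c = sign(w.x + b)
    return weight_model, gender_model

def infer_measurements(weight_model, gender_model, x):
    """Predict weight and gender for one set of fitted PCA coefficients."""
    x = np.asarray(x, dtype=float).reshape(1, -1)
    return float(weight_model.predict(x)[0]), int(gender_model.predict(x)[0])
```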
Figure 20 shows an example of a mesh fitted to a human subject and direct and inferred measurements. As shown in Figure 20, direct measurements 2010 include height, chest, waist, neck, hips, sleeve, cuff and shoulder measurements. Inferred measurements 2020 include weight, an indication of fitness, gender and body mass index (BMI). Figure 20 also shows a 3D representation 2030 of the human subject.
The body scanner and methods described above have been found to allow a 3D representation of a human subject to be calculated in approximately 30 seconds from data obtained in a single scan once the subject has been guided to the correct pose.
The embodiments described above have applications in health monitoring, shopping and gaming. The systems described above could be used to perform measurements of a person to monitor their health, to allow a person to determine their measurements when ordering clothing online, or to create a realistic avatar for video games.
In an embodiment, a user may input a target weight or other measurement and the system may display an estimate of the user's body shape when the user gains or loses weight.
In an embodiment this is done using the cost component E_extra. E_extra computes the difference between the desired weight given by the user and the weight estimated from the current body shape. Minimising E_extra via the minimisation of E_registration then makes the estimated weight of the resulting body shape close to the desired weight given by the user.
In an embodiment, the system may display a series of three dimensional representations to allow visualisation of changes in a user's body shape over time. For example, the system may display morphing between body shapes or superimpose the body shapes.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (20)

  1. CLAIMS: 1. A method of generating a three dimensional representation of a subject, the method comprising: determining that a part of the subject is located within a sample space; determining if the pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, capturing depth and image information from the sample space; and generating a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
  2. 2. A method according to claim 1, further comprising capturing an image of a sample space; and displaying a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.
  3. 3. A method according to claim 2, further comprising determining a ground plane in the sample space and selecting a region on the ground plane as the first location.
  4. 4. A method according to claim 2, wherein displaying a location indicator superimposed on an image of the sample space comprises using depth information to generate a top down view of the first location of the sample space.
  5. 5. A method according to claim 1 further comprising displaying a guide pose, the guide pose comprising a representation of the target pose.
  6. 6. A method according to claim 5, further comprising determining an attribute of the subject and matching an attribute of the guide pose with the attribute of the subject.
  7. 7. A method according to claim 1, wherein determining that a part of the subject is located within a region of the sample space comprises receiving a signal from a scale located in the region of the sample space, the signal being indicative of the weight of the subject.
  8. 8. A method according to claim 7, wherein the three dimensional representation of the subject is generated by fitting the three dimensional model for the subject in the guide pose to the captured depth, the captured image information and the weight of the subject.
  9. 9. A method according to claim 1, further comprising receiving an input indicative of a measurement of said subject, and wherein the three dimensional model is fitted to the measurement in addition to the captured depth and image information.
  10. 10. A method according to claim 1, further comprising calculating a value for a measurement of the subject from the three dimensional representation.
  11. 11. A method according to claim 1 further comprising displaying the three dimensional representation of the subject.
  12. 12. A method according to claim 11, further comprising receiving an input of an alternative value to be modelled of a measurement of the subject; adjusting the three dimensional representation of the subject to the alternative value and displaying the adjusted three dimensional representation of the subject.
  13. 13. A method according to claim 11, further comprising displaying a further three dimensional representation of the subject, the further three dimensional representation of the subject being obtained at an earlier time than the three dimensional representation of the subject.
  14. 14. A method according to claim 1, wherein generating a three dimensional representation of the subject by fitting a three dimensional model for the subject in the guide pose to the captured depth and image information comprises forcing the three dimensional model to lie within the captured data.
  15. 15. An apparatus for generating a three dimensional representation of a subject, the apparatus comprising: a depth sensor operable to capture depth information from a sample space; an image sensor operable to capture image information from the sample space; a processor configured to determine that a part of the subject is located within the sample space; determine if the pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, cause the depth sensor and image sensor to capture depth and image information from the sample space; and generate a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
  16. 16. An apparatus according to claim 15, further comprising: a scale operable to measure the weight of the subject, wherein the processor is configured to determine that a part of the subject is within the sample space by receiving a signal from the scale and the processor is configured to generate a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.
  17. 17. An apparatus according to claim 15, further comprising a display configured to display the three dimensional representation of the subject.
  18. 18. An apparatus according to claim 17, wherein the display is configured to display a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.
  19. 19. An apparatus according to claim 17, wherein the display is configured to display an indication of a guide pose, the guide pose comprising a representation of the target pose.
  20. 20. A computer readable medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.

Amendments to the claims have been filed as follows

CLAIMS:

1. A method of generating a three dimensional representation of a body shape of a human subject, the method comprising: determining that a part of the subject is located within a sample space; determining if a pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, capturing depth and image information from the sample space; and generating a three dimensional representation of the body shape of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.

2. A method according to claim 1, further comprising: capturing an image of a sample space; and displaying a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.

3. A method according to claim 2, further comprising determining a ground plane in the sample space and selecting a region on the ground plane as the first location.

4. A method according to claim 2, wherein displaying a location indicator superimposed on an image of the sample space comprises using depth information to generate a top down view of the first location of the sample space.

5. A method according to claim 1, further comprising displaying a guide pose, the guide pose comprising a representation of the target pose.

6. A method according to claim 5, further comprising determining an attribute of the subject and matching an attribute of the guide pose with the attribute of the subject.

7. A method according to claim 1, wherein determining that a part of the subject is located within a region of the sample space comprises receiving a signal from a scale located in the region of the sample space, the signal being indicative of the weight of the subject.

8. A method according to claim 7, wherein the three dimensional representation of the body shape of the subject is generated by fitting the three dimensional model for the subject in the target pose to the captured depth information, the captured image information and the weight of the subject.

9. A method according to claim 1, further comprising receiving an input indicative of a measurement of said subject, and wherein the three dimensional model is fitted to the measurement in addition to the captured depth and image information.

10. A method according to claim 1, further comprising calculating a value for a measurement of the subject from the three dimensional representation.

11. A method according to claim 1, further comprising displaying the three dimensional representation of the subject.

12. A method according to claim 11, further comprising receiving an input of an alternative value to be modelled of a measurement of the subject; adjusting the three dimensional representation of the subject to the alternative value and displaying the adjusted three dimensional representation of the subject.

13. A method according to claim 11, further comprising displaying a further three dimensional representation of the subject, the further three dimensional representation of the subject being obtained at an earlier time than the three dimensional representation of the subject.

14. A method according to claim 1, wherein generating a three dimensional representation of the subject by fitting a three dimensional model for the subject in the guide pose to the captured depth and image information comprises forcing the three dimensional model to lie within the captured data.

15. An apparatus for generating a three dimensional representation of a body shape of a human subject, the apparatus comprising: a depth sensor operable to capture depth information from a sample space; an image sensor operable to capture image information from the sample space; a processor configured to determine that a part of the subject is located within the sample space; determine if a pose of the subject is within a range of poses, the range of poses including a target pose; if the pose of the subject is within the range of poses, cause the depth sensor and image sensor to capture depth and image information from the sample space; and generate a three dimensional representation of the body shape of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.

16. An apparatus according to claim 15, further comprising: a scale operable to measure the weight of the subject, wherein the processor is configured to determine that a part of the subject is within the sample space by receiving a signal from the scale and the processor is configured to generate a three dimensional representation of the subject by fitting a three dimensional model for the subject in the target pose to the captured depth and image information.

17. An apparatus according to claim 15, further comprising a display configured to display the three dimensional representation of the subject.

18. An apparatus according to claim 17, wherein the display is configured to display a location indicator superimposed on an image of the sample space, the location indicator comprising an indication of a first location within the region of the sample space for a first part of the subject.

19. An apparatus according to claim 17, wherein the display is configured to display an indication of a guide pose, the guide pose comprising a representation of the target pose.

20. A computer readable medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.
GB1214042.2A 2012-08-07 2012-08-07 Methods and systems for generating a 3D representation of a subject Active GB2504711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1214042.2A GB2504711B (en) 2012-08-07 2012-08-07 Methods and systems for generating a 3D representation of a subject

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1214042.2A GB2504711B (en) 2012-08-07 2012-08-07 Methods and systems for generating a 3D representation of a subject

Publications (3)

Publication Number Publication Date
GB201214042D0 GB201214042D0 (en) 2012-09-19
GB2504711A true GB2504711A (en) 2014-02-12
GB2504711B GB2504711B (en) 2015-06-03

Family

ID=46934992

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1214042.2A Active GB2504711B (en) 2012-08-07 2012-08-07 Methods and systems for generating a 3D representation of a subject

Country Status (1)

Country Link
GB (1) GB2504711B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2518931A (en) * 2013-10-07 2015-04-08 Isizeme Ltd Method for generating body measurement data of a user and system for selecting a set of articles of clothing for a user
WO2015193628A1 (en) * 2014-06-19 2015-12-23 Toshiba Research Europe Limited Methods and systems for generating a three dimensional representation of a human body shape
WO2016073841A1 (en) * 2014-11-06 2016-05-12 Siemens Medical Solutions Usa, Inc. Scan data retrieval with depth sensor data
CN107374638A (en) * 2017-07-07 2017-11-24 华南理工大学 A kind of height measuring system and method based on binocular vision module
GB2542114B (en) * 2015-09-03 2018-06-27 Heartfelt Tech Limited Method and apparatus for determining volumetric data of a predetermined anatomical feature
WO2020205979A1 (en) 2019-04-01 2020-10-08 Jeff Chen User-guidance system based on augmented-reality and/or posture-detection techniques

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018011334A1 (en) 2016-07-13 2018-01-18 Naked Labs Austria Gmbh Optical marker to adjust the turntable of a 3d body scanner
CN110930344B (en) * 2018-08-29 2023-05-05 杭州海康威视数字技术股份有限公司 Target quality determination method, device and system and electronic equipment
CN109241934A (en) * 2018-09-21 2019-01-18 北京字节跳动网络技术有限公司 Method and apparatus for generating information
KR102132721B1 (en) * 2019-01-03 2020-07-10 (주) 아이딕션 Method, server and program of acquiring image for measuring a body size and a method for measuring a body size using the same
CN112488918A (en) * 2020-11-27 2021-03-12 叠境数字科技(上海)有限公司 Image interpolation method and device based on RGB-D image and multi-camera system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080152191A1 (en) * 2006-12-21 2008-06-26 Honda Motor Co., Ltd. Human Pose Estimation and Tracking Using Label Assignment
US20100111370A1 (en) * 2008-08-15 2010-05-06 Black Michael J Method and apparatus for estimating body shape
US20100277571A1 (en) * 2009-04-30 2010-11-04 Bugao Xu Body Surface Imaging
US20110025834A1 (en) * 2009-07-31 2011-02-03 Samsung Electronics Co., Ltd. Method and apparatus of identifying human body posture


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2518931A (en) * 2013-10-07 2015-04-08 Isizeme Ltd Method for generating body measurement data of a user and system for selecting a set of articles of clothing for a user
WO2015193628A1 (en) * 2014-06-19 2015-12-23 Toshiba Research Europe Limited Methods and systems for generating a three dimensional representation of a human body shape
US10460158B2 (en) 2014-06-19 2019-10-29 Kabushiki Kaisha Toshiba Methods and systems for generating a three dimensional representation of a human body shape
WO2016073841A1 (en) * 2014-11-06 2016-05-12 Siemens Medical Solutions Usa, Inc. Scan data retrieval with depth sensor data
US10430551B2 (en) 2014-11-06 2019-10-01 Siemens Healthcare Gmbh Scan data retrieval with depth sensor data
GB2542114B (en) * 2015-09-03 2018-06-27 Heartfelt Tech Limited Method and apparatus for determining volumetric data of a predetermined anatomical feature
CN107374638A (en) * 2017-07-07 2017-11-24 华南理工大学 A kind of height measuring system and method based on binocular vision module
WO2020205979A1 (en) 2019-04-01 2020-10-08 Jeff Chen User-guidance system based on augmented-reality and/or posture-detection techniques
EP3948661A4 (en) * 2019-04-01 2023-01-04 Jeff Chen User-guidance system based on augmented-reality and/or posture-detection techniques

Also Published As

Publication number Publication date
GB2504711B (en) 2015-06-03
GB201214042D0 (en) 2012-09-19

Similar Documents

Publication Publication Date Title
GB2504711A (en) Pose-dependent generation of 3d subject models
US11576645B2 (en) Systems and methods for scanning a patient in an imaging system
Weiss et al. Home 3D body scans from noisy image and range data
US11576578B2 (en) Systems and methods for scanning a patient in an imaging system
US20200229737A1 (en) System and method for patient positionging
JP6392756B2 (en) System and method for obtaining accurate body size measurements from a two-dimensional image sequence
KR101833364B1 (en) Method and system for constructing personalized avatars using a parameterized deformable mesh
CN104335005B (en) 3D is scanned and alignment system
WO2019080229A1 (en) Chess piece positioning method and system based on machine vision, storage medium, and robot
CN109271914A (en) Detect method, apparatus, storage medium and the terminal device of sight drop point
KR20170008638A (en) Three dimensional content producing apparatus and three dimensional content producing method thereof
WO2018075053A1 (en) Object pose based on matching 2.5d depth information to 3d information
Guomundsson et al. ToF imaging in smart room environments towards improved people tracking
Hernandez et al. Near laser-scan quality 3-D face reconstruction from a low-quality depth stream
CN108509857A (en) Human face in-vivo detection method, electronic equipment and computer program product
US11527026B2 (en) Body measurement device and method for controlling the same
US20230222680A1 (en) System and method for mobile 3d scanning and measurement
CN112509117A (en) Hand three-dimensional model reconstruction method and device, electronic equipment and storage medium
Niese et al. A Novel Method for 3D Face Detection and Normalization.
Clarkson et al. Calculating body segment inertia parameters from a single rapid scan using the microsoft kinect
Lunscher et al. Point cloud completion of foot shape from a single depth map for fit matching using deep learning view synthesis
EP3756164B1 (en) Methods of modeling a 3d object, and related devices and computer program products
Malleson et al. Single-view RGBD-based reconstruction of dynamic human geometry
US20220270337A1 (en) Three-dimensional (3d) human modeling under specific body-fitting of clothes
CN108629333A (en) A kind of face image processing process of low-light (level), device, equipment and readable medium