CA2884668A1 - Anatomically isolated virtual 3D model

Anatomically isolated virtual 3D model

Info

Publication number
CA2884668A1
CA2884668A1
Authority
CA
Canada
Prior art keywords
user
measurement
model
mass
isolated
Prior art date
Legal status
Abandoned
Application number
CA2884668A
Other languages
French (fr)
Inventor
Unknown
Current Assignee
Tortolo Calvin
Original Assignee
Tortolo Calvin
Priority date
Filing date
Publication date
Application filed by Tortolo Calvin filed Critical Tortolo Calvin
Priority to CA2884668A priority Critical patent/CA2884668A1/en
Publication of CA2884668A1 publication Critical patent/CA2884668A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T2215/00 Indexing scheme for image rendering
    • G06T2215/16 Using real world measurements to influence rendering

Abstract

A 3D human model is generated and rendered on screen to represent a user. The model comprises individually isolated parts, constructed on the basis of the user's given numerical inputs. Each numerical input represents an isolated area of the model's body, in correspondence with the same area on the user's real body. The areas together form the whole, with their combined properties totalling the properties of the whole. These specifically personalized models are representative of the users' accounts and can be modified, updated and compared with other users and with subsequent renditions of themselves via a progression timeline. The bodies receive date-stamps corresponding to the date they were altered, allowing multiple renditions of a user's body along the path to their desired fitness goals to be compared with one another and with the current model, and allowing each of the isolated masses (e.g. the biceps) to be compared with how it looked at any point along the user's progression timeline, displaying the difference between that point and the current state.

Description

Background The present disclosure is directed towards a method for the 3D rendering of a virtual human model on a mobile device, and in particular to a graphical representation useful within the fitness industry.
Summary Detailed in this application is how the virtual model is generated, displayed, further modified, copied and compared, and utilized in the mobile platform in a virtual environment. Further detailed is the manner in which a user interacts with the model and the platform, and the sequences involved in the process from beginning to end and ongoing. The application provides an objective look at a user's own physical body, molded from real-life numerical inputs and based on statistics alone. The application receives values inputted by the user and generates a 3D model based on those values: a dimensionally accurate, objective depiction of the user's real-life physical appearance and measurements. The created model is eventually compared against the user's peers and against itself at different intervals where progress was made. The body is also used for educational anatomical purposes, with the user's own body being demonstrated on and referenced from. It also serves as a stat sheet, tracking one's measurements around the body as they change progressively, with the stats displayed on a human body structure.
The application is designed for use within the fitness and athletics industry, where the improvement and alteration of one's physical appearance and capabilities are ranked based on physical improvement and output. Within this industry, trainers often manage and track clients and athletes, and the athletes often track and monitor themselves. The processes herein detail how a 3D model building program that facilitates many aspects of the fitness industry is utilized, and the methods applied.
Brief description of drawings Introduction/figure 0: 3D model, rotatable within the confines of the app, on-screen, sized and shaped from corresponding value fields.
Figure 1) Creation process: Figure 1 displays the process executed by the user, forming the model from beginning (weighted mass) to completion (human structure).
2) Measurement input interface: Figure 2 depicts how the values in the fields on the measurement input interface are input in relation to the body, also displaying the approximate size of the masses and where they extend to, as well as where the joints that confine them are placed approximately.
3) Perimeter measurement display: Figure 3 demonstrates how a measurement that has been taken from the body is then applied to the virtual model, for the specific area of the body (mass) to which that measurement corresponds.
4) Adjustment of mass: Figure 4 demonstrates the result of a change in measurement value on a specific mass of the body.
5) Measurement precedence: Figure 5 displays the effect of measurement precedence as detailed in step 4.
6) Timeline progression: Figure 6 displays the timeline progression component of the app, demonstrating the function of the "date-stamping" process.
+ Flowchart outlining the steps involved in the 3D model creation/alteration process.
Description The anatomically isolated virtual human model software is a virtual 3D representation building program. The virtual 3D representation is a statistically structured formation of received, combined input, rendered on a graphical user interface. The program encompasses the creation of a 3D model based on a user, that model comprising isolated locational masses forming the parts of the model, which are displayed individually. The masses are adjustable through measurement input fields corresponding to different areas of the user's body, either individually or as the whole adjusts. The properties of individual masses are combined to determine the properties of the whole body structure, yet the individual masses are isolated from one another by connectors which do not adjust in response to measurement input fields. The masses are confined between the boundaries of these connectors (e.g. the left thigh mass is separated from the stomach by a joint at the hip, and from the calf by a joint at the knee). When an inputted value regarding the thigh is received through the measurement input interface, it only affects the mass between those confines on the model. The dimensional appearance of the whole structure changes as each individual mass is changed, with each individual mass isolated from the others in terms of alterability. This process follows an accurate, user-specific, statistical approach to receiving data regarding a user, generating a virtual human model based on that data, and rendering that virtual human model for the user on the user interface, saved to memory in correspondence with a database.
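The mass/connector relationship described above can be illustrated with a short sketch. This is a minimal illustration only, with hypothetical names (Connector, IsolatedMass, BodyModel); the disclosure does not specify an implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Connector:
    """A massless joint (e.g. hip, knee) bounding isolated masses.

    Connectors never resize in response to measurement input; they
    anchor the structure while the masses between them change."""
    name: str
    position: tuple  # fixed (x, y, z) point in model space

@dataclass
class IsolatedMass:
    """One adjustable region of the body (e.g. the left thigh)."""
    name: str
    circumference: float    # in app units, from the user's input
    upper_bound: Connector  # e.g. the hip joint
    lower_bound: Connector  # e.g. the knee joint

@dataclass
class BodyModel:
    masses: dict = field(default_factory=dict)  # name -> IsolatedMass

    def apply_measurement(self, mass_name: str, circumference: float):
        """Adjust exactly one mass; the connectors and every other
        mass are untouched, so the rest of the structure stays intact."""
        self.masses[mass_name].circumference = circumference
```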
The method may run in a mobile application, comprising an input interface, a structure-generating computing processor and a computing archiving database with memory, the memory connected to the processor. Computer-readable code is saved into memory by the computer implemented method. As inputs are received and recorded, output is generated.
Saving the data after receiving input values includes placing a date-stamp on the model, allowing separate date-stamped models to be viewed alongside each other to show progress on the model as a whole and in each individual area. The processor takes values received through the input interface and modifies those values through an in-app scale, converting measurements to smaller, on-screen measurements and generating the resulting output. The scale is applied to all values corresponding to on-screen measurements of the user's virtual model and its isolated parts. A result of the process is a consistent, current, correct depiction of the user, maintained by the user, for the user or another user to associate their account(s) and therefore themselves with, with the intention to view, track and share as desired. Viewing the rendered 3D model is similar to looking at one's own reflection, only virtually and dimensionally in an app.
The creation may be done by a processor on a mobile device.
The app may run on a touch screen device, with icons and modules that respond by tap/touch to pre-determined commands. The virtual on-screen body model is 3D in the sense that it is rendered such that the representation comprises a physical volume within a virtual environment, allowing for depth in that on-screen environment and for the model to be rotated a full 360 degrees or for specific body parts to be magnified. The body demonstrates 3D qualities within the virtual environment.
The computing software powers a platform designed for hosting created 3D virtual bodies. The 3D virtual bodies are created by, of and for a user. Each created body's structure and framing, comprised of isolated parts, is based on the given recorded measurements of a user, generated from the input values given by the user or someone on behalf of the user, then translated through an in-app scale. A user, or someone acting for the user, measures and records the values corresponding to specific, isolated areas on the body determined by the app. Beyond aesthetic appeal, these areas are chosen for their functional usage and exertion properties in fitness and athletics; areas without this specific purpose are not included.
The factors determining the isolated areas, and why they specifically are isolated, relate to known physical factors affecting athletic ability and the functionality of muscles. The areas of the body incorporated in the process are there because they affect the performance of the user/athlete one way or another, and some of them are already tracked by the majority of people training. Whether through a hands-on method with notes, or just by way of visual monitoring, the majority of people training are aware of the presence of some of these areas and of the changes these areas undergo. For example, a larger muscle mass on the chest would potentially allow for a stronger pushing motion (e.g. the bench press). Body optimization is also factored in: the way a body is put together, determining how effectively it achieves its highest exertion and performance potential. Body optimization aims to allow the most available output with as little excess strain as possible that would slow or tire the athlete.
This could include the same above example with the chest, adding the effect of a smaller waist to potentially allow an athlete to maintain the same amount of power while exerting for a longer period of time, without the excess weight to carry around. Exertion properties determining strength, explosiveness, endurance, recovery time and resistance to injury are all factors. An example of strength and explosiveness would be having a smaller waist with larger muscle mass in the leg and gluteus muscle groups, potentially allowing for more speed and vertical leaping ability.
The recorded measurements are input into value fields on a measurement input interface where designated. The measurement input interface receives data corresponding to measurements. This includes hitting an on-screen command that prompts the measurement input interface to open, consisting of a list of value input fields, each titled with the area on the body it represents. The user inputs their measurements (e.g. 16 for the upper arm - bicep/tricep) in correspondence with the designated area(s) into the value field(s) and, once finished, hits an apply-type command, which prompts the values to be saved and sent to an in-app processor. The in-app processor converts these real-life dimensions into adjusted, screen-sized ones through a pre-determined in-app scale. After being converted by the processor, the now in-app values are applied to the virtual 3D body, rendering a dimensionally accurate translation of the measurement(s) - and ultimately the user - on screen.
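A rough sketch of this input-to-render flow follows, reusing the hypothetical BodyModel from the earlier sketch. The function names and the shape of the "apply" handler are assumptions; only the save, scale, then apply sequence comes from the description.

```python
APP_SCALE_CM = 30.48  # real-world cm per in-app unit; a ratio of this
                      # form (30.48:1 cm) is given later in the text

def to_app_units(real_cm: float) -> float:
    """Convert a real-life measurement to adjusted, screen-sized units."""
    return real_cm / APP_SCALE_CM

def on_apply(model, field_values: dict):
    """Handle the apply-type command: take each entered field value,
    convert it through the in-app scale, and apply it to the matching
    isolated mass on the virtual 3D body."""
    for mass_name, real_cm in field_values.items():
        model.apply_measurement(mass_name, to_app_units(real_cm))
    # the renderer would then redraw the model on screen (not shown)
```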
The accuracy of the dimensional representation depends on how accurate the measurements inputted by the user were, and how many locations they decided to measure and input. There are a set number of isolated areas that each generate and render individual change.
The measurement input interface receives measurements on specific, isolated, predetermined locations and encompasses a field entry, the entry representing a measurement, which sends the data received from the field entry to a computing generator which acts as the applier of inputted data to a system of determined output.
The processor receives numerical input from the input interface and in turn generates and renders output, the output being a virtual, structurally formed, dimensionally accurate model of the user's body.
The face is not included in this process; no facial customization options take place. Modifiable areas of the body commence below the head and end at the ankle (see figure 2).
Step 1 The first step in the process is the user obtaining the necessary measurements from their body, either self-measured or taken by a professional. These are specific measurements taken at specific locations on the body. The user may already have some of these measurements, but the measurement criteria will likely differ from any previous need for a measurement, as the points designated to make up said isolated masses are custom placed by the app. The measurement that represents a locational area is the perimeter around the area, which may follow through a set of points placed by the app at a set distance between the confines of that mass. The perimeter following through these points is the circumference of that area, and the circumference is used for the vast majority of measurement fields. The user measures around the designated area in the general location/placement instructed by the app. The user obtains the measurement of circumference, which translates through the in-app scale to an on-screen dimensional depiction of the user's measurement, generated by a group of points that make up that isolated area in the app. The areas may consist of four points: A, B, C and D, spanning a full 360 degrees around the isolated area of the model. Circumference (C) is the complete distance around the isolated area, through said points, and in at least some embodiments works as follows: C = points A-A through BCD (as displayed in figure 3).
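Under this four-point reading of figure 3, the circumference is the length of the closed loop through points A, B, C and D back to A. A sketch, assuming each point is stored as (x, y, z) coordinates and that straight segments approximate the path:

```python
import math

def circumference(points):
    """Length of the closed loop A -> B -> C -> D -> A through the
    points the app places around an isolated mass (figure 3).
    Each point is an (x, y, z) tuple."""
    return sum(math.dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

# Example: four points spanning 360 degrees around an upper arm
arm = [(0.0, 1.2, 0.5), (0.5, 1.2, 0.0), (0.0, 1.2, -0.5), (-0.5, 1.2, 0.0)]
print(circumference(arm))  # straight-segment approximation of the girth
```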
Step 2 A user then commences the creation process on the device, demonstrated in figure 1, which entails:
a) The user inputting certain broad measurements representative of the body in whole into the measurement input interface, these include;
= Weight (first) - in pounds or kilograms.
= Height (vertical slider, top of the head to bottom of the foot) - in inches or centimeters.
= Body fat % (vascularity, resulting in muscular-shaped areas of the body) - obtained by a machine usually found in gyms.
(for the above see figures 101-105) The changes from values inputted regarding the whole body, as shown above, are visually rendered for the user, dispersed across the body.
b) The user inputs specific measurements of the app-determined, isolated areas on the body into the measurement input interface, the measurements being traced through the circumference points (A-A through BCD) placed around the perimeter of the mass. In at least some embodiments, the following measurements are inputted to correspond with these areas, labelled as:
= Neck perimeter mass - vertically from the top of the neck to its base, above the collarbone.
= Shoulder perimeter mass - upper, upper torso, from shoulder to shoulder horizontally and from the collarbone to the middle of chest vertically.
= Upper arm perimeter mass (2) - largest part of the upper arm, the measurement line centered between the shoulder and the elbow, the mass cut off at the shoulder and the elbow.
= Forearm perimeter mass (2) - measured at the thickest part of the forearm, just beneath the elbow, the mass spanning from the elbow to the wrist.
= Chest perimeter mass - underneath the shoulder perimeter mass, middle of the chest to the ribcage.
= Waist and stomach perimeter mass - the user's waist, from just above the belly button to the hip line.
= Hip perimeter mass - from the hip bone to the top of the thigh.
= Thigh perimeter mass (2) - From the top of the thigh to above the knee.
= Calf perimeter mass (2) - underneath the knee, the measurement at the largest part of the top half of the calf, cut off at the ankle.
(for the above refer to figure 2) The measurement areas detailed above adjust with the input of a measurement value through the measurement input interface (see figure 4). The perimeter line around the area adjusts directly in response to a measurement value input, and the rest of the mass within the confines of the connectors is adjusted relative to the perimeter line. No mass is square shaped; for example, the bicep has its largest diameter at one location (usually the perimeter line), while the areas below and above it protrude less. The volume of the bicep on either side of the perimeter line adjusts in size relative to the length of the perimeter line. This allows for the full mass adjustment, not just the perimeter line which was measured directly.
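One plausible way to realize this full mass adjustment is to rescale every cross-section of the mass by the same ratio as the measured perimeter line, so the profile keeps its shape. This is an illustrative sketch; the disclosure does not give the exact interpolation.

```python
def adjust_mass(cross_sections, old_perimeter, new_perimeter):
    """Rescale a mass when its measured perimeter line changes.

    The perimeter line adjusts directly; the cross-sections above and
    below it (within the connector boundaries) scale by the same
    ratio, so a bicep keeps its bulge instead of becoming uniform."""
    ratio = new_perimeter / old_perimeter
    return [girth * ratio for girth in cross_sections]

# A bicep profile from elbow to shoulder; the peak is the measured line
bicep = [28.0, 31.0, 34.0, 32.0, 29.0]  # girths in cm
print(adjust_mass(bicep, old_perimeter=34.0, new_perimeter=36.0))
```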
These isolated areas of the body are determined by the app with regard to functionality and athletics-related factors. They are formed and generated by parameters given by the app, and are separated and cut off at specific locations designated by the app. They remain isolated and independent because of these designated cut-offs. The designated cut-offs are referred to as joints or connectors. These connectors do not adjust in size or shape regardless of inputted measurement adjustments, but instead are adjusted on their own, separately.
They remain fixed in size and act as connecting points for the isolated masses located on either or one side of them. They effectively hold the structure, comprised of individual masses, together as one whole. As a mass is adjusted in value and in turn reformed to its new size, the connectors remain in a fixed position, allowing the rest of the body structure to remain intact while only the specified area is altered.
This is done on a processing level in a set of sequences. Before any of these sequences the app obtains the gender of the user, and uses it as the basis for the frame and for the non-functional, personal visual identifiers and features (e.g. head, hands, feet, breasts, the appearance of bone structure, thickness of joints, vertical/horizontal placement of features, and body-part-to-body-part ratios).
The first sequence includes the new user entering their weight into the input interface as displayed in figure 101.
All values that are entered are converted from real life measurements to in-app measurements.
This is done by the in-app scale with a pre-determined input-output response (e.g. 30.48:1 cm).
After receiving the value corresponding to weight, the processor generates a mass whose size (volume) corresponds to the weight inputted, with the app determining how much volume a specific amount of weight generates in mass.
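A sketch of these two steps, using the 30.48:1 example ratio and a nominal body density that the disclosure does not specify (an assumption made purely so the example runs):

```python
APP_SCALE_CM = 30.48  # in-app scale: 30.48 real-world cm : 1 app unit
CM3_PER_KG = 990.0    # assumed ~1.01 g/cm^3 mean body density (not from source)

def volume_from_weight(weight_kg: float) -> float:
    """On-screen volume (app units cubed) generated from the inputted
    weight. Lengths divide by the scale, so volume divides by its cube."""
    real_cm3 = weight_kg * CM3_PER_KG
    return real_cm3 / APP_SCALE_CM ** 3

print(volume_from_weight(80.0))  # size of the initial weighted mass
```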
The next sequence involves the mass being adjusted to its vertical dimensions as the user enters their height measurement into the input interface, as depicted in figure 102. There now exists a non-human-shaped 3D on-screen volume, spanning vertically from point x to point y, with a ruler representing the height property of the mass and figure.
The next sequence involves the division of the mass into its parts: one part for every app-determined major, functional muscle group/area of the body (see (b) above; for the visual see figure 103). The computing system separates fractional, percentage-based areas of mass from one another, the percentages determined by how large a share of the body an area contains compared to its counterparts. The separated masses are placed where they will represent a specific area of the body (e.g. a mass of area x placed below a mass of area y, where x represents the calf muscle area of the body and y the thigh).
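As a sketch of this division, the shares below split the single weighted mass into one part per area. The disclosure does not give the actual percentages, so these values are placeholders chosen only to sum to 100%.

```python
# Assumed share of total body volume per isolated area (placeholders).
AREA_SHARES = {
    "neck": 0.03, "shoulders": 0.08, "chest": 0.10,
    "upper_arm_l": 0.03, "upper_arm_r": 0.03,
    "forearm_l": 0.02, "forearm_r": 0.02,
    "waist_stomach": 0.22, "hips": 0.12,
    "thigh_l": 0.10, "thigh_r": 0.10,
    "calf_l": 0.05, "calf_r": 0.05,
    "remainder": 0.05,  # head, hands, feet: fixed, non-adjustable
}

def divide_mass(total_volume: float) -> dict:
    """Split the weighted mass into fractional, percentage-based parts,
    one per app-determined functional area (figure 103)."""
    assert abs(sum(AREA_SHARES.values()) - 1.0) < 1e-6
    return {area: total_volume * share for area, share in AREA_SHARES.items()}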
The following sequence is the addition of connectors to the process, displayed in figure 104. Connector pieces that carry no mass in relation to the weight of the model are placed in between the now individual masses and attached on either side (if there is more than one side) to the masses they separate. There exists, at this point, a series of individual masses forming the outline of a human body, connected to one another by massless joints.

The next sequence requires the inputting of a value representing the user's body fat % (if none is input, the system uses a default), depicted in figure 105.
With the value for body fat % entered, the processor now has enough information to generate a presentable, human-body-like structure with human shape(s), curves and edges around the masses. The body fat % value may represent the contouring and sculpting of the masses, which is what first makes the model appear human. Without a body fat %, the masses would remain non-human-shaped; with a reasonable body fat %, the masses hold a human-like, chiseled state. This is because the body fat % quantifier releases form from the area(s) upon its addition. Without a value representing a body fat % within the human range, the body would remain a group of formless masses; applying form brings the masses closer to the app-designed display of the raw muscle shape. Further reduction or increase of the body fat % sculpts the individual masses and therefore the whole body structure, giving the visual effect of increased or decreased vascularity/definition (e.g. a decrease of vascularity from an increase in body fat %). After the body fat % is inputted, the processor configures the 3D formation. Once complete, a 3D model with human dimensions is generated and rendered for the user on screen via the computing, visual output software. The model is visually representative of the input values received thus far.
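One way to read the body fat % quantifier is as a sculpting factor controlling how much of the raw muscle shape shows through. The mapping below is purely illustrative; the bounds and the linear form are assumptions, not from the source.

```python
def definition_factor(body_fat_pct: float) -> float:
    """Map body fat % to a 0..1 sculpting factor: near 1 the masses
    approach the app-designed raw muscle shape (high vascularity and
    definition); near 0 they smooth toward rounder, softer forms."""
    LEAN, HIGH = 5.0, 40.0  # assumed plausible human range
    clamped = min(max(body_fat_pct, LEAN), HIGH)
    return 1.0 - (clamped - LEAN) / (HIGH - LEAN)

print(definition_factor(12.0))  # leaner input -> more defined masses
```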
At this point the user has the ability to freely personalize the 3D model until it is as close to their real-world dimensions as they are comfortable with. This following process is an optional sequence, unlike the creation process, as the user can continue into the rest of the program's functions and features with their model as is (gender, weight, height, body fat %).
The sequence entails the identifying of all the now-isolated masses with a label of sorts, and the user inputting measurement values into the specific value field on the input interface corresponding to the specific isolated mass (see figure 2).
The isolated masses are measured in perimeter, by a circumference comprising a set of points placed by the app, spanning 360 degrees around the mass, located at a height that may be the center distance between the connectors on either side of it. There may exist four placed points (front, sides and back), through which the circumference measurements run.
Circumference can = point A-A, through BCD (see figure 3). As the user, or a professional acting on behalf of the user, measures the circumferences of the isolated areas of their body and inputs them into the value fields on the input interface, the processor generates the new circumference of the on-screen mass as translated through the in-app scale from the measurement received, forming the life-like dimensions of that mass.
It is at this point that the computing software applies a process called measurement precedence. As a measurement regarding an isolated mass is received by the input interface, read by the processor, generated into the resulting formation of said received input, and rendered for the user, the 3D model and process now apply measurement precedence.
Measurement precedence is explained after step 4.
Step 3 Based on the aforementioned input values corresponding to individual isolated parts of the whole body, together forming the whole body, the values are received on the measurement input interface and the computing processor generates the virtual model (the model seen in figure 0).
Step 4 The generated model is then rendered on-screen and a user can interact with it by viewing and rotating it in a 360 degree range of motion. The user can save this model and further use it with their account on the app; the model from that point onwards remaining associated with the account. The model is saved to memory, now representative of that user in their current state, available to be updated by the adjusting of values mentioned in step 1 and displayed in figure 2.
As a user continues to update their model, as often as the software allows the model to be updated, the model will potentially become more specific to that user. As the user applies more specific locational measurements to the model, the weight of the model takes less effect on the mass areas the more individual measurements are applied (while measurement precedence is active). This is due to a process called measurement precedence, whereby the personalized, individual measurements which have been inputted through the measurement input interface and generated on the model take priority in shape and size (volume) over the weight, once the user has added an input value corresponding to that area.
Measurement precedence is a control mechanism; it is what maintains the structure as a whole as the parts of the structure change. The weight of any part is always a % of the weight of the whole, and the weight of all parts combined totals the weight of the whole.
The weight input field is only modifiable in relation to the whole; there are no individual weight inputs for individual parts.
Each individual mass has a weight property, but only as its % in relation to the whole.
The weight of the whole does affect the size of all individual areas, but only before measurement precedence for an area has been activated, at which point the weight of the whole affects the size of all areas excluding the one(s) receiving measurement precedence.
Even once adjusted, the size of the area(s) does not affect the weight of the whole; the masses only respond to the weight of the whole, and only before a measurement value has been inputted into the field corresponding to that mass. This means that all masses combined always total the body weight, and that adjusting the size of any individual mass does not take effect on any other masses, nor on any other component of the body. Once a measurement is inputted, the weight behind that part of the body remains as it was and nothing is affected but the size of that measurement. An overall weight change affects all parts of the body based on the weight of individual masses relative to the whole (e.g. a change of the whole's weight by 10 kg results in the thigh receiving a weight change of 1 kg, where the weight % of the thigh is 10%). Furthermore, just as a weight change affects the size of all areas of the body by their % of the body's total before measurement precedence, it still does so after measurement precedence, excluding the part(s) receiving precedence.
Measurement precedence is specific to each individual area of the body, giving precedence only to the measurement that the measurement precedence applier corresponds to. Any weight change after measurement precedence has taken effect on an individual area affects the parts of the body that have not received a specific measurement the same as they would be affected without any precedence. E.g. if only the calf area has received measurement precedence and the user changes the weight value of the whole body, every part that isn't under measurement precedence will receive the same % increase/decrease in weight and size as its % related to the whole, whereas only the weight of the calf changes, its weight no longer determining its size while measurement precedence is active. Measurement precedence is turned on when a manual measurement for an individual mass is received, and can be turned off (via a specific precedence command corresponding to all locations) at any time, re-allowing the weight value to determine the size of the area. This means that even when a measurement is updated, the rest of the body acts without concern for the change of measurement of the individual area, and a user can allow their model to receive size adjustments from a change of weight in only the areas they don't personally wish to input a measurement for. There is no measurement precedence until an individual area's measurement is received via the input field on the measurement input interface.
For measurement precedence see figure 5.
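The precedence rules above reduce to a small amount of state per mass. A sketch, with hypothetical names, of how a whole-body weight change might skip masses under precedence:

```python
from dataclasses import dataclass

@dataclass
class Mass:
    name: str
    weight_share: float           # fixed fraction of total body weight
    size: float                   # current volume in app units
    has_precedence: bool = False  # set once a manual measurement arrives

def set_measurement(mass: Mass, measured_size: float):
    """A manual measurement activates precedence for that mass only;
    its weight share stays the same, but weight no longer sizes it."""
    mass.size = measured_size
    mass.has_precedence = True

def apply_weight_change(masses, old_weight: float, new_weight: float):
    """Propagate a whole-body weight change: every mass keeps its %
    of the total weight, but only masses WITHOUT precedence resize."""
    ratio = new_weight / old_weight
    for m in masses:
        if not m.has_precedence:
            m.size *= ratio

# E.g. if only the calf is under precedence, a 10 kg change resizes every
# other mass by its share, while the calf keeps its measured size.
```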
Step 5 The user now uses the program's functions and features through an account they create and sign in with. The 3D model represents the user's account as their body (the user's body). The user's model is open to measurement updates in response to the user making gains/losses on their real-life body: the measurement input interface is re-opened for them to add new values into the field entries, and the processor receives, generates and re-renders the new updated change(s). The new changes are then saved into the app's memory corresponding to that user's account and therefore model. Every time the user updates their model, the new updated dimensional depiction becomes the model associated with their account, and the previous dimensional model, at the interval it was updated from, is archived in the system's memory and given a date-stamp corresponding to the date that model stopped being the one belonging to the user, becoming instead a previous rendition the user used to identify with.
A version of the 3D model is made available for each date on which measurements were altered through the measurement input interface.
These date-stamped models are referred to as progressions when viewing them in relation to one another. See figure 6 for the progression timeline. The models themselves, in whole, are viewed and compared with one another and with the current model. More specifically, the individual parts are date-stamped, displayed and listed by statistics, to be compared individually with one another and with the stats of those specific parts on the user's current model (e.g. bicep/upper arm size from date x, to date y, to now).
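A minimal sketch of the date-stamping and comparison mechanics, reusing the hypothetical BodyModel from the earlier sketch; the names and storage details are assumptions:

```python
import copy
from datetime import date

class ProgressionTimeline:
    """Archive of date-stamped model renditions ('progressions')."""

    def __init__(self):
        self.snapshots = {}  # date -> archived BodyModel

    def archive(self, model, stamp=None):
        """Store the outgoing model under the date it stopped being
        the one belonging to the user."""
        self.snapshots[stamp or date.today()] = copy.deepcopy(model)

    def compare_mass(self, then, current, mass_name):
        """Difference in one isolated mass between a past date and
        the current model (e.g. bicep size from date x to now)."""
        past = self.snapshots[then].masses[mass_name].circumference
        return current.masses[mass_name].circumference - past
```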
Step 6 A component of the measurement input interface allows "target" measurements to be set.
The user enters their predicted or target measurements for one or more locational masses through the measurement input interface, along with their target weight and body fat percentage, by the same process as detailed in step 2. A 3D model is then generated separate from the one associated with their account, which allows the user to visualize how their physical body, and any part of it, will look once they have reached their goal(s) according to x changes. This target model can be compared next to the current model and is also stored as a model in the 3D model timeline, allowing the user to compare their progressive physical changes with their ideal physical measurements (displayed in figure 6). In some embodiments of the method of the present disclosure, a body is made for another user or users. This is on the premise of a trainer-client relationship or otherwise. The creator selects an open slot to create and save the body to, creates the body by the method of step 2, then saves it in their database, listed by the user they have created it for, and attaches it to that user's account, for the receiving user to interact with as if they had created it themselves. This only occurs once the required permissions have been obtained by the creator and receiver from the app.
Trainers managing and tracking clients can store and archive a list of their clients/athletes, saved to memory along with their bodies, to optimize the experience and results they provide. They can also compare their lists and provided results with other trainers this way.
CA2884668A 2015-03-13 2015-03-13 Anatomically isolated virtual 3d model Abandoned CA2884668A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA2884668A CA2884668A1 (en) 2015-03-13 2015-03-13 Anatomically isolated virtual 3d model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CA2884668A CA2884668A1 (en) 2015-03-13 2015-03-13 Anatomically isolated virtual 3d model

Publications (1)

Publication Number Publication Date
CA2884668A1 true CA2884668A1 (en) 2016-09-13

Family

ID=56896871

Family Applications (1)

Application Number Title Priority Date Filing Date
CA2884668A Abandoned CA2884668A1 (en) 2015-03-13 2015-03-13 Anatomically isolated virtual 3d model

Country Status (1)

Country Link
CA (1) CA2884668A1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232629B1 (en) 2019-08-30 2022-01-25 Amazon Technologies, Inc. Two-dimensional image collection for three-dimensional body composition modeling
US11580693B1 (en) 2019-08-30 2023-02-14 Amazon Technologies, Inc. Two-dimensional image collection for three-dimensional body composition modeling
US11903730B1 (en) 2019-09-25 2024-02-20 Amazon Technologies, Inc. Body fat measurements from a two-dimensional image
WO2021062047A1 (en) * 2019-09-26 2021-04-01 Amazon Technologies, Inc. Predictive personalized three-dimensional body models
US11069131B2 (en) 2019-09-26 2021-07-20 Amazon Technologies, Inc. Predictive personalized three-dimensional body models
CN114514562A (en) * 2019-09-26 2022-05-17 亚马逊技术有限公司 Predictive personalized three-dimensional body model
CN114514562B (en) * 2019-09-26 2023-06-13 亚马逊技术有限公司 Predictive personalized three-dimensional body model
US11836853B2 (en) 2019-09-26 2023-12-05 Amazon Technologies, Inc. Generation and presentation of predicted personalized three-dimensional body models
US11854146B1 (en) 2021-06-25 2023-12-26 Amazon Technologies, Inc. Three-dimensional body composition from two-dimensional images of a portion of a body
US11887252B1 (en) 2021-08-25 2024-01-30 Amazon Technologies, Inc. Body model composition update from two-dimensional face images
US11861860B2 (en) 2021-09-29 2024-01-02 Amazon Technologies, Inc. Body dimensions from two-dimensional body images

Similar Documents

Publication Publication Date Title
CA2884668A1 (en) Anatomically isolated virtual 3d model
KR101970687B1 (en) Fitness coaching system using personalized augmented reality technology
McErlain-Naylor et al. Determinants of countermovement jump performance: a kinetic and kinematic analysis
Bezodis et al. Understanding the effect of touchdown distance and ankle joint kinematics on sprint acceleration performance through computer simulation
CN109637625B (en) Self-learning fitness plan generation system
Jun et al. Big foot: Using the size of a virtual foot to scale gap width
JP2009297240A (en) Learning support apparatus and method
US10688345B1 (en) Ideal target weight training recommendation system and method
Poliszczuk et al. Changes in somatic parameters and dynamic balance in female rhythmic gymnasts over a space of two years
US20180229081A1 (en) Method and program for determining sequence of extractions by using computer
CA3119856A1 (en) Systems and methods for providing personalized workout and diet plans
WO2022169999A1 (en) System and method for providing movement based instruction
Sommer et al. Synchronized metronome training induces changes in the kinematic properties of the golf swing
KR102050559B1 (en) Customized health management system considering genetic specificity
Harry et al. Sex and acute weighted vest differences in force production and joint work during countermovement vertical jumping
Sato et al. Key motion characteristics of side-step movements in hip-hop dance and their effect on the evaluation by judges
Sim et al. How to quantify the transition phase during golf swing performance: Torsional load affects low back complaints during the transition phase
CN110337316B (en) Information processing apparatus, information processing method, and program
JP6577150B2 (en) Human body model display system, human body model display method, communication terminal device, and computer program
CN109243570A (en) Based on the movement recommended method and system of body local fat content, storage medium
Yeadon et al. A virtual environment for learning to view during aerial movements
CN112691357B (en) Cardiopulmonary function exercise system and method
Ede et al. A kinetic and kinematic comparison of the two-footed and step-out back handsprings on the balance beam
JP2020108823A (en) Information processing device, information processing method, and program
JP2022552785A (en) Quantified movement feedback system

Legal Events

Date Code Title Description
FZDE Dead

Effective date: 20171005