CN112819881B - Human body measuring method - Google Patents

Human body measuring method

Info

Publication number
CN112819881B
CN112819881B (application CN202110123228.XA)
Authority
CN
China
Prior art keywords
machine learning
learning model
human body
image
height
Prior art date
Legal status
Active
Application number
CN202110123228.XA
Other languages
Chinese (zh)
Other versions
CN112819881A (en)
Inventor
肖永强
肖金华
唐尉棉
关胤
徐戈
杨晓燕
王炅
Current Assignee
Fuzhou Reliable Cloud Technology Co ltd
Minjiang University
Original Assignee
Fuzhou Reliable Cloud Technology Co ltd
Minjiang University
Priority date
Filing date
Publication date
Application filed by Fuzhou Reliable Cloud Technology Co ltd, Minjiang University filed Critical Fuzhou Reliable Cloud Technology Co ltd
Priority to CN202110123228.XA priority Critical patent/CN112819881B/en
Publication of CN112819881A publication Critical patent/CN112819881A/en
Application granted granted Critical
Publication of CN112819881B publication Critical patent/CN112819881B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 7/60 — Image analysis; analysis of geometric attributes
    • G06T 7/11 — Image analysis; segmentation and edge detection; region-based segmentation
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06N 20/00 — Machine learning
    • G06N 3/02, G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06T 2207/10004 — Image acquisition modality; still image; photographic image
    • G06T 2207/20081 — Special algorithmic details; training; learning
    • G06T 2207/20084 — Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30196 — Subject of image; human being; person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Medical Informatics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

The human body measurement method comprises the following steps: obtaining a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; setting a first machine learning model, wherein the input of the first machine learning model is the human body image in the first material and the output is the segmentation grid of the human body in the first material, and training the first machine learning model. With this method, the first machine learning model can generate the segmentation grid for a picture; the segmentation result, expressed as a vector, is easier for the second machine learning model to learn and process, so the final measurement and estimation of the human body parameters is more accurate.

Description

Human body measuring method
Technical Field
The invention relates to a human body measurement method based on machine learning.
Background
With the popularization of intelligent terminals such as smartphones and tablet computers, technologies that measure human height and object dimensions from images and videos captured by such terminals have developed continuously. For example, application CN201910450077.1 proposes a human three-dimensional size prediction method based on machine learning, in which images of the front and side of the human body are captured and input into a machine learning model to predict three-dimensional sizes. However, that method can only feed the raw front and side images directly into the model, so machine learning alone cannot obtain a sufficiently accurate value.
Disclosure of Invention
Therefore, it is necessary to provide a method that can better identify three-dimensional parameters of the human body, so as to solve the problem that the measured human body parameters identified by the prior art are not accurate.
In order to achieve the above object, the present inventors provide a human body measurement method comprising the steps of: obtaining a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; setting a first machine learning model, wherein the input of the first machine learning model is the human body image in the first material and the output is a predicted value of the segmentation grid of the human body in the first material, and training the first machine learning model;
acquiring a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs; inputting the second image into the first machine learning model for processing to obtain an output result of the first machine learning model for the second image; adding the height information of the human body to which the second image belongs to the output result and taking the combination as the input information of a second machine learning model, taking predicted values of the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model, and training the second machine learning model;
and, in the measuring stage, acquiring front and side standing images of subject A, inputting the front and side standing images of subject A into the first machine learning model to obtain a first processing result, and inputting the first processing result into the second machine learning model to obtain an output result of the second machine learning model for the human body parameters of subject A.
Further, the first material further comprises a plurality of control points for each human body part in the segmentation grid, and the output of the first machine learning model comprises a plurality of control points for each human body part in the segmentation grid.
Further, the first machine learning model is trained with a first material library, the first material library comprising more than 200 sets of first material.
Further, the second machine learning model is trained with a second material library, the second material library comprising more than 50 sets of second material.
Specifically, the first machine learning model is a deep neural network.
Specifically, the second machine learning model is a LightGBM.
Further, after executing the step of adding the height information of the human body to which the second image belongs to the output result, an expansion processing is performed on the height value, which specifically includes:
averaging the output results of the first machine learning model for the second images;
setting a Gaussian mixture model G with 2 sub-distributions,
training G with the height parameters of each human body in the second material,
calculating the height parameter of each human body in the second material through G to obtain two sub-distribution probabilities P and Q,
expanding the height-value dimension in the input information of the second machine learning model into two dimensions, assigned the values P×K and Q×K respectively, without retaining the original height-value dimension, and
training the second machine learning model;
in the measuring stage, the height data of subject A appended to the first processing result is calculated through G to obtain two sub-probability values, which are multiplied by K and input, together with the first processing result, into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A.
Specifically, after executing the step of adding the height information of the human body to which the second image belongs to the output result, the expansion processing performed on the height value specifically includes:
setting the height of the human body as H, calculating P and Q, and comparing their magnitudes; if P is greater than Q, adding the following values to the input information of the second machine learning model: H×0.515, H×0.345, H×0.542; if P is not greater than Q, adding the following values to the input information of the second machine learning model: H×0.612, H×0.421, H×0.643;
meanwhile, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions, and the original height-value dimension is retained.
Specifically, after executing the step of adding the height information of the human body to which the second image belongs to the output result, the expansion processing performed on the height value may alternatively include:
adding to the input information of the second machine learning model the following values:
(0.515*P+0.612*Q)/(P+Q),
(0.345*P+0.6421*Q)/(P+Q),
(0.542*P+0.643*Q)/(P+Q),
meanwhile, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions, and the original height-value dimension is not retained.
Optionally, the method further includes the steps of: performing skin-color detection on the segmentation grid of the human body; performing straight-line regression on the boundary between the skin-color region and the non-skin-color region in the segmentation grid of the human body to obtain a boundary line; obtaining the width of the skin-color region at a preset distance on the skin-color-region side of the boundary line; obtaining the width of the non-skin-color region at a preset distance on the non-skin-color-region side of the boundary line; and, if the width of the non-skin-color region is greater than the width of the skin-color region, performing the step of generating a reference curve inward from the edge of the non-skin-color region at a distance S from that edge, wherein
S = width of the non-skin-color region − width of the skin-color region.
By this method, the first machine learning model can generate the segmentation grid of a picture; the segmentation result, expressed as a vector, is easier for the second machine learning model to learn and process, and the final measurement and estimation of the human body parameters is therefore more accurate.
Drawings
FIG. 1 is a flow chart of a human body measurement method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a device for measuring parameters of a human body according to an embodiment of the present invention;
fig. 3 is a schematic view of an image capturing device for measuring parameters of a human body according to an embodiment of the present invention.
Detailed Description
In order to describe the technical content, structural features, objects and effects of the technical solution in detail, the following description is given in connection with specific embodiments and the accompanying drawings.
Referring to fig. 1, a human body measurement method includes the steps of obtaining a first material, wherein the first material includes a human body image and a segmentation grid of the human body; S100: setting a first machine learning model, wherein the input of the first machine learning model is the human body image in the first material and the output is a predicted value of the segmentation grid of the human body in the first material, and training the first machine learning model. The human body image of the first material is not limited to a front or side view of the human body and may include images taken from any angle. Training with multi-angle human body images allows the first machine learning model to segment human body images more finely and intelligently. The method may further comprise, before acquiring the first material, manually preprocessing the human body image in the first material and drawing the segmentation grid for the human body in that image. The segmentation grid is a grid drawn to segment the body-part regions of a person. Through these steps, the first machine learning model is trained to distinguish the parts of the human body.
Further, the human body measurement method comprises obtaining a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs. The other body parameters may include any measurable body indicator, such as chest circumference, waist circumference, hip circumference, hand length, palm length, shoulder width and foot length. S101: inputting the second image into the first machine learning model for processing to obtain the output result of the first machine learning model for the second image. This output result means that the trained first machine learning model, after processing the second image, outputs the segmentation grids of the different human body parts in the second image. S102: adding the height information of the human body to which the second image belongs to the output result of the first machine learning model for the second image, taking the combination as the input information of the second machine learning model, taking predicted values of the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model, and training the second machine learning model. After training, the second machine learning model can estimate the other body parameters of the human body to which a second image belongs from the second image, the segmentation grid information therein and the height information. In a specific embodiment, in the application (measuring) stage, front and side standing images of subject A are acquired and input into the first machine learning model to obtain a first processing result, which includes the block segmentation result of the human body parts of subject A in the front and side standing images; the first processing result is then input, alone or together with the height of subject A, into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. The human body parameters of subject A may include chest circumference, waist circumference, hip circumference, hand length, palm length, shoulder width, foot length and the like, so that the present scheme achieves the technical effect of measuring human body parameters.
In some further embodiments, the first material further includes a number of control points for each human body part in the segmentation grid. These control points may be added in the material-labelling step, i.e. when the human body image in the first material is manually preprocessed, the control points are manually labelled on that image; labelling the control points mainly increases the amount of information, and the output of the first machine learning model then also includes a number of control points for each human body part in the segmentation grid. Adding control points as outputs also makes training of the first machine learning model faster and more stable. The control points may be labelled at the joint nodes of the human body in the image; adding this labelling information provides the first machine learning model with more data, and choosing joint nodes as control points also makes the classification performed by the first machine learning model more principled.
Training the first and second machine learning models with a single set of first material and a single set of second material would produce outputs that are not stable enough and has no significance in practical operation. The first machine learning model therefore needs to be trained with a first material library and the second machine learning model with a second material library. The first materials in the first material library have undergone the manual-labelling preprocessing step, and the library contains at least 20 sets of first material; the second material library contains at least 10 sets of material. In some specific embodiments, to achieve a better training effect, i.e. so that the outputs of the first and second machine learning models converge or the confidence of the output results is satisfactory, it is preferable to train the first machine learning model with a first material library containing 200 or more sets of first material and to train the second machine learning model with a second material library containing 50 or more sets of second material.
In some other specific embodiments, the first machine learning model is a deep neural network architecture and the second machine learning model is a LightGBM architecture. With this setting, the measured parameters are more accurate.
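As an illustration only, the following minimal sketch shows how the second-stage training described above could be wired up, assuming the first machine learning model is already trained and exposed through a function `segmentation_vector()` that turns the front and side images into a flat feature vector. That function, the variable names and the data-loading convention are assumptions for this sketch, not part of the patent.

```python
import numpy as np
import lightgbm as lgb

def segmentation_vector(front_image, side_image):
    """Hypothetical wrapper around the trained first (segmentation) model:
    returns the flattened segmentation-grid output for both views."""
    raise NotImplementedError  # stands in for the deep neural network of the embodiment

def build_training_set(second_materials):
    """second_materials: iterable of (front_image, side_image, height_cm, body_params)."""
    X, y = [], []
    for front, side, height, params in second_materials:
        features = np.concatenate([segmentation_vector(front, side), [height]])
        X.append(features)
        y.append(params)  # e.g. [chest, waist, hip, shoulder_width, ...]
    return np.asarray(X), np.asarray(y)

def train_second_model(X, y):
    """Train one LightGBM regressor per body parameter (a common way to handle
    multi-output regression with gradient-boosted trees)."""
    models = []
    for col in range(y.shape[1]):
        booster = lgb.LGBMRegressor(n_estimators=200)
        booster.fit(X, y[:, col])
        models.append(booster)
    return models
```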
After the above steps, the fit of the scheme to different heights can be further improved. After executing the step of adding the height information of the human body to which the second image belongs to the output result, an expansion processing is performed on the height value, which specifically includes:
averaging the output results of the first machine learning model for the second images;
setting a Gaussian mixture model G with several sub-distributions,
and training G with the height parameters of each human body in the second material. This better adapts the scheme to the peaks that may occur in the height distribution of a population; in the present technical solution it specifically comprises: setting a Gaussian mixture model G with 2 sub-distributions
and training G with the height parameters of each human body in the second material. The height parameter of each human body in the second material is then calculated through G, and each human body obtains two sub-distribution probabilities P and Q; the two peaks arise mainly from the gender difference, so this suits the bimodal distribution that the height values of a population may exhibit. The height-value dimension in the input information of the second machine learning model is expanded into two dimensions, assigned the values P×K and Q×K respectively, the original height-value dimension is not retained, and the second machine learning model is trained;
in the measuring stage, the height data of subject A appended to the first processing result is calculated through G to obtain two sub-probability values, which are multiplied by K and input, together with the first processing result, into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. With this scheme, the second machine learning model shows a clear advantage when processing body measurement parameters. When more factors that affect the height of a population are considered, such as ethnicity and nationality, technical solutions with 4, 6 or 8 sub-distributions are also feasible.
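A minimal sketch of this height-expansion step, using scikit-learn's GaussianMixture as one possible stand-in for the two-component model G described above. The scaling constant K, the function names and the array shapes are placeholders; the patent does not specify an implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

K = 1.0  # placeholder scaling constant; the patent calls it K but does not fix its value

def fit_height_gmm(heights):
    """heights: array of shape (n_samples, 1), the height parameter of each human body
    in the second material. Two sub-distributions, e.g. the two gender-related peaks."""
    g = GaussianMixture(n_components=2)
    g.fit(heights)
    return g

def expand_height(g, height_cm):
    """Replace the single height value by the two responsibilities P*K and Q*K."""
    p, q = g.predict_proba(np.array([[height_cm]]))[0]
    return np.array([p * K, q * K])
```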
In some other embodiments, after executing the step of adding the height information of the human body to which the second image belongs to the output result, the expansion processing of the height value specifically includes:
setting the height of the human body as H, calculating P and Q, and comparing their magnitudes; if P is greater than Q, the following values are added to the input information of the second machine learning model: H×0.515, H×0.345, H×0.542; if P is not greater than Q, the following values are added to the input information of the second machine learning model: H×0.612, H×0.421, H×0.643. Assuming that P and Q correspond respectively to the higher and lower peaks on the height axis, the three values associated with the corresponding gender are added to the input information, which makes the output produced by the trained second machine learning model more stable. In this embodiment, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions and P×K and Q×K are not assigned; the original height-value dimension is retained.
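A sketch of this gender-peak variant under the same assumptions as the previous sketch; the coefficients come from the embodiment, while the function name is illustrative.

```python
def expand_height_by_peak(g, height_cm):
    """Append three scaled height values chosen by the dominant sub-distribution,
    while keeping the original height value itself (per this embodiment)."""
    p, q = g.predict_proba([[height_cm]])[0]
    coeffs = (0.515, 0.345, 0.542) if p > q else (0.612, 0.421, 0.643)
    return [height_cm] + [height_cm * c for c in coeffs]
```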
Specifically, after executing the step of adding the height information of the human body to which the second image belongs to the output result, the expansion processing of the height value may instead specifically include:
adding to the input information of the second machine learning model the following values:
(0.515*P+0.612*Q)/(P+Q),
(0.345*P+0.6421*Q)/(P+Q),
(0.542*P+0.643*Q)/(P+Q),
in this embodiment, the height-value dimension in the input information of the second machine learning model does not need to be expanded into two dimensions and P×K and Q×K are not assigned; the original height-value dimension is retained. With this scheme, the output produced by the trained second machine learning model is more stable.
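The soft-blend variant can be sketched the same way under the assumptions above; the 0.6421 coefficient is reproduced exactly as it appears in the description.

```python
def expand_height_blend(g, height_cm):
    """Append three coefficients blended by the sub-distribution probabilities P and Q,
    keeping the original height value (per this embodiment)."""
    p, q = g.predict_proba([[height_cm]])[0]
    blend = lambda a, b: (a * p + b * q) / (p + q)
    return [height_cm, blend(0.515, 0.612), blend(0.345, 0.6421), blend(0.542, 0.643)]
```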
In other embodiments, considering that the human body in the captured images is clothed, and in order to further improve recognition, specific embodiments further include the steps of performing skin-color detection on the segmentation grid of the human body and performing straight-line regression on the boundary between the skin-color region and the non-skin-color region in the segmentation grid to obtain a boundary line. The width of the skin-color region is obtained at a preset distance on the skin-color-region side of the boundary line, and the width of the non-skin-color region is obtained at a preset distance on the non-skin-color-region side of the boundary line, where the preset distance is chosen proportional to the length of the boundary segment, for example:
preset distance = boundary segment length × 0.2
If the width of the non-skin-color region is greater than the width of the skin-color region, the interference of clothing needs to be eliminated, and the following step is performed: generating a reference curve inward from the edge of the non-skin-color region at a distance S from that edge, wherein
S = width of the non-skin-color region − width of the skin-color region.
The generated reference curve can be used for display and can also be input into the second machine learning model as part of the second material, improving the ability of the second machine learning model to cope with the clothing worn by the person.
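For illustration, a rough sketch of the clothing-correction arithmetic described above. The YCrCb thresholds are a common skin-detection heuristic and are an assumption of this sketch, not values taken from the patent.

```python
import cv2
import numpy as np

def skin_mask(image_bgr):
    """Rough skin-color detection in YCrCb space (threshold values are illustrative)."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def clothing_offset(skin_width, non_skin_width, boundary_segment_length):
    """Preset distance and reference-curve offset S as given in the description."""
    preset_distance = boundary_segment_length * 0.2
    s = non_skin_width - skin_width if non_skin_width > skin_width else 0.0
    return preset_distance, s
```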
The present scheme also provides a human body measurement system comprising a first machine learning model and a second machine learning model. The first machine learning model is trained by the following steps: acquiring a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; inputting the human body image in the first material into the first machine learning model; and taking the segmentation grid of the human body in the first material as the output of the first machine learning model. The second machine learning model is trained by: acquiring a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs; inputting the second image into the first machine learning model for processing to obtain the output result of the first machine learning model for the second image; adding the height information of the human body to which the second image belongs to that output result as the input information of the second machine learning model, and taking the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model. The system is also used, in the measuring stage, to acquire front and side standing images of subject A, input them into the first machine learning model to obtain a first processing result, and input the first processing result into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. With this system, in the formal application stage, only the front and side images of the human body need to be input to obtain the measured human body parameters, so the scheme of the invention achieves the technical effect of measuring human body parameters.
Specifically, the first material further includes a number of control points for each human body part in the segmentation grid, and the output of the first machine learning model includes a number of control points for each human body part in the segmentation grid.
Specifically, the first machine learning model is trained with a first material library that includes more than 200 sets of first materials.
Specifically, the second machine learning model is trained with a second material library that includes more than 50 sets of second material.
Preferably, the first machine learning model is a deep neural network.
Preferably, the second machine learning model is LightGBM.
In order to further improve the fit of the scheme to different heights, the body measurement system expands the height value after adding the height information of the body to which the second image belongs to the output result, specifically including:
averaging the output results of the first machine learning model for the second images;
setting a Gaussian mixture model G with several sub-distributions,
and training G with the height parameters of each human body in the second material. This better adapts the scheme to the peaks that may occur in the height distribution of a population; in the present technical solution it specifically comprises: setting a Gaussian mixture model G with 2 sub-distributions
and training G with the height parameters of each human body in the second material. The height parameter of each human body in the second material is then calculated through G, and each human body obtains two sub-distribution probabilities P and Q; the two peaks arise mainly from the gender difference, so this suits the bimodal distribution that the height values of a population may exhibit. The height-value dimension in the input information of the second machine learning model is expanded into two dimensions, assigned the values P×K and Q×K respectively, the original height-value dimension is not retained, and the second machine learning model is trained;
in the measuring stage, the height data of subject A appended to the first processing result is calculated through G to obtain two sub-probability values, which are multiplied by K and input, together with the first processing result, into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. With this scheme, the second machine learning model shows a clear advantage when processing body measurement parameters. When more factors that affect the height of a population are considered, such as ethnicity and nationality, technical solutions with 4, 6 or 8 sub-distributions are also feasible.
In some other embodiments, the body measurement system expands the height value after adding the height information of the body to which the second image belongs to the output result, specifically including:
setting the height of the human body as H, calculating P and Q, and comparing their magnitudes; if P is greater than Q, the following values are added to the input information of the second machine learning model: H×0.515, H×0.345, H×0.542; if P is not greater than Q, the following values are added to the input information of the second machine learning model: H×0.612, H×0.421, H×0.643. Assuming that P and Q correspond respectively to the higher and lower peaks on the height axis, the three values associated with the corresponding gender are added to the input information, which makes the output produced by the trained second machine learning model more stable. In this embodiment, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions and P×K and Q×K are not assigned; the original height-value dimension is retained.
Specifically, the body measurement system, after adding the height information of the body to which the second image belongs to the output result, may instead expand the height value as follows:
adding to the input information of the second machine learning model the following values:
(0.515*P+0.612*Q)/(P+Q),
(0.345*P+0.6421*Q)/(P+Q),
(0.542*P+0.643*Q)/(P+Q),
in this embodiment, the height-value dimension in the input information of the second machine learning model does not need to be expanded into two dimensions and P×K and Q×K are not assigned; the original height-value dimension is retained. With this scheme, the output produced by the trained second machine learning model is more stable.
In other embodiments, considering that the human body in the captured images is clothed, and in order to further improve recognition, the human body measurement system is further configured to perform skin-color detection on the segmentation grid of the human body and to perform straight-line regression on the boundary between the skin-color region and the non-skin-color region in the segmentation grid to obtain a boundary line. The width of the skin-color region is obtained at a preset distance on the skin-color-region side of the boundary line, and the width of the non-skin-color region is obtained at a preset distance on the non-skin-color-region side of the boundary line, where the preset distance is chosen proportional to the length of the boundary segment, for example:
preset distance = boundary segment length × 0.2
If the width of the non-skin-color region is greater than the width of the skin-color region, the interference of clothing needs to be eliminated, and the following step is performed: generating a reference curve inward from the edge of the non-skin-color region at a distance S from that edge, wherein
S = width of the non-skin-color region − width of the skin-color region.
The generated reference curve can be used for display and can also be input into the second machine learning model as part of the second material, improving the ability of the second machine learning model to cope with the clothing worn by the person.
In other embodiments, referring to fig. 2, a device for measuring parameters of a human body is also provided. It is a handheld device with a shooting function carrying an intelligent system, comprising a prompt module 200, a picture detection module 202 and a processing unit 204. The prompt module prompts the user to take a front photo, and the photo taken by the user is acquired; the picture detection module detects whether the user's photo is a front picture, and if not, the prompt module prompts the user to take the photo again. The prompt module then prompts the user to take a side photo, the photo taken by the user is acquired, the picture detection module detects whether the user's photo is a side picture, and if not, the prompt module prompts the user to take the photo again.
The processing unit is used to input the photos into the first machine learning model to obtain a first processing result and to input the first processing result into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. The first machine learning model is trained by the following steps: acquiring a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; inputting the human body image in the first material into the first machine learning model, and taking the segmentation grid of the human body in the first material as the output of the first machine learning model. The second machine learning model is trained by: acquiring a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs; inputting the second image into the first machine learning model for processing to obtain the output result of the first machine learning model for the second image; adding the height information of the human body to which the second image belongs to that output result as the input information of the second machine learning model, and taking the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model. The machine learning models can be integrated in the device or deployed in the cloud; the processing unit only needs to submit the data to the machine learning models for processing.
In our technical scheme, the method for detecting whether the user's photo is a front picture specifically includes:
using the Face++ face detection and face key point interfaces, detecting whether the absolute value of the yaw_angle field under the headpose field of the front photo is smaller than 20; the method for detecting whether the user's photo is a side picture specifically includes: using the Face++ face detection and face key point interfaces, detecting whether the absolute value of the yaw_angle field under the headpose field of the side photo is greater than 150. The way the Face++ face detection interface is used can be chosen freely, and the front/side judgement can also follow a confidence requirement, so that the threshold on the absolute value of the yaw_angle field for a front face may be adjusted appropriately, for example to less than 30 or less than 15, and the threshold for a side face to greater than 160 or greater than 140, and so on. With this scheme, the service from obtaining the photos through user interaction to obtaining the picture-processing result is realized and the user experience is improved.
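A minimal sketch of the threshold check, assuming a Face++-style detection result in which the head pose is reported as a yaw angle in degrees; parsing the API response is not shown and the exact field path is an assumption.

```python
def is_front_picture(yaw_angle_deg, threshold=20.0):
    # Front view if the absolute yaw is below the threshold (20 in the embodiment;
    # 15 or 30 are mentioned as acceptable adjustments).
    return abs(yaw_angle_deg) < threshold

def is_side_picture(yaw_angle_deg, threshold=150.0):
    # Side view if the absolute yaw exceeds the threshold (150 in the embodiment;
    # 140 or 160 are mentioned as acceptable adjustments).
    return abs(yaw_angle_deg) > threshold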
In other embodiments, as shown in fig. 3, an image capturing device for measuring parameters of a human body is introduced. It comprises two orthogonal cameras 300, a voice prompt module 302 and a processing unit 304, the intersection area of the two cameras being the user photographing area. The processing unit 304 detects whether a human body is present in the user photographing area; the voice prompt module prompts the user to face one of the cameras; the two cameras then each acquire an image of the user's body; the processing unit analyses the acquired images, sets the camera image in which a face is recognised with high confidence as the front picture and the other image as the side picture, and then performs the following steps: detecting whether the front picture really is a front picture and, if not, prompting the user to be photographed again; and detecting whether the side picture really is a side picture and, if not, prompting the user to be photographed again. The main difference between this image capturing device and the intelligent handheld device of the previous example is that two orthogonal groups of cameras are required, the presence of a human body can be recognised automatically, and the camera facing the person can be identified automatically, so that front and side pictures are labelled for the subsequent processing.
The subsequent processing also includes: the processing unit inputs the pictures into the first machine learning model to obtain a first processing result and inputs the first processing result into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A. The first machine learning model is trained by the following steps: acquiring a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; inputting the human body image in the first material into the first machine learning model, and taking the segmentation grid of the human body in the first material as the output of the first machine learning model. The second machine learning model is trained by: acquiring a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs; inputting the second image into the first machine learning model for processing to obtain the output result of the first machine learning model for the second image; adding the height information of the human body to which the second image belongs to that output result as the input information of the second machine learning model, and taking the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model. With this scheme, the user can likewise obtain, interactively, the pictures and then the picture-processing result as a service, improving the user experience.
In a specific embodiment, the method for detecting whether the user's photo is a front picture includes:
using the Face++ face detection and face key point interfaces, detecting whether the absolute value of the yaw_angle field under the headpose field of the front photo is less than 20;
and the method for detecting whether the user's photo is a side picture specifically includes:
using the Face++ face detection and face key point interfaces, detecting whether the absolute value of the yaw_angle field under the headpose field of the side photo is greater than 150.
It should be noted that, although the foregoing embodiments have been described herein, the scope of the present invention is not limited thereby. Therefore, based on the innovative concepts of the present invention, alterations and modifications to the embodiments described herein, or equivalent structures or equivalent process transformations made using the contents of the present description and drawings, whether applied directly or indirectly to other relevant technical fields, are all included within the scope of protection of the invention.

Claims (10)

1. A human body measurement method, characterized by comprising the following steps: obtaining a first material, wherein the first material comprises a human body image and a segmentation grid of the human body; setting a first machine learning model, wherein the input of the first machine learning model is the human body image in the first material and the output is a predicted value of the segmentation grid of the human body in the first material, and training the first machine learning model;
acquiring a second material, wherein the second material comprises a second image consisting of a front standing image and a side standing image of the same person; the second material also comprises the height and other body parameters of the human body to which the second image belongs; inputting the second image into the first machine learning model for processing to obtain an output result of the first machine learning model for the second image; adding the height information of the human body to which the second image belongs to the output result and taking the combination as the input information of a second machine learning model, taking predicted values of the other body parameters of the human body to which the second image belongs as the output information of the second machine learning model, and training the second machine learning model;
and, in the measuring stage, acquiring front and side standing images of subject A, inputting the front and side standing images of subject A into the first machine learning model to obtain a first processing result, and inputting the first processing result together with the height data of subject A into the second machine learning model to obtain an output result of the second machine learning model for the human body parameters of subject A.
2. The human body measurement method according to claim 1, wherein the first material further comprises a number of control points for each human body part in the segmentation grid, and the output of the first machine learning model comprises a number of control points for each human body part in the segmentation grid.
3. The human body measurement method according to claim 1, wherein the first machine learning model is trained with a first material library, the first material library comprising more than 200 sets of first material.
4. The human body measurement method according to claim 1, wherein the second machine learning model is trained with a second material library, the second material library comprising more than 50 sets of second material.
5. The human body measurement method according to claim 1, wherein the first machine learning model is a deep neural network.
6. The human body measurement method according to claim 1, wherein the second machine learning model is a LightGBM.
7. The human body measurement method according to claim 1, wherein, after performing the step of adding the height information of the human body to which the second image belongs to the output result, an expansion processing is performed on the height value, specifically comprising:
averaging the output results of the first machine learning model for the second images;
setting a Gaussian mixture model G with 2 sub-distributions,
training G with the height parameters of each human body in the second material,
calculating the height parameter of each human body in the second material through G to obtain two sub-distribution probabilities P and Q,
expanding the height-value dimension in the input information of the second machine learning model into two dimensions, assigned the values P×K and Q×K respectively, without retaining the original height-value dimension, and
training the second machine learning model;
wherein, in the measuring stage, the height data of subject A appended to the first processing result is calculated through G to obtain two sub-probability values, which are multiplied by K and input, together with the first processing result, into the second machine learning model to obtain the output result of the second machine learning model for the human body parameters of subject A.
8. The human body measurement method according to claim 7, wherein, after performing the step of adding the height information of the human body to which the second image belongs to the output result, the height value is expanded as follows:
setting the height of the human body as H, calculating P and Q, and comparing their magnitudes; if P is greater than Q, adding the following values to the input information of the second machine learning model: H×0.515, H×0.345, H×0.542; if P is not greater than Q, adding the following values to the input information of the second machine learning model: H×0.612, H×0.421, H×0.643;
meanwhile, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions, and the original height-value dimension is retained.
9. The human body measurement method according to claim 7, wherein, after performing the step of adding the height information of the human body to which the second image belongs to the output result, the height value is expanded as follows:
adding to the input information of the second machine learning model the following values:
(0.515*P+0.612*Q)/(P+Q),
(0.345*P+0.6421*Q)/(P+Q),
(0.542*P+0.643*Q)/(P+Q),
meanwhile, the height-value dimension in the input information of the second machine learning model is not expanded into two dimensions, and the original height-value dimension is not retained.
10. The human body measurement method according to claim 1, further comprising the steps of: performing skin-color detection on the segmentation grid of the human body; performing straight-line regression on the boundary between the skin-color region and the non-skin-color region in the segmentation grid of the human body to obtain a boundary line; obtaining the width of the skin-color region at a preset distance on the skin-color-region side of the boundary line; obtaining the width of the non-skin-color region at a preset distance on the non-skin-color-region side of the boundary line; and, if the width of the non-skin-color region is greater than the width of the skin-color region, performing the step of generating a reference curve inward from the edge of the non-skin-color region at a distance S from that edge, wherein
S = width of the non-skin-color region − width of the skin-color region.
CN202110123228.XA 2021-01-29 2021-01-29 Human body measuring method Active CN112819881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110123228.XA CN112819881B (en) 2021-01-29 2021-01-29 Human body measuring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110123228.XA CN112819881B (en) 2021-01-29 2021-01-29 Human body measuring method

Publications (2)

Publication Number Publication Date
CN112819881A (en) 2021-05-18
CN112819881B (en) 2023-10-31

Family

ID=75860018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110123228.XA Active CN112819881B (en) 2021-01-29 2021-01-29 Human body measuring method

Country Status (1)

Country Link
CN (1) CN112819881B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104434113A (en) * 2014-12-01 2015-03-25 江西洪都航空工业集团有限责任公司 Stature measuring method
CN109938737A (en) * 2019-03-01 2019-06-28 苏州博慧智能科技有限公司 A kind of human body body type measurement method and device based on deep learning critical point detection
CN110074788A (en) * 2019-04-18 2019-08-02 梦多科技有限公司 A kind of body data acquisition methods and device based on machine learning
CN110135443A (en) * 2019-05-28 2019-08-16 北京智形天下科技有限责任公司 A kind of human body three-dimensional size prediction method based on machine learning


Also Published As

Publication number Publication date
CN112819881A (en) 2021-05-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant