CN113112321A - Intelligent body measurement method and apparatus, electronic device, and storage medium - Google Patents

Intelligent body measurement method and apparatus, electronic device, and storage medium

Info

Publication number
CN113112321A
Authority
CN
China
Prior art keywords
user
information
detected
numerical characteristic
parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110262272.9A
Other languages
Chinese (zh)
Inventor
陈海波
权甲
李珂
赵昕
潘志锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Deep Blue Technology Shanghai Co Ltd
Original Assignee
Deep Blue Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Deep Blue Technology Shanghai Co Ltd filed Critical Deep Blue Technology Shanghai Co Ltd
Priority to CN202110262272.9A
Publication of CN113112321A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G06Q30/0621 - Item configuration or customization
    • A - HUMAN NECESSITIES
    • A41 - WEARING APPAREL
    • A41H - APPLIANCES OR METHODS FOR MAKING CLOTHES, e.g. FOR DRESS-MAKING OR FOR TAILORING, NOT OTHERWISE PROVIDED FOR
    • A41H1/00 - Measuring aids or methods
    • A41H1/02 - Devices for taking measurements on the human body
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Textile Engineering (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an intelligent body measurement method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: acquiring image information, numerical characteristic information, and body measurement label information of a plurality of sample users, wherein the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter; training a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model; acquiring visual detection information and numerical characteristic information of a user to be measured; and predicting, with the intelligent body measurement model and according to the visual detection information and numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured. The method is simple and convenient to operate, lets a user take body measurements independently anytime and anywhere, yields accurate measurements with low computational cost, measures quickly, is inexpensive to use, and can be popularized on a large scale.

Description

Intelligent body measurement method and apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular to an intelligent body measurement method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With consumption upgrading, consumers expect more from the fit of their clothes, and demand for custom-made garments is growing. Tailoring a garment for a customer requires accurate size information for multiple parts of the customer's body, and collecting that information traditionally requires a professional measurer to measure each body part one by one, which is costly.
The prior art provides 3D body-scanning measurement machines that can take a customer's measurements, but such equipment is bulky, expensive, and inconvenient to use: it must be kept in a fixed place such as a physical store, and the customer must visit the store to be measured, which hinders large-scale adoption.
Disclosure of Invention
The application aims to provide an intelligent body measurement method and apparatus, an electronic device, and a computer-readable storage medium that are simple and convenient to operate, let a user take body measurements at home, yield accurate measurements with low computational cost, measure quickly, and are inexpensive to use, so that they can be popularized on a large scale.
The purpose of the application is achieved by the following technical solutions:
In a first aspect, the present application provides an intelligent body measurement method, the method comprising: acquiring image information, numerical characteristic information, and body measurement label information of a plurality of sample users, wherein the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identifier and an actual parameter value of at least one numerical characteristic parameter, and the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter; training a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model; acquiring visual detection information and numerical characteristic information of a user to be measured, wherein the visual detection information is obtained by photographing the whole body of the user to be measured with the camera of a user device; and predicting, with the intelligent body measurement model and according to the visual detection information and numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured. This technical solution is advantageous in that a deep learning model can be trained on the image information, numerical characteristic information, and body measurement label information of a plurality of sample users to obtain an intelligent body measurement model, which then predicts the parameter value of at least one body measurement parameter of the user to be measured from that user's visual detection information and numerical characteristic information; only those two inputs need to be supplied to the model to obtain the predictions. Compared with measurement by bulky equipment such as a 3D body scanner, the method is simple and convenient to operate, lets the user take measurements independently anytime and anywhere, yields accurate measurements with low computational cost, measures quickly, is inexpensive to use, and can be popularized on a large scale.
In some optional embodiments, the deep learning model comprises an input layer, a convolutional layer, a fully-connected layer, and an output layer, and training the deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain the intelligent body measurement model comprises: for each of the plurality of sample users, inputting the sample user's image information and numerical characteristic information into the input layer; passing the sample user's image information from the input layer to the convolutional layer to obtain a convolution result, and passing the convolution result from the convolutional layer to the fully-connected layer; passing the sample user's numerical characteristic information from the input layer to the fully-connected layer; predicting, by the fully-connected layer and based on the convolution result and the sample user's numerical characteristic information, a predicted parameter value of at least one body measurement parameter of the sample user; comparing the predicted parameter value with the actual parameter value of each body measurement parameter of the sample user to obtain a comparison result for that sample user; and training the deep learning model on the comparison results of the plurality of sample users to obtain the intelligent body measurement model. This technical solution is advantageous in that, on the one hand, the image information and numerical characteristic information of a sample user enter through the input layer, the images are convolved and the convolution result is fed to the fully-connected layer, while the numerical characteristic information, being one-dimensional numerical data that needs no convolution, is fed directly from the input layer to the fully-connected layer, so the deep learning model can accept inputs of different types; on the other hand, the fully-connected layer predicts the body measurement parameters from the convolution result and the numerical characteristic information, the predicted and actual parameter values are compared, and the model is trained on the comparison results to obtain the intelligent body measurement model. In this way the deep learning model is trained jointly on the image information, numerical characteristic information, and body measurement label information of many sample users, and the more sample data there is, the more accurate the resulting model's predictions.
In some optional embodiments, the loss function of the deep learning model is an L2 loss function, and the weights of at least two body measurement parameters in the loss function differ. This is advantageous because, on the one hand, the L2 loss is sensitive to errors and can be used to measure them, it is efficient to compute, and it can help prevent overfitting of the model; on the other hand, size errors in different body parts affect garment making to different degrees, so the weights of the body measurement parameters in the loss function need not all be equal, and assigning different weights to different parameters improves the accuracy of the predictions.
In some optional embodiments, the method further comprises: in response to a shooting request sent by the user device, displaying reference contour information on the display screen of the user device so that the user to be measured can adjust the shooting posture according to the reference contour information. This is advantageous because the user can adjust the shooting posture against the displayed reference contour and capture an image in which the posture meets the requirements; without the reference contour as a guide, the captured human body contour image might be incomplete or of low quality.
In some optional embodiments, predicting the predicted parameter value of at least one body measurement parameter of the user to be measured with the intelligent body measurement model, according to the user's visual detection information and numerical characteristic information, comprises: acquiring a human body contour image of at least one posture of the user based on the user's visual detection information; detecting, for each posture, whether the human body contour image is parallel to the shooting plane, and adjusting any contour image that is not; and inputting the user's numerical characteristic information and the adjusted contour images of the at least one posture into the intelligent body measurement model to predict the parameter values. This is advantageous because if a contour image is not parallel to the shooting plane, the contour in the image deviates from the actual body contour; adjusting such images to be parallel to the shooting plane before prediction makes the predicted parameter values accurate.
In some optional embodiments, the prediction comprises: acquiring a human body contour image of one posture of the user based on the user's visual detection information; predicting, from that single-posture image, the human body contour images of the user's other postures; and inputting the contour images of at least one posture together with the numerical characteristic information into the intelligent body measurement model to predict the parameter values. This is advantageous because the contour images of the other postures can be predicted from a single posture, and the prediction made from the resulting contour images of at least one posture is accurate.
In some optional embodiments, the prediction comprises: estimating the user's clothing thickness information from the visual detection information and acquiring a human body contour image of at least one posture of the user; adjusting each contour image according to the clothing thickness information; and inputting the numerical characteristic information and the adjusted contour images into the intelligent body measurement model to predict the parameter values. This is advantageous because the thickness of the clothes worn by the user strongly affects the measurement result; adjusting the contour images for clothing thickness brings them closer to the user's actual body contour, so the predicted parameter values are accurate.
In some optional embodiments, the numerical characteristic parameter comprises at least one of: gender, age, body type, weight, and height; and the body measurement parameter comprises at least one of: height, head circumference, shoulder shape, arm span, waist circumference, and leg length. This is advantageous because the numerical characteristic parameters are personal information a user can easily supply and their values are stable, while the body measurement parameters are quantities that are easy to measure directly and whose values are likewise stable.
In a second aspect, the present application provides an intelligent body measurement apparatus, the apparatus comprising: a sample acquisition module for acquiring image information, numerical characteristic information, and body measurement label information of a plurality of sample users, wherein the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identifier and an actual parameter value of at least one numerical characteristic parameter, and the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter; a model training module for training a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model; an acquisition module for acquiring visual detection information and numerical characteristic information of a user to be measured, wherein the visual detection information is obtained by photographing the whole body of the user with the camera of a user device; and an information prediction module for predicting, with the intelligent body measurement model and according to the user's visual detection information and numerical characteristic information, a predicted parameter value of at least one body measurement parameter of the user.
In some optional embodiments, the deep learning model comprises an input layer, a convolutional layer, a fully-connected layer, and an output layer, and the model training module comprises: an input unit for inputting, for each of the plurality of sample users, the sample user's image information and numerical characteristic information into the input layer; an image convolution unit for passing the sample user's image information from the input layer to the convolutional layer to obtain a convolution result and passing the convolution result from the convolutional layer to the fully-connected layer; a numerical feature unit for passing the sample user's numerical characteristic information from the input layer to the fully-connected layer; a first prediction unit for predicting, by the fully-connected layer and based on the convolution result and the sample user's numerical characteristic information, a predicted parameter value of at least one body measurement parameter of the sample user; a parameter comparison unit for comparing the predicted parameter value with the actual parameter value of each body measurement parameter of the sample user to obtain a comparison result for the sample user; and a model acquisition unit for training the deep learning model on the comparison results of the plurality of sample users to obtain the intelligent body measurement model.
In some optional embodiments, the loss function of the deep learning model is an L2 loss function, and the weights of at least two body measurement parameters in the loss function differ.
In some optional embodiments, the apparatus further comprises: a contour display module for displaying, in response to a shooting request sent by the user device, reference contour information on the display screen of the user device so that the user to be measured can adjust the shooting posture according to the reference contour information.
In some optional embodiments, the information prediction module comprises: a first acquisition unit for acquiring a human body contour image of at least one posture of the user to be measured based on the user's visual detection information; a first adjustment unit for detecting, for each posture, whether the user's human body contour image is parallel to the shooting plane and adjusting any contour image that is not; and a second prediction unit for inputting the user's numerical characteristic information and the adjusted contour images of the at least one posture into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
In some optional embodiments, the information prediction module comprises: a second acquisition unit for acquiring a human body contour image of one posture of the user to be measured based on the user's visual detection information; a third prediction unit for predicting the human body contour images of the user's other postures from the contour image of the one posture; and a fourth prediction unit for inputting the contour images of at least one posture of the user and the numerical characteristic information into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
In some optional embodiments, the information prediction module comprises: a third acquisition unit for estimating the clothing thickness information of the user to be measured from the user's visual detection information and acquiring a human body contour image of at least one posture of the user; a second adjustment unit for adjusting each of the contour images based on the user's clothing thickness information; and a fifth prediction unit for inputting the user's numerical characteristic information and the adjusted contour images of the at least one posture into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
In some optional embodiments, the numerical characteristic parameter comprises at least one of: gender, age, body type, weight, and height; and the body measurement parameter comprises at least one of: height, head circumference, shoulder shape, arm span, waist circumference, and leg length.
In a third aspect, the present application provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of any of the above methods when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the methods described above.
Drawings
The present application is further described below with reference to the drawings and examples.
Fig. 1 is a schematic flow chart of an intelligent body measurement method provided by an embodiment of the present application;
FIG. 2 is a schematic flow chart of obtaining an intelligent body measurement model according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of a deep learning model provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of obtaining a predicted parameter value according to an embodiment of the present application;
FIG. 5 is a schematic diagram of another process for obtaining a predicted parameter value according to an embodiment of the present application;
FIG. 6 is a schematic diagram of yet another process for obtaining a predicted parameter value according to an embodiment of the present application;
FIG. 7 is a schematic flow chart of another intelligent body measurement method provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of an intelligent body measurement apparatus provided in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a model training module according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an information prediction module according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of another information prediction module according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of yet another information prediction module provided in an embodiment of the present application;
FIG. 13 is a schematic structural diagram of another intelligent body measurement apparatus provided in an embodiment of the present application;
Fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 15 is a schematic structural diagram of a program product for implementing an intelligent body measurement method according to an embodiment of the present application.
Detailed Description
The present application is further described with reference to the accompanying drawings and the detailed description below. It should be noted that the embodiments and technical features described below may be combined arbitrarily to form new embodiments, provided they do not conflict.
Referring to fig. 1, an embodiment of the present application provides an intelligent body measurement method, which may include steps S101 to S104.
Step S101: acquire image information, numerical characteristic information, and body measurement label information of a plurality of sample users, where the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identifier and an actual parameter value of at least one numerical characteristic parameter, and the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter. The number of sample users is not limited in the present application; there may be, for example, 30, 50, 100, 1000, or 10000 sample users.
In one embodiment, the numerical characteristic parameter may include at least one of: gender, age, body type, weight, and height; the body measurement parameter may include at least one of: height, head circumference, shoulder shape, arm span, waist circumference, and leg length.
The numerical characteristic parameters are thus personal information a user can easily supply, with stable values, while the body measurement parameters are quantities that are easy to measure directly and whose values are likewise stable.
In one embodiment, when the numerical characteristic parameters include height, the height value entered by the user can be used directly as the predicted height, so the model need not predict the user's height.
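For illustration only, the training data described in step S101 might be organized as in the following minimal Python sketch; all field names are assumptions made for the sketch rather than identifiers disclosed by the application.

    from dataclasses import dataclass
    from typing import Dict, List

    import numpy as np


    @dataclass
    class BodyMeasurementSample:
        """One training record as described in step S101 (field names illustrative)."""
        # Human body contour images, one per posture (e.g., front, side), as H x W arrays.
        contour_images: List[np.ndarray]
        # Numerical characteristic parameters keyed by identifier, e.g.
        # {"gender": 1.0, "age": 32.0, "weight": 70.0, "height": 175.0}.
        numerical_features: Dict[str, float]
        # Body measurement labels keyed by identifier, e.g.
        # {"waist_circumference": 82.0, "arm_span": 176.0}, measured by a professional.
        measurement_labels: Dict[str, float]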
Step S102: train a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model.
Referring to figs. 2-3, in one embodiment, the deep learning model may include an input layer 10, a convolutional layer 20, a fully-connected layer 30, and an output layer 40; step S102 may then include steps S201 to S206.
Step S201: for each of the plurality of sample users, input the sample user's image information and numerical characteristic information into the input layer 10.
Step S202: pass the sample user's image information from the input layer 10 to the convolutional layer 20 to obtain a convolution result, and pass the convolution result from the convolutional layer 20 to the fully-connected layer 30.
Step S203: pass the sample user's numerical characteristic information from the input layer 10 to the fully-connected layer 30.
Step S204: based on the convolution result and the sample user's numerical characteristic information, predict a parameter value of at least one body measurement parameter of the sample user via the fully-connected layer 30.
Step S205: compare the predicted parameter value with the actual parameter value of each body measurement parameter of the sample user to obtain a comparison result for the sample user.
Step S206: train the deep learning model on the comparison results of the plurality of sample users to obtain the intelligent body measurement model.
In this way, on the one hand, the image information and numerical characteristic information of a sample user enter through the input layer 10; the images are passed to the convolutional layer 20, and the convolution result is fed to the fully-connected layer 30, while the numerical characteristic information, being one-dimensional numerical data that needs no convolution, is fed directly from the input layer 10 to the fully-connected layer 30, so the deep learning model can accept inputs of different types. On the other hand, the fully-connected layer 30 predicts a parameter value of at least one body measurement parameter of the sample user from the convolution result and the numerical characteristic information; the predicted and actual parameter values of each body measurement parameter are compared, and the model is trained on the comparison results to obtain the intelligent body measurement model. The more sample data there is, the more accurate the resulting model's predictions.
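For illustration only, the two-branch architecture of steps S201 to S204 could be sketched in PyTorch as follows; the channel counts, layer sizes, and three stacked posture images are assumptions made for the sketch, not values disclosed by the application.

    import torch
    import torch.nn as nn


    class BodyMeasurementNet(nn.Module):
        """Images pass through convolutional layers; numerical features skip
        convolution and join the convolution result at the fully-connected layers."""

        def __init__(self, num_numerical: int, num_measurements: int):
            super().__init__()
            # Convolutional branch for the silhouette images
            # (three postures stacked as three input channels; sizes illustrative).
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)),
                nn.Flatten(),  # -> 32 * 4 * 4 = 512 features
            )
            # Fully-connected layers receive the convolution result concatenated
            # with the one-dimensional numerical characteristic information.
            self.fc = nn.Sequential(
                nn.Linear(512 + num_numerical, 128), nn.ReLU(),
                nn.Linear(128, num_measurements),  # one output per measurement parameter
            )

        def forward(self, images: torch.Tensor, numeric: torch.Tensor) -> torch.Tensor:
            conv_result = self.conv(images)
            return self.fc(torch.cat([conv_result, numeric], dim=1))


    # e.g. 5 numerical features and 6 body measurement parameters:
    model = BodyMeasurementNet(num_numerical=5, num_measurements=6)
    predictions = model(torch.randn(1, 3, 256, 256), torch.randn(1, 5))

Concatenating the two branches before the fully-connected layers is what allows the model to accept inputs of different types, as described above.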
In a specific embodiment, the loss function of the deep learning model may be an L2 loss function, and the weights of at least two body measurement parameters in the loss function may differ.
Thus, on the one hand, the L2 loss is sensitive to errors and can be used to measure them, it is efficient to compute, and it can help prevent overfitting of the model; on the other hand, size errors in different body parts affect garment making to different degrees, so the weights of the body measurement parameters in the loss function need not all be equal, and assigning different weights to different parameters improves the accuracy of the prediction.
In one embodiment, different weights may be set for different body measurement parameters based on the experience and recommendations of professionals such as garment designers. For example, height affects a garment less than waist circumference does, so height may be given a lower weight and waist circumference a higher one.
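For illustration only, such a weighted L2 loss could be written as follows; the weight values merely echo the height/waist example above and are assumptions, not values disclosed by the application.

    import torch


    def weighted_l2_loss(pred: torch.Tensor, target: torch.Tensor,
                         weights: torch.Tensor) -> torch.Tensor:
        """L2 loss in which each body measurement parameter carries its own weight."""
        return (weights * (pred - target) ** 2).sum(dim=1).mean()


    # Illustrative weights for (height, head circumference, shoulder shape,
    # arm span, waist circumference, leg length): waist circumference affects
    # garment making more than height, so it is weighted more heavily.
    weights = torch.tensor([0.5, 1.0, 1.0, 1.0, 2.0, 1.0])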
Step S103: acquire visual detection information and numerical characteristic information of the user to be measured, where the visual detection information is obtained by photographing the user's whole body with the camera of a user device. The user device may be any electronic device with a camera, such as a mobile phone, computer, tablet, or smart wearable device; the camera may be, for example, an optical camera and/or an infrared camera.
Step S104: predict, with the intelligent body measurement model and according to the visual detection information and numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user.
In this way, a deep learning model can be trained on the image information, numerical characteristic information, and body measurement label information of a plurality of sample users to obtain an intelligent body measurement model, which then predicts the parameter values of the user's body measurement parameters from the user's visual detection information and numerical characteristic information; only those two inputs are needed to obtain the predictions. Compared with measurement by bulky equipment such as a 3D body scanner, the method is simple and convenient to operate, lets the user take measurements independently anytime and anywhere, yields accurate measurements with low computational cost, measures quickly, is inexpensive to use, and can be popularized on a large scale.
Referring to fig. 4, in a specific embodiment, step S104 may include steps S301 to S303.
Step S301: acquire a human body contour image of at least one posture of the user to be measured based on the user's visual detection information. The postures may include the user's front, back, top, side, and bottom views, where a side view may form any angle with the front.
Step S302: detect, for each posture, whether the user's human body contour image is parallel to the shooting plane, and adjust any contour image that is not.
Step S303: input the user's numerical characteristic information and the adjusted human body contour images of the at least one posture into the intelligent body measurement model, and predict a parameter value of at least one body measurement parameter of the user.
For example, the contour images of the at least one posture may include front whole-body images (such as one with the arms extended level with the shoulders and one with the arms resting against the sides of the body) and a side whole-body image.
In this way, whether the contour image of each posture is parallel to the shooting plane can be detected. If an image is not parallel to the shooting plane, the contour in it deviates from the actual body contour; such an image can be adjusted into one that is parallel to the shooting plane, and the prediction made from the adjusted contour images of the at least one posture is accurate.
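For illustration only, one possible adjustment of a contour image that is not parallel to the shooting plane is a perspective warp, sketched below with OpenCV; the application does not specify how the adjustment is performed, so the use of four body landmarks here is an assumption.

    import cv2
    import numpy as np


    def rectify_to_shooting_plane(image: np.ndarray,
                                  body_pts: np.ndarray,
                                  target_pts: np.ndarray) -> np.ndarray:
        """Warp a contour image so the body plane becomes parallel to the image plane.

        body_pts:   4 x 2 float array of landmarks located in the tilted image
                    (e.g., left/right shoulder and left/right ankle).
        target_pts: 4 x 2 float array of where those landmarks should sit when
                    the body plane is parallel to the shooting plane.
        """
        h, w = image.shape[:2]
        homography = cv2.getPerspectiveTransform(
            body_pts.astype(np.float32), target_pts.astype(np.float32))
        return cv2.warpPerspective(image, homography, (w, h))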
Referring to fig. 5, in a specific embodiment, step S104 may include steps S304 to S306.
Step S304: acquire a human body contour image of one posture of the user to be measured based on the user's visual detection information.
Step S305: predict the human body contour images of the user's other postures from the contour image of the one posture.
In a specific embodiment, step S305 may include: predicting, with a PIFuHD network and based on the single-posture contour image of the user, a 3D model corresponding to the contour image, and predicting the contour images of the user's other postures from the 3D model.
In this way, a 3D model can be predicted from a single-posture contour image with the PIFuHD network, and the contour images of the other postures can be derived from the 3D model; the whole process is highly intelligent and automated, and the generated 3D model is precise.
Step S306: input the human body contour images of at least one posture of the user and the numerical characteristic information into the intelligent body measurement model, and predict a parameter value of at least one body measurement parameter of the user.
Thus the contour images of the user's other postures can be predicted from the contour image of a single posture, and the prediction made from the contour images of at least one posture of the user is accurate.
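For illustration only, once a 3D model of the user has been reconstructed (for example by a PIFuHD network, as described above), contour images of other postures can be derived by projecting the mesh from new viewpoints; the orthographic projection and scaling below are assumptions made for the sketch.

    import cv2
    import numpy as np


    def silhouette_from_view(vertices: np.ndarray, faces: np.ndarray,
                             yaw_deg: float, size: int = 256) -> np.ndarray:
        """Render a binary silhouette of a body mesh from a new viewpoint
        (orthographic projection after rotation about the vertical axis).

        vertices: V x 3 array, roughly centred and scaled to [-1, 1].
        faces:    F x 3 integer array of triangle indices.
        """
        theta = np.deg2rad(yaw_deg)
        rot = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                        [0.0, 1.0, 0.0],
                        [-np.sin(theta), 0.0, np.cos(theta)]])
        v = vertices @ rot.T
        # Orthographic projection onto the x-y plane, mapped to pixel coordinates.
        xy = ((v[:, :2] * 0.45 + 0.5) * size).astype(np.int32)
        xy[:, 1] = size - 1 - xy[:, 1]  # flip y so the head is at the top
        mask = np.zeros((size, size), dtype=np.uint8)
        for tri in faces:
            cv2.fillConvexPoly(mask, xy[tri], 255)
        return mask


    # e.g. a side-view contour derived from the reconstructed front-view mesh:
    # side_silhouette = silhouette_from_view(verts, faces, yaw_deg=90.0)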
Referring to fig. 6, in a specific embodiment, step S104 may include steps S307 to S309.
Step S307: estimate the clothing thickness information of the user to be measured from the user's visual detection information, and acquire a human body contour image of at least one posture of the user. The clothing thickness information may include upper-garment thickness information and/or lower-garment thickness information.
The visual detection information may be image information or video information. When it is image information, the user can be prompted to supply a human body contour image of at least one posture; when it is video information, each frame can be checked for images of different postures of the user, two postures being judged different when the change between them meets a preset condition, for example, when the rotation angle between them exceeds 30 degrees.
Step S307 may include: acquiring visual detection information and clothing thickness label information of a plurality of sample subjects, where the clothing thickness label information indicates the clothing thickness; training a deep learning model on the visual detection information and clothing thickness label information of the plurality of sample subjects to obtain a clothing thickness classification model; and inputting the visual detection information of the user to be measured into the clothing thickness classification model to obtain the user's clothing thickness information.
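For illustration only, the clothing thickness classification model might take a form such as the following; the three thickness classes and layer sizes are assumptions made for the sketch.

    import torch.nn as nn

    # A small convolutional classifier mapping an image of the user to a
    # garment-thickness class (e.g., thin / medium / thick; classes assumed).
    thickness_classifier = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 3),
    )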
Step S308: adjust the human body contour image of each of the at least one posture of the user based on the user's clothing thickness information.
Step S309: input the user's numerical characteristic information and the adjusted human body contour images of the at least one posture into the intelligent body measurement model, and predict a parameter value of at least one body measurement parameter of the user.
In this way, the user's clothing thickness can be estimated from the visual detection information. Because the thickness of the clothes worn by the user strongly affects the measurement result, each contour image can be adjusted according to the clothing thickness so that it comes closer to the user's actual body contour, and the prediction made from the adjusted contour images is accurate.
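For illustration only, one way to adjust a contour image for clothing thickness is to shrink the silhouette inward by an amount derived from the estimated thickness class; the mapping from class to pixel radius below is an assumption, not a value disclosed by the application.

    import cv2
    import numpy as np

    # Illustrative mapping from garment-thickness class to an erosion radius in pixels.
    THICKNESS_TO_PIXELS = {"thin": 1, "medium": 3, "thick": 6}


    def strip_clothing_margin(silhouette: np.ndarray, thickness_class: str) -> np.ndarray:
        """Shrink a binary silhouette (uint8 mask) inward so it better matches
        the actual body contour under the estimated garment thickness."""
        radius = THICKNESS_TO_PIXELS[thickness_class]
        kernel = cv2.getStructuringElement(
            cv2.MORPH_ELLIPSE, (2 * radius + 1, 2 * radius + 1))
        return cv2.erode(silhouette, kernel)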
Referring to fig. 7, in a specific embodiment, the method may further include step S105.
Step S105: in response to a shooting request sent by the user device, display reference contour information on the display screen of the user device so that the user to be measured can adjust the shooting posture according to the reference contour information. The user device is, for example, a mobile phone, and the reference contour information is, for example, a human-shaped frame; the user can adjust the distance to the phone and the phone's placement angle so that the captured body contour falls inside the frame, yielding a complete human body contour image.
In one practical application, step S105 may further include: displaying text prompts on the display screen of the user device so that the user to be measured adjusts the shooting posture according to them. The text prompts may be, for example, "raise your arms", "stand straight with chest out and head up", or "lower your arms", guiding the user into the various postures.
In another practical application, step S105 may further include: having the user device play voice prompts so that the user to be measured adjusts the shooting posture according to them. The voice prompts may be, for example, "raise your arms", "stand straight with chest out and head up", or "lower your arms", guiding the user into the various postures.
In this way, reference contour information can be displayed on the display screen of the user device; the user adjusts the shooting posture against it and captures images whose postures meet the requirements. Without the reference contour as a guide, the captured human body contour image might be incomplete or of low quality.
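For illustration only, the reference contour could be overlaid on the camera preview as in the following OpenCV sketch; the color and blending weights are assumptions made for the sketch.

    import cv2
    import numpy as np


    def draw_reference_contour(frame: np.ndarray, contour_mask: np.ndarray) -> np.ndarray:
        """Overlay a semi-transparent reference outline (the human-shaped frame)
        on a camera preview frame so the user can line up distance and posture.

        frame:        H x W x 3 BGR preview image.
        contour_mask: H x W uint8 mask of the reference human shape.
        """
        overlay = frame.copy()
        edges = cv2.Canny(contour_mask, 50, 150)
        overlay[edges > 0] = (0, 255, 0)  # green guide outline
        return cv2.addWeighted(overlay, 0.7, frame, 0.3, 0.0)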
In a specific application, an embodiment of the present application provides an intelligent body measurement method, the method comprising:
acquiring image information, numerical characteristic information, and body measurement label information of a plurality of sample users, where the image information comprises two front human body contour images and one side human body contour image: in one front image the arms are extended parallel to the shoulders, in the other front image the arms rest against the sides of the body, and in the side image the arms are extended perpendicular to the shoulders; the numerical characteristic information comprises height, weight, age, gender, and the like; and the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter, the actual parameter values being measurable by a professional measurer;
training a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model;
in response to a shooting request sent by the user device, displaying reference contour information on the display screen of the user device so that the user to be measured can adjust the shooting posture according to the reference contour information;
acquiring visual detection information and numerical characteristic information of the user to be measured, where the visual detection information is obtained by photographing the user's whole body with the camera of the user device;
and predicting, with the intelligent body measurement model and according to the visual detection information and numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user.
Referring to fig. 8, an embodiment of the present application further provides an intelligent body measurement apparatus. Its specific implementation is consistent with the implementation and technical effects described in the embodiments of the intelligent body measurement method above, and details already given are not repeated.
The apparatus comprises: a sample acquisition module 101 for acquiring image information, numerical characteristic information, and body measurement label information of a plurality of sample users, where the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identifier and an actual parameter value of at least one numerical characteristic parameter, and the body measurement label information comprises an identifier and an actual parameter value of at least one body measurement parameter; a model training module 102 for training a deep learning model with the image information, numerical characteristic information, and body measurement label information of the plurality of sample users to obtain an intelligent body measurement model; an acquisition module 103 for acquiring visual detection information and numerical characteristic information of a user to be measured, where the visual detection information is obtained by photographing the user's whole body with the camera of a user device; and an information prediction module 104 for predicting, with the intelligent body measurement model and according to the user's visual detection information and numerical characteristic information, a predicted parameter value of at least one body measurement parameter of the user.
In one embodiment, the numerical characteristic parameter may include at least one of: gender, age, body type, weight, and height; the body measurement parameter may include at least one of: height, head circumference, shoulder shape, arm span, waist circumference, and leg length.
Referring to FIG. 9, in one embodiment, the deep learning model may include an input layer, a convolutional layer, a fully-connected layer, and an output layer, and the model training module 102 may include: an input unit 1021 for inputting, for each of the plurality of sample users, the sample user's image information and numerical characteristic information into the input layer; an image convolution unit 1022 for passing the sample user's image information from the input layer to the convolutional layer to obtain a convolution result and passing the convolution result to the fully-connected layer; a numerical feature unit 1023 for passing the sample user's numerical characteristic information from the input layer to the fully-connected layer; a first prediction unit 1024 for predicting, by the fully-connected layer and based on the convolution result and the sample user's numerical characteristic information, a predicted parameter value of at least one body measurement parameter of the sample user; a parameter comparison unit 1025 for comparing the predicted parameter value with the actual parameter value of each body measurement parameter of the sample user to obtain a comparison result for the sample user; and a model acquisition unit 1026 for training the deep learning model on the comparison results of the plurality of sample users to obtain the intelligent body measurement model.
In a specific embodiment, the loss function of the deep learning model may be an L2 loss function, and the weights of at least two body measurement parameters in the loss function may differ.
Referring to fig. 10, in a specific embodiment, the information prediction module 104 may include: a first acquisition unit 1041 for acquiring a human body contour image of at least one posture of the user to be measured based on the user's visual detection information; a first adjustment unit 1042 for detecting, for each posture, whether the user's human body contour image is parallel to the shooting plane and adjusting any contour image that is not; and a second prediction unit 1043 for inputting the user's numerical characteristic information and the adjusted contour images of the at least one posture into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
Referring to fig. 11, in a specific embodiment, the information prediction module 104 may include: a second acquisition unit 1044 for acquiring a human body contour image of one posture of the user to be measured based on the user's visual detection information; a third prediction unit 1045 for predicting the human body contour images of the user's other postures from the contour image of the one posture; and a fourth prediction unit 1046 for inputting the contour images of at least one posture of the user and the numerical characteristic information into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
Referring to fig. 12, in a specific embodiment, the information prediction module 104 may include: a third acquisition unit 1047 for estimating the clothing thickness information of the user to be measured from the user's visual detection information and acquiring a human body contour image of at least one posture of the user; a second adjustment unit 1048 for adjusting each of the contour images based on the user's clothing thickness information; and a fifth prediction unit 1049 for inputting the user's numerical characteristic information and the adjusted contour images of the at least one posture into the intelligent body measurement model and predicting a predicted parameter value of at least one body measurement parameter of the user.
Referring to FIG. 13, in a specific embodiment, the apparatus may further include: a contour display module 105, which may be configured to display reference contour information on a display screen of the user equipment in response to receiving a shooting request sent by the user equipment, so that the user to be measured can adjust the shooting posture according to the reference contour information.
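A reference-contour overlay of this kind might, for example, be blended onto the camera preview frame; the blending shown below is an assumption for illustration.

import cv2
import numpy as np

def overlay_reference_contour(preview_frame, reference_contour, alpha=0.3):
    """Blend a semi-transparent reference silhouette over the camera
    preview so the user can align the shooting posture with it. Both
    images must share the same size and number of channels."""
    return cv2.addWeighted(preview_frame, 1.0, reference_contour, alpha, 0)

# Example with synthetic frames of matching shape.
frame = np.zeros((480, 270, 3), np.uint8)
silhouette = np.zeros((480, 270, 3), np.uint8)
cv2.ellipse(silhouette, (135, 240), (60, 200), 0, 0, 360, (0, 255, 0), 2)
preview = overlay_reference_contour(frame, silhouette)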
Referring to FIG. 14, an embodiment of the present application further provides an electronic device 200. The electronic device 200 includes at least one memory 210, at least one processor 220, and a bus 230 connecting different system components.
The memory 210 may include readable media in the form of volatile memory, such as a Random Access Memory (RAM) 211 and/or a cache memory 212, and may further include a Read Only Memory (ROM) 213.
The memory 210 further stores a computer program executable by the processor 220, causing the processor 220 to perform the steps of the intelligent body measurement method in the embodiments of the present application; the specific implementation is consistent with the implementation and technical effects described in the embodiments of the intelligent body measurement method, and is not repeated here.
Memory 210 may also include a utility 214 having at least one program module 215, such program modules 215 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Accordingly, the processor 220 may execute the computer programs described above, and may execute the utility 214.
Bus 230 may be one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor bus, or a local bus using any of a variety of bus architectures.
The electronic device 200 may also communicate with one or more external devices 240, such as a keyboard, pointing device, bluetooth device, etc., and may also communicate with one or more devices capable of interacting with the electronic device 200, and/or with any devices (e.g., routers, modems, etc.) that enable the electronic device 200 to communicate with one or more other computing devices. Such communication may be through input-output interface 250. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 260. The network adapter 260 may communicate with other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program which, when executed, implements the steps of the intelligent body measurement method in the embodiments of the present application; the specific implementation is consistent with the implementation and technical effects described in the embodiments of the intelligent body measurement method, and is not repeated here.
FIG. 15 shows a program product 300 for implementing the above intelligent body measurement method provided by this embodiment, which may take the form of a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device such as a personal computer. However, the program product 300 of the present invention is not limited thereto; in this application, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program product 300 may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Program code for carrying out the operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++ as well as conventional procedural programming languages such as the C language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
While the present application has been described in terms of various aspects, including exemplary embodiments, the principles of the invention should not be limited to the disclosed embodiments, but are intended to cover various modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. An intelligent body measurement method, the method comprising:
acquiring image information, numerical characteristic information and body measurement annotation information of a plurality of sample users, wherein the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identification and an actual parameter value of at least one numerical characteristic parameter, and the body measurement annotation information comprises an identification and an actual parameter value of at least one body measurement parameter;
training a deep learning model by using the image information, the numerical characteristic information and the body measurement annotation information of the plurality of sample users to obtain an intelligent body measurement model;
acquiring visual detection information and numerical characteristic information of a user to be measured, wherein the visual detection information of the user to be measured is obtained by shooting the whole body of the user to be measured with a camera of a user equipment; and
predicting, by using the intelligent body measurement model according to the visual detection information and the numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured.
2. The intelligent body measurement method according to claim 1, wherein the deep learning model comprises an input layer, a convolutional layer, a fully-connected layer, and an output layer;
and the training of the deep learning model by using the image information, the numerical characteristic information and the body measurement annotation information of the plurality of sample users to obtain the intelligent body measurement model comprises:
for each of the plurality of sample users, inputting the image information and the numerical characteristic information of the sample user into the input layer;
inputting the image information of the sample user into the convolutional layer through the input layer to obtain a convolution result, and inputting the convolution result into the fully-connected layer through the convolutional layer;
inputting the numerical characteristic information of the sample user into the fully-connected layer through the input layer;
obtaining, through prediction by the fully-connected layer, a predicted parameter value of at least one body measurement parameter of the sample user based on the convolution result and the numerical characteristic information of the sample user;
respectively comparing the predicted parameter value and the actual parameter value of each body measurement parameter of the sample user to obtain a comparison result of the sample user; and
training the deep learning model based on the comparison results of the plurality of sample users to obtain the intelligent body measurement model.
3. The intelligent body measurement method according to claim 2, wherein the loss function of the deep learning model is an L2 loss function, and the weights corresponding to at least two body measurement parameters in the loss function are different.
4. The intelligent body measurement method according to claim 1, further comprising:
in response to receiving a shooting request sent by the user equipment, displaying reference contour information on a display screen of the user equipment, so that the user to be measured can adjust a shooting posture according to the reference contour information.
5. The intelligent body measurement method according to claim 1, wherein the predicting, by using the intelligent body measurement model according to the visual detection information and the numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured comprises:
acquiring a human body contour image of at least one posture of the user to be measured based on the visual detection information of the user to be measured;
respectively detecting whether the human body contour image of each posture of the user to be measured is parallel to the shooting surface, and adjusting any human body contour image that is not parallel to the shooting surface; and
inputting the numerical characteristic information of the user to be measured and the adjusted human body contour image of the at least one posture into the intelligent body measurement model, and predicting a predicted parameter value of at least one body measurement parameter of the user to be measured.
6. The intelligent body measurement method according to claim 1, wherein the predicting, by using the intelligent body measurement model according to the visual detection information and the numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured comprises:
acquiring a human body contour image of one posture of the user to be measured based on the visual detection information of the user to be measured;
predicting, based on the human body contour image of the one posture of the user to be measured, a human body contour image of another posture of the user to be measured; and
inputting the human body contour image of the at least one posture of the user to be measured and the numerical characteristic information into the intelligent body measurement model, and predicting a predicted parameter value of at least one body measurement parameter of the user to be measured.
7. The intelligent body measurement method according to claim 1, wherein the predicting, by using the intelligent body measurement model according to the visual detection information and the numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured comprises:
estimating clothing thickness information of the user to be measured based on the visual detection information of the user to be measured, and acquiring a human body contour image of at least one posture of the user to be measured;
respectively adjusting the human body contour image of the at least one posture of the user to be measured based on the clothing thickness information of the user to be measured; and
inputting the numerical characteristic information of the user to be measured and the adjusted human body contour image of the at least one posture into the intelligent body measurement model, and predicting a predicted parameter value of at least one body measurement parameter of the user to be measured.
8. The intelligent body measurement method according to claim 1, wherein the numerical characteristic parameters comprise at least one of: gender, age, body type, weight, and height;
and the body measurement parameters comprise at least one of: height, head circumference, shoulder shape, arm span, waist circumference, and leg length.
9. An intelligent body measurement device, the device comprising:
a sample acquisition module, configured to acquire image information, numerical characteristic information and body measurement annotation information of a plurality of sample users, wherein the image information comprises a human body contour image of at least one posture, the numerical characteristic information comprises an identification and an actual parameter value of at least one numerical characteristic parameter, and the body measurement annotation information comprises an identification and an actual parameter value of at least one body measurement parameter;
a model training module, configured to train a deep learning model by using the image information, the numerical characteristic information and the body measurement annotation information of the plurality of sample users to obtain an intelligent body measurement model;
a to-be-measured acquisition module, configured to acquire visual detection information and numerical characteristic information of a user to be measured, wherein the visual detection information of the user to be measured is obtained by shooting the whole body of the user to be measured with a camera of a user equipment; and
an information prediction module, configured to predict, by using the intelligent body measurement model according to the visual detection information and the numerical characteristic information of the user to be measured, a predicted parameter value of at least one body measurement parameter of the user to be measured.
10. An electronic device, characterized in that the electronic device comprises a memory storing a computer program and a processor, wherein the processor implements the steps of the method according to any one of claims 1-8 when executing the computer program.
11. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202110262272.9A 2021-03-10 2021-03-10 Intelligent energy body method, device, electronic equipment and storage medium Pending CN113112321A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110262272.9A CN113112321A (en) 2021-03-10 2021-03-10 Intelligent energy body method, device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113112321A true CN113112321A (en) 2021-07-13

Family

ID=76711247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110262272.9A Pending CN113112321A (en) 2021-03-10 2021-03-10 Intelligent energy body method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113112321A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016721A (en) * 2017-03-07 2017-08-04 上海优裁信息技术有限公司 The modeling method of human 3d model
CN110074788A (en) * 2019-04-18 2019-08-02 梦多科技有限公司 A kind of body data acquisition methods and device based on machine learning
CN110569784A (en) * 2019-09-05 2019-12-13 武汉纺织大学 Human body size measuring method and system, storage medium and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shunsuke Saito et al.: "PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 82-86 *
Zhao Jie et al.: "Intelligent Robot Technology: Research and Practice on Police Robots for Security, Patrol and Disposal", China Machine Press, page 113 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658689A (en) * 2021-08-25 2021-11-16 深圳前海微众银行股份有限公司 Multi-agent model training method and device, electronic equipment and storage medium
CN113763453A (en) * 2021-09-06 2021-12-07 北京云数工场数据科技有限公司 Artificial intelligence energy system and method

Similar Documents

Publication Publication Date Title
US11551374B2 (en) Hand pose estimation from stereo cameras
CN111476306B (en) Object detection method, device, equipment and storage medium based on artificial intelligence
CN110348543B (en) Fundus image recognition method and device, computer equipment and storage medium
CN108038880A (en) Method and apparatus for handling image
CN111754513A (en) Product surface defect segmentation method, defect segmentation model learning method and device
CN108280455A (en) Human body critical point detection method and apparatus, electronic equipment, program and medium
CN113112321A (en) Intelligent energy body method, device, electronic equipment and storage medium
CN112016398B (en) Handheld object recognition method and device
CN108229418A (en) Human body critical point detection method and apparatus, electronic equipment, storage medium and program
CN108427941A (en) Method, method for detecting human face and device for generating Face datection model
CN112683169A (en) Object size measuring method, device, equipment and storage medium
CN116994339B (en) Method and system for sitting body forward-bending test based on image processing
CN110070076A (en) Method and apparatus for choosing trained sample
KR20210087181A (en) An electronic device detecting a location and a method thereof
CN110110666A (en) Object detection method and device
CN108229494A (en) network training method, processing method, device, storage medium and electronic equipment
CN113065634B (en) Image processing method, neural network training method and related equipment
CN110246561A (en) A kind of moving distance calculation method, device and system
CN108509929A (en) Method, method for detecting human face and device for generating Face datection model
CN109871116A (en) Device and method for identifying a gesture
CN116453222B (en) Target object posture determining method, training device and storage medium
CN114694257B (en) Multi-user real-time three-dimensional action recognition evaluation method, device, equipment and medium
CN112763030B (en) Weighing method, device, equipment and storage medium
CN110443191A (en) The method and apparatus of article for identification
CN114863478A (en) Livestock weight identification method and device, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210713)