CN110569593A - Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110569593A
Authority
CN
China
Prior art keywords
hidden layer
neuron
layer
output
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910837132.2A
Other languages
Chinese (zh)
Inventor
胡新荣
刘嘉文
刘军平
彭涛
吴晓堃
李敏
陈佳
丁益祥
陈常念
张自力
崔树芹
何儒汉
孙召云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Textile University filed Critical Wuhan Textile University
Priority to CN201910837132.2A priority Critical patent/CN110569593A/en
Publication of CN110569593A publication Critical patent/CN110569593A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0621Item configuration or customization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Neurology (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for measuring three-dimensional size of a dressed human body, a storage medium and electronic equipment, wherein the method comprises the following steps: establishing a training data set; establishing a neural network model framework, wherein the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer; training according to the training data set and the neural network model frame to determine model parameters; establishing a neural network model according to the neural network model framework and the model parameters, wherein a plurality of hidden layers in the neural network model are connected in sequence according to training results; acquiring user input parameters, wherein the user input parameters comprise user two-dimensional size and characteristic information; and training the input parameters of the user through a neural network model to obtain the three-dimensional size of the user. The invention provides a method for constructing a multilayer neural network model, obtaining model parameters by training sample data, inputting user information and two-dimensional parameters into the model, and outputting three-dimensional information of a human body.

Description

Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment
Technical Field
The invention relates to the field of measurement of human body dimensions, in particular to a method and a system for measuring three-dimensional dimensions of a wearing human body, a storage medium and electronic equipment.
Background
With the rapid development of society and the economy, people's living standards have improved, computer science and network technology have advanced rapidly, and electronic shopping platforms have become widespread. For the clothing industry, people are beginning to pursue personalized, privately customized clothing that fits their own body shape and aesthetic preferences. Garment production is therefore moving towards small batches, many varieties, and even full customization. A virtual garment customization system, namely electronic made-to-measure (eMTM) garment customization, has emerged from this technical progress and these user demands.
The core technology in eMTM is the acquisition of human body characteristic dimensions. Current measurement methods mainly comprise contact manual measurement and non-contact measurement. Non-contact measurement mainly scans the human body with a three-dimensional scanner; another approach acquires and locates human body feature points based on image segmentation to obtain two-dimensional size information, and then summarizes an empirical formula from a large amount of sample data to obtain three-dimensional data. The defect of this approach is that, for users with extreme body types, the error of the three-dimensional size fitted by the empirical-formula method is large.
Disclosure of Invention
The invention aims to provide a method and a system for measuring the three-dimensional size of a dressed human body, a storage medium and electronic equipment, which are used for constructing a multilayer neural network model, obtaining model parameters by training sample data, and outputting the three-dimensional information of the human body by inputting user information and two-dimensional parameters into the model.
The technical scheme provided by the invention is as follows:
The invention provides a method for measuring three-dimensional size of a dressed human body, which comprises the following steps:
Establishing a training data set, wherein each group of training data in the training data set comprises a plurality of training input parameters and training target data;
establishing a neural network model framework, wherein the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
training according to the training data set and the neural network model frame to determine model parameters;
Establishing a neural network model according to the neural network model framework and the model parameters, wherein a plurality of hidden layers in the neural network model are connected in sequence according to training results;
acquiring user input parameters, wherein the user input parameters comprise user two-dimensional size and characteristic information;
and training the user input parameters through the neural network model to obtain the three-dimensional size of the user.
Further, the method also comprises the following steps:
Acquiring a test data set, wherein each group of test data in the test data set comprises a plurality of input parameters and target data;
the input layer of the neural network model sends the plurality of input parameters in any group of test data to each neuron in the first hidden layer, and the number of neurons in the input layer is the same as that of the input parameters;
Each neuron in the first hidden layer trains the plurality of input parameters to obtain first hidden layer output parameters respectively and sends the first hidden layer output parameters to each neuron in the next hidden layer;
Training the neuron in each of the rest hidden layers which are not the first hidden layer by using the hidden layer output parameter of the previous hidden layer as an input parameter to obtain a corresponding hidden layer output parameter, and respectively sending the hidden layer output parameter to each neuron in the next hidden layer until all the hidden layers are trained, and respectively sending the hidden layer output parameter to an output layer by using the neuron in the last hidden layer;
the output layer processes the received hidden layer output parameters to obtain output parameters Z_k, where P is the number of samples in the test data set and k = 1,2,3,…,P;
if there is an error between the target data T_k and the output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)²;
And correcting the connection weight in the model parameter according to the error function.
Further, the training of the plurality of input parameters by each neuron in the first hidden layer to obtain first hidden layer output parameters specifically includes:
Each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y^k_{1,h} = f(Σ_{i=1}^{N_0} ω_{i,1,h}·x_i − θ_{1,h}), wherein y^k_{1,h} is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer on the k-th group of test data, N_0 is the number of input layer neurons, i = 1,2,3,…,N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer;
The training of the neuron in each of the remaining hidden layers other than the first hidden layer, with the hidden layer output parameters of the previous hidden layer as input parameters, to obtain corresponding hidden layer output parameters specifically includes:
The neuron in each of the remaining hidden layers other than the first hidden layer takes the hidden layer output parameters of the previous hidden layer as input parameters and trains according to the activation function f to obtain corresponding hidden layer output parameters: y^k_{n,h} = f(Σ_{i=1}^{N_{n−1}} ω_{i,n,h}·y_{n−1,i} − θ_{n,h}), wherein y^k_{n,h} is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer on the k-th group of test data, N_{n−1} is the number of neurons in the (n−1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n−1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1,2,3,…,N_{n−1}, y_{n−1,i} is the hidden layer output parameter of the i-th neuron in the (n−1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2,3,…,N;
The processing by the output layer of all the output parameters of the last hidden layer to obtain the output parameter specifically includes:
The neuron of the output layer receives all the output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f(Σ_{i=1}^{N_N} ω_i·y_{N,i} − γ), wherein Z_k is the output parameter obtained by the output layer on the k-th group of test data, N_N is the number of neurons in the N-th (last) hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1,2,3,…,N_N, y_{N,i} is the output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the output layer neuron.
Further, the correcting of the connection weights in the model parameters according to the error function specifically includes:
Calculating, according to the error function, the change value Δω_t of each connection weight ω_t: Δω_t = −η·∂E/∂ω_t, wherein ω_t is any connection weight in the neural network model and η is the learning step length;
Starting from the output layer and proceeding in the reverse order of the training sequence of the neural network model, adjusting each corresponding connection weight ω_t by its change value, the adjusted connection weight being ω_t' = ω_t + Δω_t.
Further, the method also comprises the following steps:
Comparing the user three-dimensional size with the user standard three-dimensional size to obtain a size error;
and if the size error exceeds a preset threshold, adding the corresponding user input parameters and the user standard three-dimensional size into the test data set to serve as a group of test data.
The invention also provides a dressed human body three-dimensional size measuring system, comprising:
the data set establishing module is used for establishing a training data set, wherein each group of training data in the training data set comprises a plurality of training input parameters and training target data;
the framework establishing module is used for establishing a neural network model framework, and the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
The model training module trains according to the training data set established by the data set establishing module and the neural network model frame established by the frame establishing module to determine model parameters;
The model establishing module is used for establishing a neural network model according to the neural network model framework established by the framework establishing module and the model parameters determined by the model training module, and a plurality of hidden layers in the neural network model are sequentially connected according to a training result;
the parameter acquisition module is used for acquiring user input parameters, and the user input parameters comprise user two-dimensional size and characteristic information;
And the processing module trains the user input parameters acquired by the parameter acquisition module through the neural network model established by the model establishment module to obtain the three-dimensional size of the user.
Further, the system also comprises:
The data set establishing module is further used for acquiring a test data set, wherein each group of test data in the test data set comprises a plurality of input parameters and target data;
An analysis module, wherein the input layer of the neural network model sends the plurality of input parameters in any group of test data acquired by the data set establishing module to each neuron in the first hidden layer, and the number of neurons in the input layer is the same as the number of input parameters;
The analysis module, wherein each neuron in the first hidden layer trains the plurality of input parameters to obtain a first hidden layer output parameter and sends it to each neuron in the next hidden layer, specifically includes: each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y^k_{1,h} = f(Σ_{i=1}^{N_0} ω_{i,1,h}·x_i − θ_{1,h}), wherein y^k_{1,h} is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer on the k-th group of test data, N_0 is the number of input layer neurons, i = 1,2,3,…,N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer;
training the neuron in each of the remaining hidden layers other than the first hidden layer, with the hidden layer output parameters of the previous hidden layer as input parameters, to obtain corresponding hidden layer output parameters specifically includes:
the neuron in each of the remaining hidden layers other than the first hidden layer takes the hidden layer output parameters of the previous hidden layer as input parameters and trains according to the activation function f to obtain corresponding hidden layer output parameters: y^k_{n,h} = f(Σ_{i=1}^{N_{n−1}} ω_{i,n,h}·y_{n−1,i} − θ_{n,h}), wherein y^k_{n,h} is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer on the k-th group of test data, N_{n−1} is the number of neurons in the (n−1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n−1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1,2,3,…,N_{n−1}, y_{n−1,i} is the hidden layer output parameter of the i-th neuron in the (n−1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2,3,…,N;
the processing by the output layer of all the output parameters of the last hidden layer to obtain the output parameter specifically includes:
the neuron of the output layer receives all the output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f(Σ_{i=1}^{N_N} ω_i·y_{N,i} − γ), wherein Z_k is the output parameter obtained by the output layer on the k-th group of test data, N_N is the number of neurons in the N-th (last) hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1,2,3,…,N_N, y_{N,i} is the output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the output layer neuron;
The analysis module, wherein the neuron in each of the remaining hidden layers other than the first hidden layer trains the hidden layer output parameters of the previous hidden layer as input parameters to obtain corresponding hidden layer output parameters and sends them to each neuron in the next hidden layer, until all the hidden layers have been trained and the neurons in the last hidden layer send their hidden layer output parameters to the output layer;
The analysis module, wherein the output layer processes the received hidden layer output parameters to obtain output parameters Z_k, where P is the number of samples in the test data set and k = 1,2,3,…,P;
The analysis module, wherein, if there is an error between the target data T_k and the output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)²;
The analysis module corrects the connection weights in the model parameters according to the error function, which specifically includes: calculating, according to the error function, the change value Δω_t of each connection weight ω_t: Δω_t = −η·∂E/∂ω_t, wherein ω_t is any connection weight in the neural network model and η is the learning step length;
Starting from the output layer and proceeding in the reverse order of the training sequence of the neural network model, adjusting each corresponding connection weight ω_t by its change value, the adjusted connection weight being ω_t' = ω_t + Δω_t.
further, the method also comprises the following steps:
The comparison module is used for comparing the user three-dimensional size obtained by the processing module with the user standard three-dimensional size to obtain a size error;
And the data set establishing module is used for adding the corresponding user input parameters and the user standard three-dimensional size into the test data set to be used as a group of test data if the size error obtained by the comparison module exceeds a preset threshold.
The invention also provides a storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods described above.
the invention also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program running on the processor, and the processor implements any one of the methods described above when executing the computer program.
By the method, the system, the storage medium and the electronic equipment for measuring the three-dimensional size of the dressed human body, a multi-input, multi-hidden-layer neural network model is constructed; multi-dimensional parameters (height, gender, shoe size, X-circumference width and X-circumference thickness) are used as inputs and passed through the hidden layers, and finally the three-dimensional X-circumference size is given to the user.
Drawings
The above features, technical features, advantages and implementations of the method, system, storage medium and electronic device for measuring a three-dimensional dimension of a human body will be further described in the following detailed description of preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a method of measuring the three-dimensional size of a wearer of the present invention;
FIG. 2 is a schematic diagram of a neural network model framework of the present invention;
FIG. 3 is a flow chart of another embodiment of a method of measuring the three-dimensional size of a wearer of the present invention;
FIG. 4 is a flow chart of another embodiment of a method of measuring the three-dimensional size of a wearer of the present invention;
fig. 5 is a schematic structural diagram of one embodiment of a three-dimensional body measurement system of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will explain specific embodiments of the present invention with reference to the drawings of the specification. It is obvious that the drawings in the following description are only some examples of the invention, from which other drawings and embodiments can be derived by a person skilled in the art without inventive effort.
for the sake of simplicity, only the parts relevant to the present invention are schematically shown in the drawings, and they do not represent the actual structure as a product. In addition, in order to make the drawings concise and understandable, components having the same structure or function in some of the drawings are only schematically illustrated or only labeled. In this document, "one" means not only "only one" but also a case of "more than one".
In one embodiment of the present invention, as shown in fig. 1, a method for measuring a three-dimensional size of a wearer includes:
S100, establishing a training data set, wherein each group of training data in the training data set comprises a plurality of training input parameters and training target data;
S200, establishing a neural network model framework, wherein the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
S300, training according to the training data set and the neural network model framework, and determining model parameters;
S400, establishing a neural network model according to the neural network model framework and the model parameters, wherein a plurality of hidden layers in the neural network model are connected in sequence according to training results;
S500, acquiring user input parameters, wherein the user input parameters comprise two-dimensional size and characteristic information of a user;
S600, training the user input parameters through the neural network model to obtain the three-dimensional size of the user.
Specifically, in this embodiment, a training data set is established, and each set of training data in the training data set is a body related parameter of a user, which includes a plurality of training input parameters such as height, gender, shoe size, X-circumference width, X-circumference thickness, and the like, and a training target data, i.e., a standard three-dimensional size of the user such as a three-dimensional size of the X-circumference.
A neural network model framework is established. An activation function is selected first; to ensure the effectiveness of the neurons in the whole network, an S-shaped (sigmoid) function is adopted as the activation function. Meanwhile, a multi-input structure is adopted, which effectively avoids saturation. As shown in fig. 2, the neural network model framework includes an input layer, a plurality of hidden layers and an output layer, where the input layer and the hidden layers each include a plurality of neurons and the output layer includes one neuron; the number of neurons in the input layer is the same as the number of training input parameters, and the numbers of neurons in different hidden layers may be the same or different.
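As a minimal illustration of the S-shaped activation described above, the following Python sketch shows the common logistic form together with the weighted-sum-minus-threshold neuron computation used throughout this description; the patent only specifies an "S function", so this particular formula is an assumption.

```python
import numpy as np

def sigmoid(x):
    """S-shaped activation: squashes any real-valued input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(weights, inputs, threshold):
    """One neuron: weighted sum of its inputs minus its threshold, passed through f."""
    return sigmoid(np.dot(weights, inputs) - threshold)
```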
Training is carried out according to the training data set and the neural network model framework, and the model parameters, such as the connection weights between neurons in adjacent layers, are determined. The model parameters are then combined with the neural network model framework to establish the neural network model. The connection order of the hidden layers is not fixed in the framework, but the order of the plurality of hidden layers in the established neural network model is determined: the hidden layers are connected in sequence according to the training result. The neural network model, established on the BP learning algorithm, adjusts and modifies the connection weights of the network through back propagation of the network output error so as to minimize the error; the learning process comprises forward calculation and error back propagation. Therefore, the network parameters are adjusted through forward propagation and back propagation over the test data set, which improves the self-learning capability of the neural network model.
After the neural network model is trained, the user input parameters of the user to be measured are input. The user input parameters comprise the two-dimensional sizes of the user, such as the X-circumference width and the X-circumference thickness, and characteristic information, such as height, gender and shoe size. The two-dimensional sizes of the user are required information, while the items in the characteristic information can be flexibly adjusted based on the requirements of the user. The user input parameters are then run through the neural network model to obtain the three-dimensional size of the user.
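For illustration only, a minimal sketch of how the user input parameters might be assembled and handed to the trained model; the field names, encodings, and units are assumptions and not specified by the patent.

```python
# Hypothetical user input parameters: two-dimensional sizes plus characteristic information.
user_input = {
    "height_cm": 170.0,
    "gender": 1,                      # assumed encoding: 1 = male, 0 = female
    "shoe_size": 42,
    "x_circumference_width_cm": 28.0,
    "x_circumference_thickness_cm": 20.0,
}

feature_vector = list(user_input.values())
# three_d_size = forward(feature_vector, ...)  # forward pass through the trained model
# (see the forward-pass sketch later in this description)
```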
The invention constructs a neural network model with multiple inputs and multiple hidden layers, uses multi-dimensional parameters (height, gender, shoe size, X-circumference width and X-circumference thickness) as inputs, and finally gives the user the three-dimensional X-circumference size. Compared with the empirical-formula approach, this greatly improves accuracy and is suitable for users of different body types; for special body types, the accuracy of the three-dimensional size generated by the neural network model is improved to 95%.
in another embodiment of the present invention, the three-dimensional sizes of two users are measured by using the above calculation method, and the measured data and the experimental data by using the above calculation method are shown in table 1.
TABLE 1 measured and Experimental data for user A and user B
Another embodiment of the present invention is an optimized embodiment of the above embodiment, as shown in fig. 3, compared with the above embodiment, the main improvement of this embodiment is that the method further includes:
S700, acquiring a test data set, wherein each group of test data in the test data set comprises a plurality of input parameters and target data;
S710, the input layer of the neural network model sends the input parameters in any group of test data to each neuron in the first hidden layer, and the number of neurons in the input layer is the same as the number of input parameters;
S720, each neuron in the first hidden layer trains the plurality of input parameters to obtain first hidden layer output parameters respectively, and sends the first hidden layer output parameters to each neuron in the next hidden layer;
S730, training the neuron in each of the rest hidden layers which are not the first hidden layer by using the hidden layer output parameter of the previous hidden layer as an input parameter to obtain a corresponding hidden layer output parameter, and respectively sending the hidden layer output parameter to each neuron in the next hidden layer until all the hidden layers are trained, and respectively sending the hidden layer output parameter to an output layer by the neuron in the last hidden layer;
S740, the output layer processes the received hidden layer output parameters to obtain output parameters Z_k, where P is the number of samples in the test data set and k = 1,2,3,…,P;
S750, if there is an error between the target data T_k and the output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)²;
S760, correcting the connection weights in the model parameters according to the error function.
Specifically, in this embodiment, the model parameters determined by training on the training data set only establish a preliminary neural network model, and the error of measurement results obtained through this preliminary model may be large, so more samples are required to train and correct the model parameters in the neural network model and thus obtain a neural network model with more accurate measurement.
Therefore, a test data set is obtained, wherein each group of test data in the test data set comprises a plurality of input parameters, i.e. body-related parameters of a user such as height, gender, shoe size, X-circumference width and X-circumference thickness, and target data, i.e. a standard three-dimensional size of the user, such as the three-dimensional size of the X-circumference.
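A sketch of how one group of test data might be represented, assuming the input parameters listed above; the field names, values, and units are illustrative only.

```python
# One group of test data: several input parameters plus target data
# (the user's standard three-dimensional size); values are illustrative only.
test_sample = {
    "inputs": [170.0, 1, 42, 28.0, 20.0],  # height, gender, shoe size, X-circ. width, X-circ. thickness
    "target": 78.5,                        # standard three-dimensional size (assumed unit: cm)
}
test_data_set = [test_sample]              # the test data set holds P such groups
```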
As shown in fig. 2, the input layer of the neural network model takes the plurality of input parameters of any group of test data; each neuron in the input layer receives one of the input parameters, and the input parameters received by different neurons differ from each other. Each input layer neuron then sends the input parameter it has obtained to every neuron in the first hidden layer. Each neuron in the first hidden layer trains all the received input parameters to obtain a corresponding first hidden layer output parameter; the number of first hidden layer output parameters is the same as the number of neurons in the first hidden layer, and each neuron in the first hidden layer sends its first hidden layer output parameter to every neuron in the next hidden layer. Each neuron in every subsequent hidden layer then trains the received hidden layer output parameters sent by the neurons of the previous hidden layer as its own input parameters to obtain a corresponding hidden layer output parameter, until the neurons of the last hidden layer send their hidden layer output parameters to the neuron of the output layer.
The output layer processes the received hidden layer output parameters to obtain the output parameter Z_k, which is the user three-dimensional size produced by the training of the neural network model. Since the test data set includes a plurality of groups of test data, P is the number of samples in the test data set and k = 1,2,3,…,P. If there is an error between the target data T_k of each sample and the corresponding output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)², and the connection weights in the model parameters are then corrected according to the error function, improving the measurement accuracy of the neural network model.
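A minimal sketch of the error function over the P samples of the test data set, assuming the conventional squared-error form implied by the weight-update rule below.

```python
import numpy as np

def error_function(targets, outputs):
    """E = 1/2 * sum_k (T_k - Z_k)^2 over the P samples of the test data set."""
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    return 0.5 * np.sum((targets - outputs) ** 2)
```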
In another embodiment of the present invention, S720, in which each neuron in the first hidden layer separately trains the plurality of input parameters to obtain a first hidden layer output parameter, specifically includes: each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y^k_{1,h} = f(Σ_{i=1}^{N_0} ω_{i,1,h}·x_i − θ_{1,h}), wherein y^k_{1,h} is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer on the k-th group of test data, N_0 is the number of input layer neurons, i = 1,2,3,…,N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer. Training the neuron in each of the remaining hidden layers other than the first hidden layer, with the hidden layer output parameters of the previous hidden layer as input parameters, to obtain corresponding hidden layer output parameters specifically includes: the neuron in each of the remaining hidden layers other than the first hidden layer takes the hidden layer output parameters of the previous hidden layer as input parameters and trains according to the activation function f to obtain corresponding hidden layer output parameters: y^k_{n,h} = f(Σ_{i=1}^{N_{n−1}} ω_{i,n,h}·y_{n−1,i} − θ_{n,h}), wherein y^k_{n,h} is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer on the k-th group of test data, N_{n−1} is the number of neurons in the (n−1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n−1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1,2,3,…,N_{n−1}, y_{n−1,i} is the hidden layer output parameter of the i-th neuron in the (n−1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2,3,…,N.
S740, in which the output layer processes the received hidden layer output parameters to obtain the output parameter, specifically includes: the neuron of the output layer receives all the output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f(Σ_{i=1}^{N_N} ω_i·y_{N,i} − γ), wherein Z_k is the output parameter obtained by the output layer on the k-th group of test data, N_N is the number of neurons in the N-th (last) hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1,2,3,…,N_N, y_{N,i} is the output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the output layer neuron.
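To make the layer-by-layer computation above concrete, here is a minimal NumPy sketch of the forward pass through the hidden layers and the single-neuron output layer; the sigmoid activation and the way the weights and thresholds are packed into arrays are assumptions consistent with this description, not details taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, hidden_weights, hidden_thresholds, out_weights, out_threshold):
    """Forward pass: hidden_weights[n] has shape (N_n, N_{n-1}), hidden_thresholds[n] shape (N_n,)."""
    y = np.asarray(x, dtype=float)             # the input layer passes the parameters on unchanged
    for W, theta in zip(hidden_weights, hidden_thresholds):
        y = sigmoid(W @ y - theta)             # y_{n,h} = f(sum_i w_{i,n,h}*y_{n-1,i} - theta_{n,h})
    return float(sigmoid(out_weights @ y - out_threshold))  # Z_k = f(sum_i w_i*y_{N,i} - gamma)
```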
In another embodiment of the present invention, the step S760 of correcting the connection weights in the model parameters according to the error function specifically includes: calculating, according to the error function, the change value Δω_t of each connection weight ω_t: Δω_t = −η·∂E/∂ω_t, wherein ω_t is any connection weight in the neural network model and η is the learning step length; starting from the output layer and proceeding in the reverse order of the training sequence of the neural network model, adjusting each corresponding connection weight ω_t by its change value, the adjusted connection weight being ω_t' = ω_t + Δω_t.
Specifically, in this embodiment, the connection weights are corrected starting from the output layer and then in the hidden layers, the plurality of hidden layers being corrected in the reverse order of their connection order in the neural network model.
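A schematic sketch of the weight correction just described; the gradient of the error function with respect to each weight is left to the caller, and the learning step length value is an assumed example.

```python
def update_weight(w_t, dE_dw_t, eta=0.1):
    """Delta_w_t = -eta * dE/dw_t, then w_t' = w_t + Delta_w_t (eta = 0.1 is an assumed step length)."""
    delta_w_t = -eta * dE_dw_t
    return w_t + delta_w_t

# During back-propagation the output-layer weights are corrected first, then the
# hidden-layer weights are corrected in the reverse order of the layers' connection.
```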
Another embodiment of the present invention is an optimized embodiment of the above embodiment, and as shown in fig. 4, compared with the above embodiment, the main improvement of this embodiment is that the present invention further includes:
S800, comparing the user three-dimensional size with the user standard three-dimensional size to obtain a size error;
S900, if the size error exceeds a preset threshold value, adding the corresponding user input parameters and the user standard three-dimensional size into the test data set to serve as a group of test data.
Specifically, in this embodiment, the user input parameters of the user to be measured are input into the neural network model and trained to obtain the user three-dimensional size, while the user standard three-dimensional size of the user to be measured, i.e. the current actual three-dimensional size of that user, can be obtained in another manner. The size error between the user three-dimensional size and the user standard three-dimensional size is then calculated by comparison; if the size error is small, the training result of the neural network model is accurate and the neural network model does not need to be adjusted. If the size error exceeds the preset threshold, which is set according to actual needs, the error of the training result is large, and the corresponding user input parameters and the user standard three-dimensional size are added into the test data set as a group of test data; when a preset period elapses or the number of groups of test data in the test data set exceeds a preset amount, the model parameters of the neural network model are corrected through the test data set. Besides the data of users to be measured, the test data in the test data set may also be additionally acquired data.
The neural network model provided by the invention can be given an update period, with user data added to the information base in real time and the model parameters then retrained and updated. This makes the whole system more widely applicable and better suited to the needs of merchants.
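A sketch of this update cycle: samples whose measured error exceeds a preset threshold are appended to the test data set, and the model parameters are corrected once an update is due; the threshold and batch size below are assumed values, not specified by the patent.

```python
def collect_if_inaccurate(test_data_set, user_inputs, predicted_3d, standard_3d,
                          error_threshold=1.0, retrain_batch=50):
    """Append a poorly predicted user to the test data set; report whether an update is due.

    error_threshold and retrain_batch are assumed values, not specified by the patent.
    """
    if abs(predicted_3d - standard_3d) > error_threshold:
        test_data_set.append({"inputs": user_inputs, "target": standard_3d})
    # The caller re-trains / corrects the model parameters when this returns True,
    # e.g. at the end of an update period or once enough new samples have accumulated.
    return len(test_data_set) >= retrain_batch
```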
In one embodiment of the present invention, as shown in FIG. 5, a system 100 for measuring the three-dimensional size of a wearer's body comprises:
a data set creating module 110 for creating a training data set, wherein each set of training data in the training data set comprises a plurality of training input parameters and a training target data;
a framework building module 120, which builds a neural network model framework, wherein the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
A model training module 130, which trains according to the training data set established by the data set establishing module 110 and the neural network model framework established by the framework establishing module 120, and determines model parameters;
A model building module 140, which builds a neural network model according to the neural network model framework built by the framework building module 120 and the model parameters determined by the model training module 130, wherein a plurality of hidden layers in the neural network model are connected in sequence according to the training results;
A parameter obtaining module 150, configured to obtain user input parameters, where the user input parameters include a user two-dimensional size and feature information;
The processing module 160 trains the user input parameters acquired by the parameter acquisition module 150 through the neural network model established by the model establishment module 140 to obtain the three-dimensional size of the user;
The data set creating module 110 obtains a test data set, where each set of test data in the test data set includes a plurality of input parameters and a target data;
The analysis module 170, the input layer of the neural network model sends the plurality of input parameters in any set of test data acquired by the data set establishing module 110 to each neuron in the first hidden layer, and the number of neurons in the input layer is the same as the number of input parameters;
The analysis module 170, wherein each neuron in the first hidden layer trains the plurality of input parameters to obtain a first hidden layer output parameter and sends it to each neuron in the next hidden layer, specifically includes: each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y^k_{1,h} = f(Σ_{i=1}^{N_0} ω_{i,1,h}·x_i − θ_{1,h}), wherein y^k_{1,h} is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer on the k-th group of test data, N_0 is the number of input layer neurons, i = 1,2,3,…,N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer;
training the neuron in each of the remaining hidden layers other than the first hidden layer, with the hidden layer output parameters of the previous hidden layer as input parameters, to obtain corresponding hidden layer output parameters specifically includes:
the neuron in each of the remaining hidden layers other than the first hidden layer takes the hidden layer output parameters of the previous hidden layer as input parameters and trains according to the activation function f to obtain corresponding hidden layer output parameters: y^k_{n,h} = f(Σ_{i=1}^{N_{n−1}} ω_{i,n,h}·y_{n−1,i} − θ_{n,h}), wherein y^k_{n,h} is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer on the k-th group of test data, N_{n−1} is the number of neurons in the (n−1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n−1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1,2,3,…,N_{n−1}, y_{n−1,i} is the hidden layer output parameter of the i-th neuron in the (n−1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2,3,…,N;
the processing by the output layer of all the output parameters of the last hidden layer to obtain the output parameter specifically includes:
the neuron of the output layer receives all the output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f(Σ_{i=1}^{N_N} ω_i·y_{N,i} − γ), wherein Z_k is the output parameter obtained by the output layer on the k-th group of test data, N_N is the number of neurons in the N-th (last) hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1,2,3,…,N_N, y_{N,i} is the output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the output layer neuron;
The analysis module 170 trains the hidden layer output parameter of the previous hidden layer as an input parameter by using the neuron in each of the rest hidden layers, which are not the first hidden layer, to obtain a corresponding hidden layer output parameter, and sends the hidden layer output parameter to each neuron in the next hidden layer respectively until all the hidden layers are trained, and the neuron in the last hidden layer sends the hidden layer output parameter to the output layer respectively;
The analysis module 170, wherein the output layer processes the received hidden layer output parameters to obtain output parameters Z_k, where P is the number of samples in the test data set and k = 1,2,3,…,P;
The analysis module 170, wherein, if there is an error between the target data T_k and the output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)²;
The analysis module 170 corrects the connection weights in the model parameters according to the error function, which specifically includes: calculating, according to the error function, the change value Δω_t of each connection weight ω_t: Δω_t = −η·∂E/∂ω_t, wherein ω_t is any connection weight in the neural network model and η is the learning step length;
Starting from the output layer and proceeding in the reverse order of the training sequence of the neural network model, adjusting each corresponding connection weight ω_t by its change value, the adjusted connection weight being ω_t' = ω_t + Δω_t.
A comparison module 180 for comparing the user three-dimensional size obtained by the processing module 160 with the user standard three-dimensional size to obtain a size error;
the data set creating module 110 adds the corresponding user input parameter and the user standard three-dimensional size to the test data set as a set of test data if the size error obtained by the comparing module 180 exceeds a preset threshold.
The specific operation modes of the modules in this embodiment have been described in detail in the corresponding method embodiments, and thus are not described in detail again.
an embodiment of the invention provides a computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out all or part of the method steps of the first embodiment.
The present invention can implement all or part of the flow in the method of the first embodiment, and can also be implemented by using a computer program to instruct related hardware, where the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments can be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media does not include electrical carrier signals and telecommunications signals as is required by legislation and patent practice.
An embodiment of the present invention further provides an electronic device, which includes a memory and a processor, wherein the memory stores a computer program running on the processor, and the processor executes the computer program to implement all or part of the method steps in the first embodiment.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the computer device and connects the various parts of the overall computer device using various interfaces and lines.
The memory may be used to store the computer programs and/or modules, and the processor implements various functions of the computer device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, video data, etc.) created according to the use of the device. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash memory card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or another non-volatile solid-state storage device.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and decorations can be made without departing from the principle of the present invention, and these modifications and decorations should also be regarded as the protection scope of the present invention.

Claims (10)

1. A method for measuring the three-dimensional size of a dressed human body, characterized by comprising the following steps:
establishing a training data set, wherein each group of training data in the training data set comprises a plurality of training input parameters and training target data;
Establishing a neural network model framework, wherein the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
Training according to the training data set and the neural network model frame to determine model parameters;
Establishing a neural network model according to the neural network model framework and the model parameters, wherein a plurality of hidden layers in the neural network model are connected in sequence according to training results;
Acquiring user input parameters, wherein the user input parameters comprise user two-dimensional size and characteristic information;
And training the user input parameters through the neural network model to obtain the three-dimensional size of the user.
2. The method for measuring the three-dimensional size of the wearer's body according to claim 1, further comprising:
acquiring a test data set, wherein each group of test data in the test data set comprises a plurality of input parameters and target data;
the input layer of the neural network model sends the plurality of input parameters in any group of test data to each neuron in the first hidden layer, and the number of neurons in the input layer is the same as that of the input parameters;
each neuron in the first hidden layer trains the plurality of input parameters to obtain first hidden layer output parameters respectively and sends the first hidden layer output parameters to each neuron in the next hidden layer;
Training the neuron in each of the rest hidden layers which are not the first hidden layer by using the hidden layer output parameter of the previous hidden layer as an input parameter to obtain a corresponding hidden layer output parameter, and respectively sending the hidden layer output parameter to each neuron in the next hidden layer until all the hidden layers are trained, and respectively sending the hidden layer output parameter to an output layer by using the neuron in the last hidden layer;
the output layer processes the received hidden layer output parameters to obtain output parameters Z_k, where P is the number of samples in the test data set and k = 1,2,3,…,P;
if there is an error between the target data T_k and the output parameter Z_k, an error function is defined as E = (1/2)·Σ_{k=1}^{P} (T_k − Z_k)²;
and correcting the connection weight in the model parameter according to the error function.
3. The method for measuring the three-dimensional size of a dressed human body according to claim 2, wherein the training of the plurality of input parameters by each neuron in the first hidden layer to obtain first hidden layer output parameters specifically comprises:
each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y^k_{1,h} = f(Σ_{i=1}^{N_0} ω_{i,1,h}·x_i − θ_{1,h}), wherein y^k_{1,h} is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer on the k-th group of test data, N_0 is the number of input layer neurons, i = 1,2,3,…,N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer;
the training of the neuron in each of the remaining hidden layers other than the first hidden layer, with the hidden layer output parameters of the previous hidden layer as input parameters, to obtain corresponding hidden layer output parameters specifically comprises:
the neuron in each of the remaining hidden layers other than the first hidden layer takes the hidden layer output parameters of the previous hidden layer as input parameters and trains according to the activation function f to obtain corresponding hidden layer output parameters: y^k_{n,h} = f(Σ_{i=1}^{N_{n−1}} ω_{i,n,h}·y_{n−1,i} − θ_{n,h}), wherein y^k_{n,h} is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer on the k-th group of test data, N_{n−1} is the number of neurons in the (n−1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n−1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1,2,3,…,N_{n−1}, y_{n−1,i} is the hidden layer output parameter of the i-th neuron in the (n−1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2,3,…,N;
the processing by the output layer of all the output parameters of the last hidden layer to obtain the output parameter specifically comprises:
the neuron of the output layer receives all the output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f(Σ_{i=1}^{N_N} ω_i·y_{N,i} − γ), wherein Z_k is the output parameter obtained by the output layer on the k-th group of test data, N_N is the number of neurons in the N-th (last) hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1,2,3,…,N_N, y_{N,i} is the output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the output layer neuron.
4. The method for measuring the three-dimensional size of a dressed human body according to claim 3, wherein the correcting of the connection weights in the model parameters according to the error function specifically comprises:
calculating, according to the error function, the change value Δω_t of each connection weight ω_t: Δω_t = −η·∂E/∂ω_t, wherein ω_t is any connection weight in the neural network model and η is the learning step length;
starting from the output layer and proceeding in the reverse order of the training sequence of the neural network model, adjusting each corresponding connection weight ω_t by its change value, the adjusted connection weight being ω_t' = ω_t + Δω_t.
5. the method for measuring the three-dimensional size of the wearer's body according to claim 2, further comprising:
comparing the user three-dimensional size with the user standard three-dimensional size to obtain a size error;
and if the size error exceeds a preset threshold, adding the corresponding user input parameters and the user standard three-dimensional size into the test data set to serve as a group of test data.
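A sketch of the check recited in claim 5, under the assumption that the sizes are simple numeric vectors and the test data set is a Python list; all names are illustrative, and the maximum absolute deviation used for the size error is only one possible choice, since the claim does not fix the error measure.

import numpy as np

def maybe_add_test_sample(test_set, user_inputs, predicted_size, standard_size, threshold):
    # size error between the predicted user three-dimensional size and the
    # user standard three-dimensional size
    error = np.max(np.abs(np.asarray(predicted_size) - np.asarray(standard_size)))
    if error > threshold:
        # the corresponding user input parameters and the standard size
        # become a new group of test data
        test_set.append((user_inputs, standard_size))
    return test_set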
6. A dressed human body three-dimensional size measurement system, characterized by comprising:
the data set establishing module is used for establishing a training data set, wherein each group of training data in the training data set comprises a plurality of training input parameters and training target data;
the framework establishing module is used for establishing a neural network model framework, and the neural network model framework comprises an input layer, a plurality of hidden layers and an output layer;
The model training module trains according to the training data set established by the data set establishing module and the neural network model framework established by the framework establishing module to determine model parameters;
The model establishing module is used for establishing a neural network model according to the neural network model framework established by the framework establishing module and the model parameters determined by the model training module, and a plurality of hidden layers in the neural network model are sequentially connected according to a training result;
The parameter acquisition module is used for acquiring user input parameters, and the user input parameters comprise user two-dimensional size and characteristic information;
And the processing module trains the user input parameters acquired by the parameter acquisition module through the neural network model established by the model establishing module to obtain the user three-dimensional size.
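The chain of modules in claim 6 can be pictured with a small Python sketch; every helper name here (build_training_set, build_framework, train_model, build_model, predict_3d_size) is a hypothetical placeholder rather than an API defined by the patent, and the hidden-layer count is chosen only for illustration, since the claim requires only "a plurality of hidden layers".

def run_system(raw_training_data, user_inputs):
    # data set establishing module: groups of training input parameters and training target data
    dataset = build_training_set(raw_training_data)    # hypothetical helper
    # framework establishing module: input layer, a plurality of hidden layers, output layer
    framework = build_framework(num_hidden_layers=3)   # hypothetical helper, layer count illustrative
    # model training module: determines the model parameters (connection weights and thresholds)
    params = train_model(framework, dataset)           # hypothetical helper
    # model establishing module: framework plus parameters, hidden layers connected in sequence
    model = build_model(framework, params)             # hypothetical helper
    # parameter acquisition module supplies user_inputs (two-dimensional size and feature information);
    # processing module runs them through the model to obtain the user three-dimensional size
    return predict_3d_size(model, user_inputs)         # hypothetical helper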
7. The dressed human body three-dimensional size measurement system of claim 6, further comprising:
the data set establishing module is used for acquiring a test data set, wherein each group of test data in the test data set comprises a plurality of input parameters and target data;
The analysis module is used for sending the plurality of input parameters in any group of test data acquired by the data set establishing module to each neuron in the first hidden layer by an input layer of the neural network model, and the number of neurons in the input layer is the same as that of the input parameters;
The analysis module causes each neuron in the first hidden layer to respectively train the plurality of input parameters to obtain first hidden layer output parameters and to send the first hidden layer output parameters to each neuron in the next hidden layer, which specifically comprises: each neuron in the first hidden layer respectively acquires the plurality of input parameters and trains according to an activation function f: y_{1,h}^k = f( Σ_{i=1}^{N_0} ω_{i,1,h} · x_i − θ_{1,h} ), wherein y_{1,h}^k is the first hidden layer output parameter obtained by the h-th neuron in the first hidden layer training the k-th group of test data, N_0 is the number of neurons in the input layer, i = 1, 2, 3, …, N_0, ω_{i,1,h} is the connection weight from the i-th neuron in the input layer to the h-th neuron in the first hidden layer, x_i is the input parameter of the i-th neuron of the input layer, and θ_{1,h} is the threshold of the h-th neuron in the first hidden layer;
The training, by the neurons in each hidden layer other than the first hidden layer, of the hidden layer output parameters of the previous hidden layer as input parameters to obtain corresponding hidden layer output parameters specifically comprises:
The neurons in each hidden layer other than the first hidden layer take the hidden layer output parameters of the previous hidden layer as input parameters and train according to the activation function f to obtain the corresponding hidden layer output parameters: y_{n,h}^k = f( Σ_{i=1}^{N_{n-1}} ω_{i,n,h} · y_{n-1,i} − θ_{n,h} ), wherein y_{n,h}^k is the n-th hidden layer output parameter obtained by the h-th neuron in the n-th hidden layer training the k-th group of test data, N_{n-1} is the number of neurons in the (n-1)-th hidden layer, ω_{i,n,h} is the connection weight from the i-th neuron in the (n-1)-th hidden layer to the h-th neuron in the n-th hidden layer, i = 1, 2, 3, …, N_{n-1}, y_{n-1,i} is the hidden layer output parameter of the i-th neuron in the (n-1)-th hidden layer, θ_{n,h} is the threshold of the h-th neuron in the n-th hidden layer, the number of hidden layers is N, and n = 2, 3, …, N;
The training, by the output layer, of all the termination hidden layer output parameters to obtain the output parameter specifically comprises:
The neuron of the output layer receives all the termination hidden layer output parameters of the last hidden layer as input parameters and trains according to the activation function f to obtain the output parameter, the output layer comprising one neuron: Z_k = f( Σ_{i=1}^{N_N} ω_i · y_{N,i} − γ ), wherein Z_k is the output parameter obtained by the output layer training the k-th group of test data, N_N is the number of neurons in the N-th hidden layer, i.e. the last hidden layer, ω_i is the connection weight from the i-th neuron in the N-th hidden layer to the neuron of the output layer, i = 1, 2, 3, …, N_N, y_{N,i} is the termination hidden layer output parameter of the i-th neuron in the N-th hidden layer, and γ is the threshold of the neuron of the output layer;
The analysis module causes the neurons in each hidden layer other than the first hidden layer to train the hidden layer output parameters of the previous hidden layer as input parameters to obtain corresponding hidden layer output parameters and to respectively send the hidden layer output parameters to each neuron in the next hidden layer, until all the hidden layers are trained and the neurons in the last hidden layer respectively send their hidden layer output parameters to the output layer;
The analysis module processes the received hidden layer output parameters through the output layer to obtain an output parameter Z_k, wherein P is the number of samples in the test data set and k = 1, 2, 3, …, P;
The analysis module is used for judging whether there is an error between the target data T_k and the output parameter Z_k, and if there is an error, defining an error function E = (1/2) · Σ_{k=1}^{P} (T_k − Z_k)^2;
The analysis module corrects the connection weights in the model parameters according to the error function, which specifically comprises: calculating, according to the error function, a change value Δω_t of each connection weight ω_t: Δω_t = −η · ∂E/∂ω_t, wherein ω_t is any one connection weight in the neural network model and η is the learning step length;
adjusting, starting from the output layer and in the reverse order of the training sequence of the neural network model, the corresponding connection weights ω_t in sequence according to the change values, the adjusted connection weight being ω_t' = ω_t + Δω_t.
8. The dressed human body three-dimensional size measurement system of claim 7, further comprising:
The comparison module is used for comparing the user three-dimensional size obtained by the processing module with a user standard three-dimensional size to obtain a size error;
and the data set establishing module is used for adding the corresponding user input parameters and the user standard three-dimensional size into the test data set to be used as a group of test data if the size error obtained by the comparison module exceeds a preset threshold.
9. A storage medium having a computer program stored thereon, characterized in that: the computer program, when executed by a processor, implements the method of any of claims 1 to 5.
10. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that runs on the processor, characterized in that: the processor, when executing the computer program, implements the method of any of claims 1 to 5.
CN201910837132.2A 2019-09-05 2019-09-05 Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment Pending CN110569593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910837132.2A CN110569593A (en) 2019-09-05 2019-09-05 Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110569593A (en) 2019-12-13

Family

ID=68777989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910837132.2A Pending CN110569593A (en) 2019-09-05 2019-09-05 Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110569593A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101322589A (en) * 2008-07-16 2008-12-17 苏州大学 Non-contact type human body measuring method for clothing design
CN103767219A (en) * 2014-01-13 2014-05-07 无锡吉姆兄弟时装定制科技有限公司 Noncontact human body three-dimensional size measuring method
CN107280118A (en) * 2016-03-30 2017-10-24 深圳市祈飞科技有限公司 A kind of Human Height information acquisition method and the fitting cabinet system using this method
CN107041585A (en) * 2017-03-07 2017-08-15 上海优裁信息技术有限公司 The measuring method of human dimension
CN108595905A (en) * 2017-10-25 2018-09-28 中国石油化工股份有限公司 A kind of erosion failure quantitative forecasting technique based on BP neural network model
CN109559373A (en) * 2018-10-25 2019-04-02 武汉亘星智能技术有限公司 A kind of method and system based on the 2D human body image amount of progress body

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814804A (en) * 2020-05-25 2020-10-23 武汉纺织大学 Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network
CN111814804B (en) * 2020-05-25 2022-07-12 武汉纺织大学 Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network
CN111990977A (en) * 2020-07-31 2020-11-27 南京晓庄学院 Wearable optical fiber sensor is to human biological parameter monitoring devices based on neural network
CN112648923A (en) * 2020-12-12 2021-04-13 山东省农业机械科学研究院 Virtual calculation method for detecting size of product part
CN112598114A (en) * 2020-12-17 2021-04-02 海光信息技术股份有限公司 Power consumption model construction method, power consumption measurement method and device and electronic equipment
CN112598114B (en) * 2020-12-17 2023-11-03 海光信息技术股份有限公司 Power consumption model construction method, power consumption measurement method, device and electronic equipment
CN112712239A (en) * 2020-12-23 2021-04-27 青岛弯弓信息技术有限公司 Industrial internet based collaborative manufacturing system and control method
CN112712239B (en) * 2020-12-23 2022-07-01 青岛弯弓信息技术有限公司 Industrial Internet based collaborative manufacturing system and control method
CN112700008A (en) * 2021-01-06 2021-04-23 青岛弯弓信息技术有限公司 Model matching processing method and system for cloud configuration platform
CN112700008B (en) * 2021-01-06 2022-06-28 青岛弯弓信息技术有限公司 Model matching processing method and system for cloud configuration platform

Similar Documents

Publication Publication Date Title
CN110569593A (en) Method and system for measuring three-dimensional size of dressed human body, storage medium and electronic equipment
CN108510437B (en) Virtual image generation method, device, equipment and readable storage medium
US10284992B2 (en) HRTF personalization based on anthropometric features
CN109222972B (en) fMRI whole brain data classification method based on deep learning
CN103649987B (en) Face impression analysis method, beauty information providing method and face image generation method
CN110121118A (en) Video clip localization method, device, computer equipment and storage medium
KR20220066366A (en) Predictive individual 3D body model
CN109002763B (en) Method and device for simulating human face aging based on homologous continuity
CN111242933B (en) Retinal image artery and vein classification device, apparatus, and storage medium
CN110399487B (en) Text classification method and device, electronic equipment and storage medium
CN111814804B (en) Human body three-dimensional size information prediction method and device based on GA-BP-MC neural network
CN111369428A (en) Virtual head portrait generation method and device
CN109461053B (en) Dynamic distribution method of multiple recommendation channels, electronic device and storage medium
CN111860484B (en) Region labeling method, device, equipment and storage medium
CN114648441B (en) Method and device for designing shoe body according to dynamic foot pressure distribution
WO2015153240A1 (en) Directed recommendations
CN113963148A (en) Object detection method, and training method and device of object detection model
CN111444379A (en) Audio feature vector generation method and audio segment representation model training method
CN114758636A (en) Dance music generation method, device, terminal and readable storage medium
KR20210012493A (en) A System Providing Auto Revision of Pattern with Artificial Neural Network
CN112183303A (en) Transformer equipment image classification method and device, computer equipment and medium
CN114897884A (en) No-reference screen content image quality evaluation method based on multi-scale edge feature fusion
CN110381374B (en) Image processing method and device
JP2024500224A (en) Method and apparatus for hair styling analysis
CN114647877A (en) Method and apparatus for shoe body design based on dynamic foot pressure distribution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213