CN114037066B - Data processing method and device, electronic equipment and storage medium - Google Patents
- Publication number
- CN114037066B CN202210012840.4A CN202210012840A
- Authority
- CN
- China
- Prior art keywords
- variable
- value
- predicted
- determining
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The application provides a data processing method, a data processing device, an electronic device and a storage medium. The data processing method comprises the following steps: inputting the variable value of a first variable into a first neural network for prediction to obtain the predicted value of a second variable; determining the predicted value of a third variable according to the variable value of the first variable, the predicted value of the second variable, and the constraint relation between the third variable and the first and second variables; determining the loss value of the third variable according to the actual value of the third variable under the variable value of the first variable and the predicted value of the third variable; predicting the variation of the second variable by a second neural network; determining the total count value of the second variable according to the predicted variation of the second variable and the predicted value of the second variable; determining the target loss value of the third variable according to the total count value of the second variable and judging whether an iteration end condition is reached; and if so, determining the target value of the target variable according to the total count value of the second variable and the variable value of the first variable. The data processing efficiency can thereby be improved.
Description
Technical Field
The present application relates to the field of artificial intelligence, and more particularly, to a data processing method, apparatus, electronic device, and storage medium.
Background
With the rapid development of computer technology, artificial intelligence is applied more and more widely in daily life. In the related art, data processing is usually performed by iterating many times to minimize an error and obtain an optimal solution; however, because the number of iterations is uncertain, the data processing efficiency is low. How to improve data processing efficiency is therefore an urgent technical problem in the prior art.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present application provide a data processing method, an apparatus, an electronic device, and a storage medium to improve the foregoing problems.
According to an aspect of an embodiment of the present application, there is provided a data processing method, including: inputting the variable value of the first variable into a first neural network for variable prediction to obtain a predicted value of a second variable; determining a predicted value of a third variable according to a variable value of the first variable, a predicted value of the second variable and a constraint relation between the third variable and the first variable and the second variable; determining a loss value of the third variable according to an actual value of the third variable under the variable value of the first variable and a predicted value of the third variable; determining input information according to the variable value of the first variable and the loss value of the third variable; predicting the variable quantity by a second neural network according to the input information to obtain the predicted variable quantity of the second variable; determining a total count value of the second variable according to the predicted variable quantity of the second variable and the predicted value of the second variable; determining a target loss value of the third variable according to the total count value of the second variable, and judging whether an iteration end condition is reached according to the target loss value of the third variable; and if the iteration end condition is reached, determining a target value of the target variable according to the total count value of the second variable and the variable value of the first variable.
According to an aspect of an embodiment of the present application, there is provided a data processing apparatus including: the first prediction module is used for inputting the variable value of the first variable into the first neural network for variable prediction to obtain the predicted value of the second variable; the predicted value determining module of the third variable is used for determining the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable and the second variable; a loss value determining module of a third variable, which is used for determining a loss value of the third variable according to an actual value of the third variable under the variable value of the first variable and a predicted value of the third variable; the input information determining module is used for determining input information according to the variable value of the first variable and the loss value of the third variable; the second prediction module is used for predicting the variable quantity by a second neural network according to the input information to obtain the predicted variable quantity of the second variable; the total count value determining module of the second variable is used for determining the total count value of the second variable according to the predicted variable quantity of the second variable and the predicted value of the second variable; a target loss value determining module of a third variable, configured to determine a target loss value of the third variable according to the total count value of the second variable, and determine whether an iteration end condition is reached according to the target loss value of the third variable; and the target value determining module of the target variable is used for determining the target value of the target variable according to the total count value of the second variable and the variable value of the first variable if the iteration ending condition is reached.
In some embodiments, the data processing apparatus further comprises: and the processing module is used for taking the total count value of the second variable as the predicted value of the second variable in the next iteration process if the iteration end condition is not met, and returning to execute the step of calculating to obtain the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable as well as the second variable.
In some embodiments, the data processing apparatus further comprises: and the second neural network parameter adjusting module is used for reversely adjusting the parameters of the second neural network according to at least one of the predicted variation of the second variable and the total count value of the second variable.
In some embodiments, the target loss value determining module for the third variable comprises: a judging unit, configured to judge whether the target loss value of the third variable is smaller than a target loss value threshold; and an iteration end condition determining unit, configured to determine that the iteration end condition is reached if the target loss value of the third variable is smaller than the target loss value threshold, and to determine that the iteration end condition is not reached if the target loss value of the third variable is not smaller than the target loss value threshold.
In some embodiments, the input information determination module comprises: and the first preprocessing unit is used for preprocessing the variable value of the first variable by a third neural network to obtain the preprocessed variable value of the first variable. And the second preprocessing unit is used for preprocessing the loss value of the third variable by a fourth neural network to obtain the preprocessed loss value of the third variable. And the input information determining unit is used for combining the variable value of the preprocessed first variable with the loss value of the preprocessed third variable to obtain the input information.
In some embodiments, the data processing apparatus further comprises: and the preprocessing module is used for preprocessing the actual value of the third variable by a fifth neural network to obtain the preprocessed actual value of the third variable. And the information adding module is used for adding the actual value of the preprocessed third variable into the input information.
In some embodiments, the first variable is a two-dimensional coordinate of a joint key point in a joint image, the second variable is a scale factor, the third variable is a joint length, and the target variable is a three-dimensional coordinate of the joint key point.
In some embodiments, the predicted value determination module for the third variable comprises: and the predicted three-dimensional coordinate determining unit of the joint key point is used for determining the predicted three-dimensional coordinate of the joint key point according to the coordinate value of the two-dimensional coordinate of the joint key point in the joint image and the predicted value of the scale factor. And the predicted joint length determining unit of the joint is used for determining the predicted joint length of the joint indicated by the joint image according to the predicted three-dimensional coordinates of the key points of the joint.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement a method of data processing as described above.
According to an aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a processor, implement a method of data processing as described above.
According to an aspect of embodiments of the present application, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement a method of data processing as described above.
In the scheme of the application, data processing is divided into two stages. The variable value of the first variable is obtained; the first neural network performs variable prediction according to the variable value of the first variable to obtain the predicted value of the second variable; the second neural network then predicts the variation according to the input information to obtain the predicted variation of the second variable; the total count value of the second variable is calculated based on the predicted variation of the second variable and the predicted value of the second variable; and, when the iteration end condition is reached, the target value is determined based on the total count value of the second variable and the variable value of the first variable.
Also, in the present scheme, the input information includes the loss value of the third variable and the variable value of the first variable. A coarser-granularity prediction of the second variable is obtained first; then, according to the loss value of the third variable under the variable value of the first variable and the variable value of the first variable, the second neural network performs a finer-granularity prediction of the variation of the second variable. The predicted value of the second variable and the predicted variation of the second variable are added to obtain the total count value of the second variable, and the target loss value of the third variable is determined according to the total count value of the second variable. Whether the iteration end condition is reached is judged according to the target loss value of the third variable; once it is determined that the iteration end condition is reached, the target value of the target variable is determined from the total count value of the second variable and the variable value of the first variable according to the constraint relation among the target variable, the first variable and the second variable. In the present application, the fitting operation is embedded into the neural network and the iteration end condition is limited, which improves iteration efficiency; the errors generated during data processing participate in the iteration, so the finally obtained target value is more accurate; and the value of the second variable is determined by combining coarse granularity and fine granularity, which ensures the accuracy of the determined value of the second variable and, in turn, the accuracy of the determined target value of the target variable. With this method, the iteration end condition can be reached after only a small number of iterations, so that the number of iterations is controllable and the data processing efficiency is improved.
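By way of illustration only, the two-stage flow described above may be sketched in Python as follows; the callables net1, net2, constraint, build_input and target_fn, as well as the maximum-iteration cap, are assumptions made for exposition and are not mandated by the scheme of the application.

```python
# Illustrative sketch of the two-stage scheme (not the patented implementation itself).
# All names below (net1, net2, constraint, build_input, target_fn) are assumed placeholders.
def two_stage_process(x, y_actual, net1, net2, constraint, build_input,
                      target_fn, loss_threshold, max_iters=10):
    # Stage 1: coarse-grained prediction of the second variable from the first variable.
    s = net1(x)
    s_total = s
    for _ in range(max_iters):
        y_pred = constraint(x, s)                 # predicted value of the third variable
        loss = y_actual - y_pred                  # loss value of the third variable
        delta_s = net2(build_input(x, loss))      # Stage 2: predicted variation of the second variable
        s_total = s + delta_s                     # total count value of the second variable
        target_loss = y_actual - constraint(x, s_total)
        if abs(target_loss) < loss_threshold:     # iteration end condition
            break
        s = s_total                               # reuse as the predicted value in the next iteration
    return target_fn(x, s_total)                  # target value of the target variable
```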
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a flowchart illustrating a data processing method according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating the detailed steps of step 140 according to an embodiment of the present application.
Fig. 3 is a flowchart illustrating specific steps of step 170 according to an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating a data processing procedure according to an embodiment of the present application.
Fig. 5 is a block diagram of a data processing apparatus according to an embodiment of the present application.
FIG. 6 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 is a schematic flowchart illustrating a data processing method according to an embodiment of the present application, where the method of the present application may be performed by an electronic device with processing capability, such as a server, a cloud server, and the like, and is not limited in detail herein. As shown in fig. 1, the method includes:
and step 110, inputting the variable value of the first variable into the first neural network for variable prediction to obtain a predicted value of the second variable.
In some embodiments, the variable value of the first variable may be a specific numerical value, the coordinate of a certain key point in an image, or a control variable in some automatic-control production line. The specific variable value of the first variable may be set according to actual needs and is not specifically limited herein.
In this application, the input variable of the first neural network is referred to as a first variable, where the first variable may be one or more, and it is understood that when the first variable is multiple, step 110 corresponds to obtaining a variable value of each first variable. For example, if the first variable includes the age, gender, and location of the user, the age, gender, and location of the user are obtained.
An Artificial Neural Network (ANN), Neural Network (NN) for short, is a mathematical model or computational model that simulates the structure and function of a biological neural network, and performs computation by coupling a large number of artificial neurons. A neural network is an operational model, which is composed of a large number of nodes (or neurons) and their interconnections.
In some embodiments, the first neural network may be constructed from one or more of a fully-connected network, a convolutional neural network, a recurrent neural network, a long-term memory neural network, a feed-forward neural network, and the like, which may include one or more neural network layers. In a particular embodiment, the first neural network may be a four-layer fully-connected network.
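By way of illustration, a four-layer fully-connected network of the kind mentioned above could be constructed as follows; the PyTorch form, the hidden widths and the ReLU activation are assumptions, not values fixed by the present application.

```python
import torch.nn as nn

# A minimal sketch of a four-layer fully-connected network usable as the first
# neural network; hidden size and the ReLU activation are assumed for illustration.
def make_fc_net(in_dim: int, hidden_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        nn.Linear(hidden_dim, out_dim),  # e.g. out_dim = 1 for a scalar second variable
    )
```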
In some embodiments, the first neural network may be configured as a neural network with multiple network structures; the parameters differ between different network structures, so the parameters used by the first neural network for prediction can be changed by changing its network structure, thereby enriching the predicted values of the second variable.
In some embodiments, the second variable may be a variable that is correlated to the first variable, for example the first variable may be a deformation of a spring and the second variable may be a spring force of the spring.
Thus, after the first neural network learns the correlation between the first variable and the second variable, the value of the second variable at the current variable value of the first variable may be predicted based on the variable value of the first variable. In the present application, the value of the second variable predicted by the first neural network from the first variable is referred to as a predicted value of the second variable.
And step 120, determining a predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable and the second variable.
Because the third variable has a constraint relation with the first variable and the second variable, the value of the third variable under the variable value of the first variable and the predicted value of the second variable can be calculated according to the constraint relation, the variable value of the first variable and the predicted value of the second variable. In the present application, the value of the third variable at the variable value of the first variable and the predicted value of the second variable is referred to as the predicted value of the third variable.
Specifically, the constraint relationship between the third variable and the first variable and the second variable may be embodied by a certain functional expression. For example, if the second variable is the spring force of a spring, the first variable is the deformation of the spring, and the third variable is the elastic coefficient of the spring, the functional expression is:

F = k · x

where F is the elastic force of the spring, k is the elastic coefficient of the spring, and x is the deformation amount of the spring.
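For this spring example, the constraint relationship can be written as a small function; the function and argument names below are illustrative only.

```python
# Hooke's law F = k * x as an illustrative constraint relation: given the first
# variable x (deformation) and a predicted second variable F (elastic force),
# the predicted value of the third variable k (elastic coefficient) follows directly.
def predicted_spring_coefficient(x: float, f_predicted: float) -> float:
    return f_predicted / x  # assumes x != 0
```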
And step 130, determining a loss value of the third variable according to the actual value of the third variable under the variable value of the first variable and the predicted value of the third variable.
In some embodiments, the actual value of the third variable at the variable value of the first variable is subtracted from the predicted value of the third variable to obtain a loss value of the third variable.
Step 140 determines the input information based on the variable value of the first variable and the loss value of the third variable.
In some embodiments, the variable value of the first variable may be combined with the loss value of the third variable, with the combined result as input information.
In other embodiments, the variable value of the first variable and the loss value of the third variable may be preprocessed, and then the preprocessed variable value of the first variable and the preprocessed loss value of the third variable may be combined, and the combined result may be used as the input information.
In some embodiments, the preprocessing may be to perform data format conversion, unit conversion, normalization processing, and the like, and is not particularly limited herein.
In some embodiments, the preprocessing may be further performed by a neural network model, specifically, as shown in fig. 2, the step 140 includes:
and step 210, preprocessing the variable value of the first variable by the third neural network to obtain the preprocessed variable value of the first variable.
In some embodiments, the third neural network may be constructed from one or more of a fully-connected network, a convolutional neural network, a recurrent neural network, a long-term memory neural network, a feed-forward neural network, and the like, which may include one or more neural network layers. In a particular embodiment, the third neural network may be a fully connected network.
And step 220, preprocessing the loss value of the third variable by the fourth neural network to obtain a preprocessed loss value of the third variable.
In some embodiments, the fourth neural network may be constructed from one or more of a fully-connected network, a convolutional neural network, a recurrent neural network, a long-term memory neural network, a feed-forward neural network, and the like, which may include one or more neural network layers. In a particular embodiment, the fourth neural network may be a fully connected network.
And step 230, combining the variable value of the preprocessed first variable with the loss value of the preprocessed third variable to obtain input information.
In some embodiments, the preprocessed variable value of the first variable and the preprocessed loss value of the third variable are spliced to obtain the input information.
In this embodiment, the variable value of the first variable and the loss value of the third variable are respectively preprocessed by using the third neural network and the fourth neural network, and the preprocessed variable value of the first variable and the preprocessed loss value of the third variable are combined to obtain the input information, so that the subsequent prediction of the second neural network according to the input information can be more accurate.
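A possible sketch of this preprocessing and combination step is given below; using single linear layers for the third and fourth neural networks, and the names init_net1 and init_net2 (taken from the embodiment of fig. 4), are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch: preprocess the first-variable value and the third-variable
# loss value with two small networks, then concatenate the results to form the
# input information for the second neural network.
class InputBuilder(nn.Module):
    def __init__(self, x_dim: int, loss_dim: int, feat_dim: int):
        super().__init__()
        self.init_net1 = nn.Linear(x_dim, feat_dim)     # third neural network (preprocesses x)
        self.init_net2 = nn.Linear(loss_dim, feat_dim)  # fourth neural network (preprocesses the loss)

    def forward(self, x: torch.Tensor, loss: torch.Tensor) -> torch.Tensor:
        return torch.cat([self.init_net1(x), self.init_net2(loss)], dim=-1)
```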
Referring to fig. 1, in step 150, the second neural network performs variation prediction according to the input information to obtain a predicted variation of the second variable.
In some embodiments, the second neural network may be constructed from one or more of a fully-connected network, a convolutional neural network, a recurrent neural network, a long-term memory neural network, a feed-forward neural network, and the like, which may include one or more neural network layers. In a particular embodiment, the second neural network may be a four-layer fully-connected network.
In some embodiments, when training the second neural network, the second neural network may be set as a neural network of a multi-network structure, parameters are different between different network structures, and values of the predicted variation amount may be enriched by changes to the network structure of the second neural network.
And step 160, determining a total count value of the second variable according to the predicted variable quantity of the second variable and the predicted value of the second variable.
In some embodiments, the predicted amount of change in the second variable is added to the predicted value of the second variable to obtain a total count value of the second variable.
And 170, determining a target loss value of the third variable according to the total count value of the second variable, and judging whether an iteration end condition is reached according to the target loss value of the third variable.
In some embodiments, step 170 comprises: determining a first value of the third variable according to the variable value of the first variable, the total count value of the second variable, and the constraint relation between the third variable and the first variable and the second variable; and determining the target loss value of the third variable according to the first value of the third variable and the actual value of the third variable.
In an embodiment, a value of the third variable under the variable value of the first variable and the total count value of the second variable may be calculated according to a constraint relationship among the first variable, the second variable, and the third variable, and the variable value of the first variable and the total count value of the second variable. In this application, the value of the third variable at the variable value of the first variable and the total count value of the second variable is referred to as the first value of the third variable.
And after the first value of the third variable is determined, subtracting the actual value of the third variable from the first value of the third variable to obtain a target loss value of the third variable.
In some embodiments, as shown in fig. 3, step 170 further comprises:
in step 310, it is determined whether the target loss value of the third variable is less than the target loss value threshold.
In some embodiments, the determination of whether the end-of-iteration condition is reached may be performed according to the process shown in FIG. 3. The target loss value threshold may be set according to actual needs, and is not particularly limited herein.
In step 320, if the target loss value of the third variable is smaller than the target loss value threshold, it is determined that the iteration end condition is reached.
In step 330, if the target loss value of the third variable is not less than the target loss value threshold, it is determined that the iteration end condition is not reached.
When it is determined that the iteration end condition is not reached, the process returns directly to step 120, and steps 120 to 170 are executed again.
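The end-of-iteration test of steps 310 to 330 amounts to a threshold comparison; the sketch below assumes the absolute value of the target loss is compared, which is an illustrative choice.

```python
# Sketch of the end-of-iteration test; comparing the absolute value of the target
# loss against the threshold is an assumption made only for illustration.
def iteration_finished(target_loss: float, loss_threshold: float) -> bool:
    return abs(target_loss) < loss_threshold  # True: end condition reached; False: return to step 120
```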
And step 180, if the iteration end condition is reached, determining the target value of the target variable according to the total count value of the second variable and the variable value of the first variable. The target variable has constraint relations with the first variable and the second variable, so the value of the target variable can be calculated by combining the total count value of the second variable and the variable value of the first variable based on these constraint relations. In the present application, for the sake of distinction, the value of the target variable calculated from the total count value of the second variable and the variable value of the first variable is referred to as the target value of the target variable.
In some embodiments, after step 170, the method further comprises: and if the iteration end condition is not met, taking the total count value of the second variable as a predicted value of the second variable in the next iteration process, and returning to execute the step 120.
When the iteration end condition is determined not to be met, the second neural network is required to be continuously utilized for prediction. And when the next iteration is performed, taking the total count value of the second variable in the previous iteration as the predicted value of the second variable in the step 120, and re-executing the step 120 and the subsequent steps according to the replaced predicted value of the second variable until the iteration end condition is determined to be reached according to the re-obtained target loss value of the third variable.
The total count value of the second variable in the previous iteration refers to the total count value of the second variable obtained in the previous iteration process of the current iteration turn.
In this embodiment, before performing the next iteration, the method further includes: and reversely adjusting the parameters of the second neural network according to at least one of the predicted variation of the second variable and the total count value of the second variable. And after the input information is re-determined according to the new predicted value of the second variable, the variable quantity of the second variable is re-predicted according to the input information through the second neural network after the parameters are adjusted, and the subsequent steps are executed.
The scheme of the application can be applied to an on-line application stage and a training stage of the second neural network, in the training stage, before returning to the step 120, the parameters of the second neural network are reversely adjusted, and then the variable quantity of the second variable is predicted again through the second neural network after the parameters are adjusted, so that the iteration ending condition can be reached by repeating the steps 120-170 for a few times, the iterative training of the second neural network can be controlled, and the training efficiency of the second neural network is improved. In the online application stage, according to the method of the present application, when it is determined that the iteration end condition is not reached, the iteration end condition can be reached by repeating steps 120 to 170 a small number of times, thereby ensuring that the target value of the target variable can be determined in time.
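In the training stage, the reverse adjustment of the second neural network mentioned above could, for example, be realized with a standard gradient step; the Adam optimizer and the squared-error form of the loss are assumptions made only for illustration.

```python
import torch

# Illustrative training-stage sketch: reversely adjust the parameters of the
# second neural network according to the target loss obtained from the total
# count value of the second variable.
def adjust_second_network(net2: torch.nn.Module,
                          optimizer: torch.optim.Optimizer,
                          target_loss: torch.Tensor) -> None:
    optimizer.zero_grad()
    (target_loss ** 2).mean().backward()  # gradients reach net2 through the predicted variation
    optimizer.step()

# usage (assumed): optimizer = torch.optim.Adam(net2.parameters(), lr=1e-3)
```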
In the scheme of the application, data processing is carried out by embedding the fitting operation into a deep learning network. The scheme of the application is divided into two stages, wherein the first stage is an initialization stage, and the second stage is a fitting stage. After obtaining the variable value of the first variable in the first stage, the first neural network carries out variable prediction according to the variable value of the first variable to obtain the predicted value of the second variable, and the stage is to carry out coarse-grained calculation on the predicted value of the second variable.
In the second stage, firstly, the predicted value of the third variable is determined according to the variable value of the first variable, the predicted value of the second variable obtained in the first stage, and the constraint relation between the third variable and the first variable and the second variable; the loss value of the third variable is determined by subtracting the predicted value of the third variable from the actual value of the third variable under the variable value of the first variable; input information of the second neural network is determined according to the variable value of the first variable and the loss value of the third variable; and the second neural network predicts the variation according to the input information to obtain the predicted variation of the second variable.
The predicted variation of the second variable and the predicted value of the second variable are then added to obtain the total count value of the second variable; whether the iteration end condition is reached is judged according to the target loss value of the third variable determined from the total count value of the second variable; and, when the iteration end condition is reached, the target value of the target variable is determined from the total count value of the second variable and the variable value of the first variable according to the constraint relation between the target variable and the first variable and the second variable. The variable value of the second variable is therefore determined by combining coarse granularity and fine granularity, which ensures the accuracy of the determined value of the second variable and, in turn, the accuracy of the determined target value of the target variable. Moreover, experiments show that with this method the iteration end condition can be reached after only a small number of iterations, so that the number of iterations is controllable and the data processing efficiency is improved. Experiments show that the iteration end condition can be reached after 3 iterations.
In the prior art, data processing is generally performed by iterating many times to minimize an error; however, because the number of iterations is uncertain, the data processing efficiency is low and the process cannot be learned through training. In the present application, the error information generated during data processing is added to the prediction and/or training process of the neural network, so the number of iterations is controllable and the data processing efficiency can be improved.
In some embodiments, the method further comprises:
preprocessing the actual value of the third variable by a fifth neural network to obtain a preprocessed actual value of the third variable; and adding the actual value of the preprocessed third variable into the input information.
In some embodiments, the fifth neural network may be constructed from one or more of a fully-connected network, a convolutional neural network, a recurrent neural network, a long-term memory neural network, a feed-forward neural network, and the like, which may include one or more neural network layers. In a particular embodiment, the fifth neural network may be a fully connected network.
In some embodiments, the actual value of the preprocessed third variable is added to the input information to enrich the data characteristics.
In some embodiments, the present solution may be used to determine the three-dimensional coordinates of joint key points from the two-dimensional coordinates of the joint key points in a joint image. In a specific embodiment, the first variable is the two-dimensional coordinates of the joint key points in the joint image, the second variable is the scale factor, the third variable is the joint length, and the target variable is the three-dimensional coordinates of the joint key points.
In this embodiment, the two-dimensional coordinates of the joint key points in the joint image (i.e., the variable values of the first variable) are first acquired and input into the first neural network for prediction, obtaining a predicted scale factor (i.e., the predicted value of the second variable). The predicted joint length of the joint (i.e., the predicted value of the third variable) can then be calculated according to the two-dimensional coordinates of the joint key points, the predicted scale factor and the constraint relation of the joint length; and the joint length loss (i.e., the loss value of the third variable) is obtained by subtracting the predicted joint length from the actual joint length (i.e., the actual value of the third variable).
Then, input information is determined according to the two-dimensional coordinates of the joint key points and the joint length loss. In some embodiments, the two-dimensional coordinates of the joint key points and the joint length loss may be directly combined to obtain the input information. In some embodiments, the two-dimensional coordinates of the joint key points may be preprocessed by the third neural network, the joint length loss may be preprocessed by the fourth neural network, and the preprocessed two-dimensional coordinates of the joint key points and the preprocessed joint length loss may be combined to obtain the input information. In some embodiments, the input information may also include the actual length of the joint. Of course, the actual length of the joint may be preprocessed and added to the input information.
Then, the second neural network predicts according to the input information to obtain the variable quantity of the predicted scale factor (namely the predicted variable quantity of the second variable); and then adding the predicted scale factor and the predicted scale factor variation to obtain a total scale factor (namely a total counting value of the second variable).
Thereafter, a target loss value for the joint length (i.e., a target loss value for the third variable) is calculated based on the aggregate scale factor; judging whether an iteration end condition is reached or not according to the target loss value of the joint length; and if the iteration ending condition is determined to be reached, calculating the three-dimensional coordinates of the joint key points (namely the target values of the target variables) according to the total scale factor and the two-dimensional coordinates of the joint key points.
In a particular embodiment, the constraint relationship between the third variable (i.e., joint length) and the first variable (i.e., the two-dimensional coordinates of the joint keypoints) and the second variable (i.e., the scaling factor) may be expressed in terms of the following functional expression:
L = f( s · g(p) )    (1)

where L is the joint length, s is the scale factor, g(·) denotes the function that calculates the intermediate three-dimensional coordinates of the joint key points from their two-dimensional coordinates p and the camera intrinsic matrix of the camera, and f(·) denotes the function that calculates the joint length from the three-dimensional coordinates of the joint key points. The intermediate three-dimensional coordinates of the joint key points are related to the two-dimensional coordinates of the joint key points.
In this embodiment, for a point P, its coordinates in the world coordinate system are (X, Y, Z), and its coordinates in the camera coordinate system of the camera are (X_C, Y_C, Z_C). After perspective projection by the camera, the coordinates of the point P in the image coordinate system are (u, v), and the coordinate transformation process can be described by the following formula:

Z_C · [u, v, 1]^T = K · [X_C, Y_C, Z_C]^T    (2)

where K is the camera intrinsic matrix of the camera.
From the above, it can be obtained:

[u, v, 1]^T = (1 / Z_C) · K · [X_C, Y_C, Z_C]^T    (3)
Further transformation yields:

[X_C, Y_C, Z_C]^T = Z_C · K^(-1) · [u, v, 1]^T    (4)
If the point P is a joint key point in this embodiment, the coordinates K^(-1) · [u, v, 1]^T described above correspond to the intermediate three-dimensional coordinates of the joint key point. It can be seen that the intermediate three-dimensional coordinates of a joint key point can be determined from the inverse of the camera intrinsic matrix of the camera and the two-dimensional coordinates of the joint key point.
Z_C can be regarded as the scale factor in the present embodiment. K is the intrinsic matrix of the camera, a matrix constructed from the focal length of the camera and the coordinates of the optical center of the camera:

K = [[f, 0, c_x], [0, f, c_y], [0, 0, 1]]    (5)

where f is the focal length of the camera, (c_x, c_y) are the coordinates of the optical center of the camera, and K^(-1) is the inverse of the camera intrinsic matrix of the camera.
In a specific embodiment, the three-dimensional coordinates of the joint key points are determined according to the following formula based on the total scale factor and the two-dimensional coordinates of the joint key points:

P = s_total · K^(-1) · [u, v, 1]^T    (6)

where s_total is the total scale factor (i.e., the total count value of the second variable), P = (X_C, Y_C, Z_C) is the three-dimensional coordinates of the joint key point (i.e., the target value of the target variable), and K^(-1) · [u, v, 1]^T is the intermediate three-dimensional coordinates of the joint key point, obtained as described above by multiplying the inverse of the camera intrinsic matrix by the two-dimensional coordinates of the joint key point complemented by one dimension.
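Under the pinhole-camera reading of formulas (2) to (6) above, formula (6) can be evaluated as in the following sketch; the intrinsic values in K_example are placeholders, not values from the present application.

```python
import numpy as np

# Sketch of formula (6): joint key point 3D coordinates from the total scale
# factor and the homogeneous 2D coordinates, via the inverse intrinsic matrix.
def keypoint_3d(uv: np.ndarray, s_total: float, K: np.ndarray) -> np.ndarray:
    uv1 = np.append(uv, 1.0)               # 2D coordinates complemented by one dimension
    intermediate = np.linalg.inv(K) @ uv1   # intermediate 3D coordinates of the key point
    return s_total * intermediate           # formula (6)

K_example = np.array([[1000.0,    0.0, 320.0],
                      [   0.0, 1000.0, 240.0],
                      [   0.0,    0.0,   1.0]])  # assumed focal length and optical center
```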
In this embodiment, step 120 includes: determining a predicted three-dimensional coordinate of the joint key point according to the coordinate value of the two-dimensional coordinate of the joint key point in the joint image and the predicted value of the scale factor; and determining the predicted joint length of the joint indicated by the joint image according to the predicted three-dimensional coordinates of the key points of the joint.
Specifically, the predicted three-dimensional coordinates of the joint key points can be obtained according to the above formula (6) based on the coordinate values of the two-dimensional coordinates of the joint key points in the joint image and the predicted values of the scale factors.
In this embodiment, the joint key points include at least a first joint key point indicating one end portion of the corresponding joint and a second joint key point indicating the other end portion of the corresponding joint. Thus, the predicted three-dimensional coordinates of the joint key points include at least the predicted three-dimensional coordinates of the first joint key point and the predicted three-dimensional coordinates of the second joint key point.
On this basis, the Euclidean distance between the first joint key point and the second joint key point can be calculated based on the predicted three-dimensional coordinates of the first joint key point and the predicted three-dimensional coordinates of the second joint key point, giving the predicted joint length of the joint indicated by the joint image. Specifically, the process can be described by the following formula:

L_pred = sqrt( (x_1 − x_2)^2 + (y_1 − y_2)^2 + (z_1 − z_2)^2 )    (7)

where L_pred is the predicted joint length (i.e., the predicted value of the third variable), (x_1, y_1, z_1) are the predicted three-dimensional coordinates of the first joint key point, and (x_2, y_2, z_2) are the predicted three-dimensional coordinates of the second joint key point.
In some embodiments, the target loss value of the joint length (i.e., the target loss value of the third variable) may be calculated as follows: according to the above formula (2), the target three-dimensional coordinates of the joint key points are obtained under the coordinate values of the two-dimensional coordinates of the joint key points in the joint image (i.e., the variable value of the first variable) and the total scale factor (i.e., the total count value of the second variable); the target joint length (i.e., the first value of the third variable) is then calculated from the target three-dimensional coordinates of the joint key points according to formula (7); and finally, the target loss value of the joint length is obtained from the difference between the target joint length and the actual joint length (i.e., the actual value of the third variable).
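Continuing the sketch above, formula (7) and the joint-length loss can be evaluated as follows; the signed difference is an assumed form of the loss, given only for illustration.

```python
import numpy as np

# Formula (7): predicted joint length as the Euclidean distance between the
# predicted 3D coordinates of the two joint key points, and the resulting
# joint-length loss against the actual joint length.
def joint_length(p1: np.ndarray, p2: np.ndarray) -> float:
    return float(np.linalg.norm(p1 - p2))

def joint_length_loss(p1: np.ndarray, p2: np.ndarray, actual_length: float) -> float:
    return joint_length(p1, p2) - actual_length
```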
Fig. 4 is a process diagram illustrating data processing according to an embodiment of the present application. As shown in fig. 4, the data processing process is divided into a first phase and a second phase. In a first stage, a prediction of a predicted value of a second variable is made using a first neural network. In the second stage, the amount of change of the second variable is predicted by using the second neural network.
The first stage comprises the following specific process: after the variable value of the first variable is obtained, the first neural network net1 performs prediction according to the variable value of the first variable to obtain the predicted value of the second variable. This stage is a coarser-granularity calculation of the predicted value of the second variable.
The second stage comprises the following specific process: based on the constraint relation among the first variable, the second variable and the third variable, the predicted value of the third variable under the variable value of the first variable is calculated from the variable value of the first variable and the predicted value of the second variable; the predicted value of the third variable is then subtracted from the actual value of the third variable to obtain the loss value of the third variable.
Then, the input information is determined according to the loss value of the third variable and the variable value of the first variable. Specifically, in this embodiment, as shown in fig. 4, the loss value of the third variable is preprocessed by the fourth neural network init_net2 to obtain the preprocessed loss value of the third variable; the variable value of the first variable is preprocessed by the third neural network init_net1 to obtain the preprocessed variable value of the first variable; and the actual value of the third variable is preprocessed by the fifth neural network init_net3 to obtain the preprocessed actual value of the third variable. The preprocessed variable value of the first variable, the preprocessed loss value of the third variable and the preprocessed actual value of the third variable are then combined to obtain the input information.
Of course, in other embodiments, the input information may also be obtained by combining only the preprocessed variable value of the first variable and the preprocessed loss value of the third variable.
Then, the second neural network net2 performs variation prediction according to the input information to obtain the predicted variation of the second variable.
Thereafter, the predicted variation of the second variable is added to the predicted value of the second variable to obtain the total count value of the second variable. The target value of the third variable is then calculated according to the total count value of the second variable; the target loss value of the third variable is obtained by subtracting the actual value of the third variable from the target value of the third variable; and when the target loss value of the third variable is smaller than the target loss value threshold, the iteration ends, the total count value of the second variable from the last iteration is output, and the target value of the target variable is calculated according to the total count value of the second variable and the variable value of the first variable.
In this way, the variable value of the second variable is predicted in stages: in the first stage, a coarse-granularity prediction is made from the variable value of the first variable; in the second stage, the variable value of the first variable, the loss value of the third variable and the actual value of the third variable are preprocessed by neural networks and then used in predicting the variation of the second variable. The variable value of the second variable is thus determined by combining coarse granularity and fine granularity, which ensures the accuracy of the determined value of the second variable and, in turn, the accuracy of the determined target value of the target variable. Moreover, the scheme of the application limits the iteration end condition, which improves learning efficiency.
Embodiments of the apparatus of the present application are described below, which may be used to perform the methods of the above-described embodiments of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the above-described embodiments of the method of the present application.
Fig. 5 is a block diagram illustrating a data processing apparatus according to an embodiment of the present application, and as shown in fig. 5, the data processing apparatus 500 includes: a first prediction module 510, configured to input a variable value of a first variable into a first neural network for variable prediction, so as to obtain a predicted value of a second variable; a predicted value determining module 520 for the third variable, configured to determine a predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable, and the constraint relationship between the third variable and the first variable and the second variable; a loss value determining module 530 for determining a loss value of the third variable according to an actual value of the third variable at the variable value of the first variable and a predicted value of the third variable; an input information determining module 540, configured to determine input information according to a variable value of the first variable and a loss value of the third variable; a second prediction module 550, configured to perform variable prediction on the second neural network according to the input information to obtain a predicted variable of the second variable; a total count value determination module 560 for determining a total count value of the second variable according to the predicted variation of the second variable and the predicted value of the second variable; the target loss value determining module 570 of the third variable is configured to determine a target loss value of the third variable according to the total count value of the second variable, and determine whether an iteration end condition is reached according to the target loss value of the third variable; and a target value determining module 580 for the target variable, configured to determine the target value of the target variable according to the total count value of the second variable and the variable value of the first variable if the iteration end condition is reached.
In some embodiments, the data processing apparatus 500 further comprises: and the processing module is used for taking the total count value of the second variable as the predicted value of the second variable in the next iteration process if the iteration end condition is not met, and returning to execute the step of determining the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable as well as the second variable.
In some embodiments, the data processing apparatus 500 further comprises: a second neural network parameter adjusting module, configured to reversely adjust the parameters of the second neural network according to at least one of the predicted variable quantity of the second variable and the total count value of the second variable.
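A minimal sketch of such a backward adjustment is given below. Using the target loss of the third variable as the training signal and Adam as the optimizer are assumptions; the application only requires that the adjustment depend on the predicted variable quantity and/or the total count value of the second variable.

```python
import torch

# Example setup (hypothetical): optimizer = torch.optim.Adam(refine_net.parameters(), lr=1e-3)
def adjust_second_network(optimizer: torch.optim.Optimizer, target_loss: torch.Tensor):
    # Assumed training signal: the target loss of the third variable, which depends on
    # the second network's output through the predicted variable quantity.
    optimizer.zero_grad()
    target_loss.backward()   # back-propagate through the predicted variable quantity
    optimizer.step()         # reversely adjust the parameters of the second neural network
```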
In some embodiments, the target loss value determining module 570 for the third variable comprises: a judging unit, configured to judge whether the target loss value of the third variable is smaller than a target loss value threshold; and an iteration end condition determining unit, configured to determine that the iteration end condition is reached if the target loss value of the third variable is smaller than the target loss value threshold, and to determine that the iteration end condition is not reached if the target loss value of the third variable is not smaller than the target loss value threshold.
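The threshold test itself reduces to a comparison; the default threshold value below is a hypothetical hyperparameter, not one specified by the application.

```python
def iteration_ended(target_loss: float, threshold: float = 1e-3) -> bool:
    # Iteration end condition: the target loss value of the third variable is
    # smaller than the (assumed) target loss value threshold.
    return target_loss < threshold
```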
In some embodiments, the input information determining module 540 comprises: a first preprocessing unit, configured to preprocess the variable value of the first variable through a third neural network to obtain a preprocessed variable value of the first variable; a second preprocessing unit, configured to preprocess the loss value of the third variable through a fourth neural network to obtain a preprocessed loss value of the third variable; and an input information determining unit, configured to combine the preprocessed variable value of the first variable and the preprocessed loss value of the third variable to obtain the input information.
In some embodiments, the data processing apparatus 500 further comprises: a preprocessing module, configured to preprocess the actual value of the third variable through a fifth neural network to obtain a preprocessed actual value of the third variable; and an information adding module, configured to add the preprocessed actual value of the third variable to the input information.
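One way to realize the third, fourth and fifth neural networks and the combination step is sketched below; the single linear layers, the embedding dimension and the concatenation order are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class InputInfoBuilder(nn.Module):
    """Illustrative preprocessing of the inputs fed to the second neural network."""
    def __init__(self, n_kpts: int, embed_dim: int = 32):
        super().__init__()
        self.pre_first = nn.Linear(2 * n_kpts, embed_dim)  # third neural network (2D coordinates)
        self.pre_loss = nn.Linear(1, embed_dim)             # fourth neural network (joint-length loss)
        self.pre_actual = nn.Linear(1, embed_dim)           # fifth neural network (actual joint length)

    def forward(self, kpts_2d, length_loss, actual_length):
        # kpts_2d: (batch, n_kpts, 2); length_loss, actual_length: (batch, 1)
        a = self.pre_first(kpts_2d.flatten(1))   # preprocessed variable value of the first variable
        b = self.pre_loss(length_loss)           # preprocessed loss value of the third variable
        c = self.pre_actual(actual_length)       # preprocessed actual value of the third variable
        return torch.cat([a, b, c], dim=-1)      # combined input information
```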
In some embodiments, the first variable is a two-dimensional coordinate of a key point of the joint in the image of the joint, the second variable is a scale factor, the third variable is a length of the joint, and the target variable is a three-dimensional coordinate of the key point of the joint.
In some embodiments, the predicted value determining module 520 for the third variable comprises: a predicted three-dimensional coordinate determining unit, configured to determine the predicted three-dimensional coordinates of the joint key points according to the coordinate values of the two-dimensional coordinates of the joint key points in the joint image and the predicted value of the scale factor; and a predicted joint length determining unit, configured to determine the predicted joint length of the joint indicated by the joint image according to the predicted three-dimensional coordinates of the joint key points.
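The precise form of the constraint relationship is not reproduced in this excerpt. The sketch below assumes a simple back-projection in which the scale factor supplies the depth of each key point and the joint length is measured between adjacent key points; the focal length and the key-point ordering are hypothetical.

```python
import torch

def predict_joint_length(kpts_2d: torch.Tensor, scale: torch.Tensor,
                         focal: float = 1.0) -> torch.Tensor:
    """Assumed constraint: lift 2D key points to 3D with the scale factor, then
    measure the joint length between adjacent key points.

    kpts_2d: (batch, n_kpts, 2) two-dimensional joint key-point coordinates
    scale:   (batch, 1) predicted scale factor (second variable)
    """
    z = scale.unsqueeze(-1).expand(-1, kpts_2d.shape[1], 1)   # depth taken from the scale factor
    xy = kpts_2d * z / focal                                  # back-projected X and Y
    kpts_3d = torch.cat([xy, z], dim=-1)                      # predicted 3D coordinates of the key points
    bones = kpts_3d[:, 1:, :] - kpts_3d[:, :-1, :]            # adjacent key points define the joints
    return bones.norm(dim=-1).sum(dim=-1, keepdim=True)       # predicted joint length (third variable)
```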
FIG. 6 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application. It should be noted that the computer system 600 of the electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a CPU 601, which can perform various appropriate actions and processes, such as the methods in the above-described embodiments, according to a program stored in a ROM 602 or a program loaded from a storage section 608 into a RAM 603. The RAM 603 also stores various programs and data necessary for system operation. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An I/O interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input section 606 including a keyboard, a mouse, and the like; an output section 607 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read out therefrom is installed into the storage section 608 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. When the computer program is executed by the CPU601, various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be disposed in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable storage medium carries computer readable instructions which, when executed by a processor, implement the method of any of the embodiments described above.
According to an aspect of the present application, there is also provided an electronic device, including: a processor; a memory having computer readable instructions stored thereon which, when executed by the processor, implement the method of any of the above embodiments.
According to an aspect of an embodiment of the present application, there is provided a computer program product or a computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method of any of the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present application, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (9)
1. A method of data processing, the method comprising:
inputting the variable value of the first variable into a first neural network for variable prediction to obtain a predicted value of a second variable; the first variable is a two-dimensional coordinate of a joint key point in a joint image;
determining a predicted value of a third variable according to a variable value of the first variable, a predicted value of the second variable and a constraint relation between the third variable and the first variable and the second variable; the second variable is a scale factor and the third variable is joint length;
determining a loss value of the third variable according to an actual value of the third variable under the variable value of the first variable and a predicted value of the third variable;
determining input information according to the variable value of the first variable and the loss value of the third variable;
performing variable quantity prediction through a second neural network according to the input information to obtain a predicted variable quantity of the second variable;
determining a total count value of the second variable according to the predicted variable quantity of the second variable and the predicted value of the second variable;
determining a target loss value of the third variable according to the total count value of the second variable, and judging whether an iteration end condition is reached according to the target loss value of the third variable;
if the iteration end condition is reached, determining a target value of a target variable according to the total count value of the second variable and the variable value of the first variable; the target variable is a three-dimensional coordinate of a joint key point.
2. The method of claim 1, wherein after determining the target loss value of the third variable according to the total count value of the second variable and determining whether the end-of-iteration condition is reached according to the target loss value of the third variable, the method further comprises:
and if the iteration end condition is not reached, taking the total count value of the second variable as the predicted value of the second variable in the next iteration process, and returning to execute the step of determining the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable and the second variable.
3. The method according to claim 2, wherein before taking the total count value of the second variable as the predicted value of the second variable in the next iteration process and returning to execute the step of determining the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable and the second variable, the method further comprises:
and reversely adjusting the parameters of the second neural network according to at least one of the predicted variable quantity of the second variable and the total count value of the second variable.
4. The method of claim 1, wherein determining input information based on the variable value of the first variable and the loss value of the third variable comprises:
preprocessing the variable value of the first variable by a third neural network to obtain the preprocessed variable value of the first variable;
preprocessing the loss value of the third variable by a fourth neural network to obtain a preprocessed loss value of the third variable;
and combining the preprocessed variable value of the first variable with the preprocessed loss value of the third variable to obtain the input information.
5. The method of claim 1, further comprising:
preprocessing the actual value of the third variable by a fifth neural network to obtain a preprocessed actual value of the third variable;
and adding the preprocessed actual value of the third variable into the input information.
6. The method of claim 1, wherein determining a predicted value of a third variable according to the variable value of the first variable and the predicted value of the second variable and the constraint relationship between the third variable and the first variable and the second variable comprises:
determining a predicted three-dimensional coordinate of the joint key point according to the coordinate value of the two-dimensional coordinate of the joint key point in the joint image and the predicted value of the scale factor;
and determining the predicted joint length of the joint indicated by the joint image according to the predicted three-dimensional coordinates of the joint key points.
7. A data processing apparatus, characterized in that the apparatus comprises:
the first prediction module is used for inputting the variable value of the first variable into the first neural network for variable prediction to obtain the predicted value of the second variable; the first variable is a two-dimensional coordinate of a joint key point in a joint image;
the predicted value determining module of the third variable is used for determining the predicted value of the third variable according to the variable value of the first variable, the predicted value of the second variable and the constraint relation between the third variable and the first variable and the second variable; the second variable is a scale factor and the third variable is joint length;
a loss value determining module of a third variable, which is used for determining a loss value of the third variable according to an actual value of the third variable under the variable value of the first variable and a predicted value of the third variable;
the input information determining module is used for determining input information according to the variable value of the first variable and the loss value of the third variable;
the second prediction module is used for performing variable quantity prediction through a second neural network according to the input information to obtain the predicted variable quantity of the second variable;
the total count value determining module of the second variable is used for determining the total count value of the second variable according to the predicted variable quantity of the second variable and the predicted value of the second variable;
a target loss value determining module of a third variable, configured to determine a target loss value of the third variable according to the total count value of the second variable, and determine whether an iteration end condition is reached according to the target loss value of the third variable;
the target value determining module of the target variable is used for determining the target value of the target variable according to the total count value of the second variable and the variable value of the first variable if the iteration ending condition is reached; the target variable is a three-dimensional coordinate of a joint key point.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory electrically connected with the one or more processors;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-6.
9. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210012840.4A CN114037066B (en) | 2022-01-07 | 2022-01-07 | Data processing method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114037066A CN114037066A (en) | 2022-02-11 |
CN114037066B (en) | 2022-04-12
Family
ID=80141386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210012840.4A (CN114037066B, Active) | Data processing method and device, electronic equipment and storage medium | 2022-01-07 | 2022-01-07
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114037066B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871791A (en) * | 2019-01-31 | 2019-06-11 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN112288086A (en) * | 2020-10-30 | 2021-01-29 | 北京市商汤科技开发有限公司 | Neural network training method and device and computer equipment |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11468315B2 (en) * | 2018-10-24 | 2022-10-11 | Equifax Inc. | Machine-learning techniques for monotonic neural networks |
KR20210087680A (en) * | 2020-01-03 | 2021-07-13 | 네이버 주식회사 | Method and apparatus for generating data for estimating 3 dimensional pose of object included in input image, and prediction model for estimating 3 dimensional pose of object |
CN112562069B (en) * | 2020-12-24 | 2023-10-27 | 北京百度网讯科技有限公司 | Method, device, equipment and storage medium for constructing three-dimensional model |
CN112836618B (en) * | 2021-01-28 | 2023-10-20 | 清华大学深圳国际研究生院 | Three-dimensional human body posture estimation method and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114037066A (en) | 2022-02-11 |
Similar Documents
Publication | Title |
---|---|
EP3446260B1 (en) | Memory-efficient backpropagation through time | |
CN112955907B (en) | Method and system for quantitatively training long-term and short-term memory neural networks | |
CN110929869B (en) | Sequence data processing method, device, equipment and storage medium | |
CN110766142A (en) | Model generation method and device | |
CN109635990B (en) | Training method, prediction method, device, electronic equipment and storage medium | |
CN111259647A (en) | Question and answer text matching method, device, medium and electronic equipment based on artificial intelligence | |
CN111027672A (en) | Time sequence prediction method based on interactive multi-scale recurrent neural network | |
CN112464042B (en) | Task label generating method and related device for convolution network according to relationship graph | |
CN115151917A (en) | Domain generalization via batch normalized statistics | |
CN112420125A (en) | Molecular attribute prediction method and device, intelligent equipment and terminal | |
CN114418189A (en) | Water quality grade prediction method, system, terminal device and storage medium | |
CN116109449A (en) | Data processing method and related equipment | |
US20230206036A1 (en) | Method for generating a decision support system and associated systems | |
CN111161238A (en) | Image quality evaluation method and device, electronic device, and storage medium | |
CN113110843B (en) | Contract generation model training method, contract generation method and electronic equipment | |
CN108509179B (en) | Method for detecting human face and device for generating model | |
CN114118570A (en) | Service data prediction method and device, electronic equipment and storage medium | |
Ortega-Zamorano et al. | FPGA implementation of neurocomputational models: comparison between standard back-propagation and C-Mantec constructive algorithm | |
CN117422182A (en) | Data prediction method, device and storage medium | |
CN114037066B (en) | Data processing method and device, electronic equipment and storage medium | |
Shetty et al. | A Weighted Ensemble of VAR and LSTM for Multivariate Forecasting of Cloud Resource Usage | |
CN113723712B (en) | Wind power prediction method, system, equipment and medium | |
Tec et al. | A Comparative Tutorial of Bayesian Sequential Design and Reinforcement Learning | |
CN115271207A (en) | Sequence relation prediction method and device based on gated graph neural network | |
CN115169692A (en) | Time series prediction method and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||