WO2024007938A1 - Multi-task prediction method and apparatus, electronic device, and storage medium - Google Patents
Multi-task prediction method and apparatus, electronic device, and storage medium
- Publication number: WO2024007938A1 (application PCT/CN2023/103755)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: task, prediction, key point, loss, prediction result
Classifications
- G06N3/04 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Neural networks; learning methods
- G06V10/764 — Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/82 — Image or video recognition using neural networks
- G06V40/20 — Recognition of human-related patterns; movements or behaviour, e.g. gesture recognition
Definitions
- Embodiments of the present disclosure relate to the field of computer technology, for example, to a multi-task prediction method, device, electronic device, and storage medium.
- Multi-task learning can refer to a method of jointly training multiple tasks using useful information from multiple related but different tasks.
- In multi-task learning, the reasonable construction of the multiple task losses has an important impact on the training effect.
- When the multiple tasks include a key point prediction task, using existing losses to perform regression training on the model cannot guarantee the training effect, and joint training is prone to failure, which directly affects the accuracy of multi-task prediction.
- Embodiments of the present disclosure provide a multi-task prediction method, device, electronic device and storage medium, which can realize multi-task joint training including key point prediction tasks, achieve good training results, and ensure the accuracy of multi-task prediction.
- embodiments of the present disclosure provide a multi-task prediction method, including:
- the at least one prediction task includes a key point prediction task;
- the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point position label.
- embodiments of the present disclosure also provide a multi-task prediction device, including:
- an input module, configured to input the original image into the preset model
- An output module configured to output the prediction result of at least one prediction task for the original image through the preset model
- the at least one prediction task includes a key point prediction task;
- the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point location label.
- embodiments of the present disclosure also provide an electronic device, including:
- a storage device configured to store at least one program; and at least one processor.
- When the at least one program is executed by the at least one processor, the at least one processor is caused to implement the multi-task prediction method as described in any one of the embodiments of the present disclosure.
- embodiments of the present disclosure also provide a readable storage medium containing a computer program that, when executed by a computer processor, performs the multi-task prediction method as described in any embodiment of the present disclosure.
- Figure 1 is a schematic flowchart of a multi-task prediction method provided by an embodiment of the present disclosure
- Figure 2 is a schematic flowchart of the training steps of a preset model in a multi-task prediction method provided by an embodiment of the present disclosure
- Figure 3 is a schematic block diagram of the preset model training steps in a multi-task prediction method provided by an embodiment of the present disclosure
- Figure 4 is a schematic structural diagram of a multi-task prediction device provided by an embodiment of the present disclosure.
- FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the term “include” and its variations are open-ended, i.e., “including but not limited to.”
- the term “based on” means “based at least in part on.”
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
- Figure 1 is a schematic flowchart of a multi-task prediction method provided by an embodiment of the present disclosure.
- the embodiments of the present disclosure are suitable for multi-task prediction of images through a preset model, where the multi-task includes a key point prediction task, and the preset model is trained based on the real error distribution of the key point prediction task.
- the method can be executed by a multi-task prediction device, which can be implemented in the form of at least one of software and hardware.
- the device can be configured in electronic equipment, such as mobile phones, computers and other equipment.
- the multi-task prediction method provided by this embodiment may include:
- the original image may be an image obtained in compliance with relevant laws and regulations.
- the preset model may be a neural network model, which may be used for prediction of at least one task of the original image.
- the preset model can include a backbone network shared by the multiple tasks and an independent branch network for each task.
- the shared features of the original image can be extracted through the backbone network; the shared features can be input to separate branch networks for each task to output the prediction results of each task respectively.
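The shared-backbone-plus-branches arrangement described above can be sketched in a few lines of plain Python. This is an illustrative toy, not the patent's model: the backbone here is a stand-in for a CNN, and all function and key names are assumptions.

```python
# Minimal sketch of a multi-task model: one shared backbone, one branch per task.
# All names (shared_backbone, keypoint_head, ...) are illustrative.

def shared_backbone(image):
    """Stand-in for a CNN: reduce the image to a shared feature vector."""
    flat = [p for row in image for p in row]
    mean = sum(flat) / len(flat)
    return [mean, max(flat), min(flat)]  # toy shared features

def keypoint_head(features):
    """Branch network: regress (x, y) key point coordinates."""
    return [(features[0], features[1])]

def classification_head(features):
    """Branch network: one score per class (e.g. gesture classes)."""
    return [f * 0.5 for f in features]

def predict(image):
    features = shared_backbone(image)              # computed once
    return {
        "keypoints": keypoint_head(features),      # each head reuses the
        "classes": classification_head(features),  # same shared features
    }

image = [[0.1, 0.9], [0.4, 0.6]]
out = predict(image)
```

The key property is that the backbone runs once per image and every task head consumes the same feature vector, which is what makes single-model multi-task inference cheaper than running one model per task.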
- At least one prediction task may include a key point prediction task.
- the key point prediction task can refer to the task of predicting the location of key points from the original image.
- Different types of original images have different key points that need to be predicted.
- key points that need to be predicted in hand images may include finger points
- key points that need to be predicted in limb images may include joint points, etc.
- sample images of the same category as the original images can be input into the preset model, and the preset model can output prediction results of at least one prediction task for the sample images.
- the loss term for each task can be determined based on the prediction results of each task and the true value label of each task, so that the preset model can be trained based on the loss term of at least one task.
- the backbone network in the preset model can be trained based on the loss term of at least one task.
- the prediction result of the key point prediction task for the sample image may be called a first prediction result.
- the loss term of the preset model during the training process may include a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point position label.
- the distribution of a variable around its real value can affect the loss function used.
- For example, when the variable obeys a Gaussian distribution around the real value, the corresponding loss function is the mean square error.
- the probability distribution and the loss function can be connected through likelihood estimation.
- the mean square error is a loss function obtained by estimating the Gaussian distribution of variables through maximum likelihood estimation.
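The link between the Gaussian assumption and the mean square error can be checked numerically: minimizing the Gaussian negative log-likelihood with fixed variance selects the same estimate as minimizing the mean square error. A small grid-search demonstration (sample values are illustrative):

```python
import math

# Check that minimizing Gaussian negative log-likelihood (fixed sigma)
# is the same as minimizing mean square error: both pick the sample mean.

samples = [1.0, 2.0, 4.0, 5.0]

def gaussian_nll(mu, sigma=1.0):
    return sum(0.5 * math.log(2 * math.pi * sigma**2)
               + (x - mu) ** 2 / (2 * sigma**2) for x in samples)

def mse(mu):
    return sum((x - mu) ** 2 for x in samples) / len(samples)

candidates = [i / 100 for i in range(0, 601)]   # grid over [0, 6]
best_nll = min(candidates, key=gaussian_nll)
best_mse = min(candidates, key=mse)
# both minimizers coincide with the sample mean, 3.0
```

This is exactly the sense in which mean square error "assumes" a Gaussian error distribution: if the real error distribution is not Gaussian, the MSE is no longer the maximum-likelihood loss.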
- the error distribution between the first prediction result and the key point position label can be considered as the probability distribution of the first prediction result around the real key point, and the real error distribution can be represented by a distribution function.
- the error between the first prediction result and the key point position label can be used as sample data.
- the distribution function can be approximated by using a neural network, or by mathematical modeling. After determining the error distribution between the first prediction result and the key point position label, the loss function can be obtained through likelihood estimation of the error distribution, that is, the first loss is obtained.
- key point position prediction often uses the mean square error between the predicted coordinates and the true value label as the loss term for model training.
- This loss term is constructed in a way that implicitly assumes the predicted key points obey a Gaussian distribution around the true value.
- using existing losses to perform regression training on the model cannot guarantee the training effect, and joint training is prone to failure.
- an appropriate loss function can be constructed to help the model parameters learn efficiently and accurately. This not only optimizes the prediction effect of key point locations, but also enables multi-task joint training to achieve better results.
- the preset model is obtained through multi-task joint training and can perform multi-task predictions. Compared with performing multi-task predictions based on multiple models, the preset model can not only align the effects of separate models for each task, but also reduce the number of models to one to reduce inference time.
- the technical solution of the embodiment of the present disclosure is to input the original image into a preset model and output the prediction result of at least one prediction task for the original image through the preset model, wherein the at least one prediction task includes a key point prediction task, and the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point location label.
- the embodiments of the present disclosure can be combined with the optional solutions in the multi-task prediction method provided in the above embodiments.
- the multi-task prediction method provided in this embodiment describes in detail the construction steps of the first loss in the training process of the preset model. By constructing a flow model based on the first prediction result, the real error distribution of the key points can be fitted with the flow model; by estimating the error distribution through residual likelihood, the first loss can be quickly obtained.
- FIG. 2 is a schematic flowchart of the training steps of a preset model in a multi-task prediction method provided by an embodiment of the present disclosure.
- the training steps of the preset model in the multi-task prediction method can include:
- the sample image is an image belonging to the same category as the original image.
- S220 Output the prediction result of at least one prediction task for the sample image through the preset model.
- At least one prediction task may include a key point prediction task, and the prediction result of the key point prediction task may be called a first prediction result.
- the goal of constructing a flow-based generative model is to train a generator.
- the simple distribution π(z) may be, for example, a Gaussian distribution, a Laplace distribution, etc.
- the complex distribution p G (x) may refer to the distribution of the error between the first prediction result and the key point position label.
- the simple distribution can be substituted into the flow model to obtain the error distribution between the first prediction result and the key point position label.
- constructing the flow model based on the first prediction result and the key point position label includes: sampling the error between the first prediction result and the key point position label, and the first preset distribution, to obtain a first sample and a second sample respectively; and building the flow model based on the first sample and the second sample.
- the first preset distribution can be considered as a simple distribution.
- the error between the first prediction result and the key point position label, and the values in the first preset distribution, can be sampled to obtain the first sample x_i and the second sample z_i respectively.
- the corresponding relationship between the first sample and the second sample can be Formula 1: p_G(x_i) = π(G⁻¹(x_i)) · |det(J)|, where p_G(·) is the error distribution, π(·) is the simple distribution, det(J) is the Jacobian determinant of G⁻¹, and G⁻¹ is the inverse of the flow model.
- a flow model can be constructed based on Equation 1, the first sample, and the second sample.
- G⁻¹ can be obtained by substituting the first sample and the second sample collected each time into Formula 1, and the initial flow model can be obtained by inverting G⁻¹.
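Formula 1 is the standard change-of-variables relation for flow models, and it can be verified on a toy case. The one-dimensional affine flow G(z) = a·z + b below is an assumption for illustration only (it is not the patent's flow model); with a standard-normal simple distribution, the density produced by the formula matches the known closed form N(b, a²):

```python
import math

# Formula 1 for a toy one-dimensional flow G(z) = a*z + b:
# p_G(x) = pi(G_inv(x)) * |det J|, with G_inv(x) = (x - b)/a and |det J| = 1/|a|.
# With pi = N(0, 1), p_G should equal the density of N(b, a^2).

a, b = 2.0, 1.0

def pi(z):
    """Simple base distribution: standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def p_G(x):
    """Pushed-forward density via the change-of-variables formula."""
    z = (x - b) / a                 # G^{-1}(x)
    return pi(z) * (1 / abs(a))     # times |det J_{G^{-1}}|

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# p_G matches N(1, 2^2) at every point we check
checks = [abs(p_G(x) - normal_pdf(x, b, a)) < 1e-12 for x in (-3.0, 0.0, 1.0, 4.5)]
```

In the patent's setting the flow G is learned rather than fixed, but the same formula is what ties the simple distribution π(z) to the complex error distribution p_G(x).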
- the likelihood estimate of the initial flow model satisfies the preset condition, which may include that the likelihood estimate of the initial flow model satisfies the maximum likelihood estimate.
- the initial flow model can maximize the probability of the first sample appearing, that is, satisfy the maximum likelihood estimation function
- the final flow model can be obtained.
- the first preset distribution can be input into the trained flow model, and the error distribution between the first prediction result and the key point position label is output through the trained flow model.
- S250 Perform log likelihood estimation on the residuals of the error distribution and the second preset distribution, and use the obtained residual likelihood estimation loss as the first loss.
- In addition to performing log likelihood estimation on the residual of the error distribution and the second preset distribution, likelihood estimation can also be performed directly on the error distribution to obtain the first loss; however, this approach makes the regression of the prediction model slightly slower.
- In the likelihood function, p_G(·) is the error distribution, N(0, 1) is a simple Gaussian distribution (that is, the second preset distribution), and s is the correction term.
- the residual log-likelihood estimation loss (Residual Log-likelihood Estimation Loss, RLE-Loss) can be determined according to the residual part of the likelihood function equation, and RLE-Loss can be used as the first loss.
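The exact loss equation is elided in the text, so the following is only a hedged sketch of a residual-log-likelihood style first loss, following the common RLE formulation (a simple N(0, 1) term, a learned residual term, and a log-sigma term); every name in it (mu_hat, sigma_hat, residual_log_density) is an illustrative assumption, not the patent's definition.

```python
import math

# Hedged sketch of a residual-log-likelihood style loss. The learned density
# is factored into a simple term Q (here N(0,1)) times a learned residual
# term; the loss is the negative log-likelihood of the normalized error.

def log_q(x_bar):
    """Log density of the simple N(0, 1) term Q."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * x_bar * x_bar

def residual_log_density(x_bar):
    """Stand-in for the learned residual term (a toy correction)."""
    return -abs(x_bar) * 0.1

def rle_style_loss(mu_hat, sigma_hat, label):
    x_bar = (label - mu_hat) / sigma_hat  # normalized key point error
    # negative log-likelihood: simple term + learned residual, plus log sigma
    return -(log_q(x_bar) + residual_log_density(x_bar)) + math.log(sigma_hat)

# a prediction closer to the label yields a smaller loss
near = rle_style_loss(mu_hat=1.0, sigma_hat=0.5, label=1.1)
far = rle_style_loss(mu_hat=1.0, sigma_hat=0.5, label=3.0)
```

The point of the residual factorization is that the simple N(0, 1) term keeps gradients well-behaved early in training, while the learned term adapts the loss to the real (possibly non-Gaussian) error distribution.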
- the backbone network of the preset model can be trained according to the first loss, which also optimizes the prediction effects of the other tasks.
- the technical solution of the embodiment of the present disclosure describes in detail the construction steps of the first loss in the training process of the preset model.
- the real error distribution of the key points can be fitted through the flow model.
- the first loss can be quickly obtained.
- the multi-task prediction method provided by the embodiments of the present disclosure belongs to the same concept as the multi-task prediction method provided by the above-mentioned embodiments.
- For technical details not described in this embodiment, refer to the above-mentioned embodiments; the same technical features have the same effects as in the above embodiments.
- the embodiments of the present disclosure can be combined with the optional solutions in the multi-task prediction method provided in the above embodiments.
- the preset model can be applied to multi-task prediction of hand images.
- the at least one task can also include at least one of a gesture recognition task and a left and right hand classification task.
- the at least one task may also include a gesture classification task; the loss term of the preset model during the training process may also include: a second loss constructed based on the second prediction result of the gesture classification task and the gesture classification label.
- the hand image can be input into the preset model, so that the preset model outputs the prediction results of the hand key point prediction task and the gesture classification task.
- the prediction results of the hand key point prediction task can include the position coordinates of at least one finger point on the hand; the prediction results of the gesture classification task can include gesture classifications such as “V”, “OK” or “five fingers spread”.
- sample images of the hand can be input into the preset model, and the first prediction result of the hand key point prediction task and the second prediction result of the gesture classification task are output through the preset model.
- a first loss can be constructed based on the error distribution between the first prediction result and the key point position label
- a second loss (such as cross-entropy loss) can be constructed based on the second prediction result and the gesture classification label.
- the second loss can be expressed as L_2 = CE(y_gesture, y*_gesture), where y_gesture is the second prediction result, y*_gesture is the gesture classification label, and CE(·) is the cross-entropy loss function. Furthermore, the preset model can be trained according to the first loss and the second loss.
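A minimal sketch of the cross-entropy loss for the gesture classification task follows; the class set and probability values are illustrative, not from the patent.

```python
import math

# Cross-entropy between predicted gesture probabilities and a gesture
# classification label: CE = -log p(correct class).

def cross_entropy(pred_probs, label_index):
    return -math.log(pred_probs[label_index])

gestures = ["V", "OK", "five fingers spread"]
pred = [0.7, 0.2, 0.1]   # second prediction result (softmax output)
label = 0                # ground-truth gesture: "V"
second_loss = cross_entropy(pred, label)
```

The same function serves for the third loss (left and right hand classification), just with a two-class label set.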
- the at least one task may also include a left and right hand classification task; the loss term of the preset model during the training process may also include: a third loss constructed based on the third prediction result of the left and right hand classification task and the left and right hand classification label.
- the hand image can be input into the preset model, so that the preset model outputs the prediction results of the hand key point prediction task and the left and right hand classification task.
- the prediction results of the left-hand and right-hand classification tasks can include the classification of the left hand and the right hand.
- sample images of the hand can be input into the preset model, and the first prediction result of the hand key point prediction task and the third prediction result of the left and right hand classification task are output through the preset model.
- the first loss can be constructed based on the error distribution between the first prediction result and the key point position label
- the third loss (for example, it can also be a cross-entropy loss) can be constructed based on the third prediction result and the left and right hand classification labels.
- the third loss can be expressed as L_3 = CE(y_lr, y*_lr), where y_lr is the third prediction result, y*_lr is the left and right hand classification label, and CE(·) is the cross-entropy loss function. Furthermore, the preset model can be trained according to the first loss and the third loss.
- when the at least one task includes a hand key point prediction task, the at least one task may also include at least one of a gesture recognition task and a left and right hand classification task.
- at least one of the second loss and the third loss can also be used to train the preset model.
- other prediction tasks based on hand images can also be implemented based on the preset model disclosed in this embodiment, and are not exhaustive here.
- FIG. 3 is a schematic block diagram of the preset model training steps in a multi-task prediction method provided by an embodiment of the present disclosure.
- the image features can be extracted through the multi-task shared backbone network in the preset model.
- the backbone network can be, for example, a convolutional neural network (Convolutional Neural Networks, CNN), or it can also be other feature extraction networks.
- the extracted features of the sample images can be input into multiple task-independent branch networks respectively to output the prediction results of multiple tasks respectively.
- the prediction result of the hand key point prediction task can be called the first prediction result; the prediction result of the gesture classification task can be called the second prediction result; and the prediction result of the left and right hand classification task can be called the third prediction result.
- the flow model can be constructed based on the first prediction result and the key point position label; based on the constructed flow model, the error distribution between the first prediction result and the key point position label is determined, denoted as the distribution P(·) in the figure.
- the second loss can also be constructed based on the second prediction result and the gesture classification label.
- a third loss can also be constructed based on the third prediction result and the left and right hand classification labels.
- the total loss function can be composed of the first loss, the second loss and the third loss.
- the preset model can be trained through the total loss function.
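One plausible reading of the total loss function is a (possibly weighted) sum of the three task losses; the weights below are an assumption, since the text does not specify any weighting.

```python
# Total loss for joint training: first (key point), second (gesture) and
# third (left/right hand) losses combined. Equal weights are an assumption.

def total_loss(first_loss, second_loss, third_loss, w=(1.0, 1.0, 1.0)):
    return w[0] * first_loss + w[1] * second_loss + w[2] * third_loss

loss = total_loss(0.8, 0.3, 0.2)
```

Because every task loss depends on the shared backbone features, backpropagating this single scalar trains the backbone on all three tasks at once.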
- the three models that would otherwise perform the tasks separately can be reduced to one model, which reduces the time required for inference while maintaining the effect of the original independent models.
- after outputting the prediction result of at least one prediction task for the original image through the preset model, the method may also include: generating a gesture control instruction according to the prediction result of the at least one prediction task, so that the target application performs a corresponding action in response to the gesture control instruction.
- preset models can be applied to on-device gesture recognition.
- the hand image may be collected in response to a collection instruction input by the user, and at least one prediction result for the hand image may be output through a preset model.
- gesture control instructions can be generated based on the prediction results to cause the target application on the terminal to perform corresponding actions.
- the target application program may refer to a program corresponding to the gesture control instruction.
- the gesture control instruction can be a confirmation instruction; accordingly, the target application can receive the confirmation instruction and perform the corresponding follow-up program.
- the preset model can be applied to multi-task prediction of hand images.
- at least one task can also include at least one of the gesture recognition task and the left and right hand classification task.
- the real error distribution of hand key points to construct the loss term, it can not only optimize the model's prediction effect for hand key points, but also enable the model to achieve better results in multi-task learning of gesture recognition tasks and left and right hand classification tasks.
- the multi-task prediction method provided by the embodiments of the present disclosure belongs to the same concept as the multi-task prediction method provided by the above-mentioned embodiments.
- For technical details not described in this embodiment, refer to the above-mentioned embodiments; the same technical features have the same effects as in the above embodiments.
- FIG. 4 is a schematic structural diagram of a multi-task prediction device provided by an embodiment of the present disclosure.
- the embodiments of the present disclosure are suitable for multi-task prediction of images through a preset model, where the multi-task includes a key point prediction task, and the preset model is trained based on the real error distribution of the key point prediction task.
- the multi-task prediction device provided by the embodiment of the present disclosure may include:
- the input module 410 is configured to input the original image into the preset model
- the output module 420 is configured to output the prediction result of at least one prediction task for the original image through the preset model
- At least one prediction task includes a key point prediction task; the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point position label.
- the multi-task prediction device may also include:
- the error distribution between the first prediction result and the key point position label is determined.
- the loss building block can be set to:
- the loss building block can be set to:
- the loop determines the initial flow model based on the first sample and the second sample
- the initial flow model is updated iteratively until the likelihood estimate of the initial flow model meets the preset conditions, and the flow model is obtained.
- the loss building module can also be set to build the first loss based on the following steps:
- when the key point prediction task is a hand key point prediction task, the at least one task also includes a gesture classification task;
- the loss term of the preset model during the training process also includes: a second loss constructed based on the second prediction result of the gesture classification task and the gesture classification label.
- when the key point prediction task is a hand key point prediction task, the at least one task also includes a left and right hand classification task;
- the loss term of the preset model during the training process also includes: a third loss constructed based on the third prediction result of the left and right hand classification task and the left and right hand classification labels.
- the multi-task prediction device also includes:
- the control module is configured to, after outputting the prediction result of at least one prediction task for the original image through the preset model, generate a gesture control instruction according to the prediction result of the at least one prediction task, so that the target application executes a corresponding action in response to the gesture control instruction.
- the multi-task prediction device provided by the embodiments of the present disclosure can execute the multi-task prediction method provided by any embodiment of the present disclosure, and has corresponding functional modules and effects of the execution method.
- FIG. 5 shows a schematic structural diagram of an electronic device (such as the terminal device or server in FIG. 5 ) 500 suitable for implementing embodiments of the present disclosure.
- Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable multimedia players (PMP), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
- the electronic device shown in FIG. 5 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
- the electronic device 500 may include a processor (such as a central processing unit or a graphics processor) 501, and the processor 501 may execute various appropriate actions and processes according to a program stored in read-only memory (ROM) 502 or a program loaded from the storage device 508 into random access memory (RAM) 503.
- In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored.
- the processor 501, ROM 502, and RAM 503 are connected to each other through a bus 504.
- An input/output (I/O) interface 505 is also connected to bus 504.
- input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 507 including, for example, a liquid crystal display (LCD), speaker, and vibrator; storage devices 508 including, for example, a magnetic tape and hard disk; and a communication device 509.
- Communication device 509 may allow electronic device 500 to communicate wirelessly or wiredly with other devices to exchange data.
- Although FIG. 5 illustrates electronic device 500 with various means, it should be understood that implementing or providing all of the illustrated means is not required. More or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
- the computer program may be downloaded and installed from the network via communication device 509, or from storage device 508, or from ROM 502.
- When the computer program is executed by the processor 501, the above functions defined in the multi-task prediction method of the embodiments of the present disclosure are performed.
- the electronic device provided by the embodiment of the present disclosure belongs to the same concept as the multi-task prediction method provided by the above embodiment.
- Technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same effects as the above embodiments.
- Embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored.
- When the program is executed by a processor, the multi-task prediction method provided in the above embodiments is implemented.
- the computer-readable storage medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
- the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
- Examples of computer-readable storage media may include, but are not limited to: an electrical connection having at least one conductor, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory (FLASH), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program codes contained on computer-readable storage media can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
- the client and server can communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communications (e.g., a communications network) in any form or medium.
- Examples of communications networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
- the above-mentioned computer-readable storage medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
- the computer-readable storage medium carries at least one program which, when executed by the electronic device, causes the electronic device to perform the steps of the multi-task prediction method described above.
- Computer program code for performing the operations of the present disclosure may be written in at least one programming language or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains at least one executable instruction for implementing the specified logical function.
- It may also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
- the units involved in the embodiments of the present disclosure can be implemented in software or hardware. Among them, the names of units and modules do not constitute limitations on the units and modules themselves.
- exemplary types of hardware logic components include: field programmable gate array (Field Programmable Gate Array, FPGA), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), application specific standard product (Application Specific Standard Parts, ASSP), System on Chip (SOC), Complex Programmable Logic Device (CPLD), etc.
- a machine-readable storage medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium.
- Machine-readable storage media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
- Examples of machine-readable storage media include an electrical connection based on at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- Example 1 provides a multi-task prediction method, including:
- the at least one prediction task includes a key point prediction task;
- the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point position label.
- Example 2 provides a multi-task prediction method, which also includes:
- the error distribution between the first prediction result and the key point position label is determined based on the following steps:
- a flow model is constructed based on the first prediction result and the key point position label;
- an error distribution between the first prediction result and the key point position label is determined according to the constructed flow model.
- Example 3 provides a multi-task prediction method, which also includes:
- constructing a flow model based on the first prediction result and the key point position label includes:
- the error between the first prediction result and the key point position label, and a first preset distribution, are sampled to obtain a first sample and a second sample respectively;
- a flow model is constructed based on the first sample and the second sample.
- Example 4 provides a multi-task prediction method, which also includes:
- constructing a flow model based on the first sample and the second sample includes:
- an initial flow model is cyclically determined based on the first sample and the second sample;
- the initial flow model is iteratively updated until the likelihood estimate of the initial flow model meets a preset condition, and the flow model is obtained.
- Example 5 provides a multi-task prediction method, which also includes:
- the first loss is constructed based on the following steps:
- Log-likelihood estimation is performed on the residual between the error distribution and a second preset distribution, and the resulting residual likelihood estimation loss is used as the first loss.
- Example 6 provides a multi-task prediction method, which also includes:
- when the key point prediction task is a hand key point prediction task, the at least one task further includes a gesture classification task;
- the loss term of the preset model during the training process also includes: a second loss constructed based on the second prediction result of the gesture classification task and the gesture classification label.
- Example 7 provides a multi-task prediction method, which also includes:
- when the key point prediction task is a hand key point prediction task, the at least one task further includes a left-right hand classification task;
- the loss term of the preset model during the training process also includes: a third loss constructed based on the third prediction result of the left and right hand classification task and the left and right hand classification labels.
- Example 8 provides a multi-task prediction method, further including:
- after outputting the prediction result of at least one prediction task for the original image through the preset model, the method further includes:
- a gesture control instruction is generated according to the prediction result of the at least one prediction task, so that the target application program performs a corresponding action in response to the gesture control instruction.
- Example 9 provides a multi-task prediction device, which includes:
- an input module, configured to input the original image into the preset model;
- An output module configured to output the prediction result of at least one prediction task for the original image through the preset model
- the at least one prediction task includes a key point prediction task;
- the loss term of the preset model during the training process includes a first loss constructed based on the error distribution between the first prediction result of the key point prediction task and the key point position label.
Abstract
Embodiments of the present disclosure disclose a multi-task prediction method and apparatus, an electronic device, and a storage medium. The method includes: inputting an original image into a preset model; and outputting, through the preset model, a prediction result of at least one prediction task for the original image; where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
Description
This application claims priority to Chinese patent application No. 202210785776.3, filed with the China National Intellectual Property Administration on July 4, 2022, the entire contents of which are incorporated herein by reference.
Embodiments of the present disclosure relate to the field of computer technology, for example, to a multi-task prediction method and apparatus, an electronic device, and a storage medium.
Multi-task learning may refer to a method of jointly training multiple related but different tasks using useful information from those tasks. In multi-task learning, the proper construction of the losses of the multiple tasks has an important influence on the training effect.
When the multiple tasks include a key point prediction task, regression training of a model with existing losses cannot guarantee the training effect, and joint training is prone to failure, which directly affects the accuracy of multi-task prediction.
SUMMARY
Embodiments of the present disclosure provide a multi-task prediction method and apparatus, an electronic device, and a storage medium, which enable joint multi-task training including a key point prediction task, achieve a good training effect, and guarantee the accuracy of multi-task prediction.
In a first aspect, embodiments of the present disclosure provide a multi-task prediction method, including:
inputting an original image into a preset model; and
outputting, through the preset model, a prediction result of at least one prediction task for the original image;
where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
In a second aspect, embodiments of the present disclosure further provide a multi-task prediction apparatus, including:
an input module, configured to input an original image into a preset model; and
an output module, configured to output, through the preset model, a prediction result of at least one prediction task for the original image;
where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
at least one processor; and
a storage apparatus, configured to store at least one program,
where the at least one program, when executed by the at least one processor, causes the at least one processor to implement the multi-task prediction method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a readable storage medium containing a computer program which, when executed by a computer processor, performs the multi-task prediction method according to any embodiment of the present disclosure.
FIG. 1 is a schematic flowchart of a multi-task prediction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of the training steps of the preset model in a multi-task prediction method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic block diagram of the training steps of the preset model in a multi-task prediction method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic structural diagram of a multi-task prediction apparatus provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
Note that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not intended to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
Note that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand them as "at least one" unless the context clearly indicates otherwise.
It can be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user shall be informed, in an appropriate manner in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization shall be obtained.
It can be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) shall comply with the requirements of applicable laws, regulations, and related provisions.
FIG. 1 is a schematic flowchart of a multi-task prediction method provided by an embodiment of the present disclosure. The embodiments of the present disclosure are applicable to performing multi-task prediction on an image through a preset model, where the multiple tasks include a key point prediction task and the preset model is trained based on the true error distribution of the key point prediction task. The method may be performed by a multi-task prediction apparatus, which may be implemented in the form of at least one of software and hardware, and may be configured in an electronic device, for example, a mobile phone or a computer.
As shown in FIG. 1 , the multi-task prediction method provided by this embodiment may include:
S110. Input an original image into a preset model.
S120. Output, through the preset model, a prediction result of at least one prediction task for the original image.
In this embodiment, the original image may be an image acquired in accordance with the requirements of relevant laws and regulations. The preset model may be a neural network model and may be used for the prediction of at least one task on the original image. The preset model may include a backbone network shared by the multiple tasks and an independent branch network for each task. Shared features of the original image can be extracted through the backbone network; the shared features can then be input into each task's independent branch network to output the prediction result of each task separately.
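The shared-backbone, multi-branch structure described above can be sketched as follows in NumPy. The feature dimension, layer sizes, and task names (21 hand key points, 10 gesture classes) are illustrative assumptions, not values taken from the patent; a real implementation would use a trained CNN backbone rather than a random linear stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_linear(n_in, n_out):
    """Random linear layer; the weights are illustrative, not trained."""
    w = rng.normal(scale=0.1, size=(n_in, n_out))
    b = np.zeros(n_out)
    return lambda x: x @ w + b

class MultiTaskModel:
    """Shared backbone with an independent branch network per task."""

    def __init__(self, feat_dim=32, num_keypoints=21, num_gestures=10):
        self.backbone = make_linear(3 * 64 * 64, feat_dim)        # stand-in for a CNN
        self.kpt_head = make_linear(feat_dim, num_keypoints * 2)  # (x, y) per key point
        self.gesture_head = make_linear(feat_dim, num_gestures)   # gesture classes
        self.lr_head = make_linear(feat_dim, 2)                   # left / right hand

    def forward(self, images):
        x = images.reshape(len(images), -1)
        feat = np.maximum(self.backbone(x), 0.0)   # shared features for all tasks
        return {
            "keypoints": self.kpt_head(feat),
            "gesture": self.gesture_head(feat),
            "left_right": self.lr_head(feat),
        }

model = MultiTaskModel()
out = model.forward(rng.normal(size=(2, 3, 64, 64)))
```

Each branch consumes the same shared features, so only the backbone runs once per image regardless of the number of tasks.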
The at least one prediction task may include a key point prediction task. The key point prediction task may refer to the task of predicting key point positions from the original image. Different types of original images require different key points to be predicted. For example, the key points to be predicted in a hand image may include knuckle points, and the key points to be predicted in a body image may include joint points.
During training of the preset model, sample images of the same category as the original image may be input into the preset model, and the preset model outputs the prediction result of at least one prediction task for the sample image. The loss term of each task can be determined from the prediction result of each task and its ground-truth label, and the preset model can then be trained based on the loss terms of the at least one task. For example, the backbone network in the preset model may be trained based on the loss terms of the at least one task.
When the at least one prediction task includes a key point prediction task, the prediction result of the key point prediction task for the sample image may be called the first prediction result. The loss term of the preset model during training may include a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
The distribution of a variable around its true value (which may be called the probability distribution) can affect the loss function to be used. For example, when the variable follows a Gaussian distribution, the corresponding loss function is the mean squared error; when the variable follows a Laplace distribution, the corresponding loss function is the absolute error. The probability distribution and the loss function can be linked through likelihood estimation. For example, the mean squared error is the loss function obtained by applying maximum likelihood estimation to a Gaussian distribution of the variable.
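The correspondence between error distributions and loss functions noted above can be checked numerically: up to an additive constant, the negative log-likelihood of a unit-variance Gaussian error model is half the squared error, and that of a unit-scale Laplace model is the absolute error. A small sketch with synthetic data and assumed unit scales:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = rng.normal(size=100)                     # ground-truth positions
preds = labels + rng.normal(scale=0.1, size=100)  # noisy predictions
err = preds - labels

# Unit-variance Gaussian error model:
#   -log N(err | 0, 1) = 0.5 * err^2 + 0.5 * log(2*pi)
# half the squared error plus a constant -> minimizing it is MSE.
gauss_nll = 0.5 * err**2 + 0.5 * np.log(2 * np.pi)

# Unit-scale Laplace error model:
#   -log Laplace(err | 0, 1) = |err| + log(2)
# the absolute error plus a constant -> minimizing it is MAE.
laplace_nll = np.abs(err) + np.log(2.0)
```

Because the added constants do not depend on the prediction, minimizing the likelihood-based loss and minimizing MSE (or MAE) give the same optimum under the assumed distribution.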
In this embodiment, the error distribution between the first prediction result and the key point position label can be regarded as the probability distribution of the first prediction result around the true key point, and this true error distribution can be represented by a distribution function. The error between the first prediction result and the key point position label can be taken as sample data. Based on the sample data, this distribution function can be approximated with a neural network, or approached through mathematical modeling. After the error distribution between the first prediction result and the key point position label is determined, the error distribution can be turned into a loss function through likelihood estimation, i.e., the first loss is obtained.
In the related art, key point position prediction often uses the mean squared error between the predicted coordinates and the ground-truth label as the loss term for model training. This way of constructing the loss term implicitly assumes that the predicted key points follow a Gaussian distribution around the true values. However, since key points in different types of images are distributed differently around the true values, regression training of a model with existing losses cannot guarantee the training effect, and joint training is prone to failure.
In contrast, in the embodiments of the present disclosure, by determining the true error distribution between the first prediction result and the key point position label, an appropriate loss function can be constructed to help the model parameters learn efficiently and accurately. This not only optimizes the prediction of key point positions, but also enables joint multi-task training to achieve better results. The preset model obtained through joint multi-task training can perform multi-task prediction. Compared with performing multi-task prediction with multiple models, the preset model not only matches the effect of a separate model per task, but also reduces the number of models to one, reducing inference time.
In the technical solution of the embodiments of the present disclosure, an original image is input into a preset model; the preset model outputs a prediction result of at least one prediction task for the original image; the at least one prediction task includes a key point prediction task; and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label. By constructing the loss term from the true error distribution of the key points, the loss term can be constructed more reasonably, so that joint multi-task training including the key point prediction task can be achieved with a good training effect, guaranteeing the accuracy of multi-task prediction.
The embodiments of the present disclosure can be combined with the optional solutions in the multi-task prediction methods provided in the above embodiments. The multi-task prediction method provided by this embodiment describes in detail the steps of constructing the first loss during training of the preset model. By constructing a flow model from the first prediction result, the true error distribution of the key points can be obtained by fitting with the flow model. By applying residual likelihood estimation to the error distribution, the first loss can be obtained quickly.
FIG. 2 is a schematic flowchart of the training steps of the preset model in a multi-task prediction method provided by an embodiment of the present disclosure. As shown in FIG. 2 , the training steps of the preset model in a multi-task prediction method may include:
S210. Input a sample image into the preset model.
The sample image is an image of the same category as the original image.
S220. Output, through the preset model, a prediction result of at least one prediction task for the sample image.
The at least one prediction task may include a key point prediction task, and the prediction result of the key point prediction task may be called the first prediction result.
S230. Construct a flow model from the first prediction result and the key point position label.
The goal of constructing a flow-based generative model is to train a generator. Through the generator, a sample from a simple distribution π(z) can be transformed into a sample x=G(z) from a complex distribution pG(x). In this embodiment, the simple distribution π(z) may be, for example, a Gaussian distribution or a Laplace distribution, and the complex distribution pG(x) may refer to the distribution of the error between the first prediction result and the key point position label. By learning the mapping between the simple distribution and the sampled values of the error, the flow model can be constructed.
S240. Determine the error distribution between the first prediction result and the key point position label according to the constructed flow model.
After determining the flow model that maps the simple distribution to the complex distribution, the simple distribution can be substituted into the flow model to obtain the error distribution between the first prediction result and the key point position label.
In some optional implementations, constructing the flow model from the first prediction result and the key point position label includes: sampling the error between the first prediction result and the key point position label, and a first preset distribution, to obtain a first sample and a second sample respectively; and constructing the flow model from the first sample and the second sample.
The first preset distribution can be regarded as the simple distribution. The error between the first prediction result and the key point position label, and values from the first preset distribution, can be sampled to obtain the first sample xi and the second sample zi respectively. By the invertibility of the flow model, the correspondence between the first sample and the second sample can be Formula 1: pG(xi) = π(G⁻¹(xi))·|det(J)|, where pG(·) is the error distribution, π(·) is the simple distribution, det(J) is the Jacobian determinant of G⁻¹, and G⁻¹ is the inverse of the flow model. The flow model can be constructed from Formula 1, the first sample, and the second sample.
In some implementations, the first sample and the second sample may be sampled cyclically. Constructing the flow model from the first sample and the second sample may include: cyclically determining an initial flow model from the first sample and the second sample; and iteratively updating the initial flow model until the likelihood estimate of the initial flow model meets a preset condition, thereby obtaining the flow model.
Substituting the first sample and the second sample collected in each round into Formula 1 yields G⁻¹, and inverting G⁻¹ yields the initial flow model. The likelihood estimate of the initial flow model meeting the preset condition may include the likelihood estimate of the initial flow model satisfying maximum likelihood estimation. By adjusting the model parameters so that the initial flow model maximizes the probability of the first sample occurring, i.e., satisfies the maximum likelihood estimation function, the final flow model can be obtained.
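As a minimal sketch of fitting a flow by maximum likelihood, the following code fits a one-dimensional affine flow G(z) = mu + sigma·z to sampled errors by gradient descent on the change-of-variables negative log-likelihood (Formula 1 with π = N(0, 1)). This toy flow and the synthetic error samples are assumptions for illustration; the patent does not specify a flow architecture.

```python
import numpy as np

def fit_affine_flow(x, steps=2000, lr=0.05):
    """Fit G(z) = mu + sigma * z to samples x by minimizing the
    change-of-variables NLL = mean(0.5 * z**2) + log(sigma) + const,
    where z = G^{-1}(x) = (x - mu) / sigma, using hand-derived gradients."""
    mu, log_sigma = 0.0, 0.0
    for _ in range(steps):
        sigma = np.exp(log_sigma)
        z = (x - mu) / sigma               # second samples z_i = G^{-1}(x_i)
        grad_mu = np.mean(-z) / sigma      # d NLL / d mu
        grad_ls = 1.0 - np.mean(z ** 2)    # d NLL / d log_sigma
        mu -= lr * grad_mu
        log_sigma -= lr * grad_ls
    return mu, np.exp(log_sigma)

# First samples: errors between predictions and key point labels (synthetic here).
rng = np.random.default_rng(1)
errors = rng.normal(loc=0.3, scale=0.5, size=4000)
mu, sigma = fit_affine_flow(errors)        # recovers roughly (0.3, 0.5)
```

Once fitted, substituting the simple distribution through G gives the density of the errors, which is what step S240 uses as the error distribution.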
Accordingly, the first preset distribution can be input into the constructed flow model, and the constructed flow model outputs the error distribution between the first prediction result and the key point position label.
S250. Perform log-likelihood estimation on the residual between the error distribution and a second preset distribution, and use the resulting residual likelihood estimation loss as the first loss.
In some other implementations, in addition to performing log-likelihood estimation on the residual between the error distribution and the second preset distribution, likelihood estimation can also be performed directly on the error distribution to obtain the first loss. However, this approach makes the regression of the prediction model somewhat slower.
In this embodiment, to improve the regression efficiency of the model, log-likelihood estimation can be performed on the residual between the error distribution and a second preset distribution (for example, a Gaussian distribution), and a correction term can be introduced to make the residual decomposition hold. Illustratively, the above residual ε(x) can be expressed as: ε(x) = s·pG(x)/N(x|0,1), i.e., pG(x) = (1/s)·N(x|0,1)·ε(x),
where pG(·) is the error distribution, N(0,1) is the simple Gaussian distribution, i.e., the second preset distribution, and s is the correction term.
Taking the logarithm of the residual decomposition yields the likelihood function: log pG(x) = log N(x|0,1) + log ε(x) − log s.
The residual log-likelihood estimation loss (Residual Log-likelihood Estimation Loss, RLE-Loss) can be determined from the right-hand side of the likelihood function, and the RLE-Loss can be used as the first loss. Illustratively, the first loss can be expressed as: Lkpt = −log N(μ̄|0,1) − log ε(μ̄) + log σ, where μ̄ denotes the normalized error.
Here −log N(μ̄|0,1) corresponds to the simple second preset distribution, −log ε(μ̄) corresponds to the quotient of the error distribution and the second preset distribution, and log σ is the correction term. By combining the first two terms, the model can regress quickly while the second preset distribution is fixed.
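The residual log-likelihood loss described above can be sketched as follows. The decomposition into a Gaussian term, a residual term, and a log σ correction follows the text; `log_residual_fn` is a hypothetical hook standing in for the flow's learned residual log-density log ε(·), and with ε ≡ 1 the loss reduces to a plain Gaussian negative log-likelihood.

```python
import numpy as np

def rle_loss(pred, target, log_sigma, log_residual_fn):
    """Residual log-likelihood estimation loss (sketch):
        L = mean( -log N(mu_bar | 0, 1) - log eps(mu_bar) + log sigma ),
    where mu_bar = (target - pred) / sigma is the normalized error."""
    sigma = np.exp(log_sigma)
    mu_bar = (target - pred) / sigma
    log_q = -0.5 * mu_bar ** 2 - 0.5 * np.log(2 * np.pi)  # log N(mu_bar | 0, 1)
    return np.mean(-log_q - log_residual_fn(mu_bar) + log_sigma)

# With a trivial residual (eps = 1, so log eps = 0) and sigma = 1, the loss
# reduces to the Gaussian negative log-likelihood of the error.
pred = np.array([0.1, 0.2])
target = np.array([0.0, 0.4])
loss = rle_loss(pred, target, log_sigma=0.0,
                log_residual_fn=lambda m: np.zeros_like(m))
```

In training, the Gaussian term drives fast regression while the residual term lets the fitted flow correct for the true, non-Gaussian error shape.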
S260. Train the preset model according to the first loss.
In this embodiment, the backbone network of the preset model can be trained according to the first loss, so as to optimize the prediction effect of the other tasks.
In the technical solution of the embodiments of the present disclosure, the steps of constructing the first loss during training of the preset model are described in detail. By constructing a flow model from the first prediction result, the true error distribution of the key points can be fitted with the flow model. By applying residual likelihood estimation to the error distribution, the first loss can be obtained quickly. The multi-task prediction method provided by the embodiments of the present disclosure belongs to the same concept as the multi-task prediction methods provided in the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and the same technical features have the same effects in this embodiment as in the above embodiments.
The embodiments of the present disclosure can be combined with the optional solutions in the multi-task prediction methods provided in the above embodiments. In the multi-task prediction method provided by this embodiment, the preset model can be applied to multi-task prediction on hand images, and the at least one task may include, in addition to the hand key point prediction task, at least one of a gesture recognition task and a left-right hand classification task. By constructing the loss term from the true error distribution of the hand key points, not only can the model's prediction of hand key points be optimized, but the model can also achieve better results in multi-task learning with at least one of the gesture recognition task and the left-right hand classification task.
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task may further include a gesture classification task; the loss term of the preset model during training may further include: a second loss constructed from the second prediction result of the gesture classification task and the gesture classification label.
In these optional implementations, a hand image can be input into the preset model, so that the preset model outputs the prediction results of the hand key point prediction task and the gesture classification task. The prediction result of the hand key point prediction task may include the position coordinates of at least one knuckle point of the hand; the prediction result of the gesture classification task may include gesture classes such as a "V" sign, "OK", or "five fingers spread".
During training of the preset model, sample images of hands can be input into the preset model, and the preset model outputs the first prediction result of the hand key point prediction task and the second prediction result of the gesture classification task. The first loss can be constructed from the error distribution between the first prediction result and the key point position label, and the second loss (for example, a cross-entropy loss) can be constructed from the second prediction result and the gesture classification label.
Illustratively, the second loss can be expressed as: Lgesture = CE(ygesture, ŷgesture), where ygesture is the second prediction result, ŷgesture is the gesture classification label, and CE(·) is the cross-entropy loss function. The preset model can then be trained according to the first loss and the second loss.
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task may further include a left-right hand classification task; the loss term of the preset model during training may further include: a third loss constructed from the third prediction result of the left-right hand classification task and the left-right hand classification label.
In these optional implementations, a hand image can be input into the preset model, so that the preset model outputs the prediction results of the hand key point prediction task and the left-right hand classification task. The prediction result of the left-right hand classification task may include the classification of left hand or right hand.
During training of the preset model, sample images of hands can be input into the preset model, and the preset model outputs the first prediction result of the hand key point prediction task and the third prediction result of the left-right hand classification task. The first loss can be constructed from the error distribution between the first prediction result and the key point position label, and the third loss (which may also be a cross-entropy loss, for example) can be constructed from the third prediction result and the left-right hand classification label.
Illustratively, the third loss can be expressed as: Llr = CE(ylr, ŷlr), where ylr is the third prediction result, ŷlr is the left-right hand classification label, and CE(·) is the cross-entropy loss function. The preset model can then be trained according to the first loss and the third loss.
It can be understood that when the at least one task includes the hand key point prediction task, the at least one task may further include at least one of the gesture recognition task and the left-right hand classification task. Accordingly, in addition to training the preset model with the first loss, the preset model can also be trained with at least one of the second loss and the third loss. In addition, other prediction tasks based on hand images can also be implemented based on the preset model disclosed in this embodiment, which are not exhaustively listed here.
Illustratively, FIG. 3 is a schematic block diagram of the training steps of the preset model in a multi-task prediction method provided by an embodiment of the present disclosure. Referring to FIG. 3 , after a sample image of a hand is input into the preset model, image features can be extracted through the backbone network shared by the multiple tasks in the preset model. The backbone network may be, for example, a convolutional neural network (CNN), or another feature extraction network. The extracted features of the sample image can be input into the independent branch networks of the multiple tasks, to output the prediction results of the multiple tasks separately.
The prediction result of the hand key point prediction task may be called the first prediction result; the prediction result of the gesture classification task may be called the second prediction result; and the prediction result of the left-right hand classification task may be called the third prediction result. A flow model can be constructed from the first prediction result and the key point position label; the error distribution between the first prediction result and the key point position label, such as the distribution P(μ|θ) shown in the figure, can be determined according to the constructed flow model; and log-likelihood estimation can be performed on the residual between the error distribution and the second preset distribution, with the resulting residual likelihood estimation loss used as the first loss. The second loss can be constructed from the second prediction result and the gesture classification label, and the third loss can be constructed from the third prediction result and the left-right hand classification label.
A total loss function can be composed of the first loss, the second loss, and the third loss. Illustratively, the total loss function can be expressed as: L = α×Lgesture + β×Llr + γ×Lkpt, where Lkpt denotes the first loss, Lgesture denotes the second loss, Llr denotes the third loss, and α, β, γ denote the respective loss weights. The preset model can then be trained with the total loss function. By jointly training the hand key point prediction task, the gesture recognition task, and the left-right hand classification task, the three models that would otherwise perform the tasks separately can be reduced to a single model, which reduces inference time while maintaining the effect of the original independent models.
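The total loss above is a plain weighted sum of the three task losses; a minimal sketch follows. The default weight values are assumptions — in practice α, β, γ are tuned hyperparameters.

```python
def total_loss(l_kpt, l_gesture, l_lr, alpha=1.0, beta=1.0, gamma=1.0):
    """Total loss L = alpha * L_gesture + beta * L_lr + gamma * L_kpt,
    combining the key point (RLE), gesture, and left-right losses."""
    return alpha * l_gesture + beta * l_lr + gamma * l_kpt

loss = total_loss(l_kpt=1.0, l_gesture=2.0, l_lr=3.0)
```

Because a single scalar loss is backpropagated, one optimization step updates the shared backbone and all three branch networks together.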
In some optional implementations, after outputting, through the preset model, the prediction result of at least one prediction task for the original image, the method may further include: generating a gesture control instruction according to the prediction result of the at least one prediction task, so that a target application executes a corresponding action in response to the gesture control instruction.
In these optional implementations, the preset model can be applied to on-device gesture recognition. A hand image can be captured in response to a capture instruction input by the user, and at least one prediction result for the hand image can be output through the preset model. A gesture control instruction can then be generated according to the prediction result, so that the target application on the device executes a corresponding action. The target application may refer to the program corresponding to the gesture control instruction. Illustratively, when the prediction result contains the gesture class "five fingers spread" and the hand key point positions, the gesture control instruction may be a confirmation instruction; accordingly, the target application can receive the confirmation instruction and execute the corresponding subsequent procedure.
In the technical solution of the embodiments of the present disclosure, the preset model can be applied to multi-task prediction on hand images, and the at least one task may include, in addition to the hand key point prediction task, at least one of a gesture recognition task and a left-right hand classification task. By constructing the loss term from the true error distribution of the hand key points, not only can the model's prediction of hand key points be optimized, but the model can also achieve better results in multi-task learning of the gesture recognition task and the left-right hand classification task. The multi-task prediction method provided by the embodiments of the present disclosure belongs to the same concept as the multi-task prediction methods provided in the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and the same technical features have the same effects in this embodiment as in the above embodiments.
FIG. 4 is a schematic structural diagram of a multi-task prediction apparatus provided by an embodiment of the present disclosure. The embodiments of the present disclosure are applicable to performing multi-task prediction on an image through a preset model, where the multiple tasks include a key point prediction task and the preset model is trained based on the true error distribution of the key point prediction task.
As shown in FIG. 4 , the multi-task prediction apparatus provided by the embodiments of the present disclosure may include:
an input module 410, configured to input an original image into a preset model; and
an output module 420, configured to output, through the preset model, a prediction result of at least one prediction task for the original image;
where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
In some optional implementations, the multi-task prediction apparatus may further include:
a loss construction module, configured to determine the error distribution between the first prediction result and the key point position label based on the following steps:
constructing a flow model from the first prediction result and the key point position label; and
determining the error distribution between the first prediction result and the key point position label according to the constructed flow model.
In some optional implementations, the loss construction module may be configured to:
sample the error between the first prediction result and the key point position label, and a first preset distribution, to obtain a first sample and a second sample respectively; and
construct the flow model from the first sample and the second sample.
In some optional implementations, the loss construction module may be configured to:
cyclically determine an initial flow model from the first sample and the second sample; and
iteratively update the initial flow model until the likelihood estimate of the initial flow model meets a preset condition, thereby obtaining the flow model.
In some optional implementations, the loss construction module may be further configured to construct the first loss based on the following step:
performing log-likelihood estimation on the residual between the error distribution and a second preset distribution, and using the resulting residual likelihood estimation loss as the first loss.
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task further includes a gesture classification task;
the loss term of the preset model during training further includes: a second loss constructed from the second prediction result of the gesture classification task and the gesture classification label.
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task further includes a left-right hand classification task;
the loss term of the preset model during training further includes: a third loss constructed from the third prediction result of the left-right hand classification task and the left-right hand classification label.
In some optional implementations, the multi-task prediction apparatus further includes:
a control module, configured to, after the prediction result of at least one prediction task for the original image is output through the preset model, generate a gesture control instruction according to the prediction result of the at least one prediction task, so that a target application executes a corresponding action in response to the gesture control instruction.
The multi-task prediction apparatus provided by the embodiments of the present disclosure can perform the multi-task prediction method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to the performed method.
It is worth noting that the units and modules included in the above apparatus are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the names of the functional units are only for ease of mutual distinction and are not intended to limit the protection scope of the embodiments of the present disclosure.
Referring now to FIG. 5 , FIG. 5 shows a schematic structural diagram of an electronic device 500 (such as the terminal device or server in FIG. 5 ) suitable for implementing embodiments of the present disclosure. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 5 , the electronic device 500 may include a processor (such as a central processing unit or a graphics processor) 501, and the processor 501 may execute various appropriate actions and processes according to a program stored in read-only memory (ROM) 502 or a program loaded from a storage apparatus 508 into random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the electronic device 500 are also stored. The processor 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Generally, the following apparatuses can be connected to the I/O interface 505: an input apparatus 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; an output apparatus 507 including, for example, a liquid crystal display (LCD), speaker, and vibrator; a storage apparatus 508 including, for example, a magnetic tape and hard disk; and a communication apparatus 509. The communication apparatus 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 5 shows the electronic device 500 with various apparatuses, it should be understood that it is not required to implement or possess all of the apparatuses shown. More or fewer apparatuses may alternatively be implemented or possessed.
In particular, according to the embodiments of the present disclosure, the process described above with reference to the flowchart can be implemented as a computer software program. For example, the embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 509, or installed from the storage apparatus 508, or installed from the ROM 502. When the computer program is executed by the processor 501, the above functions defined in the multi-task prediction method of the embodiments of the present disclosure are performed.
The electronic device provided by the embodiments of the present disclosure belongs to the same concept as the multi-task prediction method provided in the above embodiments; technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same effects as the above embodiments.
Embodiments of the present disclosure provide a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the multi-task prediction method provided in the above embodiments is implemented.
It should be noted that the computer-readable storage medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. Examples of computer-readable storage media may include, but are not limited to: an electrical connection having at least one conductor, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory (FLASH), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: a wire, an optical cable, radio frequency (RF), etc., or any suitable combination of the above.
In some implementations, the client and the server can communicate using any currently known or future-developed network protocol, such as the HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communications (e.g., a communications network) in any form or medium. Examples of communications networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
The above computer-readable storage medium may be included in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable storage medium carries at least one program which, when executed by the electronic device, causes the electronic device to:
input an original image into a preset model; and output, through the preset model, a prediction result of at least one prediction task for the original image; where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
Computer program code for performing the operations of the present disclosure may be written in at least one programming language or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, program segment, or portion of code that contains at least one executable instruction for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functionality involved. It should also be noted that each block of the block diagrams and flowcharts, and combinations of blocks in the block diagrams and flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present disclosure can be implemented in software or in hardware. The names of the units and modules do not constitute a limitation on the units and modules themselves.
The functions described herein above may be performed, at least in part, by at least one hardware logic component. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable storage medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable storage medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Examples of machine-readable storage media include an electrical connection based on at least one wire, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, [Example 1] provides a multi-task prediction method, including:
inputting an original image into a preset model; and
outputting, through the preset model, a prediction result of at least one prediction task for the original image;
where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
According to one or more embodiments of the present disclosure, [Example 2] provides a multi-task prediction method, further including:
In some optional implementations, the error distribution between the first prediction result and the key point position label is determined based on the following steps:
constructing a flow model from the first prediction result and the key point position label; and
determining the error distribution between the first prediction result and the key point position label according to the constructed flow model.
According to one or more embodiments of the present disclosure, [Example 3] provides a multi-task prediction method, further including:
In some optional implementations, constructing the flow model from the first prediction result and the key point position label includes:
sampling the error between the first prediction result and the key point position label, and a first preset distribution, to obtain a first sample and a second sample respectively; and
constructing the flow model from the first sample and the second sample.
According to one or more embodiments of the present disclosure, [Example 4] provides a multi-task prediction method, further including:
In some optional implementations, constructing the flow model from the first sample and the second sample includes:
cyclically determining an initial flow model from the first sample and the second sample; and
iteratively updating the initial flow model until the likelihood estimate of the initial flow model meets a preset condition, thereby obtaining the flow model.
According to one or more embodiments of the present disclosure, [Example 5] provides a multi-task prediction method, further including:
In some optional implementations, the first loss is constructed based on the following step:
performing log-likelihood estimation on the residual between the error distribution and a second preset distribution, and using the resulting residual likelihood estimation loss as the first loss.
According to one or more embodiments of the present disclosure, [Example 6] provides a multi-task prediction method, further including:
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task further includes a gesture classification task;
the loss term of the preset model during training further includes: a second loss constructed from the second prediction result of the gesture classification task and the gesture classification label.
According to one or more embodiments of the present disclosure, [Example 7] provides a multi-task prediction method, further including:
In some optional implementations, when the key point prediction task is a hand key point prediction task, the at least one task further includes a left-right hand classification task;
the loss term of the preset model during training further includes: a third loss constructed from the third prediction result of the left-right hand classification task and the left-right hand classification label.
According to one or more embodiments of the present disclosure, [Example 8] provides a multi-task prediction method, further including:
In some optional implementations, after outputting, through the preset model, the prediction result of at least one prediction task for the original image, the method further includes:
generating a gesture control instruction according to the prediction result of the at least one prediction task, so that a target application executes a corresponding action in response to the gesture control instruction.
According to one or more embodiments of the present disclosure, [Example 9] provides a multi-task prediction apparatus, including:
an input module, configured to input an original image into a preset model; and
an output module, configured to output, through the preset model, a prediction result of at least one prediction task for the original image;
where the at least one prediction task includes a key point prediction task, and the loss term of the preset model during training includes a first loss constructed from the error distribution between the first prediction result of the key point prediction task and the key point position label.
Claims (11)
- A multi-task prediction method, comprising: inputting an original image into a preset model; and outputting, through the preset model, a prediction result of at least one prediction task for the original image; wherein the at least one prediction task comprises a key point prediction task; and a loss term of the preset model during training comprises a first loss constructed from an error distribution between a first prediction result of the key point prediction task and a key point position label.
- The method according to claim 1, wherein the error distribution between the first prediction result and the key point position label is determined based on the following steps: constructing a flow model from the first prediction result and the key point position label; and determining the error distribution between the first prediction result and the key point position label according to the constructed flow model.
- The method according to claim 2, wherein constructing the flow model from the first prediction result and the key point position label comprises: sampling an error between the first prediction result and the key point position label, and a first preset distribution, to obtain a first sample and a second sample respectively; and constructing the flow model from the first sample and the second sample.
- The method according to claim 3, wherein constructing the flow model from the first sample and the second sample comprises: cyclically determining an initial flow model from the first sample and the second sample; and iteratively updating the initial flow model until a likelihood estimate of the initial flow model meets a preset condition, to obtain the flow model.
- The method according to claim 1, wherein the first loss is constructed based on the following step: performing log-likelihood estimation on a residual between the error distribution and a second preset distribution, and using the resulting residual likelihood estimation loss as the first loss.
- The method according to claim 1, wherein when the key point prediction task is a hand key point prediction task, the at least one task further comprises a gesture classification task; and the loss term of the preset model during training further comprises: a second loss constructed from a second prediction result of the gesture classification task and a gesture classification label.
- The method according to claim 1, wherein when the key point prediction task is a hand key point prediction task, the at least one task further comprises a left-right hand classification task; and the loss term of the preset model during training further comprises: a third loss constructed from a third prediction result of the left-right hand classification task and a left-right hand classification label.
- The method according to any one of claims 6 or 7, wherein after outputting, through the preset model, the prediction result of at least one prediction task for the original image, the method further comprises: generating a gesture control instruction according to the prediction result of the at least one prediction task, so that a target application executes a corresponding action in response to the gesture control instruction.
- A multi-task prediction apparatus, comprising: an input module (410), configured to input an original image into a preset model; and an output module (420), configured to output, through the preset model, a prediction result of at least one prediction task for the original image; wherein the at least one prediction task comprises a key point prediction task; and a loss term of the preset model during training comprises a first loss constructed from an error distribution between a first prediction result of the key point prediction task and a key point position label.
- An electronic device, comprising: at least one processor; and a storage apparatus, configured to store at least one program, wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the multi-task prediction method according to any one of claims 1-8.
- A readable storage medium containing a computer program which, when executed by a computer processor, performs the multi-task prediction method according to any one of claims 1-8.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210785776.3A CN117409473A (zh) | 2022-07-04 | 2022-07-04 | 一种多任务预测方法、装置、电子设备及存储介质 |
CN202210785776.3 | 2022-07-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024007938A1 true WO2024007938A1 (zh) | 2024-01-11 |
Family
ID=89454354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/103755 WO2024007938A1 (zh) | 2022-07-04 | 2023-06-29 | 一种多任务预测方法、装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117409473A (zh) |
WO (1) | WO2024007938A1 (zh) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699837A (zh) * | 2021-01-13 | 2021-04-23 | 新大陆数字技术股份有限公司 | 一种基于深度学习的手势识别方法及设备 |
CN112907589A (zh) * | 2021-04-02 | 2021-06-04 | 联通(上海)产业互联网有限公司 | 一种检测异常并且分割图像中异常区域的深度学习算法 |
CN112949437A (zh) * | 2021-02-21 | 2021-06-11 | 深圳市优必选科技股份有限公司 | 一种手势识别方法、手势识别装置及智能设备 |
CN113420848A (zh) * | 2021-08-24 | 2021-09-21 | 深圳市信润富联数字科技有限公司 | 神经网络模型的训练方法及装置、手势识别的方法及装置 |
US20220005161A1 (en) * | 2020-07-01 | 2022-01-06 | Disney Enterprises, Inc. | Image Enhancement Using Normalizing Flows |
- 2022-07-04 CN CN202210785776.3A patent/CN117409473A/zh active Pending
- 2023-06-29 WO PCT/CN2023/103755 patent/WO2024007938A1/zh unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220005161A1 (en) * | 2020-07-01 | 2022-01-06 | Disney Enterprises, Inc. | Image Enhancement Using Normalizing Flows |
CN112699837A (zh) * | 2021-01-13 | 2021-04-23 | 新大陆数字技术股份有限公司 | 一种基于深度学习的手势识别方法及设备 |
CN112949437A (zh) * | 2021-02-21 | 2021-06-11 | 深圳市优必选科技股份有限公司 | 一种手势识别方法、手势识别装置及智能设备 |
CN112907589A (zh) * | 2021-04-02 | 2021-06-04 | 联通(上海)产业互联网有限公司 | 一种检测异常并且分割图像中异常区域的深度学习算法 |
CN113420848A (zh) * | 2021-08-24 | 2021-09-21 | 深圳市信润富联数字科技有限公司 | 神经网络模型的训练方法及装置、手势识别的方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
CN117409473A (zh) | 2024-01-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109800732B (zh) | 用于生成漫画头像生成模型的方法和装置 | |
WO2019233421A1 (zh) | 图像处理方法及装置、电子设备、存储介质 | |
WO2020147369A1 (zh) | 自然语言处理方法、训练方法及数据处理设备 | |
WO2024174911A1 (zh) | 代码生成方法、装置、存储介质及电子设备 | |
US20220374776A1 (en) | Method and system for federated learning, electronic device, and computer readable medium | |
WO2023231753A1 (zh) | 一种神经网络的训练方法、数据的处理方法以及设备 | |
WO2023202543A1 (zh) | 文字处理方法、装置、电子设备及存储介质 | |
WO2023274187A1 (zh) | 基于自然语言推理的信息处理方法、装置和电子设备 | |
US20240282027A1 (en) | Method, apparatus, device and storage medium for generating animal figures | |
WO2024051655A1 (zh) | 全视野组织学图像的处理方法、装置、介质和电子设备 | |
CN115081616A (zh) | 一种数据的去噪方法以及相关设备 | |
EP4020327A2 (en) | Method and apparatus for training data processing model, electronic device and storage medium | |
CN116977885A (zh) | 视频文本任务处理方法、装置、电子设备及可读存储介质 | |
CN116258657A (zh) | 模型训练方法、图像处理方法、装置、介质及电子设备 | |
WO2024114659A1 (zh) | 一种摘要生成方法及其相关设备 | |
WO2023143121A1 (zh) | 一种数据处理方法及其相关装置 | |
WO2024007938A1 (zh) | 一种多任务预测方法、装置、电子设备及存储介质 | |
CN111353585B (zh) | 神经网络模型的结构搜索方法和装置 | |
CN111310794B (zh) | 目标对象的分类方法、装置和电子设备 | |
CN111581455B (zh) | 文本生成模型的生成方法、装置和电子设备 | |
CN114707070A (zh) | 一种用户行为预测方法及其相关设备 | |
WO2024183592A1 (zh) | 一种图像处理方法、装置、电子设备及存储介质 | |
WO2024183593A1 (zh) | 一种图像分类方法、装置、电子设备及存储介质 | |
WO2023179420A1 (zh) | 一种图像处理方法、装置、电子设备及存储介质 | |
CN110633596A (zh) | 预测车辆方向角的方法和装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23834705 Country of ref document: EP Kind code of ref document: A1 |