CN111783986A - Network training method and device and posture prediction method and device


Info

Publication number
CN111783986A
Authority
CN
China
Prior art keywords
prediction
image
dimensional
information
loss
Prior art date
Legal status
Granted
Application number
CN202010638037.2A
Other languages
Chinese (zh)
Other versions
CN111783986B (en)
Inventor
季向阳
王谷
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202010638037.2A
Publication of CN111783986A
Application granted
Publication of CN111783986B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a network training method and device and a posture prediction method and device, wherein the method comprises the following steps: predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, wherein the prediction posture information comprises three-dimensional rotation information and three-dimensional translation information; performing a differentiable rendering operation according to the predicted posture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object; determining the total loss of the self-supervision training of the posture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information; and training the posture prediction network according to the total loss of the self-supervision training. By training the posture prediction network in this self-supervised manner, its accuracy can be improved.

Description

Network training method and device and posture prediction method and device
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a network training method and apparatus, and an attitude prediction method and apparatus.
Background
Acquiring the six-dimensional (6D) pose of an object in three-dimensional (3D) space (i.e., its rotation in 3 degrees of freedom and translation in 3 degrees of freedom) from a two-dimensional (2D) image is critical in many real-world applications. For example, it provides key information for tasks such as robot grasping or motion planning; in autonomous driving, obtaining the 6D poses of vehicles and pedestrians can provide driving decision information for the vehicle.
In recent years, deep learning has made great progress on the task of 6D pose estimation. However, estimating the 6D pose of an object using only monocular RGB (red/green/blue) images remains a very challenging task. One important reason is that deep learning requires a very large amount of data, while real annotated data for 6D object pose estimation is complex, time-consuming and labor-intensive to obtain.
Disclosure of Invention
The present disclosure provides a self-supervised training technical scheme for training a neural network.
According to an aspect of the present disclosure, there is provided a network training method, including:
predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, wherein the prediction posture information comprises three-dimensional rotation information and three-dimensional translation information;
performing differentiable rendering operation according to the predicted attitude information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
and training the attitude prediction network according to the total loss of the self-supervision training.
In one possible implementation, the differentiable rendering information corresponding to the target object includes: rendering a segmentation mask, rendering a two-dimensional image, rendering a depth image,
determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, wherein the determining comprises:
determining a first self-supervision training loss according to the two-dimensional sample image and the rendered two-dimensional image;
determining a second unsupervised training loss according to the predicted segmentation mask and the rendered segmentation mask;
determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
and determining the total loss of the self-supervised training of the attitude prediction network according to the first self-supervised training loss, the second self-supervised training loss and the third self-supervised training loss.
In one possible implementation, the determining a first unsupervised training loss according to the two-dimensional sample image and the rendered two-dimensional image includes:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into a color model LAB mode, determining first image loss by adopting a first loss function according to the two-dimensional sample image after the mode conversion, the rendered two-dimensional image after the mode conversion and the prediction segmentation mask;
determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale characteristic distance of a deep convolutional neural network;
determining the first unsupervised training loss according to the first image loss, the second image loss and the third image loss.
In one possible implementation, the determining a second unsupervised training loss according to the predicted partition mask and the rendered partition mask includes:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
In one possible implementation manner, the determining, according to the depth image corresponding to the two-dimensional sample image and the rendered depth image, a third unsupervised training loss includes:
respectively carrying out back projection operation on the depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
In one possible implementation, the network of attitude predictions includes: a class prediction subnetwork, a bounding box prediction subnetwork, and a pose prediction subnetwork,
the predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object includes:
predicting the two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
predicting the two-dimensional sample image through the boundary frame prediction sub-network to obtain boundary frame information corresponding to a target object in the two-dimensional sample image;
and processing the two-dimensional sample image, the category information and the bounding box information through the attitude prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction attitude information corresponding to the target object.
In one possible implementation, before the predicting the two-dimensional sample image by the pose prediction network, the method further includes:
rendering and synthesizing according to a three-dimensional model of an object and preset posture information to obtain a synthesized two-dimensional image and annotation information of the synthesized two-dimensional image, wherein the annotation information of the synthesized two-dimensional image comprises annotated object class information, annotated bounding box information, preset posture information and a preset synthesized segmentation mask;
predicting the synthesized two-dimensional image through the attitude prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises predicted object class information, predicted boundary box information, a predicted synthesis segmentation mask and predicted synthesis attitude information;
and training the attitude prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
According to an aspect of the present disclosure, there is provided an attitude prediction method, the method including:
carrying out prediction processing on an image to be processed through an attitude prediction network to obtain attitude information of a target object in the image to be processed,
the posture prediction network is obtained by training by using the network training method of any one of claims 1 to 7.
According to an aspect of the present disclosure, there is provided a network training apparatus, including:
the prediction module is used for predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, wherein the prediction posture information comprises three-dimensional rotation information and three-dimensional translation information;
the rendering module is used for carrying out differentiable rendering operation according to the predicted attitude information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
the determining module is used for determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
and the self-supervision training module is used for training the posture prediction network according to the total loss of the self-supervision training.
In one possible implementation, the differentiable rendering information corresponding to the target object includes: rendering a segmentation mask, rendering a two-dimensional image, rendering a depth image, the determining module being further configured to:
determining a first self-supervision training loss according to the two-dimensional sample image and the rendered two-dimensional image;
determining a second unsupervised training loss according to the predicted segmentation mask and the rendered segmentation mask;
determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
and determining the total loss of the self-supervised training of the attitude prediction network according to the first self-supervised training loss, the second self-supervised training loss and the third self-supervised training loss.
In a possible implementation manner, the determining module is further configured to:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into a color model LAB mode, determining first image loss by adopting a first loss function according to the two-dimensional sample image after the mode conversion, the rendered two-dimensional image after the mode conversion and the prediction segmentation mask;
determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale characteristic distance of a deep convolutional neural network;
determining the first unsupervised training loss according to the first image loss, the second image loss and the third image loss.
In a possible implementation manner, the determining module is further configured to:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
In a possible implementation manner, the determining module is further configured to:
respectively carrying out back projection operation on the depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
In one possible implementation, the network of attitude predictions includes: a class prediction subnetwork, a bounding box prediction subnetwork, and an attitude prediction subnetwork, the prediction module further to:
predicting the two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
predicting the two-dimensional sample image through the boundary frame prediction sub-network to obtain boundary frame information corresponding to a target object in the two-dimensional sample image;
and processing the two-dimensional sample image, the category information and the bounding box information through the attitude prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction attitude information corresponding to the target object.
In one possible implementation, the apparatus further includes:
the pre-training module is used for rendering and synthesizing according to a three-dimensional model of an object and preset posture information to obtain a synthesized two-dimensional image and annotation information of the synthesized two-dimensional image, wherein the annotation information of the synthesized two-dimensional image comprises labeled object type information, labeled boundary box information, preset posture information and a preset synthesis segmentation mask;
predicting the synthesized two-dimensional image through the attitude prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises predicted object class information, predicted boundary box information, a predicted synthesis segmentation mask and predicted synthesis attitude information;
and training the attitude prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
According to an aspect of the present disclosure, there is provided an attitude prediction apparatus, the apparatus including:
the prediction module is used for carrying out prediction processing on the image to be processed through a posture prediction network to obtain the posture information of the target object in the image to be processed,
the posture prediction network is obtained by adopting any one of the network training methods for training.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In this way, a two-dimensional sample image can be predicted through a posture prediction network, so as to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, the prediction posture information including three-dimensional rotation information and three-dimensional translation information, and a differentiable rendering operation is performed according to the prediction posture information corresponding to the target object and the three-dimensional model corresponding to the target object, so as to obtain differentiable rendering information corresponding to the target object. The total loss of the self-supervised training of the posture prediction network is determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, and the posture prediction network is trained according to the total loss of the self-supervised training. According to the network training method and device and the posture prediction method and device provided by the embodiments of the disclosure, the posture prediction network is trained in a self-supervised manner on two-dimensional sample images and depth images without labeling information, so that the accuracy of the posture prediction network can be improved and, at the same time, the training efficiency of the posture prediction network can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
FIG. 1 shows a flow diagram of a network training method according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure;
FIG. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure;
fig. 7 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
In the related art, when training a neural network for estimating the pose of an object, a large amount of synthetic data is obtained by rendering with three-dimensional models of known objects, and the neural network is then trained on the synthetic data. However, a large domain gap exists between the synthetic data and real data, so the accuracy of a network trained only on synthetic data is often unsatisfactory, and means such as domain adaptation or domain randomization have only a limited effect on this problem.
The embodiment of the disclosure provides a network self-supervision training method, which can improve the prediction accuracy of a posture prediction network by training the posture prediction network in a self-supervision manner through a real two-dimensional sample image and a real depth image.
Fig. 1 shows a flowchart of a network training method according to an embodiment of the present disclosure. The method may be performed by an electronic device such as a terminal device or a server. In one possible implementation, the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like, and the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server.
As shown in fig. 1, the network training method includes:
in step S11, a two-dimensional sample image is predicted through a pose prediction network, and a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction pose information corresponding to the target object are obtained, where the prediction pose information includes three-dimensional rotation information and three-dimensional translation information.
For example, the posture prediction network is a neural network for predicting the 6D posture of the target object, and may be applied to the fields of "robot operation", "automatic driving", "augmented reality", and the like. The two-dimensional sample image may be an image including a target object, which may be any object, for example: human face, human body, animal, plant, object, etc.
The two-dimensional sample image may be input to the pose prediction network for prediction, so as to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction pose information corresponding to the target object. The pixel value of any pixel point in the prediction segmentation mask identifies whether that pixel point belongs to the target object in the two-dimensional sample image; for example, a pixel value of 1 identifies the pixel point as lying on the target object, and a pixel value of 0 identifies that it does not. The prediction pose information may include three-dimensional rotation information R of the target object, which may be expressed as a quaternion, and three-dimensional translation information t = (t_x, t_y, t_z).
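As a concrete illustration of the pose representation described above, the following sketch (not taken from the original disclosure; names and conventions are assumptions) converts a predicted unit quaternion R and translation t = (t_x, t_y, t_z) into a 4×4 rigid transform that a renderer can consume:

```python
import torch

def quaternion_to_rotation_matrix(q: torch.Tensor) -> torch.Tensor:
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    q = q / q.norm()                        # guard against drift from unit norm
    w, x, y, z = q
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)]),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)]),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)]),
    ])

def pose_to_transform(q: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Assemble the predicted rotation (quaternion) and translation into a 4x4 pose."""
    T = torch.eye(4)
    T[:3, :3] = quaternion_to_rotation_matrix(q)
    T[:3, 3] = t
    return T
```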
In step S12, a differentiable rendering operation is performed according to the predicted pose information corresponding to the target object and the three-dimensional model corresponding to the target object, so as to obtain differentiable rendering information corresponding to the target object.
For example, the two-dimensional sample image may be detected, a three-dimensional model corresponding to a target object in the two-dimensional sample image may be determined, and differentiable rendering information corresponding to the target object may be obtained by performing a rendering operation with a differentiable renderer according to the predicted pose information corresponding to the target object and the three-dimensional model corresponding to the target object, where the differentiable rendering information may include a rendering segmentation mask, a rendering two-dimensional image, and a rendering depth image.
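The disclosure does not fix a specific renderer interface; as a hedged sketch, the differentiable rendering step can be thought of as a call of the following shape, where the class name, method signature and mesh layout are assumptions (a Soft-Rasterizer-style renderer, mentioned later in the description, is one possible backend):

```python
import torch

class DifferentiableRenderer:
    """Assumed interface of a differentiable renderer; a real implementation would be
    backed by a soft rasterizer so that gradients flow back to rotation/translation."""

    def __init__(self, camera_intrinsics: torch.Tensor, image_size=(480, 640)):
        self.K = camera_intrinsics
        self.image_size = image_size

    def render(self, vertices, faces, rotation, translation):
        """Transform the 3D model by the predicted pose, project it with K, and return
        (rendered_mask, rendered_rgb, rendered_depth) as differentiable tensors."""
        raise NotImplementedError

def render_prediction(renderer, mesh, pred_rotation, pred_translation):
    # mesh: the known 3D model of the detected target object (vertices + faces)
    return renderer.render(mesh["vertices"], mesh["faces"], pred_rotation, pred_translation)
```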
In step S13, a total loss of the unsupervised training of the pose prediction network is determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information.
For example, the total loss of the self-supervised training of the posture prediction network can be obtained by imposing a visual consistency constraint and a geometric consistency constraint between the two-dimensional sample image, the prediction segmentation mask and the depth image corresponding to the two-dimensional sample image, on the one hand, and the differentiable rendering information obtained by rendering with the predicted posture information, on the other hand.
In step S14, the posture prediction network is trained according to the total loss of the self-supervised training.
For example, the parameters of the posture prediction network may be adjusted according to the total loss of the self-supervised training until the total loss of the self-supervised training meets the training requirement, and the self-supervised training of the posture prediction network is completed.
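Putting steps S11 to S14 together, one self-supervised update could look roughly like the sketch below; `pose_net`, `renderer`, `meshes` and `total_loss_fn` stand for the pose prediction network, the differentiable renderer, the known 3D models and the total self-supervised loss, and their exact interfaces are assumptions for illustration:

```python
def self_supervised_training_step(pose_net, renderer, meshes, rgb, depth, optimizer,
                                  total_loss_fn):
    """One self-supervised update on an unlabeled RGB sample image and its depth image."""
    pred = pose_net(rgb)                          # S11: predicted mask + 6D pose (assumed dict output)
    mesh = meshes[pred["class_id"]]               # 3D model of the detected target object
    mask_r, rgb_r, depth_r = renderer.render(     # S12: differentiable rendering
        mesh["vertices"], mesh["faces"], pred["rotation"], pred["translation"])

    loss = total_loss_fn(rgb, depth, pred["mask"], mask_r, rgb_r, depth_r)  # S13

    optimizer.zero_grad()                         # S14: update the pose prediction network
    loss.backward()                               # gradients flow through the renderer
    optimizer.step()
    return loss.detach()
```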
In this way, a two-dimensional sample image can be predicted through a gesture prediction network, so that a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object are obtained, the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information, and differentiable rendering operation is performed according to the prediction gesture information corresponding to the target object and a three-dimensional model corresponding to the target object, so that differentiable rendering information corresponding to the target object is obtained. And determining the total training loss of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, and training the attitude prediction network according to the total self-supervision training loss. According to the network training method provided by the embodiment of the disclosure, the gesture prediction network is trained in a self-supervision manner through the two-dimensional sample image and the depth image without the labeling information, so that the accuracy of the gesture prediction network can be improved, and meanwhile, the training efficiency of the gesture prediction network can be improved.
In one possible implementation, the differentiable rendering information corresponding to the target object may include: rendering a segmentation mask, rendering a two-dimensional image, and rendering a depth image, wherein the determining a total training loss of the posture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information may include:
determining a first self-supervision training loss according to the two-dimensional sample image and the rendered two-dimensional image;
determining a second unsupervised training loss according to the predicted segmentation mask and the rendered segmentation mask;
determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
and determining the total loss of the self-supervised training of the attitude prediction network according to the first self-supervised training loss, the second self-supervised training loss and the third self-supervised training loss.
For example, the rendering segmentation mask may be a mask image obtained by rendering, and a pixel value of any pixel in the rendering segmentation mask is used to identify whether the pixel is a pixel in a target object in the sample two-dimensional image. The rendering two-dimensional image may be a two-dimensional image obtained through a three-dimensional model of the target object and the predicted pose information, the rendering depth image may be a depth image obtained through the three-dimensional model of the target object and the predicted pose information, and the rendering process may be completed through a renderer (e.g., Soft-Rasterizer) in the related art, which is not described in detail in this disclosure.
The method can establish visual consistency constraint between a two-dimensional sample image and a rendered two-dimensional image, establish visual consistency constraint between a prediction segmentation mask and the rendered segmentation mask, establish geometric consistency constraint between a depth image corresponding to the two-dimensional sample image and the rendered depth image, and optimize the attitude prediction network by optimizing two self-supervision constraints of visual consistency and geometric consistency.
The total loss of training for the pose prediction network comprises a loss determined by a visual consistency constraint and a loss determined by a geometric consistency constraint, wherein the loss determined by the visual consistency constraint comprises a first training loss and a second training loss, the loss determined by the geometric consistency constraint comprises a third training loss, and the total loss of the unsupervised training for the pose prediction network can be determined by the following equation (one).
L_self = L_visual + η·L_geom        equation (one)
Wherein L_self denotes the total loss of the self-supervised training, L_visual denotes the loss determined by the visual consistency constraint (i.e., the sum of the first and second self-supervised training losses), L_geom denotes the loss determined by the geometric consistency constraint (i.e., the third self-supervised training loss), and η denotes the weight of the third self-supervised training loss.
In a possible implementation manner, the determining a first unsupervised training loss according to the two-dimensional sample image and the rendered two-dimensional image may include:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into a color model LAB mode, determining first image loss by adopting a first loss function according to the two-dimensional sample image after the mode conversion, the rendered two-dimensional image after the mode conversion and the prediction segmentation mask;
determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale characteristic distance of a deep convolutional neural network;
determining the first unsupervised training loss according to the first image loss, the second image loss and the third image loss.
For example, three loss functions may be employed to determine the first unsupervised training loss.
The first loss function converts the two-dimensional sample image and the rendered two-dimensional image into LAB (LAB color model) mode respectively, discards the luminance L channel, and computes the 1-norm distance between the two converted images as the first image loss; the first loss function may refer to the following formula (two).
L_ab = (1/|N_+|) · Σ_{j∈N_+} M_P^j · ‖ρ(I_S)^j − ρ(I_R)^j‖_1        formula (two)
Wherein L_ab denotes the first image loss, M_P denotes the prediction segmentation mask, N_+ denotes the region of the prediction segmentation mask with pixel values greater than 0, ρ denotes the color space transformation operation, I_S denotes the two-dimensional sample image, I_R denotes the rendered two-dimensional image, and M_P^j denotes the j-th pixel in the prediction segmentation mask.
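A minimal sketch of this masked LAB-space loss, assuming images of shape (B, 3, H, W) in [0, 1] and an external RGB-to-LAB conversion helper (for example kornia.color.rgb_to_lab; the helper and its exact behavior are assumptions):

```python
import torch

def lab_loss(img_sample, img_render, mask_pred, rgb_to_lab):
    """First image loss: masked 1-norm distance in the A/B channels of LAB space."""
    ab_sample = rgb_to_lab(img_sample)[:, 1:]     # drop the luminance L channel
    ab_render = rgb_to_lab(img_render)[:, 1:]
    diff = (ab_sample - ab_render).abs() * mask_pred
    n_pos = mask_pred.sum().clamp(min=1.0)        # |N+|: pixels inside the predicted mask
    return diff.sum() / n_pos
```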
The second loss function is a loss function based on MS-SSIM (Multi-Scale-Structural Similarity Index), and the second loss function may refer to the following formula (three).
L_ms-ssim = 1 − ms-ssim(I_S ⊙ M_P, I_R, S)        formula (three)
Wherein L_ms-ssim denotes the second image loss, ms-ssim denotes the multi-scale structural similarity index function, ⊙ denotes element-wise multiplication, and S is the number of scales employed; for example, S may take the value 5.
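A hedged sketch of the second image loss; `ms_ssim_fn` stands for any multi-scale SSIM implementation (e.g. pytorch_msssim.ms_ssim), and masking only the real image follows the formula above:

```python
def ms_ssim_loss(img_sample, img_render, mask_pred, ms_ssim_fn):
    """Second image loss: 1 - MS-SSIM between the masked sample image and the render."""
    masked_sample = img_sample * mask_pred        # element-wise product I_S ⊙ M_P
    score = ms_ssim_fn(masked_sample, img_render, data_range=1.0)  # uses S scales internally
    return 1.0 - score
```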
The third loss function is a perceptual metric loss function based on a deep convolutional neural network: a pre-trained deep convolutional neural network extracts features at different layers from the two-dimensional sample image and the rendered two-dimensional image respectively, the features are normalized, and the average 2-norm distance between the features of the two images is computed to obtain the third image loss; the third loss function may refer to the following formula (four).
L_perceptual = (1/L) · Σ_{l=1}^{L} (1/|N_l|) · Σ_{j∈N_l} ‖φ_l(I_S)^j − φ_l(I_R)^j‖_2        formula (four)
Wherein L_perceptual denotes the third image loss, L is the total number of layers from which features are extracted, l denotes the layer index, φ_l denotes the normalized layer-l feature, N_l is the set of layer-l features, and |N_l| is the number of layer-l features; for example, L may be 5.
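The disclosure only specifies "a pre-trained deep convolutional neural network"; the sketch below uses torchvision's VGG16 features as an assumed backbone with five ReLU layers (L = 5), which is a common but not necessarily the original choice:

```python
import torch
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(torch.nn.Module):
    """Third image loss: mean 2-norm distance between channel-normalized multi-layer
    CNN features of the (masked) sample image and the rendered image."""

    def __init__(self, layer_ids=(3, 8, 15, 22, 29)):   # five ReLU layers of VGG16 -> L = 5
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg, self.layer_ids = vgg, set(layer_ids)

    def forward(self, img_sample, img_render, mask_pred):
        loss, x, y = 0.0, img_sample * mask_pred, img_render
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                # normalize features along the channel dimension, then average the 2-norm
                loss = loss + (F.normalize(x, dim=1) - F.normalize(y, dim=1)).norm(dim=1).mean()
        return loss / len(self.layer_ids)
```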
The first self-supervised training loss is obtained by weighting and summing the first image loss, the second image loss and the third image loss.
In a possible implementation manner, the determining a second unsupervised training loss according to the predicted partition mask and the rendered partition mask may include:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
For example, because the prediction segmentation mask is imperfect, the consistency constraint between the prediction segmentation mask and the rendered segmentation mask uses a cross entropy loss function that re-weights the positive and negative regions, which may refer to the following formula (five).
L_mask = −(1/|N_+|) · Σ_{j∈N_+} log M_R^j − (1/|N_−|) · Σ_{j∈N_−} log(1 − M_R^j)        formula (five)
Wherein L_mask denotes the second self-supervised training loss, M_R denotes the rendered segmentation mask, N_− denotes the region of the prediction segmentation mask with pixel values equal to 0, and M_R^j denotes the j-th pixel point in the rendered segmentation mask.
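Because the exact re-weighting in formula (five) is only partially recoverable from the text, the sketch below implements one plausible form: binary cross entropy on the rendered mask, averaged separately over the positive (M_P > 0) and negative (M_P = 0) regions of the prediction mask:

```python
import torch

def mask_loss(mask_pred, mask_render, eps: float = 1e-6):
    """Second self-supervised loss: region-reweighted cross entropy between the
    prediction segmentation mask and the (differentiable) rendered mask."""
    pos = (mask_pred > 0).float()
    neg = 1.0 - pos
    log_r = torch.log(mask_render.clamp(min=eps))
    log_not_r = torch.log((1.0 - mask_render).clamp(min=eps))
    loss_pos = -(pos * log_r).sum() / pos.sum().clamp(min=1.0)
    loss_neg = -(neg * log_not_r).sum() / neg.sum().clamp(min=1.0)
    return loss_pos + loss_neg
```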
In a possible implementation manner, the determining, according to the depth image corresponding to the two-dimensional sample image and the rendered depth image, a third unsupervised training loss may include:
respectively carrying out back projection operation on the depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
For example, the depth image corresponding to the two-dimensional sample image and the rendered depth image may each be converted into point cloud information in the camera coordinate system through a back projection operation, and a geometric consistency constraint may be established between the two point clouds through the chamfer distance between them. The back projection operation may refer to the following formula (six), and the calculation of the chamfer distance may refer to formula (seven).
p_j = D_j · M_j · K^{−1} · (x_j, y_j, 1)^T        formula (six)
Wherein D denotes a depth image (the depth image corresponding to the two-dimensional sample image or the rendered depth image), M denotes the corresponding segmentation mask (the prediction segmentation mask or the rendered segmentation mask), K denotes the camera intrinsic parameters, and x_j and y_j denote the two-dimensional coordinates of the j-th pixel point.
L_geom = Σ_{x∈p_S} min_{y∈p_R} ‖x − y‖_2 + Σ_{y∈p_R} min_{x∈p_S} ‖x − y‖_2        formula (seven)
Wherein p_S denotes the point cloud information corresponding to the depth image of the two-dimensional sample image, p_R denotes the point cloud information corresponding to the rendered depth image, and L_geom denotes the third self-supervised training loss.
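The back projection of formula (six) and the chamfer distance of formula (seven) can be sketched as follows (single image, assumed tensor shapes; a real implementation would batch this and may normalize the chamfer terms differently):

```python
import torch

def backproject(depth, mask, K):
    """Lift a masked depth image into a point cloud in the camera coordinate system.

    depth, mask: (H, W) tensors; K: (3, 3) camera intrinsics. Returns (N, 3) points."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    valid = (mask > 0) & (depth > 0)
    pix = torch.stack([xs[valid].float(), ys[valid].float(),
                       torch.ones_like(xs[valid].float())], dim=0)   # (3, N) homogeneous pixels
    rays = torch.linalg.inv(K) @ pix                                  # back-project through K^-1
    return (rays * depth[valid]).T                                    # scale by depth -> (N, 3)

def chamfer_loss(points_sample, points_render):
    """Third self-supervised loss: symmetric chamfer distance between the two point clouds."""
    d = torch.cdist(points_sample, points_render)                     # (N, M) pairwise 2-norms
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```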
That is, the total self-supervised training loss can be calculated by the following formula (eight):
L_self = L_mask + α·L_ab + β·L_ms-ssim + γ·L_perceptual + η·L_geom        formula (eight)
Where α, β, γ are the weights of the first image loss, the second image loss, and the third image loss, respectively, for example: α is 0.2, β is 1, and γ is 0.15.
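Assembled from the individual terms above, the total loss of formula (eight) could be computed as in the sketch below; the `losses` dictionary holds the loss callables (with any helpers such as the LAB converter or the MS-SSIM implementation already bound to them), single-image tensors are assumed for simplicity, and the value of η is an assumption since the text only gives α, β and γ:

```python
def total_self_supervised_loss(rgb, depth, mask_pred, mask_render, rgb_render,
                               depth_render, K, losses,
                               alpha=0.2, beta=1.0, gamma=0.15, eta=1.0):
    """Formula (eight): L_self = L_mask + α·L_ab + β·L_ms-ssim + γ·L_perceptual + η·L_geom."""
    l_mask = losses["mask"](mask_pred, mask_render)
    l_ab = losses["ab"](rgb, rgb_render, mask_pred)
    l_ms = losses["ms_ssim"](rgb, rgb_render, mask_pred)
    l_per = losses["perceptual"](rgb, rgb_render, mask_pred)
    p_s = losses["backproject"](depth, mask_pred, K)           # point cloud of the real depth
    p_r = losses["backproject"](depth_render, mask_render, K)  # point cloud of the rendered depth
    l_geom = losses["chamfer"](p_s, p_r)
    return l_mask + alpha * l_ab + beta * l_ms + gamma * l_per + eta * l_geom
```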
After the total self-supervised training loss is obtained, the posture prediction network may be trained according to the total self-supervised training loss; for example, the self-supervised training process of the posture prediction network may refer to fig. 2.
In one possible implementation, the network of attitude predictions includes: a class prediction subnetwork, a bounding box prediction subnetwork, and a pose prediction subnetwork,
the predicting the two-dimensional sample image through the gesture prediction network to obtain the prediction segmentation mask corresponding to the target object in the two-dimensional sample image and the prediction gesture information corresponding to the target object may include:
predicting the two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
predicting the two-dimensional sample image through the boundary frame prediction sub-network to obtain boundary frame information corresponding to a target object in the two-dimensional sample image;
and processing the two-dimensional sample image, the category information and the bounding box information through the attitude prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction attitude information corresponding to the target object.
For example, the class prediction sub-network and the bounding box prediction sub-network may be built on a detector based on an FPN (feature pyramid network). The class prediction sub-network is used to predict the two-dimensional sample image to obtain the class information of the target object in the two-dimensional sample image, and the bounding box prediction sub-network is used to predict the two-dimensional sample image to obtain the bounding box information corresponding to the target object in the two-dimensional sample image. After the detector extracts the FPN features, the fusion may, for example, reduce the features of the different FPN layers from 128 dimensions to 64 dimensions by 1 × 1 convolution, then upsample or downsample the spatial size of the features of the different layers to 1/8 of the input image by bilinear interpolation (unified to 60 × 80 if the input picture is 480 × 640), and then concatenate the size-unified features of the different layers along the channel dimension.
After the FPN features are fused, the fused FPN features are concatenated with the two-dimensional sample image and the two-dimensional coordinates corresponding to the two-dimensional sample image to obtain new features. The new features, together with the bounding box information, are passed through an ROI-Align layer dedicated to target detection to obtain a feature for each target object, and the posture prediction sub-network processes the feature of each target object to obtain the posture information and the prediction segmentation mask corresponding to that target object.
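A hedged sketch of the feature fusion described above (1 × 1 convolution from 128 to 64 channels, bilinear resizing to 1/8 of the input, channel-wise concatenation with the down-scaled image and a per-pixel 2D coordinate map); the number of FPN levels and the coordinate normalization are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FPNFeatureFusion(nn.Module):
    """Fuse multi-level FPN features into a single 1/8-resolution map and append the
    image and its 2D coordinates, ready for ROI-Align and the pose prediction heads."""

    def __init__(self, num_levels: int = 5, in_dim: int = 128, out_dim: int = 64):
        super().__init__()
        self.reduce = nn.ModuleList(
            [nn.Conv2d(in_dim, out_dim, kernel_size=1) for _ in range(num_levels)])

    def forward(self, fpn_feats, image):
        b, _, H, W = image.shape
        h, w = H // 8, W // 8                                   # e.g. 480x640 -> 60x80
        fused = [F.interpolate(conv(f), size=(h, w), mode="bilinear", align_corners=False)
                 for conv, f in zip(self.reduce, fpn_feats)]
        gy, gx = torch.meshgrid(torch.linspace(0, 1, h, device=image.device),
                                torch.linspace(0, 1, w, device=image.device), indexing="ij")
        coords = torch.stack([gx, gy]).expand(b, 2, h, w)       # per-pixel 2D coordinates
        img_small = F.interpolate(image, size=(h, w), mode="bilinear", align_corners=False)
        return torch.cat(fused + [img_small, coords], dim=1)    # channel-wise concatenation
```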
The posture prediction sub-network may comprise a mask prediction sub-network, a quaternion sub-network, a 2D center point prediction sub-network, and a center-point-to-camera distance prediction sub-network. The mask prediction sub-network outputs the prediction segmentation mask, the quaternion sub-network outputs the three-dimensional rotation information, and the 2D center point prediction sub-network outputs two-dimensional coordinates; after transformation, these coordinates and the distance output by the center-point-to-camera distance prediction sub-network jointly give the three-dimensional translation information of the target object. The three-dimensional translation information and the three-dimensional rotation information together form the posture information of the target object (refer to the accompanying drawings).
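The per-object heads and the recovery of the three-dimensional translation from the predicted 2D center point and center-to-camera distance can be sketched as below; the layer sizes, the pooling step and the pinhole back-projection are assumptions consistent with the description:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoseHeads(nn.Module):
    """Per-object heads: prediction segmentation mask, rotation quaternion,
    2D center point, and center-point-to-camera distance t_z."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.mask_head = nn.Sequential(nn.Conv2d(feat_dim, 1, 1), nn.Sigmoid())
        self.quat_head = nn.Linear(feat_dim, 4)
        self.center_head = nn.Linear(feat_dim, 2)    # 2D center (u, v) in pixels
        self.depth_head = nn.Linear(feat_dim, 1)     # distance of the center to the camera

    def forward(self, roi_feat):                     # roi_feat: (B, C, h, w) ROI-Align output
        mask = self.mask_head(roi_feat)              # prediction segmentation mask
        pooled = roi_feat.mean(dim=(2, 3))           # (B, C) pooled feature (assumed)
        quat = F.normalize(self.quat_head(pooled), dim=-1)   # unit quaternion -> 3D rotation
        return mask, quat, self.center_head(pooled), self.depth_head(pooled)

    @staticmethod
    def recover_translation(center_uv, t_z, K):
        """Lift the 2D center and distance to the 3D translation t = (t_x, t_y, t_z)."""
        u, v = center_uv[:, 0], center_uv[:, 1]
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        tz = t_z.squeeze(-1)
        return torch.stack([(u - cx) * tz / fx, (v - cy) * tz / fy, tz], dim=-1)
```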
In one possible implementation, before the predicting the two-dimensional sample image by the pose prediction network, the method may further include:
rendering and synthesizing according to a three-dimensional model of an object and preset posture information to obtain a synthesized two-dimensional image and annotation information of the synthesized two-dimensional image, wherein the annotation information of the synthesized two-dimensional image comprises annotated object class information, annotated bounding box information, preset posture information and a preset synthesized segmentation mask;
predicting the synthesized two-dimensional image through the attitude prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises predicted object class information, predicted boundary box information, a predicted synthesis segmentation mask and predicted synthesis attitude information;
and training the posture training network according to the prediction information and the labeling information of the synthesized two-dimensional image.
For example, the pose prediction network may be pre-trained with the synthesized two-dimensional image before the pose prediction network is self-supervised trained with the two-dimensional sample information.
Illustratively, a composite two-dimensional image can be obtained through a known three-dimensional model and preset pose information of an object by OpenGL (open graphics Library) and a renderer based on a physical engine, and annotation information of the composite two-dimensional image, including annotation object type information, annotation boundary frame information, preset pose information, and a preset composite segmentation mask, can be obtained in a composite process.
And processing the synthesized two-dimensional image through the attitude prediction network to obtain the prediction information of the synthesized two-dimensional image, wherein the prediction information can comprise predicted object class information, predicted boundary box information, a predicted synthesis division mask and predicted synthesis attitude information.
In the training process, a first loss may be calculated according to the predicted object class information and the labeled object class information, a second loss may be calculated according to the predicted bounding box information and the labeled bounding box information, a third loss may be calculated according to the predicted composite segmentation mask and the preset composite segmentation mask, and a fourth loss may be calculated according to the predicted composite pose information and the preset pose information, the total loss of the pose prediction network may include the first loss, the second loss, the third loss, and the fourth loss, and the total loss of the pose prediction network may be determined by the following formula (nine).
L_synthetic = λ_class·L_focal + λ_box·L_giou + λ_mask·L_bce + λ_pose·L_pose        formula (nine)
Wherein L_synthetic denotes the total loss of the pose prediction network, and L_focal, L_giou, L_bce and L_pose denote the first loss, the second loss, the third loss and the fourth loss, respectively. L_pose is the average 1-norm distance between the points x of the three-dimensional model M of the object after transformation by the predicted pose information (R_pred, t_pred) and after transformation by the preset pose information (R_gt, t_gt), wherein R_pred and t_pred are the three-dimensional rotation information and the three-dimensional translation information in the predicted pose information, and R_gt and t_gt are the three-dimensional rotation information and the three-dimensional translation information in the preset pose information. λ_class, λ_box, λ_mask and λ_pose denote the weights of the first loss, the second loss, the third loss and the fourth loss, and may take the same value or different values.
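A sketch of the point-matching pose loss L_pose and of the weighted sum of formula (nine); rotation matrices (rather than quaternions) and equal default weights are assumptions for illustration:

```python
import torch

def point_matching_pose_loss(model_points, R_pred, t_pred, R_gt, t_gt):
    """Fourth loss on synthetic data: average 1-norm distance between the model points
    transformed by the predicted pose and by the preset (ground-truth) pose.

    model_points: (N, 3); R_*: (3, 3) rotation matrices; t_*: (3,) translations."""
    pts_pred = model_points @ R_pred.T + t_pred
    pts_gt = model_points @ R_gt.T + t_gt
    return (pts_pred - pts_gt).abs().sum(dim=-1).mean()

def synthetic_total_loss(l_focal, l_giou, l_bce, l_pose,
                         lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Formula (nine): weighted sum of the four supervised pre-training losses."""
    w_class, w_box, w_mask, w_pose = lambdas
    return w_class * l_focal + w_box * l_giou + w_mask * l_bce + w_pose * l_pose
```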
After pre-training on the synthesized two-dimensional images, when the posture prediction network is self-supervised trained on the two-dimensional sample images, only the posture prediction sub-network may be trained, while the other sub-networks no longer update their network parameters.
In order that those skilled in the art will better understand the embodiments of the present disclosure, the embodiments of the present disclosure are described below by way of specific examples.
Referring to fig. 4, the training of the pose prediction network is divided into two phases.
In the first stage, a large number of synthesized two-dimensional images are generated from the three-dimensional models of objects through OpenGL and a rendering method based on a physics engine; the annotation information of the synthesized two-dimensional images can be obtained during the synthesis process. The pose prediction network is trained on these images and outputs the class information, bounding box information, prediction segmentation mask and pose information of the object.
In the second stage, unlabeled real two-dimensional sample images are input into the pose prediction network to obtain the predicted pose information and the prediction segmentation mask of the target object in each two-dimensional sample image. The predicted pose information and the three-dimensional model of the target object are input into a differentiable renderer to obtain a rendered segmentation mask, a rendered two-dimensional image and a rendered depth image. Visual consistency constraints are established between the rendered segmentation mask and the prediction segmentation mask and between the rendered two-dimensional image and the real two-dimensional sample image, a geometric consistency constraint is established between the point clouds corresponding to the rendered depth image and to the depth image of the two-dimensional sample image, and the pose prediction network is self-supervised trained by optimizing these two self-supervised constraints.
The embodiment of the disclosure provides an attitude prediction method, which includes:
carrying out prediction processing on an image to be processed through an attitude prediction network to obtain attitude information of a target object in the image to be processed,
the posture prediction network is obtained by adopting any one of the network training methods for training.
For example, the posture prediction network obtained by training through any one of the methods performs prediction processing on the image to be processed to obtain the posture information of the target object in the image to be processed.
Therefore, according to the attitude prediction method provided by the embodiment of the disclosure, the accuracy of attitude prediction can be improved.
It is understood that the above-mentioned method embodiments of the present disclosure can be combined with each other to form a combined embodiment without departing from the logic of the principle, which is limited by the space, and the detailed description of the present disclosure is omitted. Those skilled in the art will appreciate that in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their function and possibly their inherent logic.
In addition, the present disclosure also provides a network training device, a posture prediction device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the network training methods and posture prediction methods provided by the present disclosure; for the corresponding technical solutions, refer to the corresponding descriptions in the method section, which are not repeated here.
Fig. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the apparatus includes:
the prediction module 51 may be configured to predict a two-dimensional sample image through a gesture prediction network, so as to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, where the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information;
the rendering module 52 may be configured to perform a differentiable rendering operation according to the predicted posture information corresponding to the target object and the three-dimensional model corresponding to the target object, so as to obtain differentiable rendering information corresponding to the target object;
a determining module 53, configured to determine a total loss of the unsupervised training of the posture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information;
an auto-supervised training module 54 may be used to train the pose prediction network based on the auto-supervised training total loss.
In this way, a two-dimensional sample image can be predicted through a gesture prediction network, so that a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object are obtained, the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information, and differentiable rendering operation is performed according to the prediction gesture information corresponding to the target object and a three-dimensional model corresponding to the target object, so that differentiable rendering information corresponding to the target object is obtained. And determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, and training the attitude prediction network according to the total loss of the self-supervision training. According to the network training device provided by the embodiment of the disclosure, the gesture prediction network is trained on the two-dimensional sample image and the depth image without the labeling information in a self-supervision manner, so that the accuracy of the gesture prediction network can be improved, and meanwhile, the training efficiency of the gesture prediction network can be improved.
In a possible implementation manner, the differentiable rendering information corresponding to the target object includes: a rendered segmentation mask, a rendered two-dimensional image, and a rendered depth image; the determining module 53 may be further configured to:
determining a first self-supervised training loss according to the two-dimensional sample image and the rendered two-dimensional image;
determining a second self-supervised training loss according to the prediction segmentation mask and the rendered segmentation mask;
determining a third self-supervised training loss according to the depth image corresponding to the two-dimensional sample image and the rendered depth image;
and determining the total loss of the self-supervised training of the posture prediction network according to the first self-supervised training loss, the second self-supervised training loss and the third self-supervised training loss, as sketched in the code after this list.
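For orientation only, the final combination step might look like the weighted sum below; the weights lambda1, lambda2 and lambda3 are assumed hyper-parameters and are not values specified by the patent.

def total_self_supervised_loss(first_loss, second_loss, third_loss,
                               lambda1=1.0, lambda2=1.0, lambda3=1.0):
    # Total self-supervised training loss as a weighted sum of the image,
    # mask and depth terms described above.
    return lambda1 * first_loss + lambda2 * second_loss + lambda3 * third_loss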
In a possible implementation manner, the determining module 53 may be further configured to:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into the LAB color space, determining a first image loss by using a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image and the prediction segmentation mask;
determining a second image loss by using a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on the multi-scale structural similarity (MS-SSIM) index;
determining a third image loss by using a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on multi-scale feature distances of a deep convolutional neural network;
determining the first self-supervised training loss according to the first image loss, the second image loss and the third image loss; a rough sketch of these terms follows this list.
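The sketch below illustrates the three image terms under explicit assumptions: PyTorch, kornia for the RGB-to-LAB conversion, the pytorch_msssim package for the multi-scale structural similarity term, and a torchvision VGG16 feature extractor as one possible deep convolutional feature distance. The weights and the layer choice are illustrative and not taken from the patent.

import torch
import torch.nn.functional as F
import torchvision
from kornia.color import rgb_to_lab
from pytorch_msssim import ms_ssim

# Frozen feature extractor used for the multi-scale feature distance (illustrative choice).
vgg_features = torchvision.models.vgg16(weights="DEFAULT").features[:16].eval()

def first_self_supervised_loss(real_rgb, rendered_rgb, pred_mask, w1=1.0, w2=1.0, w3=1.0):
    mask = pred_mask.unsqueeze(1)                       # (N, 1, H, W), restrict to the object region

    # First image loss: L1 distance in the LAB color space inside the predicted mask.
    lab_loss = F.l1_loss(rgb_to_lab(real_rgb) * mask, rgb_to_lab(rendered_rgb) * mask)

    # Second image loss: multi-scale structural similarity on the masked images.
    msssim_loss = 1.0 - ms_ssim(real_rgb * mask, rendered_rgb * mask, data_range=1.0)

    # Third image loss: distance between deep convolutional features of the masked images.
    with torch.no_grad():
        real_feat = vgg_features(real_rgb * mask)       # no gradient needed for the real branch
    rendered_feat = vgg_features(rendered_rgb * mask)
    feature_loss = F.l1_loss(rendered_feat, real_feat)

    return w1 * lab_loss + w2 * msssim_loss + w3 * feature_loss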
In a possible implementation manner, the determining module 53 may be further configured to:
determining the second self-supervised training loss by using a cross-entropy loss function according to the prediction segmentation mask and the rendered segmentation mask; a minimal sketch follows.
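A minimal sketch of this term, assuming both masks are soft probabilities in [0, 1] with shape (N, H, W):

import torch.nn.functional as F

def second_self_supervised_loss(pred_mask, rendered_mask):
    # Binary cross entropy between the predicted and the rendered segmentation masks.
    return F.binary_cross_entropy(pred_mask, rendered_mask)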
In a possible implementation manner, the determining module 53 may be further configured to:
performing a back-projection operation on the depth image corresponding to the two-dimensional sample image and on the rendered depth image, respectively, to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendered depth image;
and determining the third self-supervised training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendered depth image; a back-projection sketch follows this list.
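The sketch below shows one way to realize this step under a pinhole camera model; fx, fy, cx and cy are assumed known camera intrinsics, and the L1 comparison over jointly valid pixels is an illustrative choice rather than the patent's exact formulation.

import torch

def backproject(depth, fx, fy, cx, cy):
    # Lift a depth map (N, H, W) to per-pixel 3D points (N, H, W, 3) in camera coordinates.
    h, w = depth.shape[-2:]
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u.to(depth) - cx) / fx * z
    y = (v.to(depth) - cy) / fy * z
    return torch.stack([x, y, z], dim=-1)

def third_self_supervised_loss(real_depth, rendered_depth, fx, fy, cx, cy):
    real_points = backproject(real_depth, fx, fy, cx, cy)
    rendered_points = backproject(rendered_depth, fx, fy, cx, cy)
    # Compare the two point clouds only where both depth maps are valid (non-zero).
    valid = ((real_depth > 0) & (rendered_depth > 0)).unsqueeze(-1).expand_as(real_points)
    return (real_points - rendered_points).abs()[valid].mean()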
In one possible implementation, the posture prediction network may include: a category prediction sub-network, a bounding box prediction sub-network, and a posture prediction sub-network; the prediction module 51 may be further configured to:
predicting the two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
predicting the two-dimensional sample image through the bounding box prediction sub-network to obtain bounding box information corresponding to a target object in the two-dimensional sample image;
and processing the two-dimensional sample image, the category information and the bounding box information through the posture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object; a structural sketch of this arrangement follows this list.
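The following structural sketch shows how the three sub-networks could be wired together; the sub-network implementations and the returned dictionary keys are hypothetical placeholders, since the patent does not fix concrete architectures here.

import torch.nn as nn

class PosturePredictionNetwork(nn.Module):
    def __init__(self, category_subnet, bbox_subnet, pose_subnet):
        super().__init__()
        self.category_subnet = category_subnet   # image -> category information
        self.bbox_subnet = bbox_subnet           # image -> bounding box information
        self.pose_subnet = pose_subnet           # image + category + box -> mask and pose

    def forward(self, image):
        category_info = self.category_subnet(image)
        bbox_info = self.bbox_subnet(image)
        pred_mask, rotation, translation = self.pose_subnet(image, category_info, bbox_info)
        return {"category": category_info, "bbox": bbox_info,
                "mask": pred_mask, "rot": rotation, "trans": translation}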
In one possible implementation, the apparatus may further include:
the pre-training module, which may be used for: performing a rendering and synthesis operation according to a three-dimensional model of an object and preset posture information to obtain a synthesized two-dimensional image and annotation information of the synthesized two-dimensional image, wherein the annotation information of the synthesized two-dimensional image includes annotated object class information, annotated bounding box information, the preset posture information and a preset synthesis segmentation mask;
predicting the synthesized two-dimensional image through the posture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information includes predicted object class information, predicted bounding box information, a predicted synthesis segmentation mask and predicted synthesis posture information;
and training the posture prediction network according to the prediction information and the annotation information of the synthesized two-dimensional image; a pre-training sketch follows this list.
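A hedged sketch of this pre-training stage is shown below; render_synthetic and supervised_loss are hypothetical helpers standing in for the rendering/synthesis operation and for the fully supervised comparison of predictions against the synthesized labels.

def pretrain_step(pose_net, optimizer, render_synthetic, supervised_loss,
                  object_model, preset_pose):
    # Rendering and synthesis: the annotation comes for free from the preset pose,
    # i.e. object class, bounding box, synthesis segmentation mask and pose labels.
    synthesized_image, annotations = render_synthetic(object_model, preset_pose)

    # Supervised pre-training of the posture prediction network on the synthesized image.
    predictions = pose_net(synthesized_image)
    loss = supervised_loss(predictions, annotations)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()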
According to an aspect of the present disclosure, there is provided a posture prediction apparatus, which may include:
a prediction module, configured to perform prediction processing on an image to be processed through a posture prediction network to obtain posture information of a target object in the image to be processed,
wherein the posture prediction network is obtained by training with any one of the above network training methods.
In this way, the posture prediction apparatus provided by the embodiments of the present disclosure can improve the accuracy of posture prediction; a minimal usage sketch follows.
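A minimal usage sketch for the prediction module, reusing the hypothetical pose_net interface from the sketches above:

import torch

@torch.no_grad()
def predict_pose(pose_net, image_to_process):
    # Run the trained posture prediction network on an image to be processed and
    # return the three-dimensional rotation and translation of the target object.
    pose_net.eval()
    prediction = pose_net(image_to_process)
    return prediction["rot"], prediction["trans"]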
In some embodiments, the functions or modules of the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments; for their specific implementation, reference may be made to the description of the above method embodiments, which is not repeated here for brevity.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product including computer-readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the network training method and the posture prediction method provided in any of the above embodiments.
Embodiments of the present disclosure also provide another computer program product for storing computer readable instructions, which when executed cause a computer to perform the operations of the network training method and the posture prediction method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 6 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 6, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), an operating system provided by Apple Inc., the multi-user, multi-process computer operating system (Unix™), the free and open source Unix-like operating system (Linux™), the open source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), may be personalized with state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions in order to implement various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1. A method of network training, comprising:
predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, wherein the prediction posture information comprises three-dimensional rotation information and three-dimensional translation information;
performing differentiable rendering operation according to the predicted attitude information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
and training the attitude prediction network according to the total loss of the self-supervision training.
2. The method of claim 1, wherein the differentiable rendering information corresponding to the target object comprises: a rendered segmentation mask, a rendered two-dimensional image, and a rendered depth image,
determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, wherein the determining comprises:
determining a first self-supervision training loss according to the two-dimensional sample image and the rendered two-dimensional image;
determining a second self-supervision training loss according to the predicted segmentation mask and the rendered segmentation mask;
determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
and determining the total loss of the self-supervised training of the attitude prediction network according to the first self-supervised training loss, the second self-supervised training loss and the third self-supervised training loss.
3. The method of claim 2, wherein determining a first self-supervision training loss from the two-dimensional sample image and the rendered two-dimensional image comprises:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into the LAB color space, determining a first image loss by adopting a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image and the prediction segmentation mask;
determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on the multi-scale structural similarity index;
determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on multi-scale feature distances of a deep convolutional neural network;
determining the first self-supervision training loss according to the first image loss, the second image loss and the third image loss.
4. The method of claim 2 or 3, wherein determining a second self-supervision training loss from the predicted segmentation mask and the rendered segmentation mask comprises:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
5. The method of any one of claims 2 to 4, wherein determining a third self-supervision training loss from the depth image corresponding to the two-dimensional sample image and the rendered depth image comprises:
respectively carrying out back projection operation on the depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
6. The method of any of claims 1 to 5, wherein the pose prediction network comprises: a class prediction subnetwork, a bounding box prediction subnetwork, and a pose prediction subnetwork,
the predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object includes:
predicting the two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
predicting the two-dimensional sample image through the bounding box prediction sub-network to obtain bounding box information corresponding to a target object in the two-dimensional sample image;
and processing the two-dimensional sample image, the category information and the bounding box information through the attitude prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction attitude information corresponding to the target object.
7. The method of claim 6, wherein prior to said predicting a two-dimensional sample image through a pose prediction network, the method further comprises:
rendering and synthesizing according to a three-dimensional model of an object and preset posture information to obtain a synthesized two-dimensional image and annotation information of the synthesized two-dimensional image, wherein the annotation information of the synthesized two-dimensional image comprises annotated object class information, annotated bounding box information, preset posture information and a preset synthesized segmentation mask;
predicting the synthesized two-dimensional image through the attitude prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises predicted object class information, predicted boundary box information, a predicted synthesis segmentation mask and predicted synthesis attitude information;
and training the attitude prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
8. A method of attitude prediction, the method comprising:
carrying out prediction processing on an image to be processed through an attitude prediction network to obtain attitude information of a target object in the image to be processed,
the posture prediction network is obtained by training by using the network training method of any one of claims 1 to 7.
9. A network training apparatus comprising:
the prediction module is used for predicting a two-dimensional sample image through a posture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction posture information corresponding to the target object, wherein the prediction posture information comprises three-dimensional rotation information and three-dimensional translation information;
the rendering module is used for carrying out differentiable rendering operation according to the predicted attitude information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
the determining module is used for determining the total loss of the self-supervision training of the attitude prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
and the self-supervision training module is used for training the posture prediction network according to the total loss of the self-supervision training.
10. An attitude prediction apparatus, characterized in that the apparatus comprises:
the prediction module is used for carrying out prediction processing on the image to be processed through a posture prediction network to obtain the posture information of the target object in the image to be processed,
the posture prediction network is obtained by training by using the network training method of any one of claims 1 to 7.
11. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 8.
12. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 8.
CN202010638037.2A 2020-07-02 2020-07-02 Network training method and device, and gesture prediction method and device Active CN111783986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010638037.2A CN111783986B (en) 2020-07-02 2020-07-02 Network training method and device, and gesture prediction method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010638037.2A CN111783986B (en) 2020-07-02 2020-07-02 Network training method and device, and gesture prediction method and device

Publications (2)

Publication Number Publication Date
CN111783986A true CN111783986A (en) 2020-10-16
CN111783986B CN111783986B (en) 2024-06-14

Family

ID=72759605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010638037.2A Active CN111783986B (en) 2020-07-02 2020-07-02 Network training method and device, and gesture prediction method and device

Country Status (1)

Country Link
CN (1) CN111783986B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150002542A1 (en) * 2013-06-28 2015-01-01 Calvin Chan Reprojection oled display for augmented reality experiences
CN105474273A (en) * 2013-07-25 2016-04-06 微软技术许可有限责任公司 Late stage reprojection
WO2018121737A1 (en) * 2016-12-30 2018-07-05 北京市商汤科技开发有限公司 Keypoint prediction, network training, and image processing methods, device, and electronic device
US20190051056A1 (en) * 2017-08-11 2019-02-14 Sri International Augmenting reality using semantic segmentation
CN109215080A (en) * 2018-09-25 2019-01-15 清华大学 6D Attitude estimation network training method and device based on deep learning Iterative matching
CN109872343A (en) * 2019-02-01 2019-06-11 视辰信息科技(上海)有限公司 Weak texture gestures of object tracking, system and device

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508007B (en) * 2020-11-18 2023-09-29 中国人民解放军战略支援部队航天工程大学 Space target 6D attitude estimation method based on image segmentation Mask and neural rendering
CN112508007A (en) * 2020-11-18 2021-03-16 中国人民解放军战略支援部队航天工程大学 Space target 6D posture estimation technology based on image segmentation Mask and neural rendering
CN112529913A (en) * 2020-12-14 2021-03-19 北京达佳互联信息技术有限公司 Image segmentation model training method, image processing method and device
CN112529917A (en) * 2020-12-22 2021-03-19 中国第一汽车股份有限公司 Three-dimensional target segmentation method, device, equipment and storage medium
WO2022143314A1 (en) * 2020-12-29 2022-07-07 华为技术有限公司 Object registration method and apparatus
CN114758334A (en) * 2020-12-29 2022-07-15 华为技术有限公司 Object registration method and device
CN113592876A (en) * 2021-01-14 2021-11-02 腾讯科技(深圳)有限公司 Training method and device for split network, computer equipment and storage medium
WO2022160898A1 (en) * 2021-01-29 2022-08-04 浙江师范大学 Unsupervised depth representation learning method and system based on image translation
WO2022178952A1 (en) * 2021-02-25 2022-09-01 湖南大学 Target pose estimation method and system based on attention mechanism and hough voting
CN112926461B (en) * 2021-02-26 2024-04-19 商汤集团有限公司 Neural network training and driving control method and device
CN112926461A (en) * 2021-02-26 2021-06-08 商汤集团有限公司 Neural network training and driving control method and device
CN113256574B (en) * 2021-05-13 2022-10-25 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method
CN113256574A (en) * 2021-05-13 2021-08-13 中国科学院长春光学精密机械与物理研究所 Three-dimensional target detection method
CN113470124A (en) * 2021-06-30 2021-10-01 北京达佳互联信息技术有限公司 Training method and device of special effect model and special effect generation method and device
CN113470124B (en) * 2021-06-30 2023-09-22 北京达佳互联信息技术有限公司 Training method and device for special effect model, and special effect generation method and device
CN114511811A (en) * 2022-01-28 2022-05-17 北京百度网讯科技有限公司 Video processing method, video processing device, electronic equipment and medium
WO2023174182A1 (en) * 2022-03-18 2023-09-21 华为技术有限公司 Rendering model training method and apparatus, video rendering method and apparatus, and device and storage medium
CN114882301B (en) * 2022-07-11 2022-09-13 四川大学 Self-supervision learning medical image identification method and device based on region of interest
CN114882301A (en) * 2022-07-11 2022-08-09 四川大学 Self-supervision learning medical image identification method and device based on region of interest
CN116681755A (en) * 2022-12-29 2023-09-01 广东美的白色家电技术创新中心有限公司 Pose prediction method and device
CN116681755B (en) * 2022-12-29 2024-02-09 广东美的白色家电技术创新中心有限公司 Pose prediction method and device
WO2024188056A1 (en) * 2023-03-10 2024-09-19 北京字跳网络技术有限公司 Object posture recognition model construction method and apparatus, and object posture recognition method and apparatus
CN118274786A (en) * 2024-05-31 2024-07-02 四川宏大安全技术服务有限公司 Buried pipeline settlement monitoring method and system based on Beidou coordinates

Also Published As

Publication number Publication date
CN111783986B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN111783986B (en) Network training method and device, and gesture prediction method and device
CN110674719B (en) Target object matching method and device, electronic equipment and storage medium
CN110503689B (en) Pose prediction method, model training method and model training device
CN111540000B (en) Scene depth and camera motion prediction method and device, electronic device and medium
CN111340766A (en) Target object detection method, device, equipment and storage medium
CN111340048B (en) Image processing method and device, electronic equipment and storage medium
CN109584362B (en) Three-dimensional model construction method and device, electronic equipment and storage medium
CN109145970B (en) Image-based question and answer processing method and device, electronic equipment and storage medium
CN110706339B (en) Three-dimensional face reconstruction method and device, electronic equipment and storage medium
CN111401230B (en) Gesture estimation method and device, electronic equipment and storage medium
CN111462238A (en) Attitude estimation optimization method and device and storage medium
CN114445562A (en) Three-dimensional reconstruction method and device, electronic device and storage medium
CN113052874B (en) Target tracking method and device, electronic equipment and storage medium
WO2023168957A1 (en) Pose determination method and apparatus, electronic device, storage medium, and program
CN112991381A (en) Image processing method and device, electronic equipment and storage medium
CN114255221A (en) Image processing method, defect detection method, image processing device, defect detection device, electronic equipment and storage medium
CN111311588B (en) Repositioning method and device, electronic equipment and storage medium
CN112529846A (en) Image processing method and device, electronic equipment and storage medium
CN111339880A (en) Target detection method and device, electronic equipment and storage medium
CN114066856A (en) Model training method and device, electronic equipment and storage medium
CN111488964A (en) Image processing method and device and neural network training method and device
CN109635926B (en) Attention feature acquisition method and device for neural network and storage medium
CN111914774A (en) 3D object detection method and device based on sparse convolutional neural network
CN114973359A (en) Expression recognition method and device, electronic equipment and storage medium
CN114550086A (en) Crowd positioning method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant