CN114241013B - Object anchoring method, anchoring system and storage medium - Google Patents
- Publication number
- CN114241013B (application CN202210173770.0A)
- Authority
- CN
- China
- Prior art keywords
- pose
- model
- neural network
- training
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application provides an object anchoring method, an anchoring system and a storage medium. The object anchoring method comprises the following steps: training, according to an acquired image sequence containing an object of interest, to obtain a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for object pose estimation; and performing pose estimation on the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model to obtain the pose of the object of interest, and superimposing virtual information on the object of interest according to the pose to realize rendering of the object of interest. The method and the device can solve the problems that user-defined object recognition and 3D tracking are inaccurate and that illumination and environment strongly affect the algorithm, thereby realizing a method for acquiring and displaying user-defined object information on a mobile terminal, in which the displayed information corresponds to the 3D position and attitude of the object.
Description
Technical Field
The application belongs to the technical field of image recognition, and particularly relates to an object anchoring method, an anchoring system and a storage medium.
Background
Common deep learning algorithms for object recognition and 3D position-and-attitude tracking require a large amount of manually labeled data, and it is difficult to guarantee the accuracy of user-defined object training under various complex illumination conditions and environments. The prior art uses feature-engineering methods with features such as SIFT and SURF; although these features have a certain robustness to illumination and background, they are sensitive to more complex illumination and backgrounds, and tracking easily fails. Many existing methods also require the user to supply an initial pose and an accurate 3D model, and cannot track objects for which no 3D model is available.
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, the present application provides an object anchoring method, an anchoring system, and a storage medium.
According to a first aspect of embodiments herein, there is provided a method of anchoring an object, comprising the steps of:
training according to the acquired image sequence containing the object of interest to obtain a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for estimating the pose of the object;
and performing pose estimation on the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model for object pose estimation to obtain the pose of the object of interest, and superimposing virtual information on the object of interest according to the pose to realize rendering of the object of interest.
In the above object anchoring method, the modeling is completed based on deep learning or computer vision in the process of obtaining the three-dimensional model of the object of interest through training according to the obtained image sequence containing the object of interest.
Further, the process of completing modeling based on deep learning is as follows:
extracting the characteristics of each frame of image, and estimating the camera initialization pose corresponding to each frame of image;
acquiring a mask of each frame of image by utilizing a pre-trained saliency segmentation network;
model training and inference are performed to obtain a mesh of the model.
Further, the process of performing model training and inference is as follows:
converting the position coordinates of each pixel point into imaging-plane coordinates by using the camera intrinsic parameters;
inputting the imaging-plane coordinates and the optimized camera pose into a neural network to extract the inter-frame color-difference feature, and adding the color-difference feature to the original image to compensate for the color difference between frames;
inputting the camera initialization pose corresponding to each image into a neural network to obtain the optimized pose;
wherein the optimized camera position is obtained by applying a function T to the optimized pose, T representing taking the position coordinates;
emitting, from the optimized camera position, rays whose direction w passes through the position coordinates of the pixel points;
predicting, by using a deep learning network, the probability that each of the M sampled points lies on the surface of the implicit equation (i.e., the implicit function TSDF);
wherein a point is predicted to lie on the surface of the implicit equation when its predicted value is within a threshold, the minimum value m satisfying the condition being taken;
feeding the points predicted to lie on the surface of the implicit equation into a neural renderer R to obtain the predicted RGB color values;
calculating, from the predicted color values and the acquired colors of the K pixel points, the squared loss of the pixel differences;
wherein the squared pixel-difference loss L is a weighted sum of the image pixel difference, the sum of the background-mask difference and the foreground-mask difference, and the edge difference, each term weighted by a coefficient;
wherein the pixel-difference term is computed over all of the selected k points P;
wherein the background-mask term is computed over the selected k points that lie outside the mask;
wherein BCE denotes the binary cross-entropy loss, and the foreground-mask term is computed over the selected k points that lie within the mask;
wherein the edge term is computed over the boundary of the mask;
during model inference, 3D points are input into the combined model of the neural networks and the deep learning network; the combined model is used to obtain the points lying on its surface, and a mesh is formed from these points.
Further, the process of completing the modeling based on computer vision is as follows:
performing feature extraction and matching by adopting a visual algorithm or a deep learning algorithm;
estimating the pose of the camera;
segmenting salient objects in the image sequence;
reconstructing the dense point cloud;
using the reconstructed dense point cloud as the input for mesh generation, and reconstructing the mesh of the object by using a reconstruction algorithm;
finding the texture coordinates corresponding to the mesh vertices according to the camera pose and its corresponding image, to obtain the texture map of the mesh;
and obtaining a three-dimensional model according to the mesh of the object and the texture map of the mesh.
In the object anchoring method, the specific process of training the six-degree-of-freedom pose estimation neural network model for object pose estimation according to the acquired image sequence containing the object of interest comprises the following steps:
obtaining a synthetic data set by adopting a PBR rendering method according to the three-dimensional model and the preset scene model of the object; the synthetic dataset includes synthetic training data;
obtaining a real data set by adopting a model reprojection segmentation algorithm according to the camera pose and the object pose; the real dataset comprises real training data;
and training the six-degree-of-freedom pose estimation neural network based on deep learning by utilizing the synthetic training data and the real training data to obtain a six-degree-of-freedom pose estimation neural network model.
Further, the specific process of obtaining the synthetic data set by using the PBR rendering method according to the three-dimensional model and the preset scene model of the object is as follows:
reading a three-dimensional model and a preset scene model of an object;
carrying out object pose randomization, rendering camera pose randomization, material randomization and illumination randomization by adopting a PBR rendering method to obtain a series of image sequences and corresponding annotation labels; the annotation labels are the category, the position and the six-degree-of-freedom pose.
Further, the specific process of obtaining the real data set by using the model re-projection segmentation algorithm according to the camera pose and the object pose is as follows:
acquiring an image sequence, a camera pose and an object pose, and segmenting an object in a real image;
synthesizing the real data with discrete poses into data with denser and more continuous poses, thereby obtaining real images and their corresponding annotation labels; the annotation labels are the category, the position and the six-degree-of-freedom pose.
Furthermore, the specific process of training the six-degree-of-freedom pose estimation neural network based on deep learning by using the synthetic training data and the real training data to obtain the six-degree-of-freedom pose estimation neural network model is as follows:
inputting an image, the 2D coordinates of a plurality of feature points extracted from the object, the 3D coordinates corresponding to the feature points, and an image mask;
training the six-degree-of-freedom pose estimation neural network by adopting the following loss function to obtain a six-degree-of-freedom pose estimation neural network model;
the loss function needed when training the six-degree-of-freedom pose estimation neural network is as follows:
wherein the total loss is a weighted sum of a classification loss, a bounding-box loss, a 2D loss, a 3D loss, a mask loss and a projection loss, each term weighted by a coefficient;
wherein the classification loss is computed from the classification information of the i-th detection anchor, the information of the j-th background feature, the anchors, the background anchors, the category ground truth and the features proposed by the neural network;
wherein the bounding-box loss is computed from the coordinate feature of the i-th detection anchor and the ground-truth coordinates of the detection box;
wherein the 2D loss is computed from the 2D coordinate features, the ground-truth 2D feature points of the object, and the feature points and mask predicted by the neural network;
wherein the 3D loss is computed from the 3D coordinate features, the ground-truth 3D feature points of the object, and the feature points and mask predicted by the neural network;
wherein the mask loss is computed from the i-th foreground feature and the j-th background feature, fg denoting the foreground and bg denoting the background;
wherein the projection loss is computed as the difference between the 3D features projected to 2D and the 2D ground truth, using the feature points and mask predicted by the neural network.
In the object anchoring method, the rendering of the object of interest is realized through a mobile terminal, or through a combination of the mobile terminal and a cloud server;
the process realized by the mobile terminal is as follows:
before tracking is started, accessing a cloud server, downloading an object model, a deep learning model and a feature database of a user, and then performing other calculations on a mobile terminal;
the mobile terminal reads camera data from the device, and the object pose is obtained through the detection or recognition neural network and the six-degree-of-freedom pose estimation neural network;
rendering the content to be rendered according to the pose of the object;
the process realized by combining the mobile terminal and the cloud server is as follows:
inputting an image sequence at the mobile terminal, and performing saliency detection on each frame of image;
uploading the saliency detection region to the cloud server for retrieval to obtain the information of the object and the related deep learning model, and loading them onto the mobile terminal;
estimating the pose of the object at the mobile terminal to obtain the pose of the object;
and rendering the content to be rendered according to the pose of the object.
According to a second aspect of the embodiments of the present application, there is also provided an object anchoring system, which includes a cloud training unit and an object pose calculation and rendering unit;
the cloud training unit is used for training according to the acquired image sequence containing the object of interest to obtain a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for estimating the pose of the object;
the object pose calculation and rendering unit is used for estimating the pose of the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model for object pose estimation, and superimposing virtual information on the object of interest to realize the rendering of the object of interest;
the cloud training unit comprises a modeling unit, a synthetic training data generating unit, a real training data generating unit and a training algorithm unit;
the modeling unit is used for training according to the acquired image sequence containing the object of interest to obtain a three-dimensional model of the object of interest;
the synthetic training data generation unit is used for obtaining a synthetic data set according to a three-dimensional model of an object and a preset scene model, and the synthetic data set comprises synthetic training data;
the real training data generation unit is used for obtaining a real data set according to the camera pose and the object pose, and the real data set comprises real training data;
and the training algorithm unit is used for training the six-degree-of-freedom pose estimation neural network based on deep learning according to the synthetic training data and the real training data to obtain a six-degree-of-freedom pose estimation neural network model.
According to a third aspect of embodiments of the present application, there is also provided a storage medium having an executable program stored thereon, which when called, performs the steps in the object anchoring method described in any one of the above.
According to the above embodiments of the present application, at least the following advantages are obtained: the object anchoring method trains the model used for recognition and 3D position-and-attitude tracking from 2D images with both synthesized data and data synthesized from real images. This can solve the problem that user-defined object recognition and 3D tracking are inaccurate and strongly affected by illumination, environment and the like, and thereby realizes a method for acquiring and displaying user-defined object information on a mobile terminal, in which the displayed information corresponds to the 3D position and attitude of the object.
The object anchoring method combines modeling-and-rendering synthetic data with automatically labeled real data, which can solve the problem that manual labeling is labor-intensive and slow, improves the efficiency and accuracy of model training, makes it possible to track a deep learning model of a user-defined object, and allows tracking to be initialized automatically with low sensitivity to illumination, environment and the like.
The object anchoring method adopts a combined device-cloud architecture, making large-scale object recognition and 3D position-and-attitude tracking on the mobile terminal possible.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the scope of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification of the application, illustrate embodiments of the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart of an object anchoring method according to an embodiment of the present disclosure.
Fig. 2 is a block diagram of an object anchoring system according to an embodiment of the present invention.
Fig. 3 is a block diagram of a structure of a cloud-end training unit in an object anchoring system according to an embodiment of the present disclosure.
Fig. 4 is a block diagram illustrating a structure of a deep learning-based modeling unit in an object anchoring system according to an embodiment of the present disclosure.
Fig. 5 is a schematic diagram of a modeling process of a modeling unit based on computer vision in an object anchoring system according to an embodiment of the present application.
Fig. 6 is a block diagram illustrating a structure of a synthesized training data generating unit in an object anchoring system according to an embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a processing of a PBR rendering unit in an object anchoring system according to an embodiment of the present disclosure.
Fig. 8 is a flowchart illustrating a process of a composite image reality migration unit in an object anchoring system according to an embodiment of the present application.
Fig. 9 is a block diagram illustrating a structure of a real training data generating unit in an object anchoring system according to an embodiment of the present disclosure.
Fig. 10 is a flowchart of an implementation of an object pose calculation and rendering unit in an object anchoring system by a mobile terminal according to an embodiment of the present disclosure.
Fig. 11 is a flowchart of an implementation of an object pose calculation and rendering unit in an object anchoring system by mixing a mobile terminal and a cloud server according to an embodiment of the present disclosure.
Description of reference numerals:
1. a cloud training unit;
11. a modeling unit;
12. a synthetic training data generating unit; 121. a PBR rendering unit; 122. a composite image reality migration unit;
13. a real training data generating unit; 131. a model reprojection segmentation algorithm unit; 132. an inter-frame data synthesis unit;
14. a training algorithm unit;
2. and an object pose calculating and rendering unit.
Detailed Description
For the purpose of promoting a clear understanding of the objects, aspects and advantages of the embodiments of the present application, reference will now be made to the accompanying drawings and detailed description, wherein like reference numerals refer to like elements throughout.
The illustrative embodiments and descriptions of the present application are provided to explain the present application and not to limit the present application. Additionally, the same or similar numbered elements/components used in the drawings and the embodiments are used to represent the same or similar parts.
As used herein, "first," "second," …, etc., are not specifically intended to mean in a sequential or chronological order, nor are they intended to limit the application, but merely to distinguish between elements or operations described in the same technical language.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
As used herein, "and/or" includes any and all combinations of the described items.
References to "plurality" herein include "two" and "more than two"; reference to "multiple sets" herein includes "two sets" and "more than two sets".
Certain words used to describe the present application are discussed below or elsewhere in this specification to provide additional guidance to those skilled in the art in describing the present application.
As shown in fig. 1, an object anchoring method provided in an embodiment of the present application includes the following steps:
S1, training according to the acquired image sequence containing the object of interest to obtain a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for estimating the pose of the object.
S2, performing pose estimation on the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model for object pose estimation to obtain the pose of the object of interest, and superimposing virtual information on the object of interest according to the pose to realize the rendering of the object of interest.
In the step S1, in the process of obtaining the three-dimensional model of the object of interest through training according to the acquired image sequence containing the object of interest, the modeling may be completed based on deep learning, or may be completed based on computer vision.
When modeling is completed based on deep learning, the specific process is as follows:
S111, feature extraction and camera pose initialization;
extracting the features of each frame of image, and estimating the camera initialization pose corresponding to each frame of image.
S112, segmenting the salient object;
obtaining each frame of image by utilizing pre-trained significance segmentation networkIs used for forming a mask。
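By way of illustration only, the mask extraction of step S112 could look roughly like the following Python sketch; the use of a torchvision DeepLabV3 model as a stand-in for the pre-trained saliency segmentation network, and the treatment of every non-background class as foreground, are assumptions of this sketch rather than details of the present application.

```python
import torch
import torchvision
from torchvision import transforms

# Stand-in for the pre-trained saliency segmentation network (assumption:
# a generic semantic segmentation backbone is used purely for illustration).
seg_model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def frame_mask(frame_rgb):
    """Return a binary foreground mask (H x W bool tensor) for one frame."""
    x = preprocess(frame_rgb).unsqueeze(0)      # 1 x 3 x H x W
    with torch.no_grad():
        out = seg_model(x)["out"][0]            # C x H x W class scores
    labels = out.argmax(0)
    return labels != 0                          # everything that is not background
```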
S113, model training and inference;
the goal of model training is to obtain a mesh of the model.
The position coordinates of each pixel point are converted into imaging-plane coordinates by using the camera intrinsic parameters.
The imaging-plane coordinates and the optimized camera pose are input into a neural network to extract the inter-frame color-difference feature; the color-difference feature is added to the original image to compensate for the color difference between frames.
The camera initialization pose corresponding to each image is input into a neural network to obtain a more accurate optimized pose. The optimized camera pose is characterized by the rotation angles about the x axis, the y axis and the z axis, together with the initial position of the camera.
The optimized camera position is given by equation (3), in which T is a function that represents taking the position coordinates.
From the optimized camera position, rays are emitted whose direction w passes through the position coordinates of the pixel points.
A deep learning network is used to predict the probability that each of the M sampled points lies on the surface of the implicit equation (i.e., the implicit function TSDF).
The judgment condition for a point to be predicted as lying on the surface of the implicit equation is given by equation (5), in which a threshold is applied to the predicted value and the minimum value m satisfying the condition is taken. A point satisfying equation (5) can be predicted as a point on the surface of the implicit equation.
The points predicted to lie on the surface of the implicit equation are fed into a neural renderer R to obtain the predicted RGB color values.
From the predicted color values and the acquired colors of the K pixel points, the squared loss of the pixel differences is calculated, so that the shape of the mesh more closely matches the object in the image.
The squared pixel-difference loss L of equation (7) is a weighted sum of the image pixel difference, the sum of the background-mask difference and the foreground-mask difference, and the edge difference; the corresponding coefficients may, for example, be set to 1, 0.5 and 1.
In equation (8), P denotes all of the selected k points.
In equation (9), the background-mask term is computed over the selected k points that lie outside the mask.
The physical meaning of equation (9) is that, for points not on the object, the estimated background-mask value should be as close to 0 as possible.
The physical meaning of equation (10) is that, for points on the object, the estimated foreground-mask value should be as close to 1 as possible.
In equations (9) and (10), BCE denotes the binary cross-entropy loss, and the foreground-mask term is computed over the selected k points that lie within the mask.
Equation (11) applies a loss enhancement to the edge points to increase their weight.
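A minimal PyTorch-style sketch of how the composite loss of equations (7)-(11) could be assembled is given below; the coefficient values (1, 0.5, 1) follow the description above, while the exact form of the edge term and the function and argument names are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(pred_rgb, gt_rgb,
                        mask_pred_out, mask_pred_in,
                        mask_pred_edge, mask_gt_edge,
                        lam=(1.0, 0.5, 1.0)):
    """Composite loss of equations (7)-(11): squared pixel loss, background/foreground
    mask losses and an extra edge term. All inputs are tensors over the k sampled
    points; mask predictions are probabilities in [0, 1]."""
    # (8) squared difference between predicted and acquired pixel colors
    l_pix = ((pred_rgb - gt_rgb) ** 2).mean()

    # (9) points outside the mask: estimated background-mask value pushed towards 0
    l_bg = F.binary_cross_entropy(mask_pred_out, torch.zeros_like(mask_pred_out))

    # (10) points inside the mask: estimated foreground-mask value pushed towards 1
    l_fg = F.binary_cross_entropy(mask_pred_in, torch.ones_like(mask_pred_in))

    # (11) extra loss on mask-boundary points to increase their weight
    #      (the exact boundary target is an assumption of this sketch)
    l_edge = F.binary_cross_entropy(mask_pred_edge, mask_gt_edge)

    return lam[0] * l_pix + lam[1] * (l_bg + l_fg) + lam[2] * l_edge
```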
During model inference, 3D points are input into the combined model of the neural networks and the deep learning network; the combined model is used to obtain the points lying on its surface, and a mesh is formed from these points.
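One common way to carry out this inference step is to evaluate the learned implicit function on a dense 3D grid and extract the surface with marching cubes; the following sketch assumes a callable `implicit_fn` standing in for the combined model and uses scikit-image's `marching_cubes`, which is an illustrative choice rather than the implementation of the present application.

```python
import numpy as np
import torch
from skimage import measure

def extract_mesh(implicit_fn, resolution=128, bound=1.0, level=0.0):
    """Evaluate the combined implicit model on a regular grid and extract a mesh.

    implicit_fn: callable mapping an (N, 3) tensor of 3D points to (N,) TSDF values
                 (stand-in for the combined model described above)."""
    xs = np.linspace(-bound, bound, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)  # R x R x R x 3
    pts = torch.from_numpy(grid.reshape(-1, 3))

    with torch.no_grad():
        sdf = implicit_fn(pts).reshape(resolution, resolution, resolution).cpu().numpy()

    # The surface is the zero level set of the implicit function (TSDF)
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=level)
    # Map voxel indices back to world coordinates
    verts = verts / (resolution - 1) * (2 * bound) - bound
    return verts, faces, normals
```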
When modeling is completed based on computer vision, the specific process is as follows:
S121, extracting and matching features by adopting a visual algorithm or a deep learning algorithm;
and extracting features from the input image sequence, matching the features, and taking the matched features as input of camera pose estimation.
The input image sequence may be a color image or a grayscale image. The algorithm for extracting and matching the features can be SIFT, HAAR, ORB and other traditional visual algorithms, and can also be a deep learning algorithm.
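As an illustrative sketch of step S121 with OpenCV (SIFT being one of the traditional visual algorithms mentioned above; the ratio-test threshold is an assumption):

```python
import cv2

def match_features(img1_gray, img2_gray, ratio=0.75):
    """Detect SIFT features in two frames and return ratio-test filtered matches."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des1, des2, k=2)

    # Lowe's ratio test keeps only distinctive matches
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]
    return pts1, pts2
```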
S122, estimating the pose of the camera;
the matched features are taken as observations, and the camera pose is estimated by using an SFM algorithm (structure from motion, an offline algorithm for three-dimensional reconstruction from a collection of unordered images).
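A full SFM pipeline is beyond a short example, but the two-view relative pose step on which it builds can be sketched with OpenCV as follows; the camera intrinsic matrix `K`, the RANSAC parameters and the use of the essential matrix are assumptions of this sketch.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate the relative camera pose (R, t) from matched 2D points of two views."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)

    # Robustly fit the essential matrix to the matched points
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
    # Decompose E and keep the pose that places the points in front of both cameras
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```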
S123, segmenting the salient objects in the image sequence;
taking the camera pose as a prior, the salient objects in the image sequence are segmented by a salient-object segmentation algorithm and used as the input for point cloud reconstruction.
S124, reconstructing the dense point cloud;
and generating a 3D point cloud of the feature points according to the camera pose and the feature points, and obtaining dense point cloud by using a block matching algorithm.
And S125, using the reconstructed dense point cloud as the input for mesh generation, and reconstructing the mesh of the object by using Poisson or other reconstruction algorithms.
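Step S125 can be illustrated with Open3D's Poisson surface reconstruction, one possible choice among the reconstruction algorithms mentioned above; the normal-estimation parameters and the octree depth are assumptions of this sketch.

```python
import open3d as o3d

def reconstruct_mesh(points_xyz, depth=9):
    """Reconstruct a triangle mesh from a dense point cloud via Poisson reconstruction.

    points_xyz: (N, 3) numpy array of dense point-cloud coordinates."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Poisson reconstruction requires consistently oriented normals
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    pcd.orient_normals_consistent_tangent_plane(20)

    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    return mesh
```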
And S126, finding the texture coordinates corresponding to the mesh vertices according to the camera poses and their corresponding images, to obtain the texture map of the mesh.
And S127, obtaining a three-dimensional model according to the mesh of the object and the texture map of the mesh.
In the step S1, the specific process of training the six-degree-of-freedom pose estimation neural network model for object pose estimation according to the acquired image sequence including the object of interest includes:
and obtaining a synthetic data set by adopting a PBR rendering method according to the three-dimensional model and the preset scene model of the object. Wherein the synthetic data set includes synthetic training data.
And obtaining a real data set by adopting a model reprojection segmentation algorithm according to the camera pose and the object pose. Wherein the real dataset comprises real training data.
And training the six-degree-of-freedom pose estimation neural network based on deep learning by utilizing the synthetic training data and the real training data to obtain a six-degree-of-freedom pose estimation neural network model.
In a specific embodiment, according to the three-dimensional model and the preset scene model of the object, the specific process of obtaining the synthetic data set by using the PBR rendering method includes:
reading a three-dimensional model and a preset scene model of an object;
and (4) carrying out object pose randomization, rendering camera pose randomization, material randomization and illumination randomization by adopting a PBR rendering method to obtain a series of image sequences and corresponding labeling labels. The label can be a category, a position, a pose with six degrees of freedom, and the like.
The specific process of obtaining the synthetic data set by adopting the PBR rendering method according to the three-dimensional model and the preset scene model of the object further comprises the following steps:
reading a three-dimensional model, a real image or a PBR image, and carrying out preprocessing such as background removal on the image; synthetic images at different angles and their corresponding annotation labels are then generated through a deep learning network such as a GAN (Generative Adversarial Network) or NeRF (Neural Radiance Fields). The labels can be the category, the position, the six-degree-of-freedom pose, and the like.
In a specific embodiment, the specific process of obtaining the real data set by using the model reprojection segmentation algorithm according to the camera pose and the object pose is as follows:
acquiring an image sequence, a camera pose and an object pose, and segmenting an object in a real image;
and synthesizing the real data with the discrete poses into data with more dense and continuous poses, and further obtaining a real image and a corresponding label thereof. The label can be a category, a position, a pose with six degrees of freedom, and the like.
In a specific embodiment, the specific process of training the six-degree-of-freedom pose estimation neural network based on deep learning by using the synthetic training data and the real training data to obtain the six-degree-of-freedom pose estimation neural network model is as follows:
the method comprises the steps of inputting an image, 2D coordinates of a plurality of characteristic points extracted from an object, 3D coordinates corresponding to the characteristic points and an image mask.
And training the six-degree-of-freedom pose estimation neural network by adopting the following loss function to obtain a six-degree-of-freedom pose estimation neural network model.
The loss function needed when training the six-degree-of-freedom pose estimation neural network is as follows:
In equation (12), the total loss is a weighted sum of the classification loss, the bounding-box loss, the 2D loss, the 3D loss, the mask loss and the projection loss, each term weighted by a coefficient.
In equation (13), the classification loss is computed from the classification information of the i-th detection anchor, the information of the j-th background feature, the anchors, the background anchors, the category ground truth and the features proposed by the neural network.
In equation (14), the bounding-box loss is computed from the coordinate feature of the i-th detection anchor and the ground-truth coordinates of the detection box.
In equation (15), the 2D loss is computed from the 2D coordinate features, the ground-truth 2D feature points of the object, and the feature points and mask predicted by the neural network.
In equation (16), the 3D loss is computed from the 3D coordinate features, the ground-truth 3D feature points of the object, and the feature points and mask predicted by the neural network.
In equation (17), the mask loss is computed from the i-th foreground feature and the j-th background feature, where fg denotes the foreground and bg denotes the background.
In equation (18), the projection loss is computed as the difference between the 3D features projected to 2D and the 2D ground truth, using the feature points and mask predicted by the neural network.
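A compact PyTorch sketch of how the weighted combination of equations (12)-(18) might be wired together is given below; the dictionary key names, the particular norms used for each term and the equal default weights are illustrative assumptions, and the mask weighting of the keypoint terms is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def project(kpt_3d, K, pose):
    """Pinhole projection of 3D keypoints (B, N, 3) with pose (B, 3, 4) and intrinsics (B, 3, 3)."""
    cam = torch.einsum("bij,bnj->bni", pose[:, :, :3], kpt_3d) + pose[:, :, 3].unsqueeze(1)
    uv = torch.einsum("bij,bnj->bni", K, cam)
    return uv[..., :2] / uv[..., 2:3].clamp(min=1e-6)

def pose_net_loss(pred, gt, w=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0)):
    """Weighted sum of the six terms in equation (12). `pred` and `gt` are dictionaries
    of network outputs and ground truth; key names and norms are assumptions."""
    l_cls  = F.cross_entropy(pred["cls_logits"], gt["cls"])                        # (13) classification
    l_box  = F.smooth_l1_loss(pred["box"], gt["box"])                              # (14) bounding box
    l_2d   = F.l1_loss(pred["kpt_2d"], gt["kpt_2d"])                               # (15) 2D keypoints
    l_3d   = F.l1_loss(pred["kpt_3d"], gt["kpt_3d"])                               # (16) 3D keypoints
    l_mask = F.binary_cross_entropy_with_logits(pred["mask_logits"], gt["mask"])   # (17) mask
    # (18) project the predicted 3D keypoints and compare with the 2D ground truth
    l_proj = F.l1_loss(project(pred["kpt_3d"], gt["K"], gt["pose_init"]), gt["kpt_2d"])

    terms = (l_cls, l_box, l_2d, l_3d, l_mask, l_proj)
    return sum(wi * li for wi, li in zip(w, terms))
```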
In the step S2, the pose calculation and rendering of the object of interest may be implemented by the mobile terminal alone, or by a combination of the mobile terminal and the cloud server.
The mode in which the pose calculation and rendering of the object of interest are implemented by the mobile terminal alone is suitable when the user has few custom models. Before tracking starts, the cloud server is accessed only once; after the user's object model, deep learning model, feature database and the like are downloaded, all other computation is carried out on the mobile terminal. The mobile terminal reads camera data from the device, obtains the pose of the object through the detection or recognition neural network and the six-degree-of-freedom pose estimation neural network, and then renders the content to be rendered according to the pose.
The mode in which the pose calculation and rendering of the object of interest are implemented by a combination of the mobile terminal and the cloud server is suitable when the user has many custom models, and is a general object pose tracking solution. During tracking, the cloud server needs to be accessed and resources downloaded one or more times. The mobile terminal takes an image sequence as input and outputs the object pose and the rendered image.
The main flow of this mode is as follows: an image sequence is input at the mobile terminal and saliency detection is performed on each frame of image; the saliency detection region is uploaded to the cloud server for retrieval, obtaining the object information and the related deep learning model, which are loaded onto the mobile terminal for pose estimation; the object pose is then obtained, and the content to be rendered is rendered according to the pose.
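The device-cloud flow above can be summarized in Python-like pseudocode; every callable (`detect_salient_region`, `cloud.retrieve`, `pose_model.estimate`, `renderer.draw`) is a hypothetical stand-in used only to show the control flow and the local caching of downloaded models.

```python
def track_and_render_frame(frame, detect_salient_region, cloud, local_models, renderer):
    """One iteration of the mixed mobile/cloud anchoring loop described above.
    All callables are hypothetical stand-ins for the corresponding modules."""
    region = detect_salient_region(frame)                 # saliency detection on the frame

    # Query the cloud only when this object has not been resolved locally yet
    key = region.fingerprint()
    if key not in local_models:
        info, pose_model = cloud.retrieve(region.crop())  # upload the region, get info + model
        local_models[key] = (info, pose_model)
    info, pose_model = local_models[key]

    pose_6dof = pose_model.estimate(frame, region)        # on-device 6-DoF pose estimation
    renderer.draw(info.virtual_content, pose_6dof)        # overlay virtual content at the pose
    return pose_6dof
```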
The object anchoring method provided by the application adopts an unsupervised deep learning modeling approach: only a small number of feature points are needed to compute the initial camera pose before modeling can proceed, and feature points on the object itself are not required, so that modeling can also be carried out for a solid-color object or an object with little texture.
The object anchoring method trains the model used for recognition and 3D position-and-attitude tracking from 2D images with both synthesized data and data synthesized from real images. This can solve the problem that user-defined object recognition and 3D tracking are inaccurate and strongly affected by illumination, environment and the like, and thereby realizes a method for acquiring and displaying user-defined object information on a mobile terminal, in which the displayed information corresponds to the 3D position and attitude of the object.
The object anchoring method combines modeling-and-rendering synthetic data with automatically labeled real data, which can solve the problem that manual labeling is labor-intensive and slow, improves the efficiency and accuracy of model training, makes it possible to track a deep learning model of a user-defined object, and allows tracking to be initialized automatically with low sensitivity to illumination, environment and the like.
The object anchoring method adopts a combined device-cloud architecture, making large-scale object recognition and 3D position-and-attitude tracking on the mobile terminal possible.
Based on the object anchoring method provided by the application, the application also provides an object anchoring system.
Fig. 2 is a schematic structural diagram of an object anchoring system according to an embodiment of the present application.
As shown in fig. 2, the object anchoring system provided in the embodiment of the present application includes a cloud training unit 1 and an object pose calculation and rendering unit 2. The cloud training unit 1 is used for obtaining, through training according to an acquired image sequence containing an object of interest, a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for object pose estimation. The object pose calculation and rendering unit 2 is configured to perform pose estimation on the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model for object pose estimation, and superimpose virtual information on the object of interest to implement rendering of the object of interest.
In the present embodiment, as shown in fig. 3, the cloud training unit 1 includes a modeling unit 11, a synthetic training data generating unit 12, a real training data generating unit 13, and a training algorithm unit 14.
The modeling unit 11 is configured to train a three-dimensional model of the object of interest according to the acquired image sequence including the object of interest.
The synthetic training data generating unit 12 is configured to obtain a synthetic data set according to the three-dimensional model of the object and the preset scene model, where the synthetic data set includes synthetic training data.
The real training data generating unit 13 is configured to obtain a real data set according to the camera pose and the object pose, where the real data set includes real training data.
The training algorithm unit 14 is configured to train the pose estimation neural network based on six degrees of freedom for deep learning according to the synthetic training data and the real training data, so as to obtain a pose estimation neural network model based on six degrees of freedom.
In a specific embodiment, the modeling unit 11 comprises a deep learning based modeling unit and a computer vision based modeling unit.
As shown in fig. 4, the input of the deep learning based modeling unit is a sequence of images, and its output is a deep learning model. Multiple images are input into the deep learning model for inference to obtain the mesh and textures.
The modeling process of the deep learning based modeling unit is the same as the content of the above steps S111-S113, and is not repeated here.
As shown in fig. 5, the input of the computer vision based modeling unit is a sequence of images, the output of which is a modeled stereo model.
The modeling process of the computer vision-based modeling unit is the same as that of the above steps S121 to S127, and is not repeated here.
In the above-described embodiment, as shown in fig. 6 and 7, the synthetic training data generating unit 12 includes a PBR (Physically Based Rendering) rendering unit. The PBR rendering unit 121 reads the three-dimensional model of the object and the preset scene model using rendering frameworks such as Blender or Unity, and performs object pose randomization, rendering camera pose randomization, material randomization and illumination randomization to obtain a series of image sequences and the corresponding annotation labels. The labels can be the category, the position, the six-degree-of-freedom pose, and the like.
As shown in fig. 6 and 8, the synthetic training data generating unit 12 further includes a synthetic image reality migration unit 122. The synthetic image reality migration unit 122 reads the three-dimensional model, a real image or a PBR image, performs preprocessing such as background removal on the image, and then generates synthetic images at different angles and their corresponding annotation labels through a deep learning network such as a GAN (Generative Adversarial Network) or NeRF (Neural Radiance Fields). The labels can be the category, the position, the six-degree-of-freedom pose, and the like.
In the above embodiment, as shown in fig. 9, the real training data generating unit 13 includes the model reprojection segmentation algorithm unit 131. The model re-projection segmentation algorithm unit 131 obtains the image sequence, the camera pose and the object pose, and segments the object in the real image.
The real training data generating unit 13 further includes an inter-frame data synthesizing unit 132, which is configured to synthesize the real data with discrete poses into data with more dense and continuous poses, so as to obtain a real image and its corresponding label. The label can be a category, a position, a pose with six degrees of freedom, and the like.
In the above embodiment, the training algorithm unit 14 trains the six-degree-of-freedom pose estimation neural network based on deep learning according to the synthetic training data and the real training data.
The six-degree-of-freedom pose estimation neural network is trained with an end-to-end method, so that object detection and six-degree-of-freedom pose estimation can be completed by a single network. The network takes as input an image, the 2D coordinates of a plurality of feature points extracted from the object, the 3D coordinates corresponding to the feature points, and an image mask. The network structure is shown in fig. 9 of the drawings: a first-stage neural network outputs the detection box, and a second-stage neural network computes the 2D and 3D keypoints of the object. The cross-entropy on the mask is mainly used to remove the interference of background features; the 2D keypoints are regressed as Gaussian heatmaps; the 3D keypoints need to be normalized to 0-1 based on the initial pose of the object; and the projection error is used to ensure the consistency between the 2D and 3D keypoints.
The loss functions required for training the pose estimation neural network with six degrees of freedom are the same as those in the above equations (12) to (18), and are not described in detail here.
In the above embodiments, the object pose calculation and rendering unit 2 may be implemented by the mobile terminal alone, or by a combination of the mobile terminal and the cloud server.
As shown in fig. 10, the mode in which the object pose calculation and rendering unit 2 is implemented by the mobile terminal alone is suitable when the user has few custom models. Before tracking starts, the cloud server is accessed only once; after the user's object model, deep learning model, feature database and the like are downloaded, all other computation is carried out on the mobile terminal. The mobile terminal reads camera data from the device, obtains the pose of the object through the detection or recognition neural network and the six-degree-of-freedom pose estimation neural network, and then renders the content to be rendered according to the pose.
As shown in fig. 11, the mode in which the object pose calculation and rendering unit 2 is implemented by a combination of the mobile terminal and the cloud server is suitable when the user has many custom models, and is a general object pose tracking solution. During tracking, the cloud server needs to be accessed and resources downloaded one or more times. The mobile terminal takes an image sequence as input and outputs the object pose and the rendered image.
The main flow of this mode is as follows: an image sequence is input at the mobile terminal and saliency detection is performed on each frame of image; the saliency detection region is uploaded to the cloud server for retrieval, obtaining the object information and the related deep learning model, which are loaded onto the mobile terminal for pose estimation; the object pose is then obtained, and the content to be rendered is rendered according to the pose.
It should be noted that: the object anchoring system provided in the above embodiment is only illustrated by the division of the above program modules, and in practical applications, the above processing may be distributed to different program modules according to needs, that is, the internal structure of the object anchoring system is divided into different program modules to complete all or part of the above-described processing. In addition, the embodiments of the object anchoring system and the object anchoring method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the embodiments of the methods for details, which are not described herein again.
In an exemplary embodiment, the present application further provides a storage medium, which is a computer readable storage medium, for example, a memory including a computer program, which is executable by a processor to perform the steps of the foregoing object anchoring method.
The embodiments of the present application described above may be implemented in various hardware, software code, or a combination of both. For example, the embodiments of the present application may also be program code for executing the above-described method in a data signal processor. The present application may also relate to various functions performed by a computer processor, digital signal processor, microprocessor, or field programmable gate array. The processor described above may be configured in accordance with the present application to perform certain tasks by executing machine-readable software code or firmware code that defines certain methods disclosed herein. Software code or firmware code may be developed in different programming languages and in different formats or forms. Software code may also be compiled for different target platforms. However, different code styles, types, and languages of software code and other types of configuration code for performing tasks according to the present application do not depart from the spirit and scope of the present application.
The foregoing is merely an illustrative embodiment of the present application, and any equivalent changes and modifications made by those skilled in the art without departing from the spirit and principles of the present application shall fall within the protection scope of the present application.
Claims (8)
1. A method of anchoring an object, comprising the steps of:
training according to the acquired image sequence containing the object of interest to obtain a three-dimensional model of the object of interest and a six-degree-of-freedom pose estimation neural network model for estimating the pose of the object; in the process of obtaining the three-dimensional model of the object of interest through training according to the acquired image sequence containing the object of interest, the modeling is completed based on deep learning or computer vision, and the process of completing the modeling based on deep learning is as follows:
extracting the characteristics of each frame of image, and estimating the camera initialization pose corresponding to each frame of image;
acquiring a mask of each frame of image by utilizing a pre-trained saliency segmentation network;
carrying out model training and inference to obtain a mesh of the model;
the process of completing modeling based on computer vision is as follows:
performing feature extraction and matching by adopting a visual algorithm or a deep learning algorithm;
estimating the pose of the camera;
segmenting salient objects in the image sequence;
reconstructing the dense point cloud;
using the reconstructed dense point cloud as the input for mesh generation, and reconstructing the mesh of the object by using a reconstruction algorithm;
finding the texture coordinates corresponding to the mesh vertices according to the camera pose and its corresponding image, to obtain the texture map of the mesh;
obtaining a three-dimensional model according to the mesh of the object and the texture map of the mesh;
the specific process of training the six-degree-of-freedom pose estimation neural network model for object pose estimation according to the acquired image sequence containing the object of interest comprises the following steps:
obtaining a synthetic data set by adopting a PBR rendering method according to the three-dimensional model and the preset scene model of the object; the synthetic dataset includes synthetic training data;
obtaining a real data set by adopting a model reprojection segmentation algorithm according to the camera pose and the object pose; the real dataset comprises real training data;
training a six-degree-of-freedom pose estimation neural network based on deep learning by utilizing the synthetic training data and the real training data to obtain a six-degree-of-freedom pose estimation neural network model;
and performing pose estimation on the object of interest according to the three-dimensional model of the object of interest and the six-degree-of-freedom pose estimation neural network model for object pose estimation to obtain the pose of the object of interest, and superimposing virtual information on the object of interest according to the pose to realize the rendering of the object of interest.
2. The method for anchoring an object according to claim 1, wherein said process of performing model training and inference is:
converting the position coordinates of each pixel point into imaging-plane coordinates by using the camera intrinsic parameters;
inputting the imaging-plane coordinates and the optimized camera pose into a neural network to extract the inter-frame color-difference feature, and adding the color-difference feature to the original image to compensate for the color difference between frames;
inputting the camera initialization pose corresponding to each image into a neural network to obtain the optimized pose;
wherein the optimized camera position is obtained by applying a function T to the optimized pose, T representing taking the position coordinates;
emitting, from the optimized camera position, rays whose direction w passes through the position coordinates of the pixel points;
predicting, by using a deep learning network, the probability that each of the M sampled points lies on the surface of the implicit equation;
wherein a point is predicted to lie on the surface of the implicit equation when its predicted value is within a threshold, the minimum value m satisfying the condition being taken;
feeding the points predicted to lie on the surface of the implicit equation into a neural renderer R to obtain the predicted RGB color values;
calculating, from the predicted color values and the acquired colors of the K pixel points, the squared loss of the pixel differences;
wherein the squared pixel-difference loss L is a weighted sum of the image pixel difference, the sum of the background-mask difference and the foreground-mask difference, and the edge difference, each term weighted by a coefficient;
wherein the pixel-difference term is computed, from the predicted color values, over all of the selected k points P;
wherein the background-mask term is computed over the selected k points that lie outside the mask;
wherein BCE denotes the binary cross-entropy loss, and the foreground-mask term is computed over the selected k points that lie within the mask;
wherein the edge term is computed over the boundary of the mask.
3. The object anchoring method according to claim 1, wherein the specific process of obtaining the synthetic dataset from the three-dimensional model of the object and the preset scene model by the PBR rendering method is:
reading the three-dimensional model of the object and the preset scene model;
randomizing the object pose, the rendering camera pose, the materials and the illumination, and rendering with the PBR method to obtain a series of image sequences and their corresponding labels; the labels comprise the category, the position and the six-degree-of-freedom pose.
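As an illustration of the randomized PBR data generation in claim 3, the sketch below assumes a hypothetical renderer interface; sample_pose, sample_material, sample_lighting and render_pbr are placeholders for whatever PBR toolchain is used.

```python
def generate_synthetic_dataset(object_model, scene_model, n_images=10000):
    """Domain-randomized PBR rendering; all sample_*/render_pbr calls are placeholders."""
    dataset = []
    for _ in range(n_images):
        object_pose = sample_pose()        # random 6-DoF object pose
        camera_pose = sample_pose()        # random rendering-camera pose
        material    = sample_material()    # random surface material
        lighting    = sample_lighting()    # random light placement and intensity

        image = render_pbr(scene_model, object_model,
                           object_pose, camera_pose, material, lighting)

        # Each rendered image is labeled with the category and the
        # six-degree-of-freedom pose (which contains the position).
        dataset.append({"image": image,
                        "category": object_model.category,
                        "pose_6dof": object_pose})
    return dataset
```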
4. The object anchoring method according to claim 1, wherein the specific process of obtaining the real dataset from the camera pose and the object pose by the model reprojection segmentation algorithm is:
acquiring the image sequence, the camera pose and the object pose, and segmenting the object in the real images;
synthesizing, from the real data with discrete poses, data with dense and continuous poses, thereby obtaining real images and their corresponding labels; the labels comprise the category, the position and the six-degree-of-freedom pose.
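One plausible realization of the model reprojection segmentation of claim 4, using OpenCV, is sketched below; it assumes the model vertices, both poses and the camera intrinsics are known, and uses a convex-hull fill as a coarse stand-in for the patent's (unspecified) rasterization step.

```python
import numpy as np
import cv2

def reprojection_mask(model_vertices, object_pose, camera_pose, K, image_shape):
    """Project the object's 3D model into the image and rasterize a binary mask.

    model_vertices : (N, 3) vertices of the 3D model in object coordinates
    object_pose, camera_pose : 4x4 homogeneous transforms (object->world, camera->world)
    K : 3x3 camera intrinsic matrix
    """
    # Transform model vertices into the camera frame.
    obj_to_cam = np.linalg.inv(camera_pose) @ object_pose
    R, t = obj_to_cam[:3, :3], obj_to_cam[:3, 3]

    # Reproject with the camera intrinsics (no lens distortion assumed).
    rvec, _ = cv2.Rodrigues(R)
    pts_2d, _ = cv2.projectPoints(model_vertices.astype(np.float64),
                                  rvec, t, K, distCoeffs=None)
    pts_2d = pts_2d.reshape(-1, 2).astype(np.int32)

    # Fill the convex hull of the projected vertices as a coarse object mask.
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    hull = cv2.convexHull(pts_2d)
    cv2.fillConvexPoly(mask, hull, 255)
    return mask
```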
5. The object anchoring method according to claim 1, wherein the training of the six-degree-of-freedom pose estimation neural network based on deep learning by using the synthetic training data and the real training data to obtain the six-degree-of-freedom pose estimation neural network model comprises:
inputting the 2D coordinates of a plurality of feature points extracted from the image and the object, the 3D coordinates corresponding to the feature points, and the image mask;
training the six-degree-of-freedom pose estimation neural network by adopting the following loss function to obtain a six-degree-of-freedom pose estimation neural network model;
the loss function used when training the six-degree-of-freedom pose estimation neural network is a weighted sum of a classification loss, a bounding-box loss, a 2D loss, a 3D loss, a mask loss and a projection loss, each weighted by a coefficient;
wherein the classification loss is computed from the classification information of the i-th detection anchor, the information of the j-th background feature, the detection anchors, the background anchors, the category ground truth and the features proposed by the neural network;
the bounding-box loss is computed from the coordinate features of the i-th detection anchor and the ground-truth coordinates of the detection box;
the 2D loss is computed from the 2D coordinate features and the ground-truth 2D feature points of the object;
the 3D loss is computed from the 3D coordinate features and the ground-truth 3D feature points of the object;
and the mask loss is computed from the i-th foreground feature and the j-th background feature, where fg denotes the foreground and bg denotes the background.
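The multi-task loss of claim 5 could be combined as in the following PyTorch-style sketch; the individual term implementations (cross-entropy, smooth L1, BCE) and the weights are illustrative assumptions, not the patented formulas.

```python
import torch.nn.functional as F

def pose_training_loss(pred, target, w=(1.0, 1.0, 1.0, 1.0, 0.5, 0.5)):
    # Classification of detection anchors vs. background (target class indices are Long).
    l_cls  = F.cross_entropy(pred["cls_logits"], target["category"])
    # Bounding-box regression against the ground-truth detection-box coordinates.
    l_box  = F.smooth_l1_loss(pred["box"], target["box"])
    # 2D and 3D feature-point regression.
    l_2d   = F.smooth_l1_loss(pred["kp_2d"], target["kp_2d"])
    l_3d   = F.smooth_l1_loss(pred["kp_3d"], target["kp_3d"])
    # Foreground/background mask prediction.
    l_mask = F.binary_cross_entropy_with_logits(pred["mask_logits"], target["mask"])
    # Reprojection of the predicted 3D points with the predicted pose.
    l_proj = F.smooth_l1_loss(pred["kp_3d_projected"], target["kp_2d"])

    w1, w2, w3, w4, w5, w6 = w
    return (w1 * l_cls + w2 * l_box + w3 * l_2d +
            w4 * l_3d + w5 * l_mask + w6 * l_proj)
```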
6. The object anchoring method according to claim 1, wherein the rendering of the object of interest is implemented either by a mobile terminal alone or by a mobile terminal together with a cloud server;
the process performed by the mobile terminal alone is as follows:
before tracking starts, accessing the cloud server and downloading the user's object model, deep learning model and feature database; all other computation is then performed on the mobile terminal;
the mobile terminal reads camera data from the device, and the object pose is obtained by the detection or recognition neural network and the six-degree-of-freedom pose estimation neural network;
rendering the content to be rendered according to the pose of the object;
the process implemented jointly by the mobile terminal and the cloud server is as follows:
inputting the image sequence on the mobile terminal and performing saliency detection on each frame;
uploading the saliency detection region to the cloud server for retrieval, obtaining the information of the object and the related deep learning model, and loading them onto the mobile terminal;
estimating the pose of the object on the mobile terminal to obtain the object pose;
and rendering the content to be rendered according to the pose of the object.
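The two deployment modes of claim 6 are sketched below in Python; all class, function and attribute names (cloud.download_assets, detect_salient_region, cloud.retrieve, renderer.draw, …) are hypothetical placeholders.

```python
# Mode 1: everything runs on the mobile terminal after a one-time download.
def run_on_device(cloud, camera, renderer):
    assets = cloud.download_assets()          # object model, deep learning model, feature database
    detector, pose_net = load_models(assets)  # detection/recognition + 6-DoF pose networks
    for frame in camera.frames():
        detection = detector(frame)           # detect or recognize the object
        pose = pose_net(frame, detection)     # 6-DoF pose estimation on the device
        renderer.draw(frame, assets.content, pose)

# Mode 2: saliency detection on the device, retrieval on the cloud server.
def run_hybrid(cloud, camera, renderer):
    frames = camera.frames()
    first_frame = next(frames)
    region = detect_salient_region(first_frame)   # saliency detection on the device
    info, pose_net = cloud.retrieve(region)       # object information + related model from the cloud
    for frame in frames:
        pose = pose_net(frame)                    # pose estimated on the mobile terminal
        renderer.draw(frame, info.content, pose)
```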
7. An object anchoring system is characterized by comprising a cloud training unit and an object pose calculation and rendering unit;
the cloud training unit is used for training according to the acquired image sequence containing the interested object to obtain a three-dimensional model of the interested object and a six-degree-of-freedom pose estimation neural network model for estimating the pose of the object;
the object pose calculation and rendering unit is used for estimating the pose of the interested object according to the three-dimensional model of the interested object and the six-degree-of-freedom pose estimation neural network model for object pose estimation, and superimposing virtual information on the interested object so as to render the interested object;
the cloud training unit comprises a modeling unit, a synthetic training data generating unit, a real training data generating unit and a training algorithm unit;
the modeling unit is used for training according to the acquired image sequence containing the interested object to obtain a three-dimensional model of the interested object;
the synthetic training data generation unit is used for obtaining a synthetic data set according to a three-dimensional model of an object and a preset scene model, and the synthetic data set comprises synthetic training data;
the real training data generation unit is used for obtaining a real data set according to the camera pose and the object pose, and the real data set comprises real training data;
and the training algorithm unit is used for training the six-degree-of-freedom pose estimation neural network based on deep learning according to the synthetic training data and the real training data to obtain a six-degree-of-freedom pose estimation neural network model.
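For illustration, the unit decomposition of claim 7 could be organized as in the following sketch; the class and method names are hypothetical, and each sub-unit is assumed to expose the single method shown.

```python
class CloudTrainingUnit:
    """Groups the modeling, synthetic-data, real-data and training-algorithm sub-units."""
    def __init__(self, modeling_unit, synthetic_unit, real_unit, training_unit):
        self.modeling_unit, self.synthetic_unit = modeling_unit, synthetic_unit
        self.real_unit, self.training_unit = real_unit, training_unit

    def train(self, image_sequence, scene_model):
        object_model   = self.modeling_unit.reconstruct(image_sequence)
        synthetic_data = self.synthetic_unit.generate(object_model, scene_model)
        real_data      = self.real_unit.generate(image_sequence, object_model)
        pose_network   = self.training_unit.fit(synthetic_data, real_data)
        return object_model, pose_network


class PoseCalculationAndRenderingUnit:
    def __init__(self, renderer):
        self.renderer = renderer

    def render(self, frame, object_model, pose_network, virtual_info):
        pose = pose_network.estimate(frame, object_model)        # 6-DoF pose
        return self.renderer.overlay(frame, virtual_info, pose)  # anchored rendering
```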
8. A storage medium having stored thereon an executable program which, when invoked, performs the steps in the object anchoring method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210173770.0A (CN114241013B) | 2022-02-25 | 2022-02-25 | Object anchoring method, anchoring system and storage medium
Publications (2)
Publication Number | Publication Date
---|---
CN114241013A | 2022-03-25
CN114241013B | 2022-05-10
Family
ID=80748105
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210173770.0A (CN114241013B, Active) | Object anchoring method, anchoring system and storage medium | 2022-02-25 | 2022-02-25
Country Status (1)
Country | Link
---|---
CN | CN114241013B (en)
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9996936B2 (en) * | 2016-05-20 | 2018-06-12 | Qualcomm Incorporated | Predictor-corrector based pose detection |
EP3705049A1 (en) * | 2019-03-06 | 2020-09-09 | Piur Imaging GmbH | Apparatus and method for determining motion of an ultrasound probe including a forward-backward directedness |
CN112884820B (en) * | 2019-11-29 | 2024-06-25 | 杭州三坛医疗科技有限公司 | Image initial registration and neural network training method, device and equipment |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant