CN116363217B - Method, device, computer equipment and medium for measuring pose of space non-cooperative target - Google Patents


Info

Publication number
CN116363217B
CN116363217B (application CN202310638984.5A)
Authority
CN
China
Prior art keywords
semantic key
key point
cooperative target
semantic
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310638984.5A
Other languages
Chinese (zh)
Other versions
CN116363217A (en)
Inventor
王梓
余英建
李璋
苏昂
于起峰
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority application: CN202310638984.5A
Publication of application: CN116363217A
Application granted; publication of granted patent: CN116363217B
Status: Active


Classifications

    • G06T 7/73 — Image analysis: determining position or orientation of objects or cameras using feature-based methods
    • G01C 21/24 — Navigation; navigational instruments specially adapted for cosmonautical navigation
    • G06N 3/08 — Neural networks: learning methods
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern (edges, contours, corners); connectivity analysis
    • G06V 10/82 — Image or video recognition using pattern recognition or machine learning with neural networks
    • G06V 20/10 — Scenes; scene-specific elements: terrestrial scenes


Abstract

The invention provides a method, a device, computer equipment and a medium for measuring the pose of a space non-cooperative target, relating to relative navigation and positioning in space and to pose measurement of non-cooperative targets in the field of computer vision. The method selects semantic key points on the space non-cooperative target, constructs a training data set, constructs a deep neural network that predicts a semantic key point set, and trains the network on the data set. After training is completed, the trained network predicts the semantic key point set in an input image; a weighted N-point perspective problem is then established based on the prediction and solved to obtain the position and attitude of the non-cooperative target in the input image under the camera coordinate system. The method can adapt to a complex, uncontrolled space environment and achieves reliable prediction of the position and attitude of the space non-cooperative target under the camera coordinate system.

Description

Method, device, computer equipment and medium for measuring pose of space non-cooperative target
Technical Field
The invention mainly relates to the technical field of radar imaging remote sensing, and in particular to a method, a device, computer equipment and a medium for measuring the pose of a spatial non-cooperative target.
Background
With the rapid development of space technology, tasks such as formation flying, servicing of failed satellites, and space-debris removal require measuring the position and attitude of a target spacecraft relative to a service spacecraft. Existing methods obtain the relative position and attitude by predicting the image positions of semantic key points defined on the target spacecraft and then solving an N-point perspective problem.
However, existing methods treat each key point as an independent target and train a deep neural network to predict the position of each key point in the image separately. They lack an overall model of the spacecraft and therefore adapt poorly to the complex, uncontrolled space environment.
Disclosure of Invention
Aiming at the technical problems existing in the prior art, the invention provides a method, a device, computer equipment and a medium for measuring the pose of a space non-cooperative target.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in one aspect, the present invention provides a method for measuring pose of a spatial non-cooperative target, including:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
Constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the attitude of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
Further, the number of semantic key points on the spatial non-cooperative target is at least 4.
Further, the semantic key point true value set of the spatial non-cooperative target on each sample image consists of all semantic key point elements. The i-th semantic key point element consists of three items: an index classification item c_i describing the correspondence between the i-th semantic key point element and a semantic key point under the space non-cooperative target body coordinate system; an X-axis image coordinate classification item x_i describing the coordinate of the i-th semantic key point element on the image X axis; and a Y-axis image coordinate classification item y_i describing the coordinate of the i-th semantic key point element on the image Y axis.
Further, the deep neural network comprises a feature extraction network, a feature encoder, a feature decoder and three prediction heads, wherein the three prediction heads are an index classification item prediction head, an X-axis image coordinate classification item prediction head and a Y-axis image coordinate classification item prediction head respectively;
the feature extraction network is used for extracting a feature map from an input image; the feature encoder is used for encoding the extracted features to obtain a feature map after global information encoding; the feature decoder is used for inquiring the feature map coded by the feature encoder by taking the key point inquiry vector as input to obtain the decoded feature corresponding to each prediction element; the index classification item predicting head, the X-axis image coordinate classification item predicting head and the Y-axis image coordinate classification item predicting head respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element by receiving the decoded features output by the feature decoder.
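As an illustrative sketch only (the dimensions, weights and helper names below are assumptions, not the patent's implementation), each prediction head can be viewed as a linear map from a decoded feature vector to a probability distribution via softmax:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def linear(weights, bias, feature):
    """Dense layer: `weights` has one row per output unit."""
    return [sum(w * f for w, f in zip(row, feature)) + b
            for row, b in zip(weights, bias)]

def prediction_heads(feature, heads):
    """Apply the index / X-coordinate / Y-coordinate heads to one
    decoded feature vector; each head outputs a distribution."""
    return {name: softmax(linear(W, b, feature))
            for name, (W, b) in heads.items()}

# Toy sizes: feature dim 4; 3 key points + 1 background class for the
# index head; coordinate classification items with 8 positions each.
feat = [0.5, -1.0, 0.25, 2.0]
heads = {
    "index": ([[0.1] * 4 for _ in range(4)], [0.0] * 4),
    "x":     ([[0.2] * 4 for _ in range(8)], [0.0] * 8),
    "y":     ([[0.3] * 4 for _ in range(8)], [0.0] * 8),
}
out = prediction_heads(feat, heads)
```

In the network described above, each decoded feature produced by the feature decoder (one per key point query vector) would pass through all three heads, yielding one predicted semantic key point element.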
Further, the invention trains the deep neural network on the training data set using a stochastic gradient descent method.
Further, the invention trains the deep neural network, comprising:
the semantic key point true value set of the space non-cooperative target in the input image isWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point truth value set is +.>,/>Equal to the number of semantic keypoints on spatially non-cooperative targets +.>
The semantic key point predicted value set predicted by the deep neural network based on the input image is as followsWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point predicted value set is +.>And->
True value set to semantic keypointsSupplementing zero element to obtain a set->Make->The number of the semantic key point elements is +.>And semantic key point predictor set +.>The number of semantic key point elements which can only be the same;
defining index function, obtaining optimal index function by minimizing bipartite matching loss functionThe following are provided:
wherein 、/>、/>Respectively +.>Index classification items, X-axis image coordinate classification items and Y-axis image coordinate classification items of semantic key point prediction elements in a semantic key point prediction set; />、/>、/>Respectively +.>Index classification item, X-axis image coordinate classification item and Y-axis image coordinate classification item of semantic key point prediction elements in semantic key point true value set, +. >Indicate->Index of each semantic key point prediction element in the semantic key point true value set; />Is a balance parameter; />Is cross entropy loss; when the X-axis image coordinate classification term and the Y-axis image coordinate classification term are gaussian distributions,is KL loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are one-hot encoded, the ++>Is cross entropy loss;
pairing semantic keypoint elements in a semantic keypoint predictor set using an optimal indexing functionIs a semantic key element of the group.
Further, training the deep neural network according to the invention further comprises constructing a loss function for the deep neural network to supervise its training:
L = Σ_{i=1}^{N} [ L_cls(c'_i, c_{σ*(i)}) + λ·( L_xy(x'_i, x_{σ*(i)}) + L_xy(y'_i, y_{σ*(i)}) ) ]
wherein L_xy is the coordinate term loss function; c_{σ*(i)}, x_{σ*(i)}, y_{σ*(i)} are the index classification item, X-axis image coordinate classification item and Y-axis image coordinate classification item of the semantic key point element in the true value set optimally matched to the i-th prediction element; σ*(i) denotes the optimal index, in the true value set, of the element matched to the i-th semantic key point prediction element.
Further, the coordinate term loss function L_xy of the invention compares the predicted and true coordinate classification items through their means and the prediction variance:
μ' = Σ_{j=1}^{d} j·p'_j,  μ = Σ_{j=1}^{d} j·p_j
wherein μ' and μ are respectively the means of the predicted value and the true value of the coordinate classification item, p'_j and p_j are the probabilities assigned to the j-th position by the predicted and true items, and d is the dimension of the coordinate classification item; the prediction variance of the coordinate classification item is:
σ'² = Σ_{j=1}^{d} (j − μ')²·p'_j
further, when the deep neural network is trained, the conditions for training convergence are as follows:
setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
or, setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
or, when the currently calculated loss function value is no longer reduced, the training is ended.
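The three stopping rules above can be combined into a single training-loop guard; a minimal sketch (the patience and tolerance values are illustrative assumptions, not from the patent):

```python
def should_stop(losses, max_iters, loss_threshold, patience=3, tol=1e-6):
    """Return True when any convergence condition from the text holds:
    the iteration cap is reached, the loss falls below the threshold,
    or the loss has not decreased for `patience` consecutive iterations."""
    if len(losses) >= max_iters:
        return True
    if losses and losses[-1] < loss_threshold:
        return True
    if len(losses) > patience:
        recent = losses[-(patience + 1):]
        if all(later >= earlier - tol
               for earlier, later in zip(recent, recent[1:])):
            return True
    return False

# Example: a plateauing loss curve triggers the third condition.
history = [1.0, 0.5, 0.3, 0.3, 0.3, 0.3]
stop = should_stop(history, max_iters=100, loss_threshold=1e-3)
```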
Further, the position and the posture of the spatial non-cooperative target in the camera coordinate system are obtained through the following steps:
the first semantic key point set of the input image obtained through predictionX-axis image coordinate classification item and Y-axis image of each semantic key point elementImage coordinate classification item +.>、/>Get->Coordinate position of each semantic key element on X-axis and Y-axis of image coordinate system +.>、/>
wherein and />Are respectively->X-axis image coordinate classification item and Y-axis image coordinate classification item of each semantic key point element are at the +. >Probability on individual positions, +.> and />Respectively the width and height of the input image, +.>The coefficient is the ratio of the resolution of the coordinate classification item to the image scale;
obtaining a predicted input mapThe first of the set of semantic keypoints of the imageThe uncertainty of the positions of the semantic key point elements on the X axis and the Y axis of the image coordinate system is as follows:
the first semantic key point set of the input image obtained through predictionIndex classification item of each semantic key point element to obtain the +.>Coordinates +.>
wherein ,indicate->Indexing the semantic key point elements to predefined semantic key points;
constructing a weighted N-point perspective model, and obtaining the position and the posture of a non-cooperative target under a camera coordinate system by solving the weighted N-point perspective model, wherein the weighted N-point perspective model is as follows:
wherein and />The optimal estimated values of the rotation matrix and the translation vector of the space non-cooperative target under the camera coordinate system are respectively called the pose of the space non-cooperative target; />Is an indication function if and only if the condition in brackets is 1, otherwise equal to 0; / >Is a robust estimation function; />Is a weighted re-projection residual, expressed as:
;/>
wherein ,is an internal parameter matrix of the camera, +.>Pose as spatially non-cooperative target, wherein +.>Rotation matrix under camera coordinate system for spatially non-cooperative targets, +.>For translation vectors of spatially non-cooperative targets in the camera coordinate system,is->Personal languageCoordinates under a space non-cooperative target body coordinate system corresponding to the sense key point element, ++>Is->Photographic depth of individual semantic key elements.
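A hedged numerical sketch of the two steps described above, decoding expected coordinates with their spread from the classification distributions and forming a weighted re-projection residual (the helper names, the example values, and the scalar weight are illustrative assumptions):

```python
def decode_coordinate(dist, s):
    """Expected pixel coordinate and variance from a coordinate
    classification item: positions j = 1..d with probabilities
    dist[j-1], mapped back to pixels by the resolution ratio s."""
    mean_bin = sum((j + 1) * p for j, p in enumerate(dist))
    var_bin = sum((j + 1 - mean_bin) ** 2 * p for j, p in enumerate(dist))
    return mean_bin / s, var_bin / (s * s)

def reprojection_residual(K, R, t, P, obs, weight):
    """Weighted 2-D re-projection residual of a body-frame point P
    observed at pixel `obs`, for camera intrinsics K and pose (R, t)."""
    Xc = [sum(R[r][c] * P[c] for c in range(3)) + t[r] for r in range(3)]
    uvw = [sum(K[r][c] * Xc[c] for c in range(3)) for r in range(3)]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return [weight * (obs[0] - u), weight * (obs[1] - v)]

# A distribution peaked at position 5 with s = 1 decodes near pixel 5.
dist = [0.0, 0.0, 0.1, 0.2, 0.4, 0.2, 0.1, 0.0]
x_mean, x_var = decode_coordinate(dist, s=1.0)

# Identity rotation and a simple intrinsic matrix; the observation is
# chosen to sit exactly on the projection, so the residual vanishes.
K = [[100.0, 0.0, 64.0], [0.0, 100.0, 64.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 5.0]
res = reprojection_residual(K, R, t, P=[0.1, -0.1, 0.0],
                            obs=[66.0, 62.0], weight=1.0)
```

A robust PnP solver would minimize the sum of such residuals over (R, t), down-weighting elements with large predicted variance.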
In another aspect, the present invention provides a spatial non-cooperative target pose measurement apparatus, comprising:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the second module is used for acquiring sample images containing the space non-cooperative targets, projecting the coordinates of the semantic key points under the space non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
the third module is used for constructing a deep neural network for predicting the semantic key point set;
A fourth module for training the deep neural network using the training data set until training converges;
a fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between the coordinates of each semantic key point in the predicted semantic key point set in the image coordinate system and its coordinates in the spatial non-cooperative target body coordinate system;
and a sixth module, configured to solve, based on the correspondence, a position and an attitude of the spatial non-cooperative target in the input image under a camera coordinate system.
In another aspect, the present invention provides a computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
Constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the attitude of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
In another aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
Constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the attitude of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
Compared with the prior art, the invention has the technical effects that:
according to the method, the position of each semantic key point of the spatial non-cooperative target in the image coordinate system is predicted by the trained deep neural network, and the position and attitude of the spatial non-cooperative target under the camera coordinate system are then solved. The method can adapt to a complex, uncontrolled space environment and achieves reliable prediction of the position and attitude of the space non-cooperative target under the camera coordinate system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment;
FIG. 2 is a schematic diagram of a semantic keypoint truth value set representation of spatial non-cooperative targets in an embodiment;
FIG. 3 is a block diagram of a deep neural network used in one embodiment;
FIG. 4 is a training flow diagram of a deep neural network in one embodiment;
FIG. 5 is a flow diagram of target position and pose estimation based on a deep neural network in one embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, in one embodiment, a method for measuring pose of a spatial non-cooperative target is provided, including the following steps:
(1) Constructing a training data set;
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the method comprises the steps of obtaining sample images containing space non-cooperative targets, projecting coordinates of semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set.
(2) Constructing a deep neural network for predicting a semantic key point set;
(3) Training a deep neural network using the training dataset;
and training the deep neural network based on the training data set by using an optimization algorithm until training converges.
(4) Predicting an input image by using the trained deep neural network;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
(5) And solving the position and the posture of the spatial non-cooperative target in the input image under a camera coordinate system.
And solving the position and the attitude of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
It is understood that the number of semantic keypoints on the spatially non-cooperative targets is 4 or more.
The semantic key point true value set of the spatial non-cooperative target on each sample image consists of all semantic key point elements. As shown in FIG. 2, in an embodiment of the semantic key point true value set representation, 11 semantic key points are selected on the spatial non-cooperative target. All semantic key points form the true value set, whose elements are denoted v_1, …, v_M. The i-th semantic key point element v_i consists of three items: an index classification item c_i describing the correspondence between the i-th element and a semantic key point under the space non-cooperative target body coordinate system; an X-axis image coordinate classification item x_i describing the coordinate of the i-th element on the image X axis; and a Y-axis image coordinate classification item y_i describing the coordinate of the i-th element on the image Y axis. The item x_i has s·w positions and y_i has s·h positions, where w and h are respectively the width and height of the input image and s is the resolution coefficient of the coordinate classification item. The items x_i and y_i may adopt a one-dimensional Gaussian representation:
x_i[j] ∝ exp( −(j − s·x*_i)² / (2σ_0²) ),  y_i[j] ∝ exp( −(j − s·y*_i)² / (2σ_0²) )
wherein x*_i and y*_i are the true values of the coordinates of the semantic key point element in the image coordinate system, and σ_0 is a fixed spatial variance whose value may be adjusted according to the image resolution w, h and the resolution coefficient s of the coordinate classification item. In addition, x_i and y_i may also be expressed in one-hot encoded form.
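A minimal sketch of the one-dimensional Gaussian representation above (the position layout starting at 1 and the normalization to a probability distribution are illustrative assumptions):

```python
import math

def gaussian_coordinate_item(true_coord, s, length, sigma):
    """Build a coordinate classification item of dimension s*length:
    a normalized 1-D Gaussian centered at position s * true_coord."""
    d = int(s * length)
    center = s * true_coord
    raw = [math.exp(-((j - center) ** 2) / (2.0 * sigma ** 2))
           for j in range(1, d + 1)]
    total = sum(raw)
    return [v / total for v in raw]

# Image width 32, resolution coefficient s = 2, true x-coordinate 10.5.
x_item = gaussian_coordinate_item(10.5, s=2, length=32, sigma=2.0)
peak_bin = max(range(len(x_item)), key=lambda j: x_item[j]) + 1
```

The distribution peaks at the position nearest s times the true coordinate, so the supervision signal degrades gracefully for near-miss predictions.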
The index classification item c_i describes the correspondence between the i-th semantic key point element and a semantic key point under the space non-cooperative target body coordinate system, and is expressed using one-hot encoding. If the i-th semantic key point element v_i corresponds to the k-th semantic key point, then c_i[k] = 1 and all other entries of c_i are zero.
In addition, a background element is introduced to describe pixels on the image that are not semantic key points; all key point entries of its index classification item are zero. The background element may also be referred to as a null element. Since the background class is set as a separate class, the dimension of the index classification item is the number of semantic key points plus 1, where the extra entry represents the background element.
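The one-hot index classification item with a separate background class can be sketched as follows (placing the background class in the last entry is an illustrative assumption):

```python
def index_item(keypoint_index, num_keypoints):
    """One-hot index classification item of dimension num_keypoints + 1.
    keypoint_index is 1-based; None encodes the background element,
    whose key point entries are all zero."""
    item = [0] * (num_keypoints + 1)
    if keypoint_index is None:
        item[num_keypoints] = 1  # last entry is the background class
    else:
        item[keypoint_index - 1] = 1
    return item

c3 = index_item(3, num_keypoints=11)       # element for key point 3
c_bg = index_item(None, num_keypoints=11)  # background (null) element
```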
There are two situations in which a training dataset is constructed:
1) When a three-dimensional model of a space non-cooperative target exists, three-dimensional model editing software can be used for selecting semantic key points of the surface of the space non-cooperative target, and the coordinates of the semantic key points under a space non-cooperative target body coordinate system (also called a body coordinate system for short) are recorded;
2) When only sample images with pose labels exist, several semantic key points can be selected manually, and the coordinates of each semantic key point in the body coordinate system can be computed from multiple images using a multi-view intersection technique. Denote the number of semantic key points by M (M ≥ 4) and the coordinates of the k-th semantic key point in the body coordinate system by P_k. The coordinates of the k-th semantic key point in a sample image are obtained through the pinhole imaging model:
z_k·q_k = K·(R·P_k + t)
wherein q_k denotes the homogeneous coordinates of the k-th semantic key point in the image coordinate system, K is the internal parameter matrix of the camera, (R, t) is the labeled pose, and z_k is the projective depth of the k-th semantic key point. In this way the image coordinates of each semantic key point on each image are obtained, and thus the semantic key point set true value of the spatial non-cooperative target on each sample image.
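The pinhole projection used to generate the true value labels can be sketched as follows (the intrinsic matrix and pose values are illustrative):

```python
def project_keypoint(K, R, t, P):
    """Project body-frame point P into pixel coordinates via the
    pinhole model: z * q = K (R P + t)."""
    Xc = [sum(R[r][c] * P[c] for c in range(3)) + t[r] for r in range(3)]
    uvw = [sum(K[r][c] * Xc[c] for c in range(3)) for r in range(3)]
    z = uvw[2]
    return uvw[0] / z, uvw[1] / z, z

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 10.0]
u, v, depth = project_keypoint(K, R, t, P=[1.0, 0.5, 0.0])
```

Repeating this for every labeled pose and every key point yields the image coordinates that populate the training set.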
The deep neural network adopted in one embodiment comprises a feature extraction network (TZ), a feature encoder, a feature decoder and three prediction heads, the three prediction heads being an index classification item prediction head, an X-axis image coordinate classification item prediction head and a Y-axis image coordinate classification item prediction head.
The feature extraction network is used for extracting a feature map from an input image; the feature encoder is used for encoding the extracted feature map to obtain a feature map with global information encoded, and plays a role in global feature fusion interaction. The feature decoder is used for inquiring the feature map coded by the feature encoder by taking the key point inquiry vector as input to obtain the decoded feature corresponding to each prediction element; the index classification item predicting head, the X-axis image coordinate classification item predicting head and the Y-axis image coordinate classification item predicting head respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element by receiving the decoded features output by the feature decoder.
The deep neural network used in one embodiment is shown in fig. 3. In fig. 3, the input image is fed into the feature extraction network (TZ) to extract a feature map. Several feature encoders are stacked, for example a first feature encoder (BM1) and a second feature encoder (BM2). By stacking the feature encoders and feature decoders, the capability of feature encoding and decoding is improved, and the prediction effect of the neural network is improved.
The optimization algorithm adopted by the invention for training the deep neural network is not limited, and a person skilled in the art can select the optimization algorithm in the prior art according to the situation.
In one embodiment of the invention, the deep neural network is trained using a stochastic gradient descent method based on the training dataset.
As shown in fig. 4, in one embodiment, a method for training the deep neural network is provided, including:
(1) Inputting a prediction set and a truth value set;
the semantic key point true value set of the space non-cooperative target in the input image is V = {v_1, …, v_M}, where the i-th semantic key point element is denoted v_i and the number M of semantic key point elements in the true value set equals the number of semantic key points on the space non-cooperative target;
The semantic key point predicted value set predicted by the deep neural network based on the input image is as followsWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point predicted value set is +.>And (2) and
(2) And supplementing background elements to the truth value set to make the number of elements in the prediction set and the truth value set equal.
The semantic key point truth set Y is padded with zero (background) elements to obtain a set Y⁰, so that the number of semantic key point elements in Y⁰ is M, the same as the number of semantic key point elements in the semantic key point prediction set Ŷ;
(3) Minimizing the matching loss function results in an optimal index function.
Defining an index function σ, the optimal index function σ̂ is obtained by minimizing the bipartite matching loss function:

σ̂ = argmin_σ Σ_{i=1}^{M} [ CE(ĉ_i, c_{σ(i)}) + α·( D(û_i, u_{σ(i)}) + D(v̂_i, v_{σ(i)}) ) ]

wherein ĉ_i, û_i, v̂_i are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the i-th semantic key point prediction element in the semantic key point prediction set; c_{σ(i)}, u_{σ(i)}, v_{σ(i)} are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the matched semantic key point element in the padded semantic key point truth set; σ(i) denotes the index of the i-th semantic key point prediction element into the semantic key point truth set; α is a balance parameter; CE is the cross-entropy loss; when the X-axis and Y-axis image coordinate classification items are Gaussian distributions, D is the KL loss; when the X-axis and Y-axis image coordinate classification items are one-hot encoded, D is the cross-entropy loss.
(4) Each element of the prediction set is paired with an element of the truth set via the optimal index function.
Using the optimal index function σ̂, each semantic key point element in the semantic key point prediction set is paired with a semantic key point element of the padded truth set Y⁰.
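Steps (2)–(4) above amount to padding the truth set with background elements and solving a bipartite assignment. A minimal sketch, assuming SciPy's Hungarian solver (`linear_sum_assignment`) and one-hot coordinate terms so both matching terms reduce to cross-entropy; the set sizes, class counts, toy distributions and balance parameter `alpha` are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
M, C, S = 5, 4, 8      # M predictions; C index classes (0 = background); S coord bins
alpha = 1.0            # balance parameter between index and coordinate terms

def rand_dist(n, k):
    z = rng.random((n, k))
    return z / z.sum(axis=1, keepdims=True)

def ce(p, target):
    """Cross-entropy of a predicted distribution against a one-hot target."""
    return -np.log(p[target] + 1e-9)

pred_cls, pred_x, pred_y = rand_dist(M, C), rand_dist(M, S), rand_dist(M, S)

# truth set: 3 real keypoints (class, x-bin, y-bin), padded with background to M
gt = [(1, 2, 3), (2, 5, 1), (3, 7, 6)] + [(0, None, None)] * (M - 3)

cost = np.zeros((M, M))
for i in range(M):
    for k, (c, x, y) in enumerate(gt):
        cost[i, k] = ce(pred_cls[i], c)
        if c != 0:   # coordinate terms apply only to non-background elements
            cost[i, k] += alpha * (ce(pred_x[i], x) + ce(pred_y[i], y))

rows, cols = linear_sum_assignment(cost)   # optimal index function sigma-hat
```

The solver returns the permutation that minimizes the total matching cost, i.e. the optimal index function pairing each prediction with a (possibly background) truth element.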
(5) And calculating a loss function value predicted by the deep neural network.
On the basis of the method for training the deep neural network, the method further comprises constructing a loss function of the deep neural network to supervise its training, the loss function of the deep neural network being:

L = Σ_{i=1}^{M} [ CE(ĉ_i, c_{σ̂(i)}) + α·( L_coord(û_i, u_{σ̂(i)}) + L_coord(v̂_i, v_{σ̂(i)}) ) ]

wherein L_coord is the coordinate term loss function; ĉ_i, û_i, v̂_i are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the i-th semantic key point prediction element; c_{σ̂(i)}, u_{σ̂(i)}, v_{σ̂(i)} are those of the matched semantic key point element in the semantic key point truth set; σ̂(i) denotes the optimal index, in the semantic key point truth set, of the element matched to the i-th semantic key point prediction element; α is a balance parameter.
The coordinate term loss function L_coord is computed from the statistics of the predicted and true coordinate classification terms,

wherein μ_p and μ_g are respectively the means of the predicted and the true coordinate classification term:

μ = Σ_{j=1}^{S} j·p(j)

wherein S is the predicted dimension of the coordinate classification term and p(j) is the probability it assigns to position j; σ_p² is the prediction variance of the coordinate classification term:

σ_p² = Σ_{j=1}^{S} (j − μ_p)²·p(j)
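The mean and variance of a coordinate classification term follow directly from its discrete distribution. A small NumPy sketch (bin indexing from 1 and the test distribution are assumptions for illustration):

```python
import numpy as np

def dist_stats(p):
    """Mean and variance of a discrete coordinate classification term,
    with the bin index j (starting at 1) as the random variable."""
    j = np.arange(1, len(p) + 1, dtype=float)
    mu = (j * p).sum()
    var = ((j - mu) ** 2 * p).sum()
    return mu, var

p = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # symmetric around bin 3
mu, var = dist_stats(p)
print(mu, var)                            # 3.0 1.2
```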
it will be appreciated that the present invention is not limited to the end conditions for terminating training of the model, and those skilled in the art can make reasonable settings based on methods known in the art or based on empirical, conventional means, including but not limited to setting the maximum number of iterations, etc. As in training the deep neural network, the conditions for training convergence are any one of the following three:
(a) Setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
(b) Setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
(c) And ending training when the currently calculated loss function value is not reduced any more.
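The three convergence conditions above can be combined into a single stopping predicate. A minimal sketch (the function name and default thresholds are illustrative, not from the patent):

```python
def should_stop(it, loss, prev_loss, max_iter=1000, loss_thresh=1e-4):
    """Any one of the three convergence conditions ends training."""
    if it >= max_iter:               # (a) maximum iteration count exceeded
        return True
    if loss < loss_thresh:           # (b) loss below the threshold
        return True
    if prev_loss is not None and loss >= prev_loss:  # (c) loss no longer decreasing
        return True
    return False

# toy loop: the loss halves each step, so condition (b) fires first
loss, prev, it = 1.0, None, 0
while not should_stop(it, loss, prev):
    prev, loss, it = loss, loss / 2, it + 1
print(it, loss)
```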
In one embodiment, given an image of a spatially non-cooperative target, the estimation process of the position and the pose of the spatially non-cooperative target in the camera coordinate system is shown in fig. 5, and includes the following steps:
(1) An image to be measured is input.
(2) And predicting by the deep neural network to obtain a key point set.
A set of semantic keypoints of the spatial non-cooperative targets in the input image is predicted using the deep neural network that has completed training.
(3) The position and uncertainty of the element on the image coordinate system are obtained through the X-axis and Y-axis classification items of the image.
From the X-axis image coordinate classification item û_i and the Y-axis image coordinate classification item v̂_i of the i-th semantic key point element in the predicted semantic key point set of the input image, the coordinate positions x_i and y_i of the i-th semantic key point element on the X axis and the Y axis of the image coordinate system are obtained:

x_i = (1/λ) Σ_{j=1}^{λW} j·û_i(j),  y_i = (1/λ) Σ_{j=1}^{λH} j·v̂_i(j)

wherein û_i(j) and v̂_i(j) are respectively the probabilities of the X-axis and Y-axis image coordinate classification items of the i-th semantic key point element at the j-th position, W and H are respectively the width and the height of the input image, and λ is a coefficient, the ratio of the resolution of the coordinate classification term to the image scale.
The uncertainties of the position of the i-th semantic key point element of the predicted semantic key point set on the X axis and the Y axis of the image coordinate system are obtained as:

σ_{i,x}² = (1/λ²) Σ_{j=1}^{λW} (j − λx_i)²·û_i(j),  σ_{i,y}² = (1/λ²) Σ_{j=1}^{λH} (j − λy_i)²·v̂_i(j)
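Step (3) decodes each coordinate as a probability-weighted average over the classification bins, with the variance of the same distribution giving the uncertainty. A NumPy sketch under these definitions (λ, W and the toy one-bin distribution are assumed values):

```python
import numpy as np

lam, W = 2.0, 4                    # lam = bins per pixel; image width W -> lam*W bins

def decode(p, lam):
    """Expected coordinate over the bin distribution, mapped back to
    image scale by 1/lam, plus the variance of the same distribution."""
    j = np.arange(1, len(p) + 1, dtype=float)
    x = (j * p).sum() / lam
    var = (((j - lam * x) ** 2) * p).sum() / lam ** 2
    return x, var

p = np.zeros(int(lam * W))
p[2] = 1.0                         # all probability mass on bin j = 3
x, var = decode(p, lam)
print(x, var)                      # 1.5 0.0
```

A one-bin (fully confident) distribution decodes to the bin centre with zero variance; a spread-out distribution yields a larger variance, i.e. larger positional uncertainty.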
(4) And establishing a corresponding relation between the coordinates of the key point image and the coordinates of the body coordinate system through the index classification item.
From the index classification item ĉ_i of the i-th semantic key point element in the predicted semantic key point set, the coordinates P_{d_i} of the corresponding predefined semantic key point in the spatial non-cooperative target body coordinate system are obtained, with

d_i = argmax_k ĉ_i(k)

wherein d_i denotes the index from the i-th semantic key point element to the predefined semantic key points.
(5) And constructing and solving an N-point perspective model with weights.
A weighted N-point perspective model is constructed, and the position and attitude of the non-cooperative target in the camera coordinate system are obtained by solving it:

(R̂, t̂) = argmin_{R,t} Σ_{i=1}^{M} 1(d_i ≠ background)·ρ(e_i)

wherein R̂ and t̂ are respectively the optimal estimates of the rotation matrix and the translation vector of the spatial non-cooperative target in the camera coordinate system, jointly called the pose of the spatial non-cooperative target; 1(·) is an indicator function, equal to 1 if and only if the condition in brackets is true and 0 otherwise; its role is to exclude the background elements predicted by the network when estimating the pose. ρ is a robust estimation function, e.g. the Huber loss, a common choice in robust estimation that is not described further here. e_i is the weighted reprojection residual, expressed as:

e_i = (1/σ_i²)·‖ p̃_i − (1/s_i)·K·(R·P_{d_i} + t) ‖²

wherein σ_i² = σ_{i,x}² + σ_{i,y}² is the prediction uncertainty of the image coordinates of the i-th semantic key point element; K is the intrinsic parameter matrix of the camera; (R, t) is the pose of the spatial non-cooperative target, R being its rotation matrix and t its translation vector in the camera coordinate system; P_{d_i} is the coordinate in the spatial non-cooperative target body coordinate system corresponding to the i-th semantic key point element; s_i is the photographic depth of the i-th semantic key point element; and p̃_i is the homogeneous coordinate of the i-th semantic key point in the image coordinate system.
The weighted N-point perspective model can be solved with a general-purpose optimization library such as g2o or Ceres to obtain the attitude and position of the spatial non-cooperative target in the camera coordinate system, namely R̂ and t̂.
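The patent solves the weighted PnP model with g2o or Ceres; as a stand-in sketch, the same uncertainty-weighted reprojection residual can be minimized with `scipy.optimize.least_squares`. The camera intrinsics, keypoint layout, uncertainties and the omission of the robust function ρ are illustrative simplifications (passing `loss="huber"` would restore robustness):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])                       # assumed camera intrinsics
P = np.array([[1., 1., 0.], [-1., 1., 0.], [-1., -1., 0.],
              [1., -1., 0.], [0., 0., 1.]])        # body-frame keypoints P_{d_i}
R_true = Rotation.from_euler("xyz", [0.1, -0.2, 0.3]).as_matrix()
t_true = np.array([0.2, -0.1, 6.0])

def project(R, t):
    q = (K @ (R @ P.T + t[:, None])).T             # K (R P + t)
    return q[:, :2] / q[:, 2:3]                    # divide by photographic depth s_i

obs = project(R_true, t_true)                      # network-predicted image keypoints
sigma = np.full(len(P), 2.0)                       # per-keypoint uncertainty sigma_i

def residual(params):
    """Reprojection residual weighted by the prediction uncertainty."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    return ((project(R, params[3:]) - obs) / sigma[:, None]).ravel()

x0 = np.concatenate([Rotation.from_matrix(R_true).as_rotvec() + 0.05,
                     t_true + 0.3])                # perturbed initial guess
sol = least_squares(residual, x0)
t_est = sol.x[3:]                                  # recovered translation vector
```

With noise-free observations and a nearby initial guess, the solver recovers the ground-truth pose; in practice the initial guess would come from a closed-form PnP solver and the indicator function would drop background elements first.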
In one embodiment, a spatial non-cooperative target pose measurement apparatus is provided, including:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the second module is used for acquiring sample images containing the space non-cooperative targets, projecting the coordinates of the semantic key points under the space non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
the third module is used for constructing a deep neural network for predicting the semantic key point set;
a fourth module for training the deep neural network using the training data set until training converges;
a fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates thereof in a spatial non-cooperative target object coordinate system;
and a sixth module, configured to solve, based on the correspondence, a position and an attitude of the spatial non-cooperative target in the input image under a camera coordinate system.
The implementation of each module and the construction of the model may follow the method described in any of the foregoing embodiments, and is not repeated here.
In another aspect, the present invention provides a computer device, including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the spatial non-cooperative target pose measurement method provided in any of the embodiments described above when executing the computer program. The computer device may be a server. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing sample data. The network interface of the computer device is used for communicating with an external terminal through a network connection.
In another aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the spatial non-cooperative target pose measurement method provided in any of the embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
Matters not described in detail in the present application belong to the prior art.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered to be within the scope of this description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (11)

1. A method for measuring the pose of a spatial non-cooperative target, characterized by comprising the following steps:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
obtaining sample images containing the spatial non-cooperative target, projecting the coordinates of each semantic key point in the spatial non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of each semantic key point in each sample image, obtaining a semantic key point truth set of the spatial non-cooperative target on each sample image, and constructing a training data set from the truth sets, wherein the semantic key point truth set of the spatial non-cooperative target on each sample image consists of all semantic key point elements, the i-th semantic key point element consisting of an index classification item c_i describing the correspondence between the i-th semantic key point element and the semantic key points in the spatial non-cooperative target body coordinate system, an X-axis image coordinate classification item u_i describing the coordinate of the i-th semantic key point element on the image X axis, and a Y-axis image coordinate classification item v_i describing the coordinate of the i-th semantic key point element on the image Y axis; the number of semantic key point elements in the semantic key point truth set equals the number N of semantic key points on the spatial non-cooperative target;
Constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
based on the correspondence, solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system comprises:
from the X-axis image coordinate classification item û_i and the Y-axis image coordinate classification item v̂_i of the i-th semantic key point element in the predicted semantic key point set of the input image, obtaining the coordinate positions x_i and y_i of the i-th semantic key point element on the X axis and the Y axis of the image coordinate system:

x_i = (1/λ) Σ_{j=1}^{λW} j·û_i(j),  y_i = (1/λ) Σ_{j=1}^{λH} j·v̂_i(j)

wherein û_i(j) and v̂_i(j) are respectively the probabilities of the X-axis and Y-axis image coordinate classification items of the i-th semantic key point element at the j-th position, W and H are respectively the width and the height of the input image, and λ is a coefficient, the ratio of the resolution of the coordinate classification term to the image scale;

obtaining the uncertainties of the position of the i-th semantic key point element of the predicted semantic key point set on the X axis and the Y axis of the image coordinate system as:

σ_{i,x}² = (1/λ²) Σ_{j=1}^{λW} (j − λx_i)²·û_i(j),  σ_{i,y}² = (1/λ²) Σ_{j=1}^{λH} (j − λy_i)²·v̂_i(j);
from the index classification item ĉ_i of the i-th semantic key point element in the predicted semantic key point set, obtaining the coordinates P_{d_i} of the corresponding predefined semantic key point in the spatial non-cooperative target body coordinate system, with d_i = argmax_k ĉ_i(k),

wherein d_i denotes the index from the i-th semantic key point element to the predefined semantic key points;
constructing a weighted N-point perspective model, and obtaining the position and attitude of the non-cooperative target in the camera coordinate system by solving it:

(R̂, t̂) = argmin_{R,t} Σ_{i=1}^{M} 1(d_i ≠ background)·ρ(e_i)

wherein the number of semantic key point elements in the semantic key point prediction set is M, with M > N; R̂ and t̂ are respectively the optimal estimates of the rotation matrix and the translation vector of the spatial non-cooperative target in the camera coordinate system, jointly called the pose of the spatial non-cooperative target; 1(·) is an indicator function equal to 1 if and only if the condition in brackets is true and 0 otherwise; ρ is a robust estimation function; e_i is the weighted reprojection residual, expressed as:

e_i = (1/σ_i²)·‖ p̃_i − (1/s_i)·K·(R·P_{d_i} + t) ‖²

wherein σ_i² = σ_{i,x}² + σ_{i,y}² is the prediction uncertainty of the image coordinates of the i-th semantic key point element; K is the intrinsic parameter matrix of the camera; (R, t) is the pose of the spatial non-cooperative target, R being its rotation matrix and t its translation vector in the camera coordinate system; P_{d_i} is the coordinate in the spatial non-cooperative target body coordinate system corresponding to the i-th semantic key point element; s_i is the photographic depth of the i-th semantic key point element; and p̃_i is the homogeneous coordinate of the i-th semantic key point in the image coordinate system.
2. The method for pose measurement of a spatially non-cooperative target according to claim 1, wherein the number of semantic keypoints on the spatially non-cooperative target is 4 or more.
3. The spatial non-cooperative target pose measurement method according to claim 1 or 2, wherein the deep neural network comprises a feature extraction network, a feature encoder, a feature decoder and three prediction heads, the three prediction heads being an index classification item prediction head, an X-axis image coordinate classification item prediction head, and a Y-axis image coordinate classification item prediction head, respectively;
the feature extraction network is used for extracting a feature map from an input image; the feature encoder is used for encoding the extracted features to obtain a feature map after global information encoding; the feature decoder is used for inquiring the feature map coded by the feature encoder by taking the key point inquiry vector as input to obtain the decoded feature corresponding to each prediction element; the index classification item predicting head, the X-axis image coordinate classification item predicting head and the Y-axis image coordinate classification item predicting head respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element by receiving the decoded features output by the feature decoder.
4. The spatial non-cooperative target pose measurement method according to claim 3, wherein the deep neural network is trained with the training data set using a stochastic gradient descent method.
5. The method of spatial non-cooperative target pose measurement according to claim 4, wherein training the deep neural network comprises:
the semantic key point truth set of the spatial non-cooperative target in the input image is Y = {y_1, …, y_N}, where the i-th semantic key point element is denoted y_i;

the semantic key point prediction set output by the deep neural network for the input image is Ŷ = {ŷ_1, …, ŷ_M}, where the i-th semantic key point element is denoted ŷ_i;

the semantic key point truth set Y is padded with zero (background) elements to obtain a set Y⁰, so that the number of semantic key point elements in Y⁰ is M, the same as in the semantic key point prediction set Ŷ;
an index function σ is defined, and the optimal index function σ̂ is obtained by minimizing the bipartite matching loss function:

σ̂ = argmin_σ Σ_{i=1}^{M} [ CE(ĉ_i, c_{σ(i)}) + α·( D(û_i, u_{σ(i)}) + D(v̂_i, v_{σ(i)}) ) ]

wherein ĉ_i, û_i, v̂_i are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the i-th semantic key point prediction element in the semantic key point prediction set; c_{σ(i)}, u_{σ(i)}, v_{σ(i)} are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the matched semantic key point element in the padded semantic key point truth set; σ(i) denotes the index of the i-th semantic key point prediction element into the semantic key point truth set; α is a balance parameter; CE is the cross-entropy loss; when the X-axis and Y-axis image coordinate classification items are Gaussian distributions, D is the KL loss; when they are one-hot encoded, D is the cross-entropy loss;

using the optimal index function σ̂, each semantic key point element in the semantic key point prediction set is paired with a semantic key point element of the padded truth set Y⁰.
6. The method of claim 5, further comprising constructing a loss function of the deep neural network to supervise its training, wherein the loss function of the deep neural network is:

L = Σ_{i=1}^{M} [ CE(ĉ_i, c_{σ̂(i)}) + α·( L_coord(û_i, u_{σ̂(i)}) + L_coord(v̂_i, v_{σ̂(i)}) ) ]

wherein L_coord is the coordinate term loss function; ĉ_i, û_i, v̂_i are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the i-th semantic key point prediction element; c_{σ̂(i)}, u_{σ̂(i)}, v_{σ̂(i)} are those of the matched semantic key point element in the semantic key point truth set; σ̂(i) denotes the optimal index, in the semantic key point truth set, of the element matched to the i-th semantic key point prediction element; α is a balance parameter.
7. The method of claim 6, wherein the coordinate term loss function L_coord is computed from the statistics of the predicted and true coordinate classification terms,

wherein μ_p and μ_g are respectively the means of the predicted and the true coordinate classification term:

μ = Σ_{j=1}^{S} j·p(j)

wherein S is the predicted dimension of the coordinate classification term and p(j) is the probability it assigns to position j; σ_p² is the prediction variance of the coordinate classification term:

σ_p² = Σ_{j=1}^{S} (j − μ_p)²·p(j)
8. the method for measuring pose of spatial non-cooperative target according to claim 6 or 7, wherein when training the deep neural network, conditions for training convergence are:
setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
or, setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
or, when the currently calculated loss function value is no longer reduced, the training is ended.
9. A spatial non-cooperative target pose measurement apparatus, characterized by comprising:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
a second module, configured to obtain sample images containing the spatial non-cooperative target, project the coordinates of each semantic key point in the spatial non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of each semantic key point in each sample image, obtain a semantic key point truth set of the spatial non-cooperative target on each sample image, and construct a training data set from the truth sets, wherein the semantic key point truth set of the spatial non-cooperative target on each sample image consists of all semantic key point elements, the i-th semantic key point element consisting of an index classification item c_i describing the correspondence between the i-th semantic key point element and the semantic key points in the spatial non-cooperative target body coordinate system, an X-axis image coordinate classification item u_i describing the coordinate of the i-th semantic key point element on the image X axis, and a Y-axis image coordinate classification item v_i describing the coordinate of the i-th semantic key point element on the image Y axis; the number of semantic key point elements in the semantic key point truth set equals the number N of semantic key points on the spatial non-cooperative target;
The third module is used for constructing a deep neural network for predicting the semantic key point set;
a fourth module for training the deep neural network using the training data set until training converges;
A fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates thereof in a spatial non-cooperative target object coordinate system;
a sixth module, configured to solve, based on the correspondence, a position and an attitude of a spatial non-cooperative target in an input image under a camera coordinate system, where the method includes:
from the X-axis image coordinate classification item û_i and the Y-axis image coordinate classification item v̂_i of the i-th semantic key point element in the predicted semantic key point set of the input image, obtaining the coordinate positions x_i and y_i of the i-th semantic key point element on the X axis and the Y axis of the image coordinate system:

x_i = (1/λ) Σ_{j=1}^{λW} j·û_i(j),  y_i = (1/λ) Σ_{j=1}^{λH} j·v̂_i(j)

wherein û_i(j) and v̂_i(j) are respectively the probabilities of the X-axis and Y-axis image coordinate classification items of the i-th semantic key point element at the j-th position, W and H are respectively the width and the height of the input image, and λ is a coefficient, the ratio of the resolution of the coordinate classification term to the image scale;

obtaining the uncertainties of the position of the i-th semantic key point element of the predicted semantic key point set on the X axis and the Y axis of the image coordinate system as:

σ_{i,x}² = (1/λ²) Σ_{j=1}^{λW} (j − λx_i)²·û_i(j),  σ_{i,y}² = (1/λ²) Σ_{j=1}^{λH} (j − λy_i)²·v̂_i(j);

from the index classification item ĉ_i of the i-th semantic key point element in the predicted semantic key point set, obtaining the coordinates P_{d_i} of the corresponding predefined semantic key point in the spatial non-cooperative target body coordinate system, with d_i = argmax_k ĉ_i(k), wherein d_i denotes the index from the i-th semantic key point element to the predefined semantic key points;

constructing a weighted N-point perspective model, and obtaining the position and attitude of the non-cooperative target in the camera coordinate system by solving it:

(R̂, t̂) = argmin_{R,t} Σ_{i=1}^{M} 1(d_i ≠ background)·ρ(e_i)

wherein the number of semantic key point elements in the semantic key point prediction set is M, with M > N; R̂ and t̂ are respectively the optimal estimates of the rotation matrix and the translation vector of the spatial non-cooperative target in the camera coordinate system, jointly called the pose of the spatial non-cooperative target; 1(·) is an indicator function equal to 1 if and only if the condition in brackets is true and 0 otherwise; ρ is a robust estimation function; e_i is the weighted reprojection residual, expressed as:

e_i = (1/σ_i²)·‖ p̃_i − (1/s_i)·K·(R·P_{d_i} + t) ‖²

wherein σ_i² = σ_{i,x}² + σ_{i,y}² is the prediction uncertainty of the image coordinates of the i-th semantic key point element; K is the intrinsic parameter matrix of the camera; (R, t) is the pose of the spatial non-cooperative target, R being its rotation matrix and t its translation vector in the camera coordinate system; P_{d_i} is the coordinate in the spatial non-cooperative target body coordinate system corresponding to the i-th semantic key point element; s_i is the photographic depth of the i-th semantic key point element; and p̃_i is the homogeneous coordinate of the i-th semantic key point in the image coordinate system.
10. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the spatial non-cooperative target pose measurement method according to claim 1.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the spatial non-cooperative target pose measurement method according to claim 1.
CN202310638984.5A 2023-06-01 2023-06-01 Method, device, computer equipment and medium for measuring pose of space non-cooperative target Active CN116363217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310638984.5A CN116363217B (en) 2023-06-01 2023-06-01 Method, device, computer equipment and medium for measuring pose of space non-cooperative target


Publications (2)

Publication Number Publication Date
CN116363217A CN116363217A (en) 2023-06-30
CN116363217B true CN116363217B (en) 2023-08-11

Family

ID=86939989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310638984.5A Active CN116363217B (en) 2023-06-01 2023-06-01 Method, device, computer equipment and medium for measuring pose of space non-cooperative target

Country Status (1)

Country Link
CN (1) CN116363217B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287873A (en) * 2019-06-25 2019-09-27 清华大学深圳研究生院 Noncooperative target pose measuring method, system and terminal device based on deep neural network
CN111862201A (en) * 2020-07-17 2020-10-30 北京航空航天大学 Deep learning-based spatial non-cooperative target relative pose estimation method
CN111862126A (en) * 2020-07-09 2020-10-30 北京航空航天大学 Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm
CN112053373A (en) * 2020-08-11 2020-12-08 北京控制工程研究所 Spatial non-cooperative target posture evaluation method with image scale transformation
CN112651437A (en) * 2020-12-24 2021-04-13 北京理工大学 Spatial non-cooperative target pose estimation method based on deep learning
EP3905194A1 (en) * 2020-04-30 2021-11-03 Siemens Aktiengesellschaft Pose estimation method and apparatus
CN115018876A (en) * 2022-06-08 2022-09-06 哈尔滨理工大学 Non-cooperative target grabbing control system based on ROS
CN115035260A (en) * 2022-05-27 2022-09-09 哈尔滨工程大学 Indoor mobile robot three-dimensional semantic map construction method
CN115661246A (en) * 2022-10-25 2023-01-31 中山大学 Attitude estimation method based on self-supervision learning
CN116109706A (en) * 2023-04-13 2023-05-12 中国人民解放军国防科技大学 Space target inversion method, device and equipment based on priori geometric constraint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Satellite monocular pose estimation method based on a Transformer model; Wang Zi et al.; Acta Aeronautica et Astronautica Sinica; full text *

Also Published As

Publication number Publication date
CN116363217A (en) 2023-06-30

Similar Documents

Publication Publication Date Title
US11629965B2 (en) Methods, apparatus, and systems for localization and mapping
Huang et al. A comprehensive survey on point cloud registration
CN109974693B (en) Unmanned aerial vehicle positioning method and device, computer equipment and storage medium
CN107230225B (en) Method and apparatus for three-dimensional reconstruction
CN109099915B (en) Mobile robot positioning method, mobile robot positioning device, computer equipment and storage medium
CN110047108B (en) Unmanned aerial vehicle pose determination method and device, computer equipment and storage medium
WO2021052283A1 (en) Method for processing three-dimensional point cloud data and computing device
EP2960859B1 (en) Constructing a 3d structure
KR20190070514A (en) Apparatus for Building Grid Map and Method there of
CN111292377B (en) Target detection method, device, computer equipment and storage medium
KR20220076398A (en) Object recognition processing apparatus and method for ar device
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
CN115457492A (en) Target detection method and device, computer equipment and storage medium
CN113052907A (en) Positioning method of mobile robot in dynamic environment
CN112651944A (en) 3C component high-precision six-dimensional pose estimation method and system based on CAD model
CN112053383A (en) Method and device for real-time positioning of robot
CN114663598A (en) Three-dimensional modeling method, device and storage medium
JP2010072700A (en) Image processor, image processing method, and image pickup system
CN117635444A (en) Depth completion method, device and equipment based on radiation difference and space distance
CN116363217B (en) Method, device, computer equipment and medium for measuring pose of space non-cooperative target
CN113793251A (en) Pose determination method and device, electronic equipment and readable storage medium
Dan et al. Multifeature energy optimization framework and parameter adjustment-based nonrigid point set registration
CN111582013A (en) Ship retrieval method and device based on gray level co-occurrence matrix characteristics
CN116206302A (en) Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium
CN113763468B (en) Positioning method, device, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant