CN116363217A - Method, device, computer equipment and medium for measuring pose of space non-cooperative target - Google Patents
- Publication number
- CN116363217A (application number CN202310638984.5A)
- Authority
- CN
- China
- Prior art keywords
- semantic key
- key point
- cooperative target
- semantic
- coordinate system
- Prior art date
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/24—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for cosmonautical navigation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Abstract
The invention provides a method, a device, computer equipment and a medium for measuring the pose of a space non-cooperative target, relating to relative navigation and positioning in space and to the pose measurement of non-cooperative targets in the field of computer vision. The method selects semantic key points on the space non-cooperative target, constructs a training data set, constructs a deep neural network for predicting the semantic key point set, and trains the deep neural network with the training data set. After training is complete, the trained deep neural network predicts the semantic key point set in an input image; finally, a weighted perspective-n-point problem is established from the prediction and solved to obtain the position and attitude of the non-cooperative target in the input image under the camera coordinate system. The method can adapt to a complex, uncontrolled space environment and achieves reliable prediction of the position and attitude of the space non-cooperative target under the camera coordinate system.
Description
Technical Field
The invention relates mainly to the technical field of radar imaging and remote sensing, and in particular to a method, a device, computer equipment and a medium for measuring the pose of a spatial non-cooperative target.
Background
With the rapid development of space technology, tasks such as formation flight, servicing of failed satellites, and space debris removal require measuring the position and attitude of a target spacecraft relative to a service spacecraft. Existing methods obtain the relative position and attitude by predicting the image positions of semantic key points defined on the target spacecraft and then solving a perspective-n-point problem.
However, existing methods treat each key point as an independent target and train a deep neural network to predict the position of each key point in the image. They lack overall modeling of the spacecraft and are therefore difficult to adapt to a complex, uncontrolled space environment.
Disclosure of Invention
Aiming at the technical problems existing in the prior art, the invention provides a method, a device, computer equipment and a medium for measuring the pose of a space non-cooperative target.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
in one aspect, the present invention provides a method for measuring pose of a spatial non-cooperative target, including:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
Further, the number of semantic key points on the spatial non-cooperative target is at least 4.
Further, the semantic key point truth set of the spatial non-cooperative target on each sample image consists of all semantic key point elements. The $i$-th semantic key point element consists of an index classification item $c_i$ describing the correspondence between the $i$-th semantic key point element and a semantic key point in the spatial non-cooperative target body coordinate system, an X-axis image coordinate classification item $x_i$ describing the coordinate of the $i$-th semantic key point element on the image X axis, and a Y-axis image coordinate classification item $y_i$ describing the coordinate of the $i$-th semantic key point element on the image Y axis.
Further, the deep neural network comprises a feature extraction network, a feature encoder, a feature decoder and three prediction heads: an index classification item prediction head, an X-axis image coordinate classification item prediction head and a Y-axis image coordinate classification item prediction head;
the feature extraction network extracts a feature map from the input image; the feature encoder encodes the extracted features to obtain a feature map with global information encoded; the feature decoder takes key point query vectors as input and queries the encoded feature map to obtain the decoded feature corresponding to each prediction element; the index classification item prediction head, the X-axis image coordinate classification item prediction head and the Y-axis image coordinate classification item prediction head receive the decoded features output by the feature decoder and respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of each semantic key point element.
Further, the invention trains the deep neural network on the training data set using a stochastic gradient descent method.
Further, the invention trains the deep neural network, comprising:
the semantic key point true value set of the space non-cooperative target in the input image isWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point truth value set is +.>,/>Equal to the number of semantic keypoints on spatially non-cooperative targets +.>;
The semantic key point predicted value set predicted by the deep neural network based on the input image is as followsWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point predicted value set is +.>And->;
True value set to semantic keypointsSupplementing zero element to obtain a set->Make->The number of the semantic key point elements is +.>And semantic key point predictor set +.>The number of semantic key point elements which can only be the same;
defining index function, obtaining optimal index function by minimizing bipartite matching loss functionThe following are provided:
wherein 、/>、/>Respectively +.>Index classification items, X-axis image coordinate classification items and Y-axis image coordinate classification items of semantic key point prediction elements in a semantic key point prediction set; />、/>、/>Respectively +.>Index classification item, X-axis image coordinate classification item and Y-axis image coordinate classification item of semantic key point prediction elements in semantic key point true value set, +.>Indicate->Index of each semantic key point prediction element in the semantic key point true value set; />Is a balance parameter; />Is cross entropy loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are Gaussian distribution, the +.>Is KL loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are one-hot encoded, the ++>Is cross entropy loss;
pairing semantic keypoint elements in a semantic keypoint predictor set using an optimal indexing functionIs a semantic key element of the group.
Further, training the deep neural network further comprises constructing a loss function of the deep neural network to supervise training, the loss function of the deep neural network being:

$$\mathcal{L} = \sum_{j=1}^{M} \Big[ \mathcal{L}_{ce}\big(\hat{c}_j, c_{\sigma^*(j)}\big) + \lambda\big( \mathcal{L}_{coord}\big(\hat{x}_j, x_{\sigma^*(j)}\big) + \mathcal{L}_{coord}\big(\hat{y}_j, y_{\sigma^*(j)}\big) \big) \Big]$$

where $\mathcal{L}_{coord}$ is the coordinate term loss function; $c_{\sigma^*(j)}$, $x_{\sigma^*(j)}$ and $y_{\sigma^*(j)}$ are respectively the index classification item, X-axis image coordinate classification item and Y-axis image coordinate classification item of the truth element matched to the $j$-th semantic key point prediction element; and $\sigma^*(j)$ is the optimal index, in the truth set, of the $j$-th semantic key point prediction element.
Further, the coordinate term loss function $\mathcal{L}_{coord}$ penalizes the discrepancy between the predicted and true coordinate classification items through their means, scaled by the predicted uncertainty, where $\hat{\mu}$ and $\mu$ are respectively the means of the predicted and true coordinate classification items:

$$\hat{\mu} = \sum_{k=1}^{D} k\,\hat{p}(k), \qquad \mu = \sum_{k=1}^{D} k\,p(k)$$

where $D$ is the predicted dimension of the coordinate classification item and $\hat{p}(k)$, $p(k)$ are the probabilities of the predicted and true coordinate classification items at the $k$-th position; the prediction variance of the coordinate classification item is:

$$\hat{\sigma}^2 = \sum_{k=1}^{D} (k - \hat{\mu})^2\,\hat{p}(k)$$
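To make the mean and variance of a coordinate classification item concrete, the following sketch (not part of the patent; function names are illustrative) computes both quantities for a discrete probability distribution over positions $1..D$:

```python
def distribution_mean(p):
    """Mean of a discrete distribution p over positions 1..D."""
    return sum(k * pk for k, pk in enumerate(p, start=1))

def distribution_variance(p):
    """Variance of the same distribution, used as the prediction uncertainty."""
    mu = distribution_mean(p)
    return sum((k - mu) ** 2 * pk for k, pk in enumerate(p, start=1))

# A distribution concentrated at position 3 of D=5 has mean 3 and variance 0.
p = [0.0, 0.0, 1.0, 0.0, 0.0]
print(distribution_mean(p), distribution_variance(p))  # 3.0 0.0
```

A broader distribution, e.g. `[0.25, 0.5, 0.25]`, keeps the same mean (2.0) but has variance 0.5, which is why the variance serves as a per-key-point uncertainty measure.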
Further, when the deep neural network is trained, training convergence is determined by any of the following conditions:
setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
or, setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
or, when the currently calculated loss function value is no longer reduced, the training is ended.
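The three stopping conditions above can be sketched as a single check; this is an illustrative Python helper (the parameter names and the patience-based plateau test are assumptions, not from the patent):

```python
def training_converged(iteration, loss_history,
                       max_iter=10000, loss_threshold=1e-4, patience=5):
    """True when any stopping condition holds: iteration budget exhausted,
    loss below threshold, or loss no longer decreasing."""
    if iteration >= max_iter:
        return True
    if loss_history and loss_history[-1] < loss_threshold:
        return True
    # "no longer reduced": the last `patience` losses fail to improve
    # on the best loss seen before them
    if len(loss_history) > patience:
        best_before = min(loss_history[:-patience])
        if min(loss_history[-patience:]) >= best_before:
            return True
    return False

print(training_converged(3, [0.9, 0.5, 0.2]))      # False
print(training_converged(3, [0.9, 0.5, 0.00005]))  # True (below threshold)
print(training_converged(20000, [0.9]))            # True (budget exhausted)
```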
Further, the position and attitude of the spatial non-cooperative target in the camera coordinate system are obtained through the following steps:

From the X-axis and Y-axis image coordinate classification items $\hat{x}_j$ and $\hat{y}_j$ of the $j$-th semantic key point element in the predicted semantic key point set of the input image, the coordinate positions $\hat{u}_j$ and $\hat{v}_j$ of the $j$-th element on the X and Y axes of the image coordinate system are obtained:

$$\hat{u}_j = \frac{1}{\kappa}\sum_{k=1}^{\kappa W} k\,\hat{x}_j(k), \qquad \hat{v}_j = \frac{1}{\kappa}\sum_{k=1}^{\kappa H} k\,\hat{y}_j(k)$$

where $\hat{x}_j(k)$ and $\hat{y}_j(k)$ are the probabilities of the X-axis and Y-axis image coordinate classification items of the $j$-th semantic key point element at the $k$-th position, $W$ and $H$ are respectively the width and height of the input image, and $\kappa$ is the coefficient giving the ratio of the resolution of the coordinate classification items to the image scale.

The uncertainties of the position of the $j$-th semantic key point element on the X and Y axes of the image coordinate system are obtained as the standard deviations of the coordinate classification items:

$$\hat{s}_{u,j} = \Big(\frac{1}{\kappa^2}\sum_{k=1}^{\kappa W}\big(k-\kappa\hat{u}_j\big)^2\,\hat{x}_j(k)\Big)^{1/2}, \qquad \hat{s}_{v,j} = \Big(\frac{1}{\kappa^2}\sum_{k=1}^{\kappa H}\big(k-\kappa\hat{v}_j\big)^2\,\hat{y}_j(k)\Big)^{1/2}$$

From the index classification item $\hat{c}_j$ of the $j$-th element in the predicted semantic key point set, the index $m_j$ of the corresponding predefined semantic key point, and hence its body-frame coordinates $P_{m_j}$, are obtained:

$$m_j = \arg\max_{m} \hat{c}_j(m)$$

where $m_j$ denotes the index of the predefined semantic key point corresponding to the $j$-th semantic key point element.

A weighted perspective-n-point model is then constructed, and the position and attitude of the non-cooperative target in the camera coordinate system are obtained by solving it:

$$R^*, t^* = \arg\min_{R,\,t} \sum_{j=1}^{M} \mathbb{1}\big(m_j \le N\big)\,\rho(e_j)$$

where $R^*$ and $t^*$ are the optimal estimates of the rotation matrix and translation vector of the spatial non-cooperative target in the camera coordinate system, together called the pose of the spatial non-cooperative target; $\mathbb{1}(\cdot)$ is an indicator function equal to 1 if and only if the condition in brackets holds (i.e., the $j$-th element does not correspond to the background class) and 0 otherwise; $\rho$ is a robust estimation function; and $e_j$ is the weighted re-projection residual:

$$e_j = \left\| \begin{bmatrix} \big(\hat{u}_j - u_j(R,t)\big)/\hat{s}_{u,j} \\ \big(\hat{v}_j - v_j(R,t)\big)/\hat{s}_{v,j} \end{bmatrix} \right\|, \qquad d_j \begin{bmatrix} u_j(R,t) \\ v_j(R,t) \\ 1 \end{bmatrix} = K\,\big(R\,P_{m_j} + t\big)$$

where $K$ is the intrinsic parameter matrix of the camera; $(R, t)$ is the pose of the spatial non-cooperative target, with $R$ the rotation matrix and $t$ the translation vector of the spatial non-cooperative target in the camera coordinate system; $P_{m_j}$ is the coordinate, in the spatial non-cooperative target body coordinate system, corresponding to the $j$-th semantic key point element; and $d_j$ is the photographic depth of the $j$-th semantic key point element.
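The weighted re-projection residual can be sketched with NumPy as follows; this is an illustrative reading of the model above (the function name and the per-axis uncertainty weighting are assumptions):

```python
import numpy as np

def weighted_reprojection_residual(K, R, t, P, uv_pred, s_uv):
    """Residual between a predicted keypoint (uv_pred) and the projection of its
    body-frame point P under pose (R, t), each axis scaled by its uncertainty s_uv."""
    p_cam = K @ (R @ P + t)   # homogeneous image coordinates
    d = p_cam[2]              # photographic depth
    uv_proj = p_cam[:2] / d   # pinhole division
    return np.linalg.norm((uv_pred - uv_proj) / s_uv)

# Identity pose, unit intrinsics: the point (1, 2, 4) projects to (0.25, 0.5),
# so a prediction at exactly that location has zero residual.
K = np.eye(3)
R = np.eye(3)
t = np.zeros(3)
P = np.array([1.0, 2.0, 4.0])
print(weighted_reprojection_residual(K, R, t, P,
                                     np.array([0.25, 0.5]),
                                     np.array([1.0, 1.0])))  # 0.0
```

A robust PnP solver would minimize $\rho(e_j)$ summed over all non-background elements, with this residual inside the loop.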
In another aspect, the present invention provides a spatial non-cooperative target pose measurement apparatus, comprising:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the second module is used for acquiring sample images containing the space non-cooperative targets, projecting the coordinates of the semantic key points under the space non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
the third module is used for constructing a deep neural network for predicting the semantic key point set;
a fourth module for training the deep neural network using the training data set until training converges;
a fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates thereof in a spatial non-cooperative target object coordinate system;
and a sixth module, configured to solve, based on the correspondence, a position and an attitude of the spatial non-cooperative target in the input image under a camera coordinate system.
In another aspect, the present invention provides a computer device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the following steps:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
In another aspect, the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
Compared with the prior art, the technical effects of the invention are as follows:
according to the method, the position of each semantic key point in the spatial non-cooperative target in the image coordinate system is predicted by training the deep neural network, so that the position and the gesture of the spatial non-cooperative target in the camera coordinate system are solved. The method can adapt to a complex space non-control environment and realize reliable position and posture prediction of the space non-cooperative target under a camera coordinate system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of an embodiment;
FIG. 2 is a schematic diagram of a semantic keypoint truth value set representation of spatial non-cooperative targets in an embodiment;
FIG. 3 is a block diagram of a deep neural network used in one embodiment;
FIG. 4 is a training flow diagram of a deep neural network in one embodiment;
FIG. 5 is a flow diagram of target position and pose estimation based on a deep neural network in one embodiment.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, in one embodiment, a method for measuring pose of a spatial non-cooperative target is provided, including the following steps:
(1) Constructing a training data set;
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the method comprises the steps of obtaining sample images containing space non-cooperative targets, projecting coordinates of semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set.
(2) Constructing a deep neural network for predicting a semantic key point set;
(3) Training a deep neural network using the training dataset;
and training the deep neural network based on the training data set by using an optimization algorithm until training converges.
(4) Predicting an input image by using the trained deep neural network;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
(5) And solving the position and the posture of the spatial non-cooperative target in the input image under a camera coordinate system.
And solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
It can be understood that the number of semantic key points on the spatial non-cooperative target is at least 4.
The semantic key point truth set of the spatial non-cooperative target on each sample image consists of all semantic key point elements. FIG. 2 shows a schematic diagram of the semantic key point truth set representation of a spatial non-cooperative target in one embodiment, in which 11 semantic key points are selected on the spatial non-cooperative target. All semantic key points form the truth set, whose elements are denoted $z_1, \dots, z_N$. The $i$-th semantic key point element $z_i$ consists of three items: an index classification item $c_i$ describing the correspondence between the $i$-th element and a semantic key point in the spatial non-cooperative target body coordinate system, an X-axis image coordinate classification item $x_i$ describing the coordinate of the $i$-th element on the image X axis, and a Y-axis image coordinate classification item $y_i$ describing the coordinate of the $i$-th element on the image Y axis, with $c_i \in \mathbb{R}^{N+1}$, $x_i \in \mathbb{R}^{\kappa W}$ and $y_i \in \mathbb{R}^{\kappa H}$. Here $W$ and $H$ are respectively the width and height of the input image and $\kappa$ is the resolution coefficient of the coordinate classification items. $x_i$ and $y_i$ may adopt a one-dimensional Gaussian representation:

$$x_i(k) \propto \exp\Big(-\frac{(k - \kappa u_i)^2}{2 s^2}\Big), \qquad y_i(k) \propto \exp\Big(-\frac{(k - \kappa v_i)^2}{2 s^2}\Big)$$

where $u_i$ and $v_i$ are the true coordinates of the semantic key point element in the image coordinate system and $s$ is a fixed spatial variance whose value may be adjusted according to the image resolution $W$, $H$ and the resolution $\kappa$ of the coordinate classification items. Alternatively, one-hot encoded versions of $x_i$ and $y_i$ may be used, placing all probability at the position closest to the true coordinate.
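A one-dimensional Gaussian coordinate label of this kind can be sketched as follows (illustrative NumPy code; the normalization to a probability distribution is an assumption):

```python
import numpy as np

def gaussian_coordinate_label(u, width, kappa=1.0, s=2.0):
    """1-D Gaussian classification label for an image coordinate u on an axis
    of length `width`, at resolution kappa * width, normalized to sum to 1."""
    k = np.arange(1, int(kappa * width) + 1)
    g = np.exp(-((k - kappa * u) ** 2) / (2.0 * s ** 2))
    return g / g.sum()

label = gaussian_coordinate_label(u=10.0, width=32)
print(label.argmax() + 1)     # 10  (the peak sits at the true coordinate)
print(round(label.sum(), 6))  # 1.0
```

The one-hot alternative is the limiting case $s \to 0$: all mass on the single bin nearest $\kappa u$.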
The index classification item $c_i$ describes the correspondence between the $i$-th semantic key point element and a semantic key point in the spatial non-cooperative target body coordinate system, and is expressed with one-hot encoding. If the $i$-th semantic key point element $z_i$ corresponds to the $m$-th semantic key point, then $c_i$ is:

$$c_i(k) = \begin{cases} 1, & k = m \\ 0, & k \neq m \end{cases}$$

In addition, a background element is introduced to describe pixels on the image that are not semantic key points; for a background element, the key point entries of the index classification item are all zero:

$$c_i(k) = 0, \qquad k = 1, \dots, N$$

A background element may also be called a null element. Since the background class is set as a separate class, the dimension of the index classification item is the number of semantic key points plus 1, where the additional 1 represents the background element.
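A minimal sketch of the index classification item (illustrative; the patent leaves the key point entries of a background element all zero, and this sketch additionally assumes the extra (N+1)-th slot is used to mark the background class):

```python
def index_classification_item(keypoint_index, num_keypoints):
    """One-hot index item of dimension num_keypoints + 1; the last slot is the
    background class. keypoint_index=None encodes a background (null) element."""
    c = [0.0] * (num_keypoints + 1)
    if keypoint_index is None:
        c[num_keypoints] = 1.0   # background slot (assumed convention)
    else:
        c[keypoint_index] = 1.0  # 0-based index of the predefined keypoint
    return c

print(index_classification_item(2, 11))     # one-hot at position 2 of 12
print(index_classification_item(None, 11))  # background element
```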
There are two situations in which a training dataset is constructed:
1) When a three-dimensional model of the spatial non-cooperative target exists, three-dimensional model editing software can be used to select semantic key points on the surface of the spatial non-cooperative target, and their coordinates in the spatial non-cooperative target body coordinate system (referred to simply as the body coordinate system) are recorded;
2) When only sample images with pose labels exist, a number of semantic key points can be selected manually, and the coordinates of each semantic key point in the body coordinate system are computed from multiple images using multi-view triangulation. Denote the number of semantic key points by $N$ ($N \ge 4$) and the coordinates of the $i$-th semantic key point in the body coordinate system by $P_i$. The coordinates of the $i$-th semantic key point in a sample image are obtained through the pinhole imaging model:

$$d_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K\,\big(R\,P_i + t\big)$$

where $[u_i, v_i, 1]^T$ are the homogeneous coordinates of the $i$-th semantic key point in the image coordinate system and $d_i$ is its photographic depth. In this way, the image coordinates of every semantic key point on every sample image can be obtained, and hence the truth value of the semantic key point set of the spatial non-cooperative target on each sample image.
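The projection of body-frame key points used to build training labels can be sketched with NumPy under the pinhole model above (the values of $K$, $R$, $t$ below are illustrative):

```python
import numpy as np

def project_keypoints(K, R, t, points_body):
    """Project body-frame keypoints into image coordinates with the pinhole model
    d * [u, v, 1]^T = K (R P + t); returns an (N, 2) array of (u, v)."""
    p_cam = (K @ (R @ points_body.T + t[:, None])).T  # homogeneous coordinates
    return p_cam[:, :2] / p_cam[:, 2:3]               # divide by photographic depth

K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(project_keypoints(K, R, t, P))
# the keypoint at the body origin lands at the principal point (64, 64)
```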
The deep neural network adopted in one embodiment comprises a feature extraction network (TZ), a feature encoder, a feature decoder and three prediction heads: an index classification item prediction head, an X-axis image coordinate classification item prediction head and a Y-axis image coordinate classification item prediction head.
The feature extraction network extracts a feature map from the input image. The feature encoder encodes the extracted feature map to obtain a feature map with global information encoded, performing global feature fusion and interaction. The feature decoder takes key point query vectors as input and queries the encoded feature map to obtain the decoded feature corresponding to each prediction element. The index classification item prediction head, the X-axis image coordinate classification item prediction head and the Y-axis image coordinate classification item prediction head receive the decoded features output by the feature decoder and respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of each semantic key point element.
The deep neural network used in one embodiment is shown in FIG. 3. In FIG. 3, the input image is fed into the feature extraction network (TZ) to extract a feature map. There are K stacked feature encoders, e.g. a first feature encoder (BM1) and a second feature encoder (BM2). By stacking feature encoders and feature decoders, the encoding and decoding capability is improved, which improves the prediction performance of the neural network.
The optimization algorithm used by the invention for training the deep neural network is not limited; a person skilled in the art can select an optimization algorithm from the prior art as appropriate.
In one embodiment of the invention, the deep neural network is trained using a stochastic gradient descent method based on the training dataset.
As shown in fig. 4, in one embodiment, a method for training the deep neural network is provided, including:
(1) Inputting a prediction set and a truth value set;
the semantic key point true value set of the space non-cooperative target in the input image isWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point truth value set is equal to the number of semantic key points on the space non-cooperative target +.>;
The semantic key point predicted value set predicted by the deep neural network based on the input image is as followsWherein the firstThe semantic key element is marked as +.>The number of semantic key point elements in the semantic key point predicted value set is +.>And->。
(2) And supplementing background elements to the truth value set to make the number of elements in the prediction set and the truth value set equal.
To the semantic key point truth value set $Y$, zero (background) elements are supplemented to obtain a padded set $\bar{Y}=\{\bar{y}_i\}_{i=1}^{M}$, so that the number of semantic key point elements in $\bar{Y}$ is $M$, the same as the number of semantic key point elements in the semantic key point predicted value set $\hat{Y}$;
(3) Minimizing the matching loss function results in an optimal index function.
An index function $\sigma$ (a permutation of $\{1,\dots,M\}$) is defined, and the optimal index function $\hat{\sigma}$ is obtained by minimizing the bipartite matching loss function:

$$\hat{\sigma}=\arg\min_{\sigma}\sum_{j=1}^{M}\Big[\mathcal{L}_{ce}\big(c_{\sigma(j)},\hat{c}_{j}\big)+\lambda\Big(\mathcal{L}_{d}\big(p^{x}_{\sigma(j)},\hat{p}^{x}_{j}\big)+\mathcal{L}_{d}\big(p^{y}_{\sigma(j)},\hat{p}^{y}_{j}\big)\Big)\Big]$$

wherein $\hat{c}_{j}$, $\hat{p}^{x}_{j}$, $\hat{p}^{y}_{j}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the $j$-th semantic key point prediction element in the semantic key point prediction set; $c_{\sigma(j)}$, $p^{x}_{\sigma(j)}$, $p^{y}_{\sigma(j)}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the matched semantic key point element in the padded semantic key point truth value set, and $\sigma(j)$ indicates the index, in the semantic key point truth value set, matched with the $j$-th semantic key point prediction element; $\lambda$ is a balance parameter; $\mathcal{L}_{ce}$ is the cross entropy loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are Gaussian distributions, $\mathcal{L}_{d}$ is the KL divergence loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are one-hot encoded, $\mathcal{L}_{d}$ is the cross entropy loss.
(4) The elements of a truth set are paired for each element of the prediction set by the best index function.
Using the optimal index function $\hat{\sigma}$, each semantic key point element in the semantic key point predicted value set is paired with a semantic key point element in the padded semantic key point truth value set.
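The pairing step can be sketched with a linear assignment solver over the matching-cost matrix. The toy set sizes, the one-hot truth items, and the use of cross entropy for both the index and coordinate terms are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
M, N_idx, D = 6, 5, 32      # prediction elements, index classes (incl. background), coord bins
lam = 1.0                   # balance parameter between index and coordinate terms

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ce(t, p):
    """Cross entropy between a target distribution t and a predicted distribution p."""
    return -(t * np.log(p + 1e-12)).sum(-1)

# predicted classification items (each row is a distribution)
pred_idx = softmax(rng.normal(size=(M, N_idx)))
pred_x = softmax(rng.normal(size=(M, D)))
pred_y = softmax(rng.normal(size=(M, D)))

# padded truth set: one-hot items; padding rows carry the background class
# and contribute no coordinate cost
true_idx = np.zeros((M, N_idx)); true_x = np.zeros((M, D)); true_y = np.zeros((M, D))
is_bg = np.zeros(M, dtype=bool)
for i in range(M):
    if i < 4:                        # 4 real keypoints, the rest are padding
        true_idx[i, i] = 1.0
        true_x[i, rng.integers(D)] = 1.0
        true_y[i, rng.integers(D)] = 1.0
    else:
        true_idx[i, -1] = 1.0        # background class
        is_bg[i] = True

# cost[j, i]: matching loss between prediction j and truth element i
cost = ce(true_idx[None, :, :], pred_idx[:, None, :]) \
     + lam * (~is_bg)[None, :] * (ce(true_x[None], pred_x[:, None])
                                  + ce(true_y[None], pred_y[:, None]))
rows, cols = linear_sum_assignment(cost)   # optimal index function sigma-hat
```

`cols[j]` then plays the role of $\hat{\sigma}(j)$: the truth element paired with the $j$-th prediction element.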
(5) And calculating a loss function value predicted by the deep neural network.
On the basis of the method for training the deep neural network, the method further comprises constructing a loss function of the deep neural network and supervising the training of the deep neural network, wherein the loss function of the deep neural network is as follows:

$$\mathcal{L}=\sum_{j=1}^{M}\Big[\mathcal{L}_{ce}\big(c_{\hat{\sigma}(j)},\hat{c}_{j}\big)+\lambda\Big(\mathcal{L}_{coord}\big(p^{x}_{\hat{\sigma}(j)},\hat{p}^{x}_{j}\big)+\mathcal{L}_{coord}\big(p^{y}_{\hat{\sigma}(j)},\hat{p}^{y}_{j}\big)\Big)\Big]$$

wherein $\mathcal{L}_{coord}$ is a coordinate term loss function; $c_{\hat{\sigma}(j)}$, $p^{x}_{\hat{\sigma}(j)}$, $p^{y}_{\hat{\sigma}(j)}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element, in the semantic key point truth value set, matched with the $j$-th semantic key point prediction element; $\hat{\sigma}(j)$ represents the best index, in the semantic key point truth value set, of the element matched with the $j$-th semantic key point prediction element.

The coordinate term loss function is

$$\mathcal{L}_{coord}\big(p,\hat{p}\big)=\frac{(\mu-\hat{\mu})^{2}}{2\hat{s}^{2}}+\frac{1}{2}\ln\hat{s}^{2}$$

wherein $\hat{\mu}$ and $\mu$ are respectively the mean of the predicted value and the true value of the coordinate classification term:

$$\hat{\mu}=\sum_{k=1}^{D}k\,\hat{p}(k),\qquad \mu=\sum_{k=1}^{D}k\,p(k)$$

wherein $D$ is the prediction dimension of the coordinate classification term, and $\hat{s}^{2}$ is the prediction variance of the coordinate classification term:

$$\hat{s}^{2}=\sum_{k=1}^{D}\big(k-\hat{\mu}\big)^{2}\,\hat{p}(k)$$
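The mean and variance of a coordinate classification term, and a coordinate term loss built from them, can be sketched as follows. The Gaussian negative log-likelihood form below is one plausible instantiation consistent with the mean/variance definitions above, not necessarily the exact loss of this embodiment:

```python
import numpy as np

def dist_mean_var(p):
    """Mean and variance of a discrete coordinate classification term p over bins 1..D."""
    k = np.arange(1, p.shape[-1] + 1)
    mu = (k * p).sum(-1)
    var = ((k - mu[..., None]) ** 2 * p).sum(-1)
    return mu, var

def coord_term_loss(p_true, p_pred):
    # Gaussian negative log-likelihood of the true mean under the predicted
    # mean and variance -- an assumed form of the coordinate term loss
    mu_t, _ = dist_mean_var(p_true)
    mu_p, var_p = dist_mean_var(p_pred)
    return (mu_t - mu_p) ** 2 / (2 * var_p) + 0.5 * np.log(var_p)

D = 64
k = np.arange(1, D + 1)
def gauss(c, s):
    g = np.exp(-(k - c) ** 2 / (2 * s ** 2))
    return g / g.sum()                       # discretized Gaussian coordinate term

p_true = gauss(20.0, 2.0)
loss_far = coord_term_loss(p_true, gauss(23.0, 2.0))   # prediction 3 bins off
loss_near = coord_term_loss(p_true, gauss(20.5, 2.0))  # prediction 0.5 bins off
```

As expected, a prediction whose mean lies closer to the true mean incurs a smaller coordinate term loss.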
it will be appreciated that the present invention is not limited to the end conditions for terminating training of the model, and those skilled in the art can make reasonable settings based on methods known in the art or based on empirical, conventional means, including but not limited to setting the maximum number of iterations, etc. As in training the deep neural network, the conditions for training convergence are any one of the following three:
(a) Setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
(b) Setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
(c) And ending training when the currently calculated loss function value is not reduced any more.
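The three stopping conditions above can be sketched as a single check inside the training loop; the threshold values and the patience window used to detect "no longer decreasing" are illustrative assumptions:

```python
def training_converged(it, losses, max_iter=10000, loss_eps=1e-4, patience=20):
    """Return True if any one of the three stopping conditions is met."""
    if it >= max_iter:                                  # (a) iteration budget exhausted
        return True
    if losses and losses[-1] < loss_eps:                # (b) loss below threshold
        return True
    if len(losses) > patience and min(losses[-patience:]) >= min(losses[:-patience]):
        return True                                     # (c) loss no longer decreasing
    return False
```

In use, the training loop calls `training_converged(step, loss_history)` after each iteration and stops as soon as it returns True.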
In one embodiment, given an image of a spatially non-cooperative target, the estimation process of the position and the pose of the spatially non-cooperative target in the camera coordinate system is shown in fig. 5, and includes the following steps:
(1) An image containing a space non-cooperative target is input.
(2) And predicting by the deep neural network to obtain a key point set.
A set of semantic keypoints of the spatial non-cooperative targets in the input image is predicted using the deep neural network that has completed training.
(3) The position and uncertainty of the element on the image coordinate system are obtained through the X-axis and Y-axis classification items of the image.
From the X-axis image coordinate classification item and the Y-axis image coordinate classification item $\hat{p}^{x}_{j}$, $\hat{p}^{y}_{j}$ of the $j$-th semantic key point element of the predicted semantic key point set of the input image, the coordinate position of the $j$-th semantic key point element on the X axis and the Y axis of the image coordinate system, $u_{j}$, $v_{j}$, is obtained:

$$u_{j}=\frac{1}{s}\sum_{k=1}^{sW}k\,\hat{p}^{x}_{j}(k),\qquad v_{j}=\frac{1}{s}\sum_{k=1}^{sH}k\,\hat{p}^{y}_{j}(k)$$

Here, $\hat{p}^{x}_{j}(k)$ and $\hat{p}^{y}_{j}(k)$ are respectively the probability, at the $k$-th position, of the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the $j$-th semantic key point element; $W$ and $H$ are respectively the width and height of the input image; $s$ is a coefficient, the ratio of the resolution of the coordinate classification term to the image scale.

The uncertainty of the position of the $j$-th semantic key point element of the predicted semantic key point set on the X axis and the Y axis of the image coordinate system is obtained as:

$$\sigma^{2}_{u_{j}}=\frac{1}{s^{2}}\sum_{k=1}^{sW}\big(k-s\,u_{j}\big)^{2}\,\hat{p}^{x}_{j}(k),\qquad \sigma^{2}_{v_{j}}=\frac{1}{s^{2}}\sum_{k=1}^{sH}\big(k-s\,v_{j}\big)^{2}\,\hat{p}^{y}_{j}(k)$$
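Decoding the coordinate position and its uncertainty from one coordinate classification term can be sketched as follows; the image width, resolution coefficient, and the peaked test distribution are illustrative values:

```python
import numpy as np

def decode_coordinate(p, s):
    """Expected position on the image axis and its variance, from a coordinate
    classification term p with resolution s * axis_length (bins k = 1..D)."""
    k = np.arange(1, p.shape[-1] + 1)
    u = (k * p).sum(-1) / s                        # coordinate on the image axis
    var = ((k - s * u) ** 2 * p).sum(-1) / s ** 2  # positional uncertainty
    return u, var

W, s = 256, 2                  # image width and resolution coefficient (illustrative)
D = s * W                      # prediction dimension of the coordinate term
k = np.arange(1, D + 1)
p = np.exp(-(k - 300) ** 2 / (2 * 4.0 ** 2))
p /= p.sum()                   # distribution peaked near bin 300

u, var = decode_coordinate(p, s)   # expect u near 300/s = 150 on the image axis
```

A sharply peaked classification term yields a small variance, so confident keypoints receive larger weight in the subsequent weighted N-point perspective model.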
(4) And establishing a corresponding relation between the coordinates of the key point image and the coordinates of the body coordinate system through the index classification item.
From the index classification item $\hat{c}_{j}$ of the $j$-th semantic key point element of the predicted semantic key point set of the input image, the coordinates $X_{j}$ of the $j$-th semantic key point element under the space non-cooperative target body coordinate system are obtained:

$$X_{j}=P_{k^{*}},\qquad k^{*}=\arg\max_{k}\,\hat{c}_{j}(k)$$

wherein $P_{k}$ denotes the coordinates, under the space non-cooperative target body coordinate system, of the $k$-th semantic key point on the space non-cooperative target.
(5) And constructing and solving an N-point perspective model with weights.
Constructing a weighted N-point perspective model, and obtaining the position and the posture of the non-cooperative target under the camera coordinate system by solving the weighted N-point perspective model, wherein the weighted N-point perspective model is as follows:

$$\hat{R},\hat{t}=\arg\min_{R,t}\sum_{j=1}^{M}\mathbb{1}\big[\arg\max_{k}\hat{c}_{j}(k)\neq\varnothing\big]\,\rho\big(e_{j}\big)$$

wherein $\hat{R}$ and $\hat{t}$ are respectively the optimal estimated values of the rotation matrix and the translation vector of the space non-cooperative target under the camera coordinate system, together called the pose of the space non-cooperative target; $\mathbb{1}[\cdot]$ is an indication function, equal to 1 if and only if the condition in brackets is true, otherwise equal to 0; its role is to eliminate the background elements of the network prediction when estimating the pose; $\rho$ is a robust estimation function, e.g. the Huber loss, a common function in robust estimation that is not described further here; $e_{j}$ is the weighted re-projection residual, expressed as:

$$e_{j}=\left\|\Sigma_{j}^{-\frac{1}{2}}\left(\begin{bmatrix}u_{j}\\ v_{j}\end{bmatrix}-\frac{1}{d_{j}}\big[K\big(R\,X_{j}+t\big)\big]_{1:2}\right)\right\|,\qquad \Sigma_{j}=\mathrm{diag}\big(\sigma^{2}_{u_{j}},\,\sigma^{2}_{v_{j}}\big)$$

wherein $K$ is the internal parameter matrix of the camera; $[R\,|\,t]$ is the pose of the space non-cooperative target, $R$ being the rotation matrix and $t$ the translation vector of the space non-cooperative target under the camera coordinate system; $X_{j}$ is the coordinates, under the space non-cooperative target body coordinate system, corresponding to the $j$-th semantic key point element; and $d_{j}=\big[K\big(R\,X_{j}+t\big)\big]_{3}$ is the photographic depth of the $j$-th semantic key point element.
The weighted N-point perspective model can be solved through a general optimization library such as g2o or Ceres to obtain the attitude and the position of the space non-cooperative target under the camera coordinate system, namely $\hat{R}$ and $\hat{t}$.
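A weighted N-point perspective solve of this kind can be sketched with a general nonlinear least-squares routine in place of g2o/Ceres. The intrinsics, body-frame keypoints, true pose, and per-keypoint uncertainties below are synthetic illustrative values:

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(r):
    """Rotation matrix from an axis-angle vector (minimal implementation)."""
    th = np.linalg.norm(r)
    if th < 1e-12:
        return np.eye(3)
    k = r / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * (Kx @ Kx)

# illustrative camera intrinsics, body-frame keypoints, and true pose
K = np.array([[400.0, 0, 128], [0, 400.0, 128], [0, 0, 1]])
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [0, 1, 1], [1, 0, 1.0]])
t_true = np.array([0.2, -0.1, 6.0])

def project(rvec, t):
    pc = X @ rodrigues(rvec).T + t        # keypoints in the camera frame
    uvh = pc @ K.T                        # homogeneous image coordinates
    return uvh[:, :2] / uvh[:, 2:3]       # divide by the photographic depth

uv_obs = project(np.zeros(3), t_true)     # decoded keypoint positions (noise-free here)
sigma = np.array([1.0, 1.0, 2.0, 1.0, 3.0, 1.0])   # per-keypoint uncertainties
keep = np.ones(6, dtype=bool)             # indication function: drop background elements

def residual(x):
    r = (project(x[:3], x[3:]) - uv_obs) / sigma[:, None]   # weighted re-projection
    return r[keep].ravel()

sol = least_squares(residual, x0=np.array([0, 0, 0, 0, 0, 5.0]),
                    loss="huber", f_scale=1.0)   # Huber as the robust estimator
R_hat, t_hat = rodrigues(sol.x[:3]), sol.x[3:]
```

With noise-free observations the solver recovers the true pose; with real predictions, the per-keypoint `sigma` weights down-weight uncertain keypoints and the Huber loss suppresses outliers.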
In one embodiment, a spatial non-cooperative target pose measurement apparatus is provided, including:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the second module is used for acquiring sample images containing the space non-cooperative targets, projecting the coordinates of the semantic key points under the space non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
the third module is used for constructing a deep neural network for predicting the semantic key point set;
a fourth module for training the deep neural network using the training data set until training converges;
a fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates thereof in a spatial non-cooperative target object coordinate system;
and a sixth module, configured to solve, based on the correspondence, a position and an attitude of the spatial non-cooperative target in the input image under a camera coordinate system.
The implementation of each module and the construction of the model can follow the method described in any of the foregoing embodiments, and is not repeated here.
In another aspect, the present invention provides a computer device, including a memory and a processor, the memory storing a computer program, the processor implementing the steps of the spatial non-cooperative target pose measurement method provided in any of the embodiments described above when executing the computer program. The computer device may be a server. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing sample data. The network interface of the computer device is used for communicating with an external terminal through a network connection.
In another aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the spatial non-cooperative target pose measurement method provided in any of the embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by way of a computer program stored on a non-volatile computer readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
Matters not described in detail in the present invention belong to technology known to those skilled in the art.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this description.
The above examples merely represent a few embodiments of the present application, which are described in relative detail, but they are not thereby to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the spirit of the present application, and these fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (13)
1. A method for measuring the pose of a spatial non-cooperative target, characterized by comprising the following steps:
acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
acquiring sample images containing space non-cooperative targets, projecting coordinates of all semantic key points under a space non-cooperative target body coordinate system to an image coordinate system to obtain coordinates of all semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
constructing a deep neural network for predicting a semantic key point set;
training the deep neural network by using the training data set until training converges;
predicting a semantic key point set of a space non-cooperative target in an input image by using the trained deep neural network to obtain a corresponding relation between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates of each semantic key point in the space non-cooperative target body coordinate system;
and solving the position and the gesture of the spatial non-cooperative target in the input image under a camera coordinate system based on the corresponding relation.
2. The method for pose measurement of a spatially non-cooperative target according to claim 1, wherein the number of semantic keypoints on the spatially non-cooperative target is 4 or more.
3. The method for measuring the pose of a spatially non-cooperative target according to claim 1 or 2, wherein the semantic key point truth value set of the spatially non-cooperative target on each sample image is composed of all semantic key point elements, the $i$-th semantic key point element being composed of an index classification item $c_{i}$ describing the correspondence between the $i$-th semantic key point element and the semantic key points under the spatial non-cooperative target body coordinate system, an X-axis image coordinate classification item $p^{x}_{i}$ describing the coordinate of the $i$-th semantic key point element on the image X axis, and a Y-axis image coordinate classification item $p^{y}_{i}$ describing the coordinate of the $i$-th semantic key point element on the image Y axis.
4. The method for measuring the pose of the spatial non-cooperative target according to claim 3, wherein the depth neural network comprises a feature extraction network, a feature encoder, a feature decoder and three prediction heads, wherein the three prediction heads are an index classification item prediction head, an X-axis image coordinate classification item prediction head and a Y-axis image coordinate classification item prediction head respectively;
the feature extraction network is used for extracting a feature map from an input image; the feature encoder is used for encoding the extracted features to obtain a feature map after global information encoding; the feature decoder is used for inquiring the feature map coded by the feature encoder by taking the key point inquiry vector as input to obtain the decoded feature corresponding to each prediction element; the index classification item predicting head, the X-axis image coordinate classification item predicting head and the Y-axis image coordinate classification item predicting head respectively predict the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element by receiving the decoded features output by the feature decoder.
5. A method of spatial non-cooperative target pose measurement according to claim 3, wherein the deep neural network is trained with the training data set using a stochastic gradient descent method.
6. The method of spatial non-cooperative target pose measurement according to claim 5, wherein training the deep neural network comprises:
the semantic key point true value set of the space non-cooperative target in the input image isWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point truth value set is equal to the number of semantic key points on the space non-cooperative target +.>;
The semantic key point predicted value set predicted by the deep neural network based on the input image is as followsWherein->The semantic key element is marked as +.>The number of semantic key point elements in the semantic key point predicted value set is +.>And->;
True value set to semantic keypointsSupplementing zero element to obtain a set->Make->The number of the semantic key point elements is +.>And semantic key point predictor set +.>The number of semantic key point elements which can only be the same;
defining an index function $\sigma$ (a permutation of $\{1,\dots,M\}$), and obtaining the optimal index function $\hat{\sigma}$ by minimizing the bipartite matching loss function:

$$\hat{\sigma}=\arg\min_{\sigma}\sum_{j=1}^{M}\Big[\mathcal{L}_{ce}\big(c_{\sigma(j)},\hat{c}_{j}\big)+\lambda\Big(\mathcal{L}_{d}\big(p^{x}_{\sigma(j)},\hat{p}^{x}_{j}\big)+\mathcal{L}_{d}\big(p^{y}_{\sigma(j)},\hat{p}^{y}_{j}\big)\Big)\Big]$$

wherein $\hat{c}_{j}$, $\hat{p}^{x}_{j}$, $\hat{p}^{y}_{j}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the $j$-th semantic key point prediction element in the semantic key point prediction set; $c_{\sigma(j)}$, $p^{x}_{\sigma(j)}$, $p^{y}_{\sigma(j)}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the matched semantic key point element in the padded semantic key point truth value set, and $\sigma(j)$ indicates the index, in the semantic key point truth value set, matched with the $j$-th semantic key point prediction element; $\lambda$ is a balance parameter; $\mathcal{L}_{ce}$ is the cross entropy loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are Gaussian distributions, $\mathcal{L}_{d}$ is the KL divergence loss; when the X-axis image coordinate classification item and the Y-axis image coordinate classification item are one-hot encoded, $\mathcal{L}_{d}$ is the cross entropy loss.
7. The method of claim 6, further comprising constructing a loss function of the deep neural network and supervising the training of the deep neural network, wherein the loss function of the deep neural network is as follows:

$$\mathcal{L}=\sum_{j=1}^{M}\Big[\mathcal{L}_{ce}\big(c_{\hat{\sigma}(j)},\hat{c}_{j}\big)+\lambda\Big(\mathcal{L}_{coord}\big(p^{x}_{\hat{\sigma}(j)},\hat{p}^{x}_{j}\big)+\mathcal{L}_{coord}\big(p^{y}_{\hat{\sigma}(j)},\hat{p}^{y}_{j}\big)\Big)\Big]$$

wherein $\mathcal{L}_{coord}$ is a coordinate term loss function; $c_{\hat{\sigma}(j)}$, $p^{x}_{\hat{\sigma}(j)}$, $p^{y}_{\hat{\sigma}(j)}$ are respectively the index classification item, the X-axis image coordinate classification item and the Y-axis image coordinate classification item of the semantic key point element, in the semantic key point truth value set, matched with the $j$-th semantic key point prediction element; $\hat{\sigma}(j)$ represents the best index, in the semantic key point truth value set, of the element matched with the $j$-th semantic key point prediction element.
8. The method of claim 7, wherein the coordinate term loss function $\mathcal{L}_{coord}$ is:

$$\mathcal{L}_{coord}\big(p,\hat{p}\big)=\frac{(\mu-\hat{\mu})^{2}}{2\hat{s}^{2}}+\frac{1}{2}\ln\hat{s}^{2}$$

wherein $\hat{\mu}$ and $\mu$ are respectively the mean of the predicted value and the true value of the coordinate classification term:

$$\hat{\mu}=\sum_{k=1}^{D}k\,\hat{p}(k),\qquad \mu=\sum_{k=1}^{D}k\,p(k)$$

wherein $D$ is the prediction dimension of the coordinate classification term, and $\hat{s}^{2}$ is the prediction variance of the coordinate classification term:

$$\hat{s}^{2}=\sum_{k=1}^{D}\big(k-\hat{\mu}\big)^{2}\,\hat{p}(k)$$
9. The method for measuring the pose of a spatial non-cooperative target according to claim 7 or 8, wherein, when training the deep neural network, the condition for training convergence is any one of the following:
setting the maximum iteration number, and ending training when the iteration number exceeds the maximum iteration number;
or, setting a loss function threshold, and ending training when the loss function value obtained by current calculation is smaller than the loss function threshold;
or, when the currently calculated loss function value is no longer reduced, the training is ended.
10. The method for measuring the pose of a spatially non-cooperative target according to claim 4 or 5 or 6 or 7 or 8, wherein the position and the pose of the spatially non-cooperative target in a camera coordinate system are obtained by:
the first semantic key point set of the input image obtained through predictionX-axis image coordinate classification item and Y-axis image coordinate classification item of each semantic key point element +.>、/>Get->Coordinate position of each semantic key element on X-axis and Y-axis of image coordinate system +.>、/>;
wherein and />Are respectively->X-axis image coordinate classification item and Y-axis image coordinate classification item of each semantic key point element are at the +.>Probability on individual positions, +.> and />Respectively the width and height of the input image, +.>The coefficient is the ratio of the resolution of the coordinate classification item to the image scale;
acquiring the first semantic key point set of the predicted input imageThe uncertainty of the positions of the semantic key point elements on the X axis and the Y axis of the image coordinate system is as follows:
the first semantic key point set of the input image obtained through predictionIndex classification item of each semantic key point element to obtain the +.>Coordinates +.>:
constructing a weighted N-point perspective model, and obtaining the position and the posture of a non-cooperative target under a camera coordinate system by solving the weighted N-point perspective model, wherein the weighted N-point perspective model is as follows:
wherein and />The optimal estimated values of the rotation matrix and the translation vector of the space non-cooperative target under the camera coordinate system are respectively called the pose of the space non-cooperative target; />Is an indication function if and only if the condition in brackets is 1, otherwise equal to 0; />Is a robust estimation function; />Is a weighted re-projection residual, expressed as:
wherein ,is an internal parameter matrix of the camera, +.>Pose as spatially non-cooperative target, wherein +.>Rotation matrix under camera coordinate system for spatially non-cooperative targets, +.>Translation vector in camera coordinate system for spatially non-cooperative target, +.>Is->Coordinates under a space non-cooperative target object coordinate system corresponding to each semantic key point element, ++>Is->Photographic depth of individual semantic key elements.
11. A spatial non-cooperative target pose measurement apparatus, characterized by comprising:
the first module is used for acquiring coordinates of each semantic key point on the space non-cooperative target under a space non-cooperative target body coordinate system;
the second module is used for acquiring sample images containing the space non-cooperative targets, projecting the coordinates of the semantic key points under the space non-cooperative target body coordinate system to the image coordinate system to obtain the coordinates of the semantic key points in each sample image, obtaining a semantic key point true value set of the space non-cooperative targets on each sample image, and constructing a training data set;
the third module is used for constructing a deep neural network for predicting the semantic key point set;
a fourth module for training the deep neural network using the training data set until training converges;
a fifth module, configured to predict a semantic key point set of a spatial non-cooperative target in the input image by using the trained deep neural network, to obtain a correspondence between coordinates of each semantic key point in the predicted semantic key point set in an image coordinate system and coordinates thereof in a spatial non-cooperative target object coordinate system;
and a sixth module, configured to solve, based on the correspondence, a position and an attitude of the spatial non-cooperative target in the input image under a camera coordinate system.
12. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the spatial non-cooperative target pose measurement method according to claim 1.
13. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the spatial non-cooperative target pose measurement method according to claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310638984.5A CN116363217B (en) | 2023-06-01 | 2023-06-01 | Method, device, computer equipment and medium for measuring pose of space non-cooperative target |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310638984.5A CN116363217B (en) | 2023-06-01 | 2023-06-01 | Method, device, computer equipment and medium for measuring pose of space non-cooperative target |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116363217A true CN116363217A (en) | 2023-06-30 |
CN116363217B CN116363217B (en) | 2023-08-11 |
Family
ID=86939989
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310638984.5A Active CN116363217B (en) | 2023-06-01 | 2023-06-01 | Method, device, computer equipment and medium for measuring pose of space non-cooperative target |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116363217B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287873A (en) * | 2019-06-25 | 2019-09-27 | 清华大学深圳研究生院 | Noncooperative target pose measuring method, system and terminal device based on deep neural network |
CN111862201A (en) * | 2020-07-17 | 2020-10-30 | 北京航空航天大学 | Deep learning-based spatial non-cooperative target relative pose estimation method |
CN111862126A (en) * | 2020-07-09 | 2020-10-30 | 北京航空航天大学 | Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm |
CN112053373A (en) * | 2020-08-11 | 2020-12-08 | 北京控制工程研究所 | Spatial non-cooperative target posture evaluation method with image scale transformation |
CN112651437A (en) * | 2020-12-24 | 2021-04-13 | 北京理工大学 | Spatial non-cooperative target pose estimation method based on deep learning |
EP3905194A1 (en) * | 2020-04-30 | 2021-11-03 | Siemens Aktiengesellschaft | Pose estimation method and apparatus |
CN115018876A (en) * | 2022-06-08 | 2022-09-06 | 哈尔滨理工大学 | Non-cooperative target grabbing control system based on ROS |
CN115035260A (en) * | 2022-05-27 | 2022-09-09 | 哈尔滨工程大学 | Indoor mobile robot three-dimensional semantic map construction method |
CN115661246A (en) * | 2022-10-25 | 2023-01-31 | 中山大学 | Attitude estimation method based on self-supervision learning |
CN116109706A (en) * | 2023-04-13 | 2023-05-12 | 中国人民解放军国防科技大学 | Space target inversion method, device and equipment based on priori geometric constraint |
-
2023
- 2023-06-01 CN CN202310638984.5A patent/CN116363217B/en active Active
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287873A (en) * | 2019-06-25 | 2019-09-27 | 清华大学深圳研究生院 | Noncooperative target pose measuring method, system and terminal device based on deep neural network |
EP3905194A1 (en) * | 2020-04-30 | 2021-11-03 | Siemens Aktiengesellschaft | Pose estimation method and apparatus |
CN111862126A (en) * | 2020-07-09 | 2020-10-30 | 北京航空航天大学 | Non-cooperative target relative pose estimation method combining deep learning and geometric algorithm |
CN111862201A (en) * | 2020-07-17 | 2020-10-30 | 北京航空航天大学 | Deep learning-based spatial non-cooperative target relative pose estimation method |
CN112053373A (en) * | 2020-08-11 | 2020-12-08 | 北京控制工程研究所 | Spatial non-cooperative target posture evaluation method with image scale transformation |
CN112651437A (en) * | 2020-12-24 | 2021-04-13 | 北京理工大学 | Spatial non-cooperative target pose estimation method based on deep learning |
CN115035260A (en) * | 2022-05-27 | 2022-09-09 | 哈尔滨工程大学 | Indoor mobile robot three-dimensional semantic map construction method |
CN115018876A (en) * | 2022-06-08 | 2022-09-06 | 哈尔滨理工大学 | Non-cooperative target grabbing control system based on ROS |
CN115661246A (en) * | 2022-10-25 | 2023-01-31 | 中山大学 | Attitude estimation method based on self-supervision learning |
CN116109706A (en) * | 2023-04-13 | 2023-05-12 | 中国人民解放军国防科技大学 | Space target inversion method, device and equipment based on priori geometric constraint |
Non-Patent Citations (8)
Title |
---|
SHARMA S ET AL: "Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks", 《THE 21ST IEEE AEROSPACE CONFERENCE》 * |
张世杰; 谭校纳; 曹喜滨: "Robust vision-based determination of relative pose for non-cooperative spacecraft", Journal of Harbin Institute of Technology, no. 07 *
徐云飞; 张笃周; 王立; 华宝成: "Design of a lightweight feature-fusion network for local feature recognition of non-cooperative targets", Infrared and Laser Engineering, no. 07 *
徐云飞; 张笃周; 王立; 华宝成; 石永强; 贺盈波: "A convolutional neural network method for non-cooperative target attitude measurement", Journal of Astronautics, no. 05 *
曾占魁; 魏祥泉; 黄建明; 陈凤; 曹喜滨: "Research on ultra-close-range pose measurement technology for spatial non-cooperative targets", Aerospace Shanghai, no. 06 *
林婷婷; 江晟; 李荣华; 葛研军; 周颖: "Visual pose measurement and ground verification method for non-cooperative targets", Journal of Dalian Jiaotong University, no. 03 *
王梓 et al: "Transformer-based monocular pose estimation method for satellites", Acta Aeronautica et Astronautica Sinica *
马俊杰; 黄大庆; 仇男豪; 龚永富: "Relative pose estimation of UAVs based on cooperative target recognition", Electronic Design Engineering, no. 10 *
Also Published As
Publication number | Publication date |
---|---|
CN116363217B (en) | 2023-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11629965B2 (en) | Methods, apparatus, and systems for localization and mapping | |
Huang et al. | A comprehensive survey on point cloud registration | |
CN109974693B (en) | Unmanned aerial vehicle positioning method and device, computer equipment and storage medium | |
KR102126724B1 (en) | Method and apparatus for restoring point cloud data | |
CN107230225B (en) | Method and apparatus for three-dimensional reconstruction | |
US10636168B2 (en) | Image processing apparatus, method, and program | |
CN109099915B (en) | Mobile robot positioning method, mobile robot positioning device, computer equipment and storage medium | |
US8437501B1 (en) | Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases | |
US8792726B2 (en) | Geometric feature extracting device, geometric feature extracting method, storage medium, three-dimensional measurement apparatus, and object recognition apparatus | |
KR102095842B1 (en) | Apparatus for Building Grid Map and Method there of | |
CN112219087A (en) | Pose prediction method, map construction method, movable platform and storage medium | |
WO2021052283A1 (en) | Method for processing three-dimensional point cloud data and computing device | |
EP2960859B1 (en) | Constructing a 3d structure | |
CN112528974B (en) | Distance measuring method and device, electronic equipment and readable storage medium | |
US11790661B2 (en) | Image prediction system | |
CN111179433A (en) | Three-dimensional modeling method and device for target object, electronic device and storage medium | |
KR20220076398A (en) | Object recognition processing apparatus and method for ar device | |
CN111292377B (en) | Target detection method, device, computer equipment and storage medium | |
JP2010072700A (en) | Image processor, image processing method, and image pickup system | |
CN113793251A (en) | Pose determination method and device, electronic equipment and readable storage medium | |
CN116363217B (en) | Method, device, computer equipment and medium for measuring pose of space non-cooperative target | |
CN111582013A (en) | Ship retrieval method and device based on gray level co-occurrence matrix characteristics | |
CN114926536B (en) | Semantic-based positioning and mapping method and system and intelligent robot | |
CN116206302A (en) | Three-dimensional object detection method, three-dimensional object detection device, computer equipment and storage medium | |
CN111811501B (en) | Trunk feature-based unmanned aerial vehicle positioning method, unmanned aerial vehicle and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||