CN110287873A - Noncooperative target pose measuring method, system and terminal device based on deep neural network - Google Patents
- Publication number
- CN110287873A (application CN201910555500.4A)
- Authority
- CN
- China
- Prior art keywords
- point
- cloud
- feature
- point cloud
- carried out
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06V40/10 — Recognition of human or animal bodies in image or video data, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
Abstract
The present invention provides a non-cooperative target pose measurement method, system and terminal device based on a deep neural network. The method comprises: down-sampling the point cloud data captured from different angles of the non-cooperative target, together with the model point cloud, to obtain reduced point clouds; extracting, with a trained PointNet network model, a feature matrix containing the feature vector of each point and a global feature vector; screening the feature points of the down-sampled point clouds against a preset feature-point detection threshold, and performing feature point matching to obtain feature point sets; registering the feature point sets to obtain a pose transition matrix; applying the pose transition matrix to the down-sampled point cloud to obtain a new point cloud, and registering the new point cloud against the down-sampled model point cloud to obtain a refined pose transition matrix. On the basis of guaranteeing relatively high precision, the method meets the real-time requirement of space non-cooperative target pose measurement.
Description
Technical field
The present invention relates to short distance noncooperative target pose measurement technical fields, more particularly to one kind to be based on depth nerve net
Short distance noncooperative target pose measuring method, system and the terminal device of network.
Background technique
Deep learning trains complex deep neural networks on large volumes of training samples and therefore has very strong feature-extraction capability. In many fields, especially computer vision, it has surpassed traditional computer vision algorithms, for example in the object classification, recognition and detection tasks of the ImageNet visual recognition challenge. One reason deep learning succeeds on two-dimensional images is that an image is a regularly arranged matrix, so convolution kernels with weight sharing can be used, greatly reducing the number of network parameters. Three-dimensional point clouds, by contrast, have properties that prevent mature convolutional neural network models from being transferred directly. First, point cloud data is high-dimensional, unstructured, large in scale, and unordered. Second, point clouds are unevenly distributed because of changes in ambient lighting, and structurally incomplete because of occlusion or limited scanning angles, which further increases the difficulty of processing. Finally, although three-dimensional sensors are developing rapidly, measurement error and environmental noise remain, and the sheer volume of point cloud data poses a great challenge to processing efficiency.
Methods that generalize neural networks from two-dimensional images to three-dimensional point clouds must first standardize the point ordering. One approach applies three-dimensional convolution to a voxelized point cloud, as in VoxNet and Voxception-ResNet. Another line of work first converts the unordered three-dimensional point cloud into two-dimensional images so that mature image networks can be used: Multi-view CNN, for example, renders the point cloud, or projects it onto image planes, and then processes the resulting pictures with mature 2D convolutional networks. Both approaches have limitations: voxelization can only handle small point clouds at low resolution, while multi-view methods lose a certain amount of spatial information and are not robust to partially missing point cloud data. Starting with the PointNet network proposed by Stanford University in 2017, deep neural networks that process unordered point cloud data directly came onto the stage; PointNet uses max pooling as a symmetric function and can effectively extract point cloud features.
Most existing point cloud features are hand-crafted for particular tasks. These features essentially encode specific geometric or statistical attributes and are designed to be robust or invariant to particular transformations. Finding an optimal combination of such features for an unknown task, however, is no easy matter. Although hand-crafted features such as SHOT and FPFH are relatively mature, feature-based methods cannot exhaustively cover the vector space of point cloud data; they can only find a suitable feature description within a limited feature space. Such methods therefore inevitably hit a bottleneck in convergence and precision.
Summary of the invention
To solve the above problems of point cloud data processing in the prior art, the present invention provides a close-range non-cooperative target pose measurement method, system and terminal device based on a deep neural network.
The technical solution adopted by the present invention is as follows.
The present invention provides a non-cooperative target pose measurement method based on a deep neural network, comprising the following steps. S1: down-sample the point cloud data P captured from different angles of the non-cooperative target and the model point cloud Q to obtain point clouds P' and Q'. S2: perform feature extraction on P' and Q' with a trained PointNet network model to obtain, for each cloud, a feature matrix A of size n×1024 containing the feature vector of each point, and a global feature vector B of size 1×1024. S3: screen the feature points of P' and Q' against a preset feature-point detection threshold, and perform feature point matching using the global feature vectors B and the per-point feature matrices A to obtain feature point sets P'' and Q''. S4: register the feature point sets P'' and Q'' to obtain a pose transition matrix T1 = [R1, t1]. S5: apply T1 to the point cloud P' to obtain a transformed point cloud, and register the transformed point cloud against Q' to obtain a pose transition matrix T2 = [R2, t2].
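The pose transition matrices T1 and T2 are rigid transforms [R, t]; applying one to a point cloud, as step S5 does, can be sketched in a few lines of numpy (a minimal illustration, not the patent's implementation; all names are ours):

```python
import numpy as np

def apply_pose(points, R, t):
    """Apply the rigid transform [R, t]: each row p becomes R @ p + t."""
    return points @ R.T + t

# Toy example: a 90-degree rotation about z plus a translation along x.
R1 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t1 = np.array([1.0, 0.0, 0.0])
P_prime = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
P_new = apply_pose(P_prime, R1, t1)  # the transformed cloud fed to the S5 registration
```

In this convention the points are rows, so the rotation is applied as `points @ R.T`; column-vector conventions transpose the expression.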
Preferably, in step S1 the down-sampling is carried out according to curvature, point cloud density, or normals.
Preferably, the number of points retained by the down-sampling in step S1 is 1024.
Preferably, in step S3 the feature point matching is carried out by the TrICP algorithm.
Preferably, in step S4 the point cloud registration is carried out by the TrICP algorithm.
The present invention also provides a non-cooperative target pose measurement system based on a deep neural network, comprising: a first unit, which down-samples the point cloud data P captured from different angles of the non-cooperative target and the model point cloud Q to obtain point clouds P' and Q'; a second unit, which performs feature extraction on P' and Q' with a trained PointNet network model to obtain a feature matrix A of size n×1024 containing the feature vector of each point and a global feature vector B of size 1×1024; a third unit, which screens the feature points of P' and Q' against a preset feature-point detection threshold and performs feature point matching using the global feature vectors B and the per-point feature matrices A to obtain feature point sets P'' and Q''; a fourth unit, which registers the feature point sets P'' and Q'' to obtain a pose transition matrix T1 = [R1, t1]; and a fifth unit, which applies T1 to the point cloud P' to obtain a transformed point cloud and registers it against Q' to obtain a pose transition matrix T2 = [R2, t2].
Preferably, the down-sampling is carried out according to curvature, point cloud density, or normals.
Preferably, the number of points retained by the down-sampling is 1024, and the feature point matching is carried out by the TrICP algorithm.
The present invention further provides a non-cooperative target pose measurement terminal device based on a deep neural network, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any of the methods described above.
The present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the methods described above.
The beneficial effects of the invention are as follows: a close-range non-cooperative target pose measurement method, system and terminal device based on a deep neural network are provided. After analyzing the principles of deep neural networks, a data set is produced for space non-cooperative target pose measurement; redundancy is removed while preserving the integrity of the point cloud as far as possible; and a deep neural network model is trained that effectively extracts per-point feature vectors and a global feature vector, improving both the speed and the matching precision of the algorithm. On the basis of guaranteeing relatively high precision, the method meets the real-time requirement of space non-cooperative target pose measurement.
Detailed description of the invention
Fig. 1 is a schematic diagram of the non-cooperative target pose measurement method based on a deep neural network in an embodiment of the present invention.
Fig. 2 is a schematic diagram of the non-cooperative target pose measurement system based on a deep neural network in an embodiment of the present invention.
Fig. 3 is a schematic diagram of the raw data before pose estimation of the non-cooperative target in an embodiment of the present invention.
Fig. 4 is a schematic diagram of the feature point detection result for the non-cooperative target in an embodiment of the present invention.
Fig. 5 is a schematic diagram of the coarse and fine matching results for the non-cooperative target in an embodiment of the present invention.
Specific embodiment
In order that the technical problems to be solved by the embodiments of the present invention, the technical solutions, and the beneficial effects may be more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are only used to explain the present invention and are not intended to limit it.
It should be noted that when an element is referred to as being "fixed on" or "disposed on" another element, it can be directly on the other element or indirectly on it. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element. The connection may serve a fixing function or a circuit-communication function.
It is to be understood that terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" indicate orientations or positional relationships based on the drawings; they are used merely for convenience and simplicity of description of the embodiments and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting the invention.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features. A feature defined by "first" or "second" may thus explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "plurality" means two or more, unless otherwise specifically defined.
Embodiment 1
The PointNet network addresses point cloud classification and segmentation tasks in the field of stereo vision and has achieved good results on the mainstream benchmark data sets. The starting point of its design is the unordered nature of point cloud data. In the initial stage, every point is processed identically and independently; in the basic setup, each point is represented by its three coordinates. The key of the method is the use of max pooling as a symmetric function, which allows the extracted feature vector to ignore the ordering of the point cloud.
Because point cloud data is unordered, a network has difficulty learning a consistent mapping from input to output. To make the neural network model insensitive to the input order, there are three candidate strategies. The first is to sort the input data into a canonical order; although this sounds simple, in a high-dimensional space there is in fact no ordering that is stable under point perturbations. The second is to treat the input as a sequence for training an RNN and augment the training data with various permutations; however, while an RNN is reasonably robust to the input order of short sequences (dozens of elements), it is difficult to scale to thousands of inputs or more and is clearly unsuited to massive point clouds. The third is to aggregate the information of all points with a simple symmetric function: given n input vectors, a symmetric function outputs a new vector that is unaffected by the input order, satisfying

f({x_1, x_2, ..., x_n}) = Γ(MAX(h(x_1), ..., h(x_n)))  (1)

where, for an unordered point set {x_1, x_2, ..., x_n} with x_i ∈ R³, the set function defined by (1) maps a point set to a single feature vector. PointNet selects max pooling as the symmetric function, chooses simple multi-layer perceptrons (MLPs) for h and Γ, and processes the points one by one: each point is lifted to a high-dimensional vector, and the max pooling symmetric function finally yields one global feature vector. This processing strategy is approximately equivalent to finding a series of functions f to process the point cloud, each capturing one feature of the data. Since the input point cloud may undergo rigid or affine transformations, and each point transforms independently, the network also adds a spatial transformer network, T-Net, that depends on the input data. T-Net predicts a transformation matrix of the feature space; it is a mini-PointNet that takes the raw point cloud as input and regresses a 3×3 or a 64×64 matrix. Its effect is to normalize the data before the input points are processed, providing invariance to spatial transformations and further improving the results.
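Equation (1) can be illustrated directly: with any per-point lifting function h and channel-wise MAX as the symmetric aggregation (Γ taken as the identity here), the resulting feature is the same for every ordering of the points. A small numpy sketch under these assumptions (the toy h is ours):

```python
import numpy as np

# A fixed linear lift from R^3 to R^4: a stand-in for PointNet's shared MLP h.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0],
              [1.0, 1.0, 1.0]])

def h(x):
    """Per-point lift to 4 dimensions."""
    return W @ x

def f(points):
    """Equation (1) with Gamma = identity: channel-wise MAX over h(x_i)."""
    return np.max(np.stack([h(p) for p in points]), axis=0)

pts = [np.array([1.0, 0.0, 0.0]),
       np.array([0.0, 1.0, 0.0]),
       np.array([0.0, 0.0, 1.0])]
feat_fwd = f(pts)
feat_rev = f(pts[::-1])  # same set, opposite order -> identical feature
```

Because MAX is taken per channel over the whole set, permuting `pts` cannot change `f(pts)`, which is exactly the order-invariance the equation claims.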
Point cloud classification network: the initial n×3 point cloud first passes through a 3×3 spatial transformer network T-Net for normalization; a shared MLP (64, 64) then lifts each point of the cloud to 64 dimensions (n×64); a 64×64 T-Net normalizes the features; a shared MLP (64, 128, 1024) lifts each point to 1024 dimensions (n×1024); and the max pooling symmetric function finally produces a single global feature vector. Feeding this global feature vector through a two-layer MLP (512, 256) fully connected network finally yields the score of each of the k classes, from which the classification result is judged. This classification network is thus based on the final global feature vector, which is obtained by applying the max pooling symmetric function to the n×1024 matrix composed of the 1024-dimensional vector of each point. All layers except the last use the ReLU activation function and apply batch normalization. On the public ModelNet40 data set, the final classification accuracy of the PointNet network reached 89.2%, significantly higher than the 85.9% of VoxNet.
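The dimension bookkeeping of the classification network described above (n×3 → n×64 → n×1024 → max pool → 1024 → k scores) can be traced with random weights. This is only a shape-level sketch under our own simplifications: the T-Nets and batch normalization are omitted and the weights are untrained.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 32, 40  # number of points, number of classes (ModelNet40 has 40)

def shared_mlp(x, dims):
    """Apply the same random linear + ReLU stack to every row (shared weights)."""
    for d in dims:
        W = rng.standard_normal((x.shape[1], d)) * 0.1
        x = np.maximum(x @ W, 0.0)  # ReLU
    return x

cloud = rng.standard_normal((n, 3))           # n x 3 input (T-Nets omitted)
local = shared_mlp(cloud, [64, 64])           # n x 64 after shared MLP(64, 64)
high = shared_mlp(local, [64, 128, 1024])     # n x 1024 after shared MLP(64, 128, 1024)
global_feat = high.max(axis=0)                # max pooling -> 1024-d global feature
hidden = shared_mlp(global_feat[None, :], [512, 256])   # MLP(512, 256) head
scores = hidden @ rng.standard_normal((256, k))         # k class scores
```

The same `shared_mlp` is applied point by point, which is what makes the per-point stages order-independent before the pooling step.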
Space non-cooperative target pose measurement at very close range must focus on accuracy and real-time performance, so the present invention extracts point cloud features with a deep neural network. The invention first produces a data set for space non-cooperative target pose measurement and trains a deep neural network model, then applies the trained model to close-range non-cooperative target pose measurement, aiming to provide real-time pose data of the non-cooperative target for on-orbit space missions.
As shown in Fig. 1, the present invention provides a non-cooperative target pose measurement method based on a deep neural network, comprising the following steps.
S1: down-sample the point cloud data P captured from different angles of the non-cooperative target and the model point cloud Q to obtain point clouds P' and Q'.
In a massive point cloud, the geometric characteristics describing any point x generally fall into two classes: eigenvalues and the corresponding eigenvectors. Curvature is a very important basis for feature identification: the curvature value of a point reflects the concavity or convexity of the point cloud surface at that point, and can be used to find matching points effectively between two scattered point clouds. The algorithm proposed by the present invention is based on the curvature information of the three-dimensional point cloud data, so down-sampling according to curvature is selected. Resampling the two scattered point clouds P and Q in this way preserves the accuracy and integrity of the curvature characteristics after sampling.
It will be understood that, besides curvature, the down-sampling may also be carried out according to other geometric characteristics such as point cloud density or normals.
In an embodiment of the present invention, the PointNet network model used was trained on point clouds of 1024 points, so the number of points after down-sampling is 1024.
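A plausible curvature-based down-sampling can be sketched as follows. The patent does not give the exact formula, so this uses a common surface-variation proxy of our own choosing: the smallest eigenvalue of the covariance of each point's k nearest neighbours, normalized by the eigenvalue sum (zero on a perfect plane), keeping the points with the largest variation.

```python
import numpy as np

def curvature_scores(cloud, k=8):
    """Per-point surface-variation proxy: lambda_min / (l1 + l2 + l3) of the
    covariance of the k nearest neighbours (0 for locally planar points)."""
    d2 = ((cloud[:, None, :] - cloud[None, :, :]) ** 2).sum(-1)
    scores = np.empty(len(cloud))
    for i, row in enumerate(d2):
        nbrs = cloud[np.argsort(row)[:k]]       # k nearest neighbours (incl. self)
        lam = np.linalg.eigvalsh(np.cov(nbrs.T))  # ascending eigenvalues
        scores[i] = lam[0] / lam.sum()
    return scores

def downsample_by_curvature(cloud, m):
    """Keep the m points with the largest surface variation."""
    idx = np.argsort(-curvature_scores(cloud))[:m]
    return cloud[idx]

# A flat 5 x 5 grid with its centre point raised off the plane.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
cloud = np.stack([xs.ravel(), ys.ravel(), np.zeros(25)], axis=1)
cloud[12, 2] = 1.0  # the bump: index 12 is the grid centre
kept = downsample_by_curvature(cloud, 5)
```

The bump scores high while the flat corners score near zero, so the bump survives the down-sampling; for clouds of realistic size the brute-force distance matrix would be replaced by a k-d tree.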
S2: perform feature extraction on the point clouds P' and Q' with the trained PointNet network model to obtain a feature matrix A of size n×1024 containing the feature vector of each point, and a global feature vector B of size 1×1024.
The PointNet network model has strong feature-extraction capability: each point of P' and Q', starting from its three coordinates, passes through two T-Net transformation layers and five MLP feature-extraction layers, which finally lift its feature to 1024 dimensions. The features finally extracted by the PointNet network comprise the feature matrix A (n×1024) of per-point feature vectors and a global feature vector B (1×1024) describing the point cloud as a whole.
The global feature vector effectively exposes the most salient points of the cloud, so it can serve as a feature point detection mechanism that represents the whole point cloud with a smaller number of feature points, greatly reducing the computational complexity of the algorithm. At the same time, the matrix of per-point feature vectors can be used for feature point matching. The most straightforward approach to feature point matching is based on the global feature vector, which makes the registration very efficient.
S3: screen the feature points of P' and Q' against the preset feature-point detection threshold, and perform feature point matching using the global feature vectors B and the per-point feature matrices A to obtain feature point sets P'' and Q''.
The main factors affecting the efficiency of point cloud pose estimation are the number of points remaining after feature point detection, the computation of the feature descriptors, the pairing of feature points, and the point cloud matching itself. Feature point detection is a key step: accurately detecting the smallest, most representative set of feature points and rejecting redundant points is the basis for improving algorithm efficiency. Based on the global feature vector B describing the whole point cloud obtained in step S2, a feature point detection threshold τ is set and feature point screening is performed on P' and Q'; feature point matching is then performed using the global feature vectors and the per-point feature matrices A, yielding the feature point sets P'' and Q''.
S4: register the feature point sets P'' and Q'' to obtain a pose transition matrix T1 = [R1, t1].
Registering two scattered point clouds is essentially finding the rigid transformation, comprising a rotation matrix R and a translation vector t, that brings two massive scattered point clouds expressed in different coordinate systems into the same coordinate system so that they overlap accurately. TrICP (Trimmed ICP) is a robust improvement of the traditional ICP algorithm: it sorts the squared errors of the matched point pairs and optimizes only over a certain number of the smallest values, this number being determined from the initial overlap ratio of the two point clouds. Compared with the LMedS algorithm, which only uses data below the median of the sorted squared errors, TrICP has a better convergence rate and robustness.
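The trimming step at the heart of TrICP can be sketched as follows; this is a simplified single iteration under our own naming, and a full implementation would alternate it with a rigid-transform solve until convergence.

```python
import numpy as np

def trimmed_pairs(src, dst, overlap=0.6):
    """One TrICP ingredient: match each src point to its nearest dst point,
    sort the squared residuals, and keep only the best `overlap` fraction."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)                        # nearest dst index per src point
    res = d2[np.arange(len(src)), nn]             # squared residual per pair
    keep = np.argsort(res)[: int(np.ceil(overlap * len(src)))]
    return keep, nn[keep]

# Three overlapping points plus one non-overlapping outlier.
src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [10.0, 10.0, 10.0]])
dst = src[:3].copy()
keep, nn = trimmed_pairs(src, dst, overlap=0.75)
```

With a 0.75 overlap ratio the outlier's large residual places it outside the kept fraction, which is how TrICP stays robust when the two clouds only partially overlap.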
S5: apply the pose transition matrix T1 to the point cloud P' to obtain a transformed point cloud, and register the transformed point cloud against Q' to obtain a pose transition matrix T2 = [R2, t2].
The pose obtained from the coarse registration is applied to the original data, after which TrICP is used for accurate registration, yielding the precisely registered pose; these two steps together give the six-degree-of-freedom pose of the whole measurement process. By adding a distance-based weighting method to the TrICP (Trimmed ICP) algorithm, the present invention not only obtains a more accurate number of matched point pairs, but also further strengthens the effect of correct matches and weakens the influence of mismatches, simplifying the computation over massive data and improving the precision of the result.
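The rigid-transform solve underlying each registration step, here with the per-pair weights the distance-weighted refinement relies on, can be sketched with the weighted Kabsch/Umeyama closed form. This is a standard formulation stated under our own assumptions, not necessarily the patent's exact computation.

```python
import numpy as np

def weighted_rigid_transform(P, Q, w):
    """Closed-form R, t minimising sum_i w_i * ||R @ p_i + t - q_i||^2."""
    w = w / w.sum()
    cp, cq = w @ P, w @ Q                       # weighted centroids
    H = (P - cp).T @ ((Q - cq) * w[:, None])    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known pose: a 90-degree rotation about z plus a translation.
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
Q = P @ R_true.T + t_true
R_est, t_est = weighted_rigid_transform(P, Q, np.array([1.0, 2.0, 1.0, 1.0, 0.5]))
```

Down-weighting a pair shrinks its contribution to the cross-covariance, which is the mechanism by which distance weighting suppresses mismatched pairs in the fine registration.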
Compared with the prior art, the novelty of the present invention lies in the following.
a. The method of sampling according to geometric characteristics is applied to massive data; redundancy is removed while the integrity of the point cloud is preserved as far as possible.
b. In point cloud feature extraction, a trained PointNet deep neural network is used, which effectively extracts the feature vector of each point, improving algorithm speed and robustness.
c. In the search strategy for matching points, the global feature vector extracted by the trained PointNet deep neural network is used, further accelerating the algorithm.
d. A distance-based weighting method is added to the TrICP (Trimmed ICP) algorithm; the fine matching not only obtains a more accurate number of matched point pairs, but also further strengthens the effect of correct matches and weakens the influence of mismatches, substantially improving the precision and robustness of the result.
Embodiment 2
As shown in Fig. 2, the present invention also provides a non-cooperative target pose measurement system based on a deep neural network, comprising:
a first unit, which down-samples the point cloud data P captured from different angles of the non-cooperative target and the model point cloud Q to obtain point clouds P' and Q';
a second unit, which performs feature extraction on P' and Q' with the trained PointNet network model to obtain a feature matrix A of size n×1024 containing the feature vector of each point and a global feature vector B of size 1×1024;
a third unit, which screens the feature points of P' and Q' against the preset feature-point detection threshold and performs feature point matching using the global feature vectors B and the per-point feature matrices A to obtain feature point sets P'' and Q'';
a fourth unit, which registers the feature point sets P'' and Q'' to obtain a pose transition matrix T1 = [R1, t1]; and
a fifth unit, which applies T1 to the point cloud P' to obtain a transformed point cloud and registers it against Q' to obtain a pose transition matrix T2 = [R2, t2].
In an embodiment of the present invention, the first unit performs the down-sampling according to curvature, point cloud density, or normals; the number of points retained by the down-sampling is 1024; and the fourth and fifth units perform the matching by the TrICP algorithm.
The noncooperative target pose measurement system based on deep neural network of the embodiment further include: processor, storage
Device and storage in the memory and the computer program that can run on the processor, such as to noncooperative target
The point cloud data P and model point cloud Q of different angle carry out down-sampling and obtain a program of cloud P', Q'.The processor executes institute
It is realized when stating computer program in above-mentioned each noncooperative target pose measuring method embodiment based on deep neural network
Step, such as step S1-S5 shown in FIG. 1.Alternatively, the processor realizes above-mentioned each device when executing the computer program
The function of each module/unit in embodiment, such as first unit: to the point cloud data P and mould of the different angle of noncooperative target
Type point cloud Q carries out down-sampling and obtains cloud a P', Q'.
Illustratively, the computer program can be divided into one or more units, and above-mentioned five units are only
Illustratively.One or more of module/units are stored in the memory, and are executed by the processor, with
Complete the present invention.One or more of units can be the series of computation machine program instruction section that can complete specific function,
The instruction segment is for describing the computer program in the noncooperative target pose measurement system based on deep neural network
In implementation procedure.
The noncooperative target pose measurement system based on deep neural network may also include, but be not limited only to, processing
Device, memory.It will be understood by those skilled in the art that the schematic diagram is only based on the noncooperative target of deep neural network
The example of pose measurement system does not constitute the restriction to the noncooperative target pose measurement system based on deep neural network,
It may include perhaps combining certain components or different components, such as described based on deep than illustrating more or fewer components
The noncooperative target pose measurement system for spending neural network can also include input-output equipment, network access equipment, bus etc..
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the deep-neural-network-based non-cooperative target pose measurement system, and connects the various parts of the entire system using various interfaces and lines.
The memory can be used to store the computer program and/or modules. The processor implements the various functions of the deep-neural-network-based non-cooperative target pose measurement system by running or executing the computer program and/or modules stored in the memory and by invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area can store an operating system and at least one application required by a function (such as a sound playing function or an image playing function), and the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory may include a high-speed random access memory and may also include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
If the integrated units of the deep-neural-network-based non-cooperative target pose measurement system are implemented in the form of software functional units and sold or used as an independent product, they can be stored in a computer-readable storage medium. Based on this understanding, the present invention implements all or part of the processes in the methods of the above embodiments, which can also be completed by a computer program instructing relevant hardware. The computer program can be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, and the computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electric carrier signals and telecommunication signals.
Embodiment 3
This embodiment performs simulation verification of the deep-neural-network-based non-cooperative target pose measurement method. The Bunny three-dimensional point cloud data in the Stanford University Graphics Laboratory dataset is used to evaluate the performance of the above algorithm. In the experiment, the feature point detection results are first visualized, and the algorithm is then compared with the traditional geometric feature descriptor FPFH to verify its real-time performance and accuracy.
FIG. 3 is a schematic diagram of the raw data in this embodiment before pose estimation. A new point cloud is obtained by downsampling the raw data in FIG. 3.
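The downsampling of step S1 (claims 2-3 specify curvature-, density-, or normal-based sampling with 1024 sample points) can be illustrated with a minimal farthest point sampling sketch. FPS is an assumption here, chosen only as a common way to reduce a raw cloud to a fixed budget of 1024 evenly spread points; it is not the patent's claimed criterion.

```python
import numpy as np

def farthest_point_sampling(points, n_samples=1024, seed=0):
    """Iteratively pick the point farthest from the already-selected set,
    giving an evenly spread subset of the raw cloud."""
    rng = np.random.default_rng(seed)
    selected = np.empty(n_samples, dtype=np.int64)
    selected[0] = rng.integers(points.shape[0])
    # dist[i] = distance from point i to its nearest selected point so far
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    for k in range(1, n_samples):
        selected[k] = int(np.argmax(dist))
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[k]], axis=1))
    return points[selected]

# Reduce a synthetic 8192-point cloud to the 1024 points used in step S1
raw = np.random.default_rng(1).random((8192, 3))
sampled = farthest_point_sampling(raw, 1024)
print(sampled.shape)  # (1024, 3)
```

In practice the same fixed-size output is what lets the downsampled clouds P' and Q' be fed directly into the PointNet model of step S2.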
FIG. 4 is a schematic diagram of the feature point detection result of the preferred embodiment of the invention. Feature point detection is performed on the downsampled point clouds using the global feature vector obtained by the PointNet network model, and the resulting feature points roughly trace the outline of the original point cloud data. The feature points of the point clouds are thus extracted well while the number of points is greatly reduced: the two point clouds in the figure drop from 1024 points to 103 and 109 points respectively, a reduction of roughly ten times.
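The roughly tenfold reduction described above is consistent with PointNet's notion of a "critical point set": since the global feature vector B is a per-dimension max pool over the per-point feature matrix A (n×1024), the points that attain the maximum in at least one dimension are the ones that determine B, and they tend to trace the object outline. A minimal sketch of this selection follows; the `min_wins` threshold is a hypothetical stand-in for the patent's unspecified "feature point detection screening threshold".

```python
import numpy as np

def critical_points(A, min_wins=1):
    """Select the points that attain the per-dimension maximum of the
    n x 1024 per-point feature matrix A in at least `min_wins` of the
    1024 dimensions; these points fully determine the max-pooled
    global feature vector B."""
    winners = np.argmax(A, axis=0)                      # winning point per dimension
    wins = np.bincount(winners, minlength=A.shape[0])   # wins per point
    return np.flatnonzero(wins >= min_wins)

# With random features, only a fraction of the 1024 points win any dimension
A = np.random.default_rng(0).random((1024, 1024))
idx = critical_points(A)
print(idx.size)
```

Raising `min_wins` shrinks the set further, which is one plausible way the screening threshold of step S3 could trade point count against coverage.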
As shown in FIG. 5, the pose estimate of the non-cooperative target is obtained after coarse registration and fine registration. Not only is a more accurate set of matched point pairs obtained, but the contribution of correct matches is further strengthened and the influence of mismatched pairs weakened, which substantially improves the precision and robustness of the result.
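Steps S4-S5 obtain T1=[R1,t1] from the feature point sets, apply T1 to P', and refine the result with T2=[R2,t2]. The closed-form least-squares rigid transform (Kabsch/SVD) that ICP-family registration, including TrICP, solves on each iteration's correspondences can be sketched as follows; the helper name and the synthetic matched pairs are illustrative, not from the patent.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping
    matched points src -> dst, the inner step of ICP-style registration."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)        # 3x3 cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation/translation from synthetic matched pairs
rng = np.random.default_rng(0)
P = rng.random((100, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
R_est, t_est = estimate_rigid_transform(P, P @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

In the coarse-to-fine scheme of the patent, the same solve runs first on the screened feature point sets (yielding T1) and then on the T1-transformed full cloud against Q' (yielding T2).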
As the performance comparison of the two algorithms in Table 1 shows, the traditional hand-crafted FPFH descriptor cannot cope with a small point cloud containing only feature points: the loss of local information reduces the descriptive power of FPFH and lowers the time efficiency of the coarse registration. The algorithm proposed by the present invention, by contrast, is based on the PointNet neural network, which has very strong feature extraction capability, and the trained model is applied directly to the feature extraction of the point cloud, so its efficiency is much higher than that of the traditional hand-crafted feature. As can be seen from Table 1, the total elapsed time of one pose estimate is 0.216 s, so about 4 pose results can be returned per second in actual measurement, meeting the real-time requirement.
Table 1 Performance comparison of the algorithms
As technology develops rapidly, the number of points in acquired three-dimensional point clouds grows ever larger, and achieving fast, high-precision registration of massive three-dimensional data has become a difficult research problem. Traditional point cloud processing methods based on geometric feature descriptors are hand-crafted for specific tasks and therefore have significant limitations in generality. Methods based on deep neural networks do not share this limitation: as datasets grow, their feature extraction capability becomes ever stronger. In the field of point cloud data processing, methods based on deep neural networks are therefore likely to become mainstream in the future.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention shall not be regarded as limited to these descriptions. For those of ordinary skill in the art to which the present invention belongs, several equivalent substitutions or obvious modifications of identical performance or use can be made without departing from the concept of the present invention, and all of them shall be regarded as falling within the protection scope of the present invention.
Claims (10)
1. A non-cooperative target pose measurement method based on a deep neural network, characterized by comprising the following steps:
S1: downsampling the point cloud data P of the non-cooperative target at different angles and the model point cloud Q to obtain point clouds P' and Q';
S2: performing feature extraction on the point clouds P' and Q' with a trained PointNet network model to obtain a feature matrix A(n×1024) containing the feature vector of each point and a global feature vector B(1×1024);
S3: screening the feature points of the point clouds P' and Q' according to a preset feature point detection screening threshold, and performing feature point matching according to the global feature vector B(1×1024) and the feature matrix A(n×1024) of the feature vectors of each point of the point clouds P' and Q' to obtain feature point sets P" and Q";
S4: performing point cloud registration on the feature point sets P" and Q" to obtain a pose transformation matrix T1=[R1,t1];
S5: applying the pose transformation matrix T1 to the point cloud P' to obtain a transformed point cloud, and performing point cloud registration on the transformed point cloud and the point cloud Q' to obtain a pose transformation matrix T2=[R2,t2].
2. The non-cooperative target pose measurement method based on a deep neural network according to claim 1, characterized in that in step S1 the downsampling is carried out according to a curvature feature, the density of the point cloud, or a normal.
3. The non-cooperative target pose measurement method based on a deep neural network according to claim 1, characterized in that in step S1 the sampling number of the downsampling is 1024.
4. The non-cooperative target pose measurement method based on a deep neural network according to claim 1, characterized in that in step S3 the feature point matching is carried out by a TrICP algorithm.
5. The non-cooperative target pose measurement method based on a deep neural network according to claim 1, characterized in that in step S4 the point cloud registration is carried out by a TrICP algorithm.
6. A non-cooperative target pose measurement system based on a deep neural network, characterized by comprising:
a first unit for downsampling the point cloud data P of the non-cooperative target at different angles and the model point cloud Q to obtain point clouds P' and Q';
a second unit for performing feature extraction on the point clouds P' and Q' with a trained PointNet network model to obtain a feature matrix A(n×1024) containing the feature vector of each point and a global feature vector B(1×1024);
a third unit for screening the feature points of the point clouds P' and Q' according to a preset feature point detection screening threshold, and performing feature point matching according to the global feature vector B(1×1024) and the feature matrix A(n×1024) containing the feature vector of each point of the point clouds P' and Q' to obtain feature point sets P" and Q";
a fourth unit for performing point cloud registration on the feature point sets P" and Q" to obtain a pose transformation matrix T1=[R1,t1];
a fifth unit for applying the pose transformation matrix T1 to the point cloud P' to obtain a transformed point cloud, and performing point cloud registration on the transformed point cloud and the point cloud Q' to obtain a pose transformation matrix T2=[R2,t2].
7. The non-cooperative target pose measurement system based on a deep neural network according to claim 6, characterized in that the downsampling is carried out according to a curvature feature, the density of the point cloud, or a normal.
8. The non-cooperative target pose measurement system based on a deep neural network according to claim 6, characterized in that the sampling number of the downsampling is 1024, and the feature point matching is carried out by a TrICP algorithm.
9. A non-cooperative target pose measurement terminal device based on a deep neural network, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1-5 when executing the computer program.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910555500.4A CN110287873B (en) | 2019-06-25 | 2019-06-25 | Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110287873A true CN110287873A (en) | 2019-09-27 |
CN110287873B CN110287873B (en) | 2021-06-29 |
Family
ID=68005745
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910555500.4A Active CN110287873B (en) | 2019-06-25 | 2019-06-25 | Non-cooperative target pose measurement method and system based on deep neural network and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110287873B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111047631A (en) * | 2019-12-04 | 2020-04-21 | 广西大学 | Multi-view three-dimensional point cloud registration method based on single Kinect and round box |
CN111223136A (en) * | 2020-01-03 | 2020-06-02 | 三星(中国)半导体有限公司 | Depth feature extraction method and device for sparse 2D point set |
CN111402328A (en) * | 2020-03-17 | 2020-07-10 | 北京图森智途科技有限公司 | Pose calculation method and device based on laser odometer |
CN111832473A (en) * | 2020-07-10 | 2020-10-27 | 星际空间(天津)科技发展有限公司 | Point cloud feature identification processing method and device, storage medium and electronic equipment |
CN112017225A (en) * | 2020-08-04 | 2020-12-01 | 华东师范大学 | Depth image matching method based on point cloud registration |
CN112559959A (en) * | 2020-12-07 | 2021-03-26 | 中国西安卫星测控中心 | Space-based imaging non-cooperative target rotation state calculation method based on feature vector |
CN112700455A (en) * | 2020-12-28 | 2021-04-23 | 北京超星未来科技有限公司 | Laser point cloud data generation method, device, equipment and medium |
CN113034439A (en) * | 2021-03-03 | 2021-06-25 | 北京交通大学 | High-speed railway sound barrier defect detection method and device |
WO2021129145A1 (en) * | 2019-12-26 | 2021-07-01 | 歌尔股份有限公司 | Image feature point filtering method and terminal |
CN114310873A (en) * | 2021-12-17 | 2022-04-12 | 上海术航机器人有限公司 | Pose conversion model generation method, control method, system, device and medium |
CN116363217A (en) * | 2023-06-01 | 2023-06-30 | 中国人民解放军国防科技大学 | Method, device, computer equipment and medium for measuring pose of space non-cooperative target |
CN117152245A (en) * | 2023-01-31 | 2023-12-01 | 荣耀终端有限公司 | Pose calculation method and device |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104048648A (en) * | 2014-05-27 | 2014-09-17 | 清华大学深圳研究生院 | Relative pose measurement method for large size non-cooperative target |
CN105976353A (en) * | 2016-04-14 | 2016-09-28 | 南京理工大学 | Spatial non-cooperative target pose estimation method based on model and point cloud global matching |
CN106780459A (en) * | 2016-12-12 | 2017-05-31 | 华中科技大学 | A kind of three dimensional point cloud autoegistration method |
CN107449402A (en) * | 2017-07-31 | 2017-12-08 | 清华大学深圳研究生院 | A kind of measuring method of the relative pose of noncooperative target |
CN108133458A (en) * | 2018-01-17 | 2018-06-08 | 视缘(上海)智能科技有限公司 | A kind of method for automatically split-jointing based on target object spatial point cloud feature |
CN108376408A (en) * | 2018-01-30 | 2018-08-07 | 清华大学深圳研究生院 | A kind of three dimensional point cloud based on curvature feature quickly weights method for registering |
CN109102547A (en) * | 2018-07-20 | 2018-12-28 | 上海节卡机器人科技有限公司 | Robot based on object identification deep learning model grabs position and orientation estimation method |
CN109308737A (en) * | 2018-07-11 | 2019-02-05 | 重庆邮电大学 | A kind of mobile robot V-SLAM method of three stage point cloud registration methods |
CN109458994A (en) * | 2018-10-24 | 2019-03-12 | 北京控制工程研究所 | A kind of space non-cooperative target laser point cloud ICP pose matching correctness method of discrimination and system |
CN109523501A (en) * | 2018-04-28 | 2019-03-26 | 江苏理工学院 | One kind being based on dimensionality reduction and the matched battery open defect detection method of point cloud data |
CN109523552A (en) * | 2018-10-24 | 2019-03-26 | 青岛智能产业技术研究院 | Three-dimension object detection method based on cone point cloud |
CN109801337A (en) * | 2019-01-21 | 2019-05-24 | 同济大学 | A kind of 6D position and orientation estimation method of Case-based Reasoning segmentation network and iteration optimization |
CN109887028A (en) * | 2019-01-09 | 2019-06-14 | 天津大学 | A kind of unmanned vehicle assisted location method based on cloud data registration |
CN109919984A (en) * | 2019-04-15 | 2019-06-21 | 武汉惟景三维科技有限公司 | A kind of point cloud autoegistration method based on local feature description's |
Non-Patent Citations (2)
Title |
---|
CHARLES R. QI ET AL.: "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", 《ARXIV》 * |
CHARLES R. QI ET AL.: "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space", 《31ST CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS》 * |
Also Published As
Publication number | Publication date |
---|---|
CN110287873B (en) | 2021-06-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||