CN112327317B - Convolution neural network-based spatial non-cooperative target angular velocity measurement method - Google Patents


Info

Publication number
CN112327317B
CN112327317B (application CN202011176669.8A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural
angular velocity
distance data
original distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011176669.8A
Other languages
Chinese (zh)
Other versions
CN112327317A (en)
Inventor
郑子轩
马川
安效民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Shenzhen Institute of Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University and Shenzhen Institute of Northwestern Polytechnical University
Priority to CN202011176669.8A
Publication of CN112327317A
Application granted
Publication of CN112327317B
Legal status: Active


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/006 Theoretical aspects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
    • G01S 7/4802 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a convolutional-neural-network-based method for measuring the angular velocity of a space non-cooperative target, belonging to the field of aerospace. The method generates training data by simulation and trains a convolutional neural network to obtain a trained network; the original distance data measured by a laser ranging radar is preprocessed and input into the trained convolutional neural network to obtain the angular velocity of the non-cooperative target. The preprocessing proceeds as follows: rearrange all elements in the original distance data matrix; for the element in row i and column j, if 2i-1 ≤ N, move it to row (2i-1), otherwise move it to row 2(N-i+1); if 2j-1 ≤ M, move it to column (2j-1), otherwise move it to column 2(M-j+1). This greatly enhances the recognition ability of the convolutional neural network and thereby improves the accuracy of the angular velocity estimate for the non-cooperative target.

Description

Convolution neural network-based spatial non-cooperative target angular velocity measurement method
Technical Field
The invention belongs to the field of aerospace, and particularly relates to a space non-cooperative target angular velocity measurement method based on a convolutional neural network.
Background
As human space exploration and development expand, the demand for on-orbit servicing of space non-cooperative targets keeps growing. Space non-cooperative targets mainly comprise satellites that have failed or malfunctioned in orbit and various kinds of space debris, and servicing them on orbit matters for two reasons: first, performing on-orbit services such as maintenance and refueling on a failed satellite can greatly extend its service life and significantly reduce the cost of space exploration and development missions; second, removing or recovering space debris that occupies important orbits reduces the risk of collisions between satellites and debris and improves the safety of the space environment.
The key to on-orbit servicing of a tumbling space target is capturing or docking with it safely. Because air resistance in the space environment is negligible, space non-cooperative targets are typically in an uncontrolled free-tumbling state driven by their initial angular momentum. To avoid collision damage during capture or docking, the motion state of the non-cooperative target, such as its attitude and angular velocity, must be measured accurately before contact. Because most non-cooperative targets lack clearly identifiable feature points, none of the current mainstream feature-based approaches is adequate for such tasks. For targets without identifiable feature points, two kinds of measurement data are available: image data from cameras and distance data from laser ranging radar. Camera-based attitude measurement depends strongly on the illumination environment, which limits its practical application scenarios. The non-contact attitude measurement method based on laser ranging radar is therefore the first choice for on-orbit servicing of non-cooperative targets. However, traditional laser-radar-based estimation methods usually need to reconstruct a three-dimensional model of the target, so the algorithms are time-consuming and their accuracy is hard to guarantee.
In recent years, artificial intelligence techniques, typified by convolutional neural networks, have made it possible to greatly improve the accuracy and efficiency of non-contact attitude measurement. However, convolutional neural networks are normally used to process image data; when applied directly to laser ranging radar data they fail to extract the key features of the data accurately and are prone to under-fitting or over-fitting, which makes them difficult to apply in practice.
Disclosure of Invention
The invention aims to overcome the tendency of a convolutional neural network to under-fit or over-fit when processing raw laser ranging radar data, and accordingly provides a convolutional-neural-network-based method for measuring the angular velocity of a space non-cooperative target.
In order to achieve the purpose, the invention is realized by adopting the following technical scheme:
a space non-cooperative target angular velocity measurement method based on a convolutional neural network utilizes a simulation method to generate training data, trains the convolutional neural network and obtains a trained convolutional neural network;
preprocessing the original distance data measured by the laser ranging radar, and inputting the preprocessed original distance data into a trained convolutional neural network to obtain the angular velocity of a non-cooperative target;
the specific process of pretreatment is as follows:
rearranging all elements in the original distance data matrix: for the element in row i and column j, if 2i-1 ≤ N, move it to row (2i-1), otherwise move it to row 2(N-i+1); if 2j-1 ≤ M, move it to column (2j-1), otherwise move it to column 2(M-j+1);
wherein 1 ≤ i ≤ N and 1 ≤ j ≤ M, N is the total number of rows of the original distance data matrix, and M is the total number of columns.
Further, the method comprises the following steps:
s1, constructing a convolutional neural network according to the original distance data dimension of a laser range radar;
s2, generating training data through computer simulation according to the structure of the convolutional neural network;
s3, training the weight of the convolutional neural network by using training data;
s4, preprocessing the original distance data obtained by measuring the laser range radar;
s5, inputting the preprocessed original distance data into a trained neural network to obtain the angular velocity of the non-cooperative target.
Further, the original distance data in S1 comprises two matrices, which respectively store the distances to the target measured by the laser emitters along each spatial direction at two adjacent sampling instants.
Further, the convolutional neural network designed in S1 comprises several convolutional layers, pooling layers equal in number to the convolutional layers, a flattening layer and several fully connected layers; it takes the original distance data of the laser ranging radar as input and the three components of the target's angular velocity as output.
Further, the sampling windows of the convolutional and pooling layers are 2×2 or 3×3;
the number of convolutional and pooling layers is chosen so that the total dimension of the original distance data is reduced to 500-2000;
the number of fully connected layers is chosen so that the total number of undetermined weights is below 20000.
Further, the training data generated in S2 comprises an input part and an output part whose entries correspond one to one;
the dimensions of the input and output parts match the input and output dimensions of the convolutional neural network.
Further, the total number of training samples is 1-100 times the number of undetermined weights of the convolutional neural network.
Further, the shapes and angular velocities of the virtual non-cooperative targets used in the simulation cover the full range of target shapes and angular velocities expected in the actual task.
Compared with the prior art, the invention has the following beneficial effects:
according to the space non-cooperative target angular velocity measurement method based on the convolutional neural network, preprocessing is firstly carried out before the original data are input into the neural network, so that the convolutional neural network is helped to extract key features of the data more easily, training efficiency is improved, and the conditions of under fitting or over fitting are avoided. The principle is as follows: the convolution layer window of the convolutional neural network is good at extracting similar characteristics of adjacent elements in the input matrix, which is beneficial to classification and identification of image data; however, in the original data matrix of the laser ranging radar, key characteristic information reflecting the angular speed of a target is mainly concentrated at the edge of the matrix, and diagonal elements rather than adjacent elements are easier to show similar characteristics of data, so that the convolutional neural network is difficult to master the key characteristic information contained in the laser ranging radar information, and the training efficiency and the output precision are low and cannot be applied to practical tasks; according to the method, elements containing similar features in the original matrix data are mutually close by a preprocessing method, so that the convolutional neural network is convenient for extracting key features, and under fitting or over fitting is not easy to generate in the training process. Compared with a classical convolutional neural network without pretreatment, the method provided by the invention has the advantages that the output precision is greatly improved, and the training speed and the stability of the network are higher.
Further, compared with traditional analytic attitude measurement methods, the convolutional neural network requires no modeling, computes quickly online, and adapts well to varied targets.
Further, the input of the network consists of two distance matrices at adjacent instants, observation data that existing laser ranging radar equipment can easily measure.
Further, the output of the network comprises the three angular velocity components of the target about the coordinate axes, so the target's attitude angles at any moment can conveniently be obtained by numerical integration.
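As a toy illustration of that integration step (not part of the patented method): a forward-Euler scheme accumulates the per-axis rotation angles from a sequence of angular velocity estimates. The sampling interval and velocity values below are assumed; Euler integration is adequate only for slow rotations such as the 0-0.2 rad/s range considered later in the examples.

```python
# Hedged sketch: forward-Euler integration of angular velocity samples
# (rad/s) into accumulated rotation angles about the three axes.
# dt and the sample values are illustrative assumptions.

def integrate(omega_samples, dt):
    angles = [0.0, 0.0, 0.0]
    history = []
    for wx, wy, wz in omega_samples:
        angles[0] += wx * dt
        angles[1] += wy * dt
        angles[2] += wz * dt
        history.append(tuple(angles))
    return history

samples = [(0.1, 0.0, 0.05)] * 10   # constant rate, 10 samples
hist = integrate(samples, dt=0.1)
# after 1 s at 0.1 rad/s about x, the accumulated x angle is ~0.1 rad
```

For fast or multi-axis tumbling, a proper attitude propagation (quaternion kinematics) would replace the per-axis sum.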
Further, by limiting the scale of the convolutional neural network, the output accuracy of the network is kept sufficient for the actual task while the network is prevented from growing too large, avoiding excessive training time or over-fitting.
Further, bounding the amount of training data suppresses random errors during training and ensures that the neural network converges.
Further, requirements are placed on the breadth and representativeness of the training data to avoid systematic training errors.
Drawings
FIG. 1 is a schematic diagram of a laser range radar (LIDAR) acquiring raw range data;
FIG. 2 is an example of a convolutional neural network structure in accordance with the present invention;
fig. 3 is an example of a frequency distribution diagram of the output accuracy of the convolutional neural network according to the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The invention provides a method for real-time, non-contact measurement of the angular velocity of a space non-cooperative target that requires no prior three-dimensional model of the target's shape. Using the distance information measured by a laser ranging radar (LIDAR) as raw data and a convolutional neural network as the estimator, the angular velocity of the non-cooperative target can be calculated quickly and accurately.
The invention comprises the following steps:
(1) Acquiring original distance data of a laser range radar, and constructing a convolutional neural network according to the original distance data dimension of the laser range radar
The original distance data comprises two matrices, which respectively store the distances to the target measured by the laser emitters along each spatial direction at two adjacent sampling instants.
The designed convolutional neural network comprises several convolutional layers, pooling layers equal in number to the convolutional layers, a flattening layer and several fully connected layers; the original distance data of the laser ranging radar is taken as input and the three components of the target's angular velocity as output. The sampling windows of the convolutional and pooling layers are 2×2 or 3×3, and the number of layers is chosen so that the total dimension of the original distance data is reduced into the range 500-2000. The number of fully connected layers is designed so that the total number of undetermined weights stays below 20000.
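To make these sizing constraints concrete, the arithmetic can be checked with a small script. The layer counts and channel numbers below are illustrative assumptions for 51×51 radar data (the two adjacent frames treated as two input channels), not the network actually claimed by the patent:

```python
# Sketch: verifying the stated design constraints for a hypothetical network
# acting on 51x51 range data with 2 input channels (two adjacent instants).
# All layer sizes here are illustrative assumptions, not the patented network.

def conv_out(n, k):   # 'valid' convolution, stride 1
    return n - k + 1

def pool_out(n, k):   # non-overlapping pooling, stride k
    return n // k

size, ch, params = 51, 2, 0
for out_ch in (4, 8):                        # two conv+pool stages (assumed)
    params += ch * out_ch * 3 * 3 + out_ch   # 3x3 kernels plus biases
    size = pool_out(conv_out(size, 3), 2)    # 51 -> 24 -> 11
    ch = out_ch

flat = ch * size * size                      # dimension after the flatten layer
for n_in, n_out in ((flat, 16), (16, 3)):    # fully connected layers, 3 outputs
    params += n_in * n_out + n_out

print(flat, params)                          # -> 968 15927
```

With these assumed sizes the flattened dimension (968) lands inside the 500-2000 range and the weight count (15927) stays below 20000, consistent with the constraints stated above.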
(2) Generating training data by computer simulation according to the structure of the convolutional neural network
The training data includes an input part and an output part whose entries correspond one to one. The dimensions of the training data match those of the convolutional neural network. The total number of training samples is roughly 1 to 100 times the number of undetermined weights of the network; the shapes and angular velocities of the virtual non-cooperative targets used in the simulation should cover all shapes and angular velocities the target may exhibit in the actual task.
(3) Training weights of convolutional neural networks using training data
The training process is completed offline before the actual task begins; during the task, the servicing satellite only needs to store the data of the trained convolutional neural network.
(4) Preprocessing the distance data actually measured by the laser ranging radar; this step is the key to the marked improvement in the estimation accuracy of the method
The pretreatment process comprises the following steps:
Rearrange all elements in the original distance data matrix. The specific rule, for the element in row i and column j (1 ≤ i ≤ N, 1 ≤ j ≤ M, where N is the total number of rows and M the total number of columns of the original distance data matrix), is:
if 2i-1 ≤ N, move it to row (2i-1); otherwise move it to row 2(N-i+1);
if 2j-1 ≤ M, move it to column (2j-1); otherwise move it to column 2(M-j+1).
Through the preprocessing process, elements containing similar displacement information in the original distance data matrix are moved to adjacent positions, so that the recognition capability of the convolutional neural network is greatly enhanced, and the measuring and calculating precision of the non-cooperative target angular velocity is further improved.
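A minimal NumPy sketch of this rearrangement follows. The branch condition 2i-1 ≤ N is a reconstruction of the rule as stated (the typeset condition did not survive extraction), chosen so that the mapping is a bijection that brings opposite edges of the matrix together:

```python
import numpy as np

def rearrange(D):
    """Rearrange a distance matrix so that opposite edges become adjacent.

    Row i (1-based) goes to row 2i-1 when 2i-1 <= N, else to row 2(N-i+1);
    columns are treated the same way. After the shuffle, rows 1 and N of the
    original matrix sit next to each other (new rows 1 and 2), which is where
    the angular velocity information is said to concentrate.
    """
    N, M = D.shape
    out = np.empty_like(D)
    for i in range(1, N + 1):
        r = 2 * i - 1 if 2 * i - 1 <= N else 2 * (N - i + 1)
        for j in range(1, M + 1):
            c = 2 * j - 1 if 2 * j - 1 <= M else 2 * (M - j + 1)
            out[r - 1, c - 1] = D[i - 1, j - 1]
    return out

D = np.arange(25).reshape(5, 5)
P = rearrange(D)
# For N = 5, original rows land in the new order 1, 5, 2, 4, 3: edges first.
```

Under this reading the permutation interleaves the matrix from its borders inward, so elements that were diagonally opposite end up in neighboring rows and columns where a small convolution window can compare them.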
(5) And inputting the preprocessed actual measurement data into the trained neural network to obtain the angular velocity of the non-cooperative target.
Example: assume an unknown space target whose external shape is a cuboid, with length and width unknown within the range of 0.5-2 meters, an angular velocity fluctuating within 0-0.2 radian/second, and a laser ranging radar observing the target at a resolution of 51×51. The target's attitude can be estimated quickly and accurately with the method provided by the invention.
First, a convolutional neural network as shown in fig. 2 is designed; the network has 17300 undetermined parameters. Then 20000 groups of simulated data are generated by computer simulation to train the network. After training, real measurement data is input into the network to obtain the estimated value of the target's attitude angle.
Fig. 3 shows the frequency distribution of the output error obtained with the method of the invention. The average error of the convolutional neural network's output is about 4.8 degrees, essentially the same as that of traditional analytic methods (3-6 degrees), but the average online computation time per estimate is only about 1 millisecond, far below the 1-5 seconds of traditional analytic methods. Moreover, the output error of a classical convolutional neural network without the data preprocessing of the invention exceeds 20 degrees, far higher than that of the proposed method. The proposed scheme therefore greatly reduces the computation time required for estimation while effectively preserving the accuracy of attitude estimation.
The above merely illustrates the technical idea of the present invention and does not limit its scope of protection; any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (8)

1. A space non-cooperative target angular velocity measurement method based on a convolutional neural network is characterized in that training data is generated by using a simulation method, the convolutional neural network is trained, and the trained convolutional neural network is obtained;
preprocessing the original distance data measured by the laser ranging radar, and inputting the preprocessed original distance data into a trained convolutional neural network to obtain the angular velocity of a non-cooperative target;
the specific process of pretreatment is as follows:
rearranging all elements in the original distance data matrix, and for the element in the ith row and jth column: if 2i-1 ≤ N, move it to row (2i-1), otherwise move it to row 2(N-i+1); if 2j-1 ≤ M, move it to column (2j-1), otherwise move it to column 2(M-j+1);
wherein i is more than or equal to 1 and less than or equal to N, j is more than or equal to 1 and less than or equal to M, N is the total number of rows of the original distance data matrix, and M is the total number of columns.
2. The method for measuring the angular velocity of a spatial non-cooperative target based on a convolutional neural network according to claim 1, comprising the following steps:
s1, constructing a convolutional neural network according to the original distance data dimension of a laser range radar;
s2, generating training data through computer simulation according to the structure of the convolutional neural network;
s3, training the weight of the convolutional neural network by using training data;
s4, preprocessing the original distance data obtained by measuring the laser range radar;
s5, inputting the preprocessed original distance data into a trained neural network to obtain the angular velocity of the non-cooperative target.
3. The method for measuring angular velocity of spatial non-cooperative targets based on convolutional neural network according to claim 2, wherein the raw distance data in S1 comprises two matrices, which respectively store the distances to the target measured by the laser emitters along each spatial direction at two adjacent sampling instants.
4. The method for measuring angular velocity of spatial non-cooperative targets based on convolutional neural network according to claim 2, wherein the convolutional neural network designed in S1 comprises several convolutional layers, pooling layers equal in number to the convolutional layers, a flattening layer and several fully connected layers, and takes the original distance data of the laser ranging radar as input and the three components of the target's angular velocity as output.
5. The method for measuring angular velocity of spatial non-cooperative targets based on convolutional neural network according to claim 4, wherein the sampling windows of the convolutional and pooling layers are 2×2 or 3×3;
the number of convolutional and pooling layers is chosen so that the total dimension of the original distance data is reduced to 500-2000;
the number of fully connected layers is chosen so that the total number of undetermined weights is below 20000.
6. The method for measuring angular velocity of spatial non-cooperative targets based on convolutional neural network according to claim 2, wherein the training data generated in S2 comprises an input part and an output part whose entries correspond one to one;
the dimensions of the input and output parts match the input and output dimensions of the convolutional neural network.
7. The method for measuring angular velocity of spatial non-cooperative targets based on convolutional neural network according to claim 6, wherein the total number of training samples is 1-100 times the number of undetermined weights of the convolutional neural network.
8. The convolutional neural network-based spatial non-cooperative target angular velocity measurement method of claim 6, wherein the shapes and angular velocities of the virtual non-cooperative targets used in the simulation cover all shapes and angular velocities of the target that occur in the actual task.
CN202011176669.8A 2020-10-28 2020-10-28 Convolution neural network-based spatial non-cooperative target angular velocity measurement method Active CN112327317B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011176669.8A CN112327317B (en) 2020-10-28 2020-10-28 Convolution neural network-based spatial non-cooperative target angular velocity measurement method


Publications (2)

Publication Number Publication Date
CN112327317A CN112327317A (en) 2021-02-05
CN112327317B true CN112327317B (en) 2023-09-22

Family

ID=74296165

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011176669.8A Active CN112327317B (en) 2020-10-28 2020-10-28 Convolution neural network-based spatial non-cooperative target angular velocity measurement method

Country Status (1)

Country Link
CN (1) CN112327317B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113050109A (en) * 2021-04-01 2021-06-29 河海大学常州校区 Laser ranging method based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106227851A (en) * 2016-07-29 2016-12-14 汤平 End-to-end image retrieval method with hierarchical depth search based on deep convolutional neural networks
WO2018028255A1 (en) * 2016-08-11 2018-02-15 深圳市未来媒体技术研究院 Image saliency detection method based on adversarial network
CN109284530A (en) * 2018-08-02 2019-01-29 西北工业大学 Space non-cooperative target appearance rail integration method for parameter estimation based on deep learning
CN111212379A (en) * 2020-01-06 2020-05-29 天津工业大学 Novel CSI indoor positioning method based on convolutional neural network
WO2020152296A1 (en) * 2019-01-23 2020-07-30 Deepmind Technologies Limited Identifying neural networks that generate disentangled representations

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3525000B1 (en) * 2018-02-09 2021-07-21 Bayerische Motoren Werke Aktiengesellschaft Methods and apparatuses for object detection in a scene based on lidar data and radar data of the scene
US20200217950A1 (en) * 2019-01-07 2020-07-09 Qualcomm Incorporated Resolution of elevation ambiguity in one-dimensional radar processing


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Xianghao Hou et al. Parameter estimations of uncooperative space targets using novel mixed artificial neural network. Neurocomputing. 2019, full text. *
Xu Yunfei; Zhang Duzhou; Wang Li; Hua Baocheng; Shi Yongqiang; He Yingbo. A convolutional neural network method for attitude measurement of non-cooperative targets. Journal of Astronautics. 2020, (05), full text. *
Chen Shiyu. Attitude stabilization control of an underactuated combination after space target capture. China Doctoral Dissertations Full-text Database, Engineering Science and Technology II. 2020, full text. *

Also Published As

Publication number Publication date
CN112327317A (en) 2021-02-05

Similar Documents

Publication Publication Date Title
CN111429574B (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
CN105371870B (en) A kind of in-orbit accuracy measurement method of star sensor based on star chart data
Opromolla et al. Pose estimation for spacecraft relative navigation using model-based algorithms
CN110849331B (en) Monocular vision measurement and ground test method based on three-dimensional point cloud database model
Pasqualetto Cassinis et al. CNN-based pose estimation system for close-proximity operations around uncooperative spacecraft
CN110412868B (en) Non-cooperative spacecraft orbit determination method using inter-satellite optical images
CN109631912A (en) A kind of deep space spherical object passive ranging method
Long et al. Object detection research of SAR image using improved faster region-based convolutional neural network
Liu et al. High-precision detection method for structure parameters of catenary cantilever devices using 3-D point cloud data
CN112327317B (en) Convolutional neural network-based spatial non-cooperative target angular velocity measurement method
CN115343744A (en) Optical single-double-star combined on-satellite positioning method and system for aerial moving target
Huang et al. Non-model-based monocular pose estimation network for uncooperative spacecraft using convolutional neural network
Cote et al. A neural network-based method for tracking features from satellite sensor images
CN111383273A (en) High-speed rail contact net part positioning method based on improved structure reasoning network
Cassinis et al. Leveraging neural network uncertainty in adaptive unscented Kalman Filter for spacecraft pose estimation
Du et al. Autonomous measurement and semantic segmentation of non-cooperative targets with deep convolutional neural networks
CN114879207A (en) Ultrasonic obstacle avoidance method for L4-level automatic driving vehicle
Gao et al. Attitude determination of large non-cooperative spacecrafts in final approach
CN107194161B (en) ARAIM availabilities prediction technique and device based on user demand classification
Wang et al. Slow-spinning spacecraft cross-range scaling and attitude estimation based on sequential ISAR images
CN112268564A (en) Unmanned aerial vehicle landing space position and attitude end-to-end estimation method
CN109255150B (en) Multi-antenna arrival angle data association method based on bidirectional order association
CN116026325A (en) Navigation method and related device based on neural process and Kalman filtering
Zhang et al. A diverse space target dataset with multidebris and realistic on-orbit environment
Grossman et al. Composite damage assessment employing an optical neural network processor and an embedded fiber-optic sensor array

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant