WO2024052928A1 - Vision-based, self-decisive, hazard-free planetary landing system of a space vehicle - Google Patents
Vision-based, self-decisive, hazard-free planetary landing system of a space vehicle
- Publication number
- WO2024052928A1 WO2024052928A1 PCT/IN2023/050811 IN2023050811W WO2024052928A1 WO 2024052928 A1 WO2024052928 A1 WO 2024052928A1 IN 2023050811 W IN2023050811 W IN 2023050811W WO 2024052928 A1 WO2024052928 A1 WO 2024052928A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- model
- image
- images
- landing
- space vehicle
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- The present Invention generally relates to a system for vision-based, self-decisive, hazard-free planetary landing of a space vehicle.
- The present Invention relates to generating vision-based control commands for planetary landing of a space vehicle using artificial intelligence techniques, and to the system thereof.
- CN107273929A discloses the Invention of unmanned aerial vehicle autonomous landing method based on deep synergetic neural network.
- A method for autonomous landing of a UAV (drone) is proposed wherein the acting force on the UAV is the earth's gravity.
- The images used are manually pre-processed, vectorised, and converted into motion kinematics of the drone. These kinematic equations are used to train the neural network to generate control commands to stop, continue, or hover.
- A convolutional neural network is used to extract features from real-time images, which are directly utilised to train the deep neural network.
- The objective is not only to generate control commands but also to generate a navigation trajectory by identifying a safe landing position using a hazard detection and prevention method.
- CN110543182A discloses an autonomous landing control method and system for a small unmanned rotorcraft.
- This Invention claims a method for autonomous landing of a rotorcraft wherein input landing-site images, along with the altitude of the rotorcraft (obtained through GPS), are fed to a neural network to generate as output the duty cycle of the propeller motor, which is the controlling body of the rotorcraft.
- In contrast, we claim a method for autonomous landing of a space vehicle wherein the inputs are only real-time landing-site images, and thrust commands are generated as output for navigation. It also involves a method of trajectory state prediction along with a real-time hazard detection method.
- The proposed invention works under the assumption that GPS support is absent on the external planetary body.
- WO2017177128A1 discloses systems and methods for Deep Reinforcement Learning using a Brain-Artificial Intelligence Interface.
- This Invention discloses an automatic aeroplane/flight control system, similar to a human pilot, that takes into consideration unpredictable situations like lightning or weather conditions. It consists of an artificial neural network. Learning of the neural network takes place through a demonstration method that depends on human pilots.
- As a flight control system it is earth-based and makes use of GPS and other flight sensors.
- WO2022072606A1 discloses an autonomous multifunctional aerial drone.
- This Invention claims a method for autonomous navigation of multirotor drones based on artificial intelligence. It feeds mixed sensor data from cameras, speakers, and other sensors to an artificial intelligence module for aerial navigation. Infrared or Lidar sensors are used for obstacle detection. It uses GPS and GLONASS technology for guidance. In contrast, in our Invention we claim a process and method for autonomous landing of a space vehicle where GPS is not available. Real-time surface images will be utilized to predict the future navigation step, along with hazard avoidance, using trained deep learning models.
- CN110825101A discloses an autonomous unmanned aerial vehicle landing method based on a deep convolutional neural network.
- This Invention claims a method for autonomous landing of a UAV using a predefined height parameter, obstacle detection through pattern matching, and guiding the UAV to the desired location using thresholding.
- The landmark pattern data is generated through GPS using drone-captured images.
- In contrast, in our disclosed Invention, real-time surface images will be utilized to predict the future navigation step along with hazard avoidance using trained deep learning models. No pattern matching or thresholding is performed.
- The present Invention generally relates to processing the images captured by an onboard camera using deep learning for autonomous planetary landing.
- The Invention relates to a system for vision-based, self-decisive, hazard-free planetary landing of a space vehicle, and to the method thereof.
- The primary data is in the form of images or videos captured through the onboard cameras of the space vehicle. While in descent, a space vehicle camera continuously captures images and/or videos of the underlying region of interest.
- The present Invention intends to use this data and available hardware like the IMU (Inertial Measurement Unit) for more precision.
- the system is divided into three primary phases.
- The correlation between the spacecraft dynamic state and the current snapshot of the region/camera image is modelled using a deep learning technique.
- This model serves as a mapping between the image and landing parameters like position and velocity, and is utilized to predict the dynamic state of the system for a real-time image input. This module is named the ILP Correlation model (103).
- A next-state prediction model is built for the space vehicle trajectory using a memory-enabled Long Short-Term Memory deep neural network. This ensures vehicle movement within permissible thrust limits.
- Captured images are used for modelling the hazard detection system, which in turn helps path planning and retargeting.
- A system for generating control actions for a space vehicle includes an on-board camera (100) for capturing images of the underlying terrain of the planet while the spacecraft is in descent and whose hazard-free landing spot is to be ascertained. These images are fed to the pretrained deep learning models (206), (303), and (405) obtained through processes (103), (104), and (105) respectively. On receiving an input from the camera, step (103) finds a correlation between system parameters and input images. Step (104) generates thrust values in the form of control actions (106) which can be directly fed to the control unit for further navigation.
- Step (105) is the hazard detection model, a process of detecting hazards like craters and boulders in the underlying region of interest on receiving image inputs from the camera (100).
- Step (106) comprises the outcomes of the overall process: acceleration commands, or the necessary thrust values for navigating the spacecraft to the next desired position.
- A process (103) for finding a correlation between captured images and system state parameters is provided by implementing the method illustrated in Figure 1;
- Step (200) generates synthetic databases: (202) for descent images and (205) for system states;
- Step (203) is a deep learning module for finding the correlation between image and state parameters, wherein a multivariate image regression is implemented and trained using the databases (202) and (205) generated in the previous steps;
- The end result of the process is a well-trained ILP correlation model (206), which is further utilized in state estimation tasks as shown in Figure 1, step (103).
- a process for trajectory state prediction is provided for space vehicle descent navigation;
- The process (104) comprises a deep learning assembly called the deep learning next state prediction module (301), wherein a long short-term memory (LSTM) architecture is employed for trajectory state prediction;
- Step (301) comprises a built-in feature extraction module; these image features are further used in the process of regression. In the end, a state prediction model (303) is generated with the final optimized model parameters. The prediction model (303) is utilized on receiving inputs from on-board cameras to predict the best probable next state for the space vehicle. In addition to state prediction, step (104), on receiving hazard location inputs (406) from step (105), decides on retargeting the landing location;
- A process for detection of hazards like craters or boulders on the landing site using images involves an image annotator (401), wherein images are annotated manually with bounding boxes around hazards like craters and boulders, along with their positional details; on receiving these annotated images (403) from a database (402), a deep learning architecture with a transfer learning approach (404) is trained for hazard detection;
- An object of the present Invention is to provide a method and a system for generating control actions for space vehicle navigation using deep learning techniques.
- A sub-object of the present Invention is to provide a method and a system for generating the current position of a space vehicle in the absence of GPS.
- A further object of the present Invention is to provide a method and a system for the detection of potential hazards for taking retargeting decisions.
- An object of the present Invention is to:
 a. enable space missions to perform soft landing of a space vehicle using artificial intelligence without human intervention;
 b. estimate IMU parameters like velocity, acceleration, and position of the space vehicle with the help of captured real-time images in the absence of a GPS system;
 c. enable space missions to take real-time decisions for autonomous navigation and landing in the environment of any celestial body;
 d. perform hazard-free landing of the space vehicle on a celestial body;
 e. decide appropriate trajectory and thrust actions for soft landing.
- The presented Invention involves deep learning models trained on planetary image/video data. The ILP correlation and thrust prediction models guarantee precise manoeuvring to the next feasible trajectory state. The hazard detection model takes care of hazardous situations and prompts the system for a retargeting decision.
- The integrated assembly of the Invention guarantees real-time autonomous trajectory planning, guidance, and navigation.
- Since the system depends on real-time images captured through the onboard camera, it does not require costly DEM facilities.
- The landing trajectory need not be known in advance, as the next step in navigation is predicted via the combined outcomes of the thrust prediction and hazard detection modules. The overall system brings self-decisive capability.
- FIG.1 shows an architecture and overall process flow for the proposed system generating control actions for a space vehicle using on-board camera images in accordance with an embodiment of the present Invention; wherein (100) is the onboard camera of the space vehicle, (101) are the real-time images captured through the onboard camera (100), (102) are IMU (Inertial Measurement Unit) sensors like LIDAR, (103) is the ILP (Image and Landing Parameter) correlation model, (104) is the trajectory state prediction model, (105) is the hazard detection model, and (106) are the control actions predicted using the combination of the three models (103), (104), and (105).
- FIG.2 shows a block diagram for the process to build (103), i.e. the ILP correlation model for finding a correlation between captured images and system state parameters, in accordance with an embodiment of the present Invention by implementing the method illustrated in Figure 1; wherein (207) is the camera mounted on the modelled space vehicle in a simulated environment, (200) is the agent-based synthetic image generator platform, (201) are the images captured through the simulated environment, (202) forms the image database, (204) are the system state parameters, (205) is the image state label database, (203) is the deep learning module based on multivariate CNN regression for image correlation, and (206) is the ILP correlation model.
- FIG.3 illustrates the block diagram for the process (104) of obtaining a trained model of trajectory state prediction in accordance with an embodiment of the present Invention; wherein (202) is the image database, (205) is the image state labels database, (300) are consecutive images with labels, (301) is the deep learning next state prediction module, (302) is the most probable state, (303) is the trajectory state prediction model, and (406) are possible hazards and their locations.
- FIG.4 depicts an architecture for the process, i.e. hazard detection model (105), of the proposed system for detection of hazards like craters or boulders on the landing site using images, in accordance with an embodiment of the present Invention; wherein (202) is the image database, (400) are images, (401) is the image annotator, (402) is the annotated image database with localized hazards, (403) are annotated images, (404) is the deep learning hazard detection module, (405) is the hazard detection model, and (406) is the prediction of possible hazards and their locations.
- The present Invention generally relates to image processing with the help of deep learning classification techniques, multivariate regression techniques with and without memory, and a combination of reinforcement learning techniques.
- The present Invention relates to a system for generating thrust commands for hazard-free descent and navigation of a space vehicle using deep learning techniques, and to the method thereof.
- The embodiments of the present Invention make certain assumptions and use some language related to the space vehicle, like the dynamic state of a system. The following paragraph briefly explains these assumptions.
- Real-time descent images are captured through the space vehicle's on-board cameras at some time interval. As each image so captured is at some altitude and attitude, and the space vehicle is carrying some velocity, the image is assumed to represent the current state of the space vehicle. This means that an image captured at time instance t represents state S_t. Analogously, states S_(t-1) and S_(t+1) are represented by images at time instances (t-1) and (t+1) respectively. More specifically, the time interval between two images is equal to the state transition time t_st for a trajectory. A state at time t can be expressed in terms of soft-landing parameters as follows,
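In code, the state abstraction described above can be sketched minimally as follows; the specification names position, velocity, and altitude as the soft-landing parameters, but the exact field names, units, and example values here are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LanderState:
    """State S_t represented by the image captured at time instance t.

    Fields and units are assumptions for illustration only.
    """
    position: Tuple[float, float]         # surface coordinates, m
    altitude: float                       # height above terrain, m
    velocity: Tuple[float, float, float]  # m/s, negative z = descending

def transition_time(t_capture: float, t_next_capture: float) -> float:
    """Interval between consecutive captures = state transition time t_st."""
    return t_next_capture - t_capture

s_t = LanderState(position=(120.0, 80.0), altitude=500.0,
                  velocity=(0.0, 0.0, -12.0))
```

Consecutive states S_(t-1), S_t, S_(t+1) would then simply be instances built from consecutive captures, t_st apart.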
- FIG. 1 shows an architecture and overall process flow for a system generating control actions for a space vehicle using on-board camera images in accordance with an embodiment of the present invention
- The process includes a lander with a camera (100) for capturing descent images of the underlying planetary region; these images are fed to the pre-trained deep learning models (206), (303), and (405) obtained through processes (103), (104), and (105) respectively; the processes (103), (104), and (105) are detailed in the following subsections as described in Figures 2, 3, and 4;
- On receiving an input from the camera, step (103) is instantiated to estimate the current state of the dynamic system, which is required to ascertain the current lander position and velocity along with altitude information; essentially, step (103) finds a correlation between system parameters on receiving image inputs from the camera (100); step (103) also fine-tunes the estimates based on inputs from the IMU sensor unit (102), which ascertains the accuracy of the state estimates;
- FIG. 2 shows a block diagram for a process (103) for finding a correlation between captured images and system state parameters in accordance with an embodiment of the present Invention by implementing the method illustrated in Figure 1;
- The system (103) includes an agent-based platform (200) for generating synthetic images on receiving a planetary environment as input; the platform (200) is calibrated and fine-tuned for virtual descent of the agent onto the planetary surface; the virtual system camera and the agent's control unit are calibrated to generate descent images and the corresponding state parameters comprising position, velocity, and altitude information;
- Step (200), of rendering synthetic images (201) and corresponding state parameters (204), generates the synthetic databases (202) for descent images and (205) for system states;
- The process of generating databases (202) and (205) involves manually landing the agent on the given planetary environment through a keyboard, mouse, joystick, or camera interface (207);
- Step (203) is a deep learning module for finding the correlation between image and state parameters, wherein a multivariate image regression is implemented and trained using the databases (202) and (205) generated in the previous steps;
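As an illustrative sketch only, the role of module (203) — learning a map from descent images to state parameters — can be mimicked with a plain linear least-squares regression on synthetic stand-in data. The patent specifies a multivariate CNN regression; the linear model, the array shapes, and the exactly-linear synthetic data below are all assumptions made so the toy is self-contained and recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for databases (202) and (205): 64 flattened 8x8
# "descent images" and 6 state parameters per image (e.g. position,
# velocity, altitude components), related by an exact linear map.
true_map = rng.normal(size=(64, 6))
images = rng.normal(size=(64, 64))   # stand-in for image database (202)
states = images @ true_map           # stand-in for state database (205)

# "Training" the stand-in ILP correlation model (206): least-squares
# weights minimising ||images @ W - states||^2.
W, *_ = np.linalg.lstsq(images, states, rcond=None)

# Inference: estimate the dynamic state for a new descent image.
new_image = rng.normal(size=64)
estimated_state = new_image @ W
```

In the actual system the least-squares map would be replaced by the trained multivariate CNN regressor operating on raw image tensors.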
- FIG. 3 shows the block diagram for the process (104) of obtaining a trained model of trajectory state prediction in accordance with an embodiment of the present Invention;
- The process (104) comprises a deep learning assembly called the deep learning next state prediction module (301), wherein a long short-term memory (LSTM) architecture is employed for trajectory state prediction; an assembly (300) correlates images from the image database (202) with their corresponding state labels from database (205); three such consecutive image-label pairs at a time are given as input to the deep learning LSTM module; the presence of memory in the LSTM allows the network to remember previous states of the system, thereby enhancing the prediction task;
- Step (301) comprises an inbuilt convolutional neural network and a pooling layer for feature extraction; these image features are further used in the process of regression; a loss function is defined to fine-tune the weights of the network; once the minimum of the loss is achieved, the prediction model (303) is generated with the final optimized weights; the prediction model (303) is utilized on receiving inputs from on-board cameras to predict the best probable next state for the space vehicle;
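A minimal sketch of the LSTM data flow in module (301) is given below, with untrained random weights and assumed dimensions; it only illustrates how three consecutive image-feature vectors are folded through a memory cell before a linear head emits a candidate next state, not the patent's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input, forget, output gates and candidate."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)   # memory carries past states
    h = sig(o) * np.tanh(c)
    return h, c

feat_dim, hidden, n_state = 16, 8, 6       # assumed sizes
W = rng.normal(scale=0.1, size=(4 * hidden, feat_dim))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
head = rng.normal(scale=0.1, size=(n_state, hidden))  # hidden -> state

# Three consecutive image-feature vectors (stand-ins for the CNN
# features of step (301)) are fed in order through the cell.
h = c = np.zeros(hidden)
for features in rng.normal(size=(3, feat_dim)):
    h, c = lstm_step(features, h, c, W, U, b)

next_state = head @ h  # candidate for the most probable next state
```

In training, the head output would be compared against the labelled next state from database (205) and the weights fine-tuned via the loss function described above.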
- FIG. 4 illustrates the architecture and process (105) of the proposed system for detection of hazards like craters or boulders on the landing site using images, in accordance with an embodiment of the present Invention; the architecture (105) involves an image annotator (401) wherein images (400) from database (202) are taken as inputs and annotated manually with bounding boxes around hazards like craters and boulders, along with their positional details; this yields localization of hazards and results in a database (402) containing annotated images; on receiving these annotated images (403) from the database (402), a deep learning architecture called the deep learning hazard detection module (404) is trained for hazard detection; the module (404) comprises a convolutional neural network with a max-pooling layer for feature extraction from these images; it further comprises deep image classification network layers for classifying the images into three categories, namely crater, boulder, and plane surface; the network is trained according to the available image labels using gradient descent optimization to minimize the loss function; after training, the hazard detection model (405) is obtained;
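The classification stage of module (404) can be sketched, under assumed sizes and with untrained stand-in weights, as max pooling followed by a softmax over the three hazard categories; this is a shape-level illustration, not the trained CNN:

```python
import numpy as np

rng = np.random.default_rng(2)
CLASSES = ("crater", "boulder", "plane_surface")

def max_pool(image, k=2):
    """k x k max pooling, the feature-reduction step of module (404)."""
    h, w = image.shape
    trimmed = image[:h - h % k, :w - w % k]
    return trimmed.reshape(h // k, k, w // k, k).max(axis=(1, 3))

def classify(image, weights):
    """Pooled features -> linear scores -> softmax over CLASSES."""
    features = max_pool(image).ravel()
    scores = weights @ features
    p = np.exp(scores - scores.max())   # numerically stable softmax
    return p / p.sum()

image = rng.normal(size=(16, 16))       # stand-in terrain patch (400)
weights = rng.normal(size=(3, 64))      # untrained stand-in weights
probs = classify(image, weights)
label = CLASSES[int(np.argmax(probs))]
```

In the trained model (405), the pooled features would come from convolutional layers and the weights from gradient descent on the annotated database (402).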
- Example 1 describes an event where a spacecraft is about to land on a planet and loses contact with the earth's base station. In such a situation, the system of the present invention takes over control. The system estimates the current state of the spacecraft and predicts the next navigation state using the thrust prediction module. The hazard detection module signals the navigation module whether the next state is hazard-free or not. If the next state is hazard-free, the spacecraft is navigated to that state; otherwise, a retargeting decision is made. This process repeats until an altitude of less than 1 m is reached. At the end of the process, the last state of the autonomous decisions is the preferred landing spot. Thus, on loss of communication, the system can take its own decisions and land safely on the planetary surface.
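The decision loop of Example 1 can be sketched as follows; every callable is a hypothetical stand-in for the corresponding module (state estimation (103), next-state prediction (104), hazard detection (105)):

```python
def autonomous_descent(estimate_state, predict_next, is_hazard_free,
                       retarget, navigate):
    """Repeat: predict next state, check hazards, retarget if needed,
    navigate; stop once the altitude drops below 1 m (Example 1)."""
    state = estimate_state()
    while state["altitude"] >= 1.0:
        candidate = predict_next(state)
        if not is_hazard_free(candidate):
            candidate = retarget(state)  # pick an alternate landing target
        state = navigate(candidate)
    return state  # last state = preferred landing spot

# Toy stand-ins: altitude halves each step; the x = 0 track becomes
# hazardous below 4 m, forcing one retargeting decision.
final = autonomous_descent(
    estimate_state=lambda: {"x": 0.0, "altitude": 64.0},
    predict_next=lambda s: {"x": s["x"], "altitude": s["altitude"] / 2},
    is_hazard_free=lambda s: s["x"] != 0.0 or s["altitude"] > 4.0,
    retarget=lambda s: {"x": s["x"] + 5.0, "altitude": s["altitude"] / 2},
    navigate=lambda s: s,
)
```

With these stand-ins, the loop retargets exactly once (shifting to x = 5.0) and terminates at 0.5 m altitude.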
- Example 2 describes an event where an aeroplane is travelling at speed at high altitude. Due to bad weather conditions or technical issues, the GPS system fails. In such a situation, the proposed system is useful for generating autonomous landing trajectory sequences if configured and trained on Earth images. The ILP correlation and state prediction modules are useful for navigating the plane to the desired location.
- The system can be configured for guidance and navigation in any planetary landing mission, and in the aviation industry for autonomous landing of air vehicles in the absence of GPS.
- The extended applications lie in robotics navigation and guidance.
- PTL 1 discloses an unmanned aerial vehicle autonomous landing method based on a deep synergetic neural network.
- A method for autonomous landing of a UAV (drone) is proposed wherein the acting force on the UAV is the earth's gravity.
- The images used are manually pre-processed, vectorised, and converted into motion kinematics of the drone. These kinematic equations are used to train the neural network to generate control commands to stop, continue, or hover.
- A convolutional neural network is used to extract features from real-time images, which are directly utilised to train the deep neural network.
- The objective is not only to generate control commands but also to generate a navigation trajectory by identifying a safe landing position using a hazard detection and prevention method.
- PTL 2 discloses an autonomous landing control method and system for a small unmanned rotorcraft.
- This Invention claims a method for autonomous landing of a rotorcraft wherein input landing-site images, along with the altitude of the rotorcraft (obtained through GPS), are fed to a neural network to generate as output the duty cycle of the propeller motor, which is the controlling body of the rotorcraft.
- In contrast, we claim a method for autonomous landing of a space vehicle wherein the inputs are only real-time landing-site images, and thrust commands are generated as output for navigation. It also involves a method of trajectory state prediction along with a real-time hazard detection method.
- The proposed invention works under the assumption that GPS support is absent on the external planetary body.
- PTL 3 discloses systems and methods for Deep Reinforcement Learning using a Brain-Artificial Intelligence Interface.
- This Invention discloses an automatic aeroplane/flight control system, similar to a human pilot, that takes into consideration unpredictable situations like lightning or weather conditions. It consists of an artificial neural network. Learning of the neural network takes place through a demonstration method that depends on human pilots.
- As a flight control system it is earth-based and makes use of GPS and other flight sensors.
- PTL 4 discloses an autonomous multifunctional aerial drone.
- This Invention claims a method for autonomous navigation of multirotor drones based on artificial intelligence. It feeds mixed sensor data from cameras, speakers, and other sensors to an artificial intelligence module for aerial navigation. Infrared or Lidar sensors are used for obstacle detection. It uses GPS and GLONASS technology for guidance. In contrast, in our Invention we claim a process and method for autonomous landing of a space vehicle where GPS is not available. Real-time surface images will be utilized to predict the future navigation step, along with hazard avoidance, using trained deep learning models.
- PTL 5 discloses an autonomous unmanned aerial vehicle landing method based on a deep convolutional neural network.
- This Invention claims a method for autonomous landing of a UAV using a predefined height parameter, obstacle detection through pattern matching, and guiding the UAV to the desired location using thresholding.
- The landmark pattern data is generated through GPS using drone-captured images.
- In contrast, in our disclosed Invention, real-time surface images will be utilized to predict the future navigation step along with hazard avoidance using trained deep learning models. No pattern matching or thresholding is performed.
- NPL 1 refers to a design that integrated guidance and navigation functions using recurrent CNNs, providing a functional relationship between optical images captured during landing and the required thrust actions. It further employed the DAgger method to improve deep learning performance, but this requires an expert to augment the database with human corrective actions, which is hard to find in space. The vehicle landing problem is considered in two separate steps, altitude reduction (1D) and translational motion (2D), while it is more realistic to consider it as a 3D space manoeuvre.
- NPL 1: R. Furfaro et al., “Deep learning for autonomous lunar landing,” in 2018 AAS/AIAA Astrodynamics Specialist Conference, 2018, pp. 1-22.
Abstract
The present invention relates to a vision-based, self-decisive, hazard-free planetary landing system of a space vehicle. The invention discloses the method of finding correlations between descent images captured by cameras and the system state parameters to predict the thrust control command enabling spacecraft descent guidance and navigation. For hazard-free landing, a hazard detection system (105) is activated, wherein the terrain under the current field of view is classified into craters, boulders, and plane surface, which is then used by the thrust prediction utility (104) to make retargeting decisions. Module (103) discloses the method of generating a synthetic dataset of planetary images and descent trajectory state parameters via an agent-based image-generation platform for an unknown planetary environment. Multivariate deep learning models are used to predict control actions in the form of a thrust command (106) by combining the outputs of models (103) and (105).
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| IN202221051791 | 2022-09-10 | | |
| IN202221051791 | 2022-09-10 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024052928A1 (fr) | 2024-03-14 |
Family
ID=90192299
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IN2023/050811 (WO2024052928A1, fr) | Visual, self-determining, hazard-free planetary landing system for a space vehicle | 2022-09-10 | 2023-08-28 |
Country Status (1)
| Country | Link |
|---|---|
| WO | WO2024052928A1 (fr) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107748895A (zh) * | 2017-10-29 | 2018-03-02 | Beijing University of Technology | UAV landing terrain image classification method based on a DCT-CNN model |
| CN107817820A (zh) * | 2017-10-16 | 2018-03-20 | Fudan University | Deep-learning-based autonomous flight control method and system for unmanned aerial vehicles |
| US20200363813A1 (en) * | 2019-05-15 | 2020-11-19 | Baidu Usa Llc | Online agent using reinforcement learning to plan an open space trajectory for autonomous vehicles |
2023
- 2023-08-28: WO application PCT/IN2023/050811 filed, published as WO2024052928A1 (fr); legal status unknown
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
Moghaddam et al. | On the guidance, navigation and control of in-orbit space robotic missions: A survey and prospective vision | |
Betts | Survey of numerical methods for trajectory optimization | |
US6990406B2 (en) | Multi-agent autonomous system | |
Elfes et al. | Robotic airships for exploration of planetary bodies with an atmosphere: Autonomy challenges | |
Nanjangud et al. | Robotics and AI-enabled on-orbit operations with future generation of small satellites | |
Ono et al. | GNC strategies and flight results of Hayabusa2 first touchdown operation | |
Roberto Sabatini et al. | A low-cost vision based navigation system for small size unmanned aerial vehicle applications | |
Theil et al. | ATON (Autonomous Terrain-based Optical Navigation) for exploration missions: recent flight test results | |
Wolf et al. | Toward improved landing precision on Mars | |
Montgomery et al. | The jet propulsion laboratory autonomous helicopter testbed: A platform for planetary exploration technology research and development | |
Bueno et al. | Project AURORA: Towards an autonomous robotic airship | |
Martin et al. | Astrone–GNC for Enhanced Surface Mobility on Small Solar System Bodies | |
Lombaerts et al. | Adaptive multi-sensor fusion based object tracking for autonomous urban air mobility operations | |
Brady et al. | ALHAT system architecture and operational concept | |
Khoroshylov et al. | Deep learning for space guidance, navigation, and control | |
Rogata et al. | Design and performance assessment of hazard avoidance techniques for vision-based landing | |
Marlow et al. | Local terrain mapping for obstacle avoidance using monocular vision | |
Camara et al. | Design and performance assessment of hazard avoidance techniques for vision based landing | |
Djapic et al. | Autonomous takeoff & landing of small UAS from the USV | |
WO2024052928A1 (fr) | Visual, self-determining, hazard-free planetary landing system for a space vehicle | |
Wood | The Evolution of Deep Space Navigation: 2004–2006 | |
Chekakta et al. | Robust deep learning LiDAR-based pose estimation for autonomous space landers | |
Elfes et al. | Modelling, control and perception for an autonomous robotic airship | |
Geiger et al. | Flight testing a real-time direct collocation path planner | |
Sangam et al. | Advanced flight management system for an unmanned reusable space vehicle |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23862660; Country of ref document: EP; Kind code of ref document: A1 |