US20190063932A1 - Autonomous Vehicle Utilizing Pose Estimation - Google Patents
Autonomous Vehicle Utilizing Pose Estimation
- Publication number
- US20190063932A1 (U.S. application Ser. No. 16/100,462)
- Authority
- US
- United States
- Prior art keywords
- autonomous vehicle
- pose estimation
- recited
- pose
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Images
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/28—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
- G01C21/30—Map- or contour-matching
- G01C21/32—Structuring or formatting of map data
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/0808—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
- G05D1/0816—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft to ensure stability
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2101/00—UAVs specially adapted for particular uses or applications
- B64U2101/30—UAVs specially adapted for particular uses or applications for imaging, photography or videography
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3804—Creation or updating of map data
- G01C21/3833—Creation or updating of map data characterised by the source of data
- G01C21/3852—Data derived from aerial or satellite images
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/38—Electronic maps specially adapted for navigation; Updating thereof
- G01C21/3863—Structures of map data
- G01C21/387—Organisation of map data, e.g. version management or database structures
- G01C21/3878—Hierarchical structures, e.g. layering
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/0088—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G06F17/30241—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/046—Forward inferencing; Production systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74—Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0004—Transmission of traffic-related information to or from an aircraft
- G08G5/0013—Transmission of traffic-related information to or from an aircraft with a ground station
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0017—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information
- G08G5/0021—Arrangements for implementing traffic-related aircraft activities, e.g. arrangements for generating, displaying, acquiring or managing traffic information located in the aircraft
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0047—Navigation or guidance aids for a single aircraft
- G08G5/0069—Navigation or guidance aids for a single aircraft specially adapted for an unmanned aircraft
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/0073—Surveillance aids
- G08G5/0086—Surveillance aids for monitoring terrain
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G5/00—Traffic control systems for aircraft, e.g. air-traffic control [ATC]
- G08G5/04—Anti-collision systems
- G08G5/045—Navigation or guidance aids, e.g. determination of anti-collision manoeuvers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U10/00—Type of UAV
- B64U10/10—Rotorcrafts
- B64U10/13—Flying platforms
- B64U10/16—Flying platforms with five or more distinct rotor axes, e.g. octocopters
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U2201/00—UAVs characterised by their flight controls
- B64U2201/10—UAVs characterised by their flight controls autonomous, i.e. by navigating independently from ground or air stations, e.g. by using inertial navigation systems [INS]
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U50/00—Propulsion; Power supply
- B64U50/10—Propulsion
- B64U50/11—Propulsion using internal combustion piston engines
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U50/00—Propulsion; Power supply
- B64U50/10—Propulsion
- B64U50/19—Propulsion using electrically powered motors
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B64—AIRCRAFT; AVIATION; COSMONAUTICS
- B64U—UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
- B64U50/00—Propulsion; Power supply
- B64U50/30—Supply or distribution of electrical power
-
- G05D2201/0213—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- the present invention relates to visual odometry and more particularly to selecting features that are beneficial for pose estimation in visual odometry.
- Visual odometry is the process of estimating the ego-motion (i.e., three-dimensional (3D) pose) of an agent (e.g., vehicle, robot) using only the input of cameras attached to it.
- Agent e.g., vehicle, robot
- State-of-the-art visual odometry systems are based on hand-crafted features such as scale-invariant feature transform (SIFT) and oriented features from accelerated segment test (FAST) and rotated binary robust independent elementary features (BRIEF), or Oriented FAST and rotated BRIEF (ORB), and tend to keep features that are easy to detect or to track but not good for pose estimation.
- SIFT scale-invariant feature transform
- FAST oriented features from accelerated segment test
- BRIEF binary robust independent elementary features
- ORB Oriented FAST and rotated BRIEF
- distant points may be easy to track due to their small motions in images but produce high uncertainty in pose estimation; or points on trees or buildings are plentiful but can be uninformative due to their ambiguous textures.
- an autonomous vehicle utilizing pose estimation includes one or more cameras for capturing images of the autonomous vehicle surroundings.
- the autonomous vehicle also includes a propulsion system for moving the autonomous vehicle responsive to a guidance control system.
- the guidance control system includes a pose estimation system that receives a plurality of images from the one or more cameras and predicts a pose from a score map and a combined feature map, the combined feature map generated from a pair of the plurality of images.
- a computer program product for an autonomous vehicle with guidance control system utilizing pose estimation.
- the computer program product comprising a non-transitory computer readable storage medium having program instructions.
- the program instructions are executable by a computer to cause the computer to perform a method.
- the method includes receiving, by a pose estimation system, a plurality of images from one or more cameras.
- the method also includes predicting, by the pose estimation system, a pose from the score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images.
- the method additionally includes moving, by a propulsion system, the autonomous vehicle responsive to the pose.
- a computer-implemented method for a guidance control system utilizing pose estimation in an autonomous vehicle.
- the method includes receiving, by a pose estimation system, a plurality of images from one or more cameras.
- the method also includes predicting, by the pose estimation system, a pose from the score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images.
- the method additionally includes moving, by a propulsion system, the autonomous vehicle responsive to the pose.
- FIG. 1 shows an exemplary system for an autonomous vehicle utilizing three-dimensional pose estimation, in accordance with an embodiment of the present invention
- FIG. 2 shows a block/flow diagram of an exemplary system for training a three-dimensional pose estimation network, in accordance with an embodiment of the present invention
- FIG. 3 shows a block/flow diagram of an exemplary system for a three-dimensional pose estimation network at deployment, in accordance with an embodiment of the present invention
- FIG. 4 shows a block/flow diagram of a feature weighting system, in accordance with an embodiment of the present invention
- FIG. 5 shows an exemplary system for an aerial drone utilizing three-dimensional pose estimation, in accordance with an embodiment of the present principles
- FIG. 6 shows a block/flow diagram of a computer processing system, to be used for three-dimensional pose estimation, in accordance with an embodiment of the present invention
- FIG. 7 shows a block/flow diagram illustrating a method for a guidance control system utilizing pose estimation in an autonomous vehicle, in accordance with an embodiment of the present invention
- FIG. 8 shows a block/flow diagram illustrating a method for pose estimation, in accordance with an embodiment of the present invention.
- FIG. 9 shows a block/flow diagram illustrating a method for a stabilization system utilizing pose estimation in an aerial drone, in accordance with an embodiment of the present invention.
- aspects of the present invention select features that are beneficial for pose estimation by using convolutional neural networks (CNNs) to consider different aspects of the features such as semantics and motions.
- CNNs convolutional neural networks
- aspects of the present invention employ a novel CNN architecture for computing score maps that are used for selecting good features employed for pose estimation.
- the novel CNN architecture for score map prediction takes into account various factors such as semantics and motions and is designed for direct benefits towards pose estimation. Different signals, such as semantics and motions, are used to supervise intermediate layers before predicting score maps. Furthermore, the estimated score maps are incorporated directly into intermediate layers that are used for pose prediction. In this way, the score maps have direct effects on pose estimation.
- aspects of the present invention output score maps that can be visually interpretable on the image domain.
- the present invention can work with as few as two images, without the need of an inertial measurement unit (IMU), and can handle various cases of bad features due to deep supervision of semantics and motions. Since the present invention is designed for direct benefits towards pose estimation, it produces more accurate score maps and better pose estimates.
- IMU inertial measurement unit
- Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements.
- the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system.
- a computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device.
- the medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium.
- the medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein.
- the inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium can be configured to cause a computer to operate in a specific and predefined manner to perform the functions described herein.
- a data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus.
- the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times the code is retrieved from bulk storage during execution.
- I/O devices including but not limited to keyboards, displays, pointing devices, etc. may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks.
- Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
- the system 10 can include an autonomous vehicle 12 .
- the autonomous vehicle 12 can be an automobile.
- the autonomous vehicle 12 can include a boat, plane, helicopter, or truck.
- the autonomous vehicle 12 can include a propulsion system 13 .
- the propulsion system 13 can include propellers or other engines for flying the autonomous vehicle 12 .
- the propulsion system 13 can include wheels or tracks.
- the propulsion system 13 can include a jet engine or hover technology.
- the propulsion system 13 can include one or more motors, which can include an internal combustion engine, electric motor, etc.
- the autonomous vehicle 12 can include a power source 14 .
- the power source 14 can include or employ one or more batteries, liquid fuel (e.g., gasoline, alcohol, diesel, etc.) or other energy sources.
- the power source 14 can include one or more solar cells or one or more fuel cells.
- the power source 14 can include combustive gas (e.g., hydrogen).
- the autonomous vehicle 12 can be equipped with computing functions and controls.
- the autonomous vehicle 12 can include a processor 15 .
- the autonomous vehicle 12 can include a transceiver 16 .
- the transceiver 16 can be coupled to a global positioning system (GPS) to generate an alert of a position of the autonomous vehicle 12 relative to other vehicles in a common coordinate system.
- GPS global positioning system
- the transceiver 16 can be equipped to communicate with a cellular network system. In this way, the autonomous vehicle's position can be computed based on triangulation between cell towers, based upon signal strength or the like.
- the transceiver 16 can include a WIFI or equivalent radio system.
- the processor 15 , transceiver 16 , and location information can be utilized in a guidance control system 17 for the autonomous vehicle 12 .
- the autonomous vehicle 12 can include memory storage 18 .
- the memory storage 18 can include solid state or soft storage and work in conjunction with other systems on the autonomous vehicle 12 to record data, run algorithms or programs, control the vehicle, etc.
- the memory storage 18 can include a Read Only Memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- ROM Read Only Memory
- RAM random access memory
- the autonomous vehicle 12 can include one or more cameras 19 .
- the one or more cameras 19 can view the area surrounding the autonomous vehicle 12 to input images into a three-dimensional pose estimation system 20 and the guidance control system 17 of the autonomous vehicle 12 .
- the one or more cameras 19 can view objects around the autonomous vehicle 12 , e.g., other vehicles, building, light poles 21 , trees, etc.
- the images obtained by the one or more cameras 19 can be processed in the three-dimensional pose estimation system 20 to learn the pose of the autonomous vehicle 12 without an IMU.
- the pose of the vehicle can be utilized by the guidance control system 17 to adjust the propulsion system 13 of the autonomous vehicle 12 to avoid objects around the autonomous vehicle 12 .
- Referring to FIG. 2, a block/flow diagram of an exemplary system for training a three-dimensional pose estimation network is illustratively depicted in accordance with one embodiment of the present invention.
- the training of the three-dimensional pose estimation network 100 can have two input images 105 and 106 .
- the two input images 105 and 106 can each be passed through a feature extraction CNN 110 to produce feature maps feat 1 125 and feat 2 respectively, which are employed to compute a correlation feat 1 *feat 2 121 .
- the feature extraction CNN 110 can include a set of convolutional layers to extract the feature maps feat 1 125 and feat 2 .
- the correlation feat 1 *feat 2 121 can be achieved via multiplicative local patch comparisons or dot products of feature vectors in corresponding local patches between the two feature maps feat 1 125 and feat 2 .
- a combined feature map 120 including the correlation feat 1 *feat 2 121 and feat 1 125 , can then be fed to a feature weighting 130 to estimate a score map 140 .
- the correlation feat 1 *feat 2 121 can be fed into an optical flow CNN 122 to determine an optical flow 123 .
- the optical flow 123 can be used to determine motion loss 124 .
- the feat 1 125 can be fed into a semantic segmentations CNN 126 to determine a semantic segmentation 127 .
- the semantic segmentation 127 can be used to determine semantics loss 128 .
- the motion loss 124 and the semantics loss 128 can be utilized to train the combined feature map 120 for future combinations.
- the optical flow CNN 122 and the semantic segmentation CNN 126 can optionally include a set of convolutional layers and then require a set of deconvolutional layers to predict the dense optical flow 123 and the dense semantic segmentation 127 respectively from the correlation feat 1 *feat 2 121 and the feature map feat 1 125 .
- the score map 140 can be employed to update the combined feature map 120 , including correlation feat 1 *feat 2 121 and feat 1 125 , and obtain a weighted feature map 150 .
- the combined feature map 120 and the score map 140 have the same spatial dimension of W×H (e.g., of sizes W×H×C, with C denoting the number of channels of the combined feature map 120, and W×H×1 respectively), and hence the score map 140 can be used to weight or multiply along each channel of the combined feature map 120 to obtain the (spatially) weighted feature map 150.
- the weighted feature map 150 can be fed to a pose estimation CNN 160 to predict a pose 170 .
- the pose estimation CNN can consist of a set of fully connected layers with the last few layers having two separate branches for predicting a three-dimensional (3D) rotation vector and a three-dimensional (3D) translation vector respectively.
- the rotation and translation vectors can make up the six-dimensional (6D) pose vector 170 .
- the training of the three-dimensional pose estimation system 100 can have two losses for pose estimation, including a two-dimensional (2D) keypoint displacement loss 180 with Velodyne points 185 and a three-dimensional (3D) pose regression loss 190 .
- the utilization of the 2D keypoint displacement loss 180 with Velodyne points 185 avoids vanishing gradients and makes learning poses more effective.
- the three-dimensional pose estimation network 200 can have two input images 105 and 106 .
- the two input images 105 and 106 can each be passed through a feature extraction CNN 110 to produce feature maps feat 1 125 and feat 2 respectively, which are employed to compute a correlation feat 1 *feat 2 121 .
- the feature extraction CNN 110 can include a set of convolutional layers to extract the feature maps feat 1 125 and feat 2 .
- the correlation feat 1 *feat 2 121 can be achieved via multiplicative local patch comparisons or dot products of feature vectors in corresponding local patches between the two feature maps feat 1 125 and feat 2 .
- a combined feature map 120 including correlation feat 1 *feat 2 121 and feat 1 125 , can then be fed to a feature weighting 130 to estimate a score map 140 .
- the score map 140 can be employed to update the combined feature map 120 , including correlation feat 1 *feat 2 121 and feat 1 125 , and obtain a weighted feature map 150 .
- the combined feature map 120 and the score map 140 have the same spatial dimension of W×H (e.g., of sizes W×H×C, with C denoting the number of channels of the combined feature map 120, and W×H×1 respectively), and hence the score map 140 can be used to weight or multiply along each channel of the combined feature map 120 to obtain the (spatially) weighted feature map 150.
- the weighted feature map 150 can be fed to a pose estimation CNN 160 to predict a pose 170 .
- the pose estimation CNN can consist of a set of fully connected layers with the last few layers having two separate branches for predicting a three-dimensional (3D) rotation vector and a three-dimensional (3D) translation vector respectively. The rotation and translation vectors make up the six-dimensional (6D) pose vector 170 .
- the feature weighting system 130 can take a combined feature map 120 to produce a score map 140 .
- the feature weighting system 130 can consist of a reshaping layer 131 , a set of fully connected layers 132 , a softmax layer 133 , and a reshaping layer 134 .
- the reshaping layer 131 can resize the combined feature map 120 of size W×H×C into a one-dimensional (1D) vector of size 1×(W·H·C), which can then be passed through a set of fully connected layers 132 of various output sizes, e.g., 1024-, 512-, 256-, and 128-dimensional vectors.
- the output from fully connected layers 132 can then be passed to a softmax layer 133 to compute a score vector (where each entry value is between zero and one).
- the score vector can then be resized by the reshaping layer 134 to have the size of W×H (or the same spatial dimension as the combined feature map 120 ).
- the system 30 can include an aerial drone 38 .
- the aerial drone 38 can be an octo-copter.
- the aerial drone 38 can include a plane-style drone.
- the aerial drone 38 can include a propulsion system 39 .
- the propulsion system 39 can include propellers or other engines for flying the aerial drone 38 .
- the propulsion system 39 can include a jet engine or hover technology.
- the propulsion system 39 can include one or more motors, which can include an internal combustion engine, electric motor, etc.
- the aerial drone 38 can include a power source 40 .
- the power source 40 can include or employ one or more batteries, liquid fuel (e.g., gasoline, alcohol, diesel, etc.) or other energy sources.
- the power source 40 can include one or more solar cells or one or more fuel cells.
- the power source 40 can include combustive gas (e.g., hydrogen).
- the aerial drone 38 can be equipped with computing functions and controls.
- the aerial drone 38 can include a processor 41 .
- the aerial drone 38 can include a transceiver 42 .
- the transceiver 42 can be coupled to a global positioning system (GPS) to generate an alert of a position of the aerial drone 38 relative to other vehicles in a common coordinate system.
- GPS global positioning system
- the transceiver 42 can be equipped to communicate with a cellular network system. In this way, the aerial drone's position can be computed based on triangulation between cell towers, based upon signal strength or the like.
- the transceiver 42 can include a WIFI or equivalent radio system.
- the processor 41 , transceiver 42 , and location information can be utilized in a stabilization system 43 for the aerial drone 38 .
- the aerial drone 38 can include memory storage 44 .
- the memory storage 44 can include solid state or soft storage and work in conjunction with other systems on the aerial drone 38 to record data, run algorithms or programs, control the drone, etc.
- the memory storage 44 can include a Read Only Memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- ROM Read Only Memory
- RAM random access memory
- the aerial drone 38 can include one or more cameras 45 .
- the one or more cameras 45 can view the area surrounding the aerial drone 38 to input images into a three-dimensional pose estimation system 46 and the stabilization system 43 of the aerial drone 38 .
- the one or more cameras 45 can view objects around the aerial drone 38 , e.g., other vehicles, building 36 , light poles, trees, etc.
- the images obtained by the one or more cameras 45 can be processed in the three-dimensional pose estimation system 46 to learn the pose of the aerial drone 38 without an IMU.
- the pose of the drone can be utilized by the stabilization system 43 to adjust the propulsion system 39 of the aerial drone 38 to avoid objects around the aerial drone 38 or remain level.
- the transceiver 42 can be in communication with a remote control device 34 .
- the remote control device 34 can have a display 35 for showing what is currently around the aerial drone 38 from the perspective of the one or more cameras 45 .
- a user 32 can use the remote control device 34 to control the aerial drone 38 while in flight.
- the pose of the drone estimated from the images captured by the one or more cameras 45 can be used to make the aerial drone 38 easier to fly and maneuver, since the aerial drone 38 can keep itself level in changing weather conditions, e.g., wind.
- the computer system 1000 includes at least one processor (CPU) 1005 operatively coupled to other components via a system bus 1002 .
- a cache 1006 , a Read Only Memory (ROM), a Random-Access Memory (RAM), an input/output (I/O) adapter 1020 , a sound adapter 1030 , a network adapter 1070 , a user interface adapter 1050 , and a display adapter 1060 are operatively coupled to the system bus 1002 .
- a pose estimation CNN 150 and a feature weighting system 130 can be operatively coupled to system bus 1002 by the I/O adapter 1020 .
- the devices 130 and 150 can be employed to weight features to generate a score map and estimate a pose based on the score map.
- a speaker 1032 may be operatively coupled to system bus 1002 by the sound adapter 1030 .
- the speaker 1032 can sound an alarm when controlled.
- a transceiver 1075 is operatively coupled to system bus 1002 by network adapter 1070 .
- a display device 1062 is operatively coupled to system bus 1002 by display adapter 1060 .
- a first user input device 1052 , a second user input device 1059 , and a third user input device 1056 are operatively coupled to system bus 1002 by user interface adapter 1050 .
- the user input devices 1052 , 1059 , and 1056 can be any of a sensor, a keyboard, a mouse, a keypad, a joystick, an image capture device, a motion sensing device, a power measurement device, a microphone, a device incorporating the functionality of at least two of the preceding devices, and so forth. Of course, other types of input devices can also be used in the present invention.
- the user input devices 1052 , 1059 , and 1056 can be the same type of user input device or different types of user input devices.
- the user input devices 1052 , 1059 , and 1056 are used to input and output information to and from system 1000 .
- the computer system 1000 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements.
- the system described in FIGS. 3 and 4 can be controlled by computer system 1000 .
- various other input devices and/or output devices can be included in computer system 1000 , depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art.
- various types of wireless and/or wired input and/or output devices can be used.
- additional processors, controllers, memories, and so forth, in various configurations can also be utilized as readily appreciated by one of ordinary skill in the art.
- the computer processing system 1000 can be configured to initiate an action (e.g., a control action) on a controlled system, machine, and/or device responsive to a detected pose.
- action can include, but is not limited to, one or more of: powering down the controlled system, machine, and/or device or a portion thereof; powering down, e.g., a system, machine, and/or a device that is affected by the pose of another device, stopping a centrifuge being operated by a user before an imbalance in the centrifuge causes a critical failure and harm to the user, securing an automatic door, and so forth.
- the action taken is dependent upon the type of controlled system, machine, and/or device to which the action is applied.
- network 100 and network 200 , described above with respect to FIGS. 2 and 3 , are networks for implementing respective embodiments of the present invention.
- Part or all of computer processing system 1000 may be implemented as one or more of the elements of network 100 and/or one or more of the elements of network 200 .
- computer processing system 1000 may perform at least part of the method described herein including, for example, at least part of method 700 of FIG. 7 and at least part of method 800 of FIG. 8 and at least part of method 900 of FIG. 9 .
- FIG. 7 shows a block/flow diagram illustrating a method 700 for a guidance control system utilizing pose estimation in an autonomous vehicle, in accordance with an embodiment of the present invention.
- In block 710 , receive a plurality of images from one or more cameras.
- move the autonomous vehicle responsive to the pose.
- FIG. 8 shows a block/flow diagram illustrating a method 800 for pose estimation, in accordance with an embodiment of the present invention.
- receive a plurality of images from one or more cameras.
- In block 840 , predict, with a pose estimation CNN, a pose from the score map and a combined feature map.
- control an operation of a processor-based machine to change a state of the processor-based machine, responsive to the pose.
- CNN feature extraction convolutional neural network
- FIG. 9 shows a block diagram illustrating a method 900 for a stabilization system utilizing pose estimation in an aerial drone, in accordance with an embodiment of the present invention.
- In block 910 , receive a plurality of images from one or more cameras.
- move the aerial drone responsive to the pose.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Theoretical Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- General Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Molecular Biology (AREA)
- Computer Networks & Wireless Communication (AREA)
- Business, Economics & Management (AREA)
- Game Theory and Decision Science (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Description
- This application claims priority to 62/550,790, filed on Aug. 28, 2017, incorporated herein by reference in its entirety. This application is related to an application entitled “Learning Good Features for Visual Odometry”, having attorney docket number 17051A, and which is incorporated by reference herein in its entirety. This application is related to an application entitled “Aerial Drone utilizing Pose Estimation”, having attorney docket number 17051C, and which is incorporated by reference herein in its entirety.
- The present invention relates to visual odometry and more particularly to selecting features that are beneficial for pose estimation in visual odometry.
- Visual odometry is the process of estimating the ego-motion (i.e., three-dimensional (3D) pose) of an agent (e.g., vehicle, robot) using only the input of cameras attached to it. State-of-the-art visual odometry systems are based on hand-crafted features such as scale-invariant feature transform (SIFT) and oriented features from accelerated segment test (FAST) and rotated binary robust independent elementary features (BRIEF), or Oriented FAST and rotated BRIEF (ORB), and tend to keep features that are easy to detect or to track but not good for pose estimation. For example, distant points may be easy to track due to their small motions in images but produce high uncertainty in pose estimation; or points on trees or buildings are plentiful but can be uninformative due to their ambiguous textures.
- According to an aspect of the present principles, an autonomous vehicle utilizing pose estimation is provided. The autonomous vehicle includes one or more cameras for capturing images of the autonomous vehicle surroundings. The autonomous vehicle also includes a propulsion system for moving the autonomous vehicle responsive to a guidance control system. The guidance control system includes a pose estimation system that receives a plurality of images from the one or more cameras and predicts a pose from a score map and a combined feature map, the combined feature map generated from a pair of the plurality of images.
- According to another aspect of the present principles, a computer program product is provided for an autonomous vehicle with guidance control system utilizing pose estimation. The computer program product comprising a non-transitory computer readable storage medium having program instructions. The program instructions are executable by a computer to cause the computer to perform a method. The method includes receiving, by a pose estimation system, a plurality of images from one or more cameras. The method also includes predicting, by the pose estimation system, a pose from the score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images. The method additionally includes moving, by a propulsion system, the autonomous vehicle responsive to the pose.
- According to yet another aspect of the present principles, a computer-implemented method is provided for a guidance control system utilizing pose estimation in an autonomous vehicle. The method includes receiving, by a pose estimation system, a plurality of images from one or more cameras. The method also includes predicting, by the pose estimation system, a pose from the score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images. The method additionally includes moving, by a propulsion system, the autonomous vehicle responsive to the pose.
- These and other features and advantages will become apparent from the following detailed description of illustrative embodiments thereof, which is to be read in connection with the accompanying drawings.
- The disclosure will provide details in the following description of preferred embodiments with reference to the following figures wherein:
- FIG. 1 shows an exemplary system for an autonomous vehicle utilizing three-dimensional pose estimation, in accordance with an embodiment of the present invention;
- FIG. 2 shows a block/flow diagram of an exemplary system for training a three-dimensional pose estimation network, in accordance with an embodiment of the present invention;
- FIG. 3 shows a block/flow diagram of an exemplary system for a three-dimensional pose estimation network at deployment, in accordance with an embodiment of the present invention;
- FIG. 4 shows a block/flow diagram of a feature weighting system, in accordance with an embodiment of the present invention;
- FIG. 5 shows an exemplary system for an aerial drone utilizing three-dimensional pose estimation, in accordance with an embodiment of the present principles;
- FIG. 6 shows a block/flow diagram of a computer processing system, to be used for three-dimensional pose estimation, in accordance with an embodiment of the present invention;
- FIG. 7 shows a block/flow diagram illustrating a method for a guidance control system utilizing pose estimation in an autonomous vehicle, in accordance with an embodiment of the present invention;
- FIG. 8 shows a block/flow diagram illustrating a method for pose estimation, in accordance with an embodiment of the present invention; and
- FIG. 9 shows a block/flow diagram illustrating a method for a stabilization system utilizing pose estimation in an aerial drone, in accordance with an embodiment of the present invention.
- Aspects of the present invention select features that are beneficial for pose estimation by using convolutional neural networks (CNNs) to consider different aspects of the features such as semantics and motions.
- Aspects of the present invention employ a novel CNN architecture for computing score maps that are used for selecting good features employed for pose estimation.
- The novel CNN architecture for score map prediction takes into account various factors such as semantics and motions and is designed for direct benefits towards pose estimation. Different signals, such as semantics and motions, are used to supervise intermediate layers before predicting score maps. Furthermore, the estimated score maps are incorporated directly into intermediate layers that are used for pose prediction. In this way, the score maps have direct effects on pose estimation.
- Aspects of the present invention output score maps that can be visually interpretable on the image domain. The present invention can work with as few as two images, without the need of an inertial measurement unit (IMU), and can handle various cases of bad features due to deep supervision of semantics and motions. Since the present invention is designed for direct benefits towards pose estimation, it produces more accurate score maps and better pose estimates.
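- As a minimal illustrative sketch (assuming PyTorch-style modules; the module names, shapes, and composition below are assumptions, not the patent's implementation), the data flow described above can be summarized as a single forward pass from two images to a 6D pose:

```python
# Hedged sketch of the overall forward pass (assumed module names, not the
# patent's code): two images -> feature maps -> correlation -> combined map ->
# score map -> weighted map -> 6D pose.
import torch

def estimate_pose(img1, img2, feature_cnn, correlation, feature_weighting, pose_cnn):
    feat1 = feature_cnn(img1)                   # (B, C1, H, W)
    feat2 = feature_cnn(img2)                   # (B, C1, H, W)
    corr = correlation(feat1, feat2)            # (B, C2, H, W), local patch dot products
    combined = torch.cat([corr, feat1], dim=1)  # combined feature map (B, C, H, W)
    score_map = feature_weighting(combined)     # (B, 1, H, W), entries in [0, 1]
    weighted = combined * score_map             # weight every channel spatially
    return pose_cnn(weighted)                   # 6D pose: 3D rotation + 3D translation
```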
- Embodiments described herein may be entirely hardware, entirely software or including both hardware and software elements. In a preferred embodiment, the present invention is implemented in software, which includes but is not limited to firmware, resident software, microcode, etc.
- Embodiments may include a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by or in connection with a computer or any instruction execution system. A computer-usable or computer readable medium may include any apparatus that stores, communicates, propagates, or transports the program for use by or in connection with the instruction execution system, apparatus, or device. The medium can be magnetic, optical, electronic, electromagnetic, infrared, or semiconductor system (or apparatus or device) or a propagation medium. The medium may include a computer-readable storage medium such as a semiconductor or solid state memory, magnetic tape, a removable computer diskette, a random access memory (RAM), a read-only memory (ROM), a rigid magnetic disk and an optical disk, etc.
- Each computer program may be tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium can be configured to cause a computer to operate in a specific and predefined manner to perform the functions described herein.
- A data processing system suitable for storing and/or executing program code may include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code to reduce the number of times the code is retrieved from bulk storage during execution. Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) may be coupled to the system either directly or through intervening I/O controllers.
- Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modem and Ethernet cards are just a few of the currently available types of network adapters.
- Referring now in detail to the figures in which like numerals represent the same or similar elements, and initially to FIG. 1, an exemplary system for an autonomous vehicle utilizing three-dimensional pose estimation is illustratively depicted in accordance with an embodiment of the present invention. The system 10 can include an autonomous vehicle 12. In one embodiment, the autonomous vehicle 12 can be an automobile. In other embodiments, the autonomous vehicle 12 can include a boat, plane, helicopter, or truck. The autonomous vehicle 12 can include a propulsion system 13. For an airborne embodiment, the propulsion system 13 can include propellers or other engines for flying the autonomous vehicle 12. In another embodiment, the propulsion system 13 can include wheels or tracks. In another embodiment, the propulsion system 13 can include a jet engine or hover technology. The propulsion system 13 can include one or more motors, which can include an internal combustion engine, electric motor, etc.
- The autonomous vehicle 12 can include a power source 14. The power source 14 can include or employ one or more batteries, liquid fuel (e.g., gasoline, alcohol, diesel, etc.) or other energy sources. In another embodiment, the power source 14 can include one or more solar cells or one or more fuel cells. In another embodiment, the power source 14 can include combustive gas (e.g., hydrogen).
- The autonomous vehicle 12 can be equipped with computing functions and controls. The autonomous vehicle 12 can include a processor 15. The autonomous vehicle 12 can include a transceiver 16. In one embodiment, the transceiver 16 can be coupled to a global positioning system (GPS) to generate an alert of a position of the autonomous vehicle 12 relative to other vehicles in a common coordinate system. The transceiver 16 can be equipped to communicate with a cellular network system. In this way, the autonomous vehicle's position can be computed based on triangulation between cell towers, based upon signal strength or the like. The transceiver 16 can include a WIFI or equivalent radio system. The processor 15, transceiver 16, and location information can be utilized in a guidance control system 17 for the autonomous vehicle 12.
- The autonomous vehicle 12 can include memory storage 18. The memory storage 18 can include solid state or soft storage and work in conjunction with other systems on the autonomous vehicle 12 to record data, run algorithms or programs, control the vehicle, etc. The memory storage 18 can include a Read Only Memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- The autonomous vehicle 12 can include one or more cameras 19. The one or more cameras 19 can view the area surrounding the autonomous vehicle 12 to input images into a three-dimensional pose estimation system 20 and the guidance control system 17 of the autonomous vehicle 12. The one or more cameras 19 can view objects around the autonomous vehicle 12, e.g., other vehicles, buildings, light poles 21, trees, etc. The images obtained by the one or more cameras 19 can be processed in the three-dimensional pose estimation system 20 to learn the pose of the autonomous vehicle 12 without an IMU. The pose of the vehicle can be utilized by the guidance control system 17 to adjust the propulsion system 13 of the autonomous vehicle 12 to avoid objects around the autonomous vehicle 12.
- Referring to FIG. 2, a block/flow diagram of an exemplary system for training a three-dimensional pose estimation network is illustratively depicted in accordance with one embodiment of the present invention. The training of the three-dimensional pose estimation network 100 can have two input images 105 and 106. The two input images 105 and 106 can each be passed through a feature extraction CNN 110 to produce feature maps feat1 125 and feat2, respectively, which are employed to compute a correlation feat1*feat2 121. The feature extraction CNN 110 can include a set of convolutional layers to extract the feature maps feat1 125 and feat2. The correlation feat1*feat2 121 can be achieved via multiplicative local patch comparisons or dot products of feature vectors in corresponding local patches between the two feature maps feat1 125 and feat2. A combined feature map 120, including the correlation feat1*feat2 121 and feat1 125, can then be fed to a feature weighting 130 to estimate a score map 140. The correlation feat1*feat2 121 can be fed into an optical flow CNN 122 to determine an optical flow 123. The optical flow 123 can be used to determine motion loss 124. The feat1 125 can be fed into a semantic segmentation CNN 126 to determine a semantic segmentation 127. The semantic segmentation 127 can be used to determine semantics loss 128. The motion loss 124 and the semantics loss 128 can be utilized to train the combined feature map 120 for future combinations. The optical flow CNN 122 and the semantic segmentation CNN 126 can optionally include a set of convolutional layers and then require a set of deconvolutional layers to predict the dense optical flow 123 and the dense semantic segmentation 127, respectively, from the correlation feat1*feat2 121 and the feature map feat1 125.
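- As an illustrative sketch of the correlation step just described (a common realization assuming a fixed local search window; not necessarily the patent's exact implementation), the correlation feat1*feat2 can be computed as dot products of feature vectors over shifted local patches:

```python
# Hedged sketch of a correlation layer: compare feat1 against spatially shifted
# copies of feat2 with channel-wise dot products, as in optical-flow networks.
# The search radius `max_disp` is an illustrative assumption.
import torch
import torch.nn.functional as F

def correlation(feat1: torch.Tensor, feat2: torch.Tensor, max_disp: int = 4) -> torch.Tensor:
    """feat1, feat2: (B, C, H, W); returns (B, (2*max_disp + 1)**2, H, W)."""
    b, c, h, w = feat1.shape
    feat2_pad = F.pad(feat2, [max_disp] * 4)  # zero-pad H and W so every shift is valid
    outputs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = feat2_pad[:, :, dy:dy + h, dx:dx + w]
            # Dot product of the feature vectors at corresponding locations,
            # normalized by the number of channels.
            outputs.append((feat1 * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(outputs, dim=1)
```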
- The score map 140 can be employed to update the combined feature map 120, including the correlation feat1*feat2 121 and feat1 125, and obtain a weighted feature map 150. In another embodiment, by our design, the combined feature map 120 and the score map 140 have the same spatial dimension of W×H (e.g., of sizes W×H×C, with C denoting the number of channels of the combined feature map 120, and W×H×1 respectively), and hence the score map 140 can be used to weight or multiply along each channel of the combined feature map 120 to obtain the (spatially) weighted feature map 150. The weighted feature map 150 can be fed to a pose estimation CNN 160 to predict a pose 170. The pose estimation CNN can consist of a set of fully connected layers with the last few layers having two separate branches for predicting a three-dimensional (3D) rotation vector and a three-dimensional (3D) translation vector respectively. The rotation and translation vectors can make up the six-dimensional (6D) pose vector 170.
- The training of the three-dimensional pose estimation system 100 can have two losses for pose estimation, including a two-dimensional (2D) keypoint displacement loss 180 with Velodyne points 185 and a three-dimensional (3D) pose regression loss 190. The utilization of the 2D keypoint displacement loss 180 with Velodyne points 185 avoids vanishing gradients and makes learning poses more effective.
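- A minimal, hedged sketch of how these pose losses could be combined with the motion and semantics supervision of FIG. 2 into one training objective is given below; the particular loss functions and weighting coefficients are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of a combined training objective (assumed loss functions and
# weights): pose regression + 2D keypoint displacement + auxiliary flow and
# segmentation supervision of the intermediate layers.
import torch
import torch.nn.functional as F

def total_training_loss(pred, target, w_pose=1.0, w_kpt=1.0, w_flow=0.1, w_seg=0.1):
    # 3D pose regression loss on the 6D pose vector (rotation + translation).
    pose_loss = F.mse_loss(pred["pose"], target["pose"])
    # 2D keypoint displacement loss, e.g., on projections of LiDAR (Velodyne) points.
    kpt_loss = F.l1_loss(pred["keypoints_2d"], target["keypoints_2d"])
    # Motion (optical flow) and semantics (segmentation) losses supervise the
    # intermediate layers before the score map is predicted.
    flow_loss = F.l1_loss(pred["flow"], target["flow"])
    seg_loss = F.cross_entropy(pred["segmentation"], target["labels"])
    return w_pose * pose_loss + w_kpt * kpt_loss + w_flow * flow_loss + w_seg * seg_loss
```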
- Referring now to FIG. 3, a block/flow diagram of an exemplary system for a three-dimensional pose estimation network at deployment is illustratively depicted in accordance with an embodiment of the present invention. The three-dimensional pose estimation network 200 can have two input images. The input images can be fed into a feature extraction CNN 110 to produce feature maps feat1 125 and feat2, respectively, which are employed to compute a correlation feat1*feat2 121. The feature extraction CNN 110 can include a set of convolutional layers to extract the feature maps feat1 125 and feat2. The correlation feat1*feat2 121 can be achieved via multiplicative local patch comparisons, or dot products of feature vectors in corresponding local patches between the two feature maps feat1 125 and feat2. A combined feature map 120, including the correlation feat1*feat2 121 and feat1 125, can then be fed to a feature weighting 130 to estimate a score map 140. The score map 140 can be employed to update the combined feature map 120, including the correlation feat1*feat2 121 and feat1 125, and obtain a weighted feature map 150. In another embodiment, by design, the combined feature map 120 and the score map 140 have the same spatial dimension of W×H (e.g., sizes of W×H×C and W×H×1 respectively, with C denoting the number of channels of the combined feature map 120), and hence the score map 140 can be used to weight, or multiply along, each channel of the combined feature map 120 to obtain the (spatially) weighted feature map 150. The weighted feature map 150 can be fed to a pose estimation CNN 160 to predict a pose 170. The pose estimation CNN 160 can consist of a set of fully connected layers, with the last few layers having two separate branches for predicting a three-dimensional (3D) rotation vector and a three-dimensional (3D) translation vector, respectively. The rotation and translation vectors make up the six-dimensional (6D) pose vector 170.
- Referring now to FIG. 4, a block diagram of a feature weighting system is illustratively depicted in accordance with an embodiment of the present invention. The feature weighting system 130 can take a combined feature map 120 to produce a score map 140. The feature weighting system 130 can consist of a reshaping layer 131, a set of fully connected layers 132, a softmax layer 133, and a reshaping layer 134. In one embodiment, the reshaping layer 131 can resize the combined feature map 120 of size W×H×C into a one-dimensional (1D) vector of size 1×(W·H·C), which can then be passed through a set of fully connected layers 132 of various output sizes, e.g., 1024-, 512-, 256-, and 128-dimensional vectors. The output from the fully connected layers 132 can then be passed to a softmax layer 133 to compute a score vector (where each entry value is between zero and one). The score vector can then be resized by the reshaping layer 134 to have the size of W×H (or the same spatial dimension as the combined feature map 120).
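A minimal sketch of this reshape/fully-connected/softmax/reshape pipeline is given below. The final linear projection back to W·H entries is an assumption added so that the softmax output can be reshaped to the W×H spatial dimension; the module name and default hidden sizes merely follow the 1024/512/256/128 example above.

```python
import torch
import torch.nn as nn

class FeatureWeighting(nn.Module):
    """Reshape -> fully connected stack -> softmax -> reshape, producing a
    W x H score map from a W x H x C combined feature map."""
    def __init__(self, channels: int, height: int, width: int,
                 hidden_sizes=(1024, 512, 256, 128)):
        super().__init__()
        layers, in_dim = [], channels * height * width
        for out_dim in hidden_sizes:
            layers += [nn.Linear(in_dim, out_dim), nn.ReLU()]
            in_dim = out_dim
        layers += [nn.Linear(in_dim, height * width)]  # project to one score per pixel (assumption)
        self.fc = nn.Sequential(*layers)
        self.softmax = nn.Softmax(dim=1)
        self.height, self.width = height, width

    def forward(self, combined: torch.Tensor) -> torch.Tensor:
        b = combined.shape[0]
        flat = combined.reshape(b, -1)                         # reshaping layer 131
        scores = self.softmax(self.fc(flat))                   # FC layers 132 + softmax 133
        return scores.reshape(b, 1, self.height, self.width)   # reshaping layer 134

if __name__ == "__main__":
    fw = FeatureWeighting(channels=145, height=12, width=16)
    score_map = fw(torch.randn(2, 145, 12, 16))
    print(score_map.shape)  # torch.Size([2, 1, 12, 16])
```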
- Referring now to FIG. 5, an exemplary system for an aerial drone utilizing three-dimensional pose estimation is illustratively depicted in accordance with an embodiment of the present invention. The system 30 can include an aerial drone 38. In one embodiment, the aerial drone 38 can be an octo-copter. In other embodiments, the aerial drone 38 can include a plane-style drone. The aerial drone 38 can include a propulsion system 39. In one embodiment, the propulsion system 39 can include propellers or other engines for flying the aerial drone 38. In another embodiment, the propulsion system 39 can include a jet engine or hover technology. The propulsion system 39 can include one or more motors, which can include an internal combustion engine, electric motor, etc.
- The aerial drone 38 can include a power source 40. The power source 40 can include or employ one or more batteries, liquid fuel (e.g., gasoline, alcohol, diesel, etc.), or other energy sources. In another embodiment, the power source 40 can include one or more solar cells or one or more fuel cells. In another embodiment, the power source 40 can include combustive gas (e.g., hydrogen).
- The aerial drone 38 can be equipped with computing functions and controls. The aerial drone 38 can include a processor 41. The aerial drone 38 can include a transceiver 42. In one embodiment, the transceiver 42 can be coupled to a global position system (GPS) to generate an alert of a position of the aerial drone 38 relative to other vehicles in a common coordinate system. The transceiver 42 can be equipped to communicate with a cellular network system. In this way, the aerial drone's position can be computed based on triangulation between cell towers based upon signal strength or the like. The transceiver 42 can include a WIFI or equivalent radio system. The processor 41, transceiver 42, and location information can be utilized in a stabilization system 43 for the aerial drone 38.
- The aerial drone 38 can include memory storage 44. The memory storage 44 can include solid state or soft storage and work in conjunction with other systems on the aerial drone 38 to record data, run algorithms or programs, control the drone, etc. The memory storage 44 can include a Read Only Memory (ROM), random access memory (RAM), or any other type of memory useful for the present applications.
- The aerial drone 38 can include one or more cameras 45. The one or more cameras 45 can view the area surrounding the aerial drone 38 to input images into a three-dimensional pose estimation system 46 and the stabilization system 43 of the aerial drone 38. The one or more cameras 45 can view objects around the aerial drone 38, e.g., other vehicles, building 36, light poles, trees, etc. The images obtained by the one or more cameras 45 can be processed in the three-dimensional pose estimation system 46 to learn the pose of the aerial drone 38 without an IMU. The pose of the drone can be utilized by the stabilization system 43 to adjust the propulsion system 39 of the aerial drone 38 to avoid objects around the aerial drone 38 or remain level.
- The transceiver 42 can be in communication with a remote control device 34. The remote control device 34 can have a display 35 for showing what is currently around the aerial drone 38 from the perspective of the one or more cameras 45. A user 32 can use the remote control device 34 to control the aerial drone 38 while in flight. The pose of the drone estimated from the images captured by the one or more cameras 45 can be used to provide an aerial drone 38 that is easier to fly and maneuver, since the aerial drone 38 can keep itself level in changing weather conditions, e.g., wind.
- Referring now to FIG. 6, a block/flow diagram of a computer processing system 1000, to be employed for three-dimensional pose estimation, is illustratively depicted in accordance with an embodiment of the present principles. The computer system 1000 includes at least one processor (CPU) 1005 operatively coupled to other components via a system bus 1002. A cache 1006, a Read Only Memory (ROM) 1008, a Random-Access Memory (RAM) 1010, an input/output (I/O) adapter 1020, a sound adapter 1030, a network adapter 1070, a user interface adapter 1050, and a display adapter 1060, are operatively coupled to the system bus 1002.
- A pose estimation CNN 160 and a feature weighting system 130 can be operatively coupled to the system bus 1002 by the I/O adapter 1020.
- A speaker 1032 may be operatively coupled to the system bus 1002 by the sound adapter 1030. The speaker 1032 can sound an alarm when controlled. A transceiver 1075 is operatively coupled to the system bus 1002 by the network adapter 1070. A display device 1062 is operatively coupled to the system bus 1002 by the display adapter 1060.
- A first user input device 1052, a second user input device 1059, and a third user input device 1056 are operatively coupled to the system bus 1002 by the user interface adapter 1050. The user input devices 1052, 1059, and 1056 are used to input and output information to and from the computer system 1000.
- Of course, the computer system 1000 may also include other elements (not shown), as readily contemplated by one of skill in the art, as well as omit certain elements. For example, the system described in FIGS. 3 and 4 can be controlled by the computer system 1000. For example, various other input devices and/or output devices can be included in the computer system 1000, depending upon the particular implementation of the same, as readily understood by one of ordinary skill in the art. For example, various types of wireless and/or wired input and/or output devices can be used. Moreover, additional processors, controllers, memories, and so forth, in various configurations, can also be utilized, as readily appreciated by one of ordinary skill in the art. These and other variations of the computer system 1000 are readily contemplated by one of ordinary skill in the art given the teachings of the present invention provided herein.
- Moreover, the computer processing system 1000 can be configured to initiate an action (e.g., a control action) on a controlled system, machine, and/or device responsive to a detected pose. Such action can include, but is not limited to, one or more of: powering down the controlled system, machine, and/or device, or a portion thereof; powering down, e.g., a system, machine, and/or device that is affected by the pose of another device; stopping a centrifuge being operated by a user before an imbalance in the centrifuge causes a critical failure and harm to the user; securing an automatic door; and so forth. As is evident to one of ordinary skill in the art, the action taken is dependent upon the type of controlled system, machine, and/or device to which the action is applied.
- Moreover, it is to be appreciated that the network 100 and the network 200 described above with respect to FIGS. 2 and 3 are networks for implementing respective embodiments of the present invention. Part or all of the computer processing system 1000 may be implemented as one or more of the elements of the network 100 and/or one or more of the elements of the network 200.
- Further, it is to be appreciated that the computer processing system 1000 may perform at least part of the methods described herein, including, for example, at least part of method 700 of FIG. 7, at least part of method 800 of FIG. 8, and at least part of method 900 of FIG. 9.
- Referring now to FIG. 7, a block/flow diagram illustrating a method 700 for a guidance control system utilizing pose estimation in an autonomous vehicle is illustratively depicted, in accordance with an embodiment of the present invention. In block 710, receive a plurality of images from one or more cameras. In block 720, predict a pose from a score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images. In block 730, move the autonomous vehicle responsive to the pose.
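Purely as a toy illustration of block 730, the sketch below turns a predicted 6D pose into a corrective command; the thresholds, coordinate conventions, and command names are hypothetical and are not taken from this disclosure.

```python
import torch

def guidance_step(pose_6d: torch.Tensor, max_lateral_drift: float = 0.5):
    """Toy guidance rule: given a 6D pose [rx, ry, rz, tx, ty, tz] estimated
    between two frames, return a corrective steering/throttle command.
    Assumes camera-style axes (x lateral, y down, z forward)."""
    yaw, lateral, forward = pose_6d[1].item(), pose_6d[3].item(), pose_6d[5].item()
    if abs(lateral) > max_lateral_drift:
        return {"steer": -lateral, "throttle": 0.0}   # steer back toward the intended path
    if forward < 0.0:
        return {"steer": 0.0, "throttle": 0.1}        # drifting backward: add throttle
    return {"steer": -0.2 * yaw, "throttle": 0.0}     # small heading correction

if __name__ == "__main__":
    pose = torch.tensor([0.01, 0.05, 0.0, 0.8, 0.0, 1.2])
    print(guidance_step(pose))
```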
- Referring now to FIG. 8, a block/flow diagram illustrating a method 800 for pose estimation is illustratively depicted, in accordance with an embodiment of the present invention. In block 810, receive a plurality of images from one or more cameras. In block 820, generate, with a feature extraction convolutional neural network (CNN), a feature map for each of the plurality of images. In block 830, estimate, with a feature weighting network, a score map from a pair of the feature maps. In block 840, predict, with a pose estimation CNN, a pose from the score map and a combined feature map. In block 850, control an operation of a processor-based machine to change a state of the processor-based machine, responsive to the pose.
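Purely as an illustration of how the blocks of method 800 might fit together, a minimal end-to-end sketch is given below. It reuses the local_correlation, FeatureWeighting, and PoseHead helper sketches introduced earlier in this description; the stand-in convolutional backbone, all layer sizes, and the image resolution are assumptions, not details recited here.

```python
import torch
import torch.nn as nn

# Assumes local_correlation, FeatureWeighting, and PoseHead from the earlier sketches.

class FeatureExtractor(nn.Module):
    """A stand-in feature extraction CNN (block 820); any convolutional backbone could be used."""
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, img):
        return self.net(img)

def estimate_pose(img1, img2, extractor, weighting, head, max_disp=4):
    feat1, feat2 = extractor(img1), extractor(img2)       # block 820: per-image feature maps
    corr = local_correlation(feat1, feat2, max_disp)      # correlation of the image pair
    combined = torch.cat([corr, feat1], dim=1)            # combined feature map
    score = weighting(combined)                           # block 830: score map
    return head(combined, score)                          # block 840: 6D pose

if __name__ == "__main__":
    extractor = FeatureExtractor()
    c, h, w = 64 + 81, 32, 32                             # combined channels, feature map size
    weighting = FeatureWeighting(channels=c, height=h, width=w)
    head = PoseHead(channels=c, height=h, width=w)
    img1, img2 = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
    print(estimate_pose(img1, img2, extractor, weighting, head).shape)  # torch.Size([1, 6])
```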
- Referring now to FIG. 9, a block diagram illustrating a method 900 for a stabilization system utilizing pose estimation in an aerial drone is illustratively depicted, in accordance with an embodiment of the present invention. In block 910, receive a plurality of images from one or more cameras. In block 920, predict a pose from a score map and a combined feature map, the combined feature map correlated from a pair of the plurality of images. In block 930, move the aerial drone responsive to the pose.
- The foregoing is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that those skilled in the art may implement various modifications without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. Having thus described aspects of the invention, with the details and particularity required by the patent laws, what is claimed and desired protected by Letters Patent is set forth in the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/100,462 US20190063932A1 (en) | 2017-08-28 | 2018-08-10 | Autonomous Vehicle Utilizing Pose Estimation |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762550790P | 2017-08-28 | 2017-08-28 | |
US16/100,462 US20190063932A1 (en) | 2017-08-28 | 2018-08-10 | Autonomous Vehicle Utilizing Pose Estimation |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190063932A1 true US20190063932A1 (en) | 2019-02-28 |
Family
ID=65435056
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/100,479 Active 2039-03-19 US10884433B2 (en) | 2017-08-28 | 2018-08-10 | Aerial drone utilizing pose estimation |
US16/100,462 Abandoned US20190063932A1 (en) | 2017-08-28 | 2018-08-10 | Autonomous Vehicle Utilizing Pose Estimation |
US16/100,445 Active 2039-02-27 US10852749B2 (en) | 2017-08-28 | 2018-08-10 | Learning good features for visual odometry |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/100,479 Active 2039-03-19 US10884433B2 (en) | 2017-08-28 | 2018-08-10 | Aerial drone utilizing pose estimation |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/100,445 Active 2039-02-27 US10852749B2 (en) | 2017-08-28 | 2018-08-10 | Learning good features for visual odometry |
Country Status (1)
Country | Link |
---|---|
US (3) | US10884433B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020180469A1 (en) * | 2019-03-07 | 2020-09-10 | Nec Laboratories America, Inc. | Multi-task perception network with applications to scene understanding and advanced driver-assistance system |
CN111811501A (en) * | 2020-06-28 | 2020-10-23 | 鹏城实验室 | Trunk feature-based unmanned aerial vehicle positioning method, unmanned aerial vehicle and storage medium |
US10817777B2 (en) * | 2019-01-31 | 2020-10-27 | StradVision, Inc. | Learning method and learning device for integrating object detection information acquired through V2V communication from other autonomous vehicle with object detection information generated by present autonomous vehicle, and testing method and testing device using the same |
WO2022236647A1 (en) * | 2021-05-11 | 2022-11-17 | Huawei Technologies Co., Ltd. | Methods, devices, and computer readable media for training a keypoint estimation network using cgan-based data augmentation |
DE102021117945A1 (en) | 2021-07-12 | 2023-01-12 | Bayerische Motoren Werke Aktiengesellschaft | Providing combined map information |
CN117808872A (en) * | 2024-03-01 | 2024-04-02 | 西安猎隼航空科技有限公司 | Unmanned aerial vehicle attitude and position estimation method |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9940731B2 (en) * | 2015-09-04 | 2018-04-10 | International Business Machines Corporation | Unsupervised asymmetry detection |
US10360494B2 (en) * | 2016-11-30 | 2019-07-23 | Altumview Systems Inc. | Convolutional neural network (CNN) system based on resolution-limited small-scale CNN modules |
US11657437B2 (en) * | 2017-11-29 | 2023-05-23 | Angelswing Inc | Method and apparatus for providing drone data by matching user with provider |
US10692243B2 (en) * | 2017-12-03 | 2020-06-23 | Facebook, Inc. | Optimizations for dynamic object instance detection, segmentation, and structure mapping |
WO2019157344A1 (en) | 2018-02-12 | 2019-08-15 | Avodah Labs, Inc. | Real-time gesture recognition method and apparatus |
US10289903B1 (en) | 2018-02-12 | 2019-05-14 | Avodah Labs, Inc. | Visual sign language translation training device and method |
US10304208B1 (en) | 2018-02-12 | 2019-05-28 | Avodah Labs, Inc. | Automated gesture identification using neural networks |
US10489639B2 (en) | 2018-02-12 | 2019-11-26 | Avodah Labs, Inc. | Automated sign language translation and communication using multiple input and output modalities |
US10346198B1 (en) | 2018-02-12 | 2019-07-09 | Avodah Labs, Inc. | Data processing architecture for improved data flow |
US11423262B2 (en) * | 2018-08-08 | 2022-08-23 | Nec Corporation | Automatically filtering out objects based on user preferences |
US11468575B2 (en) * | 2018-11-16 | 2022-10-11 | Uatc, Llc | Deep structured scene flow for autonomous devices |
US10387754B1 (en) * | 2019-01-23 | 2019-08-20 | StradVision, Inc. | Learning method and learning device for object detector based on CNN using 1×H convolution to be used for hardware optimization, and testing method and testing device using the same |
US10402695B1 (en) * | 2019-01-23 | 2019-09-03 | StradVision, Inc. | Learning method and learning device for convolutional neural network using 1×H convolution for image recognition to be used for hardware optimization, and testing method and testing device using the same |
USD912139S1 (en) | 2019-01-28 | 2021-03-02 | Avodah, Inc. | Integrated dual display sensor |
CN109798888B (en) * | 2019-03-15 | 2021-09-17 | 京东方科技集团股份有限公司 | Posture determination device and method for mobile equipment and visual odometer |
CN110119148B (en) * | 2019-05-14 | 2022-04-29 | 深圳大学 | Six-degree-of-freedom attitude estimation method and device and computer readable storage medium |
US11455813B2 (en) * | 2019-11-14 | 2022-09-27 | Nec Corporation | Parametric top-view representation of complex road scenes |
CN110954114B (en) * | 2019-11-26 | 2021-11-23 | 苏州智加科技有限公司 | Method and device for generating electronic map, terminal and storage medium |
CN111078008B (en) * | 2019-12-04 | 2021-08-03 | 东北大学 | Control method of early education robot |
US11341719B2 (en) * | 2020-05-07 | 2022-05-24 | Toyota Research Institute, Inc. | System and method for estimating depth uncertainty for self-supervised 3D reconstruction |
US11715213B2 (en) | 2020-06-26 | 2023-08-01 | Intel Corporation | Apparatus and methods for determining multi-subject performance metrics in a three-dimensional space |
WO2021258386A1 (en) * | 2020-06-26 | 2021-12-30 | Intel Corporation | Apparatus and methods for three-dimensional pose estimation |
CN112214028A (en) * | 2020-09-02 | 2021-01-12 | 上海电机学院 | Underwater robot pose control method based on OpenMV |
KR102457387B1 (en) * | 2020-11-04 | 2022-10-21 | 한국전자통신연구원 | Apparatus and method for predicting unsafe approach |
US12095973B2 (en) | 2020-12-22 | 2024-09-17 | Intel Corporation | Method and system of image processing with multi-object multi-view association |
US20230274386A1 (en) * | 2022-02-28 | 2023-08-31 | Ford Global Technologies, Llc | Systems and methods for digital display stabilization |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013105926A1 (en) * | 2011-03-22 | 2013-07-18 | Aerovironment Inc. | Invertible aircraft |
FR2985581B1 (en) * | 2012-01-05 | 2014-11-28 | Parrot | METHOD FOR CONTROLLING A ROTARY SAILING DRONE FOR OPERATING A SHOOTING VIEW BY AN ON-BOARD CAMERA WITH MINIMIZATION OF DISTURBING MOVEMENTS |
US20140008496A1 (en) * | 2012-07-05 | 2014-01-09 | Zhou Ye | Using handheld device to control flying object |
US9004973B2 (en) * | 2012-10-05 | 2015-04-14 | Qfo Labs, Inc. | Remote-control flying copter and method |
FR3000813B1 (en) * | 2013-01-04 | 2016-04-15 | Parrot | ROTARY SAILING DRONE COMPRISING MEANS FOR AUTONOMOUS POSITION DETERMINATION IN AN ABSOLUTE FLOOR - RELATED MARK. |
US20150321758A1 (en) * | 2013-08-31 | 2015-11-12 | II Peter Christopher Sarna | UAV deployment and control system |
FR3028186A1 (en) * | 2014-11-12 | 2016-05-13 | Parrot | LONG RANGE DRONE REMOTE CONTROL EQUIPMENT |
WO2016076586A1 (en) * | 2014-11-14 | 2016-05-19 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
US20160214715A1 (en) * | 2014-11-21 | 2016-07-28 | Greg Meffert | Systems, Methods and Devices for Collecting Data at Remote Oil and Natural Gas Sites |
US20160214713A1 (en) * | 2014-12-19 | 2016-07-28 | Brandon Cragg | Unmanned aerial vehicle with lights, audio and video |
US9915956B2 (en) * | 2015-01-09 | 2018-03-13 | Workhorse Group Inc. | Package delivery by means of an automated multi-copter UAS/UAV dispatched from a conventional delivery vehicle |
WO2016187760A1 (en) * | 2015-05-23 | 2016-12-01 | SZ DJI Technology Co., Ltd. | Sensor fusion using inertial and image sensors |
US10037028B2 (en) * | 2015-07-24 | 2018-07-31 | The Trustees Of The University Of Pennsylvania | Systems, devices, and methods for on-board sensing and control of micro aerial vehicles |
KR102696652B1 (en) * | 2017-01-26 | 2024-08-21 | 삼성전자주식회사 | Stero matching method and image processing apparatus |
US10235566B2 (en) * | 2017-07-21 | 2019-03-19 | Skycatch, Inc. | Determining stockpile volume based on digital aerial images and three-dimensional representations of a site |
Also Published As
Publication number | Publication date |
---|---|
US20190064851A1 (en) | 2019-02-28 |
US20190066326A1 (en) | 2019-02-28 |
US10884433B2 (en) | 2021-01-05 |
US10852749B2 (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10884433B2 (en) | Aerial drone utilizing pose estimation | |
Wang et al. | Cooperative USV–UAV marine search and rescue with visual navigation and reinforcement learning-based control | |
CN108230361B (en) | Method and system for enhancing target tracking by fusing unmanned aerial vehicle detector and tracker | |
US10339387B2 (en) | Automated multiple target detection and tracking system | |
JP7274674B1 (en) | Performing 3D reconstruction with unmanned aerial vehicle | |
US11100646B2 (en) | Future semantic segmentation prediction using 3D structure | |
CN112379681B (en) | Unmanned aerial vehicle obstacle avoidance flight method and device and unmanned aerial vehicle | |
CN112378397B (en) | Unmanned aerial vehicle target tracking method and device and unmanned aerial vehicle | |
CN112380933B (en) | Unmanned aerial vehicle target recognition method and device and unmanned aerial vehicle | |
US11320269B2 (en) | Information processing apparatus, information processing method, and information processing program | |
CN112789672A (en) | Control and navigation system, attitude optimization, mapping and positioning technology | |
CN114217303B (en) | Target positioning and tracking method and device, underwater robot and storage medium | |
Ha et al. | Radar based obstacle detection system for autonomous unmanned surface vehicles | |
Yang et al. | Autonomous exploration and navigation of mine countermeasures USV in complex unknown environment | |
KR102368734B1 (en) | Drone and drone control methods | |
Helble et al. | OATS: Oxford aerial tracking system | |
Javanmardi et al. | 3D building map reconstruction in dense urban areas by integrating airborne laser point cloud with 2D boundary map | |
Aswini et al. | Custom Based Obstacle Detection Using Yolo v3 for Low Flying Drones | |
CN111052028B (en) | System and method for automatic surface and sky detection | |
RU2793982C1 (en) | Device and method for optimizing aircraft trajectory | |
EP3893223A1 (en) | Detection capability verification for vehicles | |
RU2794003C1 (en) | Device and method for refining aircraft trajectory | |
Jiang et al. | Dual-satellite integrated intelligent reconnaissance autonomous decision-making model | |
CN118672276A (en) | Unmanned ship autonomous navigation control method and system | |
JP2021035831A (en) | Information processing device, information processing method, and information processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NEC LABORATORIES AMERICA, INC., NEW JERSEY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TRAN, QUOC-HUY;CHANDRAKER, MANMOHAN;KIM, HYO JIN;SIGNING DATES FROM 20180806 TO 20180809;REEL/FRAME:046611/0841 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |