WO2017120617A1 - System and method for single lens 3d imagers for situational awareness in autonomous platforms - Google Patents
- Publication number
- WO2017120617A1 (PCT/US2017/017103)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
Abstract
Autonomous vehicle platforms require situational awareness for higher level functionality. This invention utilizes 3D single lens camera technologies with high speed preprocessing modules to achieve high levels of real-time situational awareness for autonomous platforms.
Description
System and Method for Single Lens 3D Imagers for Situational Awareness in Autonomous Platforms
FIELD
[0001] This invention relates generally to 3-dimensional (3D) imaging and more specifically to the utilization of single lens 3D imagery to enhance situational awareness in unmanned vehicle platforms.
BACKGROUND
[0001] Situational Awareness (SA) is a critical function for autonomous and semi-autonomous vehicle platforms. At a minimum, SA encompasses obstacle avoidance, but in higher-functioning systems it can also encompass functionality such as but not limited to tracking, scene analysis, pattern recognition, target acquisition, bearing and distance measurements, landmark recognition, position triangulation, cooperative swarms, formation flying, and threat assessment.
[0002] While stereoscopic imaging can derive 3D information from a pair of images, the images must be closely correlated to achieve accuracy and to support high-speed processing.
[0003] A system is therefore needed for high-speed capture and correlation of 3-dimensional information from single lens imaging for autonomous vehicle situational awareness.
BRIEF SUMMARY OF THE INVENTION
[0004] For real-time assessment and awareness of the space surrounding the platform, high-speed processing by dedicated pre-processors or hardware accelerators may be required. Single Lens 3D (SL3) imaging provides a simple and accurate 3D imaging capability for autonomous platforms.
[0005] There are several types of 3D imaging, such as but not limited to variable focus, de-focus, synthetic aperture, and Spatial Phase Integration (SPI). Each of these approaches has benefits and drawbacks specific to the technology, and to be effective the 3D imaging system should be matched to the performance of the platform and to mission parameters. In some cases the 3D image requires multiple images captured over time; in other cases high frame rates or high resolution are possible. In some systems the output is pixel-based, while in other systems a 3D data model is the native output of the imager, and such structured data simplifies extracting significant features relevant to situational awareness functions.
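As a hedged illustration of one variable-focus family member, depth-from-focus recovers object distance from the lens setting at which a target appears sharpest via the thin-lens equation. The sketch below is not from this disclosure; the focal length and distances are illustrative values.

```python
# Depth-from-focus sketch: the thin-lens equation 1/f = 1/d_o + 1/d_i
# relates focal length f, object distance d_o, and image distance d_i.
# If the imager sweeps focus and records the image distance d_i at which
# a target is sharpest, the object distance d_o can be recovered directly.

def image_distance(f: float, d_o: float) -> float:
    """Image distance at which an object at distance d_o is in perfect focus."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def object_distance(f: float, d_i: float) -> float:
    """Invert the thin-lens equation to recover the object distance."""
    return 1.0 / (1.0 / f - 1.0 / d_i)

f = 0.05                          # 50 mm lens (illustrative value)
d_i = image_distance(f, 10.0)     # focus setting for an object 10 m away
print(round(object_distance(f, d_i), 6))  # recovers 10.0
```

In a real system the sharpness peak would be found per image region, giving a dense depth map from a single lens.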
[0006] In one embodiment the SA system may be composed of some combination of one or more 3D imagers coupled with one or more Scene Analysis units. Some number of the imagers may be mounted in a Pan/Tilt/Zoom configuration while others may be fixed. For highest-functionality SA systems, this embodiment couples a number of SPI native 3D cameras with a Flexible Pipeline Processor (FPP) for fast reduction of the data stream to usable SA information. In another embodiment, some number of hardware accelerator architectures could be coupled to each of one or more 3D imagers. Each hardware accelerator may have different characteristics such as but not limited to programming, architecture, input and output requirements, and speed, depending on the mission profile for that imaging system.
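The imager-to-accelerator coupling described above can be sketched as a small data model. Class and field names here are illustrative assumptions, not terminology from the disclosure.

```python
# Sketch of the SA composition: each 3D imager (fixed or pan/tilt/zoom,
# pixel-based or model-native output) is paired with a hardware
# accelerator suited to its data stream and mission profile.
from dataclasses import dataclass, field

@dataclass
class Imager3D:
    name: str
    native_output: str          # "pixel" or "model"
    ptz: bool = False           # pan/tilt/zoom capable?

@dataclass
class Accelerator:
    kind: str                   # e.g. "FPP", "CPU", "neural"

@dataclass
class SASystem:
    pairs: list = field(default_factory=list)

    def couple(self, imager: Imager3D, accel: Accelerator) -> None:
        """Attach an accelerator matched to one imager's stream."""
        self.pairs.append((imager, accel))

sa = SASystem()
sa.couple(Imager3D("spi_front", "model", ptz=True), Accelerator("FPP"))
sa.couple(Imager3D("defocus_rear", "pixel"), Accelerator("CPU"))
print(len(sa.pairs))  # 2
```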
[0007] In another embodiment, multiple frames may be processed by one or more processors to correlate the movement of one or more target objects and derive information such as but not limited to distance, bearing, size, luminosity, speed, direction, and intercept information. In other embodiments, tasks such as but not limited to target recognition, tracking, and following may be required of the SA system.
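Deriving speed and bearing from temporally spaced detections, as the embodiment above describes, reduces to simple geometry once target positions are available. The flat ground-plane model and function names below are simplifying assumptions for illustration.

```python
# Correlate two timestamped detections of a target into speed and bearing.
# Positions are (x, y) in metres with x east and y north; bearing is
# measured clockwise from north, the usual navigation convention.
import math

def speed_and_bearing(p0, p1, t0, t1):
    """Return (speed m/s, bearing deg) for a target moving p0 -> p1."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    speed = math.hypot(dx, dy) / (t1 - t0)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0  # 0 deg = north
    return speed, bearing

s, b = speed_and_bearing((0.0, 0.0), (3.0, 4.0), 0.0, 1.0)
print(round(s, 3), round(b, 2))  # 5.0 m/s at bearing 36.87 deg
```

A real SA pipeline would run this over many frames, filtering the track (e.g. with a Kalman filter) before computing intercept information.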
BRIEF DESCRIPTION OF DRAWINGS
[0008] The following detailed description illustrates embodiments of the invention by way of example and not by way of limitation. The description clearly enables one skilled in the art to make and use the disclosure, and describes several embodiments, adaptations, variations, alternatives, and uses of the disclosure, including what is currently believed to be the best mode of carrying out the disclosure. The disclosure is described as applied to an exemplary embodiment, namely, systems and methods for the utilization of single lens 3D imaging for UAV situational awareness. However, it is contemplated that this disclosure has general application to vehicle management systems in industrial, commercial, military, and residential applications.
[0009] As used herein, an element or step recited in the singular and preceded with the word "a" or "an" should be understood as not excluding plural elements or steps, unless such exclusion is explicitly recited. Furthermore, references to "one embodiment" of the present invention are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
[0010] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.
[0011] The novel features believed characteristic of the illustrative embodiments are set forth in the appended claims. The illustrative embodiments, however, as well as a preferred mode of use, further objectives, and features thereof will best be understood by reference to the following detailed description of illustrative embodiments of the present disclosure when read in conjunction with the accompanying drawings, wherein:
[0012] FIG. 1 shows an overall diagram of a situational awareness subsystem for autonomous platforms.
DETAILED DESCRIPTION OF THE INVENTION
[0013] Referring now to the invention in more detail, in Fig. 1 there is shown an overall diagram of a situational awareness subsystem for autonomous platforms. The different illustrative embodiments recognize and take into account a number of different considerations. "A number", as used herein with reference to items, means one or more items. For example, "a number of different considerations" means one or more different considerations. "Some number", as used herein with reference to items, may mean zero or more items.
[0014] In Figure 1, one or more 3D image / model capture sensors 100 provide a stream of structured 3D data to a hardware accelerator 110. The hardware accelerator may be composed of one or more processes such as but not limited to some number of standard CPUs running software 120, some number of flexible pipeline processors 130, some number of character recognition processors 140, or some number of 3D model comparison processors 150.
[0015] Connections between the sensors and the situational analysis engines may be direct or via a crossbar routing switch 160. Other hardware could be added depending on the image sensor, the makeup of the data stream, and mission requirements. For example without limitation, Flexible Pipeline Processors are very efficient for pixel-based data, while character recognition might be best handled by a neural network or deep learning system, and 3D model comparison might require a fully custom processing system. One or more of the processing modules is then attached to the platform's navigation system 170 to provide it with all SA information as required.
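The crossbar routing just described, including one sensor feeding several engines at once, can be sketched as a routing table. The identifiers below reuse the figure's reference numerals but are otherwise illustrative.

```python
# Minimal crossbar-switch sketch: any sensor may be routed to one or
# more analysis engines simultaneously (fan-out), and routes can be
# reconfigured at runtime as mission requirements change.
from collections import defaultdict

class Crossbar:
    def __init__(self):
        self.routes = defaultdict(set)  # sensor id -> set of engine ids

    def connect(self, sensor: str, engine: str) -> None:
        self.routes[sensor].add(engine)

    def fanout(self, sensor: str):
        """All engines currently receiving this sensor's stream."""
        return sorted(self.routes[sensor])

xbar = Crossbar()
xbar.connect("imager_100", "fpp_130")
xbar.connect("imager_100", "model_cmp_150")  # one sensor, two engines
print(xbar.fanout("imager_100"))  # ['fpp_130', 'model_cmp_150']
```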
[0016] In another embodiment, bypass paths 180 are also available which route data from one analysis system to another, providing the ability to merge sensor information for greater analytic capability. This connectivity could be part of the crossbar switch or another path as required by the implementation.
[0017] In another embodiment, some number of the imagers may have pan/tilt/zoom capability. Control of these parameters is derived from either the hardware accelerator managing that imager or the overall SA system; in some cases another subsystem might have overall control of the imaging system and allocate use of the imagers to the SA system as available.
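The control-arbitration scheme above, where a PTZ imager has exactly one controlling party at a time and the SA system may have to wait for another subsystem, can be sketched as a small ownership arbiter. All names here are illustrative assumptions.

```python
# Pan/tilt/zoom control arbitration sketch: each imager is owned by at
# most one subsystem; requests are granted only when the imager is free
# or already held by the requester.
class PTZArbiter:
    def __init__(self):
        self.owner = {}  # imager id -> controlling subsystem id

    def request(self, imager: str, subsystem: str) -> bool:
        """Grant control if the imager is free or already owned by requester."""
        if self.owner.get(imager) in (None, subsystem):
            self.owner[imager] = subsystem
            return True
        return False

    def release(self, imager: str, subsystem: str) -> None:
        """Only the current owner may release an imager."""
        if self.owner.get(imager) == subsystem:
            del self.owner[imager]

arb = PTZArbiter()
print(arb.request("cam0", "nav"))  # True: free imager granted
print(arb.request("cam0", "sa"))   # False: nav still holds control
arb.release("cam0", "nav")
print(arb.request("cam0", "sa"))   # True: granted after release
```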
[0018] In one embodiment situational awareness may encompass edge detection of structures within a given radius of the platform. In another embodiment it may encompass creation of a 3D model of the environment and comparison of that model against a stored virtual model of the same area. Comparison against stored data derived from a model of the environment can, for example without limitation, be used to calculate the position of the vehicle from the differences between the models in order to validate, enhance, or replace external sources of position information such as GPS, for example in GPS-denied environments.
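In the simplest case of the model-comparison idea above, if the live 3D model is a translated copy of the stored model with known point correspondence, the vehicle's offset falls out of a centroid difference. That correspondence is a strong simplifying assumption; a real system would use a registration algorithm such as ICP.

```python
# GPS-denied localization sketch: estimate the translation between a
# stored 3D model of the area and the model captured live, assuming
# point-for-point correspondence (an illustrative simplification).
def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def estimate_offset(stored, observed):
    """Translation taking the stored model onto the observed model."""
    cs, co = centroid(stored), centroid(observed)
    return tuple(co[i] - cs[i] for i in range(3))

stored = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4)]
observed = [(x + 2.0, y - 1.0, z) for (x, y, z) in stored]
print(estimate_offset(stored, observed))  # (2.0, -1.0, 0.0)
```

The estimated offset, applied to the stored model's reference coordinates, yields the vehicle position without any external position source.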
[0019] While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, those of ordinary skill will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope and spirit of the invention. Further, different illustrative embodiments may provide different benefits as compared to other illustrative embodiments. The embodiment or embodiments selected are chosen and described in order to best explain the principles of the embodiments, the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.
[0020] The flowcharts and block diagrams described herein illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various illustrative embodiments. In this regard, each block in the flowcharts or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function or functions. It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, the functions of two blocks shown in succession may be executed substantially concurrently, or the functions of the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Claims
The invention claimed is:
1) A system that utilizes a number of single lens 3D imaging sensors coupled with some number of situational awareness analysis engines to provide high level information to a platform navigation system.
2) The system of 1 where 3D imaging is accomplished by a number of Spatial Phase Integration 3D imagers.
3) The system of 1 where one or more processors such as but not limited to a Flexible Pipeline Processor, standard CPU with programming, or other hardware accelerator is used to produce high level situational awareness triggers as required by mission parameters.
4) The system of 1 where 3D imaging is accomplished by synthetic aperture, focus, or de-focus techniques coupled with situational awareness processing.
5) The system of 1 where some number of the imagers is mounted on a platform providing some combination of pan, tilt, and/or zoom capability.
6) The system of 1 where the direction and zoom parameters of the imagers are controlled by the associated hardware accelerators or combined SA functional block.
7) The system of 1 where the direction and zoom parameters of the imagers are specified or requested by the SA functional block but another subsystem has overall control.
8) The system of 1 including use of the situational awareness system for target recognition and/or tracking.
9) The system of 1 where temporally spaced 3D models are compared for some combination of SA data such as but not limited to distance, bearing, size, luminosity, speed, direction, tracking, target bearing and intercept capabilities.
10) The system of 1 where connectivity can be routed from a given imager to one or more of the acceleration processors simultaneously.
11) The system of 1 where output from one hardware accelerator or processing node may be routed to one or more of the other nodes in addition to or instead of the platform navigation system.
12) The system of 1 where processing capability within the 3D imager can pre-process pattern recognition and analysis functions before passing structured data to the hardware accelerator or SA system.
13) The system of 1 where data derived from the 3D imager can be compared against data derived from 3D models of the Area of Operation to compute location information.
14) The system of 1 where imaging sensors may be shared by different subsystems and some number allocated to situational awareness at any given time.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662275705P | 2016-01-06 | 2016-01-06 | |
US62/275,705 | 2016-01-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017120617A1 (en) | 2017-07-13 |
Family
ID=59274330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2017/017103 WO2017120617A1 (en) | 2016-01-06 | 2017-02-09 | System and method for single lens 3d imagers for situational awareness in autonomous platforms |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2017120617A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080062167A1 (en) * | 2006-09-13 | 2008-03-13 | International Design And Construction Online, Inc. | Computer-based system and method for providing situational awareness for a structure using three-dimensional modeling |
US20090174573A1 (en) * | 2008-01-04 | 2009-07-09 | Smith Alexander E | Method and apparatus to improve vehicle situational awareness at intersections |
US20120169842A1 (en) * | 2010-12-16 | 2012-07-05 | Chuang Daniel B | Imaging systems and methods for immersive surveillance |
US20130278631A1 (en) * | 2010-02-28 | 2013-10-24 | Osterhout Group, Inc. | 3d positioning of augmented reality information |
US8954853B2 (en) * | 2012-09-06 | 2015-02-10 | Robotic Research, Llc | Method and system for visualization enhancement for situational awareness |
US20150381945A1 (en) * | 2014-04-10 | 2015-12-31 | Smartvue Corporation | Systems and Methods for Automated Cloud-Based 3-Dimensional (3D) Analytics for Surveillance Systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17736543 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17736543 Country of ref document: EP Kind code of ref document: A1 |