WO2016176093A1 - Dynamically adjustable situational awareness interface for control of unmanned vehicles - Google Patents

Dynamically adjustable situational awareness interface for control of unmanned vehicles

Info

Publication number
WO2016176093A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
operator
data set
image data
collection module
Prior art date
Application number
PCT/US2016/028449
Other languages
French (fr)
Inventor
Jerome H. Wei
Original Assignee
Northrop Grumman Systems Corporation
Priority date
Filing date
Publication date
Application filed by Northrop Grumman Systems Corporation
Priority to JP2017556201A (JP6797831B2)
Publication of WO2016176093A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/161 - Encoding, multiplexing or demultiplexing different image signal components
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C - AEROPLANES; HELICOPTERS
    • B64C39/00 - Aircraft not otherwise provided for
    • B64C39/02 - Aircraft not otherwise provided for characterised by special use
    • B64C39/024 - Aircraft not otherwise provided for characterised by special use of the remote controlled vehicle type, i.e. RPV
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0011 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots associated with a remote control arrangement
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/139 - Format conversion, e.g. of frame-rate or size
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 - Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 - Processing image signals
    • H04N13/167 - Synchronising or controlling image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/25 - Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras
    • H04N13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H04N7/185 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source from a mobile camera, e.g. for remote control
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U - UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00 - UAVs specially adapted for particular uses or applications
    • B64U2101/30 - UAVs specially adapted for particular uses or applications for imaging, photography or videography
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U - UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2201/00 - UAVs characterised by their flight controls
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B64 - AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U - UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2201/00 - UAVs characterised by their flight controls
    • B64U2201/20 - Remote controls

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

An apparatus includes an image collection module (130) that monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to at least one object in a scene collected from an image data set. A situational awareness interface (SAI) (110) renders a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module (130) and receives operator commands for an unmanned vehicle (UV) that interacts with the scene.

Description

DYNAMICALLY ADJUSTABLE SITUATIONAL AWARENESS INTERFACE FOR
CONTROL OF UNMANNED VEHICLES
GOVERNMENT INTEREST
[0001] The invention was made under Air Force Research Laboratories Contract Number FA8650-11-C-3104. Therefore, the U.S. Government has rights to the invention as specified in that contract.
RELATED APPLICATION
[0002] This application claims priority from U.S. Patent Application No. 14/699733 filed on 29 April 2015, the subject matter of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0003] This disclosure relates to control systems, and more particularly to a system and method to dynamically adjust a situational awareness interface for control of unmanned vehicles.
BACKGROUND
[0004] Most unmanned systems require specialized training for operators, which requires commanders to budget and plan for specialized personnel within an organizational unit. This is impractical in many situations because the specialized training can take months or even years before the unmanned system can be properly employed. Typically, unmanned systems are developed with a focus on the unmanned vehicles and sensors, with the user interface engineered in a way that saturates the operator with data. Current unmanned systems are also limited in the effectiveness of direct operator control due to information quality and communications factors, for example. Onboard sensors may not provide sufficient field of view, resolution, or update rate to support operations in highly complex, dynamic environments. Limited bandwidth and latency can degrade the quality and timeliness of information from the vehicle to the operator, and delay of user inputs can reduce vehicle controllability. Additionally, the presentation of situational awareness information to the operator, and the medium of control input from the operator, can severely degrade the connection between the operator and the vehicle.
SUMMARY
[0005] This disclosure relates to a system and method to dynamically adjust a situational awareness interface for control of unmanned vehicles. In one aspect, an apparatus includes an image collection module that monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to an area in a scene collected from an image data set. A situational awareness interface (SAI) renders a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module and receives operator commands for an unmanned vehicle (UV) that interacts with the scene.
[0006] In another aspect, a system includes a first sensor configured to generate an electro-optical (EO) image data set characterizing a scene. The system includes a second sensor configured to generate a Laser Illuminated Detection and Ranging (LIDAR) image data set characterizing the scene. An image collection module dynamically regulates an amount of data and resolution to be allocated to at least one object within an area of the scene from the EO image data set and the LIDAR image data set based on at least one parameter to generate a fused image data set to provide a 3-D video of the scene. A situational awareness interface renders the 3-D video of the scene from the fused image data set to an operator and receives operator commands for an unmanned vehicle (UV) that interacts with the scene.
[0007] In yet another aspect, a method includes receiving image data sets from at least two sensors. The method includes fusing the image data sets to generate a 3-D scene for an operator of an unmanned vehicle (UV) based on the image data sets. This includes determining an available bandwidth to render the scene at an interface for the operator. The method includes adjusting the resolution of an area in the scene based on the available bandwidth.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] FIG. 1 illustrates an example of a system to dynamically adjust a situational awareness interface for control of unmanned vehicles.
[0009] FIG. 2 illustrates an example of an image fusion module to dynamically adjust a situational awareness interface for control of unmanned vehicles.
[0010] FIG. 3 illustrates an example of a calibration procedure for an image fusion module and interface.
[0011] FIG. 4 illustrates an example of situational awareness interfaces for control of unmanned vehicles.
[0012] FIG. 5 illustrates example output renderings to a situational awareness interface display based on detected bandwidth parameters.
[0013] FIG. 6 illustrates example input devices that can be utilized to control an unmanned vehicle via a situational awareness interface and controller.
[0014] FIG. 7 illustrates an example of a method to dynamically adjust a situational awareness interface for control of unmanned vehicles.
DETAILED DESCRIPTION
[0015] This disclosure relates to a system and method to dynamically adjust a situational awareness interface for control of unmanned vehicles. This includes generating a three-dimensional (3-D) video (e.g., a 3-D panoramic video) of a scene via a situational awareness interface (SAI) from onboard sensors mounted on an unmanned vehicle (UV). A controller can interact with the SAI to enable operator interactions and gestures received from the SAI to control the UV. The system can include an omni-directional sensor (e.g., a LadyBug Sensor) for generating electro-optical (EO) images of the scene corresponding to an EO sensor data set, for example. Other sensors can include a Laser Illuminated Detection and Ranging (LIDAR) sensor for generating LIDAR images of the scene corresponding to a LIDAR sensor data set, for example. The system further includes an image collection module for gathering and processing sensor data such as the EO sensor data with the LIDAR sensor data (and/or other sensor data) to generate an image data set. The image data set can be transmitted across a (wireless) network link to a device (e.g., a virtual reality headset or a 3-D monitor) for rendering the 3-D video of the scene for an operator of the UV in real-time.
[0016] The image collection module can be configured to dynamically regulate an amount of data (elements) that will be used from the sensor data sets to generate the image data set, which consequently controls a richness level (version) of the 3-D video of the scene presented to the operator via the SAI. The system dynamically regulates the amount of data and/or resolution that will be used from each sensor data set to determine a data size and/or rendering quality of the image data set based on at least one parameter. For instance, the parameter can be determined based on an amount of bandwidth available in the network link, an amount of data captured by the EO and the LIDAR sensor, and/or a processing capability of the system. Dynamically regulating the richness of the 3-D video of the scene based on the parameter (or parameters) enables the operator to continue viewing the scene in real-time, for example, during bandwidth degraded conditions in the network link, but at a lower richness level (e.g., with background objects omitted from the 3-D video of the scene, the 3-D video of the scene at a lower resolution, and so forth).
[0017] The system can be further configured to utilize the least amount of bandwidth available in the network link based on a priori knowledge of background objects in the scene and by distilling the sensor data into world states. For example, the image fusion module can be configured to analyze the LIDAR data set (or, in some applications, just the EO data set or both sets) and select model objects from a model database to represent objects (e.g., background objects) within the scene. The image fusion module can also generate an augmented version of the image data set based on the selected model objects and a corresponding version of the image data set. This can include annotations within the scene to assist the operator with command and control decisions for the UV.
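For illustration only, the following Python sketch shows one way such world-state distillation could work: recognized background objects are replaced by references into a model database and transmitted as compact poses rather than raw geometry. The MODEL_DATABASE contents, class names, and message format are assumptions made for this sketch, not part of the disclosure.

```python
from dataclasses import dataclass
import json

# Hypothetical catalog of pre-modeled background objects known a priori.
MODEL_DATABASE = {"tree": "models/tree.glb", "dumpster": "models/dumpster.glb",
                  "car": "models/sedan.glb"}

@dataclass
class WorldState:
    """Compact state for one recognized object: model type plus pose."""
    object_type: str
    x: float
    y: float
    z: float
    heading_deg: float

def distill_to_world_states(detections):
    """Keep only objects with a known model; drop raw geometry entirely."""
    return [WorldState(d["type"], *d["position"], d.get("heading", 0.0))
            for d in detections if d["type"] in MODEL_DATABASE]

def encode_for_link(states):
    """Serialize world states; typically far smaller than raw LIDAR/EO data."""
    return json.dumps([vars(s) for s in states]).encode("utf-8")

# Example: two recognized background objects become a few hundred bytes.
detections = [{"type": "tree", "position": (12.0, -3.5, 0.0)},
              {"type": "car", "position": (4.2, 1.1, 0.0), "heading": 90.0}]
payload = encode_for_link(distill_to_world_states(detections))
print(len(payload), "bytes sent instead of raw sensor data")
```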
[0018] FIG. 1 illustrates an example of a system 100 to dynamically adjust a situational awareness interface (SAI) 110 for control of unmanned vehicles (UV) 120. An image collection module 130 associated with a controller 140 monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to an area in a scene collected from an image data set 150. As used herein, the term area refers to the rendered image presented to the operator that is based on the collected sensor data representing the collective field of view of the UV. The area can include the entire field of view and can also include objects within the field of view. The situational awareness interface (SAI) 110 renders a 3-D video of the scene to an operator 160 based on the amount of data and resolution allocated from the image data set 150 by the image collection module 130. The SAI receives operator commands from the operator 160 that are directed to control the unmanned vehicle (UV) 120 that interacts with the scene. For example, the UV 120 can include one or more sensors 170 which are mounted onboard the UV 120 to generate data that can be collected and processed in the image data set 150 to generate the scene as observed from the point of view of the UV, where the operator 160 can view the scene via the SAI 110. A bandwidth detector 180 determines available bandwidth and/or resolution that can be rendered for a given scene. The bandwidth detector 180 can include software and/or hardware components that receive information regarding the current network transmission conditions for both collecting the image data set 150 and/or for sending scene data to the SAI 110.
[0019] The bandwidth detector 180 can monitor a plurality of varying network performance data to generate the parameter to indicate how the given scene should be rendered by the SAI 110 via the image collection module 130. This can include altering the entire resolution of the given scene from high resolution under good bandwidth conditions to a lower resolution to accommodate poor network capabilities. In some cases, the entire scene can be adjusted for higher or lower resolution. In other examples, a particular object rendered within the scene can be rendered at a higher resolution whereas other objects can be rendered at lower resolution based on operator feedback or predetermined policies and/or detected conditions. Also, bandwidth trade-offs can be made to determine how much data to transmit versus how much to process onboard (e.g., processing onboard the UV and/or processing at the image collection module). This can include decisions or policies regulating how much onboard processing should take place given the available bandwidth to further recognize and characterize objects and then sending a compressed representation, for example, or sending raw data in another example.
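As a non-authoritative sketch of how a bandwidth detector of this kind might be realized in software, the example below estimates recent link throughput from observed transfers and maps it to a resolution scale for the rendered scene. The class name, averaging window, and threshold values are illustrative assumptions, not the disclosed design.

```python
import time
from collections import deque

class BandwidthDetector:
    """Estimates link throughput from observed transfers and derives a
    resolution scale for the rendered scene (1.0 = full resolution)."""

    def __init__(self, window_s=5.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, bytes_sent)

    def record_transfer(self, num_bytes):
        now = time.monotonic()
        self.samples.append((now, num_bytes))
        # Drop samples that fall outside the averaging window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()

    def throughput_mbps(self):
        if not self.samples:
            return 0.0
        span = max(time.monotonic() - self.samples[0][0], 1e-3)
        return sum(b for _, b in self.samples) * 8 / 1e6 / span

    def resolution_scale(self):
        """Map measured throughput to a scene resolution parameter."""
        mbps = self.throughput_mbps()
        if mbps > 50:
            return 1.0    # full-resolution fused scene
        if mbps > 10:
            return 0.5    # degraded resolution
        if mbps > 2:
            return 0.25   # primitive object representation
        return 0.1        # world-state / model-based rendering
```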
[0020] The parameter provided by the bandwidth detector 180 can indicate an amount of available bandwidth in which to render the image data set, a quality of service parameter from a network service provider, a data per second parameter indicating current network performance, or a resolution parameter to indicate an amount of detail to be rendered for the 3-D video scene, for example. The SAI 110 can provide feedback from the operator to the image collection module 130 to allocate resolution bandwidth to a particular object within the scene, wherein the feedback can include a voice command, a gaze tracking device input, or a cross hair adjustment via a joystick input, for example, where the feedback indicates objects of interest to be rendered at a higher resolution (if possible) than other objects within the scene.
[0021] In a specific sensor example, the image collection module 130 can process data collected from at least two data sets that are generated from at least two sensors, including an electro-optical (EO) sensor data set and a Laser Illuminated Detection and Ranging (LIDAR) image sensor data set, for example. Other sensors 170 can include an acoustic sensor, an infrared sensor, an ultraviolet sensor, and/or a visible light sensor, for example. Although not shown, the UV 120 can include an onboard flight/ground controller to react to operator commands provided by the controller 140. The UV 120 can be an airborne system such as a helicopter or an airplane or can be a ground device such as a car, truck, or military asset, for example.
[0022] The system 100 can provide a full-definition, substantially zero-latency immersion or transfer of consciousness from the operator 160 to the UV 120 to allow seamless control as if piloting a manned aircraft or ground vehicle. To support autonomous operations, a rich set of sensors 170 can provide quality, timely information about the external environment to be utilized onboard unmanned vehicles 120. The aggregated and associated information from these sensors 170 can also be leveraged to provide the operator 160 the same situational awareness as is available to the autonomous control system. While this may entail high-bandwidth communications, it would allow for a high transference of awareness and enable high-fidelity control. For certain operating environments, such as the terminal area, this capability may be possible and necessary to maintain safety in difficult situations. Additionally, various technologies for presenting 3D information and accepting control input can be utilized to further improve control efficiency and precision. For example, the system 100 can combine five electro-optical (EO) images from a Ladybug sensor with a 3D LIDAR point cloud from a Velodyne LIDAR sensor into a single fused data set. The resulting information contained within the aggregated/collected/fused data set can be a seamless panoramic view of the environment (e.g., the area), with each data element or object in the set providing color and 3D position of the environment.
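A minimal sketch of such a fused data set, assuming the per-point color association has already been performed (a projection sketch appears with the FIG. 3 discussion): each fused element carries a 3D position from the LIDAR point cloud together with the EO color sampled for that return. The array layout and field names are illustrative only.

```python
import numpy as np

def fuse_points_with_color(points_xyz, colors_rgb):
    """Build the fused data set: one row per element carrying both the
    3-D position (meters) and the EO color sampled for that return.

    points_xyz : (N, 3) float array from the LIDAR point cloud
    colors_rgb : (N, 3) uint8 array of per-point colors from the EO imagery
    """
    assert points_xyz.shape[0] == colors_rgb.shape[0]
    fused = np.empty(points_xyz.shape[0],
                     dtype=[("x", "f4"), ("y", "f4"), ("z", "f4"),
                            ("r", "u1"), ("g", "u1"), ("b", "u1")])
    fused["x"], fused["y"], fused["z"] = points_xyz.T.astype(np.float32)
    fused["r"], fused["g"], fused["b"] = colors_rgb.T
    return fused

# Example with synthetic data standing in for Ladybug/Velodyne output.
pts = np.random.uniform(-20, 20, size=(1000, 3))
cols = np.random.randint(0, 255, size=(1000, 3), dtype=np.uint8)
scene = fuse_points_with_color(pts, cols)
print(scene.shape, scene.dtype)
```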
[0023] FIG. 2 illustrates an example of an image fusion module 200 to dynamically adjust a situational awareness interface (SAI) 210 for control of unmanned vehicles. As noted previously, the image collection module can monitor at least one parameter from a bandwidth detector 220 to dynamically regulate an amount of data and resolution to be allocated to an area in a scene collected from an image data set. The SAI 210 renders a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module 200. The SAI 210 can include a richness collection module 230 to provide feedback from the operator to the image collection module 200 to allocate resolution bandwidth to a particular object within the scene. For example, the feedback can include a voice command, a gaze tracking device input, or a cross hair adjustment via a joystick input, where the feedback indicates which objects in the scene the operator would like to see rendered at a higher resolution if possible based on detected bandwidth conditions.
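The following sketch illustrates, under assumed policy numbers (a 70/30 budget split and a nominal 5 Mbps cost per full-detail object), how richness feedback such as a gaze fixation or crosshair selection could translate into per-object resolution requests; the values and function names are not from the disclosure.

```python
def allocate_object_resolution(objects, focus_ids, budget_mbps):
    """Split an available bandwidth budget across scene objects, favoring
    the objects the operator flagged (e.g., by gaze or crosshair).

    objects     : list of object ids in the current scene
    focus_ids   : subset of ids the richness feedback marked as interesting
    budget_mbps : total bandwidth currently available for the scene
    Returns a dict of id -> fraction of full resolution to request.
    """
    focus = [o for o in objects if o in focus_ids]
    rest = [o for o in objects if o not in focus_ids]
    # Hypothetical policy: focused objects share 70% of the budget.
    focus_share = 0.7 * budget_mbps / max(len(focus), 1)
    rest_share = 0.3 * budget_mbps / max(len(rest), 1)
    full_res_cost = 5.0  # assumed Mbps needed to stream one object at full detail
    plan = {}
    for o in focus:
        plan[o] = min(1.0, focus_share / full_res_cost)
    for o in rest:
        plan[o] = min(1.0, rest_share / full_res_cost)
    return plan

# The operator's gaze dwells on "truck_1"; the rest of the scene degrades first.
print(allocate_object_resolution(["truck_1", "tree_2", "building_3"],
                                 {"truck_1"}, budget_mbps=6.0))
```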
[0024] In one specific example, the image collection module 200 can collect image data from a first sensor 240 configured to generate an electro-optical (EO) image data set characterizing a scene. A second sensor at 240 can be configured to generate a Laser Illuminated Detection and Ranging (LIDAR) image data set characterizing the scene. In this example, the image collection module 200 dynamically regulates an amount of data and resolution to be allocated to at least one object within an area of the scene from the EO image data set 240 and the LIDAR image data set based on at least one parameter to generate a fused image data set to provide a 3-D video of the scene to the SAI 210.
[0025] The image collection module 200 includes an object coordinate mapper 250 to map situational data received from the sensors 240 to video coordinates of the 3-D video scene. This can include X, Y, Z rectangular coordinate mapping and/or radial mapping where a radius from a given target is specified at a given angle, for example. A calibration protocol for determining the mapping is illustrated and described below with respect to FIG. 3. The image collection module 200 can also include an object identifier 260 to determine object types detected in the scene (e.g., cars, trucks, trees, dumpsters, people, and so forth). In one specific example, the object identifier 260 can include a classifier (or classifiers) to determine the object types based on probabilities associated with a shape or frequency band emitted from the object and as detected by the sensors 240. One example classifier is a support vector machine (SVM), but other types can be employed to identify objects.
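As an illustration of the classifier idea, the sketch below trains a scikit-learn support vector machine on made-up shape and frequency features and reports class probabilities for a new detection; the feature choices, prototype values, and use of scikit-learn are assumptions for this example only.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic training data standing in for shape/frequency descriptors
# extracted from fused sensor data: [height_m, footprint_m2, dominant_freq_hz].
rng = np.random.default_rng(0)
prototypes = {"person": [1.7, 0.5, 300.0], "car": [1.5, 8.0, 80.0],
              "truck": [3.5, 20.0, 60.0], "tree": [6.0, 4.0, 5.0]}
X_train, y_train = [], []
for label, proto in prototypes.items():
    for _ in range(25):  # jittered copies of each prototype
        X_train.append(np.array(proto) * rng.normal(1.0, 0.05, size=3))
        y_train.append(label)

# probability=True lets the identifier report per-class probabilities,
# matching the idea of typing objects by probability.
classifier = make_pipeline(StandardScaler(),
                           SVC(kernel="rbf", probability=True, random_state=0))
classifier.fit(np.array(X_train), y_train)

detection = np.array([[1.6, 7.5, 85.0]])  # features of an unknown object
for label, p in zip(classifier.classes_, classifier.predict_proba(detection)[0]):
    print(f"{label}: {p:.2f}")
```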
[0026] After objects have been identified, object classifications, procedural data, or operational data can be overlaid onto the 3-D scene to facilitate situational awareness of the operator. As shown, the image collection module 200 can include a scene output generator and command processor 270 to both provide scene output to the SAI 210 and to receive operator feedback and/or control commands via the SAI. The SAI 210 can also include a virtual reality headset or multiple monitors to render the 3-D scene to the operator, for example (see, e.g., FIG. 4).
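A small sketch of how identified objects could be turned into overlay annotations for the rendered scene; the record fields and caption format are assumptions made for this example.

```python
def build_overlays(identified_objects):
    """Turn identifier output into annotation overlays for the 3-D scene.
    Each overlay carries the screen anchor plus a short caption."""
    overlays = []
    for obj in identified_objects:
        caption = f'{obj["type"]} ({obj["confidence"]:.0%})'
        if obj.get("procedure"):  # optional procedural note known a priori
            caption += f' - {obj["procedure"]}'
        overlays.append({"anchor_px": obj["anchor_px"], "text": caption})
    return overlays

print(build_overlays([{"type": "truck", "confidence": 0.87,
                       "anchor_px": (720, 392), "procedure": "do not approach"}]))
```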
[0027] FIG. 3 illustrates an example of a calibration procedure 300 for an image fusion module and interface. In order to gather sensor data for the image collection module described herein, various calibration procedures can be performed to map gathered sensor data to scene objects presented to the operator via the SAI described herein. At 310, an intrinsic calibration can be performed where lens focal lengths (e.g., of the lens gathering data onto the sensor) can be accounted for, principal points determined, and image skew factors determined, along with radial and tangential distortion factors. At 320, an extrinsic calibration can be performed where parameters and coordinates can be determined, such as the distance to the plane (e.g., as perceived by the operator), the unit normal to the plane, a given point on the plane, a rotation between the LIDAR and a respective camera head, and a transformation from the LIDAR to the camera head, for example. After the extrinsic calibration 320, data can be interpolated (e.g., data from the sensors plotted into an X, Y, Z 3-D coordinate system) at 330 and rendered as an area (or object) of interest via filtering at 340, where area refers to a rendered field of view as observed by the operator.
[0028] Given knowledge of the sensors' intrinsic and extrinsic calibration parameters, data association between each sensor can be performed by a transformation of coordinate frames along with correction offsets for sensor distortion, followed by sensor data interpolation, as shown in FIG. 3. The collected data set can be expected to be more complete as the two (or more) complementary data types (e.g., 3D LIDAR and EO images) are combined. The resultant rich, 3D data can readily be presented via a virtual reality headset or 3D monitors to provide detailed spatial awareness to the operator. Even beyond 3D relationships and colors, fusion of other sensors (IR, acoustic, and so forth) can produce even higher-dimensional data that can be presented to the operator. Due to parallax, depth perception of faraway points is not as impacted by 3D presentation, but for close-in objects, 3D presentation can have a dramatic effect. Particularly for close-in movements such as parking or navigating indoor or urban environments, this improvement in presentation of spatial awareness can enhance effectiveness of control.
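To make the coordinate-frame transformation concrete, the sketch below projects a single LIDAR return into pixel coordinates using an extrinsic rotation/translation and a pinhole intrinsic matrix; distortion correction and the real sensor axis conventions are omitted, and all calibration values are invented for the example.

```python
import numpy as np

def project_lidar_to_pixel(point_lidar, R, t, K):
    """Project a 3-D LIDAR point into camera pixel coordinates.

    point_lidar : (3,) point in the LIDAR frame (meters)
    R, t        : extrinsic rotation (3x3) and translation (3,), LIDAR -> camera
    K           : intrinsic matrix (3x3) with focal lengths and principal point
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_cam = R @ point_lidar + t   # extrinsic: change of coordinate frame
    if p_cam[2] <= 0:
        return None               # behind the image plane
    uvw = K @ p_cam               # intrinsic: perspective projection
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Illustrative calibration values (not from the disclosure).
K = np.array([[800.0, 0.0, 640.0],   # fx, skew, cx
              [0.0, 800.0, 360.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # assume aligned axes for the sketch
t = np.array([0.0, -0.1, 0.05])      # small LIDAR-to-camera offset (meters)

print(project_lidar_to_pixel(np.array([1.0, 0.5, 10.0]), R, t, K))
```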
[0029] FIG. 4 illustrates an example of situational awareness interfaces 400 for control of unmanned vehicles. At 410, an operator is shown wearing a virtual reality headset both to see the rendered video scene shown at 420 and to provide command feedback to the unmanned vehicle via the headset or other apparatus. This can include voice commands, commands based on eye movements, or commands received from the operator's hands such as shown in the examples of FIG. 6. In an alternative example for observing the rendered scene, multiple output monitors can be monitored (e.g., via 3-D glasses) by the operator such as shown at 420. The rendered scene 420 is rendered at an overall lower resolution level based on available detected bandwidth. Various other rendering examples from higher to lower resolution are depicted and described with respect to FIG. 5.
[0030] FIG. 5 illustrates example output renderings to a situational awareness interface display based on detected bandwidth parameters. Depending on availability of bandwidth, sensor data, and sensor processing, it is possible to present a hyper awareness to the operator beyond what is directly observable. Classifications, procedural data, and other operational information either extracted or known a priori can be overlaid at 510 and presented to the operator within the immersive 3D environment shown at 520 to facilitate situational awareness transference and control.
[0031] Information leveraged from the onboard autonomous capabilities can be used to improve operator awareness. If bandwidth constraints become a problem, just the fused sensor data can be presented, at full or degraded resolution such as shown at 530, or a primitive representation of discrete entities in 3D space as extracted by onboard sensor processing such as shown at 540. Another possible approach in one example requires the least amount of communication bandwidth but requires highly capable onboard processing and distilling of sensor information into world states and/or a priori knowledge of the environment to generate a high fidelity 3D rendering of the environment and objects utilizing only high level information of world states such as shown at 550. Presentation format, and subsequently bandwidth utilization, can be adjusted based on availability, complexity of situation to be resolved, and uncertainty of correct decision as assessed by the autonomous system, for example.
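One hedged reading of this adjustment logic is a simple format selector: pick the richest of the FIG. 5 style presentation formats that the measured link can carry, while refusing to drop below a floor when the autonomy reports a complex or uncertain situation. The bandwidth costs, names, and thresholds below are invented for illustration.

```python
# Ordered from most to least bandwidth-hungry, mirroring the renderings of FIG. 5.
FORMATS = [
    ("overlaid_immersive", 40.0),   # fused scene plus overlays (510/520)
    ("fused_full",         25.0),   # fused sensor data at full resolution (530)
    ("fused_degraded",      8.0),   # fused sensor data, degraded resolution (530)
    ("primitive_entities",  2.0),   # discrete 3D entities from onboard processing (540)
    ("world_state_only",    0.2),   # high-level world states rendered locally (550)
]

def choose_presentation(available_mbps, situation_complexity, decision_uncertainty):
    """Pick the richest format the link can carry, but refuse to drop below a
    floor when the autonomy reports a complex or uncertain situation."""
    # Hypothetical rule: high complexity/uncertainty demands at least fused data.
    floor = ("fused_degraded"
             if max(situation_complexity, decision_uncertainty) > 0.7 else None)
    for name, required_mbps in FORMATS:
        if available_mbps >= required_mbps:
            return name
        if name == floor:
            return name  # hold the floor even if bandwidth is nominally short
    return FORMATS[-1][0]

print(choose_presentation(5.0, situation_complexity=0.9, decision_uncertainty=0.3))
```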
[0032] FIG. 6 illustrates example input devices 600 that can be utilized to control an unmanned vehicle via a situational awareness interface and controller. With regard to operator interaction, various technologies can be employed to direct control of unmanned vehicles. Body and finger gesture control interfaces via a glove at 610 or a touch screen 620 can be used to read inputs from the operator. This can include tactile feedback technologies leveraged to ground the operator to virtual control interfaces in the immersive 3D environment. Intricate, natural interaction with virtual menus and control interfaces can increase operator control precision and reduce workload. Additionally, traditional stick 630 or wheel controllers at 640 can be used to provide direct input, with or without an immersive world representation.
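As a sketch only, operator inputs from the different devices could be normalized into a single command structure before being sent to the UV; the event dictionaries, field names, and scaling are assumptions, not the disclosed controller.

```python
from dataclasses import dataclass

@dataclass
class ControlCommand:
    """Normalized command sent from the controller to the UV."""
    forward: float   # -1.0 .. 1.0
    lateral: float   # -1.0 .. 1.0
    yaw_rate: float  # -1.0 .. 1.0

def map_operator_input(event):
    """Translate heterogeneous operator inputs (gesture glove, touch screen,
    stick, wheel) into one normalized command. Event formats are assumed."""
    kind = event["device"]
    if kind == "stick":
        return ControlCommand(event["pitch"], event["roll"], event["twist"])
    if kind == "wheel":
        return ControlCommand(event["throttle"], 0.0, event["steering"])
    if kind == "glove":
        # A forward-pointing gesture maps to forward motion scaled by extension.
        return ControlCommand(event["extension"], event["hand_tilt"], 0.0)
    if kind == "touch":
        # A drag vector on the rendered scene becomes forward/lateral motion.
        dx, dy = event["drag"]
        return ControlCommand(-dy, dx, 0.0)
    raise ValueError(f"unknown input device: {kind}")

print(map_operator_input({"device": "touch", "drag": (0.2, -0.6)}))
```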
[0033] In view of the foregoing structural and functional features described above, an example method will be better appreciated with reference to FIG. 7. While, for purposes of simplicity of explanation, the method is shown and described as executing serially, it is to be understood and appreciated that the method is not limited by the illustrated order, as parts of the method could occur in different orders and/or concurrently from that shown and described herein. Such method can be executed by various components configured in an IC or a controller, for example.
[0034] FIG. 7 illustrates an example of a method 700 to dynamically adjust a situational awareness interface for control of unmanned vehicles. At 710, the method 700 includes receiving image data sets from at least two sensors (e.g., via sensors 170 of FIG. 1). At 720, the method 700 includes fusing the image data sets to generate a 3-D scene for an operator of an unmanned vehicle (UV) based on the image data sets (e.g., via image collection module 130 of FIG. 1). At 730, the method 700 includes determining an available bandwidth to render the scene at an interface for the operator (e.g., via bandwidth detector 180 of FIG. 1). At 740, the method 700 includes adjusting the resolution of an area (or object) in the scene based on the available bandwidth (e.g., via the image collection module 130 of FIG. 1). As noted previously, based on the detected bandwidth, resolution of the entire scene can be increased or decreased. In another example, resolution of a given object (or objects) within the scene can be increased while other objects in the scene can have their resolution decreased based on available bandwidth and/or operator feedback. Although not shown, the method 700 can also include classifying objects in the scene to determine object types based on probabilities associated with a shape or frequency band emitted from the object, for example.
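Tying the four steps together, a condensed sketch of method 700 might look like the following, with stand-in sensors and a crude point-decimation step for the resolution adjustment at 740; every function and constant here is illustrative.

```python
import numpy as np

def receive(sensors):                    # 710: receive image data sets
    return [s() for s in sensors]

def fuse(data_sets):                     # 720: fuse into one 3-D scene array
    return np.concatenate(data_sets, axis=0)

def adjust_resolution(scene, available_mbps, full_rate_mbps=50.0):
    """740: keep only a fraction of points proportional to available bandwidth."""
    keep = max(min(available_mbps / full_rate_mbps, 1.0), 0.01)
    step = int(round(1.0 / keep))
    return scene[::step]

# Two stand-in sensors producing random point sets (e.g., EO-colored and LIDAR).
sensors = [lambda: np.random.rand(500, 6), lambda: np.random.rand(300, 6)]
scene = fuse(receive(sensors))           # 710 + 720
available_mbps = 12.0                    # 730: as reported by a bandwidth detector
print(adjust_resolution(scene, available_mbps).shape)
```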
[0035] What has been described above are examples. It is, of course, not possible to describe every conceivable combination of components or methodologies, but one of ordinary skill in the art will recognize that many further combinations and permutations are possible. Accordingly, the disclosure is intended to embrace all such alterations, modifications, and variations that fall within the scope of this application, including the appended claims. As used herein, the term "includes" means includes but not limited to, the term "including" means including but not limited to. The term "based on" means based at least in part on. Additionally, where the disclosure or claims recite "a," "an," "a first," or "another" element, or the equivalent thereof, it should be interpreted to include one or more than one such element, neither requiring nor excluding two or more such elements.

Claims

CLAIMS
What is claimed is:
1. An apparatus, comprising:
an image collection module that monitors at least one parameter to dynamically regulate an amount of data and resolution to be allocated to an area of a scene collected from an image data set; and
a situational awareness interface (SAI) to render a 3-D video of the scene to an operator based on the amount of data and resolution allocated from the image data set by the image collection module and to receive operator commands for an unmanned vehicle (UV) that interacts with the scene.
2. The apparatus of claim 1, wherein the at least one parameter indicates an amount of available bandwidth parameter in which to render the image data set, a quality of service parameter from a network service provider, a data per second parameter indicating current network performance, or a resolution parameter to indicate an amount of detail to be rendered for the 3-D video scene.
3. The apparatus of claim 2, wherein the SAI provides feedback from the operator to the image collection module to allocate resolution bandwidth to a particular object within the area of the scene, wherein the feedback includes a voice command, a gaze tracking device input, or a cross hair adjustment via a joystick input.
4. The apparatus of claim 1, wherein the image collection module processes data from at least two data sets that are generated from at least two sensors that include an electro-optical (EO) sensor data set, a Laser Illuminated Detection and Ranging (LIDAR) image sensor data set, an acoustic sensor, an infrared sensor, an ultraviolet sensor, and a visible light sensor.
5. The apparatus of claim 4, wherein the image collection module includes an object coordinate mapper to map situational data received from the sensors to video coordinates of the 3-D video scene.
6. The apparatus of claim 5, wherein the image collection module includes an object identifier to determine object types detected in the area of the scene.
7. The apparatus of claim 6, wherein the object identifier includes a classifier to determine the object types based on probabilities associated with a shape or frequency band emitted from the object.
8. The apparatus of claim 1, wherein the SAI includes a virtual reality headset or multiple monitors to render the 3-D scene to the operator.
9. The apparatus of claim 8, further comprising a controller to receive operator commands from the SAI and to send control commands to the UV based on the operator commands.
10. The apparatus of claim 8, wherein object classifications, procedural data, or operational data is overlaid onto the 3-D scene to facilitate situational awareness of the operator.
11. A system, comprising:
a first sensor configured to generate an electro-optical (EO) image data set characterizing a scene;
a second sensor configured to generate a Laser Illuminated Detection and Ranging (LIDAR) image data set characterizing the scene;
an image collection module configured to dynamically regulate an amount of data and resolution to be allocated to at least one object within an area of the scene from the EO image data set and the LIDAR image data set based on at least one parameter to generate a fused image data set to provide a 3-D video of the scene; and
a situational awareness interface (SAI) to render the 3-D video of the scene from the fused image data set to an operator and to receive operator commands for an unmanned vehicle (UV) that interacts with the scene.
12. The system of claim 11, wherein the at least one parameter comprises an available bandwidth parameter indicating an amount of bandwidth in which to render the image data set, a quality of service parameter from a network service provider, a data per second parameter indicating current network performance, or a resolution parameter indicating an amount of detail to be rendered for the 3-D video scene.
13. The system of claim 11, wherein the SAI provides feedback from the operator to the image collection module to allocate resolution bandwidth to a particular object within the area of the scene, wherein the feedback includes a voice command, a gaze tracking device input, or a cross hair adjustment via a joystick input.
14. The system of claim 11, wherein the image collection module fuses data from other data sets, including an acoustic sensor data set, an infrared sensor data set, an ultraviolet sensor data set, and a visible light sensor data set.
15. The system of claim 14, wherein the image collection module includes an object coordinate mapper to map situational data received from the sensors to video coordinates of the 3-D video scene.
16. The system of claim 15, wherein the image collection module includes an object identifier to determine object types detected in the area of the scene.
17. The system of claim 16, wherein the object identifier includes a classifier to determine the object types based on probabilities associated with a shape or frequency band emitted from the object.
18. The system of claim 11, further comprising a controller to receive operator commands from the SAI and to send control commands to the UV based on the operator commands.
19. A method, comprising:
receiving image data sets, via a controller, from at least two sensors;
fusing the image data sets, via the controller, to generate a 3-D scene for an operator of an unmanned vehicle (UV) based on the image data sets;
determining, via the controller, an available bandwidth to render the scene at an interface for the operator; and
adjusting, via the controller, the resolution of an area in the scene based on the available bandwidth.
20. The method of claim 19, further comprising classifying objects in the scene to determine object types based on probabilities associated with a shape or frequency band emitted from the object.
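As a companion illustration for the classification recited in claims 7, 17, and 20 above, the short sketch below picks an object type from probabilities tied to an object's shape and emitted frequency band. It is a hypothetical example, not the claimed classifier: the object types, feature labels, and probability values are invented for illustration.

OBJECT_PROFILES = {
    # object type: probability of each shape or frequency-band cue matching
    "rotary_uav":   {"disc_shape": 0.7, "band_2_4_ghz": 0.8},
    "ground_truck": {"box_shape":  0.8, "band_engine_acoustic": 0.6},
    "bird":         {"disc_shape": 0.2, "band_none": 0.9},
}

def classify(observed_features):
    """Return the (object_type, score) with the highest combined probability.

    observed_features is a set of cue labels detected for the object,
    e.g., {"disc_shape", "band_2_4_ghz"}.
    """
    best_type, best_score = None, 0.0
    for obj_type, profile in OBJECT_PROFILES.items():
        score, matched = 1.0, False
        for feature, prob in profile.items():
            if feature in observed_features:
                # Multiply probabilities of the cues actually observed;
                # unobserved cues simply do not contribute.
                score *= prob
                matched = True
        if matched and score > best_score:
            best_type, best_score = obj_type, score
    return best_type, best_score

print(classify({"disc_shape", "band_2_4_ghz"}))  # ~ ('rotary_uav', 0.56)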
PCT/US2016/028449 2015-04-29 2016-04-20 Dynamically adjustable situational awareness interface for control of unmanned vehicles WO2016176093A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2017556201A JP6797831B2 (en) 2015-04-29 2016-04-20 Dynamically adjustable situational awareness interface for unmanned vehicle control

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/699,733 2015-04-29
US14/699,733 US10142609B2 (en) 2015-04-29 2015-04-29 Dynamically adjustable situational awareness interface for control of unmanned vehicles

Publications (1)

Publication Number Publication Date
WO2016176093A1 (en) 2016-11-03

Family

ID=55953395

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2016/028449 WO2016176093A1 (en) 2015-04-29 2016-04-20 Dynamically adjustable situational awareness interface for control of unmanned vehicles

Country Status (3)

Country Link
US (1) US10142609B2 (en)
JP (1) JP6797831B2 (en)
WO (1) WO2016176093A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019034907A1 (en) * 2017-08-15 2019-02-21 Saronikos Trading And Services, Unipessoal Lda Improved multirotor aircraft and interface device

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9824596B2 (en) * 2013-08-30 2017-11-21 Insitu, Inc. Unmanned vehicle searches
WO2018105576A1 (en) * 2016-12-05 2018-06-14 Kddi株式会社 Flying device, control device, communication control method, and control method
KR20180075191A (en) * 2016-12-26 2018-07-04 삼성전자주식회사 Method and electronic device for controlling unmanned aerial vehicle
US10692279B2 (en) * 2017-07-31 2020-06-23 Quantum Spatial, Inc. Systems and methods for facilitating making partial selections of multidimensional information while maintaining a multidimensional structure
CN109343061B (en) * 2018-09-19 2021-04-02 百度在线网络技术(北京)有限公司 Sensor calibration method and device, computer equipment, medium and vehicle
KR20200055596A (en) * 2018-11-13 2020-05-21 삼성전자주식회사 Image transmitting method of terminal device mounted on vehicle and image receiving method of remote control device controlling vehicle
US11305887B2 (en) * 2019-09-13 2022-04-19 The Boeing Company Method and system for detecting and remedying situation awareness failures in operators of remotely operated vehicles
US20230169733A1 (en) * 2021-12-01 2023-06-01 Flipkart Internet Private Limited System and method for rendering objects in an extended reality
CN116012474B (en) * 2022-12-13 2024-01-30 昆易电子科技(上海)有限公司 Simulation test image generation and reinjection method and system, industrial personal computer and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100114408A1 (en) * 2008-10-31 2010-05-06 Honeywell International Inc. Micro aerial vehicle quality of service manager
US20110187563A1 (en) * 2005-06-02 2011-08-04 The Boeing Company Methods for remote display of an enhanced image
WO2013074172A2 (en) * 2011-08-29 2013-05-23 Aerovironment, Inc. System and method of high-resolution digital data image transmission

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4855822A (en) * 1988-01-26 1989-08-08 Honeywell, Inc. Human engineered remote driving system
JPH10112856A (en) * 1996-10-04 1998-04-28 Agency Of Ind Science & Technol Image transmitting device and method
JP2001209426A (en) * 2000-01-26 2001-08-03 Nippon Telegr & Teleph Corp <Ntt> Mobile body controller
JP2002344925A (en) * 2001-05-15 2002-11-29 Sony Corp Moving picture distributor, moving picture distribution method, moving picture distribution program, moving picture distribution program storage medium, picture designation device, picture designation method, picture designation program, picture designation program storage medium, and moving picture providing system
US20030164794A1 (en) * 2002-03-04 2003-09-04 Time Domain Corporation Over the horizon communications network and method
JP2003333569A (en) * 2002-05-13 2003-11-21 Sony Corp File format, information processing system, information processing apparatus and method, recording medium, and program
JP2004056335A (en) * 2002-07-18 2004-02-19 Sony Corp Information processing apparatus and method, display apparatus and method, and program
JP2005073218A (en) * 2003-08-07 2005-03-17 Matsushita Electric Ind Co Ltd Image processing apparatus
WO2006137829A2 (en) * 2004-08-10 2006-12-28 Sarnoff Corporation Method and system for performing adaptive image acquisition
US9182228B2 (en) * 2006-02-13 2015-11-10 Sony Corporation Multi-lens array system and method
US8195343B2 (en) * 2007-05-19 2012-06-05 Ching-Fang Lin 4D GIS virtual reality for controlling, monitoring and prediction of manned/unmanned system
JP2009065534A (en) * 2007-09-07 2009-03-26 Sharp Corp Reproduction apparatus, reproduction method, program, and record medium
JP2009273116A (en) * 2008-04-07 2009-11-19 Fujifilm Corp Image processing device, image processing method, and program
KR100955483B1 (en) * 2008-08-12 2010-04-30 삼성전자주식회사 Method of building 3d grid map and method of controlling auto travelling apparatus using the same
US20100106344A1 (en) * 2008-10-27 2010-04-29 Edwards Dean B Unmanned land vehicle having universal interfaces for attachments and autonomous operation capabilities and method of operation thereof
KR101648455B1 (en) * 2009-04-07 2016-08-16 엘지전자 주식회사 Broadcast transmitter, broadcast receiver and 3D video data processing method thereof
EP2672927A4 (en) * 2011-02-10 2014-08-20 Atrial Innovations Inc Atrial appendage occlusion and arrhythmia treatment
US20120306741A1 (en) * 2011-06-06 2012-12-06 Gupta Kalyan M System and Method for Enhancing Locative Response Abilities of Autonomous and Semi-Autonomous Agents
JP5950605B2 (en) * 2012-02-14 2016-07-13 株式会社日立製作所 Image processing system and image processing method
TW201339903A (en) * 2012-03-26 2013-10-01 Hon Hai Prec Ind Co Ltd System and method for remotely controlling AUV
JP2014048859A (en) * 2012-08-31 2014-03-17 Ihi Aerospace Co Ltd Remote control system
IL308285A (en) * 2013-03-11 2024-01-01 Magic Leap Inc System and method for augmented and virtual reality
US9538096B2 (en) * 2014-01-27 2017-01-03 Raytheon Company Imaging system and methods with variable lateral magnification
US20160122038A1 (en) * 2014-02-25 2016-05-05 Singularity University Optically assisted landing of autonomous unmanned aircraft
US9997079B2 (en) * 2014-12-12 2018-06-12 Amazon Technologies, Inc. Commercial and general aircraft avoidance using multi-spectral wave detection
US9880551B2 (en) * 2015-03-06 2018-01-30 Robotic Research, Llc Point-and-click control of unmanned, autonomous vehicle using omni-directional visors

Also Published As

Publication number Publication date
US10142609B2 (en) 2018-11-27
JP6797831B2 (en) 2020-12-09
US20170041587A1 (en) 2017-02-09
JP2018523331A (en) 2018-08-16

Similar Documents

Publication Publication Date Title
US10142609B2 (en) Dynamically adjustable situational awareness interface for control of unmanned vehicles
CN109479119B (en) System and method for UAV interactive video broadcasting
US11024083B2 (en) Server, user terminal device, and control method therefor
US11019255B2 (en) Depth imaging system and method of rendering a processed image to include in-focus and out-of-focus regions of one or more objects based on user selection of an object
US20180129200A1 (en) Headset display device, unmanned aerial vehicle, flight system and method for controlling unmanned aerial vehicle
EP2939432B1 (en) Display update time reduction for a near-eye display
WO2018032457A1 (en) Systems and methods for augmented stereoscopic display
CN106454311B (en) A kind of LED 3-D imaging system and method
EP3629309A2 (en) Drone real-time interactive communications system
CN111984114B (en) Multi-person interaction system based on virtual space and multi-person interaction method thereof
CN106303448B (en) Aerial image processing method, unmanned aerial vehicle, head-mounted display device and system
Shen et al. Teleoperation of on-road vehicles via immersive telepresence using off-the-shelf components
US20160249043A1 (en) Three dimensional (3d) glasses, 3d display system and 3d display method
US9667947B2 (en) Stereoscopic 3-D presentation for air traffic control digital radar displays
WO2018187927A1 (en) Vision simulation system for simulating operations of a movable platform
Krückel et al. Intuitive visual teleoperation for UGVs using free-look augmented reality displays
WO2020107454A1 (en) Method and apparatus for accurately locating obstacle, and computer readable storage medium
US20200019782A1 (en) Accommodating object occlusion in point-of-view displays
WO2017222664A1 (en) Controlling capturing of a multimedia stream with user physical responses
JP5971466B2 (en) Flight path display system, method and program
CN110187720A (en) Unmanned plane guidance method, device, system, medium and electronic equipment
CN105472358A (en) Intelligent terminal about video image processing
CN106878651B (en) Three-dimensional video communication method and communication equipment based on unmanned aerial vehicle and unmanned aerial vehicle
CN108762279A (en) A kind of parallel control loop
WO2019061466A1 (en) Flight control method, remote control device, and remote control system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16721564

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2017556201

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16721564

Country of ref document: EP

Kind code of ref document: A1