GB2553617A - Remotely controlling robotic platforms based on multi-modal sensory data - Google Patents

Remotely controlling robotic platforms based on multi-modal sensory data

Info

Publication number
GB2553617A
GB2553617A GB1708992.1A GB201708992A
Authority
GB
United Kingdom
Prior art keywords
robotic platform
sensory data
remote control
type
modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1708992.1A
Other versions
GB2553617B (en)
GB201708992D0 (en)
Inventor
Gregg W Podnar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boeing Co
Original Assignee
Boeing Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boeing Co filed Critical Boeing Co
Publication of GB201708992D0 publication Critical patent/GB201708992D0/en
Publication of GB2553617A publication Critical patent/GB2553617A/en
Application granted granted Critical
Publication of GB2553617B publication Critical patent/GB2553617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B19/41865Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM] characterised by job scheduling, process planning, material flow
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00Manipulators mounted on wheels or on carriages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1689Teleoperation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/005Manipulators for mechanical processing tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0075Manipulators for painting or coating
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/02Hand grip control means
    • B25J13/025Hand grip control means comprising haptic means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/081Touching devices, e.g. pressure-sensitive
    • B25J13/084Tactile sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085Force or torque sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1687Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64FGROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F5/00Designing, manufacturing, assembling, cleaning, maintaining or repairing aircraft, not otherwise provided for; Handling, transporting, testing or inspecting aircraft components, not otherwise provided for
    • B64F5/40Maintaining or repairing aircraft
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64FGROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F5/00Designing, manufacturing, assembling, cleaning, maintaining or repairing aircraft, not otherwise provided for; Handling, transporting, testing or inspecting aircraft components, not otherwise provided for
    • B64F5/60Testing or inspecting aircraft components or systems
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D1/0038Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with simple or augmented images from one or more cameras located onboard the vehicle, e.g. tele-operation
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/0011Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
    • G05D1/005Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with signals other than visual, e.g. acoustic, haptic
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32252Scheduling production, machining, job shop
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S901/00Robots
    • Y10S901/30End effector
    • Y10S901/44End effector inspection

Abstract

Method of remotely controlling robotic platform based on multi-modal sensory data comprising positioning 310 robot; communicatively coupling 312 robot to a remote control station; obtaining the sensory data 314 using two or more sensors of the robot, the data comprising at least two sensory response types; transmitting 320 at least a portion of the data; and receiving 330 at the robot remote control instructions from the station. The robot may be positioned in a confined space of a structure, which may be an aircraft wing. The sensory response types may be selected from a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, and a tactile type. Local control instructions may be generated 334 at the robot based on the data. The robot may perform 350 operations within the space based on the local or remote instructions, or a combination. Operations may include changing 350a position of the robot or removing 350f an object from the space; drilling 350b, installing 350c a fastener into, sealing 350d, painting 350e, or inspecting 350g the structure. Also provided is a remote control station for controlling a robotic platform using multi-modal sensory data.

Description

(71) Applicant(s): The Boeing Company (Incorporated in USA - Illinois), 100 North Riverside Plaza, Chicago 60606-1596, Illinois, United States of America
(56) Documents Cited: EP 2653273 A1; WO 2012/129251 A2; WO 2011/116332 A2; US 20150346722 A1; US 20150148949 A1; US 20140114482 A1; US 20020153185 A1
(58) Field of Search: INT CL B25J, B64F, G05D, G06F; Other: WPI, EPODOC
(72) Inventor(s): Gregg W Podnar
(74) Agent and/or Address for Service: Kilburn & Strode LLP, Lacon London, 84 Theobald Road, London, Greater London, WC1X 8NL, United Kingdom
(54) Title of the Invention: Remotely controlling robotic platforms based on multi-modal sensory data
Abstract Title: Remotely controlling robotic platform based on multi-modal sensory data
[Drawings (sheets 1/8 to 8/8); only the legible figure labels are reproduced below.]
[Flowchart: Start; Position Robotic Platform within Confined Space 310; Communicatively Couple Robotic Platform to Remote Control Station 312; Obtain Multi-Modal Sensory Data 314; Augment Multi-Modal Sensory Data 316; Select Multi-Modal Sensory Data for Transmission 318; Generate Local Control Instructions 334; Perform Operation(s) Using Robotic Platform within Confined Space 350, including Change Position of Robotic Platform 350a, Drill Component 350b, Install Fastener 350c, Sealing Structure 350d, Painting Structure 350e, Remove Object 350f, Inspect Structure 350g.]
[Block diagram, Robotic Platform 230: Sensors 510 (Stereoscopic Vision Sensor 512 with Coplanar Camera Sensors 513; Stereophonic Audio Sensor 514 with Anthropomorphically-Correct Stereophonic Microphone 515; Force-Reflecting Haptic Manipulation Sensor 516; Tactile Sensor 518; Temperature Sensor 517); Operating Tools 520; Drive Mechanism 530 (Treads 532); Communication Module 540; Assisting Agent 550.]
[Block diagram, Remote Control Station 250: User Interface 610 (Output Device(s) 612 including Display 613a and Stereo Speakers 613b; Input Device(s) 614); Processor 630; Memory 635; Communication Module 640; Assisting Agent 650.]
[Block diagram, Multi-Modal Sensory Data 272: Binocular Stereoscopic Vision Type 273a; Binaural Stereophonic Audio Type 273b; Force-Reflecting Haptic Manipulation Type 273c; Tactile Type 273d; Fidelity Level.]
[FIG. 8: aircraft production and service methodology 1200.]
Remotely Controlling Robotic Platforms Based on Multi-Modal Sensory Data
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to US Patent Application entitled “Multi-Tread Vehicles and Methods of Operating Thereof” filed concurrently (Docket No. 16-0017-USNPBNGCP081US) and US Patent Application entitled “Stereoscopic Camera and Associated Method of Varying a Scale of a Stereoscopic Image Pair” filed concurrently (Docket No. 15-2607-US-NP), both of which are incorporated herein by reference in their entirety for all purposes.
FIELD
Methods and systems for controlling robotic platforms are provided and, more particularly, a robotic platform remotely disposed in a confined space and controlled using remote and/or local control instructions generated based on multi-modal sensory data is provided in accordance with some arrangements.
BACKGROUND
Robotic platforms may be deployed into various environments that are not ideal for direct human operation. Tele-operated robotic systems including robotic platforms can be used to perform remote operations in such environments with some input from operators positioned remotely. However, operators' perception of operating environments is limited by the sensory fidelity level of the system. For such systems to be effective, the operators must be effectively tele-present in the operating environments with sufficient and truthful sensory feedback. In general, higher sensory fidelity provided to an operator yields a greater sense of presence in the operating environment and more effective operating instructions from the operator. Conversely, remote control can be very challenging when a lack of some sensory experience results in limited situational awareness. In most conventional tele-operated robotic systems, operators have limited information about actual operating environments. The primary sensory feedback is visual. Even robotic platforms with sophisticated vision systems provide limited information to their operators. Humans naturally rely on multiple senses, not only vision, to learn about their environment. Limiting operators to visual information restricts the operator's ability to comprehensively understand the environment and provide necessary instructions. Furthermore, typical telerobotic systems suffer from what is called "cyclopean vision." Specifically, such systems include monoscopic cameras and displays that provide no binocular stereopsis. Depth cues are critical for understanding the environment and performing various operations in this environment, such as manipulation tasks and, even more so, fine manipulation tasks. Each variety of distortion introduced impairs the operator's ability to work precisely and can cause fatigue with prolonged use.
SUMMARY
Provided are methods and systems for remotely controlling robotic platforms in confined spaces or other like spaces not suitable for direct human operation. The control is achieved using multi-modal sensory data, which may include at least two sensory response types, such as a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, a tactile type, and the like. The multi-modal sensory data may be obtained by a robotic platform positioned in a confined space and may be transmitted to a remote control station outside of the confined space, where it may be used to generate a representation of the confined space. The multi-modal sensory data may be used to provide multi-sensory high-fidelity telepresence for an operator of the remote control station and allow the operator to provide more accurate user input. This input may be transmitted to the robotic platform to perform various operations within the confined space.
In some arrangements, a method for remotely controlling a robotic platform based on multi-modal sensory data is provided. The method may comprise positioning the robotic platform, communicatively coupling the robotic platform to a remote control station, obtaining the multi-modal sensory data using two or more sensors of the robotic platform, transmitting at least a portion of the multi-modal sensory data, and/or receiving remote control instructions from the remote control station at the robotic platform. The multi-modal sensory data may comprise at least two sensory response types. The at least two sensory response types may be selected from the group consisting of a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, and a tactile type. Obtaining the multi-modal sensory data and transmitting the multi-modal sensory data may be repeated continuously during execution of the method. The method may involve augmenting the remote control instructions received from the remote control station. In some arrangements, the structure may be an aircraft wing.
In some arrangements, the robotic platform may be positioned in a confined space of a structure. Transmitting at least the portion of the multi-modal sensory data may be performed while the robotic platform is positioned in the confined space.
In some arrangements, the method further comprises generating local control instructions at the robotic platform based on the multi-modal sensory data. The method may also comprise performing one or more operations within the confined space using the robotic platform based on the local control instructions.
In some arrangements, the multi-modal sensory data may comprise at least the binocular stereoscopic vision type, the binaural stereophonic audio type, and the force-reflecting haptic manipulation type. In these arrangements, the one or more operations may comprise drilling a component of the structure.
In some arrangements, the multi-modal sensory data may comprise at least the binocular stereoscopic vision type, the binaural stereophonic audio type, the force-reflecting haptic manipulation type, and the tactile type. In these arrangements, the one or more operations comprise installing a fastener into the structure.
In some arrangements, the method further comprises augmenting the multi-modal sensory data prior to transmitting at least the portion of the multi-modal sensory data. The method may also comprise selecting at least the portion of the multi-modal sensory data for transmitting.
In some arrangements, the method further comprises performing one or more operations within the confined space using the robotic platform based on the remote control instructions received from the remote control station at the robotic platform. For example, the one or more operations may be selected from the group consisting of changing position of the robotic platform within the confined space, drilling a component of the structure, installing a fastener into the structure, sealing the structure, painting the structure, removing an object from the confined space, and inspecting the structure. The fidelity level of the multi-modal sensory data may correspond to the one or more operations. In some arrangements, the fidelity level of the multi-modal sensory data may change over time. In some arrangements, the one or more operations may be performed also based on local control instructions generated at the robotic platform such that the local control instructions are combined with the remote control instructions to perform the one or more operations.
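By way of illustration only, and not as part of the claimed subject matter, the combination of remote and local control instructions described above might be sketched in Python as follows; the names, the weighting scheme, and the velocity representation are assumptions for illustration:

from dataclasses import dataclass
from typing import Optional


@dataclass
class ControlInstruction:
    """Hypothetical low-level command, e.g., drive velocities (vx, vy, w)."""
    velocities: tuple


def combine(remote: Optional[ControlInstruction],
            local: Optional[ControlInstruction],
            remote_weight: float = 0.7) -> Optional[ControlInstruction]:
    """Blend remote-operator and locally generated instructions.

    A simple weighted blend is assumed here; an actual arrangement could
    instead arbitrate between, gate, or override one source with the other.
    """
    if remote is None:
        return local
    if local is None:
        return remote
    blended = tuple(remote_weight * r + (1.0 - remote_weight) * l
                    for r, l in zip(remote.velocities, local.velocities))
    return ControlInstruction(velocities=blended)


# Example: a remote "move forward" command blended with a local
# obstacle-avoidance correction generated from the multi-modal sensory data.
remote_cmd = ControlInstruction(velocities=(0.20, 0.0, 0.0))
local_cmd = ControlInstruction(velocities=(0.05, 0.0, 0.10))
print(combine(remote_cmd, local_cmd))  # velocities approximately (0.155, 0.0, 0.03)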
In some arrangements, the one or more operations may comprise changing the position of the robotic platform within the confined space. In these arrangements, the multi-modal sensory data may comprise at least the binocular stereoscopic vision type and the binaural stereophonic audio type.
In some arrangements, the robotic platform may be communicatively coupled to the remote control station using a local area network. In the same or other arrangements, the robotic platform may be communicatively coupled to the remote control station using at least one wireless communication link. Furthermore, the robotic platform may be communicatively coupled to the remote control station using a global communication network.
Also provided is a method for remotely controlling a robotic platform in a confined space of a structure based on multi-modal sensory data. The method may comprise receiving the multi-modal sensory data from the robotic platform positioned in the confined space, generating a representation of the multi-modal sensory data by the remote control station, capturing user input at the remote control station, and transmitting remote control instructions to the robotic platform positioned in the confined space. The multi-modal sensory data may be received by a remote control station positioned outside of the confined space and communicatively coupled to the robotic platform. The multi-modal sensory data may comprise at least two sensory response types selected from the group consisting of a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, and a tactile type.
In some arrangements, generating the representation of the multi-modal sensory data may comprise augmenting the multi-modal sensory data based on at least one of video spectrum, audio spectrum, spatial orientation, and proprioception. The representation may be a multi-sensory high-fidelity telepresence. In some arrangements, the user interface of the remote control station may comprise a 3D display for presenting the binocular stereoscopic vision type of the multi-modal sensory data. The user interface of the remote control station may comprise stereo speakers for presenting the binaural stereophonic audio type of the multi-modal sensory data.
In some arrangements, the remote control instructions may represent one or more operations performed by the robotic platform within the confined space. The one or more operations may be selected from the group consisting of changing position of the robotic platform within the confined space, drilling a component of the structure, installing a fastener into the structure, sealing the structure, painting the structure, removing an object from the confined space, and inspecting the structure.
In some arrangements, at least receiving the multi-modal sensory data and generating the representation may be performed continuously. Furthermore, the remote control instructions may be generated based on the user input. The robotic platform may be communicatively coupled to the remote control station using a local area network. In the same or other arrangements, the robotic platform is communicatively coupled to the remote control station using a global communication network.
Also provided is a robotic platform for operating in a confined space of a structure using multi-modal sensory data. The robotic platform may comprise sensors for generating the multi-modal sensory data and a communication module for communicatively coupling to a remote control station positioned outside of the confined space. The sensors may comprise at least two selected from the group consisting of a binocular stereoscopic vision sensor, a binaural stereophonic audio sensor, a force-reflecting haptic manipulation sensor, and a tactile sensor.
Also provided is a remote control station for controlling a robotic platform using multi-modal sensory data. The remote control station may comprise a communication module for communicatively coupling to the robotic platform and for receiving the multi-modal sensory data from the robotic platform and a user interface comprising an output device for generating a representation of the multi-modal sensory data received from the robotic platform. The multi-modal sensory data may comprise at least two sensory response types.
In some arrangements, the at least two sensory response types may be selected from the group consisting of a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, and a tactile type.
Also provided is a method for remotely controlling a robotic platform in a confined space of a structure based on multi-modal sensory data. The method may comprise obtaining the multi-modal sensory data using two or more sensors of the robotic platform, transmitting at least a portion of the multi-modal sensory data to a remote control station, and generating a representation of the multi-modal sensory data by the remote control station.
The multi-modal sensory data may comprise at least two sensory response types. Various other aspects of this method are presented above and elsewhere in this document.
These and other arrangements are described further below with reference to the figures.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic illustration of an aircraft having confined spaces, in accordance with some arrangements.
FIG. 2 is an example of a co-robotic system comprising a robotic platform and a remote control station, in accordance with some arrangements.
FIG. 3 is a schematic representation of a robotic platform, in accordance with some arrangements.
FIG. 4 is a schematic representation of a remote control station, in accordance with some arrangements.
FIG. 5 is a schematic representation of multi-modal sensory data, in accordance with some arrangements.
FIG. 6 is a process flowchart corresponding to a method for remotely controlling a robotic platform in a confined space, in accordance with some arrangements.
FIG. 7 is a process flowchart corresponding to a method for remotely controlling a robotic platform in a confined space from the perspective of a remote control station, in accordance with some arrangements.
FIG. 8 is a block diagram of aircraft production and service methodology that may utilize methods and assemblies described herein.
DETAILED DESCRIPTION
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the presented concepts. The presented concepts may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail so as to not unnecessarily obscure the described concepts. While some concepts will be described in conjunction with the specific arrangements, it will be understood that these arrangements are not intended to be limiting.
On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the present disclosure as defined by the appended claims.
Introduction
Adding real-time human control to robotic platforms provides new opportunities for robotics. On one hand, it helps to overcome many challenges associated with fully automated systems. Furthermore, it makes it possible to perform operations in environments not accessible to humans and/or operations not supported by fully automated systems.
Hybrid human-robotic systems can safely leverage individual strengths and achieve substantial synergies when sufficient information about operating environments is presented to human operators. For example, one key strength of robotic platforms is their ability to deploy and operate in various environments not readily accessible to humans, such as confined spaces, hazardous environments, and the like. (For purposes of this disclosure, a confined space is defined as an enclosed space defined by a cavity and an access, with the cavity depth being at least 5 times greater than the principal dimension of the access.) At the same time, humans are able to operate well in complex and unstructured environments using their senses and cognitive abilities, which currently far exceed the capabilities of fully automated robotic systems. Yet these environmental complexities often lie behind access points in areas not accessible to or suitable for humans. For example, an interior of an aircraft wing is a complex environment with many different components that may need to be assembled, serviced, and replaced. The size or, more specifically, the thickness of the wing provides limited access to these components. It should be noted that the position, size, and other characteristics of access points to the operating environment also limit access. Current wing designs provide various access points designed for human operators. However, these access points may not be desirable from weight, performance, and other considerations and generally should be smaller and less frequent, if possible.
Effective operation of a hybrid human-robotic system, which may be also referred to as a co-robotic system, depends on providing high fidelity telepresence to an operator so that the operator can provide correct user input. The fidelity level depends on sensory data obtained by a robotic platform present in an operating environment and in particular on different sensory modes of the data. In many instances, each separate sensory data type (e.g., vision) may be insufficient by itself for a human operator to have an adequate perception of the operating environment. In most instances, humans rely on multiple senses to generate their environmental perception.
Provided are apparatus, methods, and systems for controlling a robotic platform positioned in a confined space. The control is provided, at least in part, by a remote control station positioned outside of the confined space. Some level of control may be provided by the robotic platform itself, which may be referred to as an automated portion of the overall control. The control, internal and/or external, is based on multi-modal sensory data obtained by the robotic platform. This co-robotic approach removes human operators from confined spaces and provides safe and ergonomic work environments remotely. This approach allows performing operations in environments that may not be accessible to humans. Furthermore, it opens the door to new types of operations that may not be performed directly by humans or by fully automated robotic systems without assistance. It also opens the door to new structural configurations of operating environments that no longer have to accommodate a human. For example, a greater ratio of chord (Y-dimension) to depth (Z-dimension) in airfoils, lighter structures without the need for human-sized access points, and other like features may be used on aircraft. For purposes of this disclosure, multi-modal sensory data is defined as data generated by one or more sensors of a robotic platform positioned in a confined space and corresponding (in a direct form or in an augmented form) to at least two types of different human senses.
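The two definitions above (confined space and multi-modal sensory data) can be illustrated with a minimal Python sketch; the function names and units are illustrative assumptions, not part of the disclosure:

def is_confined_space(cavity_depth_m: float, access_principal_dimension_m: float) -> bool:
    """Per the definition above: cavity depth at least 5 times the
    principal dimension of the access."""
    return cavity_depth_m >= 5.0 * access_principal_dimension_m


def is_multi_modal(sensory_types: set) -> bool:
    """Per the definition above: data corresponding to at least two
    different types of human senses."""
    return len(sensory_types) >= 2


# Example: a 3.0 m deep wing bay reached through a 0.4 m access port,
# with vision and audio data available from the robotic platform.
print(is_confined_space(3.0, 0.4))          # True
print(is_multi_modal({"vision", "audio"}))  # True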
Some level of automation may be provided by optional autonomous agents that may assist operators with their controls and/or may be responsible for selecting multi-modal sensory data for transmission and even altering multi-modal sensory data (e.g., scaling, changing sensory spectrums, and the like). These agents may be implemented on the robotic platform, remote control station, or both. Specifically, the robotic platform may perform some operations without any control instructions generated based on the user input. The control instructions for these operations may be generated by one or more autonomous agents based on the multi-modal sensory data. Some examples of these operations may include navigating the robotic platform within a confined space based on a target location and proximity of various surrounding components. Other examples may involve various operations having less complexity than, for example, operations performed based on user input.
In some arrangements, the methods and systems create situational awareness for an operator through immersive multi-sensory high-fidelity presence or, more specifically, telepresence. This type of situational awareness allows the operator to generate user input more accurately and more effectively without actually being present in the confined space where the multi-modal sensory data is obtained. In addition to the improved efficiency, the high fidelity allows controlling more complex operations. As in other instances where human sensory functions are limited, limited sensory presence significantly constrains the ability of an operator performing a task.
The co-robotic system is designed to generate immersive multi-modal sensory feedback (e.g., a combination of vision and audition perceptions of the environment, in some cases combined with force-sensing extremities). With such feedback, the operator working through the remote control station will have a more faithful sense of being in the environment and can employ the intuition and careful practices that an in-situ worker would use while ensuring safety. Furthermore, the intuition and practices of an operator, even one who is remotely positioned, can exceed some autonomous capabilities of current robotic platforms, making control instructions generated based on user input (and understanding of the operating environment) truly invaluable. In some arrangements, a visual component of the multi-modal sensory data may be achieved with a high-definition (e.g., 1920x1280 pixels with 24 bits per pixel of luminance and color data at 60 Hz per eye) geometrically-correct binocular stereoscopic remote viewing sensor. The audio component may be full-range (e.g., 20 kHz bandwidth per ear) stereophonic audio through microphones linked to the visual tele-presence and reproduced to give the operator a natural aural situational awareness.
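For a rough sense of the data volumes implied by the figures above, the following back-of-the-envelope Python sketch computes uncompressed rates; the audio sample rate and bit depth are assumptions, since only the 20 kHz bandwidth per ear is stated:

# Video: 1920 x 1280 pixels, 24 bits per pixel, 60 Hz, per eye.
pixels_per_frame = 1920 * 1280
video_bits_per_s_per_eye = pixels_per_frame * 24 * 60
video_gbit_s_both_eyes = 2 * video_bits_per_s_per_eye / 1e9
print(f"Binocular video: ~{video_gbit_s_both_eyes:.2f} Gbit/s uncompressed")  # ~7.08

# Audio: 20 kHz bandwidth per ear; assume 48 kHz sampling at 16 bits per sample.
audio_bits_per_s = 2 * 48_000 * 16
print(f"Binaural audio: ~{audio_bits_per_s / 1e6:.2f} Mbit/s uncompressed")   # ~1.54

These magnitudes suggest why selecting only a portion of the multi-modal sensory data for transmission, or compressing it, may be desirable in some arrangements.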
Other sensor modalities may include scalable force-reflecting manipulators, a scalable-amplitude attitude platform driven by one or more inertial measurement sensors on the remote platform or its end effector, and remote tactile sensing with fingertip arrays of pressure and temperature reproducers and vibration reproduction. Other forms of sensory augmentation, such as scaling (visual size and spectrum, force, and aural), may be used. The sensory data types may depend on the environment and the performed operations. For example, a bulldozer operator may be presented with wide-angle high-definition monoscopic video, combined with full-range stereophonic audio, and an attitude-reproducing platform with vibration reproduction. A racing car driver may add reproduction of wind speed and direction (e.g., by blowing air in the face of the operator) to gain better situational awareness. The environmental temperature (e.g., air temperature) may also be used as a factor. A surgeon may be presented with scaled high-definition stereoscopic vision and scaled force-reflecting manipulators. Finally, an explosive ordnance disposal operator may add tactile sensing for finer manipulation.
In some arrangements, the methods and systems allow scaling the representation of the confined environment when, for example, this environment is presented on a user interface of a remote control station. For example, a tool performing an operation and controlled by hand manipulations of the operator may be substantially smaller than the operator's hand. The scaling may be used to represent the tool at a scale comparable to the size of the hand. It should be noted that different scaling may be used for different sensory data types and even for different subsets of data of the same sensory data type. For example, a visual representation may be scaled up while the force feedback may be scaled down (e.g., to avoid damaging the operator's hand). In other words, the scaling is used to more effectively match the perception and sensory capabilities of the operator with a particular space and/or a task at hand.
For example, a one-to-one scale system would have manipulators the length of human arms and moving the same distances, which may not be suitable for an environment that needs smaller or larger manipulators. Now referring to visual scaling, the stereoscopic camera may be positioned at the same relative distance and position above the actuator as human eyes relative to hands. The stereoscopic camera may have the same inter-pupillary spacing as our eyes. For a two-to-one effective scale increase of the remote environment, the manipulators must be one-half the size of our arms, the distance from and elevation of the stereoscopic camera one-half of the previous distance and height, and the inter-pupillary distance one-half of our (human) inter-pupillary distance.
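The scale relationships described above can be captured in a short Python sketch; the nominal human dimensions used below are illustrative assumptions:

def scaled_viewing_geometry(scale: float,
                            human_arm_length_m: float = 0.7,
                            human_ipd_m: float = 0.065,
                            camera_distance_m: float = 0.45,
                            camera_height_m: float = 0.35) -> dict:
    """For an effective scale increase of the remote environment by
    'scale' (e.g., 2.0 for two-to-one), the manipulator length, camera
    distance and height, and inter-pupillary spacing all shrink by 1/scale."""
    return {
        "manipulator_length_m": human_arm_length_m / scale,
        "camera_distance_m": camera_distance_m / scale,
        "camera_height_m": camera_height_m / scale,
        "inter_pupillary_distance_m": human_ipd_m / scale,
    }


print(scaled_viewing_geometry(2.0))  # two-to-one effective magnification of the remote scene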
In some arrangements, the methods and systems augment the multi-modal sensory data based on sensory capabilities of the operator. The augmentation may be performed for different types of sensory data (e.g., imaging, audio, force, temperature, etc.) and may even convert one type into another (e.g., creating a visual (colored) representation of a temperature map). This augmentation capability allows using sensory data that might otherwise be ignored if the operator were present in the actual operating environment. For example, an operator may not be capable of seeing infrared radiation (e.g., indicative of a temperature) or hearing outside of the common 20 Hz to 20 kHz range (e.g., a sound outside of this range may indicate a particular type of friction force). The data collected for such ranges may be converted into ranges recognizable by a human operator. Furthermore, one sensory type may be presented in the form of another sensory type. For example, a temperature profile of a surface may be presented with different colors on a user interface of the remote control station.
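As a non-limiting Python sketch of such augmentation, out-of-band data may be remapped into a human-perceivable range; the temperature range, color ramp, and frequency divisor are assumptions for illustration:

def temperature_to_rgb(temp_c: float, t_min: float = 0.0, t_max: float = 100.0) -> tuple:
    """Map a surface temperature onto a simple blue-to-red color ramp
    for display on the remote control station's user interface."""
    x = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))
    return (int(255 * x), 0, int(255 * (1.0 - x)))  # (R, G, B)


def shift_to_audible(freq_hz: float, divisor: float = 4.0) -> float:
    """Shift an ultrasonic component (e.g., from tool friction) down by a
    fixed factor so it falls inside the common 20 Hz to 20 kHz range."""
    return freq_hz / divisor


print(temperature_to_rgb(75.0))    # (191, 0, 63): mostly red, i.e., a hot spot
print(shift_to_audible(60_000.0))  # 15000.0 Hz: now audible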
In some arrangements, the methods and systems provide precise physical interaction through haptic teleoperation. Specifically, at least one component of the multi-modal sensory data may be based on the senses of touch and proprioception. Various types of sensors (e.g., force sensors, temperature sensors, and the like) may be used on a robotic platform to generate haptic sensory data. Furthermore, the user interface may include various haptic output devices to generate a representation of this data.
Overall, complex tasks in unstructured environments are more difficult to characterize and represent than repetitive tasks in well-structured settings, where robotic advancements are currently prevalent and where full robotic automation may already be possible. Operating in unstructured environments still relies on human abilities to understand the environment and provide at least some control instructions. However, operators need sufficient representations of such unstructured environments, which is addressed by utilizing multi-modal sensory data. A system including a robotic platform and a remote control station, which generates at least some control instructions for the robotic platform, may be referred to as a co-robotic system or a hybrid robotic-human system. This type of system leverages the capabilities of each component of the system. Specifically, it utilizes the robotic platform's capabilities to access various environments not suitable for humans, perform special tasks in these environments, and obtain multi-modal sensory data that, in some arrangements, may go beyond human sensory capabilities. The system may support sensory and cognitive augmentation as noted above.
Various sensors are positioned on a robotic platform to generate multi-modal sensory data. Each sensor may represent one end of a remote sensory channel. In some arrangements, a channel may include a monitoring agent, which may be responsible for modifying and/or augmenting data generated by the sensor and/or monitoring control instructions from the operator. For example, the monitoring agent may scale movements, limit accelerations, and/or apply soft limits to the control instructions. This scaling may be used to prevent collisions, among other reasons. In some arrangements, different sensory types of the multi-modal sensory data are analyzed concurrently by the same monitoring agent. Furthermore, the data may be presented concurrently at the remote control station. Some examples of this data analysis include, but are not limited to, building a 3D map of the operating space (which may be viewable by the operator at the user interface), identifying anomalous features (such as missing fasteners or surface blemishes), and combining/overlaying different sensory types on the user interface.
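A monitoring agent of the kind described above might, for example, clamp commanded motion before execution, as in the following Python sketch; the limits, control period, and function names are assumptions:

def apply_soft_limits(commanded_velocity: float,
                      previous_velocity: float,
                      dt: float = 0.02,
                      max_velocity: float = 0.25,
                      max_acceleration: float = 0.5) -> float:
    """Scale movements and limit accelerations in operator control
    instructions, e.g., to help prevent collisions."""
    # Clamp the commanded velocity magnitude.
    v = max(-max_velocity, min(max_velocity, commanded_velocity))
    # Limit the change allowed per control period (acceleration limit).
    max_dv = max_acceleration * dt
    return max(previous_velocity - max_dv, min(previous_velocity + max_dv, v))


# An aggressive 1.0 m/s command from rest is softened to ~0.01 m/s this cycle.
print(apply_soft_limits(commanded_velocity=1.0, previous_velocity=0.0))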
The co-robotic systems described herein may include multi-modal tele-perception sensors for binocular stereoscopic vision, binaural stereophonic audition, and/or force-reflecting haptic manipulation. Robotic platforms of these systems may be controlled remotely while being deployed into confined/hazardous spaces. The robotic platforms may be specifically adapted to operating environments (e.g., space requirements, access, and the like) and operations to be performed by these platforms. For example, access through a relatively small convoluted passage may require a snake-like robotic platform, while access on-orbit may require a free-flying robotic platform.
To provide a better understanding of the challenges associated with operating in confined spaces, one example of a confined space will now be described with reference to FIG. 1. Specifically, FIG. 1 is a schematic illustration of aircraft 100, in accordance with some arrangements. Aircraft 100 comprises airframe 150 with interior 170. Aircraft 100 includes wings 120 coupled to airframe 150. Aircraft 100 also includes engines 130 coupled to wings 120. In some arrangements, aircraft 100 further includes a number of operations systems 140 and 160 (e.g., avionics), further described below in conjunction with FIG. 8. Any of these aircraft components may have operating environments not easily accessible by humans and, at the same time, too complex for completely autonomous robotic operation. For example, wing 120 may include various ribs and other structural components limiting access within the interior of wing 120.
Overall, a robotic platform may be deployed into a confined area and/or a high-risk area, one into which humans should not or cannot be sent. The robotic platform may be subjected to various risks associated with this environment and/or operations performed in this environment. Risks may include unplanned or unintended actions, such as falls, collisions, becoming entangled or wedged, and the like. These actions are often the result of lack of perceptual awareness of the environment (either by a human operator or by various autonomous agents).
Examples of Co-Robotic Systems and Their Components
FIG. 2 is an example of co-robotic system 200 comprising robotic platform 230 and remote control station 250, in accordance with some arrangements. During operation of co-robotic system 200, robotic platform 230 is positioned within confined space 210 of structure
212. While FIG. 2 illustrates an example of structure 212, which is an aircraft wing, one having ordinary skill in the art would understand that any other examples of structure 212 and confined space 210 are also within the scope. Some additional examples of structures 212 and confined spaces 210 of structure 212 include, but are not limited to, a fuselage, rudders, horizontal stabilizers, flaps, slats, ailerons, keel, crown, or other limited-access areas of the aircraft. During operation of co-robotic system 200, remote control station 250 is positioned outside of confined space 210, thereby allowing an operator to interact with remote control station 250. While FIG. 2 illustrates an access point positioned at one end of structure 212, one having ordinary skill in the art would understand that other examples of access points are also within the scope. For example, an access point may be provided within a wing tip or, more specifically, within a wing root of a wing tip. In another example, an access point may be in a crown or a keel of a fuselage.
Robotic platform 230 and remote control station 250 are communicatively coupled using, for example, communication link 270. Communication link 270 may be a wired link, a wireless link, or various combinations of the two. Communication link 270 may be established using various communication protocols and/or networks. In some arrangements, communication link 270 may utilize a local area network (LAN), a global communication network (e.g., the Internet), and/or the like. The selection of networks and protocols depends on the proximity of robotic platform 230 and remote control station 250 and other factors. While not shown in FIG. 2, co-robotic system 200 may include a power line (e.g., hydraulic, electrical, pneumatic, and the like) extending to robotic platform 230. The power may be supplied to robotic platform 230 from outside of confined space 210. In some arrangements, the power supply may be internal to robotic platform 230.
The multi-modal sensory data is transmitted from robotic platform 230 to remote control station 250 using communication link 270, thereby creating high-fidelity immersive telepresence for the operator of remote control station 250. This type of telepresence provides the situational awareness needed for many operations. At the same time, establishing such telepresence may require high-fidelity capture and reproduction of sensory and sensorimotor data obtained by robotic platform 230. Various types of data obtained by robotic platform 230 are collectively referred to as multi-modal sensory data 272. A schematic representation of various components of multi-modal sensory data 272 is presented in FIG. 5. For example, telepresence presentation to an operator may include geometrically-correct binocular stereoscopic viewing systems and high-fidelity stereophonic audio reproduction. In this example, multi-modal sensory data 272 may include binocular stereoscopic vision type 273a and binaural stereophonic audio type 273b. In the same or another example, a force-reflecting manipulation sensor may be used to generate force-reflecting haptic manipulation type 273c.
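One possible in-memory representation of multi-modal sensory data 272 and its types 273a-273d is sketched below in Python; the container layout is an assumption, as the disclosure does not prescribe any particular data format:

from dataclasses import dataclass
from typing import Optional


@dataclass
class MultiModalSensoryData:
    """Container mirroring sensory response types 273a-273d."""
    stereo_frames: Optional[bytes] = None   # 273a: binocular stereoscopic vision
    stereo_audio: Optional[bytes] = None    # 273b: binaural stereophonic audio
    haptic_forces: Optional[tuple] = None   # 273c: force-reflecting haptic manipulation
    tactile_array: Optional[tuple] = None   # 273d: tactile (e.g., fingertip pressures)
    fidelity_level: str = "high"

    def sensory_types(self) -> set:
        """Return which human-sense types are present in this sample."""
        present = set()
        if self.stereo_frames is not None:
            present.add("vision")
        if self.stereo_audio is not None:
            present.add("audio")
        if self.haptic_forces is not None:
            present.add("haptic")
        if self.tactile_array is not None:
            present.add("tactile")
        return present


# A sample carrying vision and audio qualifies as multi-modal (two types).
sample = MultiModalSensoryData(stereo_frames=b"...", stereo_audio=b"...")
print(sample.sensory_types())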
In some arrangements, co-robotic system 200 may include one or more additional remote control stations 250'. Additional remote control station 250' may be communicatively coupled to primary remote control station 250 or directly to robotic platform 230. Multiple remote control stations 250 and 250' may be used to have different operators providing user input. Different remote control stations 250 and 250' may be positioned in the same general location (e.g., a job site) or in different locations. For example, remote control station 250 may be a local station positioned within the general vicinity of robotic platform 230, while additional remote control station 250' may be a distal station positioned in a different location. Control over the different remote control stations 250 and 250' may be exercised by different parties. For example, local remote control station 250 may be controlled by an aircraft operator (e.g., an airline), airport staff, and/or a repair service, while distal remote control station 250' may be controlled by an aircraft manufacturer or the airline headquarters (e.g., with additional knowledge of structure 212).
An operator of additional remote control station 250' may have more specific domain knowledge than, for example, an operator of remote control station 250 and may be able to support multiple different co-robotic systems 200. This is especially useful when an unforeseen condition is detected for which additional expertise is needed. By supporting this collaborative access to a wide variety of distant domain experts, unexpected situations can be addressed rapidly, without the time and cost of co-locating the experts for consultation. For example, an airline could have one expert trained to operate co-robotic system 200 but multiple co-robotic systems 200 or at least multiple robotic platforms 230. These multiple robotic platforms 230 may be at various facilities. The expert may be able to control each robotic platform 230 when needed without being co-located with that robotic platform 230.
Each component of co-robotic system 200 will now be described in more detail. FIG. 3 is a schematic representation of robotic platform 230, in accordance with some arrangements.
Robotic platform 230 includes different sensors 510 for obtaining multi-modal sensory data.
Some examples of sensors 510 include, but are not limited to, binocular stereoscopic vision sensor 512, binaural stereophonic audio sensor 514, force-reflecting haptic manipulation sensor 516, tactile sensor 518, and temperature sensor 517. The multi-modal sensory data is a combined output of two or more of these sensors.
The selection of sensors 510 on robotic platform 230 may depend on particular aspects of the multi-modal sensory data. The sensory experience generated at remote control station
250 based on the multi-modal sensory data obtained by sensors 510 on robotic platform 230 may be selected for each particular operation as illustratively presented in TABLE 1.
TABLE 1

Operation             | Binocular Stereoscopic Vision | Binaural Stereophonic Audio | Force-Reflecting Haptic Proprioception | Tactile
Move Robotic Platform | YES                           | YES                         | -                                      | -
Drill                 | YES                           | YES                         | YES                                    | -
Install Fastener      | YES                           | YES                         | YES                                    | YES
Seal                  | YES                           | YES                         | -                                      | -
Paint                 | -                             | -                           | -                                      | -
Remove Object         | YES                           | YES                         | YES                                    | YES
Inspect               | YES                           | -                           | -                                      | -
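TABLE 1 can be expressed as a simple lookup, for instance when an assisting agent decides which sensor streams to enable for a given operation. The following Python sketch mirrors the table; the dictionary keys and the default value are assumptions:

REQUIRED_SENSORY_TYPES = {
    # Mirrors TABLE 1: which sensory types support each operation.
    "move_robotic_platform": {"vision", "audio"},
    "drill":                 {"vision", "audio", "haptic"},
    "install_fastener":      {"vision", "audio", "haptic", "tactile"},
    "seal":                  {"vision", "audio"},
    "paint":                 set(),
    "remove_object":         {"vision", "audio", "haptic", "tactile"},
    "inspect":               {"vision"},
}


def streams_to_enable(operation: str) -> set:
    """Return the sensory streams an assisting agent might enable
    before the robotic platform starts the given operation."""
    return REQUIRED_SENSORY_TYPES.get(operation, {"vision"})


print(streams_to_enable("drill"))  # {'vision', 'audio', 'haptic'}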
Binocular stereoscopic vision is most useful for any manipulation task, such as placing a fastening tool over a fastener or grasping a dropped fastener, as well as inspection tasks, such as confirming the continuous profile of a sealant bead or distinguishing a mark that does not need repair from a scratch that does require repair. Binaural stereophonic audio may be useful for situational awareness. For example, binaural stereophonic audio may provide perception of the operating environment based on sound generated by the robotic platform and reflected from interior surfaces of the confined space. It is also useful for listening to tools such as drills for issues such as dulling or breakage. For example, a co-robotic system may have software that automatically monitors the sound portion of the multi-modal sensory data in order to detect operational defects (e.g., sub-optimal drilling may be characterized by distinct sounds). Force-reflecting haptic proprioception may be useful for placing fasteners in holes, or for sensing both pressure and drag forces on a cleaning pad during wiping.
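The automatic monitoring of the sound portion of the multi-modal sensory data mentioned above could, for example, watch the spectral content of drill audio for signatures of dulling or breakage. The following Python sketch is illustrative only; the frequency band and threshold are invented for the example:

import numpy as np


def drill_sound_flag(audio_frame: np.ndarray,
                     sample_rate_hz: int = 48_000,
                     band_hz: tuple = (4_000.0, 8_000.0),
                     threshold: float = 0.3) -> bool:
    """Return True if an unusually large share of the frame's energy
    falls in a band assumed (for illustration) to indicate dulling or breakage."""
    spectrum = np.abs(np.fft.rfft(audio_frame)) ** 2
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    band_fraction = spectrum[in_band].sum() / max(spectrum.sum(), 1e-12)
    return band_fraction > threshold


# Example: a synthetic 0.1 s frame dominated by a 6 kHz component is flagged.
t = np.arange(4800) / 48_000
frame = np.sin(2 * np.pi * 6_000 * t) + 0.1 * np.sin(2 * np.pi * 500 * t)
print(drill_sound_flag(frame))  # True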
In some arrangements, a co-robotic system may generate feedback to an operator and/or to one or more automated agents for monitoring the application of force in order to more closely follow operations (e.g., during drilling, to detect suboptimal drilling conditions). Tactile sensing may be useful for fine manipulation of tools or components. Finally, vestibular spatial orientation is useful to provide perception of the orientation of the remote vehicle or end effector by providing a seat-of-the-pants perception of angles and accelerations. It is also useful for detecting vibration due to motion or scraping of the workpiece.
Vision data type 273a or, more specifically, binocular stereoscopic vision data may be a part of overall multi-modal sensory data 272. The vision data may be obtained using optional stereoscopic vision sensor 512 or, more specifically, geometrically-correct binocular stereoscopic cameras and viewing systems.
Vision sensor 512 may achieve geometrically-correct image capture by utilizing two co-planar camera sensors 513. Camera sensors 513 may be modified to shift the center of each sensor off of the lens optical axis to shift the fields of view, allowing a visual area of the fields of view to be coincident. This particular arrangement of camera sensors 513 yields the geometrically-correct image not available with conventional stereo-cameras.
It is insufficient to consider only camera sensors 513 in a geometrically-correct telepresence viewing system. To reproduce reality as if the operator gazes on the scene with un-instrumented eyes, user interface 610 of remote control station 250 may include a particular output device 612, such as display 613a, that also adheres to equivalent geometries. For example, when the vision data is presented on display 613a, it is natural to consider display 613a as a window through which the operator gazes. By strictly adhering to equivalent geometries of a direct view with human eyes through a window for the binocular stereoscopic vision sensor 512, and of the view of a virtual image through the screen of display 613a, we can accurately reproduce the object scene.
Specifically, the inter-pupillary distance of an operator's eyes may be treated as a fixed measurement. The width of the window constrains the angle of view for each eye and defines the area of coincidence when the operator positions his or her eyes such that a line drawn through the two pupils is parallel with the window, and positions the cyclopean point (the point between the two pupils) normal to the plane of the window and centered on the aperture of the window. Selection of the effective window aperture is limited by the physical width of the display screen. Incorporating the distance of the viewer’s eyes from the display screen completes the system’s geometric constraints.
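These constraints can be made concrete with a short geometric sketch of the window model described above; the dimensions used in the example are assumptions for illustration, not values from this disclosure.

```python
import math

def per_eye_view_angle_deg(window_width_mm: float, ipd_mm: float,
                           viewing_distance_mm: float) -> float:
    """Horizontal angle of view of one eye through a window of the given
    width, with the cyclopean point centered on the window aperture and the
    inter-pupillary line parallel to the window plane."""
    near_offset = (window_width_mm - ipd_mm) / 2.0  # window edge nearer the eye
    far_offset = (window_width_mm + ipd_mm) / 2.0   # window edge farther away
    return math.degrees(math.atan2(near_offset, viewing_distance_mm)
                        + math.atan2(far_offset, viewing_distance_mm))

# Example: a 520 mm wide display viewed from 600 mm with a 65 mm inter-pupillary
# distance gives each eye approximately a 47-degree horizontal angle of view.
```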
Referring now to anthropomorphic audition, many subtle depth and manipulation cues are processed subconsciously through human stereophonic hearing. At the same time, the addition of this sensory modality to co-robotic system 200 is relatively simple, while the situational awareness it provides of the tele-presently perceived environment is substantial. Binaural audition data at a normal human scale may be provided by stereophonic audio sensor 514 or, more specifically, by anthropomorphically-correct stereophonic microphone 515. Stereophonic audio sensor 514 is a part of robotic platform 230 as, for example, shown in FIG. 3. For scaled stereophonic audition (e.g., to complement the scaled stereoscopic vision identified above), stereophonic audio sensor 514 may comprise high-fidelity miniature microphones. While many of the same binaural localization cues (e.g., intensity, timbre, spectral qualities, reflections in the confined space) may be maintained, the timing cues and phase cues in certain frequency bands may be reduced or altered, for example, if the interaural distance is altered.
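To illustrate how scaled microphone spacing affects timing cues, the sketch below uses the Woodworth approximation of the interaural time difference; this formula is an assumption introduced for illustration, not a model taken from this disclosure.

```python
import math

SPEED_OF_SOUND_M_S = 343.0

def interaural_time_difference_s(ear_radius_m: float, azimuth_rad: float) -> float:
    """Woodworth approximation of the interaural time difference for a sound
    source at the given azimuth (0 = straight ahead)."""
    return (ear_radius_m / SPEED_OF_SOUND_M_S) * (azimuth_rad + math.sin(azimuth_rad))

# Halving the effective interaural distance halves the timing cue:
human_itd = interaural_time_difference_s(0.0875, math.radians(45))
scaled_itd = interaural_time_difference_s(0.0875 / 2.0, math.radians(45))
# scaled_itd == human_itd / 2, one reason timing and phase cues may be
# reduced or altered when miniature microphones are spaced more closely.
```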
Referring now to force-reflecting haptic teleoperation, humans are very capable of navigating without vision (e.g., walking through a dark room) using touches and gentle bumps into objects. Force-reflecting haptic manipulation type 273c may be a low-bandwidth type of multi-modal sensory data 272. For interacting with the remote environment, force-feedback actuators and posture proprioception are added to present sensorimotor controls. For example, co-robotic system 200 may utilize a robotic hand and arm (which may be parts of one or both of sensors 510 and operating tools 520) with force-reflecting exo-skeleton controls (which may be parts of user interface 610 and include both output devices 612 and input devices 614). This approach allows the operator to perform a wide variety of operations naturally. In some arrangements, remote control station 250 includes a 4-axis force-feedback arm, a two-fingered force-feedback hand, and a force-reflecting exo-skeleton for fingers and arm. The reproduction of gross forces at the hand allows proprioception, or kinesthesia, which is the self-sense of the position of limbs and other parts of the body. This provides a significant additional cue to the immersive visual tele-perception and an overall enhancement of multi-modal sensory data 272.
Referring now to vestibular spatial orientation, the attitude (orientation) of robotic platform 230 or an operating tool (e.g., an end-effector) of robotic platform 230 may be relayed to remote control station 250 as a part of multi-modal sensory data 272. This attitude may be reproduced by adjusting the attitude of the operator's support platform or chair to achieve the vestibular spatial orientation feedback. This feedback may be performed at relatively low frequencies in comparison to other sensory types. Furthermore, the feedback may be scaled and/or limited for safety and other reasons (e.g., to prevent leaning the operator beyond the tipping point and effectively causing the operator to fall while providing this feedback). For example, an inertial measurement unit may be incorporated into the distal robotic system, and its measurements relayed to three actuators that drive the tele-supervisor's support platform.
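A minimal sketch of such scaled and limited attitude feedback is shown below; the scale factor and limit are assumed example values, and the actuator mapping itself is not specified by this disclosure.

```python
def support_platform_command(platform_roll_deg: float, platform_pitch_deg: float,
                             scale: float = 0.5, limit_deg: float = 15.0):
    """Map the attitude reported by the robotic platform (or its end effector)
    to a scaled, safety-limited attitude command for the operator's support
    platform or chair."""
    def scale_and_clip(angle_deg: float) -> float:
        commanded = angle_deg * scale
        return max(-limit_deg, min(limit_deg, commanded))

    return scale_and_clip(platform_roll_deg), scale_and_clip(platform_pitch_deg)
```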
Robotic platform 230 also includes communication module 540 for communicatively coupling to remote control station 250 positioned outside of confined space 210. Some examples of communication module 540 include modems (wired or wireless) and the like. In some arrangements, communication module 540 is a wireless communication module.
In some arrangements, robotic platform 230 further comprises operating tool 520 for performing one or more operations within confined space 210. Some examples of operating tools 520 include, but are not limited to, a drill, a rivet gun, a sealant applicator, and an inspection device.
In some arrangements, robotic platform 230 further comprises drive mechanism 530 for changing the position of robotic platform 230 within confined space 210. One example of drive mechanism 530 is a set of treads coupled to a motor. However, other examples are also within the scope. While robotic platform 230 is shown as a treaded vehicle in FIG. 2, any type of robotic platform 230 capable of generating multi-modal sensory data is within the scope.
In some arrangements, robotic platform 230 and/or remote control station 250 may include one or more optional assisting agents (block 550 in FIG. 3 and block 650 in FIG. 4) to assist the human operator with various control operations of co-robotic system 200. Specifically, an assisting agent may utilize the multi-modal sensory data obtained by robotic platform 230 to provide some level of control to robotic platform 230, to modify the multi-modal sensory data prior to generating the representation of this data for the operator, and/or to modify the control instructions generated based on the user input. This provides some level of automation. For example, the assisting agent may autonomously monitor, interpret, indicate, automate, and limit operations of robotic platform 230. In some arrangements, one or more task domains of co-robotic system 200 are analyzed and defined, allowing their modular development, testing, and incorporation using an assisting agent. For example, an assisting agent may perform navigation functions, task-specific planning, and monitoring. Co-robotic system 200 supports fall-forward/fall-back cooperation between its one or more autonomous agents and user input.
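One way such an assisting-agent pipeline might be organized is sketched below; the data structures and function names are hypothetical, and the sketch only illustrates agents modifying sensory data before presentation and modifying control instructions derived from user input.

```python
from typing import Callable, Iterable

SensoryData = dict           # e.g., {"vision": ..., "audio": ..., "haptic": ...}
ControlInstructions = dict   # e.g., {"command": ..., "parameters": ...}

def present_to_operator(data: SensoryData,
                        agents: Iterable[Callable[[SensoryData], SensoryData]]) -> SensoryData:
    """Let each assisting agent modify the multi-modal sensory data before it
    is rendered on the user interface (e.g., overlays, scaling, filtering)."""
    for agent in agents:
        data = agent(data)
    return data

def instructions_from_input(user_input: dict, data: SensoryData,
                            agents: Iterable[Callable[..., ControlInstructions]]) -> ControlInstructions:
    """Derive control instructions from user input, then let assisting agents
    limit or augment them (e.g., enforce restricted zones)."""
    instructions: ControlInstructions = {"command": user_input}
    for agent in agents:
        instructions = agent(instructions, data)
    return instructions
```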
FIG. 4 is a schematic representation of remote control station 250, in accordance with some arrangements. Remote control station 250 comprises user interface 610 for generating one or more representations of the multi-modal sensory data and/or for capturing user input. Specifically, user interface 610 may include one or more output devices 612, some examples of which include, but are not limited to, a display (e.g., a 3-D display), and speakers (e.g., a set of stereophonic speakers). Furthermore, user interface 610 may include one or more input devices 614.
Remote control station 250 also comprises communication module 640 for communicatively coupling to robotic platform 230 while robotic platform 230 is within confined space 210. Communication module 640 may be of the same type as communication module 540 of robotic platform 230. In some arrangements, remote control station 250 also comprises a processor for generating control instructions for robotic platform 230 and memory 635 for storing these instructions and multi-modal sensory data 272.
As noted above, remote control station 250 may also include one or more optional assisting agents 650. Operations of user interface 610 may be integrated with operations of assisting agents 650 such that the multi-modal sensory data may be modified prior to presenting it on user interface 610. In some arrangements, user input captured by user interface 610 may be modified by assisting agents 650 prior to generating control instructions for robotic platform 230.
In some arrangements, high-level human supervision of autonomous actions is supported by intelligent assisting agents. This approach brings greater autonomy to various tasks, such as safe path planning and navigation, automatic task-specific operations, or system 'health' monitoring.
Overall, remote control station 250 may be the hub of planning, control, and collaboration for the entire co-robotic system 200. Remote control station 250 may be involved in mobility, manipulation, tele-sensing, autonomous agent tasking, and other operations of co-robotic system 200. Remote control station 250 may provide a portal to facilitate remote experts’ collaboration (e.g., including multiple experts and/or assisting agents).
Remote control station 250 supports direct human operation or, more specifically, teleoperation by providing situation awareness through immersive multi-sensory high-fidelity presence. Furthermore, remote control station 250 may provide precise physical interaction through haptic teleoperation. The augmented human operation may be supported by autonomous agents, for example, to monitor for safety and to assist the worker. Scaling various aspects of the multi-modal sensory data (e.g., scaling vision data) provides better matching between the actual environment and the operator’s perception and senses. Furthermore, remote control station 250 may provide augmentation of the spectra of vision, audition, spatial orientation, and proprioception.
Examples of Operating Co-Robotic Systems and Its Components
FIG. 6 is a process flowchart corresponding to method 300 for remotely controlling robotic platform 230 in confined space 210, in accordance with some arrangements. The control is performed based on multi-modal sensory data. Specifically, method 300 refers to operations performed by robotic platform 230. Operations performed at or by remote control station 250 are described below with reference to FIG. 7. One having ordinary skill in the art would understand that both sets of operations are parts of the same operating scheme of co-robotic system 200 even though they can be performed by different parties, such as one party controlling operations of robotic platform 230 and another party controlling operations of remote control station 250.
Method 300 may commence with positioning robotic platform 230 within confined space 210 of structure 212 during operation 310. Structure 212 may be an aircraft wing or any other structure that, for example, may not be suitable for humans to operate in. This positioning operation may involve advancing (e.g., driving) robotic platform 230 into and within confined space 210 based on control instructions generated at robotic platform 230 (e.g., autonomous or semi-autonomous movement) and/or generated at remote control station 250 and transmitted to robotic platform 230. Specifically, robotic platform 230 may include drive mechanism 530 as further described above with reference to FIG. 3 and this drive mechanism 530 may be utilized for positioning robotic platform 230 within confined space 210. Alternatively, robotic platform 230 does not have any drive mechanisms and it may be positioned within confined space 210 manually.
It should be noted that while positioned in confined space 210, robotic platform 230 is communicatively coupled to remote control station 250 positioned outside of confined space 210. In some arrangements, method 300 may involve an operation at which communicative coupling between robotic platform 230 and remote control station 250 is established, such as operation 312 shown in FIG. 6.
Once robotic platform 230 is positioned within confined space 210, method 300 may proceed with obtaining multi-modal sensory data 272 during operation 314. Multi-modal sensory data 272 may be obtained using two or more sensors 510 of robotic platform 230. Multi-modal sensory data 272 may include at least two different types 273 of sensory responses, such as binocular stereoscopic vision type 273a, binaural stereophonic audio type 273b, force-reflecting haptic manipulation type 273c, and tactile type 273d. Various aspects of multi-modal sensory data 272 are described above.
In some arrangements, method 300 involves augmenting multi-modal sensory data 272 during optional operation 316. In general, multi-modal sensory data 272 may be augmented prior to transmitting it to remote control station 250 (e.g., at robotic platform 230) or after transmitting (e.g., at remote control station 250). In either case, augmentation of multi-modal sensory data 272 may be performed by autonomous agents. Specifically, the agents may autonomously monitor, interpret, indicate, automate, and limit multi-modal sensory data 272.
For example, a visual augmentation agent may address autonomous detection of visual features-of-interest. This agent may identify these features to the operator using, for example, a 3D visual overlay presented on the user interface. The overlay may be aligned with the actual image of the environment. The operator may be able to turn off the visual augmentation agent to reduce distraction. The feature set may be selected from task-specific needs and may include automatic detection of missing components such as fasteners, coating flaws, and items that should not be present (foreign object debris), such as dropped fasteners or tools, or excess coating material. Different types of inspections are within the scope.
Another example of a visual augmentation agent may address localization within the confined working spaces using sensors. For example, a three-dimensional map of the working space may be built using this agent based on one or more components of the multi-modal sensory data, such as the vision component and/or the touch component. A separate virtual display may be presented to the operator. This display may show the mapped space and the current position of robotic platform 230 within the space. This reference map may provide a non-immersive higher-level situation awareness. This example of the visual augmentation agent may also include controls, such as point-of-view adjustment.
Another example is a physical augmentation agent, which may provide selective scaling of movement and force, and limiting of force-reflection, based on the task and operator requirements. The same or another physical augmentation agent may utilize the localization data described above. For example, based on the mapped workspace and the position and posture of deployed robotic platform 230, the intelligent assisting agent may determine safe work-zones and restricted zones (e.g., to prevent unwanted collisions or damage). These zones may be updated in real time as multi-modal sensory data 272 is being obtained.
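A deliberately simple sketch of how restricted zones might be recomputed from an occupancy map is shown below; the dilation-based heuristic is an assumption for illustration, since the disclosure does not specify a zone-determination algorithm.

```python
import numpy as np

def restricted_zone_mask(occupancy_grid: np.ndarray, clearance_cells: int) -> np.ndarray:
    """Mark cells within `clearance_cells` (Chebyshev distance) of any occupied
    cell as restricted; the remaining cells form the safe work-zone."""
    restricted = occupancy_grid > 0
    for _ in range(clearance_cells):
        padded = np.pad(restricted, 1, mode="constant")
        grown = np.zeros_like(restricted)
        # A cell becomes restricted if it or any 8-neighbour is restricted.
        for dy in (0, 1, 2):
            for dx in (0, 1, 2):
                grown |= padded[dy:dy + restricted.shape[0],
                                dx:dx + restricted.shape[1]]
        restricted = grown
    return restricted
```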
One example of augmentation is scaling. Scaling of one or more types of multi-modal sensory data 272 may be used to present data 272 in a format more naturally understood by the operator, e.g., more in line with the scale of the operator. This type of scaling may be referred to as scaling of the human operator. In some arrangements, the scaling may be performed using one or more agents described above. Scaling is a powerful expansion of the perceptual capabilities of the vision and/or other components. Modifying the effective scale of vision type 273a of multi-modal sensory data 272 may not involve changing the magnification of cameras 513, as this introduces depth distortions along the optical axis. Instead, it may be achieved by changing the inter-pupillary distance of the camera lenses such that a scaled viewing geometry is achieved.
Method 300 may proceed with transmitting the multi-modal sensory data during operation 320. The data is transmitted from robotic platform 230 positioned in confined space 210 to remote control station 250 positioned outside of confined space 210.
Method 300 then proceeds with receiving remote control instructions from remote control station 250 during operation 330. The remote control instructions are received by robotic platform 230 using communication link 270 and are generated by remote control station 250. Generation of these remote control instructions is further described below with reference to FIG. 7. Briefly, these remote control instructions may be generated based on user input 254 and/or by various assisting agents 650, which may be provided at remote control station 250. The remote control instructions should be distinguished from local control instructions generated by robotic platform 230.
In some arrangements, method 300 involves generating local control instructions at robotic platform 230 during optional operation 334. The local control instructions may be generated based on the multi-modal sensory data.
In some arrangements, method 300 further comprises performing one or more operations within the confined space using the robotic platform during optional operation 350. Operation 350 may be performed at least based on the remote control instructions received from remote control station 250 at robotic platform 230. In some arrangements, local control instructions may also be used for operation 350. Some examples of operation 350 include, but are not limited to, changing the position of robotic platform 230 within confined space 210 (block 350a), drilling component 214 of structure 212 (block 350b), installing a fastener into structure 212 (block 350c), sealing structure 212 (block 350d), painting structure 212 (block 350e), removing an object from confined space 210 (block 350f), and inspecting structure 212 (block 350g). One having ordinary skill in the art would understand that various other examples of operation 350 are also within the scope.
For example, the operation may be changing the position of the robotic platform within the confined space. In this example, the multi-modal sensory data may comprise at least the binocular stereoscopic vision type and the stereoscopic audio type.
In another example, the operation may comprise drilling the component of the structure. The multi-modal sensory data may comprise at least the binocular stereoscopic vision type, the stereoscopic audio type, and the force-reflecting haptic manipulation type.
In yet another example, the operation may comprise installing the fastener into the structure. The multi-modal sensory data may comprise at least the binocular stereoscopic vision type, the stereoscopic audio type, the force-reflecting haptic manipulation type, and the tactile type.
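For illustration, received remote control instructions might be dispatched to the corresponding platform operation roughly as follows; the instruction schema and platform methods are hypothetical and not an interface defined by this disclosure.

```python
def perform_operation(instruction: dict, platform) -> None:
    """Dispatch a received remote control instruction to a platform operation
    (cf. blocks 350a-350g)."""
    handlers = {
        "move":    platform.change_position,   # cf. block 350a
        "drill":   platform.drill,             # cf. block 350b
        "fasten":  platform.install_fastener,  # cf. block 350c
        "seal":    platform.seal,              # cf. block 350d
        "paint":   platform.paint,             # cf. block 350e
        "remove":  platform.remove_object,     # cf. block 350f
        "inspect": platform.inspect,           # cf. block 350g
    }
    handler = handlers.get(instruction.get("operation"))
    if handler is None:
        raise ValueError(f"Unsupported operation: {instruction!r}")
    handler(**instruction.get("parameters", {}))
```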
The fidelity level of the multi-modal sensory data may correspond to the one or more operations. Some operations may require a higher fidelity level than other operations. Furthermore, the fidelity level of the multi-modal sensory data may change over time.
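One way to express operation-dependent fidelity is a per-operation profile, as in the sketch below; the rates shown are assumed example numbers, not values given in this disclosure.

```python
# Hypothetical fidelity profiles: finer manipulation gets higher-rate streams.
FIDELITY_PROFILES = {
    "move_platform":    {"video_hz": 15, "audio_khz": 16, "haptic_hz": 0},
    "drill":            {"video_hz": 30, "audio_khz": 32, "haptic_hz": 500},
    "install_fastener": {"video_hz": 60, "audio_khz": 48, "haptic_hz": 1000},
}

DEFAULT_PROFILE = {"video_hz": 30, "audio_khz": 32, "haptic_hz": 500}

def fidelity_for(operation: str) -> dict:
    """Select a fidelity profile for the requested operation; the profile can
    also be changed over time as the operation progresses."""
    return FIDELITY_PROFILES.get(operation, DEFAULT_PROFILE)
```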
FIG. 7 is a process flowchart corresponding to method 400 for remotely controlling robotic platform 230 in confined space 210 from the perspective of remote control station 250, in accordance with some arrangements. The control is performed based on multi-modal sensory data. The operations of method 400 are performed by remote control station 250. Operations performed at or by robotic platform 230 are described above with reference to FIG. 6.
Method 400 may commence with receiving the multi-modal sensory data from robotic platform 230 during operation 420. During this receiving operation, robotic platform 230 is positioned in confined space 210. The multi-modal sensory data is received by remote control station 250 positioned outside of confined space 210. Furthermore, remote control station 250 is communicatively coupled to robotic platform 230. As noted above, the multi-modal sensory data may comprise at least two of the following sensory response types: a binocular stereoscopic vision type, a binaural stereophonic audio type, a force-reflecting haptic manipulation type, and a tactile type.
Method 400 may proceed with generating a representation of the multi-modal sensory data at the remote control station during operation 430. In some arrangements, this representation generating operation comprises augmenting the multi-modal sensory data based on at least one of video spectrum, audio spectrum, spatial orientation, and proprioception. The representation may be a multi-sensory high-fidelity telepresence.
In some arrangements, the representation is generated on user interface 610 of remote control station 250. User interface 610 or, more specifically, output device 612 of user interface 610 may comprise 3D display 613a generating 3D images based on the multi-modal sensory data. In the same or other arrangements, user interface 610 comprises stereo speakers 613b generating stereo sound based on the binaural stereophonic audio type of the multi-modal sensory data.
Method 400 may proceed with capturing user input at the remote control station during operation 440. The remote control instructions may be generated based on the user input. In some arrangements, at least some of the remote control instructions are generated by remote control station 250 without the user input.
Method 400 may proceed with transmitting the remote control instructions to robotic platform 230 during operation 460. During this operation, robotic platform 230 is positioned in confined space 210. The remote control instructions may represent one or more operations performed by robotic platform 230 within confined space 210. Some examples of these operations are presented above.
In some arrangements, data receiving operation 420 and representation generating operation 430 are performed continuously.
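A minimal sketch of this continuous receive/present/capture/transmit loop at the remote control station is shown below; the `link` and `ui` interfaces are hypothetical placeholders, not components defined in this disclosure.

```python
def remote_control_station_loop(link, ui, agents=()):
    """Continuously receive multi-modal sensory data, present it, capture user
    input, and transmit remote control instructions (cf. operations 420-460)."""
    while link.is_connected():
        sensory_data = link.receive_sensory_data()           # cf. operation 420
        for agent in agents:                                 # optional augmentation
            sensory_data = agent(sensory_data)
        ui.render(sensory_data)                              # cf. operation 430
        user_input = ui.poll_input()                         # cf. operation 440
        if user_input is not None:
            link.send_instructions({"command": user_input})  # cf. operation 460
```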
Examples of Aircraft and Methods of Fabricating and Operating Aircraft
Examples of the present disclosure may be described in the context of aircraft manufacturing and service method 1200 as shown in FIG. 8 and aircraft 100 as shown in FIG. 1. During pre-production, illustrative method 1200 may include specification and design (block 1204) of aircraft 100 and material procurement (block 1206). During production, component and subassembly manufacturing (block 1208) and inspection system integration (block 1210) of aircraft 100 may take place. Described methods and assemblies may involve remotely controlling a robotic platform based on multi-modal sensory data as described above and can be used in any of specification and design (block 1204) of aircraft 100, material procurement (block 1206), component and subassembly manufacturing (block 1208), and/or inspection system integration (block 1210) of aircraft 100.
Thereafter, aircraft 100 may go through certification and delivery (block 1212) to be placed in service (block 1214). While in service, aircraft 100 may be scheduled for routine maintenance and service (block 1216). Routine maintenance and service may include modification, reconfiguration, refurbishment, etc. of one or more inspection systems of aircraft 100. Described methods and assemblies may involve remotely controlling a robotic platform based on multi-modal sensory data as described above. This approach may be used in any of certification and delivery (block 1212), service (block 1214), and/or routine maintenance and service (block 1216).
Each of the processes of illustrative method 1200 may be performed or carried out by an inspection system integrator, a third party, and/or an operator (e.g., a customer). For the purposes of this description, an inspection system integrator may include, without limitation, any number of aircraft manufacturers and major-inspection system subcontractors; a third party may include, without limitation, any number of vendors, subcontractors, and suppliers; and an operator may be an airline, leasing company, military entity, service organization, and so on.
As shown in FIG. 1, aircraft 100 produced by illustrative method 1200 may include airframe 150 with an interior 170. As previously described, aircraft 100 further includes wings 120 coupled to airframe 150, with engines 130 coupled to wings 120. Airframe 150 further includes a number of high-level inspection systems such as electrical inspection system 140 and environmental inspection system 160. Any number of other inspection systems may be included. Although an aerospace example is shown, the principles disclosed herein may be applied to other industries, such as the automotive industry. Accordingly, in addition to aircraft 100, the principles disclosed herein may apply to other vehicles, e.g., land vehicles, marine vehicles, space vehicles, etc.
Apparatus(es) and method(s) shown or described herein may be employed during any one or more of the stages of manufacturing and service method (illustrative method 1200). For example, components or subassemblies corresponding to component and subassembly manufacturing (block 1208) may be fabricated or manufactured in a manner similar to components or subassemblies produced while aircraft 100 is in service (block 1214). Also, one or more examples of the apparatus(es), method(s), or combination thereof may be utilized during production stages (block 1208) and (block 1210), for example, by substantially expediting assembly of or reducing the cost of aircraft 100. Similarly, one or more examples of the apparatus or method realizations, or a combination thereof, may be utilized, for example and without limitation, while aircraft 100 is in service (block 1214) and/or during maintenance and service (block 1216).
Conclusion
Different examples of the apparatus(es) and method(s) disclosed herein include a variety of components, features, and functionalities. It should be understood that the various examples of the apparatus(es) and method(s) disclosed herein may include any of the components, features, and functionalities of any of the other examples of the apparatus(es) and method(s) disclosed herein in any combination, and all of such possibilities are intended to be within the scope of the present disclosure.
Many modifications of examples set forth herein will come to mind to one skilled in the art to which the present disclosure pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.
Thus, in summary, the following arrangements are disclosed:
A method 300 for remotely controlling a robotic platform 230 based on multi-modal sensory data 272 is disclosed, wherein the method 300 may comprise: positioning 310 the robotic platform 230; communicatively coupling the robotic platform 230 to a remote control station 250; obtaining 314 the multi-modal sensory data 272 using two or more sensors 510 of the robotic platform 230, the multi-modal sensory data 272 comprising at least two sensory response types; transmitting 320 at least a portion of the multi-modal sensory data 272; and receiving 330 remote control instructions from the remote control station 250 at the robotic platform 230.
The robotic platform 230 may be positioned in a confined space 210 of a structure 212. Transmitting 320 at least the portion of the multi-modal sensory data 272 may be performed while the robotic platform 230 is positioned in the confined space 210.
The at least two sensory response types may be selected from the group consisting of a binocular stereoscopic vision type 273a, a binaural stereophonic audio type 273b, a force-reflecting haptic manipulation type 273c, and a tactile type 273d. The method 300 may comprise generating 334 local control instructions at the robotic platform 230 based on the multi-modal sensory data 272.
The method 300 may comprise performing 350 one or more operations within the confined space 210 using the robotic platform 230 based on the local control instructions.
The multi-modal sensory data 272 may comprise at least the binocular stereoscopic vision type 273a, the binaural stereophonic audio type 273b, and the force-reflecting haptic manipulation type 273c. The one or more operations 350 may comprise drilling 350b the component 214 of the structure 212. The multi-modal sensory data 272 may comprise at least the binocular stereoscopic vision type 273a, the binaural stereophonic audio type 273b, the force-reflecting haptic manipulation type 273c, and the tactile type 273d. The one or more operations 350 may comprise installing 350c the fastener into the structure 212.
Obtaining 314 the multi-modal sensory data 272 and transmitting the multi-modal sensory data 272 may be repeated continuously. The method 300 may comprise augmenting 316 the multi-modal sensory data 272 prior to transmitting 320 at least the portion of the multi-modal sensory data 272. The method 300 may comprise selecting 318 at least the portion of the multi-modal sensory data 272 for transmitting.
The method 300 may comprise performing 350 one or more operations within the confined space 210 using the robotic platform 230 based on the remote control instructions received from the remote control station 250 at the robotic platform 230.
The one or more operations may be selected from the group consisting of: changing 350a position of the robotic platform 230 within the confined space 210, drilling 350b a component 214 of the structure 212, installing 350c a fastener into the structure 212, sealing 350d the structure 212, painting 350e the structure 212, removing 350f an object from a confined space 210, and inspecting 350g the structure 212.
A fidelity level of the multi-modal sensory data 272 may correspond to the one or more operations. A fidelity level of the multi-modal sensory data 272 may change over time.
The one or more operations may be performed also based on local control instructions generated at the robotic platform 230 such that the local control instructions may be combined with the remote control instructions to perform the one or more operations.
The one or more operations may comprise changing 350a the position of the robotic platform 230 within the confined space 210, and wherein the multi-modal sensory data 272 may comprise at least the binocular stereoscopic vision type 273a and the stereoscopic audio type 273b.
The method may comprise augmenting 336 the remote control instructions received from the remote control station 250. The structure 212 may be an aircraft wing. The robotic platform 230 may be communicatively coupled to the remote control station 250 using a local area network. The robotic platform 230 may be communicatively coupled to the remote control station 250 using at least one wireless communication link. The robotic platform 230 may be communicatively coupled to the remote control station 250 using a global communication network.
The following arrangements are also disclosed: A method 400 for remotely controlling a robotic platform 230 in a confined space 210 of a structure 212 based on multi-modal sensory data 272 is disclosed, wherein the method 400 may comprise: receiving 420 the multi-modal sensory data 272 from the robotic platform 230 positioned in the confined space 210, the multi-modal sensory data 272 being received by a remote control station 250 positioned outside of the confined space 210 and communicatively coupled to the robotic platform 230, the multi-modal sensory data 272 comprising at least two sensory response types selected from the group consisting of a binocular stereoscopic vision type 273a, a binaural stereophonic audio type 273b, a force-reflecting haptic manipulation type 273c, and a tactile type 273d; generating 430 a representation of the multi-modal sensory data 272 by the remote control station 250; capturing 440 user input at the remote control station 250; and transmitting 460 remote control instructions to the robotic platform 230 positioned in the confined space 210.
Generating 430 the representation of the multi-modal sensory data 272 may comprise augmenting the multi-modal sensory data 272 based on at least one of video spectrum, audio spectrum, spatial orientation, and proprioception. The representation may be a multi-sensory high-fidelity telepresence.
A user interface 610 of the remote control station 250 may comprise a 3D display 613a for presenting the binocular stereoscopic vision type 273a of the multi-modal sensory data 272. A user interface 610 of the remote control station 250 may comprise stereo speakers 613b for presenting the binaural stereophonic audio type 273b of the multi-modal sensory data 272. The remote control instructions may represent one or more operations performed by the robotic platform 230 within the confined space 210.
The one or more operations may be selected from the group consisting of: changing 350a position of the robotic platform 230 within the confined space 210, drilling 350b a component 214 of the structure 212, installing 350c a fastener into the structure 212, sealing 350d the structure 212, painting 350e the structure 212, removing 350f an object from a confined space 210, and inspecting 350g the structure 212.
At least receiving 420 the multi-modal sensory data 272 and generating 430 the representation may be performed continuously. The remote control instructions may be generated based on the user input. The robotic platform 230 may be communicatively coupled to the remote control station 250 using a local area network. The robotic platform 230 may be communicatively coupled to the remote control station 250 using a global communication network.
The following arrangements are also disclosed: A robotic platform 230 for operating in a confined space 210 of a structure 212 using multi-modal sensory data 272 is disclosed, wherein the robotic platform 230 may comprise: sensors 510 for generating the multi-modal sensory data 272; and a communication module 540 for communicatively coupling to a remote control station 250 positioned outside of the confined space 210.
The sensors 510 may comprise at least two selected from the group consisting of a binocular stereoscopic vision sensor 512, a binaural stereophonic audio sensor 514, a force-reflecting haptic manipulation sensor 516, and a tactile sensor 518.
The following arrangements are also disclosed: A remote control station 250 for controlling a robotic platform 230 using multi-modal sensory data 272 is disclosed, wherein the remote control station 250 may comprise: a communication module 540 for communicatively coupling to the robotic platform 230 and for receiving the multi-modal sensory data 272 from the robotic platform 230, the multi-modal sensory data 272 comprising at least two sensory response types; and a user interface 610 comprising an output device 612 for generating a representation of the multi-modal sensory data 272 received from the robotic platform 230.
The at least two sensory response types may be selected from the group consisting of a binocular stereoscopic vision type 273a, a binaural stereophonic audio type 273b, a force-reflecting haptic manipulation type 273c, and a tactile type 273d.
The following arrangements are also disclosed: A method 300 for remotely controlling a robotic platform 230 in a confined space 210 of a structure 212 based on multi-modal sensory data 272 is disclosed, wherein the method 300 may comprise: obtaining 314 the multi-modal sensory data 272 using two or more sensors 510 of the robotic platform 230, the multi-modal sensory data 272 comprising at least two sensory response types; transmitting 320 at least a portion of the multi-modal sensory data 272 to a remote control station 250; and generating a representation of the multi-modal sensory data 272 by the remote control station 250.
The robotic platform 230 may be positioned in a confined space 210 of a structure 212. Transmitting at least the portion of the multi-modal sensory data 272 may be performed while the robotic platform 230 is positioned in the confined space 210.
The at least two sensory response types may be selected from the group consisting of a binocular stereoscopic vision type 273a, a binaural stereophonic audio type 273b, a force-reflecting haptic manipulation type 273c, and a tactile type 273d. The method 300 may comprise augmenting the multi-modal sensory data 272 prior to transmitting at least the portion of the multi-modal sensory data 272.
The method 300 may comprise selecting at least the portion of the multi-modal sensory data 272 for transmitting. The method 300 may comprise performing 350 one or more operations within the confined space 210 using the robotic platform 230 based on the remote control instructions received from the remote control station 250 at the robotic platform 230.
The one or more operations may be selected from the group consisting of changing 350a position of the robotic platform 230 within the confined space 210, drilling 350b a component 214 of the structure 212, installing 350c a fastener into the structure 212, sealing 350d the structure 212, painting 350e the structure 212, removing 350f an object from a confined space 210, and inspecting 350g the structure 212.
A fidelity level of the multi-modal sensory data 272 may correspond to the one or more operations. A fidelity level of the multi-modal sensory data 272 may change over time.
The one or more operations may be performed also based on local control instructions generated at the robotic platform 230 such that the local control instructions may be combined with the remote control instructions to perform the one or more operations.
The method 300 may comprise: capturing 440 user input at the remote control station 250; and transmitting 460 remote control instructions to the robotic platform 230 positioned in the confined space 210.
Generating 430 the representation of the multi-modal sensory data 272 may comprise augmenting the multi-modal sensory data 272 based on at least one of video spectrum, audio spectrum, spatial orientation, and proprioception. The representation may be a multi-sensory high-fidelity telepresence.
A user interface 610 of the remote control station 250 may comprise a 3D display 613a for presenting the binocular stereoscopic vision type 273a of the multi-modal sensory data 272. A user interface 610 of the remote control station 250 may comprise stereo speakers 613b for presenting the binaural stereophonic audio type 273b of the multi-modal sensory data 272.
Therefore, it is to be understood that the present disclosure is not to be limited to the specific examples illustrated and that modifications and other examples are intended to be included within the scope of the appended claims. Moreover, although the foregoing description and the associated drawings describe examples of the present disclosure in the context of certain illustrative combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative implementations without departing from the scope of the appended claims.
Accordingly, parenthetical reference numerals in the appended claims are presented for illustrative purposes only and are not intended to limit the scope of the claimed subject matter to the specific examples provided in the present disclosure.

Claims (15)

1. A method (300) for remotely controlling of a robotic platform (230) based on multi-modal sensory data (272), the method (300) comprising:
positioning (310) the robotic platform (230), communicatively coupling the robotic platform (230) to a remote control station (250); obtaining (314) the multi-modal sensory data (272) using two or more sensors (510) of the robotic platform (230), the multi-modal sensory data (272) comprising at least two sensory response types; transmitting (320) at least a portion of the multi-modal sensory data (272); and receiving (330) remote control instructions from the remote control station (250) at the robotic platform (230).
2. The method (300) of claim 1, wherein the robotic platform (230) is positioned in a confined space (210) of a structure (212).
3. The method (300) of claim 2, wherein transmitting (320) at least the portion of the multi-modal sensory data (272) is performed while the robotic platform (230) is positioned in the confined space (210).
4. The method (300) of any preceding claim, wherein the at least two sensory response types are selected from the group consisting of a binocular stereoscopic vision type (273a), a binaural stereophonic audio type (273b), a force-reflecting haptic manipulation type (273c), and a tactile type (273d).
5. The method (300) of any preceding claim, further comprising generating (334) local control instructions at the robotic platform (230) based on the multi-modal sensory data (272).
6. The method (300) of claim 5, further comprising performing (350) one or more operations within a/the confined space (210) using the robotic platform (230) based on the local control instructions.
7. The method (300) of any of claims 4-6, wherein the multi-modal sensory data (272) comprises at least the binocular stereoscopic vision type (273a), the binaural stereophonic audio type (273b), and the force-reflecting haptic manipulation type (273c).
8. The method (300) of claims 4-6, wherein the multi-modal sensory data (272) comprises at least the binocular stereoscopic vision type (273a), the binaural stereophonic audio type (273b), the force-reflecting haptic manipulation type (273c), and the tactile type (273d).
9. The method (300) of any preceding claim, further comprising augmenting (316) the multi-modal sensory data (272) prior to transmitting (320) at least the portion of the multi-modal sensory data (272).
10. The method (300) of any preceding claim, further comprising performing (350) one or more operations within the confined space (210) using the robotic platform (230) based on the remote control instructions received from the remote control station (250) at the robotic platform (230).
11. The method (300) of claim 10, wherein the one or more operations are selected from the group consisting of:
changing (350a) position of the robotic platform (230) within the confined space (210), drilling (350b) a component (214) of the structure (212), installing (350c) a fastener into the structure (212), sealing (350d) the structure (212), painting (350e) the structure (212), removing (350f) an object from a confined space (210), and inspecting (350g) the structure (212).
12. The method (300) of claim 10 or 11, wherein the one or more operations are performed also based on local control instructions generated at the robotic platform (230) such that the local control instructions are combined with the remote control instructions to perform the one or more operations.
13. The method (300) of any preceding claim, wherein the structure (212) is an aircraft wing.
14. A remote control station (250) for controlling a robotic platform (230) using multi-modal sensory data (272), the robotic platform (230) comprising:
a communication module (540) for communicatively coupling to the robotic platform (230) and for receiving the multi-modal sensory data (272) from the robotic platform (230), the multi-modal sensory data (272) comprising at least two sensory response types; and a user interface (610) comprising an output device (612) for generating a representation of the multi-modal sensory data (272) received from the robotic platform (230).
15. The remote control station (250) of claim 14, wherein the at least two sensory response types are selected from the group consisting of a binocular stereoscopic vision type (273a), a binaural stereophonic audio type (273b), a force-reflecting haptic manipulation type (273c), and a tactile type (273d).
GB1708992.1A 2016-06-10 2017-06-06 Remotely controlling robotic platforms based on multi-modal sensory data Active GB2553617B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/179,493 US10272572B2 (en) 2016-06-10 2016-06-10 Remotely controlling robotic platforms based on multi-modal sensory data

Publications (3)

Publication Number Publication Date
GB201708992D0 GB201708992D0 (en) 2017-07-19
GB2553617A true GB2553617A (en) 2018-03-14
GB2553617B GB2553617B (en) 2020-09-16

Family

ID=59350033

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1708992.1A Active GB2553617B (en) 2016-06-10 2017-06-06 Remotely controlling robotic platforms based on multi-modal sensory data

Country Status (5)

Country Link
US (1) US10272572B2 (en)
JP (2) JP2018008369A (en)
KR (1) KR102369855B1 (en)
CN (1) CN107491043A (en)
GB (1) GB2553617B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10023250B2 (en) 2016-06-10 2018-07-17 The Boeing Company Multi-tread vehicles and methods of operating thereof
US10257241B2 (en) * 2016-12-21 2019-04-09 Cisco Technology, Inc. Multimodal stream processing-based cognitive collaboration system
US20190219994A1 (en) * 2018-01-18 2019-07-18 General Electric Company Feature extractions to model large-scale complex control systems
JP6823016B2 (en) * 2018-07-17 2021-01-27 ファナック株式会社 Numerical control device
US11007637B2 (en) 2019-05-17 2021-05-18 The Boeing Company Spherical mechanism robot assembly, system, and method for accessing a confined space in a vehicle to perform confined space operations
CN111209942B (en) * 2019-12-27 2023-12-19 广东省智能制造研究所 Multi-mode sensing abnormality monitoring method for foot robot
NO346361B1 (en) * 2020-04-29 2022-06-27 Conrobotix As CONTROL SYSTEM FOR OPERATING WORKING OPERATIONS WITH TOOLS IN A ROBOT ADAPTED FOR TOOL HANDLING
KR102495920B1 (en) * 2020-10-16 2023-02-06 위더스(주) Wireless communication system to operate android platform in android based device without display and wireless communication method thereof
US20220331966A1 (en) * 2021-04-09 2022-10-20 Beyond Imagination Inc. Mobility surrogates
CN113119125B (en) * 2021-04-14 2022-08-05 福建省德腾智能科技有限公司 Monitoring interaction method based on multi-mode information
CN113829344B (en) * 2021-09-24 2022-05-03 深圳群宾精密工业有限公司 Visual guide track generation method, device, equipment and medium suitable for flexible product
CN113927602B (en) * 2021-11-12 2023-03-17 哈尔滨工业大学(深圳) Robot precision assembly control method and system based on visual and tactile fusion
WO2023188104A1 (en) * 2022-03-30 2023-10-05 三菱電機株式会社 Remote experience system, information processing device, information processing method, and program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020153185A1 (en) * 2001-04-18 2002-10-24 Jeong-Gon Song Robot cleaner, system employing the same and method for re-connecting to external recharging device
WO2011116332A2 (en) * 2010-03-18 2011-09-22 SPI Surgical, Inc. Surgical cockpit comprising multisensory and multimodal interfaces for robotic surgery and methods related thereto
WO2012129251A2 (en) * 2011-03-23 2012-09-27 Sri International Dexterous telemanipulator system
EP2653273A1 (en) * 2010-12-16 2013-10-23 Samsung Heavy Ind. Co., Ltd. Wind turbine assembly and management robot and wind turbine system including same
US20140114482A1 (en) * 2011-03-31 2014-04-24 Tobor Technology, Llc Roof inspection systems with autonomous guidance
US20150148949A1 (en) * 2013-11-26 2015-05-28 Elwha Llc Structural assessment, maintenance, and repair apparatuses and methods
US20150346722A1 (en) * 2014-05-27 2015-12-03 Recreational Drone Event Systems, Llc Virtual and Augmented Reality Cockpit and Operational Control Systems

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0724751A (en) * 1989-02-13 1995-01-27 Toshiba Corp Inspection work robot
JP3217383B2 (en) * 1991-02-01 2001-10-09 衛 光石 Realism reproduction system and processing system
JPH04354275A (en) * 1991-05-30 1992-12-08 Meitec Corp Head motion follow-up type video system
JP3583777B2 (en) * 1992-01-21 2004-11-04 エス・アール・アイ・インターナシヨナル Teleoperator system and telepresence method
JPH07237106A (en) * 1994-02-28 1995-09-12 Nippon Steel Corp Remote control flaw repairing method and device therefor
EP1189052B1 (en) * 2000-06-28 2007-10-17 Robert Bosch Gmbh Device for image acquisition of piece goods
JP2002046088A (en) * 2000-08-03 2002-02-12 Matsushita Electric Ind Co Ltd Robot device
JP4214860B2 (en) * 2003-08-12 2009-01-28 沖電気工業株式会社 Robot relay system, robot relay program and method
JP4876246B2 (en) * 2006-03-03 2012-02-15 国立大学法人長岡技術科学大学 Haptic control method and tactile control device
US8217478B2 (en) * 2008-10-10 2012-07-10 Seagate Technology Llc Magnetic stack with oxide to reduce switching current
US9643316B2 (en) * 2009-10-27 2017-05-09 Battelle Memorial Institute Semi-autonomous multi-use robot system and method of operation
CN101791750B (en) * 2009-12-31 2012-06-06 哈尔滨工业大学 Robot remote control welding system and method used for remote welding
CN102060057B (en) * 2010-12-27 2012-09-26 中国民航大学 Robot system for inspecting airplane fuel tank and control method thereof
CN102699919A (en) * 2011-03-28 2012-10-03 江苏久祥汽车电器集团有限公司 Intelligent decision and drive control technology
US8943892B2 (en) * 2012-05-11 2015-02-03 The Boeing Company Automated inspection of spar web in hollow monolithic structure
KR102235965B1 (en) * 2012-08-03 2021-04-06 스트리커 코포레이션 Systems and methods for robotic surgery
US9226796B2 (en) * 2012-08-03 2016-01-05 Stryker Corporation Method for detecting a disturbance as an energy applicator of a surgical instrument traverses a cutting path
DE102013204151B4 (en) * 2013-03-11 2016-12-15 Continental Automotive Gmbh Control device for operating a machine tool and machine tool
US20160030373A1 (en) * 2013-03-13 2016-02-04 The General Hospital Corporation 2-AAA as a Biomarker and Therapeutic Agent for Diabetes
FR3012425B1 (en) * 2013-10-24 2017-03-24 European Aeronautic Defence & Space Co Eads France COLLABORATIVE ROBOT FOR VISUAL INSPECTION OF AN AIRCRAFT
CA2945189C (en) * 2014-04-10 2022-10-11 Quanser Consulting Inc. Robotic systems and methods of operating robotic systems
US9856037B2 (en) * 2014-06-18 2018-01-02 The Boeing Company Stabilization of an end of an extended-reach apparatus in a limited-access space
CN104057450B (en) * 2014-06-20 2016-09-07 哈尔滨工业大学深圳研究生院 A kind of higher-dimension motion arm teleoperation method for service robot
US10406593B2 (en) * 2014-07-09 2019-09-10 The Boeing Company Method of using a tower for accessing an interior of a fuselage assembly
CN104656653A (en) * 2015-01-15 2015-05-27 长源动力(北京)科技有限公司 Interactive system and method based on robot
US10023250B2 (en) 2016-06-10 2018-07-17 The Boeing Company Multi-tread vehicles and methods of operating thereof
WO2018215977A1 (en) * 2017-05-26 2018-11-29 Invert Robotics Limited Climbing robot for detection of defects on an aircraft body


Also Published As

Publication number Publication date
JP7381632B2 (en) 2023-11-15
GB2553617B (en) 2020-09-16
KR20170140070A (en) 2017-12-20
CN107491043A (en) 2017-12-19
KR102369855B1 (en) 2022-03-02
JP2018008369A (en) 2018-01-18
JP2022081591A (en) 2022-05-31
US20170355080A1 (en) 2017-12-14
GB201708992D0 (en) 2017-07-19
US10272572B2 (en) 2019-04-30

Similar Documents

Publication Publication Date Title
US10272572B2 (en) Remotely controlling robotic platforms based on multi-modal sensory data
US9579797B2 (en) Robotic systems and methods of operating robotic systems
Chen et al. Human performance issues and user interface design for teleoperated robots
US8634969B2 (en) Teleoperation method and human robot interface for remote control of a machine by a human operator
US20200055195A1 (en) Systems and Methods for Remotely Controlling a Robotic Device
Buss et al. Control problems in multi-modal telepresence systems
Naceri et al. Towards a virtual reality interface for remote robotic teleoperation
Su et al. Mixed reality-integrated 3D/2D vision mapping for intuitive teleoperation of mobile manipulator
Chellali et al. What maps and what displays for remote situation awareness and rov localization?
Szczurek et al. Multimodal multi-user mixed reality human–robot interface for remote operations in hazardous environments
GB2598345A (en) Remote operation of robotic systems
Vagvolgyi et al. Scene modeling and augmented virtuality interface for telerobotic satellite servicing
CN113021082A (en) Robot casting polishing method based on teleoperation and panoramic vision
Materna et al. Teleoperating assistive robots: A novel user interface relying on semi-autonomy and 3D environment mapping
Pryor et al. Experimental evaluation of teleoperation interfaces for cutting of satellite insulation
Fong et al. A personal user interface for collaborative human-robot exploration
Gregg-Smith et al. Investigating spatial guidance for a cooperative handheld robot
Fernando et al. Effectiveness of Spatial Coherent Remote Drive Experience with a Telexistence Backhoe for Construction Sites.
Young et al. The effects of interface views on performing aerial telemanipulation tasks using small UAVs
Rossmann et al. The virtual testbed: Latest virtual reality technologies for space robotic applications
Filipenko et al. Virtual commissioning with mixed reality for next-generation robot-based mechanical component testing
Tharp et al. Virtual window telepresence system for telerobotic inspection
Nagai et al. Audio feedback system for teleoperation experiments on engineering test satellite vii system design and assessment using eye mark recorder for capturing task
Wyckoff et al. Feedback and Control of Dynamics and Robotics using Augmented Reality
Sita et al. Robot companion for industrial process monitoring based on virtual fixtures