US20200086497A1 - Stopping Robot Motion Based On Sound Cues - Google Patents

Stopping Robot Motion Based On Sound Cues

Info

Publication number
US20200086497A1
Authority
US
United States
Prior art keywords
sound
robot
context
library
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/571,025
Inventor
David M.S. Johnson
Syler Wagner
Anthony Tayoun
Steven Lines
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Charles Stark Draper Laboratory Inc
Original Assignee
The Charles Stark Draper Laboratory, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Charles Stark Draper Laboratory, Inc. filed Critical The Charles Stark Draper Laboratory, Inc.
Priority to US16/571,025
Publication of US20200086497A1
Assigned to THE CHARLES STARK DRAPER LABORATORY, INC. (assignment of assignors interest; see document for details). Assignors: JOHNSON, DAVID M.S.; TAYOUN, Anthony; WAGNER, Syler
Current legal status: Abandoned

Classifications

    • AHUMAN NECESSITIES
    • A47FURNITURE; DOMESTIC ARTICLES OR APPLIANCES; COFFEE MILLS; SPICE MILLS; SUCTION CLEANERS IN GENERAL
    • A47JKITCHEN EQUIPMENT; COFFEE MILLS; SPICE MILLS; APPARATUS FOR MAKING BEVERAGES
    • A47J44/00Multi-purpose machines for preparing food with several driving units
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/003Controls for manipulators by means of an audio-responsive input
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • B25J11/0045Manipulators used in the food industry
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/085Force or torque sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00Controls for manipulators
    • B25J13/08Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/088Controls for manipulators by means of sensing devices, e.g. viewing or touching devices with position, velocity or acceleration sensors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1653Programme controls characterised by the control loop parameters identification, estimation, stiffness, accuracy, error analysis
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1682Dual arm manipulator; Coordination of several manipulators
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00Gripping heads and other end effectors
    • B25J15/0052Gripping heads and other end effectors multiple gripper units or multiple end effectors
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J15/00Gripping heads and other end effectors
    • B25J15/04Gripping heads and other end effectors with provision for the remote detachment or exchange of the head or parts thereof
    • B25J15/0408Connections means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/0075Means for protecting the manipulator from its environment or vice versa
    • B25J19/0083Means for protecting the manipulator from its environment or vice versa using gaiters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0009Constructional details, e.g. manipulator supports, bases
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1633Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • B25J9/1687Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G1/00Storing articles, individually or in orderly arrangement, in warehouses or magazines
    • B65G1/02Storage devices
    • B65G1/04Storage devices mechanical
    • B65G1/137Storage devices mechanical with arrangements or automatic control means for selecting which articles are to be removed
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/18Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/406Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by monitoring or safety
    • G05B19/4061Avoiding collision or forbidden zones
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/32Operator till task planning
    • G05B2219/32335Use of ann, neural network
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39001Robot, manipulator control
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39091Avoid collision with moving obstacles
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39319Force control, force as reference, active compliance
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39342Adaptive impedance control
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39468Changeable hand, tool, code carrier, detector
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40201Detect contact, collision with human
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40202Human robot coexistence
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40411Robot assists human in non-industrial environment like home or office
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40497Collision monitor controls planner in real time to replan if collision
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/49Nc machine tool, till multiple
    • G05B2219/49157Limitation, collision, interference, forbidden zones, avoid obstacles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0631Resource planning, allocation, distributing or scheduling for enterprises or organisations
    • G06Q10/06316Sequencing of tasks or work
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue

Definitions

  • Robots operate in environments where they must avoid both fixed and moving obstacles, and often those obstacles are their human co-workers. Collisions with these obstacles, e.g., human co-workers, are unacceptable. Existing methods for robot obstacle avoidance and for robot control in dynamic environments are cumbersome and inadequate.
  • Currently, robots identify dangers and faults only when measurements cross certain thresholds.
  • Specifically, robots today identify dangers by (1) capturing intrusions into predefined zones, (2) measuring quantities such as torque, voltage, or current and comparing the measured quantities to predefined limits, or (3) receiving a mechanical input such as an emergency stop button.
  • Embodiments solve problems in relation to employing robotics in a dynamic workspace, frequently alongside human workers, and enhance a robot's ability to sense and react to dangerous situations. Unlike existing methods, embodiments provide functionality for robots to infer from context the amount of danger that a situation presents by incorporating one or more data sources, capturing one or more details from these one or more data sources, and using pattern matching and other analysis techniques to recognize danger.
  • Embodiments of the present disclosure provide methods and systems for modifying motion of a robot.
  • One such embodiment detects a sound in an environment using a sound capturing device and then processes the detected sound.
  • the processing includes at least one of: (1) comparing the detected sound to a library of sound characteristics associated with sound cues and (2) extracting features or characteristics from the detected sound using a model.
  • such an embodiment modifies motion of a robot based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
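  • As a minimal illustration of this detect, process, and modify flow (not the claimed implementation), the loop might be sketched in Python as follows; names such as `CueMatch`, `library.best_match`, `robot.current_context`, and `robot.modify_motion` are assumptions introduced only for the sketch.

```python
# Hypothetical sketch of the detect -> process -> modify-motion loop described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CueMatch:
    cue_name: str        # e.g., "breaking_glass" or "stop_keyword"
    confidence: float    # score produced by the comparison
    features: dict       # features/characteristics extracted from the detected sound

def process_detected_sound(samples, library, feature_model) -> Optional[CueMatch]:
    """Extract features from the detected sound, then compare them to the cue library."""
    features = feature_model.extract(samples)
    return library.best_match(features)      # returns a CueMatch or None

def control_step(robot, microphone, library, feature_model):
    samples = microphone.read()              # detect a sound in the environment
    match = process_detected_sound(samples, library, feature_model)
    if match is not None:
        context = robot.current_context()    # torque, velocity, task, surroundings, ...
        robot.modify_motion(match, context)  # e.g., stop, slow down, or reduce torque
```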
  • An embodiment creates the library of sound characteristics associated with the sound cues. Such an embodiment creates the library by (1) recording a plurality of sounds in an environment, (2) identifying one or more of the recorded plurality of sounds as a sound cue, (3) determining sound characteristics of the one or more plurality of sounds identified as a sound cue, (4) associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library, and (5) associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
  • embodiments may employ a variety of different input data to identify one or more of the recorded plurality of sounds as a sound cue. For instance, embodiments may identify sounds as a sound cue based on user input flagging a given sound as a sound cue, context obtained from analyzing non-sound sensor input, and output of a neural network trained to identify sound cues using the recorded plurality of sounds as input.
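  • Purely as an illustration of steps (1)-(5) above, a cue library could be assembled as in the following sketch; `is_cue` and `action_rule_for` stand in for the user input, non-sound context, or trained network used to identify cues and their associated rules.

```python
# Hypothetical sketch of building the sound-cue library described above.
import numpy as np

def sound_characteristics(samples: np.ndarray, sample_rate: int) -> dict:
    """Determine simple characteristics (dominant frequency, peak amplitude) of a recorded sound."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return {
        "dominant_frequency_hz": float(freqs[int(np.argmax(spectrum))]),
        "peak_amplitude": float(np.max(np.abs(samples))),
    }

def build_cue_library(recordings, is_cue, action_rule_for) -> dict:
    """
    recordings:       iterable of (name, samples, sample_rate) recorded in the environment
    is_cue:           callable deciding whether a recording is a sound cue (user flag,
                      non-sound sensor context, or a trained classifier)
    action_rule_for:  callable returning the action rule to associate with a cue
    """
    library = {}
    for name, samples, rate in recordings:
        if not is_cue(name, samples):
            continue
        library[name] = {
            "characteristics": sound_characteristics(samples, rate),
            "action_rule": action_rule_for(name),
        }
    return library
```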
  • comparing the detected sound to the library of sound characteristics associated with sound cues utilizes a neural network.
  • Such an embodiment processes the detected sound using a neural network trained to identify one or more characteristics of the detected sound that matches at least one of the sound characteristics associated with the sound cues.
  • comparing the detected sound to the library of sound characteristics associated with sound cues includes identifying a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library.
  • modifying the motion of the robot includes identifying one or more action rules associated with the given sound cue (the sound cue with a matching sound characteristic) and modifying the motion of the robot to be in accordance with the one or more action rules.
  • the one or more action rules may dictate the operation of the robot given the sound cue.
  • the action rules may dictate the operation of the robot given the sound cue and the context of the robot.
  • the one or more action rules associated with the given sound cue may be a set of action rules.
  • at least one of the one or more action rules dictates a first result for the motion of the robot and a second result for the motion of the robot, where the motion of the robot is modified to be in accordance with the first result or the second result based upon the context of the robot.
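  • The following sketch shows one hypothetical way to encode an action rule that dictates two context-dependent results; the cue name, the `cue_distance_m` context field, and the distance threshold are illustrative assumptions rather than values from the disclosure.

```python
# Hypothetical encoding of an action rule with a first and second result selected by context.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActionRule:
    cue_name: str
    context_predicate: Callable[[dict], bool]  # evaluates the robot's context
    result_if_true: str                        # first result, e.g., "stop_motion"
    result_if_false: str                       # second result, e.g., "continue_normal_operation"

    def resolve(self, context: dict) -> str:
        return self.result_if_true if self.context_predicate(context) else self.result_if_false

# Example: stop only when the cue originated close to the robot.
near_rule = ActionRule(
    cue_name="breaking_glass",
    context_predicate=lambda ctx: ctx.get("cue_distance_m", 1e9) < 3.0,
    result_if_true="stop_motion",
    result_if_false="continue_normal_operation",
)
print(near_rule.resolve({"cue_distance_m": 1.2}))   # -> stop_motion
```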
  • Embodiments may treat any sound as a sound cue.
  • the sound cues may include at least one of: a keyword, a phrase, a sound indicating a safety-relevant, e.g., dangerous, event, and a sound relevant to an action.
  • an embodiment may treat any sound relevant to operation of a robot as a sound cue.
  • context may include any conditions related in any way to the robot.
  • context may include any data related to the robot, the task performed by robot, the motion of the robot, and the environment in which the robot is operating, amongst other examples.
  • context of the robot includes at least one of: torque of a joint of the robot; velocity of a link of the robot; acceleration of a link of the robot, jerk of a link of the robot; force of an end effector attached to the robot; torque of an end effector attached to the robot; pressure of an end effector attached to the robot; velocity of an end effector attached to the robot; acceleration of an end effector attached to the robot; task performed by the robot; and characteristics of an environment in which the robot is operating.
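  • For illustration only, the context signals enumerated above could be carried in a simple container such as the following sketch; the field names and units are assumptions.

```python
# A minimal container mirroring the context signals listed above (sketch only).
from dataclasses import dataclass, field

@dataclass
class RobotContext:
    joint_torques_nm: list = field(default_factory=list)   # torque of each joint of the robot
    link_velocity_ms: float = 0.0                           # velocity of a link
    link_acceleration_ms2: float = 0.0                      # acceleration of a link
    link_jerk_ms3: float = 0.0                              # jerk of a link
    end_effector_force_n: float = 0.0
    end_effector_torque_nm: float = 0.0
    end_effector_pressure_pa: float = 0.0
    end_effector_velocity_ms: float = 0.0
    end_effector_acceleration_ms2: float = 0.0
    current_task: str = "idle"                              # task performed by the robot
    environment: dict = field(default_factory=dict)         # characteristics of the workspace
```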
  • the context may include any context data as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001.
  • modifying motion of the robot includes comparing the context of the robot to a library of contexts to detect a matching context, identifying one or more action rules associated with the matching context, and modifying the motion of the robot to be in accordance with the one or more action rules.
  • the motion of the robot may be modified as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001.
  • the context library is created by recording a plurality of contexts, e.g., data indicating context, in an environment and associating, in computer memory of the library, a respective action rule with one or more of the plurality of recorded contexts.
  • the contexts may be recorded using any sensor known in the art that can capture context data, i.e., data relevant to the operation of a robot.
  • the context data may be recorded using at least one of: a vision sensor, a depth sensor, a torque sensor, and a position sensor, amongst other examples.
  • An embodiment that creates the context library may also identify the respective action rule associated with the one or more of the plurality of recorded contexts.
  • identifying the action rule associated with the recorded contexts includes (1) processing the plurality of recorded contexts to identify at least one of: a pattern in the environment in which the contexts were captured and a condition in the environment in which the contexts were captured and (2) identifying the respective action rule using at least one of the identified pattern and condition.
  • processing the plurality of recorded contexts to identify at least one of a pattern and a condition includes at least one of (i) comparing the plurality of recorded contexts to a library of predefined context conditions and (ii) evaluating output of a neural network trained to identify patterns or conditions of a context from the plurality of recorded contexts.
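  • A minimal sketch of the context-matching step, assuming contexts are reduced to numeric feature vectors and compared by distance (a trained neural network could replace the distance computation), is shown below; the field names and threshold are illustrative.

```python
# Sketch of matching the current context against a context library by feature-vector distance.
import numpy as np

def context_to_vector(ctx: dict) -> np.ndarray:
    """Flatten selected context fields into a numeric feature vector (illustrative keys)."""
    return np.array([
        ctx.get("joint_torque_nm", 0.0),
        ctx.get("link_velocity_ms", 0.0),
        ctx.get("end_effector_force_n", 0.0),
    ])

def match_context(current: dict, context_library: list, max_distance: float = 1.0):
    """context_library: list of (stored_context_dict, action_rule) pairs."""
    cur = context_to_vector(current)
    best_rule, best_dist = None, float("inf")
    for stored, rule in context_library:
        dist = float(np.linalg.norm(cur - context_to_vector(stored)))
        if dist < best_dist:
            best_rule, best_dist = rule, dist
    return best_rule if best_dist <= max_distance else None
```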
  • Another embodiment is directed to a system for modifying motion of a robot.
  • the system includes a processor and a memory with computer code instructions stored thereon.
  • the processor and the memory, with the computer code instructions are configured to cause the system to implement any embodiments described herein.
  • Yet another embodiment is directed to a computer program product for modifying motion of a robot.
  • the computer program product comprises a computer-readable medium with computer code instructions stored thereon, where the computer code instructions, when executed by a processor, cause an apparatus associated with the processor to perform any embodiments described herein.
  • Another embodiment is a method for defining/recording one or more sound cues, e.g., keywords, phrases, and one or more sound wave characteristics (amplitude, frequency, speed), and for defining/recording other sensor data, such as camera data, depth data, and torque measurements.
  • Such an embodiment monitors for this data (sound cues and other sensor data, i.e., context data) in an environment in which a robot is operating. This monitoring detects the data and patterns and/or conditions related to this data. Upon meeting pre-defined conditions related to the measured data in the environment, one or more rules governing the operation of the robot are executed.
  • Another embodiment is directed to a method for monitoring keywords, sound wave profiles, and other sensor data.
  • Such an embodiment monitors speech or sounds and other sensor data, i.e., context data, for at least one of: (i) a pre-defined keyword, (ii) a pre-defined phrase, (iii) a characteristic, and (iv) a data pattern.
  • Upon detecting the pre-defined keyword, phrase, characteristic, and/or other sensor data (i.e., context data) pattern, such an embodiment executes a set of rules and actions that are based on the matched pre-defined keyword, phrase, sound wave characteristic, or context data pattern. In an embodiment, processing these rules results in identifying changes to robot motion based upon the detected pre-defined keyword, phrase, characteristic, or context data pattern.
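  • The monitoring just described might look like the following sketch, where the recognized transcripts, per-frame sound characteristics, and rule names are all assumed stand-ins rather than elements of the disclosure.

```python
# Sketch of a monitor that watches a transcript stream for pre-defined keywords/phrases and a
# feature stream for pre-defined sound-wave characteristics, then executes the matched rules.
def monitor(transcripts, feature_frames, keyword_rules, characteristic_rules, execute):
    """
    transcripts:          iterable of recognized text snippets (from a speech recognizer)
    feature_frames:       iterable of dicts of sound-wave characteristics per audio frame
    keyword_rules:        {keyword_or_phrase: rule}
    characteristic_rules: list of (predicate_on_features, rule) pairs
    execute:              callable that applies a rule to the robot
    """
    for text in transcripts:
        lowered = text.lower()
        for phrase, rule in keyword_rules.items():
            if phrase in lowered:
                execute(rule)
    for frame in feature_frames:
        for predicate, rule in characteristic_rules:
            if predicate(frame):
                execute(rule)

# Toy usage with stand-in data:
monitor(
    transcripts=["please stop the robot"],
    feature_frames=[{"peak_amplitude": 0.9, "dominant_frequency_hz": 4200.0}],
    keyword_rules={"stop": "halt_motion"},
    characteristic_rules=[(lambda f: f["peak_amplitude"] > 0.8, "slow_down")],
    execute=lambda rule: print("executing rule:", rule),
)
```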
  • FIG. 1A is a block diagram illustrating an example embodiment of a quick service food environment of embodiments of the present disclosure.
  • FIG. 1B is a block diagram illustrating an example embodiment of the present disclosure.
  • FIG. 2 is a flowchart depicting a method for modifying motion of a robot according to an embodiment.
  • FIG. 3 is a block diagram illustrating an example system in which embodiments may be implemented.
  • FIG. 4 is a flowchart of an embodiment for controlling a robot in an environment.
  • FIG. 5 is a flowchart of a method for training a model that may be employed in embodiments.
  • FIG. 6 depicts a computer network or similar digital processing environment in which embodiments may be implemented.
  • FIG. 7 is a diagram of an example internal structure of a computer in the environment of FIG. 6 .
  • Embodiments provide functionality for modifying motion of a robot. Such functionality can be employed in any variety of environments in which control of robot motion is desired.
  • FIG. 1A illustrates a food preparation environment 100 in which embodiments may be employed.
  • The end effectors, e.g., utensils, that the robot uses need to remain clean and free of contamination.
  • Contamination can include allergens (e.g., peanuts), dietary preferences (e.g., contamination from pork for a vegetarian or kosher customer), dirt/bacteria/viruses, or other non-ingestible materials (e.g., oil, plastic, or particles from the robot itself).
  • the robot should be operated within its design specifications, and not exposed to excessive temperatures or incompatible liquids, without sacrificing cleanliness.
  • the robot should be able to manipulate food stuffs, which are often fracturable and deformable materials, and further the robot must be able to measure an amount of material controlled by its utensil in order to dispense specific portions.
  • the robot should be able to automatically and seamlessly switch utensils (e.g., switch between a ladle and salad tongs).
  • the utensils should be adapted to be left in an assigned food container and interchanged with the robot as needed, in situ.
  • the robot should be able to autonomously generate a task plan and motion plan(s) to assemble all ingredients in a recipe, and execute that plan.
  • the robot should be able to modify or stop a motion plan based on detected interference or voice commands to stop or modify the robot's plan.
  • the robot should be able to minimize the applied torque based on safety requirements or the task context or the task parameters (e.g., density and viscosity) of the material to be gathered.
  • the system should be able to receive an electronic order from a user, assemble the meal for the user, and place the meal for the user in a designated area for pickup automatically with minimal human involvement.
  • FIG. 1A is a block diagram illustrating an example embodiment of a quick service food environment 100 of embodiments of the present disclosure.
  • the quick service food environment 100 includes a food preparation area 102 and a patron area 120 .
  • the food preparation area 102 includes a plurality of ingredient containers 106 a - d each having a particular foodstuff (e.g., lettuce, chicken, cheese, tortilla chips, guacamole, beans, rice, various sauces or dressings, etc.).
  • Each ingredient container 106 a - d stores in situ its corresponding ingredients.
  • Utensils 108 a - d may be stored in situ in the ingredient containers or in a stand-alone tool rack 109 .
  • the utensils 108 a - d can be spoons, ladles, tongs, dishers (scoopers), spatulas, or other utensils.
  • Each utensil 108 a - e is configured to mate with and disconnect from a tool changer interface 112 of a robot arm 110 . While the term utensil is used throughout this application, a person having ordinary skill in the art can recognize that the principles described in relation to utensils can apply in general to end effectors in other contexts (e.g., end effectors for moving fracturable or deformable materials in construction with an excavator or backhoe, etc.); and a robot arm can be replaced with any computer controlled actuatable system which can interact with its environment to manipulate a deformable material.
  • the robot arm 110 includes sensor elements/modules such as stereo vision systems (SVS), 3D vision sensors (e.g., Microsoft Kinect™ or an Intel RealSense™), LIDAR sensors, audio sensors (e.g., microphones), inertial sensors (e.g., inertial measurement unit (IMU), torque sensor, weight sensor, etc.) for sensing aspects of the environment, including pose (i.e., X, Y, Z coordinates and roll, pitch, and yaw angles) of tools for the robot to mate, shape and volume of foodstuffs in ingredient containers, shape and volume of foodstuffs deposited into food assembly container, moving or static obstacles in the environment, etc.
  • a patron in the patron area 120 enters an order 124 in an ordering station 122 a - b , which is forwarded to a network 126 .
  • a patron on a mobile device 128 can, within or outside of the patron area 120 , generate an optional order 132 .
  • the network 126 forwards the order to a controller 114 of the robot arm 110 .
  • the controller generates a task plan 130 for the robot arm 110 to execute.
  • the task plan 130 includes a list of motion plans 132 a - d for the robot arm 110 to execute.
  • Each motion plan 132 a - d is a plan for the robot arm 110 to engage with a respective utensil 108 a - e , gather ingredients from the respective ingredient container 106 a - d , and empty the utensil 108 a - e in an appropriate location of a food assembly container 104 for the patron, which can be a plate, bowl, or other container.
  • the robot arm 110 then returns the utensil 108 a - e to its respective ingredient container 106 a - d , the tool rack 109 , or other location as determined by the task plan 130 or motion plan 132 a - d , and releases the utensil 108 a - d .
  • the robot arm executes each motion plan 132 a - d in a specified order, causing the food to be assembled within the food assembly container 104 in a planned and aesthetic manner.
  • the environment 100 illustrated by FIG. 1A can improve food service to patrons by assembling meals faster, more accurately, and more sanitarily than a human can assemble a meal.
  • FIG. 1B illustrates using an embodiment of the present disclosure to control the robot arm, i.e., robot, 110 in the environment 160 based on context and sound.
  • the robot arm 110 includes an array of several microphones 140 a - d that are mounted on the robot arm 110 .
  • the microphones 140 a - d are configured to detect and record sound waves 142 .
  • the recorded sound data 143 is reported to a controller 114 .
  • the sound data 143 can be organized into data from individual microphones as mic data 144 a - d .
  • the controller 114 can process the sound data 143 and if a sound cue is detected (e.g., a stop or distress sound, e.g., “ouch”) then the controller 114 can issue a stop command 146 . Before issuing the stop command 146 , the controller 114 can also consider the context of the sound data 143 . For instance, the controller 114 can consider the proximity of the sound waves 142 to the robot arm 110 . If, for example, the sound 142 is far from the robot arm 110 , the controller 114 would consider this when deciding to issue a stop command.
  • the controller 114 is not limited to issuing the stop command 146 and, instead, the controller 114 can issue commands modifying the operation of the robot, such as, the robot's motion, path, speed, and torque, amongst other examples.
  • the microphones 140 a - d are depicted as located on the robot arm 110 and the controller 114 is located separately from the robot arm 110 , embodiments are not limited to this configuration and sound capturing devices may be in any location.
  • the processing performed by the controller 114 may be performed by one or more processing devices that are capable of obtaining and processing sound data and issuing controls for the robot. These processing devices may be located on/in the robot or may be located locally or remotely in relation to the robot arm 110 .
  • FIG. 1B further illustrates sound waves 150 beginning from the patron area 120 .
  • the controller 114 can determine a triangulated location 152 of the sound waves 150 .
  • the controller 114 can process the sound waves 150 to determine if the sound waves 150 correspond to a sound cue for which action should be taken and the controller 114 can also consider the context of the robot arm 110 , such as the location 152 of the sound waves 150 in relation to the robot arm 110 . Based upon the sound waves 150 and the context, the controller 114 can determine modifications, if any, for the robot arm's 110 motion.
  • the controller 114 can determine that the triangulated location 152 is in the patron area 120 and the controller 114 can consider the proximity of the location 152 to the robot arm 110 and ignore the sound waves 150 altogether even if the sound waves 150 correspond to a sound cue for which action would be taken if the sound cue occurred in closer proximity to the robot arm 110 .
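  • As an illustrative sketch of this kind of proximity reasoning (not the patent's triangulation method), a source location can be estimated from per-microphone arrival times and compared against a distance threshold; the microphone layout, the grid search, and the 3 m threshold below are assumptions.

```python
# Sketch: estimate a sound source location from arrival-time differences, then ignore distant cues.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def locate_source(mic_positions, arrival_times, search_extent=5.0, step=0.05):
    """Grid-search the (x, y) point whose predicted relative arrival times best match the measurements."""
    mic_positions = np.asarray(mic_positions, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)
    candidates = np.arange(-search_extent, search_extent, step)
    best_point, best_err = None, float("inf")
    for x in candidates:
        for y in candidates:
            dists = np.linalg.norm(mic_positions - np.array([x, y]), axis=1)
            predicted = dists / SPEED_OF_SOUND
            # Compare delays relative to the first microphone (removes the unknown emission time).
            err = float(np.sum(((predicted - predicted[0]) - (arrival_times - arrival_times[0])) ** 2))
            if err < best_err:
                best_point, best_err = (float(x), float(y)), err
    return best_point

def should_react(source_xy, robot_xy, max_distance_m=3.0):
    """Ignore cues originating farther from the robot than the (assumed) distance threshold."""
    return float(np.linalg.norm(np.array(source_xy) - np.array(robot_xy))) <= max_distance_m

# Toy usage: four microphones (cf. 140a-d) in an assumed square layout, source simulated at (4, 4).
mics = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
source = np.array([4.0, 4.0])
times = [np.linalg.norm(np.array(m) - source) / SPEED_OF_SOUND for m in mics]
estimate = locate_source(mics, times)
print(estimate, should_react(estimate, robot_xy=(0.5, 0.5)))   # distant cue -> should_react is False
```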
  • FIG. 2 is a flow chart of a method 220 for modifying motion of a robot according to an embodiment.
  • the method 220 detects a sound in an environment using a sound capturing device.
  • the method 220 continuously operates during conditions in which the robot is configured to move and the robot can possibly collide with objects.
  • the method 220 processes, at 222 , the sound detected from 221 .
  • the processing 222 determines whether the detected sound is a sound for which action should be taken.
  • the processing 222 includes at least one of: (1) comparing the detected sound to a library of sound characteristics associated with sound cues and (2) extracting features or characteristics from the detected sound using a model.
  • In an embodiment, the comparing at 222 is done using a model, with a neural network serving as the model.
  • motion of a robot is modified based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
  • the motion modification can take any form, such as the motion modification described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001, including moving to a known safe region, stopping all motion, or using the sound to apply additional context to the current action. If the current action context is dangerous, then a triggering sound cue may be configured to drop all robot joint torques below a safe threshold until a human operator signals that it is safe to continue robot operation.
  • the sound waves 142 are detected by the microphones 140 a - d and recorded as the sound data 143 .
  • the sound data 143 is processed by the controller 114 , which compares the sound data 143 to a library. In such an example, the comparison identifies that the sound data 143 matches the sound cue of a person yelling stop.
  • in turn, the controller 114 determines that the robot should be stopped and issues the stop command 146 .
  • An embodiment of the method 220 creates the library of sound characteristics associated with the sound cues used at 222 .
  • Such an embodiment creates the library by (1) recording a plurality of sounds in an environment, (2) identifying one or more of the recorded plurality of sounds as a sound cue, (3) determining sound characteristics of the one or more plurality of sounds identified as a sound cue, (4) associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library, and (5) associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
  • In an embodiment, creating the library as described trains a neural network, i.e., a model, using the action rules associated with the plurality of sounds identified as sound cues.
  • a neural network may be created that can receive a sound recorded in an environment and determine an appropriate action rule to be executed.
  • sound cues may be labeled by what the sounds indicate, e.g., collisions, broken plate, etc.
  • the sound cues may be labeled with a classification of the sound.
  • sound cues may also be associated with the context data of the conditions under which the sounds were recorded, e.g., location.
  • the library may associate, in the computer memory, an action rule with the sounds identified as a sound cue and the relevant context data.
  • This data, sound characteristics of a sound cue, context data, and action rule(s) may be used to train a neural network and thus, the trained neural network can identify action rules to execute given input sound data and context data.
  • Action rules may indicate any action given associated conditions, e.g., sound and context.
  • For example, one action rule may indicate that if the detected sound is "ouch" and the context is the robot moving (likely the robot hit someone), the resulting action should be stopping the robot's motion. Another rule may indicate that if the detected sound is "ouch" and the context is the robot stopped and exerting a torque (likely indicating that the robot pinned a person), the robot's motion should be changed to zero torque. These two rules are illustrated in the sketch below.
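  • Purely for illustration, the two "ouch" rules above could be expressed as the following function; the context keys and returned action names are assumptions.

```python
# Illustrative encoding of the two "ouch" rules described above.
def resolve_ouch(context: dict) -> str:
    """Return the motion modification for a detected "ouch" cue given the robot's context."""
    if context.get("is_moving", False):
        # Robot in motion when the cue occurred: likely a collision with a person.
        return "stop_motion"
    if context.get("joint_torque_nm", 0.0) > 0.0:
        # Robot stationary but exerting torque: likely pinning someone.
        return "set_zero_torque"
    return "no_change"

print(resolve_ouch({"is_moving": True}))                            # -> stop_motion
print(resolve_ouch({"is_moving": False, "joint_torque_nm": 12.0}))  # -> set_zero_torque
```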
  • embodiments may employ a variety of different input data to identify one or more of the recorded plurality of sounds as a sound cue. For instance, embodiments may identify sounds as a sound cue based on (i) a user input flagging a given sound as a sound cue, (ii) context obtained from analyzing non-sound sensor input, and (iii) output of a machine learning method as described herein, such as the method 550 . Such functionality may employ a neural network trained to identify sound cues using the recorded plurality of sounds as input.
  • the non-sound sensor may be any sensor known in the art, such as a camera, torque sensor, and force sensor, amongst other examples.
  • an embodiment may identify a recorded sound as a sound cue using a neural network trained to identify a recorded sound as a sound cue based on the recorded sound and the non-sound sensor context data.
  • For example, assume a recorded sound is the sound of a collision.
  • the collision itself is identifiable in an image (non-sound sensor input).
  • in such an example, a neural network can be trained to identify the sound (the collision) as a sound cue based upon input of the image showing the collision.
  • this non-sound sensor may be any sensor known in the art, such as a camera, depth sensor, torque sensor, lidar, thermometer, and pressure sensor, amongst other examples.
  • To illustrate further, context data can be obtained using image data from a camera which indicates that the robot collided with a glass object and broke the glass object.
  • in such an example, the recorded sound of glass breaking should be a sound cue because the image showed the robot breaking the glass object, and the sound can be stored accordingly in the library.
  • comparing the detected sound to the library of sound characteristics associated with sound cues at 222 utilizes a neural network.
  • Such functionality may utilize any neural network described herein.
  • Such an embodiment processes the detected sound using a neural network trained to identify one or more characteristics of the detected sound that match at least one of the sound characteristics associated with the sound cues.
  • Further, the processing at 222 may utilize a model that is a neural network classifier that characterizes the detected sound.
  • Such an embodiment may simply determine if a sound is “bad” or “not bad” and, in turn, at 223 , the robot's motion is modified based upon context and whether the sound is “bad” or “not bad”.
  • the neural network may be implemented using supervised learning, where the neural network is trained with sound examples that have been labeled as "bad" or "not bad."
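  • A toy, self-contained stand-in for such a supervised "bad"/"not bad" classifier is sketched below using a small multilayer perceptron (via scikit-learn) trained on simple spectral features of synthetic sounds; the features, labels, and architecture are illustrative and not the disclosed model.

```python
# Toy "bad"/"not bad" sound classifier trained on labeled synthetic examples (illustration only).
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_features(samples, sample_rate=16000):
    """Very small feature vector: scaled spectral centroid, peak amplitude, mean energy."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centroid_hz = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    return np.array([centroid_hz / 1000.0, float(np.max(np.abs(samples))), float(np.mean(samples ** 2))])

rng = np.random.default_rng(0)
def synth(freq_hz, amplitude, n=1600, rate=16000):
    """Generate a noisy sine tone as a stand-in for a labeled example sound."""
    t = np.arange(n) / rate
    return amplitude * np.sin(2 * np.pi * freq_hz * t) + 0.01 * rng.standard_normal(n)

# Loud, high-frequency examples labeled "bad" (1); quiet, low-frequency examples "not bad" (0).
examples = [(3500, 0.9, 1), (4000, 0.8, 1), (3000, 1.0, 1), (200, 0.1, 0), (300, 0.2, 0), (150, 0.1, 0)]
X = np.array([spectral_features(synth(f, a)) for f, a, _ in examples])
y = np.array([label for _, _, label in examples])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([spectral_features(synth(3800, 0.95))]))   # should classify the loud, high tone as 1 ("bad")
```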
  • comparing the detected sound to the library of sound characteristics associated with sound cues includes identifying a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library.
  • the matching is determined through a tuned threshold which is selective to avoid false positives, but meets required levels of safety in conjunction with safe operation, such as working with human co-workers as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001 and utilizing safe torques as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001.
  • the sound characteristic may be any characteristic of a sound wave known in the art, such as frequency, amplitude, direction, and velocity.
  • modifying the motion of the robot 223 is based upon the result of the comparison, the features extracted, and/or the characteristics extracted at 222 .
  • modifying the motion of the robot 223 includes identifying one or more action rules associated with the given sound cue (the sound cue with a sound characteristic that matches a sound characteristic of the recorded sound).
  • the robot motion is modified to be in accordance with the one or more action rules.
  • the modifying 223 may be done in accordance with the extracted features or characteristics. For instance, if a feature is extracted which simply indicates the detected sound is a “bad sound,” e.g., associated with injury, the robot may be stopped when the feature is extracted from the detected sound.
  • sounds which are encoded in the library as being associated with dangerous situations for the human or the co-worker are used to modify the context of the executed action to either stop or change the motion of the robot as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001 (torque based on context) and U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001 (working with human co-workers).
  • the one or more action rules may dictate the operation of the robot given the sound cue. Further, the action rules may dictate the operation of the robot given the sound cue and the context of the robot. Further, the one or more action rules associated with the given sound cue may be a set of action rules. These rules may indicate different actions to take based upon different characteristics of a recorded sound and different context data of the environment in which the sound was recorded.
  • the set of rules may be based upon different sounds, characteristics of sounds, classifications of sounds, classifications of characteristics of sounds, and contexts of sounds, e.g., locations of sounds.
  • at least one of the one or more action rules dictates a first result for the motion of the robot and a second result for the motion of the robot, where the motion of the robot is modified to be in accordance with the first result or the second result based upon the context of the robot.
  • To illustrate, consider an example where the detected sound is glass breaking. After detecting this sound and comparing the detected sound to the library of sound characteristics associated with sound cues, it is determined that the detected sound has characteristics matching the "breaking glass" sound cue.
  • the breaking glass sound cue has action rules which dictate a result based on context.
  • the rules may indicate that the robot's motion should stop if the broken glass sound occurred within 10 feet of the robot and the robot can operate normally if the broken glass sound occurred more than 10 feet from the robot.
  • Embodiments of the method 220 may treat any sound as a sound cue.
  • the sound cues may include at least one of: a keyword, a phrase, a sound indicating a safety-relevant, e.g., dangerous, event, and a sound relevant to an action.
  • embodiments may treat any sound relevant to operation of a robot as a sound cue.
  • context may include any conditions related, in any way, to the robot such as environmental context and operational context.
  • context may include any data related to the robot, the task performed by robot, the motion of the robot, and the environment in which the robot is operating, amongst other examples.
  • context of the robot includes at least one of: torque of a joint of the robot; velocity of a link of the robot; acceleration of a link of the robot; jerk of a link of the robot; force of an end effector attached to the robot; torque of an end effector attached to the robot; pressure of an end effector attached to the robot; velocity of an end effector attached to the robot; acceleration of an end effector attached to the robot; task performed by the robot; and characteristics of an environment in which the robot is operating.
  • context may include an action state of the robot (e.g., idle, moving, changing tool, scooping, cutting, picking), state (e.g., in workspace, speed of movement, not in workspace, proximity, can collide, unable to collide) of objects (humans, robots, animals, etc.).
  • the context may include any context data as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001 and the context may include predicted motion of an object as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001.
  • the level of reaction to the sound cue can be modified. For example, if the robot is engaged in a dangerous activity which requires high torque and a sharp object, then any sound cue indicating distress results in an immediate and drastic reduction in robot output torque to below a safe threshold.
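  • For example, the reaction-scaling idea above might be sketched as follows, where the safe torque threshold, the task names, and the clamping policy are all assumptions for illustration.

```python
# Sketch: scale the reaction to a distress cue by how dangerous the current activity is.
SAFE_TORQUE_NM = 5.0
DANGEROUS_TASKS = {"cutting", "high_torque_stirring"}

def react_to_distress(current_task: str, commanded_torques_nm: list) -> list:
    if current_task in DANGEROUS_TASKS:
        # Drastic reduction: clamp every joint torque to the safe threshold until cleared by an operator.
        return [min(abs(t), SAFE_TORQUE_NM) * (1 if t >= 0 else -1) for t in commanded_torques_nm]
    # Less dangerous context: a milder response such as slowing down could be chosen instead.
    return commanded_torques_nm

print(react_to_distress("cutting", [12.0, -8.0, 3.0]))   # -> [5.0, -5.0, 3.0]
```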
  • In an embodiment, modifying motion of the robot at 223 includes comparing the context, i.e., context data, of the robot to a library of contexts to detect a matching context.
  • Such an embodiment identifies one or more action rules associated with the matching context and modifies the motion of the robot to be in accordance with the one or more action rules.
  • Comparing the context to the library of contexts may be done by a neural network or by comparing features of the context of the robot to features of the contexts in the library. In this way, embodiments may utilize statistical models.
  • the context library is created by recording a plurality of contexts, i.e., recording data indicating contexts, in an environment and associating, in computer memory of the library, a respective action rule with one or more of the plurality of recorded contexts.
  • the contexts may be recorded using any sensor known in the art that can capture context data, i.e., data relevant to the operation of a robot.
  • the context data may be recorded using at least one of: a vision sensor, a depth sensor, a torque sensor, and a position sensor, amongst other examples.
  • An embodiment of the method 220 that creates the context library may also identify the respective action rule associated with the one or more of the plurality of recorded contexts.
  • In such an embodiment, identifying the action rule associated with the recorded contexts includes (1) processing the plurality of recorded contexts to identify at least one of: a pattern in the environment in which the contexts were captured and a condition in the environment in which the contexts were captured and (2) identifying the respective action rule using at least one of the identified pattern and condition.
  • processing the plurality of recorded contexts to identify at least one of a pattern and a condition includes at least one of (i) comparing the plurality of recorded contexts to a library of predefined context conditions and (ii) evaluating output of a neural network trained to identify patterns or conditions of a context from the plurality of recorded contexts.
  • Such an embodiment may apply a modification to the technique described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001 (controlling torque based on context) where sounds are matched to an action context. In future execution, whenever a sound of that type is detected, it can be used to update and modify the current robot context.
  • Embodiments can use a neural network architecture to implement the various functionalities described herein, e.g., a convolutional neural network (CNN), a fully convolutional neural network (FCN), a recurrent neural network (RNN), or a long short-term memory (LSTM) neural network.
  • Any data described herein, e.g., sound and context data, or a combination thereof, can be used to train such a neural network.
  • a neural network is trained according to methods known to those skilled in the art.
  • a neural network which determines a robot's reaction based on a given context is trained by using the additional information provided by the detected sounds.
  • the sound neural network can be informed by the current context and action of the robot. For example, if the robot is handling pots and pans, the clanging and banging noises associated with that motion are indicative of normal operation. In contrast, a detected clanging or banging while preparing a stir-fry in a wok is likely to be indicative of a problem.
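  • One hypothetical way to let the sound network be informed by the robot's current context is a two-branch model that fuses an audio feature vector with a context vector, sketched below in PyTorch; the layer sizes, feature dimensions, and output classes are arbitrary choices for the sketch.

```python
# Sketch of a network combining an audio embedding with the robot's current context, so that e.g.
# clanging is scored differently while handling pots and pans than while stir-frying in a wok.
import torch
import torch.nn as nn

class SoundContextNet(nn.Module):
    def __init__(self, n_audio_features=64, n_context_features=8, n_outputs=2):
        super().__init__()
        self.audio_branch = nn.Sequential(nn.Linear(n_audio_features, 32), nn.ReLU())
        self.context_branch = nn.Sequential(nn.Linear(n_context_features, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, n_outputs)   # e.g., logits for {normal operation, problem}

    def forward(self, audio_features, context_features):
        a = self.audio_branch(audio_features)
        c = self.context_branch(context_features)
        return self.head(torch.cat([a, c], dim=-1))

net = SoundContextNet()
logits = net(torch.randn(1, 64), torch.randn(1, 8))
print(logits.shape)   # torch.Size([1, 2])
```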
  • FIG. 3 is a block diagram illustrating an example system 330 in which embodiments may be implemented.
  • the system 330 comprises a computer 331 , having input and output ports.
  • the computer 331 is suitable for running software capable of running a keyword or phrase matching program, a sound wave characteristic matching program, a multi-variable pattern recognition program, and a robot controlling system, as well as other operating systems.
  • the computer 331 may be any processing device known in the art such as a personal computer or a processor complex.
  • the computer 331 is connected to an input device 332 .
  • the input device 332 can be a microphone which allows a user to record a digital voice print to customize the system 330 to detect voice commands and accordingly perform a set of rules.
  • the input device 332 can be used to load a set of keywords or phrases into a database 333 .
  • the input device 332 can also be used to record or load a set of sound wave characteristics (e.g., digitization of the sound of glass breaking) into the database 333 .
  • the computer 331 is communicatively coupled to the database 333 which can contain a preset or continuously changing set of keywords, phrases, sound wave characteristics, or other sensor data.
  • Database 333 can also be a trained neural network, trained model, or a heuristic model.
  • the computer 331 is also connected to a sensor 334 .
  • the sensor 334 provides contextual information to the computer 331 , and can affect the rules that the system 330 executes.
  • the sensor 334 may be a camera capturing a real-time feed of an environment.
  • the sensor 334 may be a torque measurement device connected to the robot 335 .
  • the sensor 334 may be a collection of cameras, torque measurement devices, and other sensors and measurement devices.
  • the sensor 334 produces a data feed 336 which is a collection of data points coming from the variety of input sensors 334 .
  • the computer 331 may also issue commands/controls to the sensor 334 .
  • the computer 331 is connected to the microphone 337 (which may be an array of audio capture devices).
  • the microphone 337 captures sound data and relays it via data stream 338 to the computer 331 .
  • the computer 331 compares incoming audio signals from the microphone 337 to a database of sounds 333 , and performs a set of predefined rules based on the comparison.
  • the comparison can be made by matching sound wave components from data stream 338 against a library or model of known sound wave fingerprints in the database 333 , or by matching a keyword or phrase against a library of pre-defined keywords or phrases in the database 333 .
  • the comparison can be made using a Bayesian estimator, a convolutional neural network, or a recurrent neural network.
  • the comparison generates a confidence indicating whether an alert should be triggered, i.e., whether motion of the robot should be modified.
  • a variety of threshold functions can be used to determine if a recorded sound should be acted upon (e.g., a single threshold value, above a threshold for a period of time, or some other function of time, confidence, and other signals in the environment).
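  • As one concrete, purely illustrative threshold function, an alert can be required to stay above a confidence level for several consecutive frames before motion is modified; the level and frame count below are assumptions.

```python
# Sketch of a "sustained confidence" threshold function intended to reduce false positives.
from collections import deque

class SustainedThreshold:
    def __init__(self, level=0.8, required_consecutive_frames=5):
        self.level = level
        self.required = required_consecutive_frames
        self.recent = deque(maxlen=required_consecutive_frames)

    def update(self, confidence: float) -> bool:
        """Feed one confidence value per audio frame; returns True when an alert should fire."""
        self.recent.append(confidence >= self.level)
        return len(self.recent) == self.required and all(self.recent)

detector = SustainedThreshold()
for c in [0.9, 0.95, 0.7, 0.9, 0.92, 0.93, 0.9, 0.91]:
    print(detector.update(c), end=" ")   # fires only after five consecutive frames above 0.8
```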
  • the computer 331 may control the robot 335 , based on a set of rules related to the comparison performed on the aggregation of data streams 338 (sound data) and 336 (context data), and other inputs. In an embodiment, other inputs (data in addition to the data from the microphone 337 and sensor 334 ) can be provided by the robot 335 to the computer 331 .
  • the computer 331 may also control output on an external display 339 such as a monitor. In an embodiment, the display 339 alerts a user whenever the system detects danger.
  • FIG. 4 is a flowchart of a method embodiment 440 for controlling a robot in an environment.
  • the method 440 may be implemented on computer program code in combination with one or more hardware devices.
  • the computer program code may be stored on storage media, or may be transferred to a workstation over the Internet or some other type of network for execution.
  • the method 440 starts 441 and at 442 , sound capturing devices are connected to the system.
  • the sound capturing devices can be a microphone or an array of microphones or any other sound capturing device known in the art.
  • an array of microphones is used to detect the source of a sound.
  • additional sensors are connected to the system. These additional sensors can include cameras, depth sensors, sonars, and force torque sensors, amongst other examples. In embodiments, these sensors provide context data related to the environment in which the robot being controlled is operating. This context information can include the nature of the surroundings or the actions performed by an object in the environment.
  • a keyword and sound database is loaded that is indexed and searchable by different parameters such as keywords and sound characteristics, e.g., frequencies.
  • This database can be built through use of computer software that copies pre-defined keywords, phrases, sound wave characteristics, and other data metrics, by a live recording of sounds or keywords narrated by a human speaker, or through any other simulation of the data source, i.e., an environment in which the robot is being controlled.
  • the database may also be dynamically updated based on self-generated feedback or manually using input feedback provided by a user to a particular recording.
  • a set of rules is defined and associated with different data values, e.g., sounds and keywords, and other cues such as environmental context or input from other sensory devices.
  • the rules can be pre-defined and copied via computer software, or can be changed dynamically based on input. For example, user input may be used to customize the rules for the operating environment.
  • the rules may also be changed dynamically based on feedback captured by a system implementing the method 440 .
  • the robot is connected to the system.
  • the robot may provide information, such as motion data, image capture data, or other sensor output.
  • the robot may also be commanded by a system implementing the method 440 to modify its operation. These modifications may include reducing the robot's speed, modifying the robot's movement plan, or completely stopping.
  • the robot performs its predefined actions.
  • the robot performing these actions can be implemented as part of software implementing the method 440 or these actions can be dictated by an independent software program.
  • sounds and other data inputs are monitored and processed.
  • the processing can be an aggregation of sound data and context data, or the execution of other mathematical functions on sound and context data. For instance, an embodiment can utilize mathematical functions to perform preprocessing, filtering, data shaping, feature extracting, classification, and matching of the sound and context data.
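  • The following minimal Python sketch illustrates one way such a processing chain could be arranged: a crude band-pass filter, a toy feature extractor, and a pluggable classifier that receives both the sound features and the context data. The function names, the filter band, and the `classify` callable are illustrative assumptions, not elements of this disclosure.
```python
import numpy as np

def bandpass_filter(audio, sample_rate, low_hz=100.0, high_hz=8000.0):
    """Crude frequency-domain band-pass filter (illustrative preprocessing step)."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    spectrum[(freqs < low_hz) | (freqs > high_hz)] = 0.0
    return np.fft.irfft(spectrum, n=len(audio))

def extract_features(audio):
    """Toy feature vector: RMS energy and an index-weighted spectral centroid."""
    rms = float(np.sqrt(np.mean(audio ** 2)))
    spectrum = np.abs(np.fft.rfft(audio))
    centroid = float((spectrum * np.arange(len(spectrum))).sum() / (spectrum.sum() + 1e-9))
    return np.array([rms, centroid])

def process(audio, sample_rate, context, classify):
    """Aggregate sound and context data and hand them to a classifier.

    `classify` is any callable mapping (features, context) to a
    (label, confidence) pair, e.g., a wrapper around a trained model.
    """
    filtered = bandpass_filter(np.asarray(audio, dtype=float), sample_rate)
    features = extract_features(filtered)
    return classify(features, context)
```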
  • a check is made whether one or more of these data points or a collection of these data points or a pattern of these data points meet one or many conditions associated with the database or model loaded at 444 .
  • the check 449 may involve receiving words, phrases, sounds, and other inputs from a system that processes this data to remove noise or perform other mathematical transformations. If no condition is met, then the monitoring process continues at 448 .
  • a rule or rules are processed and executed. These rules can be executed by the robot, as shown in flow 451 .
  • the rules can be dynamically updated based on the fact that the rule or rules have been executed.
  • the database and model loaded at 444 can be updated based on the fact that the rule or rules have been executed.
  • a check is made whether the rule or rules require any human intervention. If no human intervention is needed, then the monitoring process continues at 448 . However, if human intervention is needed, then at 453 the human provides the input. This input can be physical input, such as pushing a button or a switch, or a digital input, such as pushing a button on a computer display screen.
  • the human input can be sent as a command to the robot as shown in flow 454 . After the human provided input, the monitoring process continues at 448 .
  • FIG. 5 is a flowchart of a method 550 for training a model, i.e., a deep neural network (DNN) that may be employed in embodiments to recognize sound cues or to extract sound features and characteristics.
  • the audio is preprocessed to extract features suitable to be fed into the DNN.
  • Feature extraction can be done using Mel-Frequency Cepstral Coefficients or other spectral analysis methods.
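  • A minimal sketch of MFCC-based feature extraction, assuming the open-source librosa library, is shown below. The helper name, sample rate, and number of coefficients are illustrative choices rather than values taken from this disclosure.
```python
import librosa
import numpy as np

def mfcc_features(path, sample_rate=16000, n_mfcc=13):
    """Load an audio clip and return a fixed-size MFCC summary vector."""
    audio, sr = librosa.load(path, sr=sample_rate, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    # Summarize over time so clips of different lengths yield comparable vectors.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```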
  • the extracted features are fed into a neural network model such as a convolutional neural network (CNN), or into a support vector machine (SVM), or into another machine learning technique.
  • a convolutional neural network consists of a combination of convolutional layers, max pooling layers and fully connected dense layers.
  • the final layer is used for classifying the original sound cue using, for example, a softmax function or a mixture of softmaxes (MoS).
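  • A minimal Keras sketch of such an architecture (convolutional layers, max pooling layers, fully connected dense layers, and a softmax output) follows. The input shape, layer sizes, and number of classes are assumptions made for illustration only.
```python
import tensorflow as tf

def build_sound_cue_classifier(n_mels=64, n_frames=128, n_classes=10):
    """Convolutional classifier over a spectrogram 'image' of a sound clip."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_mels, n_frames, 1)),
        tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # final classification layer
    ])

model = build_sound_cue_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```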
  • the method 550 may be implemented using computer program code in combination with one or more hardware devices.
  • the computer program code may be stored on storage media, or may be transferred to a workstation for execution over the Internet or any type of network.
  • the method 550 starts 551 and at 552 sound capturing devices are connected to a system executing the method 550 .
  • the sound capturing devices can be a microphone or an array of microphones or any other sound capturing device known in the art.
  • an array of microphones connected at 552 is used to detect a sound and the source of the sound.
  • other sensors are connected to the system. These other sensors can include cameras, depth sensors, sonars, and force torque sensors, amongst other examples. In embodiments, these sensors provide context data, such as the nature of the surroundings or the actions performed in an environment.
  • a robot is connected to the system executing the method 550 .
  • the robot may provide information to the overall system, such as motion data, image capture data, or other sensor output.
  • the robot may also be commanded by a system implementing the method 550 .
  • the robot may be commanded to reduce its speed, to completely stop, or to execute actions to record and generate additional data.
  • sounds (from the devices connected at 552 ) and other data inputs such as camera feeds and torque information (from the devices connected at 553 ) are measured and processed.
  • the processing can be an aggregation of this data, or the execution of other mathematical functions on this data.
  • this data is recorded and stored in a database. This database can be indexed and searchable.
  • rules are defined and associated with one or more data entries or data patterns from the database. These rules can be actions to be executed by the robot, such as stopping the robot or reducing the speed of the robot's motions.
  • An embodiment provides a sound-based emergency stop method to stop robot motion without a physical interface (button, switch, etc.). Such an embodiment listens for a variety of sounds which indicate a human, distress, a human command, or a mechanical impact or failure.
  • An audio signal, received/recorded via a microphone or an array of microphones, is compared to a library of sounds (e.g., verbal cues, such as “stop” or “ouch,” and non-verbal cues, such as the sound of glass breaking or an impact between two rigid objects). The comparison can be done using a voice or acoustic model.
  • the comparison can be made by matching sound characteristics, e.g., frequency components, against a library or model of known frequency fingerprints using (i) a Bayesian estimator, (ii) a convolutional neural network, or (iii) a recurrent neural network.
  • the comparison produces a confidence indicating whether a detected sound matches a sound cue.
  • Embodiments can utilize a variety of threshold functions (e.g., a single threshold value, above a threshold for a period of time, or some other function of time, confidence, and other signals in the environment) to determine if a detected sound matches a sound cue and should be acted upon.
  • the robot's motion can be modified, e.g., slowed or halted.
  • Embodiments can modify motion for a mobile or stationary robot.
  • Embodiments can perform sound recognition, i.e., determining if a detected sound matches a sound cue using (i) a library of sound cues, (ii) a model of sound (i.e., frequency) cues, (iii) a trained neural network, (iv) a Bayesian estimator, (v) a convolutional neural network, or (vi) a system using a recurrent neural network architecture.
  • sound capturing devices, such as a microphone or an array of microphones, can be mounted to the robot itself.
  • sound capturing devices can be mounted to locations in an environment in which the robot operates.
  • locations of the sound capturing devices can be known to a system processing the sound to further enable noise cancellation and triangulation of a sound source. If the sound capturing device(s) are mounted to the robot, the system can calculate their locations as they move with the robot. This allows an embodiment to perform calculations, e.g., sound triangulation, that are based on the dynamic location at the time the sound is recorded.
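  • The sketch below illustrates the underlying calculation: a time difference of arrival is estimated from the cross-correlation of two channels, and the corresponding hyperboloid constraint is evaluated using microphone positions taken from the robot's kinematics at the capture timestamp. The function names and the kinematics lookup are assumptions; solving several such constraints (for example with a least-squares routine) yields the source location.
```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def tdoa_seconds(sig_a, sig_b, sample_rate):
    """Lag (in seconds) at the peak of the full cross-correlation; under NumPy's
    convention a positive lag means sig_a is a delayed copy of sig_b, i.e., the
    sound reached microphone B first."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / float(sample_rate)

def tdoa_residual(source_xyz, mic_a_xyz, mic_b_xyz, measured_delay_s):
    """Residual of the constraint |s - a| - |s - b| = c * delay for a candidate
    source position; the microphone positions come from the robot's forward
    kinematics at the capture time, so they remain valid as the microphones
    move with the arm."""
    predicted = (np.linalg.norm(source_xyz - mic_a_xyz)
                 - np.linalg.norm(source_xyz - mic_b_xyz))
    return predicted - SPEED_OF_SOUND * measured_delay_s
```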
  • An embodiment continually monitors the sound capturing device input and determines if any sounds correspond to sounds which trigger an action, e.g., an emergency halt of the robot.
  • a command such as, ‘emergency stop’ or ‘zero torque’ can be issued to the robot.
  • Embodiments provide numerous benefits over existing methods for robot control.
  • Existing solutions rely on an emergency whistle, voice commands, e.g., a shouted ‘STOP’ command, or other non-verbal cues, such as excessive force, torque, or other physical signals.
  • Other existing methods rely on position based signals such as light curtains, pressure sensors, or motion sensors; or physical switches such as an emergency stop button.
  • Existing systems also use verbal cues to shut down alarms, such as the NEST smoke detector, which looks for waving arms and verbal cues to sense false alarms.
  • embodiments provide functionality to modify robot motion based on both verbal and nonverbal cues in the same system, with no hardware required for the user.
  • the novel methods and systems described herein allow the robot to autonomously modify its motion based on non-verbal sound cues (e.g. the sound of glass breaking) without the need for a human operator to signal the modification.
  • existing systems do not consider the variety of sounds which can occur in a robot's environment that are indicative of a severe problem or harmful situation for a human operator. For instance, the human operator might be accidentally injured by the robot and unable to press the emergency stop button or issue a verbal ‘stop’ command.
  • the impact of a collision, for example, can be identified and processed automatically so as to modify a robot's motion and prevent further injury.
  • the robot does not react if it is not already moving. In other words, the robot uses context about its environment. In embodiments, certain commands may cause the robot to slow down instead of stopping completely. An embodiment can recognize the person speaking using speaker recognition so as to prevent unauthorized users from shouting commands. Embodiments can also triangulate a sound based on an array of sound capturing devices to determine its source, and provide a lower weight to sounds from a particular area, e.g., a customer area. While methods of triangulating a source of sound are known to a person of ordinary skill in the art, these methods focus on microphones having fixed locations. In embodiments where the sound capturing devices are mounted to the robot, the sound capturing devices move as the robot arm moves and, in such an embodiment, the triangulation calculation is changed dynamically by tracking the location of the sound capturing devices.
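  • A minimal sketch of such context gating follows: a stationary robot ignores the cue, spoken commands require an authorized speaker, and sounds triangulated to a customer area are down-weighted. The cue labels, weights, and threshold are illustrative assumptions.
```python
def decide_reaction(cue, confidence, robot_is_moving, speaker_id, source_area,
                    authorized_speakers, area_weights, threshold=0.8):
    """Illustrative context gate applied to a detected sound cue."""
    if not robot_is_moving:
        return "ignore"       # a stationary robot should not react
    if cue == "verbal_stop" and speaker_id not in authorized_speakers:
        return "ignore"       # reject shouted commands from unauthorized speakers
    weighted = confidence * area_weights.get(source_area, 1.0)
    if weighted < threshold:
        return "ignore"       # e.g., a 'stop' from the customer area carries less weight
    return "slow_down" if cue == "verbal_slow" else "stop"

# A 'stop' shouted from the customer area: 0.9 * 0.5 = 0.45 < 0.8, so it is ignored.
action = decide_reaction("verbal_stop", 0.9, robot_is_moving=True,
                         speaker_id="worker_1", source_area="customer_area",
                         authorized_speakers={"worker_1", "worker_2"},
                         area_weights={"customer_area": 0.5, "prep_area": 1.0})
```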
  • An embodiment provides a context-driven, sound or data-based emergency stop and motion reduction method to limit robot motion without a direct physical interface such as button or switch.
  • An audio signal, received/recorded via a microphone or an array of microphones, is compared to a library of sounds (e.g., verbal cues, such as “stop” or “ouch,” and non-verbal cues, such as the sound of glass breaking). The comparison can be done using a voice or acoustic model.
  • other data inputs such as a visual camera feed, depth information, and torque measurements, can be compared to a similar library of corresponding data.
  • a combination of this data, or a pattern of this data can trigger a positive match for predetermined conditions.
  • a command can be issued to the robot to execute a set of predefined rules, such as reducing its speed or completely halting its motion.
  • An embodiment employs a mobile or stationary robot, a microphone or array of microphones, context sensors (such as camera, depth, and torque sensors), and a library of data points that, if detected by the sensor(s) (sound and context sensors), initiate a set of rules to be executed by the robot.
  • Embodiments can also implement a system using a recurrent neural network architecture for data and pattern recognition.
  • the sound capturing device or array of sound capturing devices, or the context sensor or array of context sensors can be mounted to the robot itself.
  • the array of sound capturing devices and context sensors can be mounted to locations in the environment in which the robot operates.
  • locations of the array of sound capturing devices and context sensors can be known to a system processing the data and sound to further enable noise cancellation and triangulation, i.e., locating, of data sources. If the devices are mounted to the robot, the system can calculate the locations of the sound capturing devices and context sensors as they move with the robot.
  • a recurrent neural network can be used to perform speech recognition (e.g., converting audio to written text or another form) for processing.
  • Existing methods limit robot motion through a physical interface device, such as a button or switch, by comparing measurements against thresholds, such as torque, voltage, or current limits, or by detecting boundary crossings, such as intrusions into predefined zones.
  • embodiments enable modifying robot motion, in a single system, based on verbal and nonverbal cues and on a context of the environment inferred from a collection of data sources. Unlike existing methods, embodiments require no particular hardware for users. Embodiments provide a novel approach that allows the robot to autonomously stop or modify its speed or motion based on non-verbal sound cues (e.g., the sound of glass breaking) and context data without requiring a human operator to signal the change.
  • embodiments can also use context of a robot's current task and motion plan, and the state of the surroundings as measured by other sensors to inform the modifications to the robot's movement. For instance, if no obstacles or humans are detected in the environment, then the confidence that a collision occurred is reduced. Conversely, if a human is present, and in close proximity to the robot, then it is highly likely that a collision occurred and the threshold for halting the robot is significantly reduced.
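  • A minimal sketch of this kind of context-dependent threshold adjustment is shown below; the base threshold, adjustment amounts, and distance cutoff are illustrative assumptions only.
```python
def halt_threshold(base=0.8, human_detected=False, human_distance_m=None, near_m=1.5):
    """Confidence required to halt the robot, adjusted by environmental context."""
    if not human_detected:
        return min(base + 0.15, 0.99)  # no human or obstacle seen: demand more confidence
    if human_distance_m is not None and human_distance_m <= near_m:
        return base * 0.5              # human within reach: halt on much weaker evidence
    return base

def should_halt(collision_confidence, **context):
    return collision_confidence >= halt_threshold(**context)

# Example: a weak collision cue still halts the robot when a human is very close.
assert should_halt(0.45, human_detected=True, human_distance_m=1.0)
```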
  • if the robot is not moving, it should not react, as reacting might cause additional harm. For example, a human could have accidentally impacted a stationary robot, and the robot should not move as a result of that collision.
  • the reaction of the robot to the emergency signal can vary based on context.
  • Embodiments may use a plurality of sound capturing devices. For instance, using more than one microphone, e.g., four microphones, allows the sound origin to be determined, so that more weight can be given to commands which originate within reach of the robot.
  • FIG. 6 illustrates a computer network or similar digital processing environment in which embodiments of the present disclosure may be implemented.
  • Client computer(s)/devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like.
  • the client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60 .
  • the communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth®, etc.) to communicate with one another.
  • Other electronic device/computer network architectures are suitable.
  • FIG. 7 is a diagram of an example internal structure of a computer (e.g., client processor/device 50 or server computers 60 ) in the computer system of FIG. 6 .
  • Each computer 50 , 60 contains a system bus 79 , where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system.
  • the system bus 79 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) that enables the transfer of information between the elements.
  • Attached to the system bus 79 is an I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer 50 , 60 .
  • a network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70 of FIG. 6 ).
  • Memory 90 provides volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure (e.g., structure generation module, computation module, and combination module code detailed above).
  • Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure.
  • a central processor unit 84 is also attached to the system bus 79 and provides for the execution of computer instructions.
  • the processor routines 92 and data 94 are a computer program product (generally referenced 92 ), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROM's, CD-ROM's, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the embodiment.
  • the computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art.
  • at least a portion of the software instructions may also be downloaded over a cable communication and/or wireless connection.
  • the invention programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)).
  • Such carrier medium or signals may be employed to provide at least a portion of the software instructions for the present invention routines/program 92 .

Abstract

Embodiments provide methods and systems to modify motion of a robot based on sound and context. An embodiment detects a sound in an environment and processes the sound. The processing includes comparing the detected sound to a library of sound characteristics associated with sound cues and/or extracting features or characteristics from the detected sound using a model. Motion of a robot is modified based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.

Description

    RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 62/730,703, filed on Sep. 13, 2018, U.S. Provisional Application No. 62/730,947, filed on Sep. 13, 2018, U.S. Provisional Application No. 62/730,933, filed on Sep. 13, 2018, U.S. Provisional Application No. 62/730,918, filed on Sep. 13, 2018, U.S. Provisional Application No. 62/730,934, filed on Sep. 13, 2018 and U.S. Provisional Application No. 62/731,398, filed on Sep. 14, 2018.
  • This application is related to U.S. Patent Application titled “Manipulating Fracturable And Deformable Materials Using Articulated Manipulators”, Attorney Docket No. 5000.1049-001; U.S. Patent Application titled “Food-Safe, Washable, Thermally-Conductive Robot Cover”, Attorney Docket No. 5000.1050-000; U.S. Patent Application titled “Food-Safe, Washable Interface For Exchanging Tools”, Attorney Docket No. 5000.1051-000; U.S. Patent Application titled “An Adaptor for Food-Safe, Bin-Compatible, Washable, Tool-Changer Utensils”, Attorney Docket No. 5000.1052-001; U.S. Patent Application titled “Locating And Attaching Interchangeable Tools In-Situ”, Attorney Docket No. 5000.1053-001; U.S. Patent Application titled “Determining How To Assemble A Meal”, Attorney Docket No. 5000.1054-001; U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001; U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001; U.S. Patent Application titled “Voice Modification To Robot Motion Plans”, Attorney Docket No. 5000.1058-000; and U.S. Patent Application titled “One-Click Robot Order”, Attorney Docket No. 5000.1059-000, all of the above U.S. Patent Applications having a first named inventor David M. S. Johnson and all being filed on the same day, Sep. 13, 2019.
  • The entire teachings of the above applications are incorporated herein by reference.
  • BACKGROUND
  • Robots operate in environments where they must avoid both fixed and moving obstacles, and often those obstacles are their human co-workers. Collisions with the objects, e.g., human co-workers, are unacceptable. Existing methods for robot-obstacle avoidance and for robot control in environments are cumbersome and inadequate.
  • SUMMARY
  • In environments, such as high-traffic restaurant kitchens, humans can typically identify dangerous situations by detecting input from various senses (hearing, touching, smelling, seeing, etc.), analyzing input from external sources (e.g., warnings from other colleagues, alarms), and understanding the contexts of these inputs. Humans can decide accordingly to alter their subsequent actions from these inputs. In contrast, using existing methods, robots cannot adequately and appropriately modify their operations based upon input from their operating environment.
  • Today, robots identify dangers and faults only when measurements cross certain thresholds. In particular, robots today identify dangers by (1) capturing intrusions into predefined zones, (2) measuring quantities such as torque, voltage, or current and comparing the measured quantities to predefined limits, or (3) receiving a mechanical input such as an emergency stop button. There are currently no known methods for robots to detect danger through sound cues and by deducing context from a given set of measured inputs, whether intrinsic (information measured by the robot) or external (alarm system or human sound). Further, current methods primarily rely on information relayed by other sensors such as vision, sonar, or torque sensors, and do not consider generalized inputs such as alarms or the sound of events, e.g., breaking glass and human screams, amongst other examples.
  • Embodiments solve problems in relation to employing robotics in a dynamic workspace, frequently alongside human workers, and enhance a robot's ability to sense and react to dangerous situations. Unlike existing methods, embodiments provide functionality for robots to infer from context the amount of danger that a situation presents by incorporating one or more data sources, capturing one or more details from these one or more data sources, and using pattern matching and other analysis techniques to recognize danger.
  • Embodiments of the present disclosure provide methods and systems for modifying motion of a robot. One such embodiment detects a sound in an environment using a sound capturing device and then processes the detected sound. The processing includes at least one of: (1) comparing the detected sound to a library of sound characteristics associated with sound cues and (2) extracting features or characteristics from the detected sound using a model. In turn, such an embodiment modifies motion of a robot based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
  • An embodiment creates the library of sound characteristics associated with the sound cues. Such an embodiment creates the library by (1) recording a plurality of sounds in an environment, (2) identifying one or more of the recorded plurality of sounds as a sound cue, (3) determining sound characteristics of the one or more plurality of sounds identified as a sound cue, (4) associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library, and (5) associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
  • When creating the library, embodiments may employ a variety of different input data to identify one or more of the recorded plurality of sounds as a sound cue. For instance, embodiments may identify sounds as a sound cue based on user input flagging a given sound as a sound cue, context obtained from analyzing non-sound sensor input, and output of a neural network trained to identify sound cues using the recorded plurality of sounds as input.
  • According to an embodiment, comparing the detected sound to the library of sound characteristics associated with sound cues utilizes a neural network. Such an embodiment processes the detected sound using a neural network trained to identify one or more characteristics of the detected sound that matches at least one of the sound characteristics associated with the sound cues.
  • In another embodiment, comparing the detected sound to the library of sound characteristics associated with sound cues includes identifying a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library. In such an embodiment, modifying the motion of the robot includes identifying one or more action rules associated with the given sound cue (the sound cue with a matching sound characteristic) and modifying the motion of the robot to be in accordance with the one or more action rules.
  • In an embodiment, the one or more action rules may dictate the operation of the robot given the sound cue. Similarly, the action rules may dictate the operation of the robot given the sound cue and the context of the robot. Further, the one or more action rules associated with the given sound cue may be a set of action rules. Further still, in an embodiment, at least one of the one or more action rules dictates a first result for the motion of the robot and a second result for the motion of the robot, where the motion of the robot is modified to be in accordance with the first result or the second result based upon the context of the robot.
  • Embodiments may treat any sound as a sound cue. For instance, the sound cues may include at least one of: a keyword, a phrase, a sound indicating a safety-relevant, e.g., dangerous event, and a sound relevant to an action. Simply, an embodiment may treat any sound relevant to operation of a robot as a sound cue.
  • In embodiments, “context” may include any conditions related in any way to the robot. For example, context may include any data related to the robot, the task performed by robot, the motion of the robot, and the environment in which the robot is operating, amongst other examples. In embodiments, context of the robot includes at least one of: torque of a joint of the robot; velocity of a link of the robot; acceleration of a link of the robot, jerk of a link of the robot; force of an end effector attached to the robot; torque of an end effector attached to the robot; pressure of an end effector attached to the robot; velocity of an end effector attached to the robot; acceleration of an end effector attached to the robot; task performed by the robot; and characteristics of an environment in which the robot is operating. Further, the context may include any context data as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001.
  • According to an embodiment, modifying motion of the robot includes comparing the context of the robot to a library of contexts to detect a matching context, identifying one or more action rules associated with the matching context, and modifying the motion of the robot to be in accordance with the one or more action rules. Further, in embodiments, the motion of the robot may be modified as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001.
  • Yet another embodiment creates the library of contexts. In such an embodiment, the context library is created by recording a plurality of contexts, e.g., data indicating context, in an environment and associating, in computer memory of the library, a respective action rule with one or more of the plurality of recorded contexts. In such an embodiment, the contexts may be recorded using any sensor known in the art that can capture context data, i.e., data relevant to the operation of a robot. For instance, the context data may be recorded using at least one of: a vision sensor, a depth sensor, a torque sensor, and a position sensor, amongst other examples.
  • An embodiment that creates the context library may also identify the respective action rule associated with the one or more of the plurality of recorded contexts. In an embodiment, identifying the action rule associated with the recorded contexts includes (1) processing the plurality of recorded contexts to identify at least one of: a pattern in the environment in which the contexts were captured and a condition in the environment in which the contexts were captured and (2) identifying the respective action rule using at least one of the identified pattern and condition. In such an embodiment, processing the plurality of recorded contexts to identify at least one of a pattern and a condition includes at least one of (i) comparing the plurality of recorded contexts to a library of predefined context conditions and (ii) evaluating output of a neural network trained to identify patterns or conditions of a context from the plurality of recorded contexts.
  • Another embodiment is directed to a system for modifying motion of a robot. The system includes a processor and a memory with computer code instructions stored thereon. In such an embodiment, the processor and the memory, with the computer code instructions, are configured to cause the system to implement any embodiments described herein.
  • Yet another embodiment is directed to a computer program product for modifying motion of a robot. The computer program product comprises a computer-readable medium with computer code instructions stored thereon where, the computer code instructions, when executed by a processor, cause an apparatus associated with the processor to perform any embodiments described herein.
  • A method embodiment is provided for defining/recording one or more sound cues, e.g., keywords, phrases, and one or more sound wave characteristics (amplitude, frequency, speed) and defining/recording other sensor data, such as camera data, depth data, and torque measurements. Such an embodiment monitors for this data (sound cues and other sensor data, i.e., context data) in an environment in which a robot is operating. This monitoring detects the data and patterns and/or conditions related to this data. Upon meeting pre-defined conditions related to the measured data in the environment, one or more rules governing the operation of the robot are executed.
  • Another embodiment is directed to a method for monitoring keywords, sound wave profiles, and other sensor data. Such an embodiment monitors speech or sounds and other sensor data, i.e., context data, for at least one of: (i) a pre-defined keyword, (ii) a pre-defined phrase, (iii) a characteristic, and (iv) a data pattern. Upon detecting the pre-defined keyword, phrase, characteristic, and/or other sensor data, i.e., context data, pattern, such an embodiment executes a set of rules and actions that are based on the matched pre-defined keyword, phrase, or sound wave characteristic, or context data pattern. In an embodiment, processing these rules results in identifying changes to robot motion based upon the detected pre-defined keyword, phrase, characteristic, or context data pattern.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing will be apparent from the following more particular description of example embodiments, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments.
  • FIG. 1A is a block diagram illustrating an example embodiment of a quick service food environment of embodiments of the present disclosure.
  • FIG. 1B is a block diagram illustrating an example embodiment of the present disclosure.
  • FIG. 2 is a flowchart depicting a method for modifying motion of a robot according to an embodiment.
  • FIG. 3 is a block diagram illustrating an example system in which embodiments may be implemented.
  • FIG. 4 is a flowchart of an embodiment for controlling a robot in an environment.
  • FIG. 5 is a flowchart of a method for training a model that may be employed in embodiments.
  • FIG. 6 depicts a computer network or similar digital processing environment in which embodiments may be implemented.
  • FIG. 7 is a diagram of an example internal structure of a computer in the environment of FIG. 6.
  • DETAILED DESCRIPTION
  • A description of example embodiments follows.
  • Embodiments provide functionality for modifying motion of a robot. Such functionality can be employed in any variety of environments in which control of robot motion is desired. FIG. 1A illustrates a food preparation environment 100 in which embodiments may be employed.
  • Operating a robot in a food preparation environment, such as a quick service restaurant, can be challenging for several reasons. First, the end effectors (e.g., utensils), that the robot uses need to remain clean from contamination. Contamination can include allergens (e.g., peanuts), dietary preferences (e.g., contamination from pork for a vegetarian or kosher customer), dirt/bacteria/viruses, or other non-ingestible materials (e.g., oil, plastic, or particles from the robot itself). Second, the robot should be operated within its design specifications, and not exposed to excessive temperatures or incompatible liquids, without sacrificing cleanliness. Third, the robot should be able to manipulate food stuffs, which are often fracturable and deformable materials, and further the robot must be able to measure an amount of material controlled by its utensil in order to dispense specific portions. Fourth, the robot should be able to automatically and seamlessly switch utensils (e.g., switch between a ladle and salad tongs). Fifth, the utensils should be adapted to be left in an assigned food container and interchanged with the robot as needed, in situ. Sixth, the interchangeable parts (e.g., utensils) should be washable and dishwasher safe. Seventh, the robot should be able to autonomously generate a task plan and motion plan(s) to assemble all ingredients in a recipe, and execute that plan. Eighth, the robot should be able to modify or stop a motion plan based on detected interference or voice commands to stop or modify the robot's plan. Ninth, the robot should be able to minimize the applied torque based on safety requirements or the task context or the task parameters (e.g., density and viscosity) of the material to be gathered. Tenth, the system should be able to receive an electronic order from a user, assemble the meal for the user, and place the meal for the user in a designated area for pickup automatically with minimal human involvement.
  • FIG. 1A is a block diagram illustrating an example embodiment of a quick service food environment 100 of embodiments of the present disclosure. The quick service food environment 100 includes a food preparation area 102 and a patron area 120.
  • The food preparation area 102 includes a plurality of ingredient containers 106 a-d each having a particular foodstuff (e.g., lettuce, chicken, cheese, tortilla chips, guacamole, beans, rice, various sauces or dressings, etc.). Each ingredient container 106 a-d stores in situ its corresponding ingredients. Utensils 108 a-d may be stored in situ in the ingredient containers or in a stand-alone tool rack 109. The utensils 108 a-d can be spoons, ladles, tongs, dishers (scoopers), spatulas, or other utensils. Each utensil 108 a-e is configured to mate with and disconnect from a tool changer interface 112 of a robot arm 110. While the term utensil is used throughout this application, a person having ordinary skill in the art can recognize that the principles described in relation to utensils can apply in general to end effectors in other contexts (e.g., end effectors for moving fracturable or deformable materials in construction with an excavator or backhoe, etc.); and a robot arm can be replaced with any computer controlled actuatable system which can interact with its environment to manipulate a deformable material. The robot arm 110 includes sensor elements/modules such as stereo vision systems (SVS), 3D vision sensors (e.g., Microsoft Kinect™ or an Intel RealSense™), LIDAR sensors, audio sensors (e.g., microphones), inertial sensors (e.g., inertial measurement unit (IMU), torque sensor, weight sensor, etc.) for sensing aspects of the environment, including pose (i.e., X, Y, Z coordinates and roll, pitch, and yaw angles) of tools for the robot to mate, shape and volume of foodstuffs in ingredient containers, shape and volume of foodstuffs deposited into food assembly container, moving or static obstacles in the environment, etc.
  • To initiate an order, a patron in the patron area 120 enters an order 124 in an ordering station 122 a-b, which is forwarded to a network 126. Alternatively, a patron on a mobile device 128 can, within or outside of the patron area 120, generate an optional order 132. Regardless of the source of the order, the network 126 forwards the order to a controller 114 of the robot arm 110. The controller generates a task plan 130 for the robot arm 110 to execute.
  • The task plan 130 includes a list of motion plans 132 a-d for the robot arm 110 to execute. Each motion plan 132 a-d is a plan for the robot arm 110 to engage with a respective utensil 108 a-e, gather ingredients from the respective ingredient container 106 a-d, and empty the utensil 108 a-e in an appropriate location of a food assembly container 104 for the patron, which can be a plate, bowl, or other container. The robot arm 110 then returns the utensil 108 a-e to its respective ingredient container 106 a-d, the tool rack 109, or other location as determined by the task plan 130 or motion plan 132 a-d, and releases the utensil 108 a-d. The robot arm executes each motion plan 132 a-d in a specified order, causing the food to be assembled within the food assembly container 104 in a planned and aesthetic manner.
  • Within the above environment, various of the above-described problems can be solved. The environment 100 illustrated by FIG. 1A can improve food service to patrons by assembling meals faster, more accurately, and more sanitarily than a human can assemble a meal. Some of the problems described above can be solved in accordance with the disclosure below.
  • For instance, operating a robot alongside human co-workers, such as in the quick service restaurant environment 100, can be challenging for a number of reasons. One of the most important reasons is ensuring the safe operation of the robot and properly identifying and reacting to dangerous situations. Existing safety mechanisms rely either on a physical interface (button, switch, etc.) or on a non-contextual sensory data point (e.g., radar detecting human proximity).
  • In contrast, FIG. 1B illustrates using an embodiment of the present disclosure to control the robot arm, i.e., robot, 110 in the environment 160 based on context and sound. In an environment similar to that of FIG. 1A, the robot arm 110 includes an array of several microphones 140 a-d that are mounted on the robot arm 110. The microphones 140 a-d are configured to detect and record sound waves 142. As the microphones 140 a-d record the sound waves 142, the recorded sound data 143 is reported to a controller 114. The sound data 143 can be organized into data from individual microphones as mic data 144 a-d. The controller 114 can process the sound data 143 and, if a sound cue is detected (e.g., a stop or distress sound, such as “ouch”), then the controller 114 can issue a stop command 146. Before issuing the stop command 146, the controller 114 can also consider the context of the sound data 143. For instance, the controller 114 can consider the proximity of the sound waves 142 to the robot arm 110. If, for example, the sound 142 is far from the robot arm 110, the controller 114 would consider this when deciding to issue a stop command. Further, it is noted that the controller 114 is not limited to issuing the stop command 146 and, instead, the controller 114 can issue commands modifying the operation of the robot, such as the robot's motion, path, speed, and torque, amongst other examples. Further, it is noted that while the microphones 140 a-d are depicted as located on the robot arm 110 and the controller 114 is located separately from the robot arm 110, embodiments are not limited to this configuration and sound capturing devices may be in any location. Similarly, the processing performed by the controller 114 may be performed by one or more processing devices that are capable of obtaining and processing sound data and issuing controls for the robot. These processing devices may be located on/in the robot or may be located locally or remotely in relation to the robot arm 110.
  • FIG. 1B further illustrates sound waves 150 beginning from the patron area 120. With the multiple microphones 140 a-d, the controller 114 can determine a triangulated location 152 of the sound waves 150. In turn, the controller 114 can process the sound waves 150 to determine if the sound waves 150 correspond to a sound cue for which action should be taken and the controller 114 can also consider the context of the robot arm 110, such as the location 152 of the sound waves 150 in relation to the robot arm 110. Based upon the sound waves 150 and the context, the controller 114 can determine modifications, if any, for the robot arm's 110 motion. In the example of the sound waves 150, the controller 114 can determine that the triangulated location 152 is in the patron area 120 and the controller 114 can consider the proximity of the location 152 to the robot arm 110 and ignore the sound waves 150 altogether even if the sound waves 150 correspond to a sound cue for which action would be taken if the sound cue occurred in closer proximity to the robot arm 110.
  • FIG. 2 is a flow chart of a method 220 for modifying motion of a robot according to an embodiment. The method 220, at 221, detects a sound in an environment using a sound capturing device. In an embodiment, the method 220 continuously operates during conditions in which the robot is configured to move and the robot can possibly collide with objects. In turn, the method 220 processes, at 222, the sound detected from 221. The processing 222 determines whether the detected sound is a sound for which action should be taken. According to an embodiment, the processing 222 includes at least one of: (1) comparing the detected sound to a library of sound characteristics associated with sound cues and (2) extracting features or characteristics from the detected sound using a model. In an embodiment, the comparing at 222 using a model is done via a neural network serving as the model. To continue the method 220, at 223, motion of a robot is modified based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound. The motion modification can take any form, such as the motion modification described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001, including moving to a known safe region, stopping all motion, or using the sound to apply additional context to the current action. If the current action context is dangerous, then a triggering sound cue may be configured to drop all robot joint torques below a safe threshold until a human operator signals that it is safe to continue robot operation.
  • To illustrate the method 220, consider the example environment 160 depicted in FIG. 1B. In such an example embodiment, at 221, the sound waves 142 are detected by the microphones 140 a-d and recorded as the sound data 143. In turn, at 222, the sound data 143 is processed by the controller 114, which compares the sound data 143 to a library. In such an example, the comparison identifies that the sound data 143 matches the sound cue of a person yelling stop. At 223, based on the context, which in this example is a person's hand approaching the food preparation area 102, and the comparison determining that the recorded sound data 143 matches the person yelling the “stop” sound cue, the controller 114 determines that the robot should be stopped and issues the stop command 146.
  • An embodiment of the method 220 creates the library of sound characteristics associated with the sound cues used at 222. Such an embodiment creates the library by (1) recording a plurality of sounds in an environment, (2) identifying one or more of the recorded plurality of sounds as a sound cue, (3) determining sound characteristics of the one or more plurality of sounds identified as a sound cue, (4) associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library, and (5) associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
  • According to an embodiment, creating the library as described trains a neural network, e.g., model, using the action rules associated with the plurality of sounds identified as a sound cue. As such, a neural network may be created that can receive a sound recorded in an environment and determine an appropriate action rule to be executed. In such an embodiment, sound cues may be labeled by what the sounds indicate, e.g., collisions, broken plate, etc. As such, the sound cues may be labeled with a classification of the sound. Further, sound cues may also be associated with the context data of the conditions under which the sounds were recorded, e.g., location. In such an embodiment, the library may associate, in the computer memory, an action rule with the sounds identified as a sound cue and the relevant context data. This data, sound characteristics of a sound cue, context data, and action rule(s) may be used to train a neural network and thus, the trained neural network can identify action rules to execute given input sound data and context data.
  • Action rules may indicate any action given associated conditions, e.g., sound and context. To illustrate, one action rule may indicate that if the detected sound is “ouch” and the context is that the robot is moving (likely the robot hit someone), the resulting action should be stopping the robot's motion. Another action rule may indicate that if the detected sound is “ouch” and the context is that the robot is stopped and exerting a torque (likely indicating that the robot pinned a person), the robot's motion should be changed to zero torque.
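  • A minimal sketch of such rules, written as a lookup table keyed by sound cue and robot state, follows. The cue and state labels and the resulting commands are illustrative assumptions, not terms defined by this disclosure.
```python
# (sound cue, robot state) -> action issued to the robot controller
ACTION_RULES = {
    ("ouch", "moving"):              "stop_motion",  # likely struck someone: stop
    ("ouch", "stopped_with_torque"): "zero_torque",  # likely pinning someone: release torque
    ("glass_breaking", "moving"):    "slow_down",
}

def action_for(cue, robot_state, default="continue"):
    return ACTION_RULES.get((cue, robot_state), default)

assert action_for("ouch", "moving") == "stop_motion"
assert action_for("ouch", "stopped_with_torque") == "zero_torque"
```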
  • When creating the library of sound characteristics, embodiments may employ a variety of different input data to identify one or more of the recorded plurality of sounds as a sound cue. For instance, embodiments may identify sounds as a sound cue based on (i) a user input flagging a given sound as a sound cue, (ii) context obtained from analyzing non-sound sensor input, and (iii) output of a machine learning method as described herein, such as the method 550. Such functionality may employ a neural network trained to identify sound cues using the recorded plurality of sounds as input. In an embodiment, the non-sound sensor may be any sensor known in the art, such as a camera, torque sensor, and force sensor, amongst other examples.
  • Further, an embodiment may identify a recorded sound as a sound cue using a neural network trained to identify a recorded sound as a sound cue based on the recorded sound and the non-sound sensor context data. To illustrate, consider an example where the recorded sound is the sound of a collision. The collision itself is identifiable in an image (non-sound sensor input). A neural network can be trained to identify the sound (the collision) as a sound cue based upon input of the image showing the collision.
  • In the example of identifying a sound as a sound cue from context obtained from analyzing non-sound sensor input, the non-sound sensor may be any sensor known in the art, such as a camera, depth sensor, torque sensor, lidar, thermometer, and pressure sensor, amongst other examples. To illustrate, consider an example where, as part of creating the library, the sound of glass breaking is recorded. In such an embodiment, context data can be obtained using image data from a camera which indicates that the robot collided with a glass object and broke the glass object. As such, it can be determined that the recorded sound of glass breaking should be a sound cue, because the image showed the robot breaking the glass object, and the sound can be stored accordingly in the library.
  • According to an embodiment, comparing the detected sound to the library of sound characteristics associated with sound cues at 222 utilizes a neural network. Such functionality may utilize any neural network described herein. Further, such an embodiment processes the detected sound using a neural network trained to identify one or more characteristics of the detected sound that match at least one of the sound characteristics associated with the sound cues. In an embodiment, 222 may utilize a model that is a neural network classifier that characterizes the detected sound. Such an embodiment may simply determine if a sound is “bad” or “not bad” and, in turn, at 223, the robot's motion is modified based upon context and whether the sound is “bad” or “not bad”. In such an embodiment, the neural network may be implemented using supervised learning where the neural network is trained with sound examples that have been labelled as “bad” or “not bad.”
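  • A minimal sketch of such supervised training, using a scikit-learn support vector classifier as one possible stand-in for the classifier described above, is shown below. The randomly generated feature vectors and labels exist only to make the sketch runnable; in practice the inputs would be labelled sound features such as the MFCC summaries sketched earlier.
```python
import numpy as np
from sklearn.svm import SVC

# X: one feature vector per labelled clip; y: 1 for clips labelled "bad"
# (injury, breakage, etc.), 0 for "not bad". Placeholder data shown here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 26))
y = rng.integers(0, 2, size=200)

clf = SVC(probability=True).fit(X, y)

def is_bad_sound(features, threshold=0.8):
    """Return True when the classifier is confident the sound is 'bad'."""
    p_bad = clf.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return p_bad >= threshold
```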
  • In another embodiment of the method 220, comparing the detected sound to the library of sound characteristics associated with sound cues includes identifying a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library. According to an embodiment, the matching is determined through a tuned threshold which is selective to avoid false positives, but meets required levels of safety in conjunction with safe operation, such as working with human co-workers as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001 and utilizing safe torques as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001. In such an embodiment, the sound characteristic may be any characteristic of a sound wave known in the art, such as frequency, amplitude, direction, and velocity.
  • In an embodiment, modifying the motion of the robot 223 is based upon the result of the comparison, the features extracted, and/or the characteristics extracted at 222. To illustrate, in the example where the comparing 222 includes identifying a sound characteristic of the detected sound that matches a given sound characteristic associated with a given sound cue in the library, modifying the motion of the robot 223 includes identifying one or more action rules associated with the given sound cue (the sound cue with a sound characteristic that matches a sound characteristic of the recorded sound). In such an embodiment, the robot motion is modified to be in accordance with the one or more action rules.
  • Similarly, the modifying 223 may be done in accordance with the extracted features or characteristics. For instance, if a feature is extracted which simply indicates the detected sound is a “bad sound,” e.g., associated with injury, the robot may be stopped when the feature is extracted from the detected sound. In an embodiment, sounds which are encoded in the library as being associated with dangerous situations for the human or the co-worker are used to modify the context of the executed action to either stop or change the motion of the robot as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001 (torque based on context) and U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001 (working with human co-workers).
  • In embodiments of the method 220, the one or more action rules may dictate the operation of the robot given the sound cue. Further, the action rules may dictate the operation of the robot given the sound cue and the context of the robot. Further, the one or more action rules associated with the given sound cue may be a set of action rules. These rules may indicate different actions to take based upon different characteristics of a recorded sound and different context data of the environment in which the sound was recorded.
  • In embodiments, the set of rules may be based upon different sounds, characteristics of sounds, classification of sounds, classification of characteristics of sounds, and context of sounds, e.g., location of sounds. For instance, in an embodiment, at least one of the one or more action rules dictates a first result for the motion of the robot and a second result for the motion of the robot, where the motion of the robot is modified to be in accordance with the first result or the second result based upon the context of the robot. To illustrate, again consider the example where the detected sound is glass breaking. After detecting this sound and comparing the detected sound to the library of sound characteristics associated with sound cues, it is determined that the detected sound has characteristics matching the “breaking glass” sound cue. The breaking glass sound cue has action rules which dictate a result based on context. For example, the rules may indicate that the robot's motion should stop if the broken glass sound occurred within 10 feet of the robot and the robot can operate normally if the broken glass sound occurred more than 10 feet from the robot.
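  • The broken-glass example above reduces to a rule with two results selected by context; a minimal sketch, with the distance expressed in feet as in the example, follows.
```python
def broken_glass_rule(distance_to_robot_ft):
    """Action rule whose result depends on context (distance of the sound source)."""
    if distance_to_robot_ft <= 10.0:
        return "stop_motion"        # first result: the glass broke near the robot
    return "continue_normally"      # second result: far-away breakage is ignored
```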
  • Embodiments of the method 220 may treat any sound as a sound cue. For instance, the sound cues may include at least one of: a keyword, a phrase, a sound indicating a safety-relevant, e.g., dangerous, event, and a sound relevant to an action. Simply, embodiments may treat any sound relevant to operation of a robot as a sound cue.
  • In embodiments of the method 220, “context” may include any conditions related, in any way, to the robot such as environmental context and operational context. For example, context may include any data related to the robot, the task performed by the robot, the motion of the robot, and the environment in which the robot is operating, amongst other examples. For instance, in embodiments, context of the robot includes at least one of: torque of a joint of the robot; velocity of a link of the robot; acceleration of a link of the robot; jerk of a link of the robot; force of an end effector attached to the robot; torque of an end effector attached to the robot; pressure of an end effector attached to the robot; velocity of an end effector attached to the robot; acceleration of an end effector attached to the robot; task performed by the robot; and characteristics of an environment in which the robot is operating. Further, context may include an action state of the robot (e.g., idle, moving, changing tool, scooping, cutting, picking) and the state of objects (humans, robots, animals, etc.) in the environment, e.g., whether an object is in or out of the workspace, its speed of movement, its proximity to the robot, and whether a collision is possible. Further, the context may include any context data as described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001, and the context may include predicted motion of an object as described in U.S. Patent Application titled “Robot Interaction With Human Co-Workers”, Attorney Docket No. 5000.1057-001. By knowing the context of the action, and thus the implied level of danger associated with it, the level of reaction to the sound cue can be modified. For example, if the robot is engaged in a dangerous activity which requires high torque and a sharp object, then any sound cue indicating distress results in an immediate and drastic reduction in robot output torque to below a safe threshold.
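  • One possible way to gather the kinds of context data listed above into a single record is sketched below; the field names, units, and defaults are illustrative assumptions rather than elements of this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RobotContext:
    """Snapshot of robot and environment state used when reacting to a sound cue."""
    joint_torques: List[float] = field(default_factory=list)    # N*m, one per joint
    link_velocities: List[float] = field(default_factory=list)  # m/s, one per link
    end_effector_force: Optional[float] = None                  # N, if a force sensor is present
    task: str = "idle"             # e.g., "cutting", "scooping", "changing_tool"
    action_state: str = "idle"     # e.g., "idle", "moving"
    humans_in_workspace: int = 0   # from vision or other non-sound sensors
```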
  • In an embodiment of the method 220, modifying motion of the robot at 223 includes comparing the context, i.e., context data, of the robot to a library of contexts, i.e., context data, to detect a matching context. Such an embodiment identifies one or more action rules associated with the matching context and modifies the motion of the robot to be in accordance with the one or more action rules. Comparing the context to the library of contexts may be done by a neural network or by comparing features of the context of the robot to features of the contexts in the library. In this way, embodiments may utilize statistical models.
  • Yet another embodiment of the method 220 creates the library of contexts. In such an embodiment, the context library is created by recording a plurality of contexts, i.e., recording data indicating contexts, in an environment and associating, in computer memory of the library, a respective action rule with one or more of the plurality of recorded contexts. In such an embodiment, the contexts may be recorded using any sensor known in the art that can capture context data, i.e., data relevant to the operation of a robot. For instance, the context data may be recorded using at least one of: a vision sensor, a depth sensor, a torque sensor, and a position sensor, amongst other examples.
  • An embodiment of the method 220 that creates the context library may also identify the respective action rule associated with the one or more of the plurality of recorded contexts. In an embodiment, identifying the action rule associated with the recorded contexts includes (1) processing the plurality of recorded contexts to identify at least one of: a pattern in the environment in which the contexts were captured and a condition in the environment in which the contexts were captured. In turn, the respective action rule is identified using at least one of the identified pattern and condition. In such an embodiment, processing the plurality of recorded contexts to identify at least one of a pattern and a condition includes at least one of (i) comparing the plurality of recorded contexts to a library of predefined context conditions and (ii) evaluating output of a neural network trained to identify patterns or conditions of a context from the plurality of recorded contexts. Such an embodiment may apply a modification to the technique described in U.S. Patent Application titled “Controlling Robot Torque And Velocity Based On Context”, Attorney Docket No. 5000.1055-001 (controlling torque based on context) where sounds are matched to an action context. In future execution, whenever a sound of that type is detected, it can be used to update and modify the current robot context.
  • Embodiments can use a neural network architecture to implement the various functionalities described herein. For instance, embodiments can utilize a convolutional neural network (CNN), a fully convolutional neural network (FCN), a recurrent neural network (RNN), a long short-term memory (LSTM) neural network, or any other known neural network architecture. In embodiments, any data described herein, e.g., sound and context data, or a combination thereof, can be used to train such a neural network. In an embodiment, a neural network is trained according to methods known to those skilled in the art. According to an embodiment, a neural network which determines a robot’s reaction based on a given context is trained by using the additional information provided by the detected sounds. Additionally, in an embodiment, the sound neural network can be informed by the current context and action of the robot. For example, if the robot is handling pots and pans, the clanging and banging noises associated with that motion are indicative of normal operation. In contrast, a detected clanging or banging while preparing a stir-fry in a wok is likely to be indicative of a problem.
  • FIG. 3 is a block diagram illustrating an example system 330 in which embodiments may be implemented. The system 330 comprises a computer 331, having input and output ports. The computer 331 is suitable for running software such as a keyword or phrase matching program, a sound wave characteristic matching program, a multi-variable pattern recognition program, and a robot controlling system, as well as an operating system and other programs. In embodiments, the computer 331 may be any processing device known in the art such as a personal computer or a processor complex.
  • The computer 331 is connected to an input device 332. In embodiments, the input device 332 can be a microphone which allows a user to record a digital voice print to customize the system 330 to detect voice commands and accordingly perform a set of rules. In embodiments, the input device 332 can be used to load a set of keywords or phrases into a database 333. The input device 332 can also be used to record or load a set of sound wave characteristics (e.g., digitization of the sound of glass breaking) into the database 333.
  • The computer 331 is communicatively coupled to the database 333 which can contain a preset or continuously changing set of keywords, phrases, sound wave characteristics, or other sensor data. Database 333 can also be a trained neural network, trained model, or a heuristic model.
  • The computer 331 is also connected to a sensor 334. The sensor 334 provides contextual information to the computer 331, and can affect the rules that the system 330 executes. In embodiments, the sensor 334 may be a camera capturing a real-time feed of an environment. In embodiments, the sensor 334 may be a torque measurement device connected to the robot 335. Further, in embodiments, the sensor 334 may be a collection of cameras, torque measurement devices, and other sensors and measurement devices. The sensor 334 produces a data feed 336 which is a collection of data points coming from the variety of input sensors 334. Further, while not depicted in FIG. 3, the computer 331 may also issue commands/controls to the sensor 334.
  • In the system 330, the computer 331 is connected to the microphone 337 (which may be an array of audio capture devices). The microphone 337 captures sound data and relays it via data stream 338 to the computer 331.
  • In an embodiment, the computer 331 compares incoming audio signals from the microphone 337 to a database of sounds 333, and performs a set of predefined rules based on the comparison. The comparison can be made by matching sound wave components from data stream 338 against a library or model of known sound wave fingerprints in the database 333, or by matching a keyword or phrase against a library of pre-defined keywords or phrases in the database 333. The comparison can be made using a Bayesian estimator, a convolutional neural network, or a recurrent neural network. In an embodiment, the comparison generates a confidence indicating whether an alert should be triggered, i.e., whether motion of the robot should be modified. In embodiments, a variety of threshold functions can be used to determine if a recorded sound should be acted upon (e.g., a single threshold value, above a threshold for a period of time, or some other function of time, confidence, and other signals in the environment).
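  • The threshold functions mentioned above can be as simple as a single cutoff, or can require the confidence to remain high for a period of time before triggering. The sketch below shows both variants; the cutoff values and hold duration are illustrative assumptions, not values from this disclosure.

```python
import time
from typing import Optional


def single_threshold(confidence: float, cutoff: float = 0.9) -> bool:
    """Trigger as soon as one comparison exceeds the cutoff."""
    return confidence >= cutoff


class SustainedThreshold:
    """Trigger only when the confidence stays above the cutoff for hold_seconds."""

    def __init__(self, cutoff: float = 0.8, hold_seconds: float = 0.5):
        self.cutoff = cutoff
        self.hold_seconds = hold_seconds
        self._above_since: Optional[float] = None

    def update(self, confidence: float, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if confidence < self.cutoff:
            self._above_since = None     # confidence dropped; reset the timer
            return False
        if self._above_since is None:
            self._above_since = now
        return (now - self._above_since) >= self.hold_seconds
```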
  • The computer 331 may control the robot 335, based on a set of rules related to the comparison performed on the aggregation of data streams 338 (sound data) and 336 (context data), and other inputs. In an embodiment, other inputs (data in addition to the data from the microphone 337 and sensor 334) can be provided by the robot 335 to the computer 331. The computer 331 may also control output on an external display 339 such as a monitor. In an embodiment, the display 339 alerts a user whenever the system detects danger.
  • FIG. 4 is a flowchart of a method embodiment 440 for controlling a robot in an environment. The method 440 may be implemented on computer program code in combination with one or more hardware devices. The computer program code may be stored on storage media, or may be transferred to a workstation over the Internet or some other type of network for execution.
  • The method 440 starts 441 and at 442, sound capturing devices are connected to the system. The sound capturing devices can be a microphone or an array of microphones or any other sound capturing device known in the art. In embodiments, an array of microphones is used to detect the source of a sound. At 443 additional sensors are connected to the system. These additional sensors can include cameras, depth sensors, sonars, and force torque sensors, amongst other examples. In embodiments, these sensors provide context data related to the environment in which the robot being controlled is operating. This context information can include the nature of the surroundings or the actions performed by an object in the environment.
  • At 444, a keyword and sound database is loaded that is indexed and searchable by different parameters such as keywords and sound characteristics, e.g., frequencies. This database can be built through use of computer software that copies pre-defined keywords, phrases, sound wave characteristics, and other data metrics, by a live recording of sounds or keywords narrated by a human speaker, or through any other simulation of the data source, i.e., an environment in which a robot is being controlled. The database may also be dynamically updated based on self-generated feedback, or manually using input feedback provided by a user for a particular recording.
  • At 445, a set of rules is defined and associated with different data values, e.g., sounds and keywords, and other cues such as environmental context or input from other sensory devices. The rules can be pre-defined and copied via computer software, or can be changed dynamically based on input. For example, user input may be used to customize the rules for the operating environment. The rules may also be changed dynamically based on feedback captured by a system implementing the method 440.
  • At 446 the robot is connected to the system. The robot may provide information, such as motion data, image capture data, or other sensor output. The robot may also be commanded by a system implementing the method 440 to modify its operation. These modifications may include reducing the robot's speed, modifying the robot's movement plan, or completely stopping.
  • At 447, the robot performs its predefined actions. The robot performing these actions can be implemented as part of software implementing the method 440 or these actions can be dictated by an independent software program.
  • At 448, sounds and other data inputs are monitored and processed. The processing can be an aggregation of sound data and context data, or the execution of other mathematical functions on sound and context data. For instance, an embodiment can utilize mathematical functions to perform preprocessing, filtering, data shaping, feature extraction, classification, and matching of the sound and context data. At 449, a check is made whether one or more of these data points, a collection of these data points, or a pattern of these data points meets one or more conditions associated with the database or model loaded at 444. The check 449 may involve receiving words, phrases, sounds, and other inputs from a system that processes this data to remove noise or perform other mathematical transformations. If no condition is met, then the monitoring process continues at 448. However, if a match is detected, then at 450, a rule or rules are processed and executed. These rules can be executed by the robot, as shown in flow 451. Optionally, the rules can be dynamically updated based on the fact that the rule or rules have been executed. Optionally, the database and model loaded at 444 can be updated based on the fact that the rule or rules have been executed. At 452, a check is made whether the rule or rules require any human intervention. If no human intervention is needed, then the monitoring process continues at 448. However, if human intervention is needed, then at 453 the human provides the input. This input can be physical input, such as pushing a button or a switch, or a digital input, such as pushing a button on a computer display screen. Optionally, the human input can be sent as a command to the robot as shown in flow 454. After the human provides the input, the monitoring process continues at 448.
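  • The monitoring cycle of steps 448-454 can be summarized with the following sketch. The callables passed in (for reading inputs, matching conditions, looking up rules, and collecting human input) are hypothetical placeholders for the steps described above, not interfaces defined by this disclosure.

```python
from typing import Callable, Iterable, Optional, Tuple


def monitoring_loop(read_inputs: Callable[[], Tuple[object, object]],
                    find_match: Callable[[object, object], Optional[str]],
                    rules_for: Callable[[str], Iterable[Callable[[], None]]],
                    needs_human_input: Callable[[str], bool],
                    get_human_command: Callable[[], Callable[[], None]]) -> None:
    """Run the monitor/check/execute cycle until interrupted."""
    while True:
        sound, context = read_inputs()        # step 448: monitor sound and context data
        match = find_match(sound, context)    # step 449: check conditions against the database
        if match is None:
            continue                          # no condition met; keep monitoring
        for rule in rules_for(match):         # step 450: process the associated rule(s)
            rule()                            # flow 451: e.g., command the robot to stop or slow
        if needs_human_input(match):          # step 452: does the rule require intervention?
            get_human_command()()             # steps 453-454: apply the human-provided command
```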
  • FIG. 5 is a flowchart of a method 550 for training a model, i.e., a deep neural network (DNN), that may be employed in embodiments to recognize sound cues or to extract sound features and characteristics. In an embodiment, the audio is preprocessed to extract features suitable to be fed into the DNN. Feature extraction can be done using Mel-frequency cepstral coefficients (MFCCs) or other spectral analysis methods. The extracted features are fed into a neural network model such as a convolutional neural network (CNN), or into a support vector machine (SVM), or into another machine learning technique. A convolutional neural network consists of a combination of convolutional layers, max pooling layers, and fully connected dense layers. The final layer is used for classifying the original sound cue using, for example, a softmax function or a mixture of softmaxes (MoS). The method 550 may be implemented using computer program code in combination with one or more hardware devices. The computer program code may be stored on storage media, or may be transferred to a workstation over the Internet or any other type of network for execution.
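  • A minimal PyTorch sketch of the kind of classifier described above follows: MFCC features (computed, for example, with librosa.feature.mfcc) are fed to a small CNN whose final softmax scores each sound-cue class. The layer sizes, input shape, and number of classes are assumptions for illustration, not values from this disclosure.

```python
import torch
import torch.nn as nn


class SoundCueCNN(nn.Module):
    """Small CNN that classifies an MFCC 'image' into sound-cue classes."""

    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),   # infers the flattened size on first use
            nn.Linear(64, n_classes),
        )

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        # mfcc shape: (batch, 1, n_mfcc, n_frames); returns per-class probabilities
        logits = self.classifier(self.features(mfcc))
        return torch.softmax(logits, dim=1)


# Example: classify one 40-coefficient by 100-frame MFCC feature map.
probs = SoundCueCNN()(torch.randn(1, 1, 40, 100))
```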
  • The method 550 starts 551 and at 552 sound capturing devices are connected to a system executing the method 550. The sound capturing devices can be a microphone or an array of microphones or any other sound capturing device known in the art. In embodiments, an array of microphones connected at 552 is used to detect a sound and the source of the sound. At 553 other sensors are connected to the system. These other sensors can include cameras, depth sensors, sonars, and force torque sensors, amongst other examples. In embodiments, these sensors provide context data, such as the nature of the surroundings or the actions performed in an environment.
  • At 554 a robot is connected to the system executing the method 550. The robot may provide information to the overall system, such as motion data, image capture data, or other sensor output. The robot may also be commanded by a system implementing the method 550. The robot may be commanded to reduce its speed, to completely stop, or to execute actions to record and generate additional data.
  • At 555, sounds (from the devices connected at 552) and other data inputs such as camera feeds and torque information (from the devices connected at 553) are measured and processed. The processing can be an aggregation of this data, or the execution of other mathematical functions on this data. At 556 this data is recorded and stored in a database. This database can be indexed and searchable.
  • At 557, rules are defined and associated with one or more data entries or data patterns from the database. These rules can be actions to be executed by the robot, such as stopping the robot or reducing the speed of the robot's motions.
  • An embodiment provides a sound-based emergency stop method to stop robot motion without a physical interface (button, switch, etc.). Such an embodiment listens for a variety of sounds which indicate a human, distress, a human command, or a mechanical impact or failure. An audio signal, received/recorded via a microphone or an array of microphones, is compared to a library of sounds (e.g., verbal cues, such as “stop” or “ouch,” and non-verbal cues, such as the sound of glass breaking, or impact between two rigid objects). The comparison can be done using a voice or acoustic model. Further, the comparison can be made by matching sound characteristics, e.g., frequency components, against a library or model of known frequency fingerprints using (i) a Bayesian estimator, (ii) a convolutional neural network, or (iii) a recurrent neural network.
  • In an embodiment, the comparison determines a confidence in the comparison, i.e., whether a detected sound matches a sound cue. Embodiments can utilize a variety of threshold functions (e.g., a single threshold value, above a threshold for a period of time, or some other function of time, confidence, and other signals in the environment) to determine if a detected sound matches a sound cue and should be acted upon. In response to finding a positive match of a recorded sound to a sound recorded in a library and based upon context of the robot, the robot's motion can be modified, e.g., slowed or halted.
  • Embodiments can modify motion for a mobile or stationary robot. Embodiments can perform sound recognition, i.e., determining if a detected sound matches a sound cue using (i) a library of sound cues, (ii) a model of sound (i.e., frequency) cues, (iii) a trained neural network, (iv) a Bayesian estimator, (v) a convolutional neural network, or (vi) a system using a recurrent neural network architecture.
  • In an embodiment, sound capturing devices, such as a microphone or array of microphones, can be mounted to the robot itself. In another embodiment, sound capturing devices can be mounted to locations in an environment in which the robot operates. In an embodiment, locations of the sound capturing devices can be known to a system processing the sound to further enable noise cancellation and triangulation of a sound source. If mounted to the robot, the system can calculate the location of the sound capturing device(s) as they move with the robot. This allows an embodiment to perform calculations, e.g., sound triangulation, that are based on the dynamic location of the devices at the time the sound is recorded.
  • An embodiment continually monitors the sound capturing device input and determines if any sounds correspond to sounds which trigger an action, e.g., an emergency halt of the robot. In such an embodiment, a command, such as, ‘emergency stop’ or ‘zero torque’ can be issued to the robot.
  • Embodiments provide numerous benefits over existing methods for robot control. Existing solutions rely on an emergency whistle, voice commands, e.g., a shouted ‘STOP’ command, or non-verbal cues, such as excessive force, torque, or other physical signals. Other existing methods rely on position-based signals, such as light curtains, pressure sensors, or motion sensors, or on physical switches such as an emergency stop button. Existing systems also use gesture and verbal cues to shut down alarms, such as the NEST smoke detector, which looks for waving arms and verbal cues to sense false alarms.
  • Currently, implementations executing emergency stops in robotics rely on a physical interface device, such as a button or switch, which can be either wired or wireless. The drawback of this approach is that the human operator must remain in close physical proximity to the emergency stop device to activate the emergency stop feature. Other existing methods involve emergency stop based on sound, but are limited in scope (i.e., the specific sound of a whistle), or require specific hardware carried by the operator (i.e., using a headset).
  • In contrast, embodiments provide functionality to modify robot motion based on both verbal and nonverbal cues in the same system, with no hardware required for the user. The novel methods and systems described herein allow the robot to autonomously modify its motion based on non-verbal sound cues (e.g. the sound of glass breaking) without the need for a human operator to signal the modification.
  • Further, existing systems do not consider the variety of sounds which can occur in a robot's environment that are indicative of a severe problem or harmful situation for a human operator. For instance, the human operator might be accidentally injured by the robot and unable to press the emergency stop button or issue a verbal ‘stop’ command. However, using embodiments, the impact of a collision, for example, can be identified and processed automatically so as to modify a robot's motion and prevent further injury.
  • In an embodiment, the robot does not react if it is not already moving. In other words, the robot uses context about its environment. In embodiments, certain commands may cause the robot to slow down instead of coming to a complete stop. An embodiment can recognize the person speaking using speaker recognition so as to prevent unauthorized users from shouting commands. Embodiments can also triangulate the sound to determine its source based on an array of sound capturing devices, and provide a lower weight to sounds from a particular area, e.g., a customer area. While methods of triangulating a source of sound are known to a person of ordinary skill in the art, these methods focus on microphones having fixed locations. In embodiments where the sound capturing devices are mounted to the robot, the sound capturing devices move as the robot arm moves and, in such an embodiment, the triangulation calculation is changed dynamically by tracking the location of the sound capturing devices.
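  • One way the dynamic triangulation described above might be realized is sketched below: microphone positions are supplied per call (for robot-mounted microphones, from the robot's forward kinematics at the instant the sound was recorded), and the source position is estimated from time-difference-of-arrival residuals. The function names and the least-squares formulation are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def locate_sound(mic_positions: np.ndarray, arrival_times: np.ndarray) -> np.ndarray:
    """Estimate the 3D sound-source position from mic positions (N, 3) and arrival times (N,)."""
    ref_pos, ref_time = mic_positions[0], arrival_times[0]

    def residuals(source: np.ndarray) -> np.ndarray:
        # Predicted vs. measured time differences relative to the reference microphone.
        predicted = (np.linalg.norm(mic_positions - source, axis=1)
                     - np.linalg.norm(ref_pos - source)) / SPEED_OF_SOUND
        return predicted - (arrival_times - ref_time)

    # Start the search at the centroid of the (possibly moving) microphone array.
    return least_squares(residuals, x0=mic_positions.mean(axis=0)).x
```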
  • An embodiment provides a context-driven, sound or data-based emergency stop and motion reduction method to limit robot motion without a direct physical interface such as button or switch. An audio signal, received/recorded via a microphone or an array of microphones, is compared to a library of sounds (e.g., verbal cues, such as “stop” or “ouch,” and non-verbal cues, such as the sound of glass breaking). The comparison can be done using a voice or acoustic model. Similarly, other data inputs, such as a visual camera feed, depth information, and torque measurements, can be compared to a similar library of corresponding data. Similarly, a combination of this data, or a pattern of this data, can trigger a positive match for predetermined conditions. In response to such a match, a command can be issued to the robot to execute a set of predefined rules, such as reducing its speed or completely halting its motion.
  • An embodiment employs a mobile or stationary robot, a microphone or array of microphones, context sensors (such as camera, depth, and torque sensors), and a library of data points that, if detected by the sensor(s) (sound and context sensors), initiates a set of rules to be executed by the robot. Embodiments can also implement a system using a recurrent neural network architecture for data and pattern recognition. In an embodiment, the sound capturing device or array of sound capturing devices, or the context sensor or array of context sensors, can be mounted to the robot itself. In another embodiment, the array of sound capturing devices and context sensors can be mounted to locations in the environment in which the robot operates. In an embodiment, locations of the array of sound capturing devices and context sensors can be known to a system processing the data and sound to further enable noise cancellation and triangulation, i.e., locating, of data sources. If mounted to the robot, the system can calculate the location of the sound capturing devices and context sensors as they move with the robot. In an embodiment, a recurrent neural network can be used to perform speech recognition (e.g., converting audio to written text or another form) for processing.
  • As noted herein, existing implementations for controlling a robot rely on a physical interface device, such as a button or switch, on comparing measurements against thresholds, such as torque, voltage, or current limits, or on detecting boundary crossings such as intrusions into predefined zones. The drawback of these approaches is that either a human operator must remain in close physical proximity to the emergency stop device to activate the emergency stop feature, the thresholds are too conservative in nature and produce too many false positives, or certain events, such as glass breaking, are missed and not captured.
  • Existing methods that involve emergency stop based on sound are limited in scope (i.e., the specific sound of a whistle), or require specific hardware carried by the operator (i.e., using a headset). Other existing methods are limited to human screams or are limited in functionality and cannot be used for real-time emergency alerts.
  • In contrast, embodiments enable modifying robot motion based on verbal and nonverbal cues in the same system and inferred context of the environment that is based on a collection of data sources. Unlike existing methods, embodiments require no particular hardware for users. Embodiments provide a novel approach that allows the robot to autonomously stop or modify its speed or motion based on non-verbal sound cues (e.g., the sound of glass breaking) and context data without requiring a human operator to signal the change.
  • Besides using sound as a trigger, embodiments can also use context of a robot's current task and motion plan, and the state of the surroundings as measured by other sensors to inform the modifications to the robot's movement. For instance, if no obstacles or humans are detected in the environment, then the confidence that a collision occurred is reduced. Conversely, if a human is present, and in close proximity to the robot, then it is highly likely that a collision occurred and the threshold for halting the robot is significantly reduced.
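  • The context-dependent sensitivity described above can be expressed as a threshold that drops when a human is present and close by. The sketch below is illustrative; the specific distances and confidence values are assumptions.

```python
def halt_confidence_threshold(human_in_workspace: bool, human_distance_m: float) -> float:
    """Return the collision-sound confidence needed to halt the robot."""
    if human_in_workspace and human_distance_m < 1.0:
        return 0.5   # a human is within arm's reach: halt readily
    if human_in_workspace:
        return 0.7   # a human is present but farther away
    return 0.9       # no human detected: require high confidence before halting
```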
  • Additionally, in an embodiment, if the robot is not moving, it should not react, as reacting might cause additional harm; e.g., a human could have accidentally impacted a stationary robot, and the robot should not move as a result of that collision. For robots where a sudden stop might have catastrophic consequences, the reaction of the robot to the emergency signal can vary based on context.
  • Embodiments may use a plurality of sound capturing devices. For instance, using more than one microphone, e.g., four microphones, allows the sound origin to be determined, which in turn allows more weight to be given to commands that originate within reach of the robot.
  • FIG. 6 illustrates a computer network or similar digital processing environment in which embodiments of the present disclosure may be implemented. Client computer(s)/devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like. The client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60. The communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, local area or wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth®, etc.) to communicate with one another. Other electronic device/computer network architectures are suitable.
  • FIG. 7 is a diagram of an example internal structure of a computer (e.g., client processor/device 50 or server computers 60) in the computer system of FIG. 6. Each computer 50, 60 contains a system bus 79, where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system. The system bus 79 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) and that enables the transfer of information between the elements. Attached to the system bus 79 is an I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer 50, 60. A network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70 of FIG. 6). Memory 90 provides volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure (e.g., code implementing the sound detection, comparison, and robot motion modification functionality detailed above). Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present disclosure. A central processor unit 84 is also attached to the system bus 79 and provides for the execution of computer instructions.
  • In one embodiment, the processor routines 92 and data 94 are a computer program product (generally referenced 92), including a non-transitory computer-readable medium (e.g., a removable storage medium such as one or more DVD-ROM's, CD-ROM's, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the embodiment. The computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable communication and/or wireless connection. In other embodiments, the invention programs are a computer program propagated signal product embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier medium or signals may be employed to provide at least a portion of the software instructions for the present invention routines/program 92.
  • The teachings of all patents, published applications and references cited herein are incorporated by reference in their entirety.
  • While example embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the embodiments encompassed by the appended claims.

Claims (20)

What is claimed is:
1. A method for modifying motion of a robot, the method comprising:
detecting a sound in an environment using a sound capturing device;
processing the detected sound, the processing including at least one of:
comparing the detected sound to a library of sound characteristics associated with sound cues; and
extracting features or characteristics from the detected sound using a model; and
modifying motion of a robot based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
2. The method of claim 1, further comprising:
creating the library of sound characteristics associated with the sound cues by:
recording a plurality of sounds in an environment;
identifying one or more of the recorded plurality of sounds as a sound cue;
determining sound characteristics of the one or more plurality of sounds identified as a sound cue;
associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library; and
associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
3. The method of claim 2, wherein identifying one or more of the recorded plurality of sounds as a sound cue is based upon at least one of:
user input flagging a given sound as a sound cue;
context obtained from analyzing non-sound sensor input; and
output of a neural network trained to identify sound cues using the recorded plurality of sounds as input.
4. The method of claim 1, wherein comparing the detected sound to the library of sound characteristics associated with sound cues includes:
processing the detected sound using a neural network trained to identify one or more characteristics of the detected sound that matches at least one of the sound characteristics associated with the sound cues.
5. The method of claim 1, wherein comparing the detected sound to the library of sound characteristics associated with sound cues includes:
identifying a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library.
6. The method of claim 5, wherein based on the context and the comparison, modifying motion of the robot includes:
identifying one or more action rules associated with the given sound cue; and
modifying the motion of the robot to be in accordance with the one or more action rules.
7. The method of claim 6, wherein at least one of the one or more action rules dictates a first result for the motion of the robot and a second result for the motion of the robot, where the motion of the robot is modified to be in accordance with the first result or the second result based upon the context of the robot.
8. The method of claim 1, wherein the sound cues include at least one of: a keyword, a phrase, a sound indicating a safety-relevant event, and a sound relevant to an action.
9. The method of claim 1, wherein context of the robot includes at least one of: torque of a joint of the robot; velocity of a link of the robot; acceleration of a link of the robot; jerk of a link of the robot; force of an end effector attached to the robot; torque of an end effector attached to the robot; pressure of an end effector attached to the robot; velocity of an end effector attached to the robot; acceleration of an end effector attached to the robot; task performed by the robot; and characteristics of an environment in which the robot is operating.
10. The method of claim 1, wherein modifying motion of the robot includes:
comparing the context of the robot to a library of contexts to detect a matching context;
identifying one or more action rules associated with the matching context; and
modifying the motion of the robot to be in accordance with the one or more action rules.
11. The method of claim 10, further comprising creating the library of contexts by:
recording a plurality of contexts in an environment; and
associating, in computer memory of the library, a respective action rule with one or more of the plurality of recorded contexts.
12. The method of claim 11, wherein recording the plurality of contexts in the environment uses at least one of: a vision sensor; a depth sensor; a torque sensor; and a position sensor.
13. The method of claim 11 further comprising identifying the respective action rule associated with the one or more of the plurality of recorded contexts by:
processing the plurality of recorded contexts to identify at least one of: a pattern in the environment in which the contexts were captured and a condition in the environment in which the contexts were captured; and
identifying the respective action rule using at least one of the identified pattern and condition.
14. The method of claim 13 wherein processing the plurality of recorded contexts to identify at least one of a pattern and a condition includes at least one of:
comparing the plurality of recorded contexts to a library of predefined context conditions; and
evaluating output of a neural network trained to identify patterns or conditions of a context from the plurality of recorded contexts.
15. A system for modifying motion of a robot, the system comprising:
a processor; and
a memory with computer code instructions stored thereon, the processor and the memory, with the computer code instructions, being configured to cause the system to:
detect a sound in an environment using a sound capturing device;
process the detected sound, the processing including at least one of:
comparing the detected sound to a library of sound characteristics associated with sound cues; and
extracting features or characteristics from the detected sound using a model; and
modify motion of a robot based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
16. The system of claim 15 where, in comparing the detected sound to the library of sound characteristics associated with sound cues, the processor and the memory, with the computer code instructions, are further configured to cause the system to:
identify a sound characteristic of the detected sound matching a given sound characteristic associated with a given sound cue in the library.
17. The system of claim 16 where, in modifying motion of the robot based on the comparison and context, the processor and the memory, with the computer code instructions, are further configured to cause the system to:
identify one or more action rules associated with the given sound cue; and
modify the motion of the robot to be in accordance with the one or more action rules.
18. The system of claim 15 where, in modifying motion of the robot, the processor and the memory, with the computer code instructions, are further configured to cause the system to:
compare the context of the robot to a library of contexts to detect a matching context;
identify one or more action rules associated with the matching context; and
modify the motion of the robot to be in accordance with the one or more action rules.
19. The system of claim 15 wherein the processor and the memory, with the computer code instructions, are further configured to cause the system to:
create the library of sound characteristics associated with the sound cues by:
recording a plurality of sounds in an environment;
identifying one or more of the recorded plurality of sounds as a sound cue;
determining sound characteristics of the one or more plurality of sounds identified as a sound cue;
associating the determined sound characteristics with the one or more plurality of sounds identified as a sound cue in computer memory of the library; and
associating, in the computer memory of the library, a respective action rule with the one or more plurality of sounds identified as a sound cue.
20. A non-transitory computer program product for modifying motion of a robot, the computer program product comprising a computer-readable medium with computer code instructions stored thereon, the computer code instructions being configured, when executed by a processor, to cause an apparatus associated with the processor to:
detect a sound in an environment using a sound capturing device;
process the detected sound, the processing including at least one of:
comparing the detected sound to a library of sound characteristics associated with sound cues; and
extracting features or characteristics from the detected sound using a model; and
modify motion of a robot based on a context of the robot and at least one of: (i) the comparison, (ii) the features extracted from the detected sound, and (iii) the characteristics extracted from the detected sound.
US16/571,025 2018-09-13 2019-09-13 Stopping Robot Motion Based On Sound Cues Abandoned US20200086497A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/571,025 US20200086497A1 (en) 2018-09-13 2019-09-13 Stopping Robot Motion Based On Sound Cues

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201862730918P 2018-09-13 2018-09-13
US201862730934P 2018-09-13 2018-09-13
US201862730947P 2018-09-13 2018-09-13
US201862730703P 2018-09-13 2018-09-13
US201862730933P 2018-09-13 2018-09-13
US201862731398P 2018-09-14 2018-09-14
US16/571,025 US20200086497A1 (en) 2018-09-13 2019-09-13 Stopping Robot Motion Based On Sound Cues

Publications (1)

Publication Number Publication Date
US20200086497A1 true US20200086497A1 (en) 2020-03-19

Family

ID=68069913

Family Applications (11)

Application Number Title Priority Date Filing Date
US16/571,041 Active 2039-12-05 US11648669B2 (en) 2018-09-13 2019-09-13 One-click robot order
US16/570,736 Active 2039-10-01 US11597084B2 (en) 2018-09-13 2019-09-13 Controlling robot torque and velocity based on context
US16/570,855 Active 2041-11-22 US11673268B2 (en) 2018-09-13 2019-09-13 Food-safe, washable, thermally-conductive robot cover
US16/570,100 Active 2039-11-28 US11628566B2 (en) 2018-09-13 2019-09-13 Manipulating fracturable and deformable materials using articulated manipulators
US16/571,040 Active 2040-08-18 US11597087B2 (en) 2018-09-13 2019-09-13 User input or voice modification to robot motion plans
US16/570,955 Active 2041-06-08 US11597086B2 (en) 2018-09-13 2019-09-13 Food-safe, washable interface for exchanging tools
US16/570,915 Active 2039-09-24 US11597085B2 (en) 2018-09-13 2019-09-13 Locating and attaching interchangeable tools in-situ
US16/571,003 Active 2041-02-14 US11607810B2 (en) 2018-09-13 2019-09-13 Adaptor for food-safe, bin-compatible, washable, tool-changer utensils
US16/571,025 Abandoned US20200086497A1 (en) 2018-09-13 2019-09-13 Stopping Robot Motion Based On Sound Cues
US16/570,606 Active 2040-04-20 US11872702B2 (en) 2018-09-13 2019-09-13 Robot interaction with human co-workers
US16/570,976 Active 2040-06-18 US11571814B2 (en) 2018-09-13 2019-09-13 Determining how to assemble a meal

Family Applications Before (8)

Application Number Title Priority Date Filing Date
US16/571,041 Active 2039-12-05 US11648669B2 (en) 2018-09-13 2019-09-13 One-click robot order
US16/570,736 Active 2039-10-01 US11597084B2 (en) 2018-09-13 2019-09-13 Controlling robot torque and velocity based on context
US16/570,855 Active 2041-11-22 US11673268B2 (en) 2018-09-13 2019-09-13 Food-safe, washable, thermally-conductive robot cover
US16/570,100 Active 2039-11-28 US11628566B2 (en) 2018-09-13 2019-09-13 Manipulating fracturable and deformable materials using articulated manipulators
US16/571,040 Active 2040-08-18 US11597087B2 (en) 2018-09-13 2019-09-13 User input or voice modification to robot motion plans
US16/570,955 Active 2041-06-08 US11597086B2 (en) 2018-09-13 2019-09-13 Food-safe, washable interface for exchanging tools
US16/570,915 Active 2039-09-24 US11597085B2 (en) 2018-09-13 2019-09-13 Locating and attaching interchangeable tools in-situ
US16/571,003 Active 2041-02-14 US11607810B2 (en) 2018-09-13 2019-09-13 Adaptor for food-safe, bin-compatible, washable, tool-changer utensils

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/570,606 Active 2040-04-20 US11872702B2 (en) 2018-09-13 2019-09-13 Robot interaction with human co-workers
US16/570,976 Active 2040-06-18 US11571814B2 (en) 2018-09-13 2019-09-13 Determining how to assemble a meal

Country Status (3)

Country Link
US (11) US11648669B2 (en)
EP (3) EP3849756A1 (en)
WO (11) WO2020056377A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200089235A1 (en) * 2014-09-26 2020-03-19 Ecovacs Robotics Co., Ltd. Self-moving robot movement boundary determining method
US20220067149A1 (en) * 2020-08-25 2022-03-03 Robert Bosch Gmbh System and method for improving measurements of an intrusion detection system by transforming one dimensional measurements into multi-dimensional images
US11571814B2 (en) 2018-09-13 2023-02-07 The Charles Stark Draper Laboratory, Inc. Determining how to assemble a meal
US11591170B2 (en) 2019-10-25 2023-02-28 Dexai Robotics, Inc. Robotic systems and methods for conveyance of items

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170221296A1 (en) 2016-02-02 2017-08-03 6d bytes inc. Automated preparation and dispensation of food and beverage products
FR3052861B1 (en) * 2016-06-20 2018-07-13 Ixblue METHOD FOR COMPENSATION OF CORIOLIS, CENTRIFUGAL AND GRAVITY COUPLES IN A MOTION SIMULATOR, MOTION SIMULATOR SYSTEM
CN106406312B (en) * 2016-10-14 2017-12-26 平安科技(深圳)有限公司 Guide to visitors robot and its moving area scaling method
WO2018165038A1 (en) 2017-03-06 2018-09-13 Miso Robotics, Inc. Augmented reality-enhanced food preparation system and related methods
US11142412B2 (en) 2018-04-04 2021-10-12 6d bytes inc. Dispenser
US20190307262A1 (en) 2018-04-04 2019-10-10 6d bytes inc. Solid Dispenser
US11420344B2 (en) 2018-04-24 2022-08-23 Miso Robotics, Inc. Smooth surfaced flexible and stretchable skin for covering robotic arms in restaurant and food preparation applications
US11170224B2 (en) 2018-05-25 2021-11-09 Vangogh Imaging, Inc. Keyframe-based object scanning and tracking
US11192258B2 (en) 2018-08-10 2021-12-07 Miso Robotics, Inc. Robotic kitchen assistant for frying including agitator assembly for shaking utensil
US11436753B2 (en) 2018-10-30 2022-09-06 Liberty Reach, Inc. Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
US11577401B2 (en) * 2018-11-07 2023-02-14 Miso Robotics, Inc. Modular robotic food preparation system and related methods
JP6997068B2 (en) * 2018-12-19 2022-01-17 ファナック株式会社 Robot control device, robot control system, and robot control method
JP7339124B2 (en) * 2019-02-26 2023-09-05 株式会社Preferred Networks Control device, system and control method
US11452226B2 (en) * 2019-03-13 2022-09-20 Lg Electronics Inc. Robot with through-hole to receive pin
US11170526B2 (en) * 2019-03-26 2021-11-09 Samsung Electronics Co., Ltd. Method and apparatus for estimating tool trajectories
US11170552B2 (en) 2019-05-06 2021-11-09 Vangogh Imaging, Inc. Remote visualization of three-dimensional (3D) animation with synchronized voice in real-time
US11232633B2 (en) * 2019-05-06 2022-01-25 Vangogh Imaging, Inc. 3D object capture and object reconstruction using edge cloud computing resources
US20210048823A1 (en) * 2019-08-16 2021-02-18 isee Latent belief space planning using a trajectory tree
US20220234209A1 (en) * 2019-08-23 2022-07-28 Ilya A. Kriveshko Safe operation of machinery using potential occupancy envelopes
US11348332B2 (en) * 2019-09-25 2022-05-31 Toyota Research Institute, Inc. Object location analysis
US11164336B2 (en) 2019-09-27 2021-11-02 Martin Adrian FISCH Methods and apparatus for orientation keypoints for complete 3D human pose computerized estimation
US11315326B2 (en) * 2019-10-15 2022-04-26 At&T Intellectual Property I, L.P. Extended reality anchor caching based on viewport prediction
US11288509B2 (en) * 2019-11-12 2022-03-29 Toyota Research Institute, Inc. Fall detection and assistance
US11335063B2 (en) 2020-01-03 2022-05-17 Vangogh Imaging, Inc. Multiple maps for 3D object scanning and reconstruction
US11446875B2 (en) * 2020-03-09 2022-09-20 International Business Machines Corporation Devising a self-movement path for at least one printing device
CN111360780A (en) * 2020-03-20 2020-07-03 北京工业大学 Garbage picking robot based on visual semantic SLAM
US20210349462A1 (en) 2020-05-08 2021-11-11 Robust Al, Inc. Ultraviolet end effector
GB2595289A (en) * 2020-05-21 2021-11-24 Bae Systems Plc Collaborative robot system
CN111696152B (en) * 2020-06-12 2023-05-12 杭州海康机器人股份有限公司 Method, device, computing equipment, system and storage medium for detecting package stack
CN113759731A (en) * 2020-06-22 2021-12-07 北京京东乾石科技有限公司 Method and device for dispatching mechanical arms
JP2022017739A (en) * 2020-07-14 2022-01-26 株式会社キーエンス Image processing apparatus
CN111823277A (en) * 2020-07-24 2020-10-27 上海大学 Object grabbing platform and method based on machine vision
US20220048186A1 (en) * 2020-08-15 2022-02-17 Rapyuta Robotics Co., Ltd. Dynamically generating solutions for updating plans and task allocation strategies
EP3960392A1 (en) * 2020-08-24 2022-03-02 ABB Schweiz AG Method and system for robotic programming
WO2022074448A1 (en) * 2020-10-06 2022-04-14 Mark Oleynik Robotic kitchen hub systems and methods for minimanipulation library adjustments and calibrations of multi-functional robotic platforms for commercial and residential environments with artificial intelligence and machine learning
US20220152824A1 (en) * 2020-11-13 2022-05-19 Armstrong Robotics, Inc. System for automated manipulation of objects using a vision-based collision-free motion plan
US11607809B2 (en) 2020-12-22 2023-03-21 Intrinsic Innovation Llc Robot motion planning accounting for object pose estimation accuracy
US11945117B2 (en) 2021-03-10 2024-04-02 Samsung Electronics Co., Ltd. Anticipating user and object poses through task-based extrapolation for robot-human collision avoidance
US20220297293A1 (en) * 2021-03-22 2022-09-22 X Development Llc Dynamic torque saturation limits for robot actuator(s)
KR20220132241A (en) * 2021-03-23 2022-09-30 삼성전자주식회사 Robot and method for controlling thereof
US20220315353A1 (en) * 2021-03-30 2022-10-06 Dexterity, Inc. Robotic line kitting system safety features
US11833691B2 (en) 2021-03-30 2023-12-05 Samsung Electronics Co., Ltd. Hybrid robotic motion planning system using machine learning and parametric trajectories
US11897706B2 (en) 2021-03-30 2024-02-13 Dexterity, Inc. Robotic system with zone-based control
DE102021109333B4 (en) * 2021-04-14 2023-07-06 Robert Bosch Gesellschaft mit beschränkter Haftung Device and method for training a neural network for controlling a robot for an insertion task
US20220346598A1 (en) 2021-05-01 2022-11-03 Miso Robotics, Inc. Automated bin system for accepting food items in robotic kitchen workspace
US11883962B2 (en) * 2021-05-28 2024-01-30 Mitsubishi Electric Research Laboratories, Inc. Object manipulation with collision avoidance using complementarity constraints
US11833680B2 (en) 2021-06-25 2023-12-05 Boston Dynamics, Inc. Robot movement and online trajectory optimization
US11893989B2 (en) * 2021-07-13 2024-02-06 Snap Inc. Voice-controlled settings and navigation
WO2023014911A2 (en) 2021-08-04 2023-02-09 Chef Robotics, Inc. System and/or method for robotic foodstuff assembly
CN113696186B (en) * 2021-10-09 2022-09-30 东南大学 Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition
CN113848803B (en) * 2021-10-14 2023-09-12 成都永峰科技有限公司 Deep cavity curved surface machining tool path generation method
CN113843803A (en) * 2021-10-20 2021-12-28 上海景吾智能科技有限公司 Method and system for planning overturning real-time following track of overturning object
US20230138330A1 (en) * 2021-10-28 2023-05-04 Toyota Research Institute, Inc. Robots having a lift actuator and a tilt structure for lifting and supporting large objects
KR20230069333A (en) * 2021-11-12 2023-05-19 한국전자기술연구원 Data processing method and device for smart logistics control
CN114536338B (en) * 2022-03-03 2023-09-26 深圳亿嘉和科技研发有限公司 Control method of hydraulic mechanical arm
WO2023170988A1 (en) * 2022-03-08 2023-09-14 株式会社安川電機 Robot control system, robot control method, and robot control program
CN114833825A (en) * 2022-04-19 2022-08-02 深圳市大族机器人有限公司 Cooperative robot control method and device, computer equipment and storage medium
WO2023212260A1 (en) * 2022-04-28 2023-11-02 Theai, Inc. Agent-based training of artificial intelligence character models
KR102594422B1 (en) * 2023-07-11 2023-10-27 주식회사 딥핑소스 Method for training object detector capable of predicting center of mass of object projected onto the ground, method for identifying same object in specific space captured from multiple cameras having different viewing frustums using trained object detector, and learning device and object identifying device using the same
CN116934206A (en) * 2023-09-18 2023-10-24 浙江菜鸟供应链管理有限公司 Scheduling method and system

Family Cites Families (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3186795A (en) 1961-10-31 1965-06-01 United States Steel Corp Method of recovering ammonia
US3095237A (en) 1962-04-10 1963-06-25 Joseph H Desnoyers Retractible chair coupler
GB2093911A (en) 1981-02-27 1982-09-08 Ford Motor Co Ic engine cylinder head combustion chambers
US4624043A (en) 1982-09-29 1986-11-25 The Boeing Company Quick release tool holder for robots
US4512709A (en) 1983-07-25 1985-04-23 Cincinnati Milacron Inc. Robot toolchanger system
WO1985001496A1 (en) 1983-10-03 1985-04-11 American Telephone & Telegraph Company Protective robot covering
US4611377A (en) * 1984-06-04 1986-09-16 Eoa Systems, Inc. Interchangeable robot end-of-arm tooling system
US4676142A (en) 1984-06-04 1987-06-30 Eoa Systems, Inc. Adapter with modular components for a robot end-of-arm interchangeable tooling system
US4604787A (en) 1984-08-15 1986-08-12 Transamerica Delaval Inc. Tool changer for manipulator arm
DE3723329A1 (en) 1986-07-16 1988-01-21 Tokico Ltd Industrial reproduction robot
US5018266A (en) * 1987-12-07 1991-05-28 Megamation Incorporated Novel means for mounting a tool to a robot arm
US4875275A (en) 1987-12-07 1989-10-24 Megamation Incorporated Novel automatic tool changer
DE3823102C2 (en) * 1988-07-07 1995-02-09 Siemens Ag Method for operating a numerical control
US4904514A (en) 1988-09-13 1990-02-27 Kimberly-Clark Corporation Protective covering for a mechanical linkage
JPH02160487A (en) 1988-12-12 1990-06-20 Fanuc Ltd Correction of manual feeding of robot
JP2608161B2 (en) 1990-03-29 1997-05-07 ファナック株式会社 Industrial robot stop control method
US5044063A (en) 1990-11-02 1991-09-03 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Robotic tool change mechanism
US5131706A (en) 1991-07-01 1992-07-21 Rockwell International Corporation Straight line gripper tool and changer
US5360249A (en) * 1991-09-13 1994-11-01 Refac Technology Development, Corporation Multifunctional end effectors
US5396346A (en) 1992-06-10 1995-03-07 Canon Kabushiki Kaisha Image processing method and apparatus using rounding processing
US5879277A (en) 1997-06-11 1999-03-09 Kawasaki Robotics (Usa) Inc. Tool storage and retrieval system
JPH1133973A (en) 1997-07-14 1999-02-09 Fanuc Ltd Shielded-type industrial robot
US6223110B1 (en) 1997-12-19 2001-04-24 Carnegie Mellon University Software architecture for autonomous earthmoving machinery
US6678572B1 (en) 1998-12-31 2004-01-13 Asml Holdings, N.V. Recipe cascading in a wafer processing system
US6427995B1 (en) 1999-04-26 2002-08-06 Prairie Technical Industries, Inc. Quick change jaw system
JP4122652B2 (en) 1999-09-27 2008-07-23 松下電器産業株式会社 Robot control device
US6543307B2 (en) 2001-04-06 2003-04-08 Metrica, Inc. Robotic system
US20020151848A1 (en) 2001-04-11 2002-10-17 Capote Dagoberto T. Protective cover for an elongated instrument
CN100445948C (en) 2001-09-29 2008-12-24 张晓林 Automatic cooking method and system
US6569070B1 (en) 2002-01-09 2003-05-27 Dallas Design And Technology, Inc. System for changing the tooling carried by a robot
WO2003064116A2 (en) 2002-01-31 2003-08-07 Braintech Canada, Inc. Method and apparatus for single camera 3d vision guided robotics
US10105844B2 (en) 2016-06-16 2018-10-23 General Electric Company System and method for controlling robotic machine assemblies to perform tasks on vehicles
JP2004295620A (en) 2003-03-27 2004-10-21 Toyota Motor Corp Device for detecting possibility of vehicle collision
US20040187989A1 (en) 2003-03-31 2004-09-30 Mark D'Andreta Robot cover
FI123306B (en) 2004-01-30 2013-02-15 Wisematic Oy Robot tool system, and its control method, computer program and software product
US8276505B2 (en) * 2004-02-18 2012-10-02 David Benjamin Buehler Food preparation system
US7672845B2 (en) 2004-06-22 2010-03-02 International Business Machines Corporation Method and system for keyword detection using voice-recognition
JP2006049462A (en) 2004-08-03 2006-02-16 Seiko Epson Corp Dry etching device and manufacturing method of semiconductor apparatus
US20060165953A1 (en) * 2004-11-29 2006-07-27 T D Industrial Covering, Inc. Ring assembly for a covered paint robot
GB2428110A (en) 2005-07-06 2007-01-17 Armstrong Healthcare Ltd A robot and method of registering a robot.
JPWO2007122717A1 (en) 2006-04-20 2009-08-27 株式会社ガードナー Robot jacket
JP4817312B2 (en) 2006-08-28 2011-11-16 独立行政法人産業技術総合研究所 Robot emergency stop method and system using scream
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8585854B2 (en) 2007-03-27 2013-11-19 Butterworth Industries, Inc. Polymeric cover for robots
JP2008290228A (en) 2007-04-24 2008-12-04 Fanuc Ltd Fitting device
CN101687321B (en) 2007-07-05 2012-08-08 松下电器产业株式会社 Robot arm control device and control method, robot and control program
DE102007042187B3 (en) 2007-08-28 2009-04-09 IPR-Intelligente Peripherien für Roboter GmbH Tool change system for an industrial robot
WO2009045827A2 (en) 2007-09-30 2009-04-09 Intuitive Surgical, Inc. Methods and systems for tool locating and tool tracking robotic instruments in robotic surgical systems
DE102008005901B4 (en) 2008-01-24 2018-08-09 Deutsches Zentrum für Luft- und Raumfahrt e.V. Sterile barrier for a surgical robot with torque sensors
US7971916B2 (en) 2008-05-22 2011-07-05 GM Global Technology Operations LLC Reconfigurable robotic end-effectors for material handling
JP5415040B2 (en) 2008-08-01 2014-02-12 三重電子株式会社 Module for automatic tool changer
DE102009040145A1 (en) 2009-09-04 2011-03-10 Kuka Roboter Gmbh Method and device for stopping a manipulator
US9131807B2 (en) 2010-06-04 2015-09-15 Shambhu Nath Roy Robotic kitchen top cooking apparatus and method for preparation of dishes using computer recipies
EP2581885A1 (en) 2010-06-11 2013-04-17 Kabushiki Kaisha Yaskawa Denki Service providing system and service providing method
US20120255388A1 (en) 2011-04-05 2012-10-11 Mcclosky Stan H Line management system and a method for routing flexible lines for a robot
US9259289B2 (en) 2011-05-13 2016-02-16 Intuitive Surgical Operations, Inc. Estimation of a position and orientation of a frame used in controlling movement of a tool
US20130103918A1 (en) 2011-10-24 2013-04-25 Barracuda Networks, Inc Adaptive Concentrating Data Transmission Heap Buffer and Method
US9427876B2 (en) 2011-12-19 2016-08-30 Irobot Corporation Inflatable robots, robotic components and assemblies and methods including same
US11253327B2 (en) 2012-06-21 2022-02-22 Globus Medical, Inc. Systems and methods for automatically changing an end-effector on a surgical robot
JP2015526309A (en) 2012-08-31 2015-09-10 リシンク ロボティクス インコーポレイテッド System and method for safe robot operation
CN110279427B (en) 2012-12-10 2024-01-16 直观外科手术操作公司 Collision avoidance during controlled movement of movable arm of image acquisition device and steerable device
CN103130680B (en) 2013-02-04 2014-12-10 上海交通大学 High-optical-purity alkannin and Akannin naphthazarin nuclear parent hydroxyl methylation carbonyl oxime derivative and preparation method and application thereof
US9259840B1 (en) 2013-03-13 2016-02-16 Hrl Laboratories, Llc Device and method to localize and control a tool tip with a robot arm
US9186795B1 (en) 2013-06-24 2015-11-17 Redwood Robotics, Inc. Programming and execution of force-based tasks with torque-controlled robot arms
US9120227B2 (en) 2013-08-15 2015-09-01 Disney Enterprises, Inc. Human motion tracking control with strict contact force constraints for floating-base humanoid robots
US10112169B2 (en) 2013-10-28 2018-10-30 University Of Houston System System and method for ultrasound identification and manipulation of molecular interactions
US9248569B2 (en) 2013-11-22 2016-02-02 Brain Corporation Discrepancy detection apparatus and methods for machine learning
US9607015B2 (en) 2013-12-20 2017-03-28 Qualcomm Incorporated Systems, methods, and apparatus for encoding object formations
FR3015334B1 (en) 2013-12-23 2017-02-10 Electricite De France ROTARY ANNULAR CONNECTOR FOR ENVELOPE FOR PROTECTING AN ARTICULATED ROBOT ARM
WO2015117156A1 (en) 2014-02-03 2015-08-06 Chen, Jiafang Food preparation device
US9659225B2 (en) 2014-02-12 2017-05-23 Microsoft Technology Licensing, Llc Restaurant-specific food logging from images
EP3107429B1 (en) * 2014-02-20 2023-11-15 MBL Limited Methods and systems for food preparation in a robotic cooking kitchen
US9841749B2 (en) 2014-04-01 2017-12-12 Bot & Dolly, Llc Runtime controller for robotic manufacturing system
ES2773136T3 (en) 2014-06-05 2020-07-09 Softbank Robotics Europe Humanoid robot with collision avoidance and trajectory recovery capabilities
US20150375402A1 (en) 2014-06-25 2015-12-31 Td Industrial Coverings, Inc. Cover member for a robot used in a painting process having absorptive properties
US9283678B2 (en) 2014-07-16 2016-03-15 Google Inc. Virtual safety cages for robotic devices
US9452530B2 (en) 2014-09-12 2016-09-27 Toyota Jidosha Kabushiki Kaisha Robot motion replanning based on user motion
US20160073644A1 (en) 2014-09-15 2016-03-17 Roger Dickey Automated processing and placement of three-dimensional food ingredients on a surface of an object
US9547306B2 (en) 2014-09-30 2017-01-17 Speak Loud SPA State and context dependent voice based interface for an unmanned vehicle or robot
JP6282758B2 (en) 2014-11-13 2018-02-21 マクセル株式会社 Projection-type image display device and image display method
US9857786B2 (en) 2015-03-31 2018-01-02 Recognition Robotics, Inc. System and method for aligning a coordinated movement machine reference frame with a measurement system reference frame
US20170004406A1 (en) 2015-06-30 2017-01-05 Qualcomm Incorporated Parallel belief space motion planner
EP3324874B1 (en) 2015-07-17 2021-11-10 DEKA Products Limited Partnership Robotic surgery system
US20220184823A1 (en) 2015-07-23 2022-06-16 Think Surgical, Inc. Protective drape for robotic systems
US20180200014A1 (en) * 2015-07-23 2018-07-19 Think Surgical, Inc. Protective drape for robotic systems
US9744668B1 (en) 2015-08-21 2017-08-29 X Development Llc Spatiotemporal robot reservation systems and method
US11167411B2 (en) 2015-08-24 2021-11-09 Rethink Robotics Gmbh Quick-release mechanism for tool adapter plate and robots incorporating the same
EP3342561B1 (en) 2015-08-25 2022-08-10 Kawasaki Jukogyo Kabushiki Kaisha Remote control robot system
US10414047B2 (en) 2015-09-28 2019-09-17 Siemens Product Lifecycle Management Software Inc. Method and a data processing system for simulating and handling of anti-collision management for an area of a production plant
US10705528B2 (en) 2015-12-15 2020-07-07 Qualcomm Incorporated Autonomous visual navigation
US10242455B2 (en) 2015-12-18 2019-03-26 Iris Automation, Inc. Systems and methods for generating a 3D world model using velocity data of a vehicle
US9776323B2 (en) * 2016-01-06 2017-10-03 Disney Enterprises, Inc. Trained human-intention classifier for safe and efficient robot navigation
US11927965B2 (en) 2016-02-29 2024-03-12 AI Incorporated Obstacle recognition method for autonomous robots
CN111832702A (en) 2016-03-03 2020-10-27 谷歌有限责任公司 Deep machine learning method and device for robot grabbing
CA3019438A1 (en) 2016-03-29 2017-10-05 Cognibotics Ab Method, constraining device and system for determining geometric properties of a manipulator
US9687983B1 (en) * 2016-05-11 2017-06-27 X Development Llc Generating a grasp pose for grasping of an object by a grasping end effector of a robot
WO2017197170A1 (en) * 2016-05-12 2017-11-16 The Regents Of The University Of California Safely controlling an autonomous entity in presence of intelligent agents
GB2550396B (en) 2016-05-19 2021-08-18 Cmr Surgical Ltd Cooling a surgical robot arm
KR101980603B1 (en) 2016-05-20 2019-05-22 구글 엘엘씨 Methods and apparatus relating to predicting the motion(s) of the object(s) in the robotic environment based on the image(s) capturing the object(s) and parameter(s) for future robot motion in the environment
US10264916B2 (en) 2016-06-14 2019-04-23 Vinay Shivaiah Recipe driven kitchen automation of food preparation
JP6517762B2 (en) 2016-08-23 2019-05-22 ファナック株式会社 A robot system that learns the motion of a robot that a human and a robot work together
WO2018049249A1 (en) 2016-09-09 2018-03-15 Mark Ganninger System and method for automated preparation of food-based materials
US10131053B1 (en) 2016-09-14 2018-11-20 X Development Llc Real time robot collision avoidance
CN106313066A (en) 2016-09-14 2017-01-11 华南理工大学 Multi-purpose mechanical arm device based on plane quadrilateral mechanism
CN106313068A (en) 2016-09-23 2017-01-11 长沙喵厨智能科技有限公司 Automatic cooking robot
US10991033B2 (en) 2016-10-28 2021-04-27 International Business Machines Corporation Optimization of delivery to a recipient in a moving vehicle
US20180144244A1 (en) 2016-11-23 2018-05-24 Vital Images, Inc. Distributed clinical workflow training of deep learning neural networks
US10293488B2 (en) * 2016-11-28 2019-05-21 Hall Labs Llc Container and robot communication in inventory system
US20180202819A1 (en) 2017-01-18 2018-07-19 Microsoft Technology Licensing, Llc Automatic routing to event endpoints
CN108354435A (en) 2017-01-23 2018-08-03 上海长膳智能科技有限公司 Automatic cooking apparatus and the method cooked using it
DE112018000950T5 (en) 2017-02-22 2019-10-31 Kawasaki Jukogyo Kabushiki Kaisha DEVICE FOR COOKING FOODS
US11351673B2 (en) 2017-03-06 2022-06-07 Miso Robotics, Inc. Robotic sled-enhanced food preparation system and related methods
US11366450B2 (en) 2017-03-23 2022-06-21 Abb Schweiz Ag Robot localization in a workspace via detection of a datum
US20180338504A1 (en) 2017-05-25 2018-11-29 Ghanshyam Lavri Automated made to order food preparation device and system
CN107092209A (en) 2017-05-25 2017-08-25 葛武 A cooking robot
US20180348783A1 (en) 2017-05-31 2018-12-06 Neato Robotics, Inc. Asynchronous image classification
US10427306B1 (en) 2017-07-06 2019-10-01 X Development Llc Multimodal object identification
WO2019021058A2 (en) * 2017-07-25 2019-01-31 Mbl Limited Systems and methods for operating a robotic system and executing robotic interactions
CA3070624A1 (en) 2017-07-28 2019-01-31 Nuro, Inc. Flexible compartment design on autonomous and semi-autonomous vehicle
US10656657B2 (en) * 2017-08-08 2020-05-19 Uatc, Llc Object motion prediction and autonomous vehicle control
US11112796B2 (en) 2017-08-08 2021-09-07 Uatc, Llc Object motion prediction and autonomous vehicle control
US11016491B1 (en) 2018-01-26 2021-05-25 X Development Llc Trajectory planning for mobile robots
GB2570514B8 (en) 2018-01-30 2023-06-07 Cmr Surgical Ltd Surgical drape
US10732639B2 (en) * 2018-03-08 2020-08-04 GM Global Technology Operations LLC Method and apparatus for automatically generated curriculum sequence based reinforcement learning for autonomous vehicles
US20190307262A1 (en) 2018-04-04 2019-10-10 6d bytes inc. Solid Dispenser
US11420344B2 (en) 2018-04-24 2022-08-23 Miso Robotics, Inc. Smooth surfaced flexible and stretchable skin for covering robotic arms in restaurant and food preparation applications
IT201800006402A1 (en) 2018-06-18 2019-12-18 TOOL CHANGE DEVICE FOR A ROBOTIC ARM
US10953548B2 (en) * 2018-07-19 2021-03-23 International Business Machines Corporation Perform peg-in-hole task with unknown tilt
US11192258B2 (en) * 2018-08-10 2021-12-07 Miso Robotics, Inc. Robotic kitchen assistant for frying including agitator assembly for shaking utensil
JP6816070B2 (en) 2018-08-24 2021-01-20 ファナック株式会社 Interference avoidance device and robot system
JP7299642B2 (en) 2018-08-30 2023-06-28 ヴェオ ロボティクス, インコーポレイテッド System and method for automatic sensor alignment and configuration
US10744650B2 (en) 2018-09-04 2020-08-18 Irobot Corporation Mobile robots with intelligent capacitive touch sensing
US11648669B2 (en) 2018-09-13 2023-05-16 The Charles Stark Draper Laboratory, Inc. One-click robot order

Patent Citations (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4523409A (en) * 1983-05-19 1985-06-18 The Charles Stark Draper Laboratory, Inc. Automatic contour grinding system
US4896357A (en) * 1986-04-09 1990-01-23 Tokico Ltd. Industrial playback robot having a teaching mode in which teaching data are given by speech
US5774841A (en) * 1995-09-20 1998-06-30 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Real-time reconfigurable adaptive speech recognition command and control apparatus and method
US20050171643A1 (en) * 1998-09-10 2005-08-04 Kotaro Sabe Robot apparatus
US20020158599A1 (en) * 2000-03-31 2002-10-31 Masahiro Fujita Robot device, robot device action control method, external force detecting device and external force detecting method
US20030060930A1 (en) * 2000-10-13 2003-03-27 Masahiro Fujita Robot device and behavior control method for robot device
US20020181773A1 (en) * 2001-03-28 2002-12-05 Nobuo Higaki Gesture recognition system
US20040039483A1 (en) * 2001-06-01 2004-02-26 Thomas Kemp Man-machine interface unit control method, robot apparatus, and its action control method
US20050004710A1 (en) * 2002-03-06 2005-01-06 Hideki Shimomura Learning equipment and learning method, and robot apparatus
US20060137164A1 (en) * 2002-09-13 2006-06-29 Daimlerchrysler Ag Method and device for mounting several add-on parts on production part
US20080161970A1 (en) * 2004-10-19 2008-07-03 Yuji Adachi Robot apparatus
US20070233321A1 (en) * 2006-03-29 2007-10-04 Kabushiki Kaisha Toshiba Position detecting device, autonomous mobile device, method, and computer program product
US20070276539A1 (en) * 2006-05-25 2007-11-29 Babak Habibi System and method of robotically engaging an object
US20070274812A1 (en) * 2006-05-29 2007-11-29 Fanuc Ltd Workpiece picking device and method
US20080059178A1 (en) * 2006-08-30 2008-03-06 Kabushiki Kaisha Toshiba Interface apparatus, interface processing method, and interface processing program
US20080177421A1 (en) * 2007-01-19 2008-07-24 Ensky Technology (Shenzhen) Co., Ltd. Robot and component control module of the same
US20110125504A1 (en) * 2009-11-24 2011-05-26 Samsung Electronics Co., Ltd. Mobile device and method and computer-readable medium controlling same
US20110238212A1 (en) * 2010-03-26 2011-09-29 Sony Corporation Robot apparatus, information providing method carried out by the robot apparatus and computer storage media
US20130079930A1 (en) * 2011-09-27 2013-03-28 Disney Enterprises, Inc. Operational space control of rigid-body dynamical systems including humanoid robots
US20160103202A1 (en) * 2013-04-12 2016-04-14 Hitachi, Ltd. Mobile Robot and Sound Source Position Estimation System
US20140316636A1 (en) * 2013-04-23 2014-10-23 Samsung Electronics Co., Ltd. Moving robot, user terminal apparatus and control method thereof
US20150032260A1 (en) * 2013-07-29 2015-01-29 Samsung Electronics Co., Ltd. Auto-cleaning system, cleaning robot and method of controlling the cleaning robot
US20150052703A1 (en) * 2013-08-23 2015-02-26 Lg Electronics Inc. Robot cleaner and method for controlling a robot cleaner
US9189742B2 (en) * 2013-11-20 2015-11-17 Justin London Adaptive virtual intelligent agent
US20150149175A1 (en) * 2013-11-27 2015-05-28 Sharp Kabushiki Kaisha Voice recognition terminal, server, method of controlling server, voice recognition system, non-transitory storage medium storing program for controlling voice recognition terminal, and non-transitory storage medium storing program for controlling server
US20160372138A1 (en) * 2014-03-25 2016-12-22 Sharp Kabushiki Kaisha Interactive home-appliance system, server device, interactive home appliance, method for allowing home-appliance system to interact, and nonvolatile computer-readable data recording medium encoded with program for allowing computer to implement the method
US9801517B2 (en) * 2015-03-06 2017-10-31 Wal-Mart Stores, Inc. Shopping facility assistance object detection systems, devices and methods
US9621984B1 (en) * 2015-10-14 2017-04-11 Amazon Technologies, Inc. Methods to process direction data of an audio input device using azimuth values
US20170133009A1 (en) * 2015-11-10 2017-05-11 Samsung Electronics Co., Ltd. Electronic device and method for controlling the same
US20180354140A1 (en) * 2015-12-07 2018-12-13 Kawasaki Jukogyo Kabushiki Kaisha Robot system and operation method thereof
US9615066B1 (en) * 2016-05-03 2017-04-04 Bao Tran Smart lighting and city sensor
US9800973B1 (en) * 2016-05-10 2017-10-24 X Development Llc Sound source estimation based on simulated sound sensor array responses
US20170361468A1 (en) * 2016-06-15 2017-12-21 Irobot Corporation Systems and methods to control an autonomous mobile robot
US20180043952A1 (en) * 2016-08-12 2018-02-15 Spin Master Ltd. Spherical mobile robot with shifting weight steering
US20180200885A1 (en) * 2017-01-17 2018-07-19 Fanuc Corporation Robot control device
US20180345479A1 (en) * 2017-06-03 2018-12-06 Rocco Martino Robotic companion device
US20190001489A1 (en) * 2017-07-03 2019-01-03 X Development Llc Determining and utilizing corrections to robot actions
US20190066680A1 (en) * 2017-08-25 2019-02-28 Samsung Electronics Co., Ltd. Method of activating voice-recognition service and electronic device for implementing same
US20190212441A1 (en) * 2018-01-08 2019-07-11 Anki, Inc. Map Related Acoustic Filtering by a Mobile Robot
US20200073367A1 (en) * 2018-08-29 2020-03-05 Rockwell Automation Technologies, Inc. Audio recognition-based industrial automation control

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200089235A1 (en) * 2014-09-26 2020-03-19 Ecovacs Robotics Co., Ltd. Self-moving robot movement boundary determining method
US11597085B2 (en) 2018-09-13 2023-03-07 The Charles Stark Draper Laboratory, Inc. Locating and attaching interchangeable tools in-situ
US11571814B2 (en) 2018-09-13 2023-02-07 The Charles Stark Draper Laboratory, Inc. Determining how to assemble a meal
US11597086B2 (en) 2018-09-13 2023-03-07 The Charles Stark Draper Laboratory, Inc. Food-safe, washable interface for exchanging tools
US11597084B2 (en) 2018-09-13 2023-03-07 The Charles Stark Draper Laboratory, Inc. Controlling robot torque and velocity based on context
US11597087B2 (en) 2018-09-13 2023-03-07 The Charles Stark Draper Laboratory, Inc. User input or voice modification to robot motion plans
US11607810B2 (en) 2018-09-13 2023-03-21 The Charles Stark Draper Laboratory, Inc. Adaptor for food-safe, bin-compatible, washable, tool-changer utensils
US11628566B2 (en) 2018-09-13 2023-04-18 The Charles Stark Draper Laboratory, Inc. Manipulating fracturable and deformable materials using articulated manipulators
US11648669B2 (en) 2018-09-13 2023-05-16 The Charles Stark Draper Laboratory, Inc. One-click robot order
US11673268B2 (en) 2018-09-13 2023-06-13 The Charles Stark Draper Laboratory, Inc. Food-safe, washable, thermally-conductive robot cover
US11872702B2 (en) 2018-09-13 2024-01-16 The Charles Stark Draper Laboratory, Inc. Robot interaction with human co-workers
US11591170B2 (en) 2019-10-25 2023-02-28 Dexai Robotics, Inc. Robotic systems and methods for conveyance of items
US11550904B2 (en) * 2020-08-25 2023-01-10 Robert Bosch Gmbh System and method for improving measurements of an intrusion detection system by transforming one dimensional measurements into multi-dimensional images
US20220067149A1 (en) * 2020-08-25 2022-03-03 Robert Bosch Gmbh System and method for improving measurements of an intrusion detection system by transforming one dimensional measurements into multi-dimensional images

Also Published As

Publication number Publication date
US11597084B2 (en) 2023-03-07
US20200090099A1 (en) 2020-03-19
US20200086509A1 (en) 2020-03-19
US20200086482A1 (en) 2020-03-19
US11597087B2 (en) 2023-03-07
WO2020056376A1 (en) 2020-03-19
US20200086487A1 (en) 2020-03-19
EP3849755A1 (en) 2021-07-21
US20200086498A1 (en) 2020-03-19
WO2020056279A1 (en) 2020-03-19
WO2020056377A1 (en) 2020-03-19
US20200086502A1 (en) 2020-03-19
US11607810B2 (en) 2023-03-21
US20200086503A1 (en) 2020-03-19
US11597086B2 (en) 2023-03-07
WO2020056295A3 (en) 2020-04-30
US11872702B2 (en) 2024-01-16
US11673268B2 (en) 2023-06-13
WO2020056374A1 (en) 2020-03-19
US11597085B2 (en) 2023-03-07
US11648669B2 (en) 2023-05-16
US20200087069A1 (en) 2020-03-19
WO2020056295A2 (en) 2020-03-19
US20200086485A1 (en) 2020-03-19
WO2020056373A1 (en) 2020-03-19
WO2020056353A1 (en) 2020-03-19
WO2020056380A1 (en) 2020-03-19
WO2020056362A1 (en) 2020-03-19
US20200086437A1 (en) 2020-03-19
WO2020056301A1 (en) 2020-03-19
US11571814B2 (en) 2023-02-07
WO2020056375A1 (en) 2020-03-19
EP3849754A1 (en) 2021-07-21
EP3849756A1 (en) 2021-07-21
US11628566B2 (en) 2023-04-18

Similar Documents

Publication Publication Date Title
US20200086497A1 (en) Stopping Robot Motion Based On Sound Cues
AU2021269293C1 (en) Using human motion sensors to detect movement when in the vicinity of hydraulic robots
JP6898012B2 (en) Work space safety monitoring and equipment control
JP6431017B2 (en) Human cooperative robot system with improved external force detection accuracy by machine learning
US10325485B1 (en) System or process to detect, discriminate, aggregate, track, and rank safety related information in a collaborative workspace
US9914218B2 (en) Methods and apparatuses for responding to a detected event by a robot
CN102099614A (en) System for safety protection of human beings against hazardous incidents with robots
US10369697B2 (en) Collision detection
KR102032662B1 (en) Human-computer interaction with scene space monitoring
Hoffmann et al. Environment-aware proximity detection with capacitive sensors for human-robot-interaction
JP7243979B2 (en) Robot interference determination device, robot interference determination method, robot control device, robot control system, human motion prediction device, and human motion prediction method
US10444852B2 (en) Method and apparatus for monitoring in a monitoring space
EP4197710A1 (en) Situation-aware safety assessment of robot-human activities
Sung et al. Smart garbage bin based on AIoT
US10824126B2 (en) Device and method for the gesture control of a screen in a control room
CN112327867B (en) Automatic operation method and system
Narber et al. Anticipation of Touch Gestures to Improve Robot Reaction Time

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: ADVISORY ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP Information on status: patent application and granting procedure in general Free format text: NON FINAL ACTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: FINAL REJECTION MAILED
STPP Information on status: patent application and granting procedure in general Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
STPP Information on status: patent application and granting procedure in general Free format text: ADVISORY ACTION MAILED
AS Assignment Owner name: THE CHARLES STARK DRAPER LABORATORY, INC., MASSACHUSETTS Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JOHNSON, DAVID M.S.;WAGNER, SYLER;TAYOUN, ANTHONY;SIGNING DATES FROM 20220722 TO 20220816;REEL/FRAME:062764/0273
STPP Information on status: patent application and granting procedure in general Free format text: NON FINAL ACTION MAILED
STCB Information on status: application discontinuation Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION