US20200171667A1 - Vision-based robot control system - Google Patents

Vision-based robot control system

Info

Publication number
US20200171667A1
Authority
US
United States
Prior art keywords
machine
gesture
image data
user
voice command
Prior art date
Legal status
Abandoned
Application number
US16/538,950
Inventor
Glen J. Anderson
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp
Priority to US16/538,950
Publication of US20200171667A1
Status: Abandoned

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/04 Viewing devices

Definitions

  • Embodiments described herein generally relate to robotic controls and in particular, to a vision-based robot control system.
  • Robots are mechanical or electro-mechanical machines able to act as agents for human operators. Some robots are automated or semi-automated and able to perform tasks with minimal human input. Robots are used in residential, industrial, and commercial settings. As electronics and manufacturing processes scale, robot use is becoming more widespread.
  • FIG. 1 is a diagram illustrating a robot operating in an environment, according to an embodiment
  • FIG. 2 is a flowchart illustrating the data and control flow, according to an embodiment
  • FIG. 3 is a block diagram illustrating a vision-based robot control system for controlling a robot, according to an embodiment
  • FIG. 4 is a flowchart illustrating a method for providing a vision-based robot control system, according to an embodiment
  • FIG. 5 is a block diagram illustrating an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform, according to an example embodiment.
  • Robots may include various sensors to detect their environment.
  • Example sensors include proximity sensors, position sensors, impact sensors, and cameras or other image-based sensors. While some robots are able to be controlled using buttons on an exterior housing, with remote control devices, or with auxiliary controls, such as controls at a charging station, what is needed are more intuitive ways to control robots.
  • FIG. 1 is a diagram illustrating a robot 100 operating in an environment 102 , according to an embodiment.
  • the robot 100 may be any type of self-propelled, mobile machine.
  • the robot 100 may use various apparatus to move, such as wheels, treads, tracks, or the like.
  • the robot 100 may also implement one or more legs to hop, walk, crawl, or otherwise move (e.g., fly) about an environment.
  • the robot 100 may include an onboard camera system, which may include one or more cameras.
  • the camera system may include a visual light camera (e.g., an RGB camera), an infrared (IR) light camera, or other cameras.
  • the IR camera may be used for night vision, as a depth camera, or thermal imagery.
  • the robot 100 may also be equipped with other sensors, such as a sonar system, radar system, etc. to navigate environments.
  • the robot 100 also includes one or more communication systems, which may include a wireless networking communication system.
  • the wireless networking communication system may use one or more of a variety of protocols or technologies, including Wi-Fi, 3G, and 4G LTE/LTE-A, WiMAX networks, Bluetooth, near field communication (NFC), or the like.
  • the robot 100 may interface with one or more cameras 104 A, 104 B, 104 C, 104 D (collectively referred to as 104 ).
  • the cameras 104 may capture a user's movement, actions, or other aspects of the user 106 as the user 106 moves about in the environment 102 .
  • Cameras 104 in the environment 102 may include visible light cameras, infrared cameras, or other types of cameras.
  • the cameras 104 may be connected using wired or wireless connections.
  • one or more of the cameras 104 may use one or more servos for pan and tilt to follow a subject while it is within the operating field of view of the camera 104 .
  • the camera 104 may track a subject using shape recognition or with a physical marker that the subject holds or wears and the camera 104 actively tracks.
  • The physical marker may be wirelessly connected to the camera 104 using a technology such as Bluetooth.
  • Although only four cameras 104 are illustrated in FIG. 1, it is understood that more or fewer cameras may be implemented based on the size of the environment, obstructions in the environment, or other considerations. Combinations of onboard cameras and environmental cameras may be used.
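  • As an illustrative sketch of the pan-and-tilt tracking described above, the following Python snippet nudges a camera toward a tracked subject; the PanTiltServo class, the proportional gain, and the frame size are assumptions made for the sketch rather than details from this disclosure:

```python
from dataclasses import dataclass

@dataclass
class PanTiltServo:
    """Hypothetical pan/tilt servo pair; angles are in degrees."""
    pan_deg: float = 0.0
    tilt_deg: float = 0.0

    def move(self, d_pan: float, d_tilt: float) -> None:
        # Clamp to a plausible mechanical range for the sketch.
        self.pan_deg = max(-90.0, min(90.0, self.pan_deg + d_pan))
        self.tilt_deg = max(-45.0, min(45.0, self.tilt_deg + d_tilt))

def track_subject(servo: PanTiltServo,
                  subject_xy: tuple[float, float],
                  frame_size: tuple[int, int] = (640, 480),
                  gain: float = 0.05) -> None:
    """Nudge the camera so the tracked subject stays near the frame center.

    subject_xy is the pixel position of the person or physical marker,
    however the detector (shape recognition, marker tracking) produced it.
    """
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    err_x, err_y = subject_xy[0] - cx, subject_xy[1] - cy
    # Proportional control: a fraction of a degree per pixel of error.
    servo.move(d_pan=gain * err_x, d_tilt=-gain * err_y)

camera = PanTiltServo()
track_subject(camera, subject_xy=(520.0, 180.0))
print(camera)  # pan swings right and tilt rises toward the subject
```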
  • the user 106 may interface with the robot 100 using various intuitive techniques.
  • the user 106 gestures in a particular manner, which may then be captured by the cameras 104 or the robot 100 , or by other environmental sensors.
  • the gesture may provide instruction to the robot 100 based on a preconfigured association.
  • the user 106 may mark a spot on the floor by tapping his foot in a prescribed manner.
  • The robot 100 may be a cleaning robot, such as a semi-automated vacuum.
  • the tapping may be detected by the user's motion using cameras 104 , by sound processing (e.g., using a microphone array), by vibration (e.g., by using in-floor sensors to detect vibration location), or by other mechanisms or combinations of mechanisms.
  • the robot 100 may concentrate in the area indicated by the gesture, such as by performing extra passes or by temporarily slowing down to clean the area more thoroughly.
  • gesture detection is used to identify a particular location to perform a robotic action, and as such, in some cases, the gesture is used to intuitively identify the location (e.g., pointing to a spot on the floor, motioning to an area, or tapping the floor with a foot, etc.).
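  • The foot-tap detection described above may combine cameras, a microphone array, and in-floor vibration sensors. Below is a minimal sketch of one way such detections might be fused; the majority-vote rule and the confidence threshold are placeholders, not specified by the disclosure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TapObservation:
    """What each sensing mechanism reports about a possible foot tap."""
    camera_xy: Optional[tuple[float, float]] = None     # location seen by cameras 104
    audio_confidence: float = 0.0                        # microphone-array score, 0..1
    vibration_xy: Optional[tuple[float, float]] = None   # in-floor sensor estimate

def detect_tap(obs: TapObservation, min_votes: int = 2) -> Optional[tuple[float, float]]:
    """Accept a tap only when at least `min_votes` mechanisms agree, and take
    the location from the camera first, then from the floor sensors."""
    votes = sum([
        obs.camera_xy is not None,
        obs.audio_confidence > 0.5,      # placeholder threshold
        obs.vibration_xy is not None,
    ])
    if votes < min_votes:
        return None
    return obs.camera_xy if obs.camera_xy is not None else obs.vibration_xy

print(detect_tap(TapObservation(camera_xy=(2.0, 3.5), audio_confidence=0.8)))
# (2.0, 3.5): camera and audio agree, so the tap and its location are accepted
```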
  • the user 106 may provide instructions using a marker with an infrared-detectable ink.
  • the ink is not visible to human eyes, so it does not discolor or mar materials, such as carpet or furniture upholstery.
  • the robot 100 may be configured with an IR camera, or the environmental camera 104 may be an IR camera, which is able to see the markings left by the user 106 . Different markings may be used, such as a box to indicate an extra clean, a circle to instruct the robot 100 to use a different cleaner, or an “X” to avoid an area. Other markings may be used.
  • the robot 100 may clean the IR-detectable ink when cleaning the marked area. As such, the ink marking may act as a one-time instruction. Alternatively, the IR-ink may be left behind purposefully, so that the mark is able to be observed on subsequent cleanings.
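  • The marking-to-operation association described above is essentially a lookup table. A small sketch of such a table, using the box, circle, and "X" examples from the preceding paragraph; the enum names and the one-time-use flag are illustrative assumptions:

```python
from enum import Enum, auto

class Marking(Enum):
    BOX = auto()      # request an extra cleaning pass
    CIRCLE = auto()   # switch to a different cleaner
    X = auto()        # avoid this area

class RobotOp(Enum):
    EXTRA_CLEAN = auto()
    CHANGE_CLEANER = auto()
    AVOID_AREA = auto()

# Lookup table: IR-visible symbol -> robot operation it triggers.
MARKING_OPS = {
    Marking.BOX: RobotOp.EXTRA_CLEAN,
    Marking.CIRCLE: RobotOp.CHANGE_CLEANER,
    Marking.X: RobotOp.AVOID_AREA,
}

def operation_for_marking(marking: Marking, one_time: bool = True) -> dict:
    """Resolve a detected IR-ink symbol to an operation descriptor.

    one_time=True models the case where the robot cleans the ink away,
    so the marking acts as a single-use instruction."""
    return {"op": MARKING_OPS[marking].name, "erase_marking": one_time}

print(operation_for_marking(Marking.BOX))
# {'op': 'EXTRA_CLEAN', 'erase_marking': True}
```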
  • the user 106 may also provide instruction with other modalities, such as verbally or with geolocation.
  • the user 106 may gesture and then speak a verbal command, which is received at the robot 100 and then acted upon by the robot 100 .
  • the user 106 may point to a spot on the floor with a laser pointer and speak “move the shelf over here.”
  • the robot 100 may directly receive such commands with an onboard camera system and a microphone.
  • Environmental sensors, such as cameras 104 and microphones, may detect the user's actions and communicate them as commands to the robot 100.
  • the robot 100 may move to the shelf, lift it, and then move it to the location indicated by the user's gesture.
  • the user 106 may use a geolocation as an additional input.
  • the user 106 may use a mobile device 108 (e.g., smartphone, tablet, wearable device, etc.) and perform a gesture while holding or wearing the mobile device.
  • the user 106 may also initiate verbal commands to the mobile device.
  • The location of the mobile device 108 (e.g., geolocation) may be obtained and transmitted to the robot 100 along with an instruction, as defined by the gesture or verbal instruction provided by the user 106.
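  • One way to picture the multimodal flow above is a small fusion routine in which the gesture contributes the location, the voice command contributes the action, and the mobile device's geolocation acts as a fallback. The data classes, action names, and keyword matching below are assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GestureEvent:
    kind: str                                          # e.g. "point", "foot_tap"
    target_xy: Optional[tuple[float, float]] = None    # spot indicated in the room

@dataclass
class RobotInstruction:
    action: str
    location_xy: Optional[tuple[float, float]]

def fuse_command(gesture: GestureEvent,
                 voice_text: Optional[str] = None,
                 device_geolocation: Optional[tuple[float, float]] = None
                 ) -> RobotInstruction:
    """The gesture supplies the location, the voice command (if any) supplies
    the action, and the mobile device's geolocation stands in when the gesture
    itself did not yield a usable target."""
    location = gesture.target_xy if gesture.target_xy is not None else device_geolocation
    if voice_text and "shelf" in voice_text.lower():
        action = "move_shelf"
    elif gesture.kind == "foot_tap":
        action = "spot_clean"       # a bare tap defaults to a cleaning request
    else:
        action = "inspect"
    return RobotInstruction(action=action, location_xy=location)

cmd = fuse_command(GestureEvent("point", (3.2, 1.4)),
                   voice_text="Move the shelf over here.")
print(cmd)   # RobotInstruction(action='move_shelf', location_xy=(3.2, 1.4))
```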
  • FIG. 2 is a flowchart illustrating the data and control flow, according to an embodiment.
  • a triggering action performed by a user is observed by a camera system (operation 200 ). For instance, a user may perform a gesture or leave a mark visible to the robot (e.g., with a token or with an ink visible to the camera system that may or may not be visible to unaided humans).
  • the camera system interprets the action (operation 202 ). The interpretation may include gesture recognition, pattern recognition, shape recognition, or other forms of analysis to determine the type of triggering action performed by the user. If the triggering action is recognized, then additional processing may occur to determine whether the user provided any other triggering commands (operation 204 ), such as a verbal command. If a verbal command is recognized, for example, then the verbal command may be parsed and used in the subsequent operations. Additionally, geolocations, locations of pointing gestures, or other commands, may be analyzed at operation 204 .
  • a lookup table is referenced (operation 206 ) to determine which operation the robot is to perform based on the trigger input.
  • the types of trigger input and resulting robot operations may differ according to the design and function of the robot. For example, a cleaning robot may be programmed to respond to certain triggering actions, where a security robot may be programmed to respond to other triggering actions.
  • the user may map the triggering actions to only trigger one of several robots.
  • the user may configure the commands such that a single command may cause multiple robots to perform certain functions.
  • the robot is scheduled to perform the resulting operation (operation 208 ).
  • the robot may perform the operation immediately or may be configured to perform the operation at the next duty cycle (e.g., the next cleaning cycle).
  • control flow returns to operation 200 , where the system monitors for additional user triggering actions.
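  • The control flow of FIG. 2 can be sketched as a simple monitoring loop: observe a trigger (operation 200), interpret it and any accompanying command (operations 202 and 204), consult a lookup table (operation 206), and schedule the resulting operation (operation 208). The table entries and the canned observation in this sketch are placeholders:

```python
import time

# Hypothetical lookup table (operation 206): recognized trigger -> robot operation.
TRIGGER_TABLE = {
    ("foot_tap", None): "extra_clean_here",
    ("point", "move the shelf over here"): "move_shelf",
    ("ir_mark_x", None): "avoid_area",
}

def observe_trigger():
    """Stand-in for operation 200: the camera system watching the user.
    Returns (gesture, voice_command) or None when nothing was seen."""
    return ("foot_tap", None)   # canned observation so the sketch runs

def control_loop(immediate: bool = True, max_iterations: int = 3) -> None:
    for _ in range(max_iterations):
        observed = observe_trigger()              # operation 200
        if observed is None:
            continue                              # keep monitoring
        gesture, voice = observed                 # operations 202 and 204
        op = TRIGGER_TABLE.get((gesture, voice))  # operation 206
        if op is None:
            continue                              # unknown trigger: back to monitoring
        if immediate:
            print(f"performing {op} now")         # operation 208
        else:
            print(f"queueing {op} for the next duty cycle")
        time.sleep(0.1)                           # pace the sketch loop

control_loop()
```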
  • FIG. 3 is a block diagram illustrating a vision-based robot control system 300 for controlling a robot, according to an embodiment.
  • The system 300 may be installed in a robot. Alternatively, the system 300 may be separate from a robot, but communicatively coupled to the robot in order to provide control signals.
  • the system 300 includes a camera system interface 302 , a trigger detection unit 304 , an operation database interface 306 , and a transceiver 308 .
  • the camera system interface 302 , trigger detection unit 304 , operation database interface 306 , and transceiver 308 are understood to encompass tangible entities that are physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operations described herein.
  • Such tangible entities may be constructed using one or more circuits, such as with dedicated hardware (e.g., field programmable gate arrays (FPGAs), logic gates, graphics processing unit (GPU), a digital signal processor (DSP), etc.).
  • the camera system interface 302 may receive camera signal information from a camera array, which may be installed on the system 300 or be remote, but communicatively coupled to the system 300 .
  • the camera system interface 302 may receive raw video signal information or processed signal information (e.g., compressed video signals).
  • the camera system interface 302 may be used to receive tagged information from a camera array, which analyzed and preprocessed the raw video signal at, or near, the camera array.
  • The trigger detection unit 304 is capable of analyzing image or video data received by the camera system interface 302 and of detecting and identifying a gesture exhibited by movements performed by a person in the image or video data. The trigger detection unit 304 determines whether the movements constitute a recognized gesture. If the movements do constitute a recognized gesture, the trigger detection unit 304 may trigger operations performed by a robot.
  • the trigger detection unit 304 may access image data of an arm, finger, foot, or hand motion of a user captured by a camera system, and identify the gesture based on the image data.
  • the image data may be a number of successive images (e.g., video) over which the gesture is performed.
  • the trigger detection unit 304 accesses depth image data of an arm, finger, foot or hand motion of the user and identifies the gesture based on the depth image data.
  • the depth image data may be a number of successive images (e.g., video) over which the gesture is performed.
  • The trigger detection unit 304 operates independently of the camera system interface 302 and receives information or data that indicates a gesture via a different pathway.
  • the trigger detection unit 304 is to access motion data from an auxiliary device, the motion data describing an arm, finger, foot, or hand motion of the user and identify the gesture based on the motion data.
  • the auxiliary device may be a mobile device, such as a smartphone, a wearable device, such as a smartwatch or a glove, or other type of device moveable by the user in free space. Examples include, but are not limited to, smartphones, smartwatches, e-textiles (e.g., shirts, gloves, pants, shoes, etc.), smart bracelets, smart rings, or the like.
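  • A gesture identified from successive images (or from auxiliary motion data) might be classified from the trajectory of the tracked arm, finger, foot, or hand. The following sketch uses an ad hoc displacement heuristic; the thresholds and gesture labels are assumptions, not part of the disclosure:

```python
import math

def identify_gesture(track: list[tuple[float, float]],
                     tap_threshold: float = 15.0,
                     point_threshold: float = 80.0) -> str:
    """Classify a very simple gesture from successive positions of a tracked
    hand or foot (one (x, y) pixel coordinate per frame).

    Repeated back-and-forth motion with little net displacement is treated as
    a foot tap; a large, mostly one-directional sweep is treated as pointing.
    """
    if len(track) < 2:
        return "none"
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    net = math.hypot(dx, dy)
    # Total path length: sum of frame-to-frame displacements.
    path = sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(track, track[1:]))
    if path > 2 * max(net, 1.0) and path > tap_threshold:
        return "foot_tap"       # lots of movement, little net travel
    if net > point_threshold:
        return "point"
    return "none"

# Up-down-up-down trajectory of a foot: recognized as a tap.
print(identify_gesture([(100, 300), (100, 280), (100, 300), (100, 280), (100, 300)]))
```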
  • the operation database interface 306 is used to determine whether the gesture is detected and recognized as being a triggering gesture. If it is a recognized triggering gesture, then commands are transmitted to a robot using the transceiver 308 .
  • the transceiver 308 may be configured to transmit over various wireless networks, such as a Wi-Fi network (e.g., according to the IEEE 802.11 family of standards), cellular network, such as a network designed according to the Long-Term Evolution (LTE), LTE-Advanced, 5G or Global System for Mobile Communications (GSM) families of standards, or the like.
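  • When a triggering gesture is recognized, the transceiver 308 transmits commands to the robot over a wireless network. A minimal sketch of such a transmission, assuming a hypothetical JSON-over-UDP message format and a placeholder robot address; a real robot would define its own protocol on top of Wi-Fi, LTE, or the like:

```python
import json
import socket

def send_robot_command(op: str,
                       location_xy: tuple[float, float],
                       robot_addr: tuple[str, int] = ("192.168.1.50", 9999)) -> bytes:
    """Encode a command sequence as JSON and send it as a single UDP datagram.

    The address, port, and message schema here are illustrative placeholders.
    """
    payload = json.dumps({"op": op, "location": location_xy}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, robot_addr)
    return payload

if __name__ == "__main__":
    print(send_robot_command("extra_clean_here", (3.2, 1.4)))
```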
  • the system 300 describes a vision-based robot control system, the system 300 comprising the camera system interface 302 , the trigger detection unit 304 , and the transceiver 308 .
  • The camera system interface 302 may be configured to receive image data from a camera system.
  • the trigger detection unit 304 may be configured to determine a triggering action from the image data.
  • the transceiver 308 may be configured to initiate a robot operation associated with the triggering action. In an embodiment, to initiate the robot operation, the transceiver 308 is to transmit a command sequence to a robot over a wireless network. In an embodiment, the robot operation comprises a cleaning task.
  • the camera system interface 302 is to receive an image of a user performing a gesture.
  • the trigger detection unit 304 is to determine the triggering action corresponding to the gesture.
  • the gesture comprises pointing to a location in an environment.
  • the gesture comprises tapping a foot on a location in an environment.
  • the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • the image data includes a gesture performed by a user
  • the trigger detection unit 304 is to receive a voice command issued by the user and determine the triggering action using the gesture and the voice command.
  • the gesture is used to specify a location in the environment
  • the voice command is used to specify an action to be taken by the robot.
  • the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • the non-visible marking comprises an infrared-visible ink marking.
  • the non-visible marking is a symbol, and to initiate the robot operation, the operation database interface 306 is used to access a lookup table and search for the symbol to identify a corresponding robot operation.
  • the trigger detection unit 304 is to obtain a geolocation associated with the triggering action and the transceiver 308 is to initiate the robot operation to be performed at the geolocation.
  • the geolocation is obtained from a device operated by a user performing the triggering action.
  • the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • FIG. 4 is a flowchart illustrating a method 400 for providing a vision-based robot control system, according to an embodiment.
  • image data from a camera system is received at a processor-based robot control system.
  • a triggering action is determined from the image data.
  • a robot operation associated with the triggering action is initiated.
  • initiating the robot operation comprises transmitting a command sequence to a robot over a wireless network.
  • Various networks may be used, such as Bluetooth, Wi-Fi, or the like.
  • the robot operation comprises a cleaning task.
  • Other robotic tasks are understood to be within the scope of this disclosure.
  • receiving the image data comprises receiving an image of a user performing a gesture.
  • determining the triggering action comprises determining the triggering action corresponding to the gesture.
  • the gesture may be any type of gesture, as discussed above, and may include actions such as pointing, tapping one's foot, or other similar gestures.
  • the gesture comprises pointing to a location in an environment.
  • the gesture comprises tapping a foot on a location in an environment.
  • the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • the image data includes a gesture performed by a user.
  • the user may contemporaneously issue a voice command to accompany the gesture and provide further parameters on the gesture command.
  • the method 400 includes receiving a voice command issued by the user. Determining the triggering action then comprises using the gesture and the voice command to determine the triggering action.
  • the gesture is used to specify a location in the environment
  • the voice command is used to specify an action to be taken by the robot.
  • the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • the non-visible marking may be infrared ink, as described above.
  • the non-visible marking comprises an infrared-visible ink marking.
  • the non-visible marking is a symbol
  • initiating the robot operation comprises searching a lookup table for the symbol to identify a corresponding robot operation.
  • the lookup table may be administered by the user, or by an administrative person, such as the manufacturer or provider of the robot and related services.
  • the method 400 includes obtaining a geolocation associated with the triggering action and initiating the robot operation to be performed at the geolocation.
  • the geolocation may be obtained in various ways, such as with a mobile device that is able to determine a global positioning system (GPS) location (or similar), or by a camera system able to track and determine the user's location when the user performs a gesture or leaves a marking.
  • the geolocation is obtained from a device operated by a user performing the triggering action.
  • the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
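  • A short sketch of how the geolocation source might be selected, preferring a fix reported by the user's device and falling back to a camera-derived estimate of the user's position; both inputs are placeholders for whatever localization a deployment uses:

```python
from typing import Optional

def resolve_geolocation(device_fix: Optional[tuple[float, float]],
                        camera_estimate: Optional[tuple[float, float]]
                        ) -> Optional[tuple[float, float]]:
    """Pick the geolocation to attach to a triggering action: prefer the
    GPS-style fix from the user's mobile device, else the camera estimate."""
    return device_fix if device_fix is not None else camera_estimate

print(resolve_geolocation(None, (12.5, 4.0)))                   # camera-derived fallback
print(resolve_geolocation((45.5231, -122.6765), (12.5, 4.0)))   # device fix wins
```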
  • Embodiments may be implemented in one or a combination of hardware, firmware, and software. Embodiments may also be implemented as instructions stored on a machine-readable storage device, which may be read and executed by at least one processor to perform the operations described herein.
  • a machine-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer).
  • a machine-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.
  • a processor subsystem may be used to execute the instruction on the machine-readable medium.
  • the processor subsystem may include one or more processors, each with one or more cores. Additionally, the processor subsystem may be disposed on one or more physical devices.
  • the processor subsystem may include one or more specialized processors, such as a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or a fixed function processor.
  • Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms.
  • Modules may be hardware, software, or firmware communicatively coupled to one or more processors in order to carry out the operations described herein.
  • Modules may be hardware modules, and as such modules may be considered tangible entities capable of performing specified operations and may be configured or arranged in a certain manner.
  • circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module.
  • the whole or part of one or more computer systems may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations.
  • the software may reside on a machine-readable medium.
  • the software when executed by the underlying hardware of the module, causes the hardware to perform the specified operations.
  • the term hardware module is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein.
  • each of the modules need not be instantiated at any one moment in time.
  • the modules comprise a general-purpose hardware processor configured using software; the general-purpose hardware processor may be configured as respective different modules at different times.
  • Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time.
  • Modules may also be software or firmware modules, which operate to perform the methodologies described herein.
  • Circuitry or circuits may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry.
  • the circuits, circuitry, or modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.
  • FIG. 5 is a block diagram illustrating a machine in the example form of a computer system 500 , within which a set or sequence of instructions may be executed to cause the machine to perform any one of the methodologies discussed herein, according to an example embodiment.
  • the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
  • the machine may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments.
  • the machine may be a wearable device, personal computer (PC), a tablet PC, a hybrid tablet, a personal digital assistant (PDA), a mobile telephone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
  • The term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
  • the term “processor-based system” shall be taken to include any set of one or more machines that are controlled by or operated by a processor (e.g., a computer) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein.
  • Example computer system 500 includes at least one processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.), a main memory 504 and a static memory 506 , which communicate with each other via a link 508 (e.g., bus).
  • the computer system 500 may further include a video display unit 510 , an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse).
  • the video display unit 510 , input device 512 and UI navigation device 514 are incorporated into a touch screen display.
  • the computer system 500 may additionally include a storage device 516 (e.g., a drive unit), a signal generation device 518 (e.g., a speaker), a network interface device 520 , and one or more sensors (not shown), such as a global positioning system (GPS) sensor, compass, accelerometer, gyrometer, magnetometer, or other sensor.
  • the storage device 516 includes a machine-readable medium 522 on which is stored one or more sets of data structures and instructions 524 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein.
  • the instructions 524 may also reside, completely or at least partially, within the main memory 504 , static memory 506 , and/or within the processor 502 during execution thereof by the computer system 500 , with the main memory 504 , static memory 506 , and the processor 502 also constituting machine-readable media.
  • While the machine-readable medium 522 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 524.
  • the term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
  • machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520 utilizing any one of a number of well-known transfer protocols (e.g., HTTP).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Bluetooth, Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks).
  • The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • Example 1 is a vision-based robot control system, the system comprising: a camera system interface to receive image data from a camera system; a trigger detection unit to determine a triggering action from the image data; and a transceiver to initiate a robot operation associated with the triggering action.
  • Example 2 the subject matter of Example 1 optionally includes wherein to receive the image data, the camera system interface is to receive an image of a user performing a gesture; and wherein to determine the triggering action, the trigger detection unit is to determine the triggering action corresponding to the gesture.
  • Example 3 the subject matter of Example 2 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • Example 4 the subject matter of any one or more of Examples 2-3 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
  • Example 5 the subject matter of any one or more of Examples 3-4 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • Example 6 the subject matter of any one or more of Examples 1-5 optionally include wherein the image data includes a gesture performed by a user, and wherein the trigger detection unit is to receive a voice command issued by the user and determine the triggering action using the gesture and the voice command.
  • Example 7 the subject matter of Example 6 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • Example 8 the subject matter of any one or more of Examples 1-7 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • Example 9 the subject matter of Example 8 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • Example 10 the subject matter of any one or more of Examples 8-9 optionally include wherein the non-visible marking is a symbol, and to initiate the robot operation, an operation database interface is used to access a lookup table and search for the symbol to identify a corresponding robot operation.
  • Example 11 the subject matter of any one or more of Examples 1-10 optionally include wherein the trigger detection unit is to obtain a geolocation associated with the triggering action; and wherein the transceiver is to initiate the robot operation to be performed at the geolocation.
  • Example 12 the subject matter of Example 11 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • Example 13 the subject matter of any one or more of Examples 11-12 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • Example 14 the subject matter of any one or more of Examples 1-13 optionally include wherein to initiate the robot operation, the transceiver is to transmit a command sequence to a robot over a wireless network.
  • Example 15 the subject matter of any one or more of Examples 1-14 optionally include wherein the robot operation comprises a cleaning task.
  • Example 16 is a method of providing a vision-based robot control system, the method comprising: receiving, at a processor-based robot control system, image data from a camera system; determining a triggering action from the image data; and initiating a robot operation associated with the triggering action.
  • Example 17 the subject matter of Example 16 optionally includes wherein receiving the image data comprises receiving an image of a user performing a gesture; and wherein determining the triggering action comprises determining the triggering action corresponding to the gesture.
  • Example 18 the subject matter of Example 17 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • Example 19 the subject matter of any one or more of Examples 17-18 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
  • Example 20 the subject matter of any one or more of Examples 18-19 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • Example 21 the subject matter of any one or more of Examples 16-20 optionally include wherein the image data includes a gesture performed by a user, and wherein the method comprises receiving a voice command issued by the user; and wherein determining the triggering action comprises using the gesture and the voice command to determine the triggering action.
  • Example 22 the subject matter of Example 21 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • Example 23 the subject matter of any one or more of Examples 16-22 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • Example 24 the subject matter of Example 23 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • Example 25 the subject matter of any one or more of Examples 23-24 optionally include wherein the non-visible marking is a symbol, and wherein initiating the robot operation comprises searching a lookup table for the symbol to identify a corresponding robot operation.
  • Example 26 the subject matter of any one or more of Examples 16-25 optionally include obtaining a geolocation associated with the triggering action; and initiating the robot operation to be performed at the geolocation.
  • Example 27 the subject matter of Example 26 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • Example 28 the subject matter of any one or more of Examples 26-27 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • Example 29 the subject matter of any one or more of Examples 16-28 optionally include wherein initiating the robot operation comprises transmitting a command sequence to a robot over a wireless network.
  • Example 30 the subject matter of any one or more of Examples 16-29 optionally include wherein the robot operation comprises a cleaning task.
  • Example 31 is at least one machine-readable medium including instructions, which when executed by a machine, cause the machine to perform operations of any of the methods of Examples 16-30.
  • Example 32 is an apparatus comprising means for performing any of the methods of Examples 16-30.
  • Example 33 is an apparatus for providing a vision-based robot control system, the apparatus comprising: means for receiving, at a processor-based robot control system, image data from a camera system; means for determining a triggering action from the image data; and means for initiating a robot operation associated with the triggering action.
  • Example 34 the subject matter of Example 33 optionally includes wherein the means for receiving the image data comprise means for receiving an image of a user performing a gesture; and wherein the means for determining the triggering action comprise means for determining the triggering action corresponding to the gesture.
  • Example 35 the subject matter of Example 34 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • Example 36 the subject matter of any one or more of Examples 34-35 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
  • Example 37 the subject matter of any one or more of Examples 35-36 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • Example 38 the subject matter of any one or more of Examples 33-37 optionally include wherein the image data includes a gesture performed by a user, and wherein the apparatus comprises means for receiving a voice command issued by the user; and wherein the means for determining the triggering action comprise means for using the gesture and the voice command to determine the triggering action.
  • Example 39 the subject matter of Example 38 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • Example 40 the subject matter of any one or more of Examples 33-39 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • Example 41 the subject matter of Example 40 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • Example 42 the subject matter of any one or more of Examples 40-41 optionally include wherein the non-visible marking is a symbol, and wherein the means for initiating the robot operation comprise means for searching a lookup table for the symbol to identify a corresponding robot operation.
  • Example 43 the subject matter of any one or more of Examples 33-42 optionally include means for obtaining a geolocation associated with the triggering action; and means for initiating the robot operation to be performed at the geolocation.
  • Example 44 the subject matter of Example 43 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • Example 45 the subject matter of any one or more of Examples 43-44 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • Example 46 the subject matter of any one or more of Examples 33-45 optionally include wherein the means for initiating the robot operation comprise means for transmitting a command sequence to a robot over a wireless network.
  • Example 47 the subject matter of any one or more of Examples 33-46 optionally include wherein the robot operation comprises a cleaning task.
  • the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.”
  • the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Toys (AREA)

Abstract

Various systems and methods for providing a vision-based robot control system are provided herein. A vision-based robot control system comprises a camera system interface to receive image data from a camera system; a trigger detection unit to determine a triggering action from the image data; and a transceiver to initiate a robot operation associated with the triggering action.

Description

    PRIORITY APPLICATION
  • This application is a continuation of U.S. application Ser. No. 15/185,241, filed Jun. 17, 2016, which is incorporated herein by reference in its entirety.
  • TECHNICAL FIELD
  • Embodiments described herein generally relate to robotic controls and in particular, to a vision-based robot control system.
  • BACKGROUND
  • Robots are mechanical or electro-mechanical machines able to act as agents for human operators. Some robots are automated or semi-automated and able to perform tasks with minimal human input. Robots are used in residential, industrial, and commercial settings. As electronics and manufacturing processes scale, robot use is becoming more widespread.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. Some embodiments are illustrated by way of example, and not limitation, in the figures of the accompanying drawings in which:
  • FIG. 1 is a diagram illustrating a robot operating in an environment, according to an embodiment;
  • FIG. 2 is a flowchart illustrating the data and control flow, according to an embodiment;
  • FIG. 3 is a block diagram illustrating a vision-based robot control system for controlling a robot, according to an embodiment;
  • FIG. 4 is a flowchart illustrating a method for providing a vision-based robot control system, according to an embodiment; and
  • FIG. 5 is a block diagram illustrating an example machine upon which any one or more of the techniques (e.g., methodologies) discussed herein may perform, according to an example embodiment.
  • DETAILED DESCRIPTION
  • In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of some example embodiments. It will be evident, however, to one skilled in the art that the present disclosure may be practiced without these specific details.
  • Disclosed herein are systems and methods that provide a vision-based robot control system. Consumer-level robots are becoming a reality with lower cost manufacturing and more affordable electronics. Robots may include various sensors to detect their environment. Example sensors include proximity sensors, position sensors, impact sensors, and cameras or other image-based sensors. While some robots are able to be controlled using buttons on an exterior housing, with remote control devices, or with auxiliary controls, such as controls at a charging station, what is needed are more intuitive ways to control robots.
  • FIG. 1 is a diagram illustrating a robot 100 operating in an environment 102, according to an embodiment. The robot 100 may be any type of self-propelled, mobile machine. The robot 100 may use various apparatus to move, such as wheels, treads, tracks, or the like. The robot 100 may also implement one or more legs to hop, walk, crawl, or otherwise move (e.g., fly) about an environment. The robot 100 may include an onboard camera system, which may include one or more cameras. The camera system may include a visual light camera (e.g., an RGB camera), an infrared (IR) light camera, or other cameras. The IR camera may be used for night vision, as a depth camera, or thermal imagery. In addition to a camera system, the robot 100 may also be equipped with other sensors, such as a sonar system, radar system, etc. to navigate environments. The robot 100 also includes one or more communication systems, which may include a wireless networking communication system. The wireless networking communication system may use one or more of a variety of protocols or technologies, including Wi-Fi, 3G, and 4G LTE/LTE-A, WiMAX networks, Bluetooth, near field communication (NFC), or the like.
  • Alternatively, the robot 100 may interface with one or more cameras 104A, 104B, 104C, 104D (collectively referred to as 104). The cameras 104 may capture a user's movement, actions, or other aspects of the user 106 as the user 106 moves about in the environment 102. Cameras 104 in the environment 102 may include visible light cameras, infrared cameras, or other types of cameras. The cameras 104 may be connected using wired or wireless connections. In addition, one or more of the cameras 104 may use one or more servos for pan and tilt to follow a subject while it is within the operating field of view of the camera 104. The camera 104 may track a subject using shape recognition or with a physical marker that the subject holds or wears and the camera 104 actively tracks. The physical marker may be wirelessly connected to the camera 104 using a technology such as Bluetooth. Although only four cameras 104 are illustrated in FIG. 1, it is understood that more or fewer cameras may be implemented based on the size of the environment, obstructions in the environment, or other considerations. Combinations of onboard cameras and environmental cameras may be used.
  • The user 106 may interface with the robot 100 using various intuitive techniques. In an aspect, the user 106 gestures in a particular manner, which may then be captured by the cameras 104 or the robot 100, or by other environmental sensors. The gesture may provide instruction to the robot 100 based on a preconfigured association. For instance, the user 106 may mark a spot on the floor by tapping his foot in a prescribed manner. The robot 100 may be a cleaning robot, such as a semi-automated vacuum. The tapping may be detected by the user's motion using cameras 104, by sound processing (e.g., using a microphone array), by vibration (e.g., by using in-floor sensors to detect vibration location), or by other mechanisms or combinations of mechanisms. After receiving an indication of the gesture performed by the user 106, the robot 100 may concentrate in the area indicated by the gesture, such as by performing extra passes or by temporarily slowing down to clean the area more thoroughly.
  • It is understood that while some embodiments discussed in this disclosure include camera and image processing, other modalities of gesture detection may be used instead of, or in combination with, camera and image processing techniques. Thus, gesture detection is used to identify a particular location to perform a robotic action, and as such, in some cases, the gesture is used to intuitively identify the location (e.g., pointing to a spot on the floor, motioning to an area, or tapping the floor with a foot, etc.).
  • In another aspect, the user 106 may provide instructions using a marker with an infrared-detectable ink. The ink is not visible to human eyes, so it does not discolor or mar materials, such as carpet or furniture upholstery. The robot 100 may be configured with an IR camera, or the environmental camera 104 may be an IR camera, which is able to see the markings left by the user 106. Different markings may be used, such as a box to indicate an extra clean, a circle to instruct the robot 100 to use a different cleaner, or an “X” to avoid an area. Other markings may be used. In another aspect, the robot 100 may clean the IR-detectable ink when cleaning the marked area. As such, the ink marking may act as a one-time instruction. Alternatively, the IR-ink may be left behind purposefully, so that the mark is able to be observed on subsequent cleanings.
  • In addition to visual cues, the user 106 may also provide instruction with other modalities, such as verbally or with geolocation. In an example, the user 106 may gesture and then speak a verbal command, which is received at the robot 100 and then acted upon by the robot 100. For instance, the user 106 may point to a spot on the floor with a laser pointer and speak “move the shelf over here.” The robot 100 may directly receive such commands with an onboard camera system and a microphone. Alternatively, environmental sensors, such as cameras 104 and microphones, may detect the user's actions and communicate them as commands to the robot 100. Upon receiving an actionable instruction, the robot 100 may move to the shelf, lift it, and then move it to the location indicated by the user's gesture.
  • In another aspect, the user 106 may use a geolocation as an additional input. For instance, the user 106 may use a mobile device 108 (e.g., smartphone, tablet, wearable device, etc.) and perform a gesture while holding or wearing the mobile device. The user 106 may also initiate verbal commands to the mobile device. The location of the mobile device 108 (e.g., geolocation) may be obtained and transmitted to the robot 100 along with an instruction, as defined by the gesture or verbal instruction provided by the user 106.
  • FIG. 2 is a flowchart illustrating the data and control flow, according to an embodiment. A triggering action performed by a user is observed by a camera system (operation 200). For instance, a user may perform a gesture or leave a mark visible to the robot (e.g., with a token or with an ink visible to the camera system that may or may not be visible to unaided humans). The camera system interprets the action (operation 202). The interpretation may include gesture recognition, pattern recognition, shape recognition, or other forms of analysis to determine the type of triggering action performed by the user. If the triggering action is recognized, then additional processing may occur to determine whether the user provided any other triggering commands (operation 204), such as a verbal command. If a verbal command is recognized, for example, then the verbal command may be parsed and used in the subsequent operations. Additionally, geolocations, locations of pointing gestures, or other commands, may be analyzed at operation 204.
  • A lookup table is referenced (operation 206) to determine which operation the robot is to perform based on the trigger input. The types of trigger input and resulting robot operations may differ according to the design and function of the robot. For example, a cleaning robot may be programmed to respond to certain triggering actions, where a security robot may be programmed to respond to other triggering actions. In the case where several robots operate in the same environment, the user may map the triggering actions to only trigger one of several robots. Alternatively, the user may configure the commands such that a single command may cause multiple robots to perform certain functions.
  • If the triggering actions map to an action found in the lookup table, then the robot is scheduled to perform the resulting operation (operation 208). The robot may perform the operation immediately or may be configured to perform the operation at the next duty cycle (e.g., the next cleaning cycle).
  • If the triggering actions are not found in the lookup table, then the control flow returns to operation 200, where the system monitors for additional user triggering actions.
  • FIG. 3 is a block diagram illustrating a vision-based robot control system 300 for controlling a robot, according to an embodiment. The system 300 may be installed in a robot. Alternatively, the system 300 may be separate from a robot, but communicatively coupled to the robot in order to provide control signals. The system 300 includes a camera system interface 302, a trigger detection unit 304, an operation database interface 306, and a transceiver 308.
  • The camera system interface 302, trigger detection unit 304, operation database interface 306, and transceiver 308 are understood to encompass tangible entities that are physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operations described herein. Such tangible entities may be constructed using one or more circuits, such as with dedicated hardware (e.g., field programmable gate arrays (FPGAs), logic gates, graphics processing unit (GPU), a digital signal processor (DSP), etc.). As such, the tangible entities described herein may be referred to as circuits, circuitry, processor units, subsystems, or the like.
  • The camera system interface 302 may receive camera signal information from a camera array, which may be installed on the system 300 or be remote, but communicatively coupled to the system 300. For instance, the camera system interface 302 may receive raw video signal information or processed signal information (e.g., compressed video signals). In another aspect, the camera system interface 302 may be used to receive tagged information from a camera array, which analyzed and preprocessed the raw video signal at, or near, the camera array.
  • The trigger detection unit 304 is capable of analyzing image or video data received by the camera system interface 302 and of detecting and identifying a gesture exhibited by movements performed by a person in the image or video data. The trigger detection unit 304 determines whether the movements constitute a recognized gesture. If the movements do constitute a recognized gesture, the trigger detection unit 304 may trigger operations performed by a robot.
  • To detect the gesture, the trigger detection unit 304 may access image data of an arm, finger, foot, or hand motion of a user captured by a camera system, and identify the gesture based on the image data. The image data may be a number of successive images (e.g., video) over which the gesture is performed.
  • In another aspect, the trigger detection unit 304 accesses depth image data of an arm, finger, foot or hand motion of the user and identifies the gesture based on the depth image data. The depth image data may be a number of successive images (e.g., video) over which the gesture is performed.
• In another aspect, the trigger detection unit 304 operates independently of the camera system interface 302 and receives information or data that indicates a gesture via a different pathway. For example, in an aspect, to detect the gesture, the trigger detection unit 304 is to access motion data from an auxiliary device, the motion data describing an arm, finger, foot, or hand motion of the user, and identify the gesture based on the motion data. The auxiliary device may be a mobile device, such as a smartphone; a wearable device, such as a smartwatch or a glove; or another type of device moveable by the user in free space. Examples include, but are not limited to, smartphones, smartwatches, e-textiles (e.g., shirts, gloves, pants, shoes, etc.), smart bracelets, smart rings, or the like.
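• As a rough illustration of how the trigger detection unit 304 might accept the different input sources discussed above (image sequences, depth image sequences, or auxiliary-device motion data), consider the following Python sketch. The frame-differencing heuristic, the thresholds, and the class name are assumptions; a deployed system would use a trained gesture recognizer rather than this placeholder logic.

    import numpy as np

    class TriggerDetectionSketch:
        """Illustrative multi-source gesture detector; heuristics are assumptions."""

        def __init__(self, motion_threshold=25.0):
            self.motion_threshold = motion_threshold

        def from_image_sequence(self, frames):
            # frames: successive grayscale images (H x W numpy arrays) over which the
            # gesture is performed. Here we only flag significant inter-frame motion;
            # a real system would classify the motion into a specific gesture.
            diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
                     for a, b in zip(frames, frames[1:])]
            return "candidate_gesture" if max(diffs, default=0.0) > self.motion_threshold else None

        def from_depth_sequence(self, depth_frames):
            # Depth frames add three-dimensional information; the same motion check
            # is reused here purely for brevity.
            return self.from_image_sequence(depth_frames)

        def from_auxiliary_motion(self, accel_magnitudes, g_threshold=1.5):
            # accel_magnitudes: accelerometer readings from a smartphone, smartwatch,
            # glove, or other wearable moved by the user in free space.
            return "candidate_gesture" if max(accel_magnitudes, default=0.0) > g_threshold else None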
• The operation database interface 306 is used to determine whether the gesture is detected and recognized as being a triggering gesture. If it is a recognized triggering gesture, then commands are transmitted to a robot using the transceiver 308. The transceiver 308 may be configured to transmit over various wireless networks, such as a Wi-Fi network (e.g., according to the IEEE 802.11 family of standards) or a cellular network, such as a network designed according to the Long-Term Evolution (LTE), LTE-Advanced, 5G, or Global System for Mobile Communications (GSM) families of standards, or the like. When the system 300 is incorporated into a robot, the commands may be communicated directly to a robot controller by way of a wired connection.
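• One hypothetical shape for the command hand-off from the operation database check to the transceiver 308 is sketched below. The JSON payload, the UDP transport, and the address format are illustrative assumptions; the disclosure contemplates Wi-Fi, cellular, or wired robot-controller connections without prescribing a wire format.

    import json
    import socket

    def send_robot_command(robot_addr, operation, location=None):
        """Sketch of transmitting a command sequence to a robot over a network.

        robot_addr: (host, port) of the robot's command listener (assumed).
        The payload layout is illustrative only.
        """
        payload = json.dumps({"operation": operation, "location": location}).encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, robot_addr)

    # Example (hypothetical address and operation name):
    # send_robot_command(("192.168.1.50", 9000), "spot_clean", {"x": 2.1, "y": 0.4})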
• Thus, the system 300 provides a vision-based robot control system, the system 300 comprising the camera system interface 302, the trigger detection unit 304, and the transceiver 308. The camera system interface 302 may be configured to receive image data from a camera system.
  • The trigger detection unit 304 may be configured to determine a triggering action from the image data.
  • The transceiver 308 may be configured to initiate a robot operation associated with the triggering action. In an embodiment, to initiate the robot operation, the transceiver 308 is to transmit a command sequence to a robot over a wireless network. In an embodiment, the robot operation comprises a cleaning task.
  • In an embodiment, to receive the image data, the camera system interface 302 is to receive an image of a user performing a gesture. In such an embodiment, to determine the triggering action, the trigger detection unit 304 is to determine the triggering action corresponding to the gesture. In an embodiment, the gesture comprises pointing to a location in an environment. In an embodiment, the gesture comprises tapping a foot on a location in an environment. In an embodiment, the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • In an embodiment, the image data includes a gesture performed by a user, and the trigger detection unit 304 is to receive a voice command issued by the user and determine the triggering action using the gesture and the voice command. In a further embodiment, the gesture is used to specify a location in the environment, and the voice command is used to specify an action to be taken by the robot.
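• A minimal sketch of that gesture-plus-voice fusion follows, assuming the gesture yields a location and the transcribed voice command yields the action. The keyword table, argument formats, and return value are illustrative only.

    def fuse_gesture_and_voice(gesture_location, voice_text, action_keywords=None):
        """Combine a gesture-derived location with a spoken action (sketch).

        gesture_location: (x, y) position the user pointed at or tapped (assumed format).
        voice_text: transcribed voice command, e.g. "please clean here".
        """
        if action_keywords is None:
            action_keywords = {"clean": "spot_clean", "mop": "mop", "patrol": "patrol_area"}
        action = next((op for word, op in action_keywords.items()
                       if word in voice_text.lower()), None)
        if action is None or gesture_location is None:
            return None  # incomplete trigger; keep monitoring
        return {"operation": action, "location": gesture_location}

    # Example: fuse_gesture_and_voice((2.1, 0.4), "Please clean here")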
  • In an embodiment, the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action. In a further embodiment, the non-visible marking comprises an infrared-visible ink marking. In another embodiment, the non-visible marking is a symbol, and to initiate the robot operation, the operation database interface 306 is used to access a lookup table and search for the symbol to identify a corresponding robot operation.
  • In an embodiment, the trigger detection unit 304 is to obtain a geolocation associated with the triggering action and the transceiver 308 is to initiate the robot operation to be performed at the geolocation. In a further embodiment, the geolocation is obtained from a device operated by a user performing the triggering action. In another embodiment, the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • FIG. 4 is a flowchart illustrating a method 400 for providing a vision-based robot control system, according to an embodiment. At block 402, image data from a camera system is received at a processor-based robot control system.
  • At block 404, a triggering action is determined from the image data.
  • At block 406, a robot operation associated with the triggering action is initiated. In an embodiment, initiating the robot operation comprises transmitting a command sequence to a robot over a wireless network. Various networks may be used, such as Bluetooth, Wi-Fi, or the like.
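• Blocks 402 through 406 can be summarized as a single control-loop step, as in the sketch below. The callables and table passed in are stand-ins supplied by the caller; none of these names originate in the disclosure itself.

    def run_control_loop_once(frames_to_trigger, trigger_to_operation, send_command, camera_frames):
        """Sketch of method 400: receive image data (402), determine a triggering
        action (404), and initiate the associated robot operation (406)."""
        trigger = frames_to_trigger(camera_frames)      # blocks 402-404
        if trigger is None:
            return False                                # no recognized triggering action
        operation = trigger_to_operation.get(trigger)
        if operation is None:
            return False                                # trigger not mapped to an operation
        send_command(operation)                         # block 406
        return True

    # Example wiring with stand-in callables:
    # run_control_loop_once(lambda frames: "foot_tap", {"foot_tap": "spot_clean"}, print, camera_frames=[])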
  • In an embodiment, the robot operation comprises a cleaning task. Other robotic tasks are understood to be within the scope of this disclosure.
  • In an embodiment, receiving the image data comprises receiving an image of a user performing a gesture. In such an embodiment, determining the triggering action comprises determining the triggering action corresponding to the gesture. The gesture may be any type of gesture, as discussed above, and may include actions such as pointing, tapping one's foot, or other similar gestures. Thus, in an embodiment, the gesture comprises pointing to a location in an environment. In another embodiment, the gesture comprises tapping a foot on a location in an environment. In such embodiments, the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • In an embodiment, the image data includes a gesture performed by a user. The user may contemporaneously issue a voice command to accompany the gesture and provide further parameters on the gesture command. Thus, in such an embodiment, the method 400 includes receiving a voice command issued by the user. Determining the triggering action then comprises using the gesture and the voice command to determine the triggering action.
  • In an embodiment, the gesture is used to specify a location in the environment, and the voice command is used to specify an action to be taken by the robot.
• In an embodiment, the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action. The non-visible marking may be infrared ink, as described above. Thus, in an embodiment, the non-visible marking comprises an infrared-visible ink marking. Various words, symbols, or other indicia may be made with such ink, and the robot control system may decipher the indicia and determine the meaning of the command. Thus, in an embodiment, the non-visible marking is a symbol, and initiating the robot operation comprises searching a lookup table for the symbol to identify a corresponding robot operation. The lookup table may be administered by the user or by an administrative party, such as the manufacturer or provider of the robot and related services.
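• A rough sketch of reading such a marking from an infrared-capable camera frame and resolving it through a lookup table follows. The intensity threshold, the symbol table, and the decode_symbol placeholder are assumptions; actual symbol recognition would be considerably more involved.

    import numpy as np

    SYMBOL_OPERATIONS = {"X": "extra_clean_here", "O": "skip_area"}  # assumed table

    def operation_from_ir_frame(ir_frame, decode_symbol, intensity_threshold=200):
        """Sketch: isolate bright IR-ink pixels, decode a symbol, look up the operation.

        ir_frame: 2-D numpy array from an IR-capable camera (assumed input format).
        decode_symbol: placeholder for a real symbol/character recognizer.
        """
        mask = ir_frame >= intensity_threshold      # IR-visible ink appears bright in the IR frame
        if not mask.any():
            return None                             # no marking present
        symbol = decode_symbol(mask)                # e.g., template matching or OCR
        return SYMBOL_OPERATIONS.get(symbol)        # unknown symbols are ignored

    # Example with a stub decoder:
    # operation_from_ir_frame(np.full((4, 4), 255), decode_symbol=lambda m: "X")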
  • In an embodiment, the method 400 includes obtaining a geolocation associated with the triggering action and initiating the robot operation to be performed at the geolocation. The geolocation may be obtained in various ways, such as with a mobile device that is able to determine a global positioning system (GPS) location (or similar), or by a camera system able to track and determine the user's location when the user performs a gesture or leaves a marking. Thus, in an embodiment, the geolocation is obtained from a device operated by a user performing the triggering action. In another embodiment, the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
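• The choice between the two geolocation sources discussed above might be reconciled as in the sketch below; preferring the device-reported fix over the camera-derived estimate is an assumption, not a requirement of the disclosure.

    def resolve_trigger_location(device_gps=None, gesture_location=None):
        """Pick a geolocation for the robot operation (sketch).

        device_gps: (lat, lon) reported by the user's mobile device, if available.
        gesture_location: position estimated by the camera system from the tracked
        gesture or marking, if available.
        """
        if device_gps is not None:
            return {"source": "device", "location": device_gps}
        if gesture_location is not None:
            return {"source": "camera", "location": gesture_location}
        return None  # no location available; the operation may run robot-wide

    # Example: resolve_trigger_location(gesture_location=(3.0, 1.2))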
  • Embodiments may be implemented in one or a combination of hardware, firmware, and software. Embodiments may also be implemented as instructions stored on a machine-readable storage device, which may be read and executed by at least one processor to perform the operations described herein. A machine-readable storage device may include any non-transitory mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable storage device may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and other storage devices and media.
• A processor subsystem may be used to execute the instructions on the machine-readable medium. The processor subsystem may include one or more processors, each with one or more cores. Additionally, the processor subsystem may be disposed on one or more physical devices. The processor subsystem may include one or more specialized processors, such as a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA), or a fixed function processor.
• Examples, as described herein, may include, or may operate on, logic or a number of components, modules, or mechanisms. Modules may be hardware, software, or firmware communicatively coupled to one or more processors in order to carry out the operations described herein. Modules may be hardware modules, and as such modules may be considered tangible entities capable of performing specified operations and may be configured or arranged in a certain manner. In an example, circuits may be arranged (e.g., internally or with respect to external entities such as other circuits) in a specified manner as a module. In an example, the whole or part of one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware processors may be configured by firmware or software (e.g., instructions, an application portion, or an application) as a module that operates to perform specified operations. In an example, the software may reside on a machine-readable medium. In an example, the software, when executed by the underlying hardware of the module, causes the hardware to perform the specified operations. Accordingly, the term hardware module is understood to encompass a tangible entity, be that an entity that is physically constructed, specifically configured (e.g., hardwired), or temporarily (e.g., transitorily) configured (e.g., programmed) to operate in a specified manner or to perform part or all of any operation described herein. Considering examples in which modules are temporarily configured, each of the modules need not be instantiated at any one moment in time. For example, where the modules comprise a general-purpose hardware processor configured using software, the general-purpose hardware processor may be configured as respective different modules at different times. Software may accordingly configure a hardware processor, for example, to constitute a particular module at one instance of time and to constitute a different module at a different instance of time. Modules may also be software or firmware modules, which operate to perform the methodologies described herein.
  • Circuitry or circuits, as used in this document, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The circuits, circuitry, or modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.
• FIG. 5 is a block diagram illustrating a machine in the example form of a computer system 500, within which a set or sequence of instructions may be executed to cause the machine to perform any one of the methodologies discussed herein, according to an example embodiment. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of either a server or a client machine in server-client network environments, or it may act as a peer machine in peer-to-peer (or distributed) network environments. The machine may be a wearable device, a personal computer (PC), a tablet PC, a hybrid tablet, a personal digital assistant (PDA), a mobile telephone, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. Similarly, the term “processor-based system” shall be taken to include any set of one or more machines that are controlled by or operated by a processor (e.g., a computer) to individually or jointly execute instructions to perform any one or more of the methodologies discussed herein.
  • Example computer system 500 includes at least one processor 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU) or both, processor cores, compute nodes, etc.), a main memory 504 and a static memory 506, which communicate with each other via a link 508 (e.g., bus). The computer system 500 may further include a video display unit 510, an alphanumeric input device 512 (e.g., a keyboard), and a user interface (UI) navigation device 514 (e.g., a mouse). In one embodiment, the video display unit 510, input device 512 and UI navigation device 514 are incorporated into a touch screen display. The computer system 500 may additionally include a storage device 516 (e.g., a drive unit), a signal generation device 518 (e.g., a speaker), a network interface device 520, and one or more sensors (not shown), such as a global positioning system (GPS) sensor, compass, accelerometer, gyrometer, magnetometer, or other sensor.
  • The storage device 516 includes a machine-readable medium 522 on which is stored one or more sets of data structures and instructions 524 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504, static memory 506, and/or within the processor 502 during execution thereof by the computer system 500, with the main memory 504, static memory 506, and the processor 502 also constituting machine-readable media.
  • While the machine-readable medium 522 is illustrated in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 524. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media include non-volatile memory, including but not limited to, by way of example, semiconductor memory devices (e.g., electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • The instructions 524 may further be transmitted or received over a communications network 526 using a transmission medium via the network interface device 520 utilizing any one of a number of well-known transfer protocols (e.g., HTTP). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone (POTS) networks, and wireless data networks (e.g., Bluetooth, Wi-Fi, 3G, and 4G LTE/LTE-A or WiMAX networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.
  • Additional Notes & Examples
  • Example 1 is a vision-based robot control system, the system comprising: a camera system interface to receive image data from a camera system; a trigger detection unit to determine a triggering action from the image data; and a transceiver to initiate a robot operation associated with the triggering action.
  • In Example 2, the subject matter of Example 1 optionally includes wherein to receive the image data, the camera system interface is to receive an image of a user performing a gesture; and wherein to determine the triggering action, the trigger detection unit is to determine the triggering action corresponding to the gesture.
  • In Example 3, the subject matter of Example 2 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • In Example 4, the subject matter of any one or more of Examples 2-3 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
• In Example 5, the subject matter of any one or more of Examples 3-4 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • In Example 6, the subject matter of any one or more of Examples 1-5 optionally include wherein the image data includes a gesture performed by a user, and wherein the trigger detection unit is to receive a voice command issued by the user and determine the triggering action using the gesture and the voice command.
  • In Example 7, the subject matter of Example 6 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • In Example 8, the subject matter of any one or more of Examples 1-7 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • In Example 9, the subject matter of Example 8 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • In Example 10, the subject matter of any one or more of Examples 8-9 optionally include wherein the non-visible marking is a symbol, and to initiate the robot operation, an operation database interface is used to access a lookup table and search for the symbol to identify a corresponding robot operation.
  • In Example 11, the subject matter of any one or more of Examples 1-10 optionally include wherein the trigger detection unit is to obtain a geolocation associated with the triggering action; and wherein the transceiver is to initiate the robot operation to be performed at the geolocation.
  • In Example 12, the subject matter of Example 11 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • In Example 13, the subject matter of any one or more of Examples 11-12 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • In Example 14, the subject matter of any one or more of Examples 1-13 optionally include wherein to initiate the robot operation, the transceiver is to transmit a command sequence to a robot over a wireless network.
  • In Example 15, the subject matter of any one or more of Examples 1-14 optionally include wherein the robot operation comprises a cleaning task.
  • Example 16 is a method of providing a vision-based robot control system, the method comprising: receiving, at a processor-based robot control system, image data from a camera system; determining a triggering action from the image data; and initiating a robot operation associated with the triggering action.
  • In Example 17, the subject matter of Example 16 optionally includes wherein receiving the image data comprises receiving an image of a user performing a gesture; and wherein determining the triggering action comprises determining the triggering action corresponding to the gesture.
  • In Example 18, the subject matter of Example 17 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • In Example 19, the subject matter of any one or more of Examples 17-18 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
• In Example 20, the subject matter of any one or more of Examples 18-19 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • In Example 21, the subject matter of any one or more of Examples 16-20 optionally include wherein the image data includes a gesture performed by a user, and wherein the method comprises receiving a voice command issued by the user; and wherein determining the triggering action comprises using the gesture and the voice command to determine the triggering action.
  • In Example 22, the subject matter of Example 21 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • In Example 23, the subject matter of any one or more of Examples 16-22 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • In Example 24, the subject matter of Example 23 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • In Example 25, the subject matter of any one or more of Examples 23-24 optionally include wherein the non-visible marking is a symbol, and wherein initiating the robot operation comprises searching a lookup table for the symbol to identify a corresponding robot operation.
  • In Example 26, the subject matter of any one or more of Examples 16-25 optionally include obtaining a geolocation associated with the triggering action; and initiating the robot operation to be performed at the geolocation.
  • In Example 27, the subject matter of Example 26 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • In Example 28, the subject matter of any one or more of Examples 26-27 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • In Example 29, the subject matter of any one or more of Examples 16-28 optionally include wherein initiating the robot operation comprises transmitting a command sequence to a robot over a wireless network.
  • In Example 30, the subject matter of any one or more of Examples 16-29 optionally include wherein the robot operation comprises a cleaning task.
  • Example 31 is at least one machine-readable medium including instructions, which when executed by a machine, cause the machine to perform operations of any of the methods of Examples 16-30.
  • Example 32 is an apparatus comprising means for performing any of the methods of Examples 16-30.
  • Example 33 is an apparatus for providing a vision-based robot control system, the apparatus comprising: means for receiving, at a processor-based robot control system, image data from a camera system; means for determining a triggering action from the image data; and means for initiating a robot operation associated with the triggering action.
  • In Example 34, the subject matter of Example 33 optionally includes wherein the means for receiving the image data comprise means for receiving an image of a user performing a gesture; and wherein the means for determining the triggering action comprise means for determining the triggering action corresponding to the gesture.
  • In Example 35, the subject matter of Example 34 optionally includes wherein the gesture comprises pointing to a location in an environment.
  • In Example 36, the subject matter of any one or more of Examples 34-35 optionally include wherein the gesture comprises tapping a foot on a location in an environment.
• In Example 37, the subject matter of any one or more of Examples 35-36 optionally include wherein the triggering action is the gesture and the robot operation comprises performing extra cleaning at the location in the environment.
  • In Example 38, the subject matter of any one or more of Examples 33-37 optionally include wherein the image data includes a gesture performed by a user, and wherein the apparatus comprises means for receiving a voice command issued by the user; and wherein the means for determining the triggering action comprise means for using the gesture and the voice command to determine the triggering action.
  • In Example 39, the subject matter of Example 38 optionally includes wherein the gesture is used to specify a location in the environment, and wherein the voice command is used to specify an action to be taken by the robot.
  • In Example 40, the subject matter of any one or more of Examples 33-39 optionally include wherein the image data includes a non-visible marking made by a user, the non-visible marking acting as the triggering action.
  • In Example 41, the subject matter of Example 40 optionally includes wherein the non-visible marking comprises an infrared-visible ink marking.
  • In Example 42, the subject matter of any one or more of Examples 40-41 optionally include wherein the non-visible marking is a symbol, and wherein the means for initiating the robot operation comprise means for searching a lookup table for the symbol to identify a corresponding robot operation.
  • In Example 43, the subject matter of any one or more of Examples 33-42 optionally include means for obtaining a geolocation associated with the triggering action; and means for initiating the robot operation to be performed at the geolocation.
  • In Example 44, the subject matter of Example 43 optionally includes wherein the geolocation is obtained from a device operated by a user performing the triggering action.
  • In Example 45, the subject matter of any one or more of Examples 43-44 optionally include wherein the geolocation is obtained from a gesture performed by the user, the gesture captured in the image data.
  • In Example 46, the subject matter of any one or more of Examples 33-45 optionally include wherein the means for initiating the robot operation comprise means for transmitting a command sequence to a robot over a wireless network.
  • In Example 47, the subject matter of any one or more of Examples 33-46 optionally include wherein the robot operation comprises a cleaning task.
  • The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments that may be practiced. These embodiments are also referred to herein as “examples.” Such examples may include elements in addition to those shown or described. However, also contemplated are examples that include the elements shown or described. Moreover, also contemplated are examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.
• Publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) is supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
• In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to suggest a numerical order for their objects.
  • The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with others. Other embodiments may be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. However, the claims may not set forth every feature disclosed herein as embodiments may feature a subset of said features. Further, embodiments may include fewer features than those disclosed in a particular example. Thus, the following claims are hereby incorporated into the Detailed Description, with a claim standing on its own as a separate embodiment. The scope of the embodiments disclosed herein is to be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.

Claims (27)

1. (canceled)
2. A control system for operation of a mobile machine, comprising:
a sensor interface to receive image data from a camera system, wherein the image data captures an action of a user that includes a gesture;
detection circuitry to recognize the gesture from the image data and identify an operation for the mobile machine based on the recognized gesture; and
a command interface to initiate the identified operation with the mobile machine.
3. The system of claim 2, wherein the gesture includes one or more motions provided from at least one of: arm, hand, finger, or foot movements of the user.
4. The system of claim 3, wherein the image data includes depth image data, wherein the camera system includes an infrared light camera to capture the depth image data of the one or more motions, and wherein the gesture is recognized based on three-dimensional information from the depth image data.
5. The system of claim 2, wherein the action of the user further includes a voice command, and wherein the identified operation is determined from information obtained from the recognized gesture in combination with information obtained from the voice command.
6. The system of claim 5, wherein the sensor interface is further to receive voice data, the voice data including the voice command of the user,
wherein the detection circuitry is further to recognize the voice command from the voice data, and
wherein the operation for the mobile machine is further identified based on the recognized voice command.
7. The system of claim 2, wherein the identified operation causes movement of the mobile machine.
8. The system of claim 2, wherein the identified operation causes performance of one or more actions by the mobile machine.
9. The system of claim 2, wherein the camera system and the control system are integrated within the mobile machine.
10. The system of claim 2, wherein the identified operation of the mobile machine relates to a location indicated by the action of the user.
11. A machine, comprising:
a camera system to capture image data; and
a control system to control operation of the machine, the control system comprising:
a sensor interface to receive the image data from the camera system, wherein the image data captures an action of a user that includes a gesture;
detection circuitry to recognize the gesture from the image data and identify an operation for the machine based on the recognized gesture; and
a command interface to initiate the identified operation with the machine.
12. The machine of claim 11, wherein the gesture includes one or more motions provided from at least one of: arm, hand, finger, or foot movements of the user.
13. The machine of claim 12, wherein the image data includes depth image data, wherein the camera system includes an infrared light camera to capture the depth image data of the one or more motions, and wherein the gesture is recognized based on three-dimensional information from the depth image data.
14. The machine of claim 11, wherein the action of the user further includes a voice command, and wherein the identified operation is determined from information obtained from the recognized gesture in combination with information obtained from the voice command.
15. The machine of claim 14, wherein the sensor interface is further to receive voice data, the voice data including the voice command of the user, wherein the detection circuitry is further to recognize the voice command from the voice data, and wherein the operation for the machine is further identified based on the recognized voice command.
16. The machine of claim 11, wherein the identified operation causes movement of the machine.
17. The machine of claim 11, wherein the identified operation causes performance of one or more actions by the machine.
18. The machine of claim 11, wherein the identified operation of the machine relates to a location indicated by the action of the user.
19. At least one machine-readable storage medium comprising instructions stored thereupon, which when executed by circuitry of a control system for a mobile machine, cause the circuitry to perform operations comprising:
obtaining image data captured by a camera system, wherein the image data captures an action of a user that includes a gesture;
recognizing the gesture from the image data;
identifying an operation for the mobile machine based on the recognized gesture; and
initiating the identified operation with the mobile machine.
20. The machine-readable storage medium of claim 19, wherein the gesture includes one or more motions provided from at least one of: arm, hand, finger, or foot movements of the user.
21. The machine-readable storage medium of claim 20, wherein the image data includes depth image data, wherein the camera system includes an infrared light camera to capture the depth image data of the one or more motions, and wherein the gesture is recognized based on three-dimensional information from the depth image data.
22. The machine-readable storage medium of claim 19, wherein the action of the user further includes a voice command, and wherein the identified operation is determined from information obtained from the recognized gesture in combination with information obtained from the voice command.
23. The machine-readable storage medium of claim 22, the operations further comprising:
obtaining voice data including the voice command of the user;
recognizing the voice command from the voice data; and
wherein the operation for the mobile machine is further identified based on the recognized voice command.
24. The machine-readable storage medium of claim 19, wherein the identified operation causes movement of the mobile machine.
25. The machine-readable storage medium of claim 19, wherein the identified operation causes performance of one or more actions by the mobile machine.
26. The machine-readable storage medium of claim 19, wherein the camera system and the control system are integrated within the mobile machine.
27. The machine-readable storage medium of claim 19, wherein the identified operation of the mobile machine relates to a location indicated by the action of the user.
US16/538,950 2016-06-17 2019-08-13 Vision-based robot control system Abandoned US20200171667A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/538,950 US20200171667A1 (en) 2016-06-17 2019-08-13 Vision-based robot control system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/185,241 US10377042B2 (en) 2016-06-17 2016-06-17 Vision-based robot control system
US16/538,950 US20200171667A1 (en) 2016-06-17 2019-08-13 Vision-based robot control system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/185,241 Continuation US10377042B2 (en) 2016-06-17 2016-06-17 Vision-based robot control system

Publications (1)

Publication Number Publication Date
US20200171667A1 true US20200171667A1 (en) 2020-06-04

Family

ID=60660709

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/185,241 Active 2037-06-08 US10377042B2 (en) 2016-06-17 2016-06-17 Vision-based robot control system
US16/538,950 Abandoned US20200171667A1 (en) 2016-06-17 2019-08-13 Vision-based robot control system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/185,241 Active 2037-06-08 US10377042B2 (en) 2016-06-17 2016-06-17 Vision-based robot control system

Country Status (4)

Country Link
US (2) US10377042B2 (en)
CN (2) CN109153122A (en)
DE (2) DE112017008358B3 (en)
WO (1) WO2017218084A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220283769A1 (en) * 2021-03-02 2022-09-08 Iscoollab Co., Ltd. Device and method for robotic process automation of multiple electronic computing devices

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10377042B2 (en) 2016-06-17 2019-08-13 Intel Corporation Vision-based robot control system
US20180345300A1 (en) * 2017-05-31 2018-12-06 Nike, Inc. Air Masking Nozzle
US11413755B2 (en) * 2017-12-31 2022-08-16 Sarcos Corp. Covert identification tags viewable by robots and robotic devices
US20190314995A1 (en) * 2018-04-12 2019-10-17 Aeolus Robotics Corporation Limited Robot and method for controlling the same
WO2020184733A1 (en) * 2019-03-08 2020-09-17 엘지전자 주식회사 Robot
CN110262507B (en) * 2019-07-04 2022-07-29 杭州蓝芯科技有限公司 Camera array robot positioning method and device based on 5G communication
CN112405530B (en) * 2020-11-06 2022-01-11 齐鲁工业大学 Robot vision tracking control system and control method based on wearable vision
DE102022109047A1 (en) 2022-04-13 2023-10-19 B-Horizon GmbH Method for automatically adjusting the behavior of a mobile user terminal, in particular for measuring pressure, gas and/or humidity based on ambient humidity

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6681031B2 (en) 1998-08-10 2004-01-20 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
CN1925988A (en) * 2004-03-27 2007-03-07 微型机器人株式会社 Navigation system for position self control robot and floor materials for providing absolute coordinates used thereof
JP5631535B2 (en) * 2005-02-08 2014-11-26 オブロング・インダストリーズ・インコーポレーテッド System and method for a gesture-based control system
US10511950B2 (en) * 2006-05-16 2019-12-17 RedSky Technologies, Inc. Method and system for an emergency location information service (E-LIS) for Internet of Things (IoT) devices
US7606411B2 (en) 2006-10-05 2009-10-20 The United States Of America As Represented By The Secretary Of The Navy Robotic gesture recognition system
CA2591808A1 (en) * 2007-07-11 2009-01-11 Hsien-Hsiang Chiu Intelligent object tracking and gestures sensing input device
US8294956B2 (en) * 2008-05-28 2012-10-23 Xerox Corporation Finishing control system using inline sensing with clear toner
US20110196563A1 (en) * 2010-02-09 2011-08-11 Carefusion 303, Inc. Autonomous navigation and ink recognition system
US9195345B2 (en) * 2010-10-28 2015-11-24 Microsoft Technology Licensing, Llc Position aware gestures with visual feedback as input method
KR101842459B1 (en) 2011-04-12 2018-05-14 엘지전자 주식회사 Robot cleaner and method for controlling the same
EP2537645B1 (en) * 2011-06-20 2017-10-25 Kabushiki Kaisha Yaskawa Denki Robot System
JP5892360B2 (en) * 2011-08-02 2016-03-23 ソニー株式会社 Robot instruction apparatus, robot instruction method, program, and communication system
CN202512439U (en) * 2012-02-28 2012-10-31 陶重犇 Human-robot cooperation system with webcam and wearable sensor
KR102124509B1 (en) * 2013-06-13 2020-06-19 삼성전자주식회사 Cleaning robot and method for controlling the same
KR102094347B1 (en) 2013-07-29 2020-03-30 삼성전자주식회사 Auto-cleaning system, cleaning robot and controlling method thereof
DE102013108114B4 (en) 2013-07-30 2015-02-12 gomtec GmbH Input device for gesture control with protective device
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
CN105303952A (en) * 2014-07-02 2016-02-03 北京泺喜文化传媒有限公司 Teaching device
US9720519B2 (en) * 2014-07-30 2017-08-01 Pramod Kumar Verma Flying user interface
KR20160065574A (en) 2014-12-01 2016-06-09 엘지전자 주식회사 Robot cleaner and method for controlling the same
CN105353873B (en) * 2015-11-02 2019-03-15 深圳奥比中光科技有限公司 Gesture control method and system based on Three-dimensional Display
CN105364933B (en) * 2015-12-17 2017-10-27 成都英博格科技有限公司 Intelligent robot
US10377042B2 (en) 2016-06-17 2019-08-13 Intel Corporation Vision-based robot control system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220283769A1 (en) * 2021-03-02 2022-09-08 Iscoollab Co., Ltd. Device and method for robotic process automation of multiple electronic computing devices
US11748053B2 (en) * 2021-03-02 2023-09-05 Iscoollab Co., Ltd. Device and method for robotic process automation of multiple electronic computing devices

Also Published As

Publication number Publication date
US20170361466A1 (en) 2017-12-21
CN109153122A (en) 2019-01-04
US10377042B2 (en) 2019-08-13
DE112017008358B3 (en) 2022-08-11
DE112017003030T5 (en) 2019-03-21
CN111452050A (en) 2020-07-28
WO2017218084A1 (en) 2017-12-21

Similar Documents

Publication Publication Date Title
US20200171667A1 (en) Vision-based robot control system
US11185982B2 (en) Serving system using robot and operation method thereof
US11826897B2 (en) Robot to human feedback
US9751212B1 (en) Adapting object handover from robot to human using perceptual affordances
US9895802B1 (en) Projection of interactive map data
JP6705465B2 (en) Observability grid-based autonomous environment search
JP5852706B2 (en) Mobile robot system
US8965104B1 (en) Machine vision calibration with cloud computing systems
KR102018763B1 (en) Interfacing with a mobile telepresence robot
CN109478267B (en) Observation-based event tracking
CN109918975A (en) A kind of processing method of augmented reality, the method for Object identifying and terminal
CN107533767A (en) The configuration and control of the enhancing of robot
US20230057965A1 (en) Robot and control method therefor
US20180005445A1 (en) Augmenting a Moveable Entity with a Hologram
CN111527461B (en) Information processing device, information processing method, and program
CN110785775B (en) System and method for optical tracking
KR101623642B1 (en) Control method of robot cleaner and terminal device and robot cleaner control system including the same
US12018947B2 (en) Method for providing navigation service using mobile terminal, and mobile terminal
US10026189B2 (en) System and method for using image data to determine a direction of an actor
US20200009734A1 (en) Robot and operating method thereof
JP2018151950A (en) Information processing apparatus, information processing system and program
JP7074061B2 (en) Information processing equipment, information processing methods and computer programs
US20160379416A1 (en) Apparatus and method for controlling object movement
CN115225682B (en) Management server, remote operation system, remote operation method, and storage medium
KR20240055359A (en) Method and apparatus for calibrating the boarding position of an elevator of an unmanned vehicle using a collision detection area

Legal Events

Code | Title | Description
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION