US20190302714A1 - Systems and methods to operate controllable devices with gestures and/or noises - Google Patents

Systems and methods to operate controllable devices with gestures and/or noises

Info

Publication number
US20190302714A1
Authority
US
United States
Prior art keywords
controllable
gesture
detection
change
predetermined gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/376,206
Inventor
Patryk Laurent
Eugene Izhikevich
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Brain Corp
Original Assignee
Brain Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Brain Corp
Priority to US16/376,206
Publication of US20190302714A1
Legal status: Abandoned (current)

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/038Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2807Exchanging configuration information on appliance services in a home automation network
    • H04L12/2809Exchanging configuration information on appliance services in a home automation network indicating that an appliance service is present in a home automation network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00Data switching networks
    • H04L12/28Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/2803Home automation networks
    • H04L12/2816Controlling appliance services of a home automation network by calling their functionalities
    • H04L12/282Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72415User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories for remote control of appliances
    • H04M1/72533
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/021Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72409User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M1/72412User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/005Discovery of network devices, e.g. terminals

Definitions

  • the present disclosure relates to, inter alia, gesture detection systems and methods of utilizing the same. Specifically, in one aspect, the present disclosure relates to gesture detection systems and methods useful for operation of controllable devices.
  • lamps that have conductive surfaces that turn the lamp either on or off upon being touched by a user have become increasingly popular.
  • this technology may only work for these specific devices, and often it is not immediately apparent to a user whether or not these specific devices operate as intended.
  • many home appliances often include a power switch that is located in an inconvenient location such as, for example, on a cord that goes behind other furniture, or behind/underneath the device to be controlled.
  • Other conventional mechanisms for simplifying operation of controllable devices include use of, for example, remote control devices.
  • a device-specific remote control is used for each controllable device and hence, for users that have multiple devices to control, the user will often have to keep track of multiple remote controls. Additionally, oftentimes these device-specific remote controls are not intuitive to use, requiring a user to manipulate a series of buttons on the remote control in order to operate these devices.
  • Other known solutions include use of personal computing devices (e.g., laptops, smart phones, tablets, etc.) that often have software applications that utilize user-assisted programming in order to enable a user to operate these controllable devices.
  • Such applications may be complicated and/or time-consuming to set up for operation, and additionally the computing devices themselves are often bulky, requiring a user to carry around the computing device. Further, in some examples, the user may need to “teach” the software application how to operate the controllable device(s).
  • a non-transitory computer-readable storage medium includes a plurality of instructions, the instructions being executable by a processing apparatus to operate a gesture detection system, the processor being in signal communication with one or more detection devices, the instructions configured to, when executed by the processing apparatus, cause the processing apparatus to: emit one or more signals from the detection system; discover a location of each of one or more controllable devices within a detectable area based at least in part on a response of each of the one or more controllable devices to the emitted signals; and detect via the one or more detection devices a predetermined gesture, wherein the detection of the predetermined gesture commands at least in part a change in a user-controlled operational state of the one or more controllable devices.
  • the instructions when executed by the processing apparatus further cause the processing apparatus to: assign a boundary area to each of the one or more controllable devices after discovery of the location.
  • the detection of the predetermined gesture further includes detection of the predetermined gesture within the assigned boundary area.
  • the one or more controllable devices includes a first wirelessly controllable device having a first unique identifier, and assignment of the boundary area to each of the one or more controllable devices further includes association of a first boundary area with the first unique identifier.
  • the predetermined gesture is at least one of a gesture directed towards the one or more controllable devices, a gesture in proximity to the one or more controllable devices, and a gesture interacting with the one or more controllable devices.
  • the one or more controllable devices includes a second wirelessly controllable device, the second wirelessly controllable device having a second unique identifier; and assignment of the boundary area to each of the one or more controllable devices further includes association of a second boundary area with the second unique identifier.
  • discovery of the location of each of the one or more controllable devices further includes wireless detection within a wireless network of the first unique identifier and the second unique identifier.
  • detection of the predetermined gesture further includes detection of a first predetermined gesture within the first boundary area for user-controlled operation of the first wirelessly controllable device and detection of a second predetermined gesture within the second boundary area for user-controlled operation of the second wirelessly controllable device.
  • in response to the first predetermined gesture, the user-controlled operational state of the second wirelessly controllable device is substantially maintained, and in response to the second predetermined gesture, the user-controlled operational state of the first wirelessly controllable device is substantially maintained.
  • the predetermined gesture includes a motion performed by at least one of a person, animal, and object.
  • the one or more detection devices includes a camera and the response includes a visual change detectable by the camera.
  • the one or more detection devices includes a microphone and the response includes an auditory change detectable by the microphone.
  • the instructions when executed by the processing apparatus further cause the processing apparatus to detect via the one or more detection devices a noise created by a user to command at least in part the change in the user-controlled operational state of the one or more controllable devices.
  • the instructions when executed by the processing apparatus further cause the processing apparatus to learn a demonstrated gesture as the predetermined gesture and associate at least in part the detection of the demonstrated gesture to the change in the user-controlled operational state of the one or more controllable devices.
  • a method of operating a gesture detection system having a detection device in signal communication with a processor includes emitting one or more signals from the gesture detection system; discovering a location of at least one controllable device within a detectable area based at least in part on a response of the at least one controllable device to the emitted signals; detecting at least one predetermined gesture performed by a user to command operation of the at least one controllable device; and sending a signal to the at least one controllable device that is configured to regulate operation of the at least one controllable device.
  • the act of discovering the location of the at least one controllable device within the detectable area includes automatically operating the at least one controllable device and detecting a measurable change associated with the operation of the at least one controllable device.
  • the method further includes assigning a boundary area to the at least one controllable device and the act of detecting the at least one predetermined gesture includes detecting at least one predetermined gesture being performed within the boundary area.
  • the act of discovering the location of the at least one controllable device further includes detecting within a local area network a unique identifier associated with the at least one controllable device; and the act of assigning the boundary area to the at least one controllable device further includes associating the unique identifier with the boundary area.
  • a system configured to detect gestures for operation of one or more controllable devices within a detectable area.
  • the system includes a detection device; a processing apparatus in communication with the detection device; and a non-transitory computer readable storage medium having a plurality of instructions stored thereon, the instructions when executed by the processing apparatus, cause the processing apparatus to: operate a controllable device so as to cause the controllable device to generate a measurable change in state; detect the measurable change in state of the controllable device; discover a location of the controllable device based on the detection of the measurable change; assign a boundary area to the controllable device based on the discovered location; detect a predetermined gesture performed by a user within the assigned boundary area; and in response to the detection of the predetermined gesture, send a signal to the controllable device to cause the controllable device to operate in accordance with the predetermined gesture.
  • the non-transitory computer readable storage medium can have instructions thereon that when executed by the processing apparatus, cause the processing apparatus to place the location of the controllable device within a predefined map.
  • a controllable device for use with a detection apparatus is disclosed.
  • a detection apparatus for use with a controllable device is disclosed.
  • a computing device for use with a detection apparatus is disclosed.
  • FIG. 1 is a plan view of an example gesture recognition system incorporated into a user premises in accordance with one implementation of the present disclosure.
  • FIG. 2 is a plan view of an example gesture recognition system incorporated into a user premises in accordance with another implementation of the present disclosure.
  • FIG. 3 is a functional block diagram of an example computing device for use with the gesture recognition systems shown in FIGS. 1 and 2 .
  • FIG. 4 is a process flow diagram of an exemplary method for auto-configuring and operating the gesture recognition systems shown in FIGS. 1 and 2 .
  • FIG. 5 is a perspective view that includes an example coordinate map and two-dimensional (2D) boundary areas in accordance with some implementations of the present disclosure.
  • FIG. 6 is a perspective view that includes an example coordinate map and three-dimensional (3D) boundary areas in accordance with another implementation of the present disclosure.
  • FIG. 7 is a perspective view that includes example 2D boundary areas and pathways between the boundary areas and a detection device in accordance with some implementations of the present disclosure.
  • the terms computer and computing device can include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic devices, personal communicators, tablet computers, portable navigation aids, J2ME equipped devices, cellular telephones, smart phones, personal integrated communication or entertainment devices, and/or any device capable of executing a set of instructions and processing an incoming data signal.
  • the terms computer program or software can include any sequence of human or machine cognizable steps which perform a function.
  • Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.), Binary Runtime Environment (e.g., BREW), and the like.
  • the terms connection, link, transmission channel, delay line, and wireless can include a causal link between any two or more entities (whether physical or logical/virtual) which enables information exchange between the entities.
  • memory can include any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
  • processors include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs).
  • Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
  • the term network interface can include any signal, data, or software interface with a component, network, or process including, without limitation, those of the FireWire (e.g., FW400, FW800, FWS800T, FWS1600, FWS3200, etc.), USB (e.g., USB 1.X, USB 2.0, USB 3.0, USB Type-C, etc.), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnetTM), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, etc.) or IrDA families.
  • Wi-Fi can include one or more of IEEE-Std. 802.11, variants of IEEE-Std. 802.11, standards related to IEEE-Std. 802.11 (e.g., 802.11 a/b/g/n/ac/ad/af/ah/ai/aj/aq/ax/ay), and/or other wireless standards.
  • wireless can include any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, any radio transmission, any radio frequency field, and infrared (e.g., IrDA).
  • wired can include any signal transferred over a wire, including any wire with a signal line.
  • wires with signal lines can include cables such as Ethernet cables, coaxial cables, Universal Serial Bus (USB), firewire, data lines, wire, and/or any wired connection known in the art.
  • any of the various descriptions in this disclosure can be instantiated in hardware and/or software.
  • functionality and structure described in this disclosure can represent physical hardware that is hardcoded to perform such functionality.
  • software can be run that performs such functionality and/or has such structure.
  • the present disclosure provides for gesture recognition systems that are configured to provide users with simplified operation of various controllable devices such as, for example, in-home controllable devices (e.g., lights, televisions, stereos, electrically operated fireplaces, electrically operated tea kettles, etc.).
  • the gesture recognition systems disclosed herein may also automatically configure themselves through a so-called auto-configuration mode in order to determine the respective physical locations and/or identities of controllable devices.
  • this functionality is available “out of the box” without requiring the user to perform complicated setup protocols, or otherwise requiring extensive supervised training of their devices.
  • each controllable device within the detectable area of an imaging device can be readily tracked, even when controllable devices within a detectable area are subsequently moved.
  • a gesture recognition system is configured to: (i) automatically discover locations associated with one or more controllable devices within a detectable area, (ii) assign a boundary area to each of the discovered controllable devices, (iii) detect a predetermined gesture performed by a user to command operation of one of the controllable devices, and (iv) send a signal to operate the controllable device corresponding to the boundary area (e.g., an area proximal to and/or at the controllable device) within which the detected gesture was performed.
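  • By way of a hedged illustration only, the short Python sketch below wires steps (i) and (ii) together: each discovered device is operated, the resulting measurable change is located, and that location is padded into a boundary area keyed by the device identifier. The helper names (operate_device, locate_change, assign_boundary) and the simulated locations are assumptions made for this sketch, not part of the disclosure.

```python
# Illustrative sketch of auto-configuration: operate each discovered device,
# watch for the resulting measurable change, and pad the change location into
# a boundary area keyed by the device's unique identifier. All helpers below
# are simulated stand-ins, not APIs defined by the disclosure.

def operate_device(device_id):
    """Stand-in for blasting an ON/OFF signal at one controllable device."""
    print(f"sending ON/OFF signal to {device_id}")

def locate_change(device_id):
    """Stand-in for frame differencing; returns an (x, y) image location or None."""
    simulated_locations = {"lamp_112a": (120, 340), "tv_112c": (560, 220)}
    return simulated_locations.get(device_id)

def assign_boundary(location, margin=20):
    """Pad the detected location into a square boundary area (x0, y0, x1, y1)."""
    x, y = location
    return (x - margin, y - margin, x + margin, y + margin)

def auto_configure(device_ids):
    boundaries = {}
    for device_id in device_ids:
        operate_device(device_id)                  # (i) cause a measurable change
        location = locate_change(device_id)
        if location is None:
            continue                               # device outside detectable area
        boundaries[device_id] = assign_boundary(location)  # (ii) boundary area
    return boundaries

print(auto_configure(["lamp_112a", "tv_112c", "light_112d"]))
```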
  • the disclosed systems and methods may be configured for one or more additional functions such as, for example, unsupervised learning, calculation of local coordinate maps, overlay of boundary areas onto a camera view, etc.
  • a system having such functionality: (i) reduces the need for device-specific remote control devices; (ii) enables intuitive operative commands for these controllable devices; (iii) discovers the locations/identities of controllable devices automatically; and (iv) generates multiple commands for simultaneous control of multiple controllable aspects of a given controllable device (e.g., for control of channel, volume, etc. for a television) as well as for operation of multiple controllable devices contemporaneously with one another.
  • Methods of utilizing the aforementioned detection systems are also disclosed. Other advantages are readily discernable by one of ordinary skill given the contents of the present disclosure.
  • system 100 includes a detection device 102 a that is able to detect motion and/or light within its field of view (e.g., a detectable area 104 a ).
  • the detection device 102 a may include, for example, an imaging device such as a video camera for the detection of visible light.
  • Other implementations of the detection device include additional technological means to improve upon the spectral range and intensity of the images it captures, including in low light conditions (e.g., so-called night vision technology).
  • the detection device will include an infrared (IR) camera instead of, or in addition to, a visible light camera.
  • the aforementioned detection device implementations can include stereoscopic vision equipment in order to provide, for example, the ability for the detection device to perceive depth.
  • the gesture recognition system 100 is disposed in an environment, such as a room in a home, having non-controllable objects 108 a and 108 b (e.g., a couch and a coffee table, respectively in the illustrated implementation) as well as controllable devices 112 a , 112 b , 112 c , and 112 d .
  • Controllable devices 112 a - 112 d include, in the illustrated implementation, a first light, a stereo, a television, and a second light, respectively.
  • controllable devices 112 a , 112 b , 112 c , and 112 d can be other controllable devices such as, without limitation, appliances (e.g., dishwashers, refrigerators, washing machines, drink machines, etc.), electronics (e.g., telephones, thermostats, televisions, home entertainment systems, video game consoles, projectors, stereos, etc.), fixtures (e.g., lights, hot tubs, etc.), heaters, air conditioners, vending machines, machinery, toys, and/or any other device that can perform actions under control (e.g., remote control, electronic switches, buttons, etc.).
  • controllable devices 112 a - 112 c are within the detectable area 104 a of the detection device 102 a , while controllable device 112 d is located outside of the detectable area 104 a of the detection device 102 a . Accordingly, in this illustrated example, controllable devices 112 a - 112 c may be operated with the gesture recognition system 100 , while controllable device 112 d may not, as it is outside of the detectable area 104 a for the detection device 102 a.
  • controllable device 112 a (e.g., the first light) is, in the illustrated implementation, coupled to a wireless enabled electrical outlet device 114 such as, for example, a BELKIN® WEMO® switch.
  • operation of the controllable device 112 a can be controlled via wireless signals originating from the detection device 102 a and/or computing system 106 that are received by the wireless enabled electrical outlet device 114 thereby enabling the application (or withdrawal) of power to the controllable device 112 a.
  • controllable devices 112 b and 112 c are controlled via conventional IR signals originating from the detection device 102 a .
  • Consumer IR devices can communicate via the transmission of digitally-coded pulses of IR radiation to control functions such as power, volume, tuning, etc.
  • detection device 102 a is able to self-determine the make and model number of a given controllable device via its extant imaging capability as well as its coupling to, for example, the Internet (e.g., Internet 306 , FIG. 3 ) via, for example, computing system 106 .
  • detection device 102 a captures an image of controllable device 112 c , and subsequently forwards this image to computing system 106 via network interface 116 a .
  • Computing system 106 includes a module that is able to recognize the manufacturer of the television (e.g., SAMSUNG®) by virtue of this captured image and is subsequently able to determine the proper IR coding in order to control the functions of this device 112 c .
  • Computing system 106 subsequently transmits this IR coding information to the detection device 102 a so that the detection device is able to subsequently control the operation of the controllable device 112 c.
  • upon initialization of system 100 (e.g., upon initial installation of the system 100 , periodic updates of the system 100 , etc.), the system will scan the wireless network in order to determine the presence of wireless enabled electrical outlet devices. Subsequent to this discovery, or contemporaneously therewith, detection device 102 a will “blast” detectable area 104 a with various wireless and/or IR signals while simultaneously monitoring the environment within its detectable area 104 a .
  • system 100 is able to identify controllable devices (e.g., controllable devices 112 a - 112 c ) as well as associate each of these controllable devices to various wireless/IR signals in order to effectuate their control.
  • FIG. 2 illustrates a block diagram of another exemplary gesture recognition system 100 incorporated into a user premises.
  • system 100 includes a first detection device 102 a and a second detection device 102 b that are configured to detect motion and/or light within their respective fields of view (e.g., detectable areas 104 a and 104 b ).
  • Detection devices 102 a and 102 b are each in signal and/or data communication with a computing system 106 via network interfaces 116 a and 116 b , respectively.
  • Similar to the discussion with regards to FIG. 1 , system 100 is disposed in an environment, such as a room in a home, having non-controllable objects 108 a , 108 b (e.g., a couch, coffee table, etc.) and controllable devices 112 a , 112 b , 112 c , and 112 d .
  • Controllable devices 112 a - 112 d may be those described herein in reference to system 100 .
  • controllable devices 112 a - 112 c are within detectable area 104 a .
  • controllable devices 112 a , 112 c and 112 d are within detectable area 104 b .
  • controllable devices 112 a - 112 d may all be operated with gesture recognition as each of these controllable devices 112 a - 112 d is within the detectable areas associated with the detection devices.
  • System 100 of FIG. 2 is similar to, and may be operated in a similar manner with the system illustrated and discussed with regards to FIG. 1 ; however, in the case of the system illustrated in FIG. 2 , inclusion of a second detection device 102 b allows the system to detect additional controllable devices (e.g., controllable device 112 d ) that might otherwise lie outside of the detectable area.
  • the disclosed gesture recognition systems may include additional detection devices of the same type (e.g., three cameras, four cameras, etc.) for increasing the coverage scope of detectable areas and/or for including additional detectable areas in other rooms, etc. It will be further appreciated that the disclosed gesture detection systems may include additional detection devices or substitute detection devices of different types, such as the aforementioned night vision cameras and stereoscopic vision cameras. It will be still further appreciated that the detection devices may be placed in a location which optimizes the detectable area depending on the locations of the controllable devices within the environment (e.g., a room in a home).
  • For example, and as illustrated in FIG. 2 , controllable devices 112 a - 112 c are within the detectable area 104 a of detection device 102 a , while controllable devices 112 a , 112 c , and 112 d are within the detectable area 104 b of detection device 102 b . Accordingly, every controllable device 112 a - 112 d within the illustrated room is within one or more of the detectable areas for the detection devices.
  • Computing device includes, inter alia, one or more processors 300 in data communication with memory 302 as well as network interface(s) 304 for communications external to computing device.
  • network interface provides for data communications with the detection device 102 (which can be representative of, e.g., detection device 102 a , detection device 102 b , or any other detection device) via a wireless or wireline communications link 116 .
  • network interface 304 optionally provides for communication with the internet 306 via communications link 314 , as well as optionally provides for communication with controllable device 112 via communications link 310 .
  • detection device 102 may also have a communications link 308 with the Internet 306
  • controllable device 112 may optionally have a communications link 312 with the Internet as well.
  • the memory 302 of computing device 106 also includes a computer program with computer-readable instructions that when executed by the processor 300 carry out one or more methods operable to implement the gesture recognition systems described herein. Exemplary methods for operating the gesture recognition system are discussed herein, such as in reference to method 400 shown in FIG. 4 discussed infra.
  • a first portion 401 of the method 400 includes the discovery and mapping capability portions for the gesture recognition system 100 that will be discussed in subsequent detail herein via steps 402 - 410 (e.g., so-called “auto-configuration mode”).
  • the system 100 performs the first portion 401 of the method 400 upon initial start-up/reboot of the system 100 as well as upon determining that one or more detection devices 102 a - 102 b and/or one or more controllable devices 112 a - 112 d has been moved.
  • the system 100 determines that one or more detection devices 102 a - 102 b and/or one or more controllable devices 112 a - 112 d has been moved by determining that otherwise stationary objects have changed locations (e.g., by analyzing two or more images captured by the detection device and/or taking the difference between those two or more images) and/or through receipt of a signal from a motion sensor (not shown).
  • motion sensor is disposed on or within either of detection devices 102 a - 102 b for determining when/if one or more of detection devices 102 a - 102 b has moved.
  • motion sensor is a discrete device that is separate and apart from the various devices illustrated in FIGS. 1 and 2 .
  • method 400 includes operation of controllable devices at steps 412 and 414 .
  • gesture operation of controllable devices includes user commanded operation of the one or more controllable devices.
  • the first portion 401 of method 400 may, in some cases, be carried out without user interaction and/or when a user is out of the detectable area (e.g., detectable areas 104 a - 104 b ), while the second portion of method 400 can be carried out while a user is performing a predetermined gesture within the detectable area.
  • the gesture recognition system 100 first scans a network for wireless controllable devices in order to determine their respective identifiers.
  • the wirelessly controllable devices are identified via a unique identifier (e.g., IP address, media access control (MAC) address, unique device identifier (UDID), serial number, etc.).
  • Each wirelessly controllable device can have a unique identifier.
  • one or more of the devices 112 a - 112 d may be associated with a unique MAC address.
  • discovery of wirelessly controllable devices can be carried out via standard transmission control protocol/internet protocol (TCP/IP) discovery processes through, for example, simple network management protocol (SNMP) network discovery processes.
  • discovery of wirelessly controllable devices can be carried out via port scanning (e.g., Transmission Control Protocol (TCP) scanning, SYN scanning, User Datagram Protocol (UDP) scanning, etc.) and/or BLUETOOTH® discovery protocols.
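  • As a hedged illustration of the port-scanning style of discovery mentioned above, the sketch below probes a handful of addresses on a local subnet with TCP connect attempts using Python's standard socket module; the subnet prefix, port list, and timeout are arbitrary assumptions, and a real deployment might instead rely on SNMP, SSDP/UPnP, or BLUETOOTH® discovery protocols.

```python
# Toy TCP "connect scan" over a small local address range. The subnet prefix,
# port list, and timeout are assumptions for illustration only.
import socket

def scan_subnet(prefix="192.168.1.", hosts=range(1, 20), ports=(80, 1900, 8009)):
    found = []
    for host in hosts:
        address = f"{prefix}{host}"
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(0.2)
                if sock.connect_ex((address, port)) == 0:  # 0 means the port accepted
                    found.append((address, port))
                    break
    return found

if __name__ == "__main__":
    print(scan_subnet())   # candidate entries for the device list built at step 402
```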
  • the gesture recognition system 100 generates a device list specifying one or more devices, e.g., devices 112 a - 112 d , that are available for configuration and/or operation.
  • the gesture recognition system 100 After scanning for controllable devices, at step 404 , the gesture recognition system 100 operates the detection device in its visual discovery mode.
  • the detection device continuously sends (e.g., blasts) a number of signals (e.g., ON/OFF signals in accordance with known IR coding protocols) in order to cause the controllable devices to respond in a manner that can be detected visually and/or audibly (e.g., turning on a television).
  • Step 404 can allow the gesture recognition system 100 to determine which controllable devices are in a detectable area and the location of those devices.
  • the gesture recognition system 100 may operate each of the controllable devices consecutively (e.g., serially).
  • the system may first operate device 112 a , then operate device 112 b , next operate device 112 c , and finally operate device 112 d .
  • the system may operate the controllable devices 112 a - 112 d concurrently (e.g., in parallel) using a distinct operative pattern for each of controllable devices 112 a - 112 d .
  • the gesture recognition system 100 may operate two or more detection devices substantially simultaneously such that, for example, device 112 a is operating according to a first ON/OFF pattern via a first detection device 102 a , while device 112 d is concurrently operating according to a second ON/OFF pattern via a second detection device 102 b .
  • the gesture recognition system 100 can execute a combination of both consecutive and concurrent operation of the controllable devices.
  • For example, device 112 a may operate according to a first ON/OFF pattern via first detection device 102 a while device 112 d concurrently operates according to a second ON/OFF pattern via second detection device 102 b ; subsequently, device 112 b may operate according to the first ON/OFF pattern via first detection device 102 a while device 112 c concurrently operates according to the second ON/OFF pattern via second detection device 102 b .
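  • One hedged way to keep such concurrent operation distinguishable is to drive each device with a unique binary ON/OFF pattern and correlate the brightness sequence observed near each candidate location against every commanded pattern, as in the toy sketch below; the pattern lengths, values, and device names are assumptions for illustration only.

```python
# Identify which commanded ON/OFF pattern best matches an observed brightness
# sequence at one image location. Patterns and the observation are toy data.

def match_pattern(observed, patterns):
    """Return the device whose binary pattern agrees with the most frames."""
    best_device, best_score = None, -1
    for device_id, pattern in patterns.items():
        score = sum(int(o == p) for o, p in zip(observed, pattern))
        if score > best_score:
            best_device, best_score = device_id, score
    return best_device

patterns = {
    "lamp_112a": [1, 0, 1, 0, 1, 0, 1, 0],    # commanded via detection device 102a
    "light_112d": [1, 1, 0, 0, 1, 1, 0, 0],   # commanded via detection device 102b
}
observed_near_lamp = [1, 0, 1, 0, 1, 1, 1, 0]  # noisy per-frame brightness, thresholded
print(match_pattern(observed_near_lamp, patterns))  # -> "lamp_112a"
```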
  • the gesture recognition system 100 detects visual changes that occur.
  • the visual changes can be identified by taking a difference between two or more frames of images taken by a detection device (e.g., detection device 102 a - 102 b ). This difference can be computed by subtracting the frames of images (e.g., subtracting the first frame from the second frame) and/or by comparing the frames of images.
  • the camera of the detection device 102 a can store a first frame of an image in memory (e.g., memory of computing device 106 ) while device 112 a is off. Then, device 112 a can receive an ON/OFF signal for operation from the detection device 102 a .
  • the camera of detection device 102 a can then record a second frame of an image in memory that captures a change in light intensity proximal to the location of controllable device 112 a in response to this ON/OFF signal. Based at least in part on the difference between the first frame of the image, where the light appears off, and the second frame of the image, where the light appears on, the detection device 102 a can recognize that the lamp has turned from OFF to ON.
  • the gesture recognition system 100 then records the location of controllable device 112 a in memory (e.g., memory of computing device 106 ) and/or on detection device 102 a.
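  • The frame-differencing step described above can be approximated in a few lines of NumPy, as in the hedged sketch below: two synthetic grayscale frames are compared, pixels whose intensity changed by more than a threshold are flagged, and the centroid of the changed pixels is taken as the device location. The threshold, frame size, and lamp region are arbitrary assumptions.

```python
# Locate a light turning ON by differencing two grayscale frames (NumPy only).
import numpy as np

def locate_change(frame_before, frame_after, threshold=40):
    """Return the (row, col) centroid of pixels whose intensity changed."""
    diff = np.abs(frame_after.astype(int) - frame_before.astype(int))
    changed = diff > threshold
    if not changed.any():
        return None
    rows, cols = np.nonzero(changed)
    return rows.mean(), cols.mean()

# Synthetic example: a dark scene, then a bright patch where the lamp turned on.
before = np.zeros((100, 100), dtype=np.uint8)
after = before.copy()
after[60:70, 20:30] = 200            # lamp region brightens after the ON signal
print(locate_change(before, after))  # approximately (64.5, 24.5)
```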
  • detection device 102 a can store a first frame of an image in memory (e.g., memory of computing device 106 ).
  • this first frame of the image can capture the display of a channel number on device 112 c , which can be a television.
  • device 112 c may typically display a channel number, or the channel number may be on display because a user recently changed the channel before the first frame was captured. Then, device 112 c can receive a change channel signal for operation from detection device 102 a .
  • the camera of detection device 102 a can then record a second frame of an image in memory that captures the channel change (e.g., where the new, changed channel number is displayed on device 112 c ) proximal to the location of controllable device 112 c in response to this change channel signal.
  • Based at least in part on the difference between the first frame of the image, where the channel number is a first number, and the second frame of the image, where the channel number is a second number, detection device 102 a can recognize that the channel has changed, determine the location of controllable device 112 c , and record the location of controllable device 112 c in memory (e.g., memory of computing device 106 ).
  • detection device 102 a can be configured to recognize shapes of numbers and identify the numbers displayed on device 112 c . Detection device 102 a can then compare the channel numbers captured in the first frame and the second frame, and determine whether the channel has changed in response to the change channel signal. In some cases, where the channel number has skipped to a non-consecutive number, the detection device can recognize changes that have occurred due to other commands (e.g., page up/page down, or the entering of a channel number). The gesture recognition system 100 then recognizes the changes in response to the commands and records the location of controllable device 112 c in memory of computing device 106 and/or of detection device 102 a .
  • the gesture recognition system 100 can include other or additional detection devices, such as a microphone, to enable the gesture recognition system 100 to detect an audible measurable change in response to the blast of signals at step 404 .
  • device 112 b may be a stereo which receives an ON/OFF signal for operation from detection device 102 .
  • the microphone then captures a change in sound intensity proximal to the location of device 112 b via the processing of audio signals captured by the microphone.
  • a signal processor communicatively coupled to the microphone can include sound recognition, which can allow the signal processor to detect and/or identify predetermined sounds/noises.
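  • A corresponding hedged sketch for the microphone case is shown below: the RMS level of audio captured before and after the ON/OFF signal is compared, and a large jump is treated as the stereo responding. The sample rate, signal shapes, and threshold ratio are illustrative assumptions only.

```python
# Detect an audible change (e.g., a stereo turning on) by comparing RMS levels
# of audio captured before and after the ON/OFF signal. Values are synthetic.
import numpy as np

def rms(samples):
    return float(np.sqrt(np.mean(np.square(samples))))

def audible_change(before, after, ratio_threshold=4.0):
    """True when the post-signal level is several times the pre-signal level."""
    quiet = rms(before) + 1e-9           # avoid division by zero in silence
    return rms(after) / quiet > ratio_threshold

rng = np.random.default_rng(0)
silence = 0.01 * rng.standard_normal(16000)                       # ~1 s of near-silence
music = 0.2 * np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # stereo playing a tone
print(audible_change(silence, music))  # True
```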
  • multiple detection devices can be utilized alternatively or in combination (e.g., contemporaneously).
  • visual imaging and/or audio detection can be used.
  • detection devices can have increased robustness for identifying the location of devices 112 a - 112 d even in noisy (e.g., visually and/or audibly noisy) environments because there are multiple mediums (e.g., visual and audio) in which these detection devices can detect changes in devices 112 a - 112 d .
  • Having both vision and audio can also advantageously decrease false positives by looking for devices 112 a - 112 d that respond in two different mediums (e.g., audio and visual) instead of one.
  • a single detection device 102 can detect a plurality of indicia such as both audio and visual.
  • video cameras can include microphones so that the video cameras can sense and/or record both audio and visual.
  • An additional advantage is that locating devices 112 a - 112 d through both audio and visual changes can enhance the ability of detection device 102 to detect those devices accurately by allowing verification of the locations of those devices. For example, once detection device 102 identifies the location of one of devices 112 a - 112 d by either visual or audio, that location can then be verified and/or better approximated by the other of visual or audio.
  • a controllable device is located outside of the detectable area (e.g., the device may be in another room), and therefore automatic operation of the device may generate no measurable change in the environment.
  • the system may ignore a device ID determined at step 402 and optionally later report to a user that the device was not visually detected.
  • the gesture recognition system 100 detects the visual change using virtually any suitable technique. For example, the gesture recognition system 100 detects the measurable change by detecting a sudden change (e.g., within a prescribed time window) of a given sensory characteristic (e.g., a change in color, brightness, local contrast, sound level, etc.).
  • the gesture recognition system 100 detects the measurable change by comparing a current sensory characteristic with a reference sensory characteristic.
  • the reference sensory characteristic may be configured based on a reference frame depicting an initial state of the environment (e.g., a state where all controllable devices are off).
  • the gesture recognition system 100 defines regions of interest (ROI) at step 408 .
  • the gesture recognition system 100 records in memory the location of the measurable change and associates the location with a specific controllable device ID to create a local coordinate map of the detectable area.
  • the local coordinate map generated will consist of a 2D map.
  • the exact form of this local coordinate map can be formatted into any suitable format (e.g., a planar Cartesian coordinate map, a polar coordinate map, etc.).
  • FIG. 5 An example of a local 2D coordinate map for the detectable area of a gesture recognition system 100 is depicted in FIG. 5 .
  • the local 2D coordinate map corresponds to the X-Y coordinates of images captured by the detection device. Based on the measurable change as detected in the images, the system associates the X 1 , Y 1 region of the coordinate map with controllable device 112 d . Similarly, the system associates the X 2 , Y 2 region of the coordinate map with controllable device 112 c , and associates the X 3 , Y 3 region of the coordinate map with controllable device 112 a.
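  • Such a local coordinate map is essentially a lookup from device identifier to region; the hedged sketch below keeps it as a dictionary of Cartesian image coordinates and shows one possible conversion to polar form, since the disclosure notes that any suitable format may be used. The specific coordinate values are invented for illustration.

```python
# One possible representation of the local 2D coordinate map, plus a polar view.
import math

coordinate_map = {                    # device ID -> (X, Y) region centre in pixels
    "light_112d": (110, 80),          # X1, Y1
    "tv_112c":    (420, 150),         # X2, Y2
    "lamp_112a":  (610, 300),         # X3, Y3
}

def to_polar(xy, origin=(0, 0)):
    """Convert an (X, Y) map entry into (radius, angle-in-degrees) about an origin."""
    dx, dy = xy[0] - origin[0], xy[1] - origin[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

for device_id, xy in coordinate_map.items():
    print(device_id, xy, to_polar(xy))
```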
  • the local coordinate map consists of a 3D map (e.g., a 3D Cartesian coordinate map, a cylindrical coordinate map, a spherical coordinate map, etc.).
  • the 3D map may correspond to the X-Y coordinates of images captured by the detection device and a Z coordinate indicating, for example, a distance of the controllable device from the detection device.
  • An example of a local 3D coordinate map for the detectable area of a gesture recognition system 100 is depicted in FIG. 6 .
  • multiple detection devices can be used to capture a 3D map.
  • a first detection device can take a first video at a first angle. This first video can be 2D having Xa-Ya dimensions (which can be mapped with Xa-Ya coordinates).
  • the second detection device can take a second video at a second angle. This second video can be 2D having Xb-Yb dimensions (which can be mapped with Xb-Yb coordinates).
  • a computing device can receive the first video and second video. In some cases, the computing device can create a 3D map based at least in part on Xa-Ya dimensions from the first video and a Za dimension calculated at least in part on the Xb-Yb dimensions of the second video.
  • the first detection device can be substantially orthogonal to the second detection device and/or lie in substantially the same horizontal plane.
  • the field of view (and consequently ROIs) of the first detection device and second detection device can have substantial overlap (e.g., 30% or more overlap).
  • the 3D map can be generated in some implementations by taking the Xa-Ya dimensions of the first video and basing the Za dimension of the 3D map at least in part on the Xb (or Yb) dimension of the second video.
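  • Under the simplifying assumptions stated above (roughly orthogonal detection devices lying in roughly the same horizontal plane), combining the two 2D views into a 3D coordinate can be as simple as the hedged sketch below, in which the depth coordinate Za is borrowed from the second camera's Xb axis; calibration, lens distortion, and scaling are ignored here.

```python
# Combine two roughly orthogonal 2D camera observations of the same device into
# a single 3D coordinate. Calibration and scaling are ignored; this only
# illustrates "Za taken from the second camera's Xb axis".

def combine_views(view_a, view_b):
    """view_a: (Xa, Ya) from camera A; view_b: (Xb, Yb) from camera B."""
    xa, ya = view_a
    xb, _yb = view_b
    return (xa, ya, xb)   # (X, Y, Z) with depth borrowed from camera B's X axis

print(combine_views((320, 180), (95, 176)))  # -> (320, 180, 95)
```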
  • the first detection device, the second detection device, and/or any other detection device may not be substantially orthogonal to each other and/or may not lie in substantially the same horizontal plane.
  • the first detection device and the second detection device may even have only a small area of overlap between their fields of view.
  • the computing device can construct the three-dimensional map using three-dimensional reconstruction from line projections in regions of overlap based at least in part on the videos taken from the detection devices.
  • the system can assign a boundary area to each of the discovered controllable devices at step 410 .
  • the boundary area may be an area with a location and dimensions that are comparable to the location and dimensions of the controllable device.
  • for example, where the controllable device is a lamp, the assigned boundary area may consist of an area with roughly the same size/shape as the light shade of the lamp.
  • the boundary area may additionally include a region adjacent to the device, such as the dimensions of the device plus an additional zone that is a pre-defined distance (e.g., 1 inch, 2 inches, or fractions thereof, etc.) beyond the dimensions of the controllable device.
  • the boundary areas will be 2D as well.
  • the boundary areas will be 3D (or alternatively 2D). In both examples, the boundary areas may be overlaid on a detection device view for the detectable area.
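  • Continuing the NumPy-flavored sketch given earlier for frame differencing, a 2D boundary area can be derived by taking the bounding box of the changed pixels and padding it by a fixed margin, as shown below; the margin value is an arbitrary stand-in for the pre-defined distance described above.

```python
# Turn a mask of changed pixels into a padded 2D boundary area (x0, y0, x1, y1).
import numpy as np

def boundary_from_mask(changed, margin=5):
    """Bounding box of True pixels in `changed`, expanded by `margin` pixels."""
    rows, cols = np.nonzero(changed)
    if rows.size == 0:
        return None
    h, w = changed.shape
    return (max(cols.min() - margin, 0), max(rows.min() - margin, 0),
            min(cols.max() + margin, w - 1), min(rows.max() + margin, h - 1))

mask = np.zeros((100, 100), dtype=bool)
mask[60:70, 20:30] = True             # pixels that changed when the lamp turned on
print(boundary_from_mask(mask))       # -> (15, 55, 34, 74)
```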
  • the gesture recognition system 100 will assign two or more ROIs for a given/single controllable device. For example, in instances in which the detection device detects two separate visual changes to a scene (e.g., step 406 ), the recognition system can assign two different spatially distinct boundary areas for a given controllable device. Such a scenario could take place in a room where, for example, there is a mirror which causes two representations of a given controllable device (e.g., the actual controllable device and the controllable device's reflection within the mirror). In such an instance, the gesture recognition system 100 will, in an exemplary implementation, assign boundary areas to both areas of interest.
  • FIG. 5 An example of a 2D boundary overlay 700 from the view point of a detection device is illustrated in FIG. 5 .
  • controllable devices 112 a , 112 c , and 112 d are in the detectable area of a detection device.
  • the gesture recognition system 100 assigns a boundary area 702 a to device 112 a , a boundary area 702 c to device 112 c , and a boundary area 702 d to device 112 d .
  • Each of the boundary areas encloses the corresponding device and includes an additional zone extending a pre-defined distance beyond the outline of the controllable device.
  • the boundary areas are shown as squares; however, in other examples the boundary areas can trace the shape of the devices and/or be of other shapes (e.g., circular, triangular, etc.). Moreover, while the boundary areas are shown as being symmetrically placed about devices 112 a , 112 c , and 112 d , it is recognized that in alternative implementations, these boundary areas may be placed asymmetrically about these devices. Each of these boundary areas will be associated with an identifier for each of the respective controllable devices.
  • FIG. 6 An example of a 3D boundary overlay 800 from the view point of a detection device is illustrated in FIG. 6 .
  • controllable devices 112 a , 112 c , and 112 d are in the detectable area of a detection device.
  • the gesture recognition system 100 assigns a boundary area 802 a to device 112 a , a boundary area 802 c to device 112 c , and a boundary area 802 d to device 112 d .
  • Each of the boundary areas encloses the corresponding device and includes an additional zone that extends a pre-defined distance beyond the shape of the device (whether symmetrically or asymmetrically).
  • boundary areas are shown as cubes; however, in other examples the boundary areas can trace the shape of the devices and/or be of other shapes (e.g., spherical, pyramidal, etc.). Each of these boundary areas can be associated with an identifier for each of the respective controllable devices (e.g., devices 112 a - 112 d ).
  • method 400 includes operation of controllable devices at steps 412 and 414 .
  • the gesture recognition system 100 is configured to receive commands from a user for gesture operation of these discovered controllable devices (e.g., those discovered devices and assigned ROIs defined during auto-configuration 401 ).
  • audio or other sensory inputs can be used in the alternative or in combination with gestures.
  • audio can be detected by a detection device such as a microphone.
  • gesture recognition system 100 can associate predetermined audio with operations.
  • a user may make a predetermined noise, such as, without limitation: saying a word, making a sound/noise, using a device that speaks words or makes sounds/noises, hitting/tapping an object, or any other way of making a noise.
  • This predetermined noise can be associated at least in part with any of the operations and/or controllable actions described in this disclosure, such as turning a channel, turning on/off a device, etc.
  • the predetermined noise itself can activate the operation initiated by gesture recognition system 100 .
  • the noise at a particular location, as determined at least in part by a visual system of gesture recognition system 100 and/or triangulation of the noise by the gesture recognition system 100 , can activate the operation initiated by gesture recognition system 100 .
  • the noise in combination with a gesture can activate the operation initiated by gesture recognition system 100 .
  • false positives can be reduced.
  • a gesture recognition system 100 may associate a user reaching towards a device as a gesture to turn on/off that device.
  • a user may on occasion reach towards a device without actually intending to turn on/off that device.
  • a drink may be sitting next to a lamp.
  • Gesture recognition system 100 may associate reaching towards the lamp as a gesture that commands gesture recognition system 100 to turn on/off the lamp.
  • a user might actually be reaching for a drink sitting next to a lamp instead of reaching towards the lamp.
  • where gesture recognition system 100 associates both a noise (e.g., the user saying “lamp” or making some other noise) and a gesture (e.g., reaching towards the lamp or some other gesture) with an operation, merely reaching towards the lamp would not have produced the false positive of the lamp turning on/off.
  • having both audio and visual gestures, whatever they may be, used to command operations can reduce false positives.
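  • As a minimal sketch of the foregoing false-positive reduction, the following Python snippet requires both a recognized gesture and a recognized noise directed at the same target within a short time window before a command is issued; the event format, target names, and window length are assumptions for exposition only.

```python
# Minimal sketch: require both a recognized gesture and a recognized noise within a
# short time window before issuing a command, to reduce false positives.
from collections import deque

WINDOW_S = 2.0  # seconds within which gesture and noise must co-occur

def command_from_events(events):
    """events: list of (timestamp, kind, target) with kind in {'gesture', 'noise'}."""
    recent = deque()
    for t, kind, target in sorted(events):
        recent.append((t, kind, target))
        # Drop events that fall outside the coincidence window.
        while recent and t - recent[0][0] > WINDOW_S:
            recent.popleft()
        kinds = {(k, tgt) for _, k, tgt in recent}
        if ("gesture", target) in kinds and ("noise", target) in kinds:
            return ("toggle", target)
    return None

print(command_from_events([(0.0, "gesture", "lamp"), (1.2, "noise", "lamp")]))  # ('toggle', 'lamp')
print(command_from_events([(0.0, "gesture", "lamp")]))                          # None (no noise)
```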
  • the system detects a predetermined gesture and/or an audible noise performed by a user within a boundary area to command operation of a controllable device.
  • the gesture recognition system 100 is configured to disregard and/or ignore user movements (e.g., gestures) that are made outside of the boundary areas assigned at step 410 and/or gestures that do not match a predetermined gesture. Gestures performed within these boundary areas may accordingly be referred to as region-specific gestures.
  • a gesture recognition system 100 can detect movements/gestures and/or audible noises, and perform operations without direct regard to the boundary areas assigned at step 410 , or where no boundary areas are assigned at step 410 .
  • the gesture recognition system 100 may detect gestures in proximity, moving towards, and/or interacting with one or more controllable devices 112 a - 112 d .
  • the gesture recognition system 100 through systems and methods described in this disclosure, may detect a user moving towards or gesturing towards (e.g., reaching his/her arm towards) devices 112 a - 112 d .
  • a user may be across a room and interact by gesturing (e.g., pointing) to one of devices 112 a - 112 d , which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a - 112 d and/or perform any other operation associated at least in part with the gesture.
  • a user may be in close proximity to one of devices 112 a - 112 d (e.g., just outside the boundary area assigned at step 410 or within a predefined distance (e.g., approximately 1, 2, or 3 feet, or any predetermined absolute (e.g., in US, or standard units) or relative distance (e.g., pixels, lengths, etc.)) without regard to any boundary area) and interact by gesturing (e.g., pointing) at that device, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a - 112 d and/or perform any other operation associated at least in part with the gesture.
  • a user may walk a distance (e.g., approximately 1, 2, or 3 feet, or any predetermined absolute (e.g., in US, or standard units) or relative distance (e.g., pixels, lengths, etc.)) towards one of devices 112 a - 112 d while looking at that device, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a - 112 d and/or perform any other operation associated at least in part with the gesture.
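  • A minimal sketch of proximity-based selection is shown below: the nearest controllable device is selected only when the user is within a predefined distance of it. The device coordinates, units, and distance threshold are hypothetical.

```python
# Minimal sketch: select the controllable device nearest to the user when the user
# is within a predefined distance and performs a gesture.
import math

DEVICE_LOCATIONS = {"112a": (1.0, 2.0), "112c": (4.0, 1.5), "112d": (6.0, 3.0)}
MAX_DISTANCE = 1.0  # e.g., roughly "within arm's reach" in arbitrary units

def device_in_reach(user_xy):
    """Return the id of the nearest device within MAX_DISTANCE, else None."""
    best_id, best_d = None, float("inf")
    for dev_id, (x, y) in DEVICE_LOCATIONS.items():
        d = math.hypot(user_xy[0] - x, user_xy[1] - y)
        if d < best_d:
            best_id, best_d = dev_id, d
    return best_id if best_d <= MAX_DISTANCE else None

print(device_in_reach((1.3, 2.2)))  # '112a' -- close enough to the first light
print(device_in_reach((3.0, 5.0)))  # None   -- no device within the threshold
```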
  • gesture recognition system 100 can detect sounds in the alternative or in combination with any of the aforementioned example gestures.
  • a user can say “lamp” or make some other predefined noise (e.g., sound, words, etc.) while gesturing (e.g., pointing) at device 112 a , which gesture recognition system 100 can recognize as a command to turn on/off device 112 a and/or perform any other operation associated at least in part with the gesture and/or noise.
  • a user can just say “lamp on” or make some other predefined noise (e.g., sounds, words, etc.), which gesture recognition system 100 can recognize as a command to turn on/off device 112 a and/or perform any other operation associated at least in part with the noise.
  • a user may touch or otherwise directly interact with one of devices 112 a - 112 d , which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a - 112 d and/or perform any other operation associated at least in part with the gesture.
  • a user may make a predetermined noise while positioned in a predetermined location (e.g., as identified visually through a detection device).
  • Gesture recognition system 100 can recognize the combination of the noise and location as a command to turn on/off one or more of devices 112 a - 112 d and/or perform any other operation associated at least in part with the noise and location. Any of the aforementioned examples can also be performed by any person, animal (e.g., pet or other animal), or object (e.g., vehicle or toy).
  • gestures may be pre-defined and/or predetermined movements that the gesture recognition system 100 is programmed to recognize.
  • the movement can include motion (e.g., driving, walking, turning, etc.) by a person, animal (e.g., pet or other animal), object (e.g., vehicle or toy), etc.
  • one such predefined gesture may include tapping/touching a portion of a lamp (e.g., the bottom portion, top portion, left portion, right portion, or any portion as desired), or tapping/touching the lampshade in order to turn on/off the lamp.
  • a portion of the lamp can include, but is not limited to, a half, third, quarter, tenth, or other fraction of the lamp body.
  • a user may have to tap/touch within a given ROI a predefined number of times (e.g., two) so as to avoid, for example, false positives.
  • the user may touch the ROI once, wait for the device to briefly respond (e.g., a lamp may briefly blink on/off) and then touch the ROI once again to confirm the operation.
  • a user may place his/her hand in the predefined ROI and hold it in place (e.g., for 1, 2, 3, 4, or any predefined number of seconds) until the device responds.
  • a given controllable device can be assigned multiple ROIs. For example, in the context of the exemplary lamp, touching/tapping the top portion of the lamp may result in the lamp being turned ON, while touching/tapping the bottom portion of the lamp may result in the lamp being turned OFF. In some implementations, a user may make a visible “0” using that user's thumb and pointing finger when touching/approaching a light that the user wishes to turn ON.
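  • The following minimal sketch illustrates one way the tap-twice or hold-to-confirm behavior described above could be gated in software; the timing constants are hypothetical choices.

```python
# Minimal sketch: require two taps in the same ROI within a confirmation window
# (or a sustained hold) before toggling the device, to avoid false positives.
CONFIRM_WINDOW_S = 3.0   # second tap must arrive within this many seconds
HOLD_SECONDS = 2.0       # alternatively, holding this long also confirms

def confirm_taps(tap_times):
    """tap_times: timestamps of taps detected in one ROI; True if two taps confirm."""
    for earlier, later in zip(tap_times, tap_times[1:]):
        if later - earlier <= CONFIRM_WINDOW_S:
            return True
    return False

def confirm_hold(enter_time, exit_time):
    """True if the hand stayed in the ROI long enough to confirm."""
    return (exit_time - enter_time) >= HOLD_SECONDS

print(confirm_taps([10.0, 11.5]))   # True  -- second tap within the window
print(confirm_taps([10.0, 20.0]))   # False -- too far apart
print(confirm_hold(5.0, 7.5))       # True  -- held for 2.5 s
```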
  • the gesture recognition system 100 may instead rely on dynamic gestures in order to control a given controllable device. For example, a user wishing to control the volume on his/her television may swipe upwards within the assigned boundary area in order to raise the volume, or swipe downwards in order to lower the volume. Similarly, swiping upwards/downwards may operate the dimmer functionality of a light (e.g., swiping upwards brightens the light, while swiping downwards dims the light, etc.). Moreover, a user may choose to swipe either right-to-left/left-to-right in order to, for example, change the channel or change the color used for a given light source.
  • controllable devices can be operated by passing, for example, a user's hand along a controllable device's outline.
  • the gesture recognition system 100 can recognize when a user passes his/her hand along a predefined percentage (e.g., 50%) of the device's outline in order to prevent, inter alia, false positives, accidental gesture detections, etc.
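  • A minimal sketch of mapping dynamic swipe gestures to device-specific commands appears below; the command names and the assumption that the y-axis increases upward are illustrative only.

```python
# Minimal sketch: classify a swipe inside a device's boundary area by its dominant
# direction and map it to a device-specific command. Command names are hypothetical.
SWIPE_COMMANDS = {
    "television": {"up": "volume_up", "down": "volume_down",
                   "left": "channel_down", "right": "channel_up"},
    "light":      {"up": "brighten", "down": "dim"},
}

def classify_swipe(start_xy, end_xy):
    dx, dy = end_xy[0] - start_xy[0], end_xy[1] - start_xy[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"   # assumes y increases upward

def swipe_command(device_type, start_xy, end_xy):
    return SWIPE_COMMANDS.get(device_type, {}).get(classify_swipe(start_xy, end_xy))

print(swipe_command("television", (0, 0), (0, 5)))   # volume_up
print(swipe_command("light", (0, 0), (1, -4)))       # dim
```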
  • a user has the ability to customize the operation of the gesture recognition system 100 by selecting how to control operation (e.g., define gestures) for their controllable devices.
  • a user may also optionally be given the option to determine which predetermined gestures and/or audio that the user wishes to use. For example, a user could choose to touch/tap a ROI and then subsequently give a confirmatory gesture (e.g., a thumbs up, show two fingers, etc.) in order to confirm the user's selection of the touch/tap gesture for operating the respective controllable device.
  • a user can also train the gesture recognition system 100 to associate particular gestures with particular commands by repeatedly demonstrating the gesture and performing the command for the controllable device in view of the detection device.
  • a user can associate a demonstrated gesture as the predetermined gesture in order to change a user-controlled operational state of one or more controllable devices using the systems and methods of this disclosure.
  • a similar approach can be taken using audio or other sensory inputs, or a combination of gestures, audio, and/or other sensory inputs, where a user can train gesture recognition system 100 to associate commands to controllable devices based at least in part on those gestures, audio, and/or other sensory inputs.
  • the method of this demonstrative association is described in U.S. Publication No. 2016/0075015 to Izhikevich et al., which is incorporated herein by reference in its entirety.
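  • The demonstrative-association method itself is described in the incorporated reference; purely as an illustrative stand-in, the following sketch averages repeated demonstrations into a trajectory template and matches later gestures against stored templates by mean point-wise distance. The resampling, storage, and threshold choices are hypothetical and not drawn from the incorporated disclosure.

```python
# Minimal sketch of associating a demonstrated gesture with a command: the user
# repeats the gesture, an average trajectory is stored as a template, and later
# gestures are matched against stored templates by mean point-wise distance.
import math

def mean_trajectory(demonstrations):
    """Average several demonstrations (lists of (x, y) points of equal length)."""
    n = len(demonstrations)
    return [(sum(p[i][0] for p in demonstrations) / n,
             sum(p[i][1] for p in demonstrations) / n)
            for i in range(len(demonstrations[0]))]

def trajectory_distance(a, b):
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b)) / len(a)

templates = {}  # command -> averaged trajectory

# Training: the user demonstrates "reach toward lamp" twice while issuing the command.
templates["toggle_lamp"] = mean_trajectory([[(0, 0), (1, 1), (2, 2)],
                                            [(0, 0), (1.2, 0.9), (2.1, 2.0)]])

def recognize(observed, threshold=0.5):
    best = min(templates, key=lambda c: trajectory_distance(observed, templates[c]))
    return best if trajectory_distance(observed, templates[best]) <= threshold else None

print(recognize([(0, 0), (1.1, 1.0), (2.0, 2.1)]))  # toggle_lamp
```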
  • a user has the ability to set up other rules such as timing rules for auto-configuration mode 401 and operation mode 411 .
  • a user may elect to have the auto-configuration mode operated on a nightly basis or may elect to only control operation of certain controllable devices within a specified time period.
  • the gesture recognition system 100 implementations described herein are simpler, faster, and much more intuitive than prior art methods.
  • the gesture recognition system 100 detects the gestures (e.g., predetermined gestures) by detecting a sudden change within a predefined time window (e.g., a window of 1 second, 5 seconds, 10 seconds, etc.). In some implementations, the gesture recognition system 100 detects the gestures by comparing a current sensory input characteristic and a reference sensory input characteristic. In implementations that utilize image processing, the reference sensory input characteristic may be configured based on a reference frame depicting an initial state of the environment (e.g., a state where no user is present in the frame).
  • the gesture recognition system 100 detects the gestures by performing, for example, a pixel-wise discrepancy analysis between images in a sequence in order to perform gesture recognition.
  • the gesture recognition system 100 can also optionally use other suitable motion detection techniques to detect the gestures including, for example, background subtraction, optical flow analysis, image segmentation, and so forth.
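  • A minimal sketch of pixel-wise discrepancy analysis against a reference frame (i.e., simple background subtraction) is shown below; frames are modeled as small grayscale numpy arrays, and the thresholds are hypothetical.

```python
# Minimal sketch: detect motion by pixel-wise discrepancy between a reference frame
# (background subtraction) and the current frame.
import numpy as np

CHANGE_THRESHOLD = 40      # per-pixel intensity difference treated as "changed"
MIN_CHANGED_PIXELS = 3     # ignore tiny changes (noise)

def motion_mask(reference, current):
    """Boolean mask of pixels that differ noticeably from the reference frame."""
    diff = np.abs(current.astype(np.int16) - reference.astype(np.int16))
    return diff > CHANGE_THRESHOLD

def motion_detected(reference, current):
    return int(motion_mask(reference, current).sum()) >= MIN_CHANGED_PIXELS

reference = np.zeros((6, 6), dtype=np.uint8)          # empty scene
current = reference.copy()
current[2:4, 2:4] = 200                               # a hand enters the frame
print(motion_detected(reference, current))            # True
print(motion_detected(reference, reference.copy()))   # False
```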
  • the gesture recognition system 100 detects gestures performed in a pathway between the detection device and the controllable device.
  • a pathway 704 a extends between the detection device and controllable device 112 a
  • a pathway 704 c extends between the detection device and controllable device 112 c
  • a pathway 704 d extends between the detection device and controllable device 112 d .
  • the detection device is located at a convergence point of pathways 704 a , 704 c , and 704 d.
  • a predetermined gesture may be made by a user within the 3D boundary area of the device in order to operate the controllable device. Accordingly, the gesture recognition system 100 detects a predetermined gesture performed at or near the location of the device within the boundary area of the device. In some implementations, the gesture recognition system 100 may detect a gesture performed in a pathway between the detection device and the 3D boundary area. In other implementations, the gesture recognition system may not detect a gesture performed outside of the 3D boundary area. For example, the gesture recognition system 100 may not detect the gesture performed in the pathway between the detection device and the 3D boundary area (which is outside of the 3D boundary area).
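  • A minimal sketch of testing whether a detected gesture location falls inside a device's 3D boundary area (modeled here as an axis-aligned box) is shown below; the box corners and gesture coordinates are hypothetical.

```python
# Minimal sketch: decide whether a detected gesture location lies inside a device's
# 3D boundary area (an axis-aligned box); gestures outside the box are ignored.
def inside_boundary(point, min_corner, max_corner):
    return all(lo <= p <= hi for p, lo, hi in zip(point, min_corner, max_corner))

boundary_802a = ((0.5, 1.5, 0.0), (1.5, 2.5, 1.0))   # box around device 112a

print(inside_boundary((1.0, 2.0, 0.5), *boundary_802a))  # True  -- gesture at the device
print(inside_boundary((3.0, 2.0, 0.5), *boundary_802a))  # False -- e.g., in the pathway only
```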
  • the gesture recognition system 100 operates the specified controllable device in response to the detected gesture (e.g., step 412 ).
  • the gesture recognition system 100 operates the device by sending a signal to the controllable device associated with the region within which the gesture was detected.
  • the system may transmit the signal to the device via a wired interface and/or wireless (e.g., using radio frequency (RF), infrared (IR), pressure (sound), light, and/or other wireless carrier transmissions).
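  • The following sketch illustrates, in schematic form only, dispatching a command to the controllable device associated with the region in which a gesture was detected; the transport functions are stubs standing in for whatever RF, IR, or network interface a real deployment would use.

```python
# Minimal sketch: look up the controllable device associated with the region in which
# a gesture was detected and dispatch the command over the interface registered for it.
def send_wifi(device_id, command):
    print(f"[wifi] {device_id} <- {command}")   # stub for a wireless network transport

def send_ir(device_id, command):
    print(f"[ir]   {device_id} <- {command}")   # stub for an IR blaster transport

REGION_TO_DEVICE = {"region_1": "112a", "region_2": "112c"}
DEVICE_TRANSPORT = {"112a": send_wifi, "112c": send_ir}

def operate(region, command):
    device_id = REGION_TO_DEVICE[region]
    DEVICE_TRANSPORT[device_id](device_id, command)

operate("region_1", "toggle")       # lamp via the wireless outlet
operate("region_2", "channel_up")   # television via IR
```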
  • a computing system that has components including a central processing unit (CPU), input/output (I/O) components, storage, and memory can be used to execute the detector system, or specific components and/or subcomponents of the system.
  • the executable code modules of the monitoring system can be stored in the memory of the computing system and/or on other types of non-transitory computer-readable storage media.
  • the detector system can be configured differently than described above.
  • the detector system can be implemented via code modules executed by one or more computers, computer processors, or machines configured to execute computer instructions.
  • the code modules can be stored on any type of non-transitory computer-readable medium or tangible computer storage device, such as hard drives, solid state memory, optical disc, and/or the like.
  • the systems and modules can also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and can take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames).
  • the processes and algorithms can be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process steps can be stored, persistently or otherwise, in any type of non-transitory computer storage.
  • the term “including” should be read to mean “including, without limitation,” “including but not limited to,” or the like; the term “comprising” as used herein is synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps; the term “having” should be interpreted as “having at least;” the term “such as” should be interpreted as “such as, without limitation;” the term “includes” should be interpreted as “includes but is not limited to;” the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof, and should be interpreted as “example, but without limitation;” adjectives such as “known,” “normal,” “standard,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass known, normal, or standard technologies that may be available or known now or at any time in the future.
  • a group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise.
  • a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should be read as “and/or” unless expressly stated otherwise.
  • the terms “about” or “approximate” and the like are synonymous and are used to indicate that the value modified by the term has an understood range associated with it, where the range can be ±20%, ±15%, ±10%, ±5%, or ±1%.
  • when a result (e.g., a measurement value) is described as being close to a value, “close” can mean, for example, that the result is within 80% of the value, within 90% of the value, within 95% of the value, or within 99% of the value.
  • “defined” or “determined” can include “predefined” or “predetermined” and/or otherwise determined values, conditions, thresholds, measurements, and the like.

Abstract

Gesture recognition systems are disclosed that are configured to provide users with simplified operation of various controllable devices such as, for example, in-home controllable devices. In one implementation, the gesture recognition system automatically configures itself in order to determine the respective physical locations and/or identities of controllable devices as well as an operating mode for controlling the controllable devices through predetermined gesturing. In some implementations, the gesture recognition systems are also configured to assign boundary areas associated with the controllable devices. Apparatus and methods associated with the gesture recognition systems are also disclosed.

Description

    COPYRIGHT
  • A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
  • BACKGROUND
  • Technological Field
  • The present disclosure relates to, inter alia, gesture detection systems and methods of utilizing the same. Specifically, in one aspect, the present disclosure relates to gesture detection systems and methods useful for operation of controllable devices.
  • Background
  • Simplified operation of controllable devices (e.g., lights, televisions, stereos, etc.) has become desirable for users. For example, lamps that have conductive surfaces that turn the lamp either on or off upon being touched by a user have become increasingly popular. However, this technology may only work for these specific devices, and often it is not immediately apparent to a user whether or not these specific devices operate as intended. Additionally, many home appliances often include a power switch that is located in an inconvenient location such as, for example, on a cord that goes behind other furniture, or behind/underneath the device to be controlled. These inconveniences become further pronounced for users who are partially immobilized, or have other disabilities that affect their mobility, dexterity, and/or eyesight.
  • Other conventional mechanisms for simplifying operation of controllable devices include use of, for example, remote control devices. In some cases, a device-specific remote control is used for each controllable device and hence, for users that have multiple devices to control, the user will often have to keep track of multiple remote controls. Additionally, these device-specific remote controls are often not intuitive to use, thereby requiring a user to manipulate a series of buttons on the remote control in order to operate these devices. Other known solutions include use of personal computing devices (e.g., laptops, smart phones, tablets, etc.) that often have software applications that utilize user-assisted programming in order to enable a user to operate these controllable devices. Such applications may be complicated and/or time-consuming to set up for operation, and the computing devices themselves are often bulky, thereby requiring a user to carry around the computing device. Further, in some examples, the user may need to “teach” the software application how to operate the controllable device(s).
  • SUMMARY
  • The foregoing needs are satisfied by the present disclosure, which provides, inter alia, apparatus and methods for the simplified operation and control of various appliances such as, for example, lamps, home multimedia equipment, etc.
  • In a first aspect, a non-transitory computer-readable storage medium is disclosed. In some implementations, the non-transitory computer-readable storage medium includes a plurality of instructions, the instructions being executable by a processing apparatus to operate a gesture detection system, the processor being in signal communication with one or more detection devices, the instructions configured to, when executed by the processing apparatus, cause the processing apparatus to: emit one or more signals from the detection system; discover a location of each of one or more controllable devices within a detectable area based at least in part on a response of each of the one or more controllable devices to the emitted signals; and detect via the one or more detection devices a predetermined gesture, wherein the detection of the predetermined gesture commands at least in part a change in a user-controlled operational state of the one or more controllable devices.
  • In some implementations, the instructions when executed by the processing apparatus further cause the processing apparatus to: assign a boundary area to each of the one or more controllable devices after discovery of the location.
  • In some implementations, the detection of the predetermined gesture further includes detection of the predetermined gesture within the assigned boundary area.
  • In some implementations, the one or more controllable devices includes a first wirelessly controllable device having a first unique identifier, and assignment of the boundary area to each of the one or more controllable devices further includes association of a first boundary area with the first unique identifier.
  • In some implementations, the predetermined gesture is at least one of a gesture directed towards the one or more controllable devices, a gesture in proximity to the one or more controllable devices, and a gesture interacting with the one or more controllable devices.
  • In some implementations, the one or more controllable devices includes a second wirelessly controllable device, the second wirelessly controllable device having a second unique identifier; and assignment of the boundary area to each of the one or more controllable devices further includes association of a second boundary area with the second unique identifier.
  • In some implementations, discovery of the location of each of the one or more controllable devices further includes wireless detection within a wireless network of the first unique identifier and the second unique identifier.
  • In some implementations, detection of the predetermined gesture further includes detection of a first predetermined gesture within the first boundary area for user-controlled operation of the first wirelessly controllable device and detection of a second predetermined gesture within the second boundary area for user-controlled operation of the second wirelessly controllable device.
  • In some implementations, in response to the first predetermined gesture, the user-controlled operational state of the second wirelessly controllable device is substantially maintained, and in response to the second predetermined gesture, the user-controlled operational state of the first wirelessly controllable device is substantially maintained.
  • In some implementations, the predetermined gesture includes a motion performed by at least one of a person, animal, and object.
  • In some implementations, the one or more detection devices includes a camera and the response includes a visual change detectable by the camera.
  • In some implementations, the one or more detection devices includes a microphone and the response includes an auditory change detectable by the microphone.
  • In some implementations, the instructions when executed by the processing apparatus, further cause the processing apparatus to detect via the one or more detection devices a noise created by a user to command at least in part the change in the user-controlled operational state of the one or more controllable devices.
  • In some implementations, the instructions when executed by the processing apparatus, further cause the processing apparatus to learn a demonstrated gesture as the predetermined gesture and associate at least in part the detection of the demonstrated gesture to the change in the user-controlled operational state of the one or more controllable devices.
  • In a second aspect, a method of operating a gesture detection system having a detection device in signal communication with a processor is disclosed. In one implementation, the method includes emitting one or more signals from the gesture detection system; discovering a location of at least one controllable device within a detectable area based at least in part on a response of the at least one controllable device to the emitted signals; detecting at least one predetermined gesture performed by a user to command operation of the at least one controllable device; and sending a signal to the at least one controllable device that is configured to regulate operation of the at least one controllable device.
  • In some implementations, the act of discovering the location of the at least one controllable device within the detectable area includes automatically operating the at least one controllable device and detecting a measurable change associated with the operation of the at least one controllable device.
  • In some implementations, the method further includes assigning a boundary area to the at least one controllable device and the act of detecting the at least one predetermined gesture includes detecting at least one predetermined gesture being performed within the boundary area.
  • In some implementations, the act of discovering the location of the at least one controllable device further includes detecting within a local area network a unique identifier associated with the at least one controllable device; and the act of assigning the boundary area to the at least one controllable device further includes associating the unique identifier with the boundary area.
  • In a third aspect, a system configured to detect gestures for operation of one or more controllable devices within a detectable area is disclosed. In one implementation, the system includes a detection device; a processing apparatus in communication with the detection device; and a non-transitory computer readable storage medium having a plurality of instructions stored thereon, the instructions when executed by the processing apparatus, cause the processing apparatus to: operate a controllable device so as to cause the controllable device to generate a measurable change in state; detect the measurable change in state of the controllable device; discover a location of the controllable device based on the detection of the measurable change; assign a boundary area to the controllable device based on the discovered location; detect a predetermined gesture performed by a user within the assigned boundary area; and in response to the detection of the predetermined gesture, send a signal to the controllable device to cause the controllable device to operate in accordance with the predetermined gesture.
  • In some implementations, the non-transitory computer readable storage medium can have instructions thereon that when executed by the processing apparatus, cause the processing apparatus to place the location of the controllable device within a predefined map.
  • In a fourth aspect of the disclosure, a controllable device for use with a detection apparatus is disclosed.
  • In a fifth aspect of the disclosure, methods of operating the aforementioned controllable device are disclosed.
  • In a sixth aspect of the disclosure, a detection apparatus for use with a controllable device is disclosed.
  • In a seventh aspect of the disclosure, methods of operating the aforementioned detection apparatus are disclosed.
  • In an eighth aspect of the disclosure, a computing device for use with a detection apparatus is disclosed.
  • These and other objects, features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of the limits of the disclosure. As used in the specification and in the claims, the singular form of “a”, “an”, and “the” include plural referents unless the context clearly dictates otherwise.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a plan view of an example gesture recognition system incorporated into a user premises in accordance with one implementation of the present disclosure.
  • FIG. 2 is a plan view of an example gesture recognition system incorporated into a user premises in accordance with another implementation of the present disclosure.
  • FIG. 3 is a functional block diagram of an example computing device for use with the gesture recognition systems shown in FIGS. 1 and 2.
  • FIG. 4 is a process flow diagram of an exemplary method for auto-configuring and operating the gesture recognition systems shown in FIGS. 1 and 2.
  • FIG. 5 is a perspective view that includes an example coordinate map and two-dimensional (2D) boundary areas in accordance with some implementations of the present disclosure.
  • FIG. 6 is a perspective view that includes an example coordinate map and three-dimensional (3D) boundary areas in accordance with another implementation of the present disclosure.
  • FIG. 7 is a perspective view that includes example 2D boundary areas and pathways between the boundary areas and a detection device in accordance with some implementations of the present disclosure.
  • All Figures disclosed herein are © Copyright 2015-2016 Brain Corporation. All rights reserved.
  • DETAILED DESCRIPTION
  • Implementations of the present technology will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the technology. Notably, the figures and examples below are not meant to limit the scope of the present disclosure to a single implementation, but other implementations are possible by way of interchange of, or combination with, some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts.
  • Where certain elements of these implementations can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the present technology will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the disclosure.
  • In the present specification, an implementation showing a singular component should not be considered limiting; rather, the disclosure is intended to encompass other implementations including a plurality of the same components, and vice-versa, unless explicitly stated otherwise herein. Further, the present disclosure encompasses present and future known equivalents to the components referred to herein by way of illustration.
  • As used herein, the terms computer, and computing device can include, but are not limited to, personal computers (PCs) and minicomputers, whether desktop, laptop, or otherwise, mainframe computers, workstations, servers, personal digital assistants (PDAs), handheld computers, embedded computers, programmable logic devices, personal communicators, tablet computers, portable navigation aids, J2ME equipped devices, cellular telephones, smart phones, personal integrated communication or entertainment devices, and/or any device capable of executing a set of instructions and processing an incoming data signal.
  • As used herein, the terms computer program or software can include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example, C/C++, C#, Fortran, COBOL, MATLAB™, PASCAL, Python, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.), Binary Runtime Environment (e.g., BREW), and the like.
  • As used herein, the terms connection, link, transmission channel, delay line, wireless can include a causal link between any two or more entities (whether physical or logical/virtual), which enables information exchange between the entities.
  • As used herein, the term memory can include any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, Mobile DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), memristor memory, and PSRAM.
  • As used herein, the terms processor, microprocessor and digital processor include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., field programmable gate arrays (FPGAs)), PLDs, reconfigurable computer fabrics (RCFs), array processors, secure microprocessors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
  • As used herein, the term network interface can include any signal, data, or software interface with a component, network, or process including, without limitation, those of the FireWire (e.g., FW400, FW800, FWS800T, FWS1600, FWS3200, etc.), USB (e.g., USB 1.X, USB 2.0, USB 3.0, USB Type-C, etc.), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (802.11), WiMAX (802.16), PAN (e.g., 802.15), cellular (e.g., 3G, LTE/LTE-A/TD-LTE, GSM, etc.) or IrDA families.
  • As used herein, the term Wi-Fi can include one or more of IEEE-Std. 802.11, variants of IEEE-Std. 802.11, standards related to IEEE-Std. 802.11 (e.g., 802.11 a/b/g/n/ac/ad/af/ah/ai/aj/aq/ax/ay), and/or other wireless standards.
  • As used herein, the term wireless can include any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, any radio transmission, any radio frequency field, and infrared (e.g., IrDA).
  • As used herein, the term wired can include any signal transferred over a wire, including any wire with a signal line. For example, and without limitation, such wires with signal lines can include cables such as Ethernet cables, coaxial cables, Universal Serial Bus (USB), firewire, data lines, wire, and/or any wired connection known in the art.
  • Any of the various descriptions in this disclosure (e.g., examples, implementations, systems, methods, etc.) can be instantiated in hardware and/or software. For example, and without limitation, functionality and structure described in this disclosure can represent physical hardware that is hardcoded to perform such functionality. As another example, software can be run that performs such functionality and/or has such structure.
  • Overview
  • The present disclosure provides for gesture recognition systems that are configured to provide users with simplified operation of various controllable devices such as, for example, in-home controllable devices (e.g., lights, televisions, stereos, electrically operated fireplaces, electrically operated tea kettles, etc.). The gesture recognition systems disclosed herein may also automatically configure themselves through a so-called auto-configuration mode in order to determine the respective physical locations and/or identities of controllable devices. As a result of this auto-configuration mode, this functionality is available “out of the box” without requiring the user to perform complicated setup protocols, or otherwise requiring extensive supervised training of their devices. Moreover, each controllable device within the detectable area of an imaging device can be readily tracked, even when controllable devices within a detectable area are subsequently moved. In implementations described subsequently herein, a gesture recognition system is configured to: (i) automatically discover locations associated with one or more controllable devices within a detectable area, (ii) assign a boundary area to each of the discovered controllable devices, (iii) detect a predetermined gesture performed by a user to command operation of one of the controllable devices, and (iv) send a signal to operate the controllable device corresponding to the boundary area (e.g., an area proximal to and/or at the controllable device) within which the detected gesture was performed. The disclosed systems and methods may be configured for one or more additional functions such as, for example, unsupervised learning, calculation of local coordinate maps, overlay of boundary areas onto a camera view, etc.
  • A system having such functionality: (i) reduces the need for device-specific remote control devices; (ii) enables intuitive operative commands for these controllable devices; (iii) discovers the locations/identities of controllable devices automatically; and (iv) generates multiple commands for simultaneous control of multiple controllable aspects of a given controllable device (e.g., for control of channel, volume, etc. for a television) as well as for operation of multiple controllable devices contemporaneously with one another. Methods of utilizing the aforementioned detection systems are also disclosed. Other advantages are readily discernable by one of ordinary skill given the contents of the present disclosure.
  • Detailed Description of Exemplary Implementations
  • Detailed descriptions of the various implementations and variants of the apparatus and methods of the disclosure are now provided. While primarily discussed in the context of residential applications, it will be appreciated that the described systems and methods contained herein can be used in other environments including, for example, industrial implementations for controlling operation of industrial equipment, office implementations for controlling operation of office equipment, etc. Myriad other exemplary implementations or uses for the technology described herein would be readily envisaged by persons having ordinary skill in the art, given the contents of the present disclosure.
  • Gesture Recognition Apparatus—
  • Turning now to FIG. 1, a block diagram of an exemplary gesture recognition system 100 incorporated into a user premises is shown. As depicted in FIG. 1, system 100 includes a detection device 102 a that is able to detect motion and/or light within its field of view (e.g., a detectable area 104 a). The detection device 102 a may include, for example, an imaging device such as a video camera for the detection of visible light. Other implementations of the detection device include additional technological means to extend the spectral range and intensity of the images it captures, including in low light conditions (e.g., so-called night vision technology). For example, in some exemplary implementations, the detection device will include an infrared (IR) camera instead of, or in addition to, a visible light camera. In some implementations, the aforementioned detection device implementations can include stereoscopic vision equipment in order to provide, for example, the ability for the detection device to perceive depth.
  • As illustrated in FIG. 1, the gesture recognition system 100 is disposed in an environment, such as a room in a home, having non-controllable objects 108 a and 108 b (e.g., a couch and a coffee table, respectively in the illustrated implementation) as well as controllable devices 112 a, 112 b, 112 c, and 112 d. Controllable devices 112 a-112 d include, in the illustrated implementation, a first light, a stereo, a television, and a second light, respectively. It should be recognized by a person having ordinary skill in the art that the illustrated implementation is just an example and any of controllable devices 112 a, 112 b, 112 c, and 112 d can be other controllable devices such as, without limitation, appliances (e.g., dishwashers, refrigerators, washing machines, drink machines, etc.), electronics (e.g., telephones, thermostats, televisions, home entertainment systems, video game consoles, projectors, stereos, etc.), fixtures (e.g., lights, hot tubs, etc.), heaters, air conditioners, vending machines, machinery, toys, and/or any other device that can perform actions under control (e.g., remote control, electronic switches, buttons, etc.). In the present illustrated example, controllable devices 112 a-112 c are within the detectable area 104 a of the detection device 102 a, while controllable device 112 d is located outside of the detectable area 104 a of the detection device 102 a. Accordingly, in this illustrated example, controllable devices 112 a-112 c may be operated with the gesture recognition system 100, while controllable device 112 d may not, as it is outside of the detectable area 104 a for the detection device 102 a.
  • One or more of the controllable devices is placed in signal communication (as opposed to visual communication) with system 100. For example, controllable device 112 a (e.g., the first light) can be placed into signal communication with a wireless enabled electrical outlet device 114 such as, for example, a BELKIN® WEMO® switch. Accordingly, operation of the controllable device 112 a can be controlled via wireless signals originating from the detection device 102 a and/or computing system 106 that are received by the wireless enabled electrical outlet device 114 thereby enabling the application (or withdrawal) of power to the controllable device 112 a.
  • Moreover, controllable devices 112 b and 112 c (e.g., a stereo and a television in the present exemplary example) are controlled via conventional IR signals originating from the detection device 102 a. Consumer IR devices can communicate via the transmission of digitally-coded pulses of IR radiation to control functions such as power, volume, tuning, etc. In some exemplary implementations, detection device 102 a is able to self-determine the make and model number of a given controllable device via its extant imaging capability as well as its coupling to, for example, the Internet (e.g., Internet 306, FIG. 3) via, for example, computing system 106. For example, detection device 102 a captures an image of controllable device 112 c, and subsequently forwards this image to computing system 106 via network interface 116 a. Computing system 106 includes a module that is able to recognize the manufacturer of the television (e.g., SAMSUNG®) by virtue of this captured image and is subsequently able to determine the proper IR coding in order to control the functions of this device 112 c. Computing system 106 subsequently transmits this IR coding information to the detection device 102 a so that the detection device is able to subsequently control the operation of the controllable device 112 c.
  • In some implementations, and upon initialization of system 100 (e.g., upon initial installation of the system 100, periodic updates of the system 100, etc.), the system will scan the wireless network in order to determine the presence of wireless enabled electrical outlet devices. Subsequent to this discovery, or contemporaneously therewith, detection device 102 a will “blast” detectable area 104 a with various wireless and/or IR signals while simultaneously monitoring the environment within its detectable area 104 a. Accordingly, for example, by detecting visual and/or audio changes in response to these “blast” signals, system 100 is able to identify controllable devices (e.g., controllable devices 112 a-112 c) as well as associate each of these controllable devices to various wireless/IR signals in order to effectuate their control.
  • Herein lies a salient advantage of the gesture recognition system 100 as described herein, namely the ability to auto-configure (e.g., auto-configuration mode) various control functions for various ones of the controllable devices 112 a-112 d. Although auto-configuration functionality is exemplary, it is readily appreciated that the makes/models of various ones of the controllable devices may be inputted manually into computing system 106 and/or detection device 102 a. Various features of the auto-configuration functionality, as well as user commanded operation, will be discussed in subsequent detail with regards to the discussion of FIG. 4 infra.
  • FIG. 2 illustrates a block diagram of another exemplary gesture recognition system 100 incorporated into a user premises. As depicted in FIG. 2, system 100 includes a first detection device 102 a and a second detection device 102 b that are configured to detect motion and/or light within their respective fields of view (e.g., detectable areas 104 a and 104 b). Detection devices 102 a and 102 b are each in signal and/or data communication with a computing system 106 via network interfaces 116 a and 116 b, respectively. Similar to the discussion with regards to FIG. 1, system 100 is disposed in an environment, such as a room in a home, having non-controllable objects 108 a, 108 b (e.g., a couch, coffee table, etc.) and controllable devices 112 a, 112 b, 112 c, and 112 d. Controllable devices 112 a-112 d may be those described herein in reference to system 100. In the present example, controllable devices 112 a-112 c are within detectable area 104 a. Furthermore, controllable devices 112 a, 112 c and 112 d are within detectable area 104 b. Thus, in this example of system 100, controllable devices 112 a-112 d may all be operated with gesture recognition as each of these controllable devices 112 a-112 d is within the detectable areas associated with the detection devices. System 100 of FIG. 2 is similar to, and may be operated in a similar manner with the system illustrated and discussed with regards to FIG. 1; however, in the case of the system illustrated in FIG. 2, inclusion of a second detection device 102 b allows the system to detect additional controllable devices (e.g., controllable device 112 d) that might otherwise lie outside of the detectable area.
  • The disclosed gesture recognition systems (e.g., gesture recognition system 100) may include additional detection devices of the same type (e.g., three cameras, four cameras, etc.) for increasing the coverage scope of detectable areas and/or for including additional detectable areas in other rooms, etc. It will be further appreciated that the disclosed gesture detection systems may include additional detection devices or substitute detection devices of different types, such as the aforementioned night vision cameras and stereoscopic vision cameras. It will be still further appreciated that the detection devices may be placed in a location which optimizes the detectable area depending on locations of the controllable devices within the environment (e.g., a room in a home). For example, and as illustrated in FIG. 2, controllable devices 112 a-112 c are within the detectable area 104 a of detection device 102 a, while controllable device 112 a, 112 c and 112 d are within the detectable area 104 b of detection device 102 b. Accordingly, every controllable device 112 a-112 d within the illustrated room is within one or more of the detectable areas for the detection devices.
  • Referring now to FIG. 3, a block diagram for a computing device 106 is shown and described in detail. The computing device includes, inter alia, one or more processors 300 in data communication with memory 302 as well as network interface(s) 304 for communications external to the computing device. For example, as illustrated, the network interface provides for data communications with the detection device 102 (which can be representative of, e.g., detection device 102 a, detection device 102 b, or any other detection device) via a wireless or wireline communications link 116. Additionally, network interface 304 optionally provides for communication with the Internet 306 via communications link 314, as well as optionally provides for communication with controllable device 112 via communications link 310. Additionally, it is noted that detection device 102 may also have a communications link 308 with the Internet 306, while controllable device 112 may optionally have a communications link 312 with the Internet as well.
  • The memory 302 of computing device 106 also includes a computer program with computer-readable instructions that when executed by the processor 300 carry out one or more methods operable to implement the gesture recognition systems described herein. Exemplary methods for operating the gesture recognition system are discussed herein, such as in reference to method 400 shown in FIG. 4 discussed infra.
  • Methods—
  • Referring now to FIG. 4, a process flow diagram is shown of an exemplary method 400 for auto-configuring and operating the gesture recognition system 100 shown in, for example, FIGS. 1 and 2. Specifically, as is illustrated in FIG. 4, a first portion 401 of the method 400 includes the discovery and mapping capability portions for the gesture recognition system 100 that will be discussed in subsequent detail herein via steps 402-410 (e.g., so-called “auto-configuration mode”). In one or more implementations, the system 100 performs the first portion 401 of the method 400 upon initial start-up/reboot of the system 100 as well as upon determining that one or more detection devices 102 a-102 b and/or one or more controllable devices 112 a-112 d has been moved. For example, the system 100 determines that one or more detection devices 102 a-102 b and/or one or more controllable devices 112 a-112 d has been moved by determining that otherwise stationary objects have changed locations (e.g., by analyzing two or more images captured by the detection device and/or taking the difference between those two or more images) and/or through receipt of a signal from a motion sensor (not shown). In one or more implementations, the motion sensor is disposed on or within either of detection devices 102 a-102 b for determining when/if one or more of detection devices 102 a-102 b has moved. In some implementations, the motion sensor is a discrete device that is separate and apart from the various devices illustrated in FIGS. 1 and 2.
  • After execution of the first portion 401 of method 400, in a second portion 411, method 400 includes operation of controllable devices at steps 412 and 414. In one or more exemplary implementations, gesture operation of controllable devices includes user commanded operation of the one or more controllable devices. It will be further appreciated that the first portion 401 of method 400 may, in some cases, be carried out without user interaction and/or when a user is out of the detectable area (e.g., detectable areas 104 a-104 b), while the second portion of method 400 can be carried out while a user is performing a predetermined gesture within the detectable area.
  • Auto-Configuration—
  • Referring again to FIG. 4, at step 402, the gesture recognition system 100 first scans a network for wireless controllable devices in order to determine their respective identifiers. In implementations, the wirelessly controllable devices are identified via a unique identifier (e.g., IP address, media access control (MAC) address, unique device identifier (UDID), serial number, etc.). Each wirelessly controllable device can have a unique identifier. For example, one or more of the devices 112 a-112 d may be associated with a unique MAC address. In some examples, discovery of wirelessly controllable devices can be carried out via standard transmission control protocol/internet protocol (TCP/IP) discovery processes through, for example, simple network management protocol (SNMP) network discovery processes. Additionally or alternatively, discovery of wirelessly controllable devices can be carried out via port scanning (e.g., Transmission Control Protocol (TCP) scanning, SYN scanning, User Datagram Protocol (UDP) scanning, etc.) and/or BLUETOOTH® discovery protocols. In exemplary implementations, the gesture recognition system 100 generates a device list specifying one or more devices, e.g., devices 112 a-112 d, that are available for configuration and/or operation. Additionally, although primarily envisioned as a wireless discovery process, it is appreciated that known wired discovery protocols may be readily substituted in implementations.
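  • As one concrete example of such a discovery scan, the sketch below issues an SSDP (UPnP) M-SEARCH broadcast and collects the responding addresses as identifiers; this assumes the controllable devices answer UPnP discovery, which is only one of the discovery mechanisms mentioned above (SNMP, port scanning, and Bluetooth discovery being alternatives), and the timeout values are arbitrary.

```python
# Minimal sketch: discover wirelessly controllable devices on the local network via
# an SSDP (UPnP) M-SEARCH broadcast and collect the responding addresses.
import socket

def ssdp_discover(timeout=2.0):
    msg = ("M-SEARCH * HTTP/1.1\r\n"
           "HOST: 239.255.255.250:1900\r\n"
           'MAN: "ssdp:discover"\r\n'
           "MX: 1\r\n"
           "ST: ssdp:all\r\n\r\n").encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.settimeout(timeout)
    sock.sendto(msg, ("239.255.255.250", 1900))
    found = set()
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            found.add(addr[0])          # use the responder's IP as its identifier
    except socket.timeout:
        pass
    finally:
        sock.close()
    return sorted(found)

if __name__ == "__main__":
    print(ssdp_discover())
```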
  • After scanning for controllable devices, at step 404, the gesture recognition system 100 operates the detection device in its visual discovery mode. In implementations, the detection device continuously sends (e.g., blasts) a number of signals (e.g., ON/OFF signals in accordance with known IR coding protocols) in order to cause the controllable devices to respond in a manner that can be detected visually and/or audibly (e.g., turning on a television). Step 404 can allow the gesture recognition system 100 to determine which controllable devices are in a detectable area and the location of those devices. In some implementations, the gesture recognition system 100 may operate each of the controllable devices consecutively (e.g., serially). For example, the system may first operate device 112 a, then operate device 112 b, next operate device 112 c, and finally operate device 112 d. In another example, the system may operate the controllable devices 112 a-112 d concurrently (e.g., in parallel) using a distinct operative pattern for each of controllable devices 112 a-112 d. For example, the gesture recognition system 100 may operate two or more detection devices substantially simultaneously such that, for example, device 112 a is operating according to a first ON/OFF pattern via a first detection device 102 a, while device 112 d is concurrently operating according to a second ON/OFF pattern via a second detection device 102 b. It will be appreciated that in other examples the gesture recognition system 100 can execute a combination of both consecutive and concurrent operation of the controllable devices. For example, at a first time instance, device 112 a is operating according to a first ON/OFF pattern via a first detection device 102 a, while device 112 d is concurrently operating according to a second ON/OFF pattern via a second detection device 102 b. Subsequently at a second time instance, device 112 b is operating according to a first ON/OFF pattern via first detection device 102 a, while device 112 c is concurrently operating according to a second ON/OFF pattern via a second detection device 102 b.
  • In response to the blast of signals at step 404, at step 406, the gesture recognition system 100 detects visual changes that occur. The visual changes can be identified by taking a difference between two or more frames of images taken by a detection device (e.g., detection device 102 a-102 b). This difference can be computed by subtracting the frames of images (e.g., subtracting the first frame from the second frame) and/or by comparing the frames of images. For example, the camera of the detection device 102 a can store a first frame of an image in memory (e.g., memory of computing device 106) while device 112 a is off. Then, device 112 a can receive an ON/OFF signal for operation from the detection device 102 a. The camera of detection device 102 a can then record a second frame of an image in memory that captures a change in light intensity proximal to the location of controllable device 112 a in response to this ON/OFF signal. Based at least in part on the difference between the first frame of the image, where the light appears off, and the second frame of the image, where the light appears on, the detection device 102 a can recognize that the lamp has turned from OFF to ON. The gesture recognition system 100 then records the location of controllable device 112 a in memory (e.g., memory of computing device 106) and/or on detection device 102 a.
  • As another non-limiting example, detection device 102 a can store a first frame of an image in memory (e.g., memory of computing device 106). In some cases, this first frame of the image can capture the display of a channel number on device 112 c, which can be a television. For example, device 112 c may typically display a channel number, or the channel may be on display because a user recently changed the channel prior to the taking of the first frame. Then, device 112 c can receive a change channel signal for operation from detection device 102 a. The camera of detection device 102 a can then record a second frame of an image in memory that captures the channel change (e.g., where the new, changed channel number is displayed on device 112 c) proximal to the location of controllable device 112 c in response to this change channel signal. Based at least in part on the difference between the first frame of the image, where the channel number is a first number, and the second frame of the image, where the channel number is a second number, detection device 102 a can recognize that the channel has changed, determine the location of controllable device 112 c, and record the location of controllable device 112 c in memory (e.g., memory of computing device 106). In some implementations, detection device 102 a can be configured to recognize shapes of numbers and identify the numbers displayed on device 112 c. Detection device 102 a can then recognize the channel number captured in the first frame and the second frame, and determine if the channel has been changed in response to the change channel signal. In some cases, where the channel number has skipped to a non-consecutive number, the detection device can recognize changes that have occurred due to other commands (e.g., page up/page down, or the entering of a channel number). The gesture recognition system 100 then recognizes the changes in response to the commands and records the location of controllable device 112 c in memory of computing device 106 and/or of detection device 102 a.
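  • A minimal sketch of the visual-change localization described above is shown below: the device is toggled, the before and after frames are differenced, and the centroid of the changed pixels is taken as the device's location in image coordinates. The frames are tiny synthetic arrays and the threshold is a hypothetical choice.

```python
# Minimal sketch: toggle a device ON, diff the "before" and "after" frames, and take
# the centroid of the changed pixels as the device's location in image coordinates.
import numpy as np

def locate_change(frame_before, frame_after, threshold=40):
    diff = np.abs(frame_after.astype(np.int16) - frame_before.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if len(xs) == 0:
        return None                       # no measurable change: device not in view
    return (float(xs.mean()), float(ys.mean()))   # (x, y) centroid of the change

before = np.zeros((8, 8), dtype=np.uint8)
after = before.copy()
after[5:7, 1:3] = 220                     # the lamp region brightens after the ON signal
print(locate_change(before, after))       # roughly (1.5, 5.5)
```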
  • In some implementations, the gesture recognition system 100 can include other or additional detection devices, such as a microphone, to enable the gesture recognition system 100 to detect an audible measurable change in response to the blast of signals at step 404. For example, device 112 b may be a stereo which receives an ON/OFF signal for operation from detection device 102. The microphone then captures a change in sound intensity proximal to the location of device 112 b via the processing of audio signals captured by the microphone. In some cases, a signal processor communicatively coupled to the microphone can include sound recognition, which can allow the signal processor to detect and/or identify predetermined sounds/noises. Advantageously, multiple detection devices can be utilized alternatively or in combination (e.g., contemporaneously). By way of illustrative example, visual imaging and/or audio detection can be used. In some implementations, where visual imaging and audio detection are used in combination, detection devices can have increased robustness for identifying the location of devices 112 a-112 d even in noisy (e.g., visually and/or audibly noisy) environments because there are multiple mediums (e.g., visual and audio) in which these detection devices can detect changes in devices 112 a-112 d. Having both vision and audio can also advantageously decrease false positives by looking for, e.g., devices 112 a-112 d that respond to two different mediums (e.g., audio and visual) instead of one. In some cases, instead of multiple detection devices, a single detection device 102 can detect a plurality of indicia, such as both audio and visual changes. For example, video cameras can include microphones so that the video cameras can sense and/or record both audio and video. An additional advantage is that locating devices 112 a-112 d through both audio and visual changes can enhance the ability of detection device 102 to detect those devices accurately by allowing verification of the locations of those devices. For example, once detection device 102 identifies the location of one of devices 112 a-112 d by either visual or audio, that location can then be verified and/or better approximated by the other of visual or audio.
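A minimal sketch of detecting an audible measurable change via a rise or fall in RMS sound level, assuming audio samples are available as NumPy arrays; the ratio threshold and synthetic audio are illustrative.

```python
# Sketch of detecting an audible change via RMS sound level; thresholds and the
# synthetic audio below are illustrative only.
import numpy as np


def rms(samples: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(samples.astype(np.float64)))))


def audible_change(before: np.ndarray, after: np.ndarray, ratio: float = 2.0) -> bool:
    """True if the sound level rose or fell by more than the given ratio."""
    level_before, level_after = rms(before) + 1e-9, rms(after) + 1e-9
    return max(level_after / level_before, level_before / level_after) > ratio


quiet = np.random.normal(0.0, 0.01, 16000)  # ~1 s of near-silence at 16 kHz
loud = np.random.normal(0.0, 0.2, 16000)    # the stereo responding to an ON signal
print(audible_change(quiet, loud))  # True
```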
  • In some implementations, a controllable device is located outside of the detectable area (e.g., the device may be in another room), and therefore automatic operation of the device may generate no measurable change in the environment. In this example, the system may ignore a device ID determined at step 402 and optionally later report to a user that the device was not visually detected. The gesture recognition system 100 detects the visual change using virtually any suitable technique. For example, the gesture recognition system 100 detects the measurable change by detecting a sudden change (e.g., within a prescribed time window) of a given sensory characteristic (e.g., a change in color, brightness, local contrast, sound level, etc.). As yet another example, the gesture recognition system 100 detects the measurable change by comparing a current sensory characteristic with a reference sensory characteristic. In the context of image processing, the reference sensory characteristic may be configured based on a reference frame depicting an initial state of the environment (e.g., a state where all controllable devices are off).
  • Subsequent to detecting visual changes in a scene at step 406, the gesture recognition system 100 defines regions of interest (ROI) at step 408. For example, the gesture recognition system 100 records in memory the location of the measurable change and associates the location with a specific controllable device ID to create a local coordinate map of the detectable area. In implementations where one or more of the detection devices capture information in two dimensions (e.g., 2D-enabled), the local coordinate map generated will consist of a 2D map. This local coordinate map can be expressed in any suitable format (e.g., a planar Cartesian coordinate map, a polar coordinate map, etc.). An example of a local 2D coordinate map for the detectable area of a gesture recognition system 100 is depicted in FIG. 5. As shown in FIG. 5, the local 2D coordinate map corresponds to the X-Y coordinates of images captured by the detection device. Based on the measurable change as detected in the images, the system associates the X1, Y1 region of the coordinate map with controllable device 112 d. Similarly, the system associates the X2, Y2 region of the coordinate map with controllable device 112 c, and associates the X3, Y3 region of the coordinate map with controllable device 112 a.
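A minimal sketch of such a local 2D coordinate map, associating each detected change location with a controllable-device ID; the coordinates below are illustrative placeholders for the X1-Y1, X2-Y2, and X3-Y3 regions.

```python
# Sketch of a local 2D coordinate map keyed by controllable-device ID.
coordinate_map = {}


def record_region_of_interest(device_id: str, x: int, y: int) -> None:
    """Associate the location of a detected change with a device ID."""
    coordinate_map[device_id] = (x, y)


record_region_of_interest("112d", 40, 120)   # X1, Y1
record_region_of_interest("112c", 220, 95)   # X2, Y2
record_region_of_interest("112a", 410, 140)  # X3, Y3
print(coordinate_map)
```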
  • In other implementations where one or more of the detection devices capture information in three dimensions (e.g., 3D-enabled) (e.g., stereoscopic cameras, etc.), the local coordinate map consists of a 3D map (e.g., a 3D Cartesian coordinate map, a cylindrical coordinate map, a spherical coordinate map, etc.). For example, the 3D map may correspond to the X-Y coordinates of images captured by the detection device and a Z coordinate indicating, for example, a distance of the controllable device from the detection device. An example of a local 3D coordinate map for the detectable area of a gesture recognition system 100 is depicted in FIG. 6.
  • In some implementations, multiple detection devices can be used to capture a 3D map. A first detection device can take a first video at a first angle. This first video can be 2D having Xa-Ya dimensions (which can be mapped with Xa-Ya coordinates). At substantially the same time, the second detection device can take a second video at a second angle. This second video can be 2D having Xb-Yb dimensions (which can be mapped with Xb-Yb coordinates). A computing device can receive the first video and second video. In some cases, the computing device can create a 3D map based at least in part on Xa-Ya dimensions from the first video and a Za dimension calculated at least in part on the Xb-Yb dimensions of the second video.
  • In some implementations, the first detection device can be substantially orthogonal to the second detection device and/or lie in substantially the same horizontal plane. In such cases, the field of view (and consequently ROIs) of the first detection device and second detection device can have substantial overlap (e.g., 30% or more overlap). In those cases, the 3D map can be generated in some implementations by taking the Xa-Ya dimensions of the first video and basing the Za dimension of the 3D map at least in part on the Xb (or Yb) dimension of the second video. However, in some cases, the first detection device, the second detection device, and/or any other detection device may not be substantially orthogonal to each other and/or may not lie in substantially the same horizontal plane. In some cases, the first detection device and the second detection device may have only a small area of overlap between their fields of view. In these cases, the computing device can construct the three-dimensional map using three-dimensional reconstruction from line projections in regions of overlap based at least in part on the videos taken from the detection devices.
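Under the orthogonal, same-plane assumption described above, a third coordinate can be sketched by taking depth from the second camera's horizontal coordinate; a real system would require calibration and full 3D reconstruction in the overlap region, so the following is only an illustrative simplification.

```python
# Illustrative simplification: with two roughly orthogonal cameras in the same
# horizontal plane, camera A supplies (Xa, Ya) and camera B's horizontal
# coordinate Xb serves as the depth coordinate Za of the same point.
def fuse_views(xa: float, ya: float, xb: float) -> tuple:
    """Combine a point seen in two orthogonal 2D views into a 3D coordinate."""
    za = xb  # under the orthogonal-camera assumption, depth ~ camera B's X
    return (xa, ya, za)


print(fuse_views(410.0, 140.0, 95.0))  # (Xa, Ya, Za) for one region of interest
```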
  • Based at least on the local coordinate map and the detected measurable change, the system can assign a boundary area to each of the discovered controllable devices at step 410. For example, the boundary area may be an area with a location and dimensions that are comparable to the location and dimensions of the controllable device. For example, if the controllable device is a lamp, the assigned boundary area may consist of an area with roughly the same size/shape as the light shade of the lamp. The boundary area may additionally include a region adjacent to the device, such as the dimensions of the device plus an additional zone that is a pre-defined distance (e.g., 1 inch, 2 inches, or fractions thereof, etc.) beyond the dimensions of the controllable device. In examples where one or more of the detection devices are 2D-enabled and a 2D map is generated, the boundary areas will be 2D as well. Alternatively, in examples where one or more of the detection devices are 3D-enabled, the boundary areas will be 3D (or alternatively 2D). In both examples, the boundary areas may be overlaid on a detection device view for the detectable area.
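A minimal sketch of assigning a padded boundary area around each mapped device footprint, assuming footprints are expressed as (x, y, width, height) in image pixels; the padding value and footprints are illustrative.

```python
# Sketch of assigning a padded boundary area around each mapped device footprint.
def assign_boundary_area(footprint, padding=10):
    """Expand an (x, y, width, height) footprint by a fixed padding on all sides."""
    x, y, w, h = footprint
    return (x - padding, y - padding, w + 2 * padding, h + 2 * padding)


boundaries = {
    "112a": assign_boundary_area((400, 130, 30, 60)),   # e.g., a lamp
    "112c": assign_boundary_area((200, 80, 120, 70)),   # e.g., a television
}
print(boundaries)
```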
  • Additionally, in certain implementations, the gesture recognition system 100 will assign two or more ROIs for a given/single controllable device. For example, in instances in which the detection device detects two separate visual changes to a scene (e.g., step 406), the recognition system can assign two different spatially distinct boundary areas for a given controllable device. Such a scenario could take place in a room where, for example, there is a mirror which causes two representations of a given controllable device (e.g., the actual controllable device and the controllable device's reflection within the mirror). In such an instance, the gesture recognition system 100 will, in an exemplary implementation, assign boundary areas to both areas of interest.
  • An example of a 2D boundary overlay 700 from the view point of a detection device is illustrated in FIG. 5. In this example, controllable devices 112 a, 112 c, and 112 d are in the detectable area of a detection device. After discovering and mapping the device locations (e.g., steps 402-408 of method 400), the gesture recognition system 100 assigns a boundary area 702 a to device 112 a, a boundary area 702 c to device 112 c, and a boundary area 702 d to device 112 d. Each of the boundary areas encloses the device and includes an additional zone a pre-defined distance beyond the outline of the controllable device. In the illustrated example, the boundary areas are shown as squares; however, in other examples the boundary areas can trace the shape of the devices and/or be of other shapes (e.g., circular, triangular, etc.). Moreover, while the boundary areas are shown as being symmetrically placed about devices 112 a, 112 c, and 112 d, it is recognized that in alternative implementations, these boundary areas may be placed asymmetrically about these devices. Each of these boundary areas will be associated with an identifier for each of the respective controllable devices.
  • An example of a 3D boundary overlay 800 from the view point of a detection device is illustrated in FIG. 6. In this example, controllable devices 112 a, 112 c, and 112 d are in the detectable area of a detection device. After discovering and mapping the device locations (steps 402-408 of method 400), the gesture recognition system 100 assigns a boundary area 802 a to device 112 a, a boundary area 802 c to device 112 c, and a boundary area 802 d to device 112 d. Each of the boundary areas encloses the device and includes an additional zone that is a pre-defined distance beyond the shape of the device (whether symmetrically or asymmetrically). In the present example, the boundary areas are shown as cubes; however, in other examples the boundary areas can trace the shape of the devices and/or be of other shapes (e.g., spherical, pyramidal, etc.). Each of these boundary areas can be associated with an identifier for each of the respective controllable devices (e.g., devices 112 a-112 d).
  • User Commanded Operation—
  • Returning again to FIG. 4, after execution of the auto-configuration mode 401 of method 400, in a second portion 411, method 400 includes operation of controllable devices at steps 412 and 414. In other words, the gesture recognition system 100 is configured to receive commands from a user for gesture operation of these discovered controllable devices (e.g., those discovered devices and assigned ROIs defined during auto-configuration 401).
  • As mentioned, audio or other sensory inputs can be used in the alternative or in combination with gestures. For example, audio can be detected by a detection device such as a microphone. As with gestures, gesture recognition system 100 can associate predetermined audio with operations. As an example, a user may make a predetermined noise, such as, without limitation: saying a word, making a sound/noise, using a device that speaks words or makes sounds/noises, hitting/tapping an object, or any other way of making a noise. This predetermined noise can be associated at least in part with any of the operations and/or controllable actions described in this disclosure, such as turning a channel, turning on/off a device, etc. By illustrative example, the predetermined noise itself can activate the operation initiated by gesture recognition system 100. As another example, the noise at a particular location, as determined at least in part by a visual system of gesture recognition system 100 and/or triangulation of the noise by the gesture recognition system 100, can activate the operation initiated by gesture recognition system 100. As another example, the noise in combination with a gesture can activate the operation initiated by gesture recognition system 100. Advantageously, where both noise and visuals are used to initiate an operation, false positives can be reduced. By way of illustration, in some of the examples that will be described below, a gesture recognition system 100 may associate a user reaching towards a device as a gesture to turn on/off that device. However, a user may on occasion reach towards a device without actually intending to turn on/off that device. For example, a drink may be sitting next to a lamp. Gesture recognition system 100 may associate reaching towards the lamp as a gesture that commands gesture recognition system 100 to turn on/off the lamp. However, a user might actually be reaching for a drink sitting next to a lamp instead of reaching towards the lamp. If gesture recognition system 100 associates both a noise (e.g., the user saying “lamp” or making some other noise) and a gesture (e.g., reaching towards the lamp or some other gesture) with an operation, merely reaching towards the lamp would not have produced the false positive of the lamp turning on/off. Similarly, having both audio and visual gestures, whatever they may be, used to command operations can reduce false positives.
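The false-positive argument above amounts to requiring agreement across two mediums before acting. A minimal sketch, with hypothetical gesture and noise labels:

```python
# Sketch of requiring both a predetermined gesture and a predetermined noise
# before an operation fires, reducing false positives from either medium alone.
def should_toggle_lamp(detected_gesture: str, detected_noise: str) -> bool:
    gesture_ok = detected_gesture == "reach_toward_lamp"
    noise_ok = detected_noise == "lamp"
    return gesture_ok and noise_ok  # both mediums must agree


print(should_toggle_lamp("reach_toward_lamp", "lamp"))  # True: toggle the lamp
print(should_toggle_lamp("reach_toward_lamp", ""))      # False: likely reaching for a drink
```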
  • At step 412, the system detects a predetermined gesture and/or an audible noise performed by a user within a boundary area to command operation of a controllable device. In one or more implementations, the gesture recognition system 100 is configured to disregard and/or ignore user movements (e.g., gestures) that are made outside of the boundary areas assigned at step 410 and/or gestures that do not match a predetermined gesture. Gestures performed within these boundary areas can therefore be referred to as region-specific gestures.
  • In some implementations, however, a gesture recognition system 100 can detect movements/gestures and/or audible noises, and perform operations without direct regard to the boundary areas assigned at step 410, or where no boundary areas are assigned at step 410. For example, the gesture recognition system 100 may detect gestures in proximity, moving towards, and/or interacting with one or more controllable devices 112 a-112 d. By way of illustration, the gesture recognition system 100, through systems and methods described in this disclosure, may detect a user moving towards or gesturing towards (e.g., reaching his/her arm towards) devices 112 a-112 d. For example, a user may be across a room and interact by gesturing (e.g., pointing) to one of devices 112 a-112 d, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a-112 d and/or perform any other operation associated at least in part with the gesture. A user may be in close proximity to one of devices 112 a-112 d (e.g., just outside the boundary area assigned at step 410 or within a predefined distance (e.g., approximately 1, 2, or 3 feet, or any predetermined absolute (e.g., in U.S. or standard units) or relative distance (e.g., pixels, lengths, etc.)) without regard to any boundary area) and interact by gesturing (e.g., pointing) at that device, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a-112 d and/or perform any other operation associated at least in part with the gesture. A user may walk a distance (e.g., approximately 1, 2, or 3 feet, or any predetermined absolute (e.g., in U.S. or standard units) or relative distance (e.g., pixels, lengths, etc.)) towards one of devices 112 a-112 d while looking at that device, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a-112 d and/or perform any other operation associated at least in part with the gesture. As another example, gesture recognition system 100 can detect sounds in the alternative or in combination with any of the aforementioned example gestures. For example, a user can say “lamp” or make some other predefined noise (e.g., sound, words, etc.) while gesturing (e.g., pointing) at device 112 a, which gesture recognition system 100 can recognize as a command to turn on/off device 112 a and/or perform any other operation associated at least in part with the gesture and/or noise. As another example, a user can just say “lamp on” or make some other predefined noise (e.g., sounds, words, etc.), which gesture recognition system 100 can recognize as a command to turn on/off device 112 a and/or perform any other operation associated at least in part with the noise. As another example, a user may touch or otherwise directly interact with one of devices 112 a-112 d, which gesture recognition system 100 can recognize as a command to turn on/off that one of devices 112 a-112 d and/or perform any other operation associated at least in part with the gesture. As another example, a user may make a predetermined noise while positioned in a predetermined location (e.g., as identified visually through a detection device). Gesture recognition system 100 can recognize the combination of the noise and location as a command to turn on/off one or more of devices 112 a-112 d and/or perform any other operation associated at least in part with the noise and location.
Any of the aforementioned examples can also be performed by any person, animal (e.g., pet or other animal), or object (e.g., vehicle or toy).
  • In some implementations, gestures (e.g., region-specific gestures or gestures without direct regard to any boundary area) may be a pre-defined and/or predetermined movement that the gesture recognition system 100 is programmed to recognize. For example, the movement can include motion (e.g., driving, walking, turning, etc.) by a person, animal (e.g., pet or other animal), object (e.g., vehicle or toy), etc. As another example, one such predefined gesture may include tapping/touching a portion of a lamp (e.g., the bottom portion, top portion, left portion, right portion, or any portion as desired), or tapping/touching the lampshade in order to turn on/off the lamp. A portion of the lamp can include, but is not limited to, a half, third, quarter, tenth, or other fraction of the lamp body. In some implementations, a user may have to tap/touch within a given ROI a predefined number of times (e.g., two) so as to avoid, for example, false positives. In some implementations, the user may touch the ROI once, wait for the device to briefly respond (e.g., a lamp may briefly blink on/off) and then touch the ROI once again to confirm the operation. In some implementations, a user may place his/her hand in the predefined ROI and hold it in place (e.g., for 1, 2, 3, 4, or any predefined number of seconds) until the device responds. Such implementations can reduce false positives resultant from transient movements (e.g., brief accidental touching or accidental crossing of the imaginary line between the detection device and the lamp). In some implementations, a given controllable device can be assigned multiple ROIs. For example, in the context of the exemplary lamp, touching/tapping the top portion of the lamp may result in the lamp being turned ON, while touching/tapping the bottom portion of the lamp may result in the lamp being turned OFF. In some implementations, a user may make a visible “0” using that user's thumb and pointing finger when touching/approaching a light that the user wishes to turn ON.
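Two of the false-positive guards described above (a required tap count and a required hold duration) can be sketched as simple timing checks; the thresholds are illustrative, not values defined by the disclosure.

```python
# Sketch of two timing-based false-positive guards: a required tap count within
# a confirmation window, and a required hold duration inside the ROI.
def confirmed_by_taps(tap_times, required_taps=2, window=3.0) -> bool:
    """True if enough taps occurred within a short confirmation window."""
    recent = [t for t in tap_times if tap_times[-1] - t <= window]
    return len(recent) >= required_taps


def confirmed_by_hold(entered_at: float, left_at: float, hold_seconds=2.0) -> bool:
    """True if the hand stayed inside the ROI for at least the hold duration."""
    return (left_at - entered_at) >= hold_seconds


print(confirmed_by_taps([10.0, 11.2]))  # True: second tap within 3 s of the first
print(confirmed_by_taps([10.0, 20.5]))  # False: second tap arrived too late
print(confirmed_by_hold(5.0, 7.5))      # True: hand held in place for 2.5 s
```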
  • In addition to these generally static gestures, the gesture recognition system 100 may instead rely on dynamic gestures in order to control a given controllable device. For example, a user wishing to control the volume on his/her television may swipe upwards within the assigned boundary area in order to raise the volume, or swipe downwards in order to lower the volume. Similarly, swiping upwards/downwards may operate the dimmer functionality of a light (e.g., swiping upwards brightens the light, while swiping downwards dims the light, etc.). Moreover, a user may choose to swipe either right-to-left/left-to-right in order to, for example, change the channel or change the color used for a given light source. Additionally, certain controllable devices can be operated by passing, for example, a user's hand along a controllable device's outline. For example, the gesture recognition system 100 can recognize when a user passes his/her hand along a predefined percentage (e.g., 50%) of the device's outline in order to prevent, inter alia, false positives, accidental gesture detections, etc. In some implementations, a user has the ability to customize the operation of the gesture recognition system 100 by selecting how to control operation (e.g., define gestures) for their controllable devices.
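A minimal sketch of mapping a detected swipe direction within a device's boundary area to a device command; the device types and command names are hypothetical placeholders.

```python
# Sketch of mapping a detected swipe direction within a device's boundary area
# to a command for that device.
SWIPE_COMMANDS = {
    ("television", "up"): "volume_up",
    ("television", "down"): "volume_down",
    ("television", "left"): "channel_down",
    ("television", "right"): "channel_up",
    ("lamp", "up"): "brighten",
    ("lamp", "down"): "dim",
}


def command_for_swipe(device_type: str, direction: str):
    """Return the command for a swipe, or None if the swipe is not recognized."""
    return SWIPE_COMMANDS.get((device_type, direction))


print(command_for_swipe("television", "up"))  # volume_up
print(command_for_swipe("lamp", "down"))      # dim
```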
  • In addition to those specific examples given above (e.g., static and dynamic gestures), a user may also be given the option to determine which predetermined gestures and/or audio the user wishes to use. For example, a user could choose to touch/tap a ROI and then subsequently give a confirmatory gesture (e.g., a thumbs up, showing two fingers, etc.) in order to confirm the user's selection of the touch/tap gesture for operating the respective controllable device. A user can also train the gesture recognition system 100 to associate particular gestures with particular commands by repeatedly demonstrating the gesture and performing the command for the controllable device in view of the detection device. In these ways, a user can associate a demonstrated gesture as the predetermined gesture in order to change a user-controlled operational state of one or more controllable devices using the systems and methods of this disclosure. A similar approach can be used with audio or other sensory inputs, or with a combination of gestures, audio, and/or other sensory inputs, where a user can train gesture recognition system 100 to associate commands to controllable devices based at least in part on those gestures, audio, and/or other sensory inputs. The method of this demonstrative association is described in U.S. Publication No. 2016/0075015 to Izhikevich et al., which is incorporated herein by reference in its entirety.
  • Additionally, a user has the ability to set up other rules such as timing rules for auto-configuration mode 401 and operation mode 411. For example, a user may elect to have the auto-configuration mode operated on a nightly basis or may elect to only control operation of certain controllable devices within a specified time period. Advantageously, the gesture recognition system 100 implementations described herein are simpler, faster, and much more intuitive than prior art methods.
  • In some implementations, the gesture recognition system 100 detects the gestures (e.g., predetermined gestures) by detecting a sudden change within a predefined time window (e.g., a window of 1 second, 5 seconds, 10 seconds, etc.). In some implementations, the gesture recognition system 100 detects the gestures by comparing a current sensory input characteristic and a reference sensory input characteristic. In implementations that utilize image processing, the reference sensory input characteristic may be configured based on a reference frame depicting an initial state of the environment (e.g., a state where no user is present in the frame). In implementations that utilize a video camera detection device, the gesture recognition system 100 detects the gestures by performing, for example, a pixel-wise discrepancy analysis between images in a sequence in order to perform gesture recognition. The gesture recognition system 100 can also optionally use other suitable motion detection techniques to detect the gestures including, for example, background subtraction, optical flow analysis, image segmentation, and so forth.
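A minimal sketch of background subtraction restricted to an assigned boundary area, assuming grayscale frames as NumPy arrays; the threshold and changed-pixel fraction are illustrative parameters, not values defined by the disclosure.

```python
# Sketch of background subtraction restricted to an assigned boundary area.
import numpy as np


def gesture_in_boundary(reference: np.ndarray, current: np.ndarray,
                        boundary, threshold=30, min_fraction=0.05) -> bool:
    """True if enough pixels inside the boundary differ from the reference frame."""
    x, y, w, h = boundary
    ref_roi = reference[y:y + h, x:x + w].astype(np.int16)
    cur_roi = current[y:y + h, x:x + w].astype(np.int16)
    changed = np.abs(cur_roi - ref_roi) > threshold
    return changed.mean() > min_fraction


reference = np.zeros((240, 320), dtype=np.uint8)  # reference frame: empty scene
current = reference.copy()
current[100:140, 50:90] = 200                     # a hand enters the region
print(gesture_in_boundary(reference, current, (40, 90, 80, 60)))  # True
```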
  • In some implementations, the gesture recognition system 100 detects gestures performed in a pathway between the detection device and the controllable device. As shown in FIG. 7, a pathway 704 a extends between the detection device and controllable device 112 a, a pathway 704 c extends between the detection device and controllable device 112 c, and a pathway 704 d extends between the detection device and controllable device 112 d. Although not specifically shown, the detection device is located at a convergence point of pathways 704 a, 704 c, and 704 d.
  • In the 3D boundary area overlay depicted in FIG. 6, a predetermined gesture may be made by a user within the 3D boundary area of the device in order to operate the controllable device. Accordingly, the gesture recognition system 100 detects a predetermined gesture performed at or near the location of the device within the boundary area of the device. In some implementations, the gesture recognition system 100 may detect a gesture performed in a pathway between the detection device and the 3D boundary area. In other implementations, the gesture recognition system may not detect a gesture performed outside of the 3D boundary area. For example, the gesture recognition system 100 may not detect the gesture performed in the pathway between the detection device and the 3D boundary area (which is outside of the 3D boundary area).
  • Returning again to FIG. 4, at step 414, the gesture recognition system 100 operates the specified controllable device in response to the detected gesture (e.g., step 412). The gesture recognition system 100 operates the device by sending a signal to the controllable device associated with the region within which the gesture was detected. The system may transmit the signal to the device via a wired interface and/or wireless (e.g., using radio frequency (RF), infrared (IR), pressure (sound), light, and/or other wireless carrier transmissions).
  • It will be recognized that while certain aspects of the disclosure are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed implementations, or the order of performance of two or more steps permuted. All such variations are considered to be encompassed within the disclosure disclosed and claimed herein.
  • While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various implementations, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. The foregoing description is of the best mode presently contemplated of carrying out the disclosure. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.
  • In some implementations, a computing system that has components including a central processing unit (CPU), input/output (I/O) components, storage, and memory can be used to execute the detector system, or specific components and/or subcomponents of the system. The executable code modules of the monitoring system can be stored in the memory of the computing system and/or on other types of non-transitory computer-readable storage media. In some implementations, the detector system can be configured differently than described above.
  • Each of the processes, methods, and algorithms described in the preceding sections can be embodied in, and fully or partially automated by, code modules executed by one or more computers, computer processors, or machines configured to execute computer instructions. The code modules can be stored on any type of non-transitory computer-readable medium or tangible computer storage device, such as hard drives, solid state memory, optical disc, and/or the like. The systems and modules can also be transmitted as generated data signals (e.g., as part of a carrier wave or other analog or digital propagated signal) on a variety of computer-readable transmission mediums, including wireless-based and wired/cable-based mediums, and can take a variety of forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The processes and algorithms can be implemented partially or wholly in application-specific circuitry. The results of the disclosed processes and process steps can be stored, persistently or otherwise, in any type of non-transitory computer-readable storage.
  • The various features and processes described above can be used independently of one another, or can be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks can be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described tasks or events can be performed in an order other than that specifically disclosed, or multiple tasks or events can be combined in a single block or state. The example tasks or events can be performed in serial, in parallel, or in some other manner. Tasks or events can be added to or removed from the disclosed example implementations. The example systems and components described herein can be configured differently than described. For example, elements can be added to, removed from, or rearranged compared to the disclosed example implementations.
  • Unless otherwise defined, all terms (including technical and scientific terms) may be given their ordinary and customary meaning to a person of ordinary skill in the art, and are not to be limited to a special or customized meaning unless expressly so defined herein. It should be noted that the use of particular terminology when describing certain features or aspects of the disclosure should not be taken to imply that the terminology is being re-defined herein to be restricted to include any specific characteristics of the features or aspects of the disclosure with which that terminology is associated. As applicable, terms and phrases used in this application, and variations thereof, especially in the appended claims, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing, the term “including” should be read to mean “including, without limitation,” “including but not limited to,” or the like; the term “comprising” as used herein is synonymous with “including,” “containing,” or “characterized by,” and is inclusive or open-ended and does not exclude additional, unrecited elements or method steps; the term “having” should be interpreted as “having at least;” the term “such as” should be interpreted as “such as, without limitation;” the term ‘includes” should be interpreted as “includes but is not limited to;” the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof, and should be interpreted as “example, but without limitation;” adjectives such as “known,” “normal,” “standard,” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass known, normal, or standard technologies that may be available or known now or at any time in the future; and use of terms like “preferably,” “preferred,” “desired,” or “desirable,” and words of similar meaning should not be understood as implying that certain features are critical, essential, or even important to the structure or function of the present disclosure, but instead as merely intended to highlight alternative or additional features that may or may not be utilized in a particular embodiment. Likewise, a group of items linked with the conjunction “and” should not be read as requiring that each and every one of those items be present in the grouping, but rather should be read as “and/or” unless expressly stated otherwise. Similarly, a group of items linked with the conjunction “or” should not be read as requiring mutual exclusivity among that group, but rather should be read as “and/or” unless expressly stated otherwise. The terms “about” or “approximate” and the like are synonymous and are used to indicate that the value modified by the term has an understood range associated with it, where the range can be ±20%, ±15%, ±10%, ±5%, or ±1%. The term “substantially” is used to indicate that a result (e.g., measurement value) is close to a targeted value, where close can mean, for example, the result is within 80% of the value, within 90% of the value, within 95% of the value, or within 99% of the value. Also, as used herein “defined” or “determined” can include “predefined” or “predetermined” and/or otherwise determined values, conditions, thresholds, measurements, and the like.
  • It will be further appreciated that while certain steps and aspects of the various methods and apparatus described herein may be performed by a human being, the disclosed aspects and individual methods and apparatus are generally computerized/computer-implemented. Computerized apparatus and methods are necessary to fully implement these aspects for any number of reasons including, without limitation, commercial viability, practicality, and even feasibility (i.e., certain steps/processes simply cannot be performed by a human being in any viable fashion).

Claims (19)

1-20. (canceled)
21. A non-transitory computer readable storage medium having computer readable instructions stored thereon, that when executed by a processor, configure the processor to:
operate in an auto-configuration mode to:
emit one or more signals from a detection device;
detect at least one region of interest to localize at least one controllable device within a detection region of the detection device, the at least one region of interest corresponding to a spatial region comprising a detected temporal change in response to the one or more signals emitted from the detection device, the temporal change comprising at least one of an auditory or visual change of the at least one controllable devices in response to the one or more signals; and
operate in an operative mode, different from the auto-configuration mode, to:
detect a predetermined gesture from a user, the predetermined gesture being associated with one or more of the at least one controllable devices, the predetermined gesture configures the one or more of the at least one controllable devices to change an operative state of the one or more of the at least one controllable devices.
22. The non-transitory computer readable storage medium of claim 21, wherein the processor is further configured to execute the computer readable instructions to:
assign a boundary area around each of the at least one region of interest and correlate the boundary area with a respective device identifier, the respective device identifier corresponding to an associated controllable device within the boundary area.
23. The non-transitory computer readable storage medium of claim 22, wherein,
the predetermined gesture is detected proximal to or within the boundary area of the associated controllable device, the predetermined gesture includes a gesture directed towards or otherwise indicating the associated controllable device within the boundary area to change the operative state of the associated controllable device.
24. The non-transitory computer readable storage medium of claim 22, wherein the processor is further configured to execute the computer readable instructions to:
determine the respective device identifier of each associated controllable device based on wireless detection within a wireless network of the respective device identifier.
25. The non-transitory computer readable storage medium of claim 21, wherein,
the predetermined gesture comprises at least one of an auditory noise or physical movement performed by the user, the predetermined gesture being correlated, in accordance with a training process, with a change in operative state of the one or more of the at least one controllable device in response to detection of the predetermined gesture.
26. The non-transitory computer readable storage medium of claim 25, wherein the training process further comprises the processor configured to execute the computer readable instructions to:
detect, within the detection region, the user performing a gesture;
detect a change in the operative state of the at least one controllable device effectuated by the user within the at least one region of interest, the change comprises at least one of an auditory or visual temporal change; and
correlate the change in the operative state with the gesture and store in memory the correlation as the predetermined gesture.
27. A gesture detection system, comprising:
a non-transitory computer readable storage medium comprising a plurality of instructions embodied thereon; and
a processor configured to execute the instructions to:
operate in an auto-configuration mode to:
emit one or more signals from a detection device;
detect at least one region of interest to localize at least one controllable device within a detection region of the detection device, the at least one region of interest corresponding to a spatial region comprising a detected temporal change in response to the one or more signals emitted from the detection device, the temporal change comprising at least one of an auditory or visual change of the at least one controllable devices in response to the one or more signals; and
operate in an operative mode, different from the auto-configuration mode, to:
detect a predetermined gesture from a user, the predetermined gesture being associated with one or more of the at least one controllable devices, the predetermined gesture configures the one or more of the at least one controllable devices to change an operative state of the one or more of the at least one controllable devices.
28. The system of claim 27, wherein the instructions further configure the processor to:
assign a boundary area around each of the at least one region of interest and correlate the boundary area with a respective device identifier, the respective device identifier corresponding to an associated controllable device within the boundary area.
29. The system of claim 28, wherein,
the predetermined gesture is detected proximal to or within the boundary area of the associated controllable device, the predetermined gesture includes a gesture directed towards or otherwise indicating the associated controllable device within the boundary area to change the operative state of the associated controllable device.
30. The system of claim 28, wherein the processor is further configured to execute the instructions to:
determine the respective device identifier of each associated controllable device based on wireless detection within a wireless network of the respective device identifier.
31. The system of claim 27, wherein,
the predetermined gesture comprises at least one of an auditory noise or physical movement performed by the user, the predetermined gesture being correlated, in accordance with a training process, with a change in operative state of the one or more of the at least one controllable device in response to detection of the predetermined gesture.
32. The system of claim 31, wherein the training process further comprises the processor executing the instructions to:
detect, within the detection region, the user performing a gesture;
detect a change in the operative state of the at least one controllable device effectuated by the user within the at least one region of interest, the change comprises at least one of an auditory or visual temporal change; and
correlate the change in the operative state with the gesture and store in memory the correlation as the predetermined gesture.
33. A method effectuated by a processor executing computer readable instructions, comprising the processor:
operating in an auto-configuration mode to:
emit one or more signals from a detection device;
detect at least one region of interest to localize at least one controllable device within a detection region of the detection device, the at least one region of interest corresponding to a spatial region comprising a detected temporal change in response to the one or more signals emitted from the detection device, the temporal change comprising at least one of an auditory or visual change of the at least one controllable devices in response to the one or more signals; and
operating in an operative mode, different from the auto-configuration mode, to:
detect a predetermined gesture from a user, the predetermined gesture being associated with one or more of the at least one controllable devices, the predetermined gesture configures the one or more of the at least one controllable devices to change an operative state of the one or more of the at least one controllable devices.
34. The method of claim 33, further comprising the processor:
assigning a boundary area around each of the at least one region of interest and correlate the boundary area with a respective device identifier, the respective device identifier corresponding to an associated controllable device within the boundary area.
35. The method of claim 34, wherein,
the predetermined gesture is detected proximal to or within the boundary area of the associated controllable device, the predetermined gesture includes a gesture directed towards or otherwise indicating the associated controllable device within the boundary area to change the operative state of the associated controllable device.
36. The method of claim 34, further comprising the processor:
determining the respective device identifier of each associated controllable device based on wireless detection within a wireless network of the respective device identifier.
37. The method of claim 33, wherein,
the predetermined gesture comprises at least one of an auditory noise or physical movement performed by the user, the predetermined gesture being correlated, in accordance with a training process, with a change in operative state of the one or more of the at least one controllable device in response to detection of the predetermined gesture.
38. The method of claim 37, wherein the training process further comprises the processor:
detecting, within the detection region, the user performing a gesture;
detecting a change in the operative state of the at least one controllable device effectuated by the user within the at least one region of interest, the change comprises at least one of an auditory or visual temporal change; and
correlating the change in the operative state with the gesture and store in memory the correlation as the predetermined gesture.
US16/376,206 2016-04-29 2019-04-05 Systems and methods to operate controllable devices with gestures and/or noises Abandoned US20190302714A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/376,206 US20190302714A1 (en) 2016-04-29 2019-04-05 Systems and methods to operate controllable devices with gestures and/or noises

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/143,397 US10295972B2 (en) 2016-04-29 2016-04-29 Systems and methods to operate controllable devices with gestures and/or noises
US16/376,206 US20190302714A1 (en) 2016-04-29 2019-04-05 Systems and methods to operate controllable devices with gestures and/or noises

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/143,397 Continuation US10295972B2 (en) 2016-04-29 2016-04-29 Systems and methods to operate controllable devices with gestures and/or noises

Publications (1)

Publication Number Publication Date
US20190302714A1 true US20190302714A1 (en) 2019-10-03

Family

ID=60158252

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/143,397 Active 2036-11-18 US10295972B2 (en) 2016-04-29 2016-04-29 Systems and methods to operate controllable devices with gestures and/or noises
US16/376,206 Abandoned US20190302714A1 (en) 2016-04-29 2019-04-05 Systems and methods to operate controllable devices with gestures and/or noises

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/143,397 Active 2036-11-18 US10295972B2 (en) 2016-04-29 2016-04-29 Systems and methods to operate controllable devices with gestures and/or noises

Country Status (1)

Country Link
US (2) US10295972B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110811475A (en) * 2019-11-25 2020-02-21 佛山市顺德区美的洗涤电器制造有限公司 Dish washing machine, control panel and control method thereof
CN111150227A (en) * 2019-12-31 2020-05-15 创维集团有限公司 Tea table with intelligent home control function
CN111736697A (en) * 2020-06-22 2020-10-02 四川长虹电器股份有限公司 Camera-based gesture control method
US20220057922A1 (en) * 2019-04-30 2022-02-24 Google Llc Systems and interfaces for location-based device control

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106154860B (en) * 2016-08-26 2019-07-02 深圳市新国都支付技术有限公司 A kind of intelligent switch and the smart home system using the intelligent switch
US20210117680A1 (en) * 2017-05-10 2021-04-22 Humane, Inc. Wearable multimedia device and cloud computing platform with laser projection system
US10204624B1 (en) * 2017-08-14 2019-02-12 Lenovo (Singapore) Pte. Ltd. False positive wake word
KR20200042627A (en) * 2018-10-16 2020-04-24 삼성전자주식회사 Electronic apparatus and controlling method thereof
CN109709818B (en) * 2019-01-09 2021-08-13 腾讯科技(深圳)有限公司 Equipment control method, device, system and medium
KR20210079061A (en) * 2019-12-19 2021-06-29 엘지전자 주식회사 Information processing method and apparatus therefor
CN113269075A (en) * 2021-05-19 2021-08-17 广州繁星互娱信息科技有限公司 Gesture track recognition method and device, storage medium and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140142729A1 (en) * 2012-11-21 2014-05-22 Microsoft Corporation Controlling hardware in an environment
US20150163945A1 (en) * 2013-12-11 2015-06-11 Honeywell International Inc. Hvac controller with thermistor biased against an outer housing
US20160075015A1 (en) * 2014-09-17 2016-03-17 Brain Corporation Apparatus and methods for remotely controlling robotic devices

Family Cites Families (102)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5092343A (en) 1988-02-17 1992-03-03 Wayne State University Waveform analysis apparatus and method using neural network techniques
US5046022A (en) 1988-03-10 1991-09-03 The Regents Of The University Of Michigan Tele-autonomous system and method employing time/position synchrony/desynchrony
US5079491A (en) 1989-05-23 1992-01-07 Honda Giken Kogyo Kabushiki Kaisha Robot control system
US5063603A (en) 1989-11-06 1991-11-05 David Sarnoff Research Center, Inc. Dynamic method for recognizing objects and image processing system therefor
JPH0487423A (en) 1990-07-31 1992-03-19 Toshiba Corp Decoding circuit
US5467428A (en) 1991-06-06 1995-11-14 Ulug; Mehmet E. Artificial neural network method and architecture adaptive signal filtering
US5408588A (en) 1991-06-06 1995-04-18 Ulug; Mehmet E. Artificial neural network method and architecture
US5875108A (en) 1991-12-23 1999-02-23 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US5245672A (en) 1992-03-09 1993-09-14 The United States Of America As Represented By The Secretary Of Commerce Object/anti-object neural network segmentation
US5355435A (en) 1992-05-18 1994-10-11 New Mexico State University Technology Transfer Corp. Asynchronous temporal neural processing element
US5673367A (en) 1992-10-01 1997-09-30 Buckley; Theresa M. Method for neural network control of motion using real-time environmental feedback
FI92361C (en) 1992-12-14 1994-10-25 Nokia Telecommunications Oy Procedure for controlling overload situations in a frame switching network and a node for a frame switching network
US5388186A (en) 1993-02-01 1995-02-07 At&T Corp. Differential process controller using artificial neural networks
RU2108612C1 (en) 1994-09-14 1998-04-10 Круглов Сергей Петрович Adaptive control system with identifier and implicit reference model
JP4014662B2 (en) 1995-09-18 2007-11-28 ファナック株式会社 Robot teaching operation panel
US5845271A (en) 1996-01-26 1998-12-01 Thaler; Stephen L. Non-algorithmically implemented artificial neural networks and components thereof
US6009418A (en) 1996-05-02 1999-12-28 Cooper; David L. Method and apparatus for neural networking using semantic attractor architecture
EP0988585A4 (en) 1997-06-11 2007-12-26 Univ Southern California Dynamic synapse for signal processing in neural networks
JP4131340B2 (en) 1997-07-11 2008-08-13 ソニー株式会社 Control device, control method, and reception device
US6458157B1 (en) 1997-08-04 2002-10-01 Suaning Gregg Joergen Retinal stimulator
US6581046B1 (en) 1997-10-10 2003-06-17 Yeda Research And Development Co. Ltd. Neuronal phase-locked loops
US6545705B1 (en) 1998-04-10 2003-04-08 Lynx System Developers, Inc. Camera with object recognition/data output
US6546291B2 (en) 2000-02-16 2003-04-08 Massachusetts Eye & Ear Infirmary Balance prosthesis
US7054850B2 (en) 2000-06-16 2006-05-30 Canon Kabushiki Kaisha Apparatus and method for detecting or recognizing pattern by employing a plurality of feature detecting elements
EP1344613B1 (en) 2000-11-17 2010-08-18 Honda Giken Kogyo Kabushiki Kaisha Remote controller of biped robot
US7328196B2 (en) 2003-12-31 2008-02-05 Vanderbilt University Architecture for multiple interacting robot intelligences
US6910060B2 (en) 2001-05-21 2005-06-21 Computational Sensor Corp. Spatio-temporal filter and method
CN1273912C (en) 2001-08-23 2006-09-06 索尼公司 Robot apparatus, face recognition method and face recognition apparatus
JP2003205483A (en) 2001-11-07 2003-07-22 Sony Corp Robot system and control method for robot device
WO2003042002A1 (en) 2001-11-13 2003-05-22 Kabushiki Kaisha Equos Research Data creation apparatus
US8156057B2 (en) 2003-03-27 2012-04-10 Knowm Tech, Llc Adaptive neural network utilizing nanotechnology-based components
ITTO20020862A1 (en) 2002-10-04 2004-04-05 Comau Spa PROGRAMMING SYSTEM FOR ROBOTS OR SIMILAR APPARATUS
US20040136439A1 (en) 2002-11-15 2004-07-15 Brandon Dewberry Methods and systems acquiring impulse signals
JP3961408B2 (en) 2002-11-21 2007-08-22 ファナック株式会社 Assembly method and apparatus
ITRM20020604A1 (en) 2002-11-29 2004-05-30 S I S S A Scuola Internaz Su Periore Di Stu METHOD FOR PROCESSING IMAGES WITH NEURON CULTURES E
WO2004112298A2 (en) 2003-05-27 2004-12-23 The Trustees Of Columbia University In The City Of New York Multichannel time encoding and decoding of a signal
US7426501B2 (en) 2003-07-18 2008-09-16 Knowntech, Llc Nanotechnology neural network methods and systems
US20060161218A1 (en) 2003-11-26 2006-07-20 Wicab, Inc. Systems and methods for treating traumatic brain injury
JP4780921B2 (en) 2004-03-17 2011-09-28 キヤノン株式会社 Parallel pulse signal processing apparatus and control method thereof
JP4358081B2 (en) 2004-03-31 2009-11-04 パナソニック株式会社 Video recording device
JP2005352900A (en) 2004-06-11 2005-12-22 Canon Inc Device and method for information processing, and device and method for pattern recognition
US20100081375A1 (en) * 2008-09-30 2010-04-01 Apple Inc. System and method for simplified control of electronic devices
US20100231506A1 (en) * 2004-09-07 2010-09-16 Timothy Pryor Control of appliances, kitchen and home
JP2006187826A (en) 2005-01-05 2006-07-20 Kawasaki Heavy Ind Ltd Robot controller
US7420396B2 (en) 2005-06-17 2008-09-02 Knowmtech, Llc Universal logic gate utilizing nanotechnology
US7395251B2 (en) 2005-07-01 2008-07-01 International Business Machines Corporation Neural networks for prediction and control
JP2007030060A (en) 2005-07-22 2007-02-08 Honda Motor Co Ltd Control device of mobile robot
US8346692B2 (en) 2005-12-23 2013-01-01 Societe De Commercialisation Des Produits De La Recherche Appliquee-Socpra-Sciences Et Genie S.E.C. Spatio-temporal pattern recognition using a spiking neural network and processing thereof on a portable and/or distributed computer
JP2007299366A (en) 2006-01-31 2007-11-15 Sony Corp Learning system and method, recognition device and method, creation device and method, recognition and creation device and method, and program
JP4162015B2 (en) 2006-05-18 2008-10-08 ソニー株式会社 Information processing apparatus, information processing method, and program
US7849030B2 (en) 2006-05-31 2010-12-07 Hartford Fire Insurance Company Method and system for classifying documents
US8014880B2 (en) 2006-09-29 2011-09-06 Fisher-Rosemount Systems, Inc. On-line multivariate analysis in a distributed process control system
WO2008042900A2 (en) 2006-10-02 2008-04-10 University Of Florida Research Foundation, Inc. Pulse-based feature extraction for neural recordings
US8103602B2 (en) 2006-12-29 2012-01-24 Neurosciences Research Foundation, Inc. Solving the distal reward problem through linkage of STDP and dopamine signaling
US7426920B1 (en) 2007-06-06 2008-09-23 Omnitek Engineering Corp. Fuel mixer apparatus and method
WO2009006405A1 (en) 2007-06-28 2009-01-08 The Trustees Of Columbia University In The City Of New York Multi-input multi-output time encoding and decoding machines
US7945349B2 (en) 2008-06-09 2011-05-17 Abb Technology Ab Method and a system for facilitating calibration of an off-line programmed robot cell
WO2010017448A1 (en) 2008-08-07 2010-02-11 Massachusetts Institute Of Technology Coding for visual prostheses
US20100084468A1 (en) 2008-10-02 2010-04-08 Silverbrook Research Pty Ltd Method of imaging coding pattern comprising columns and rows of coordinate data
US8160354B2 (en) 2008-12-26 2012-04-17 Five Apes, Inc. Multi-stage image pattern recognizer
WO2010135372A1 (en) * 2009-05-18 2010-11-25 Alarm.Com Incorporated Remote device control and energy monitoring
AU2010250499B2 (en) 2009-05-19 2013-09-12 Nippon Steel Corporation Bending apparatus
US8200593B2 (en) 2009-07-20 2012-06-12 Corticaldb Inc Method for efficiently simulating the information processing in cells and tissues of the nervous system with a temporal series compressed encoding neural network
US9838255B2 (en) * 2009-08-21 2017-12-05 Samsung Electronics Co., Ltd. Mobile demand response energy management system with proximity control
WO2011036865A1 (en) 2009-09-28 2011-03-31 パナソニック株式会社 Control device and control method for robot arm, robot, control program for robot arm, and integrated electronic circuit for controlling robot arm
US8600166B2 (en) 2009-11-06 2013-12-03 Sony Corporation Real time hand tracking, pose classification and interface control
US8275727B2 (en) 2009-11-13 2012-09-25 International Business Machines Corporation Hardware analog-digital neural networks
US8311965B2 (en) 2009-11-18 2012-11-13 International Business Machines Corporation Area efficient neuromorphic circuits using field effect transistors (FET) and variable resistance material
US9405975B2 (en) 2010-03-26 2016-08-02 Brain Corporation Apparatus and methods for pulse-code invariant object recognition
US9311593B2 (en) 2010-03-26 2016-04-12 Brain Corporation Apparatus and methods for polychronous encoding and multiplexing in neuronal prosthetic devices
US8315305B2 (en) 2010-03-26 2012-11-20 Brain Corporation Systems and methods for invariant pulse latency coding
US9122994B2 (en) 2010-03-26 2015-09-01 Brain Corporation Apparatus and methods for temporally proximate object recognition
US8433665B2 (en) 2010-07-07 2013-04-30 Qualcomm Incorporated Methods and systems for three-memristor synapse with STDP and dopamine signaling
JP4951722B2 (en) 2010-07-27 2012-06-13 Panasonic Corporation Moving path search device and moving path search method
US8510239B2 (en) 2010-10-29 2013-08-13 International Business Machines Corporation Compact cognitive synaptic computing circuits with crossbar arrays spatially in a staggered pattern
JP4991023B2 (en) 2010-11-12 2012-08-01 Panasonic Corporation Moving path search device and moving path search method
CN102226740B (en) 2011-04-18 2013-05-22 中国计量学院 Bearing fault detection method based on manner of controlling stochastic resonance by external periodic signal
US9147156B2 (en) 2011-09-21 2015-09-29 Qualcomm Technologies Inc. Apparatus and methods for synaptic update in a pulse-coded network
US8725662B2 (en) 2011-09-21 2014-05-13 Brain Corporation Apparatus and method for partial evaluation of synaptic updates based on system events
US8719199B2 (en) 2011-09-21 2014-05-06 Brain Corporation Systems and methods for providing a neural network having an elementary network description for efficient implementation of event-triggered plasticity rules
US8725658B2 (en) 2011-09-21 2014-05-13 Brain Corporation Elementary network description for efficient memory management in neuromorphic systems
US9104973B2 (en) 2011-09-21 2015-08-11 Qualcomm Technologies Inc. Elementary network description for neuromorphic systems with plurality of doublets wherein doublet events rules are executed in parallel
US8712941B2 (en) 2011-09-21 2014-04-29 Brain Corporation Elementary network description for efficient link between neuronal models and neuromorphic systems
JP5512048B2 (en) 2011-09-06 2014-06-04 Panasonic Corporation Robot arm control device and control method, robot, control program, and integrated electronic circuit
US9104186B2 (en) 2012-06-04 2015-08-11 Brain Corporation Stochastic apparatus and methods for implementing generalized learning rules
US8943008B2 (en) 2011-09-21 2015-01-27 Brain Corporation Apparatus and methods for reinforcement learning in artificial neural networks
US8712939B2 (en) 2011-09-21 2014-04-29 Brain Corporation Tag-based apparatus and methods for neural networks
US9015092B2 (en) 2012-06-04 2015-04-21 Brain Corporation Dynamically reconfigurable stochastic learning apparatus and methods
US9098811B2 (en) 2012-06-04 2015-08-04 Brain Corporation Spiking neuron network apparatus and methods
US20130151449A1 (en) 2011-12-07 2013-06-13 Filip Ponulak Apparatus and methods for implementing learning for analog and spiking signals in artificial neural networks
US9146546B2 (en) 2012-06-04 2015-09-29 Brain Corporation Systems and apparatus for implementing task-specific learning using spiking neurons
US10210452B2 (en) 2011-09-21 2019-02-19 Qualcomm Incorporated High level neuromorphic network description apparatus and methods
US9117176B2 (en) 2011-09-21 2015-08-25 Qualcomm Technologies Inc. Round-trip engineering apparatus and methods for neural networks
US8873813B2 (en) 2012-09-17 2014-10-28 Z Advanced Computing, Inc. Application of Z-webs and Z-factors to analytics, search engine, learning, recognition, natural language, and other utilities
JP5636119B2 (en) 2011-11-30 2014-12-03 Panasonic Corporation Robot teaching apparatus, robot apparatus, robot teaching apparatus control method, and robot teaching apparatus control program
US20130201316A1 (en) * 2012-01-09 2013-08-08 May Patents Ltd. System and method for server based control
US9141196B2 (en) 2012-04-16 2015-09-22 Qualcomm Incorporated Robust and efficient learning object tracker
US9224090B2 (en) 2012-05-07 2015-12-29 Brain Corporation Sensory input processing apparatus in a spiking neural network
US9111215B2 (en) 2012-07-03 2015-08-18 Brain Corporation Conditional plasticity spiking neuron network apparatus and methods
US9367798B2 (en) 2012-09-20 2016-06-14 Brain Corporation Spiking neuron network adaptive control apparatus and methods
US9398059B2 (en) * 2013-11-22 2016-07-19 Dell Products, L.P. Managing information and content sharing in a virtual collaboration session

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140142729A1 (en) * 2012-11-21 2014-05-22 Microsoft Corporation Controlling hardware in an environment
US20150163945A1 (en) * 2013-12-11 2015-06-11 Honeywell International Inc. Hvac controller with thermistor biased against an outer housing
US20160075015A1 (en) * 2014-09-17 2016-03-17 Brain Corporation Apparatus and methods for remotely controlling robotic devices

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220057922A1 (en) * 2019-04-30 2022-02-24 Google Llc Systems and interfaces for location-based device control
CN110811475A (en) * 2019-11-25 2020-02-21 Foshan Shunde Midea Washing Appliances Manufacturing Co., Ltd. Dish washing machine, control panel and control method thereof
CN111150227A (en) * 2019-12-31 2020-05-15 Skyworth Group Co., Ltd. Tea table with intelligent home control function
CN111736697A (en) * 2020-06-22 2020-10-02 Sichuan Changhong Electric Co., Ltd. Camera-based gesture control method

Also Published As

Publication number Publication date
US10295972B2 (en) 2019-05-21
US20170315519A1 (en) 2017-11-02

Similar Documents

Publication Publication Date Title
US10295972B2 (en) Systems and methods to operate controllable devices with gestures and/or noises
JP6968154B2 (en) Control systems and control processing methods and equipment
US9860077B2 (en) Home animation apparatus and methods
US9849588B2 (en) Apparatus and methods for remotely controlling robotic devices
US20230205151A1 (en) Systems and methods of gestural interaction in a pervasive computing environment
US9579790B2 (en) Apparatus and methods for removal of learned behaviors in robots
CN107528753B (en) Intelligent household voice control method, intelligent equipment and device with storage function
US9565238B2 (en) Method for controlling electronic apparatus, handheld electronic apparatus and monitoring system
US20160075016A1 (en) Apparatus and methods for context determination using real time sensor data
US20160162039A1 (en) Method and system for touchless activation of a device
EP3100249B1 (en) Gesture control
CN105739315B (en) Control method and device for indoor user electrical equipment
KR102481486B1 (en) Method and apparatus for providing audio
CN104023434A (en) Intelligent lamp turning-on method, device and system
TW201636784A (en) Environment-oriented smart control apparatus, control system and control method
KR101937823B1 (en) Method, system and non-transitory computer-readable recording medium for assisting object control
CN108605400A (en) A method of controlling a lighting apparatus
CN108874142B (en) Gesture-based wireless intelligent control device and control method thereof
KR20150097049A (en) Self-serving robot system using natural UI
US11475664B2 (en) Determining a control mechanism based on a surrounding of a remote controllable device
WO2023236848A1 (en) Device control method, apparatus and system, and electronic device and readable storage medium
JP6948420B2 (en) Interaction methods, devices, systems, electronic devices and storage media
CN112287717A (en) Intelligent system, gesture control method, electronic device and storage medium
CN212319825U (en) Smoke machine
CN113359503A (en) Equipment control method and related device

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION