US20130339850A1 - Interactive input device - Google Patents

Interactive input device

Info

Publication number
US20130339850A1
Authority
US
United States
Prior art keywords
user
input
command
implementations
control system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/918,451
Inventor
Jason Hardi
John Cawley
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Muzik LLC
Original Assignee
Muzik LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US13/918,451 priority Critical patent/US20130339850A1/en
Application filed by Muzik LLC filed Critical Muzik LLC
Assigned to A&L SERVICES CORPORATION reassignment A&L SERVICES CORPORATION SECURITY AGREEMENT Assignors: Muzik LLC
Assigned to A&L SERVICES CORP. reassignment A&L SERVICES CORP. SECURITY AGREEMENT Assignors: HARDI, Jason
Publication of US20130339850A1 publication Critical patent/US20130339850A1/en
Assigned to Muzik LLC reassignment Muzik LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HARDI, Jason, CAWLEY, JOHN
Assigned to A&L SERVICES GROUP reassignment A&L SERVICES GROUP SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Muzik LLC
Assigned to Muzik LLC reassignment Muzik LLC RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: A&L SERVICES GROUP
Priority to US14/751,952 priority patent/US20160103511A1/en
Priority to US15/628,206 priority patent/US20180048750A1/en
Priority to US16/747,926 priority patent/US20200162599A1/en
Priority to US17/661,421 priority patent/US20220337693A1/en
Assigned to FYRST, TIM reassignment FYRST, TIM SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: Muzik, Inc.
Legal status: Abandoned (current)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041Portable telephones adapted for handsfree use
    • H04M1/6058Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B62LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62DMOTOR VEHICLES; TRAILERS
    • B62D1/00Steering controls, i.e. means for initiating a change of direction of the vehicle
    • B62D1/02Steering controls, i.e. means for initiating a change of direction of the vehicle vehicle-mounted
    • B62D1/04Hand wheels
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/169Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated pointing device, e.g. trackball in the palm rest area, mini-joystick integrated between keyboard keys, touch pads or touch stripes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/1633Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016Input arrangements with force or tactile feedback as computer generated output to the user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34Indicating arrangements 
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04BTRANSMISSION
    • H04B1/00Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B1/38Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B1/3827Portable transceivers
    • H04B1/385Transceivers carried on the body, e.g. in helmets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/61Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L65/611Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/75Media network packet handling
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02Services making use of location information
    • H04W4/025Services making use of location information using location based information parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • This specification relates to remote input devices and, more specifically, to input devices integrated with output devices.
  • Computing devices are commonly used by a user to perform a wide variety of functions.
  • a user issues commands to a computing device by interacting with one or more controls; the input is often provided through an input device such as a keyboard, touchpad, mouse, or touchscreen.
  • the computing device outputs content in response to the user commands in various forms via a video monitor, speaker, headphones or other sensory/perceptive device(s).
  • beyond rudimentary output commands such as “play,” “stop,” “pause,” and “volume,” current output devices do not allow for controls or input to software programs running on the computing device.
  • This specification describes technologies relating to interactive remote input devices and interactive output devices, such as, for example and without limitation, network-connected interactive headphones, interactive dongles, interactive cables, interactive speakers, and interactive hand controllers.
  • one innovative aspect of the subject matter described in this specification can be embodied in a headphone apparatus and a media player device that are used in conjunction to provide a user with audio playback of media content, and to allow the user to interact with social media sites, email providers, supplementary content providers, and ad providers based on the media content being played.
  • the headphones are operably connected to the media player through a hardwire connection or through a wireless connection, such as Bluetooth or Wi-Fi.
  • the media player communicates with a network gateway through wireless network connection, such as through a cellular connection or Wi-Fi connection.
  • the network gateway provides network connectivity to the Internet, facilitating access to various content and service providers connected to the Internet.
  • Content and service providers may include email servers, social media sites, ad servers, and content servers.
  • the media player may be one of many types of mobile devices, such as a cellular telephone, a tablet, a computer, a pager, a gaming device, or a media player.
  • the wireless network connection may be one of many types of communications networks through which data can be transferred, such as a Wi-Fi network, a cellular telephone network, a satellite communications network, a Bluetooth network, or an infrared network.
  • the content and service providers may also include search engines, digital content merchant sites, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • Implementations of the present invention may include a system for interacting with an application on a processing apparatus, comprising: an input module; a processing module; and a transmission module; wherein the input module is configured to detect a tactile input applied by a user; wherein the processing module is configured to translate the input into an application command; and wherein the transmission module is adapted to transmit the command to the processing apparatus.
  • a method for providing input to a processing device comprises: providing a networked processing device, wherein the processing device delivers content as an output to an output device; providing an input module configured to detect a tactile input applied by a user, and translating the input into an application command at the processing unit.
  • Implementations of the present invention may comprise one or more of the following features.
  • the input component is adjacent the output device.
  • the tactile input on the input component comprises one or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture.
  • the tactile input comprises a series of two or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping motion gesture.
  • the processing component is configured to determine a number of fingers used by the user to apply the input.
  • the processing component is configured to translate the input into an application command based on the number of fingers detected.
  • the system comprises one or more audio speakers.
  • the application is a media management application.
  • the processing apparatus is one of the following: a media player, a smartphone, a gaming device, and a computer.
  • the command comprises a command to control a media playback function of the processing apparatus.
  • the command comprises a command to broadcast a user preference or user indication over a network, such as a social network.
  • the command comprises a command to transmit a message to a recipient through a communications network.
  • the communications network comprises one or more of the following: a LAN, a WAN, the internet, and a cellular network.
  • the recipient is a communications device, a social media website, an email server, and a telephone.
  • the system comprises a user control device for controlling a device unassociated with the system for interacting with an application on a processing apparatus.
  • the control device comprises a steering wheel for controlling a vehicle.
  • the output device comprises one or more of: video displays, audio speakers, headphones, and ear buds.
  • FIG. 1 is a schematic diagram of an example control system.
  • FIG. 2 is a flow chart showing an example usage of a control system.
  • FIG. 3 is a flow chart showing an example usage of a control system.
  • FIGS. 4A-D are example embodiments of an input module.
  • FIGS. 5A-F are example user interactions.
  • FIGS. 6A-H are example user interactions.
  • FIG. 7 is an example input module.
  • FIGS. 8A-E are example embodiments of control systems.
  • FIG. 9 shows example embodiments of control systems.
  • FIG. 10 is an example network of the present invention including interactive, networked headphones.
  • FIG. 11A is an example of an interactive, networked headphone of the present invention.
  • FIG. 11B is an example of an implementation of the present invention.
  • FIG. 11C is an example of an implementation of the present invention.
  • FIG. 12 is an example of a method of an implementation of the present invention.
  • FIG. 13 is an example of an implementation of the present invention.
  • FIG. 14 is an example of an implementation of the present invention.
  • FIG. 15 is an example of a method of an implementation of the present invention.
  • an implementation of the technology includes a control device that is used in conjunction with a computing device (e.g. a media player or smartphone), that allows a user to control the operation of the computing device without directly handling the computing device itself.
  • the computing device may be controlled from a traditional output device, such as a headphone, speaker, ear bud, speaker cable, wearable output display such as heads-up display glasses or visors, a cable comprising an input device such as an interactive input device on a headphone or ear bud cable, or even a remote input device disassociated with the computing device, such as a steering wheel, a dash board panel, a visual or audio kiosk, and the like.
  • Providing input to the computing device from an interactive output device or remote input device allows the user to interact with the computing device in a more convenient manner.
  • a user may use the control device to interact with the computing device, without first having to remove the computing device from a storage location (e.g. a clothing pocket, a carrying bag, a holding bracket, an armband, etc.)
  • a user may use the control device to operate the computing device, without exposing the computing device to potential damage due to mishandling or environmental factors.
  • the user may also use the control device to operate a computing device that is not readily accessible, for example a device that is secured in a container or a protective housing, or built into a fixed enclosure (e.g. a household audio system or a media system in a vehicle).
  • a user may use the control device to interact with the computing device without having to look at either the control device or the computing device.
  • a user may use the computing device while engaging in other activities, such as walking, running, reading, driving, or any other activity where averting one's attention is undesirable.
  • a user may use the control device to simplify specific tasks of the computing device, such that the user may issue complex instructions to the computing device using relatively simple inputs on the control device.
  • the user may share or “like” content, such as a music recording being played on a mobile phone and delivered to the user via an output device such as headphones, wherein the headphones include an input component such that the user can communicate preferences for the music file with other users over a social network by simple manipulation of the input device on the headset.
  • a user may share content in real time to a predetermined set of additional users (e.g., members of a contact list, attendees to an event, users within a geographic or localized area).
  • multiple users can communicate and share files via a network with a single device, such as a communal audio speaker (e.g., multiple users can share one or more music files to a device to create a just-in-time playlist by simple manipulation of the input component on the user headphones).
  • FIG. 1 illustrates an example implementation of a control device 100 used to control the operation of a computing device 150 .
  • a control device 100 includes an input module 102 , a processing module 104 , a transmission module 106 , and a power module 108 .
  • Each module 102 , 104 , 106 , and 108 may be interconnected through one or more connection interfaces 110 , which may provide a connective pathway for the transfer of power or data between each of the modules.
  • the transmission module 106 is connected to the computing device 150 through another connection interface 152 , which provides a connective pathway for the transfer of data between control device 100 and computing device 150 .
  • the input module 102 is provided so that a user can physically interact with the control device 100 .
  • the input module 102 includes one or more sensors to detect physical interaction from the user, and also includes electronic components necessary to convert the physical interactions into a form that may be interpreted by the other modules of the device (e.g. by digitizing the input so that it may be understood by the processing module 104 and the transmission module 106 ).
  • the input module 102 may include one or more types of sensors, for instance touch-sensitive sensors, buttons, switches, or dials, or combinations of one or more of these sensors.
  • the processing module 104 is provided so that control device 100 may interpret the user's physical interactions, and translate these interactions into specific commands to the computing device 150 .
  • the transmission module 106 is provided so that the control device 100 can transmit commands to the computing device 150 .
  • the transmission module 106 may include components to encode and transmit data to computing device 150 in a form recognizable by computing device 150 .
  • the transmission module 106 may include, for example, a serial communication module, a universal serial bus (USB) communication module, a Bluetooth networking module, a WiFi networking module, a cellular phone communication module (e.g. a CDMA or GSM radio), or any other module for communicating with computing device 150 .
  • the power module 108 is provided to supply power to each of the other modules of control device 100 .
  • the power module 108 may be of any standard form for powering small electronic circuit board devices such as the following power cells: alkaline, lithium hydride, lithium ion, lithium polymer, nickel cadmium, solar cells, and/or the like. Other types of AC or DC power sources may be used as well.
  • where a solar cell is used, the case provides an aperture through which the solar cell may capture photonic energy.
  • power module 108 is located external to system 100 , and power for each of the modules of control device 100 is provided through connection interface 110 or another connection interface, which may provide a connective pathway for the transfer of power between the externally located power module 108 and the components of system 100 .
  • connection interface 152 provides a connective pathway for the transfer of data between control device 100 and computing device 150 .
  • Connection interface 152 may be a wired connection, a wireless connection, or a combination of both.
  • connection interface 152 may be a serial connection, a USB connection, a Bluetooth connection, WiFi connection, a cellular connection (e.g. a connection made through a CDMA or GSM network), or combinations or one or more of these connections.
  • connection interface 152 is established over a wireless connection, and a “paired” relationship must be established between control system 100 and computing device 150 before the user may use control system 100 to issue commands to computing device 150 .
  • connection interface 152 is a Bluetooth connection, and a user interacts with computing device 150 , first to view a list of active Bluetooth modules in the vicinity, then to select the module representing control system 100 .
  • Computing device 150 and control system 100 establish a Bluetooth connection interface 152 , and data can be transmitted between the two through this connection interface.
  • control system 100 may contain a near-field communication (NFC) identification tag that uniquely identifies control system 100 , and computing device 150 has an NFC reader that is capable of reading the identification information from the NFC identification tag.
  • a user may pair control system 100 and computing device 150 by placing the NFC identification tag in proximity to the NFC reader, and computing device 150 establishes a connection interface 152 with control system 100 .
  • This paired relationship may be retained by control system 100 or computing device 150 , such that each device will continue to communicate with the other over subsequent usage sessions.
  • paired relationship may also be altered, such that control system 100 may be paired with another computing device 150 .
  • connection interface 152 may also be used to transmit data unassociated with control system 100 .
  • a control system 100 may be used with an audio headset, such that audio information from computing device 150 is transmitted through connection interface 152 to transmission module 106 , then transmitted to the audio headset using another connection interface.
  • data associated with control system 100 (i.e. data related to the operation of control system 100 ) and data unassociated with control system 100 may both be transmitted through connection interface 152 , either simultaneously or in an alternating manner.
  • connection interface 152 is an analog connection, such as those commonly used to transfer analog audio data to a headset, and may include multiple channels, for instance channels commonly used for left audio, right audio, microphone input, etc.
  • Control system 100 may transfer analog audio data, as well as data associated with control system 100 , through such a connection interface 152 .
  • transmission module 106 may encode the data according to patterns of shorted analog channels, then transmit the encoded data by shorting one or more of the channels of connection interface 152 . These patterns of shorted channels may be interpreted by computing system 150 .
  • transmission module 106 may transfer data by sending patterns of analog voltage waveforms to computing system 150 , where they may then be interpreted by computing system 150 . Combinations of more than one encoding and transmission technique may also be used.
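  • For illustration only (the patent does not specify the actual channel patterns), the following Python sketch shows one way commands could be encoded as patterns of shorted analog channels and decoded by the computing device; the channel names, command names, and patterns are assumptions:

      # Purely illustrative encoding table; the specific channel patterns are
      # assumptions for this sketch, not taken from the patent.
      CHANNELS = ("LEFT_AUDIO", "RIGHT_AUDIO", "MIC")

      # Each command is encoded as a short sequence of "which channels are shorted"
      # states, which the computing device can decode at the other end of the cable.
      COMMAND_PATTERNS = {
          "PLAY_PAUSE":  [("MIC",), ()],                  # single short pulse on MIC
          "NEXT_TRACK":  [("MIC",), (), ("MIC",), ()],    # double pulse on MIC
          "VOLUME_UP":   [("LEFT_AUDIO", "MIC"), ()],     # two channels shorted together
          "VOLUME_DOWN": [("RIGHT_AUDIO", "MIC"), ()],
      }

      def encode(command):
          """Return the sequence of shorted-channel states for a command."""
          return COMMAND_PATTERNS[command]

      def decode(states):
          """Inverse lookup performed by the computing device."""
          for command, pattern in COMMAND_PATTERNS.items():
              if pattern == states:
                  return command
          return None

      print(decode(encode("NEXT_TRACK")))  # -> NEXT_TRACK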
  • An example usage 200 of a control device 100 for controlling a computing device 150 is illustrated in FIG. 2 .
  • the control device 100 first detects an input from the user (block 202 ).
  • the input is detected by the input module 102 , and may include any form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion.
  • the control device 100 may detect the number of fingers that the user uses to apply the input, in addition to detecting the physical motion of the input.
  • the control device 100 then translates the input into a command to a computing device (block 204 ).
  • input module 102 detects and digitizes the user's input
  • processing module 104 translates the digitized input into a command.
  • the processing module 104 receives digitized information describing the user's interaction with input module 102 and compares it to a predetermined list of inputs and associated commands. If processing module 104 determines that the user's input matches an input from the predetermined list, it selects the command associated with the matched input from the list. In some implementations, processing module 104 does not require an exact match between the user's interaction and a listed input, and may instead select the closest match.
  • a match may be based on one or more criteria, for instance based on a temporal similarity, a spatial similarity, or a combination of the two.
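  • As an illustration of the matching step described above (a sketch only, not the patent's implementation), the following Python fragment scores a detected input against a predetermined list of inputs and associated commands using a combined spatial and temporal similarity and falls back to the closest match; the gesture fields, threshold, and command names are assumptions:

      from dataclasses import dataclass

      @dataclass
      class Gesture:
          dx: float          # net horizontal displacement of the touch
          dy: float          # net vertical displacement of the touch
          duration_s: float  # time between touch-down and lift-off
          fingers: int       # number of simultaneous contact points

      # Predetermined list of (reference gesture, associated command) pairs.
      GESTURE_COMMANDS = [
          (Gesture(dx=1.0,  dy=0.0,  duration_s=0.3, fingers=1), "NEXT_TRACK"),
          (Gesture(dx=-1.0, dy=0.0,  duration_s=0.3, fingers=1), "PREVIOUS_TRACK"),
          (Gesture(dx=0.0,  dy=1.0,  duration_s=0.3, fingers=1), "VOLUME_UP"),
          (Gesture(dx=0.0,  dy=-1.0, duration_s=0.3, fingers=1), "VOLUME_DOWN"),
          (Gesture(dx=0.0,  dy=0.0,  duration_s=1.5, fingers=2), "SHARE_TRACK"),
      ]

      def distance(a, b):
          """Combined spatial and temporal distance (lower is more similar)."""
          spatial = ((a.dx - b.dx) ** 2 + (a.dy - b.dy) ** 2) ** 0.5
          temporal = abs(a.duration_s - b.duration_s)
          fingers = 0.0 if a.fingers == b.fingers else 10.0  # penalize mismatched finger count
          return spatial + temporal + fingers

      def translate(detected):
          """Select the command whose reference gesture is the closest match."""
          best_cmd, best_score = None, float("inf")
          for reference, command in GESTURE_COMMANDS:
              score = distance(detected, reference)
              if score < best_score:
                  best_cmd, best_score = command, score
          return best_cmd if best_score < 2.0 else None  # reject very dissimilar inputs

      # A rough one-finger left-to-right swipe maps to NEXT_TRACK.
      print(translate(Gesture(dx=0.9, dy=0.1, duration_s=0.4, fingers=1)))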
  • a command may include any instruction to a computing device to perform a particular task.
  • example commands may include instructions to the computing device to start the playback of a particular media file, stop the playback of a media file, adjust the volume of playback, jog to a particular time-point within the media file, or select another media file entirely.
  • commands may include instructions to the computing device to transmit data to a recipient across a network connection.
  • Example network connections include local area networks (LANs), wide area networks (WANs), Bluetooth networks, cellular networks, or any other network where data may be transmitted between two computing devices.
  • Example recipients may include other computing devices, for instance a computer, a smartphone, or a server.
  • a command may include instructions to send a particular message across a WAN to a server operated by a social media website, such that a user may interact with the social media website.
  • a command may include instructions to send a particular message across a cellular network to another computing device, such that a user may interact with another user through the other user's computing device.
  • the control device 100 then transmits the translated command to the computing device (block 206 ).
  • the translated command is transmitted through the transmission module 106 to the computing device 150 .
  • the computing device 150 may execute the command.
  • in another example usage shown in FIG. 3 , control device 100 first detects an input from the user (block 302 ).
  • the input is detected by the input module 102 , and may include any form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion.
  • the control device 100 may detect the number of fingers that the user uses to apply the input, in addition to detecting the physical motion of the input.
  • the control device 100 then associates a new command to the detected input (block 304 ).
  • a user may specify the new command in various ways. For instance, in some implementations, a user may interact with control device 100 , computing device 150 , or a combination of the two, to instruct control device 100 to associate a new command with a particular user input. In an example, a user may interact with control device 100 to browse through a list of possible commands and select a command from this list. In another example, a user may interact with computing device 150 to browse through a list of possible commands and select a command from this list, where the selected command is transmitted to control device 100 through connection interface 152 . The selected command is received by the transmission module 106 , then is transmitted to the processing module 104 .
  • the control device 100 then stores the association between the new command and the detected input for future retrieval (block 306 ).
  • the association between the new command and the detected input is saved by processing module 104 , and may be recalled whenever a user interacts with control device 100 or computing device 150 .
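  • A minimal sketch (not from the patent) of the associate-and-store flow of FIG. 3 , assuming a hypothetical JSON file as the persistent store and a simple direction-plus-finger-count key for the detected input:

      import json

      ASSOCIATIONS_FILE = "gesture_commands.json"  # hypothetical persistent store

      def gesture_key(direction, fingers):
          """Reduce a detected input to a simple lookup key, e.g. 'swipe_right:2'."""
          return f"{direction}:{fingers}"

      def load_associations():
          try:
              with open(ASSOCIATIONS_FILE) as f:
                  return json.load(f)
          except FileNotFoundError:
              return {}

      def associate(direction, fingers, command):
          """Blocks 304/306: bind a new command to the detected input and persist it."""
          table = load_associations()
          table[gesture_key(direction, fingers)] = command
          with open(ASSOCIATIONS_FILE, "w") as f:
              json.dump(table, f, indent=2)

      def recall(direction, fingers):
          """In later sessions, recall the stored command for an input, if any."""
          return load_associations().get(gesture_key(direction, fingers))

      # Example: a two-finger sweep shares the current track with a group of friends.
      associate("swipe_right", 2, "SHARE_WITH_FRIENDS")
      print(recall("swipe_right", 2))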
  • a user can initiate commands to a program with simple remote inputs, for example, a user can set up the control device to recognize that designated user inputs (e.g., a two finger sweep across the control interface or input module) should instruct a music player to share a music file with a designated group of friends or other users.
  • voting options may be incorporated into traditionally passive content, for example, an audio stream such as that provided by an internet radio provider could include a prompt to initiate a known user input (e.g. hold two fingers on the interface or control module or depress two buttons simultaneously on the control module) to have an e-mail sent about a product advertised, or to enroll the recipient in an additional service.
  • the input module 102 includes one or more sensors to detect physical interaction from the user.
  • Example arrangements of sensors are shown in FIG. 4 .
  • an input module 102 a may include a single touch-sensitive sensor 402 .
  • the sensor 402 may be of various types, for instance a sensor capable of resistive sensing or a sensor capable of conductance sensing.
  • the sensor 402 may detect interaction from a user in the form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion.
  • the sensor 402 may detect the absolute or relative position of a user's finger upon the sensor 402 , in order to provide additional spatial information regarding the nature of the user's interactions. In some implementations, the sensor 402 may detect the period of time in which a user interacts with sensor 402 , in order to provide additional temporal information regarding the nature of the user's interactions.
  • another example input module 102 b may include several individual touch-sensitive sensors, for instance five sensors 404 a - e.
  • the sensors 404 a - e may be of various types, for instance a sensor capable of resistive sensing or a sensor capable of capacitive sensing, or a combination of two or more types of sensors.
  • the sensors 404 a - e may detect the absolute or relative position of a user's finger upon the sensors 404 a - e, in order to provide additional spatial information regarding the nature of the user's interactions.
  • the sensors 404 a - e may detect the period of time in which a user interacts with sensors 404 a - e, in order to provide additional temporal information regarding the nature of the user's interactions.
  • each of the sensors 404 a - e is discrete, such that input module 102 b is able to discern which of the sensors 404 a - e were touched by the user. While input module 102 b is illustrated as having five sensors, any number of individual sensors may be used. Similarly, while input module 102 b is illustrated as having rectangular sensors arranged in a grid-like pattern, sensors may take any shape, and may be arranged in any pattern.
  • an example input module 102 c may include eight sensors 406 a - h arranged in a circle, such that each sensor represents a sector of a circle.
  • an input module 102 includes a printed circuit board (PCB) in a two layer stack.
  • a first layer includes one or more conductive surfaces (for instance copper pads) that serve as capacitive elements, where each capacitive element corresponds to a touch-sensitive sensor.
  • the opposing layer houses a microcontroller and support circuitry to enable resistive-capacitive (RC) based capacitive touch sensing in each of the capacitive elements.
  • a gesture detection algorithm may be included as a part of the firmware.
  • a gesture event is detected when the capacitance measurement of any capacitive element increases over a finger detection threshold set in firmware. The gesture event ceases either when the capacitance measurement drops back below the finger detection threshold or when a timeout is reached.
  • the microcontroller may communicate to another component, for instance processing module 104 , which gesture occurred. This communication may occur through a wired connection, such as communication interface 110 , or through a wireless connection, such as a WiFi, Bluetooth, infrared, or near-field communication (NFC) connection.
  • the input module 102 uses a capacitive touch scheme that measures capacitance of an isolated section of one or more of the capacitive elements by alternately charging and discharging the capacitive elements through a known resistor.
  • the combination of the resistor value and capacitance of the capacitive elements define the rate at which the capacitive elements charge and discharge. Since the resistor is a fixed value, the discharge rate has a direct relation to each capacitive element's capacitance.
  • the capacitance of the capacitive elements is measured by recording the amount of time it takes to charge then discharge the capacitive elements. The capacitance of the capacitive elements in an unchanging environment will remain the same.
  • the finger When a finger comes very close or touches the capacitive elements, the finger increases the measureable capacitance of the capacitive elements by storing charge, thus causing the charge and discharge events to take longer. Since the measured capacitance increases in the presence of a finger, the firmware may then use the capacitance measurement as a means to decide that a finger is touching a sensor of input module 102 .
  • Input module 102 may include sensors other than touch-sensitive sensors.
  • an input module 102 d may include several physical buttons 408 a - d.
  • a user may interact with input module 102 by pressing one or more of the buttons 408 a - d.
  • Input module 102 may detect one or more events associated with this interaction, for example by detecting when a button is depressed, how long a button is held, when a button is released, a sequence or pattern of button presses, or a timing between two or more button presses.
  • input module 102 includes one or more proximity sensors. These proximity sensors may be used to detect the motion of objects near the sensor. For example, proximity sensors may be used to detect a user waving his hand close to input module 102 . These proximity sensors may also be used to detect the presence of objects near the sensor. For example, a proximity sensor may be used to detect that system 100 is in close proximity to a user.
  • input module 102 includes one or more accelerometer sensors, such that input module 102 may determine the motion or the orientation of control system 100 .
  • an input module 102 with one or more accelerometer sensors may be able to determine if control system 100 is upright or not upright, or if control system 100 is being moved or is stationary.
  • the input modules 102 may detect a broad range of user interactions.
  • an input module 102 a that includes a single touch-sensitive sensor 402 may detect and differentiate between several distinct types of user interaction. For instance, referring to FIG. 5A , the input module 102 a may determine that a user applied a horizontal left-to-right motion to the input module by recognizing that the user initiated contact with sensor 402 at point 510 a, sustained contact along path 510 b in the direction of arrow 510 c, then released contact at point 510 d. In another example, referring to FIG. 5B ,
  • the input module 102 a may determine that a user applied a vertical bottom-to-top motion to the input module by recognizing that the user initiated contact with sensor 402 at point 520 a, sustained contact along path 520 b in the direction of arrow 520 c, then released contact at point 520 d.
  • the input module 102 is not limited to recognizing straight-line user interactions. For instance, referring to FIG. 5C , the input module 102 a may determine that a user applied a curved or otherwise non-straight-line motion to the input module.
  • the input module 102 may also detect touching motions. For instance, referring to FIG. 5D , the input module 102 a may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 402 at point 540 and released contact at point 540 .
  • sensor 402 is sensitive to the location of point 520 a, and can differentiate among different points of contact along sensor 402 .
  • sensor 402 is sensitive to the time in between when the user initiated contact with the sensor and when the user released contact with the sensor.
  • input module 102 may provide both spatial and temporal information regarding a user's interactions.
  • input module 102 may also detect multiple points of contact, and may differentiate, for example, between an interaction applied by a single finger and an interaction applied by multiple fingers. For instance, referring to FIG. 5E , the input module 102 a may determine that a user applied a touching motion to the input module using two fingers by recognizing that the user initiated contact with the sensor 402 at two points 550 a and 552 a, and released contact from points 550 a and 552 a. In another example, referring to FIG. 5F ,
  • the input module 102 a may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that the user initiated contact with sensor 402 at points 560 a and 562 a, sustained contact along paths 560 b and 562 b in the direction of arrows 560 c and 562 c, then released contact at points 560 d and 562 d.
  • the input module 102 may determine spatially and temporally-dependent information about a user's input, even if each sensor is limited only to making a binary determination regarding whether the sensor is being touched, and is otherwise not individually capable of determining more detailed spatial information.
  • an input module 102 b may include several individual touch-sensitive sensors, for instance five sensors 404 a - e. If the sensors 404 a - e are capable of making only a binary determination regarding the presence or lack of user contact on each of the sensors, and cannot make a determination about the specific location of contact on each sensor, the input module 102 b may still recognize several types of user interaction.
  • the input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module by recognizing that the user initiated contact at point 610 a, sustained contact along path 610 b in the direction of arrow 610 c, then released contact at point 610 d. Input module 102 b may make this determination based on a detection of contact on sensors 404 b, 404 c, and 404 d in sequential order.
  • the input module 102 b may similarly determine that a user applied a vertical bottom-to-top motion to the input module by recognizing that the user initiated contact at point 620 a, sustained contact along path 620 b in the direction of arrow 620 c, then released contact at point 620 d. Input module 102 b may make this determination based on a detection of contact on sensors 404 e, 404 c, and 404 a in sequential order.
  • the input module 102 b may determine that a user initiated contact at point 630 a, sustained contact along path 630 b in the direction of arrow 630 c, then released contact at point 630 d.
  • the input module 102 may also detect touching motions. For instance, referring to FIG. 6D , the input module 102 b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404 c at point 640 and released contact at point 640 .
  • the input module 102 may also detect touching motions from multiple points of contact. For instance, referring to FIG. 6E , the input module 102 b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404 b at point 650 a and contact with the sensor 404 c at point 650 b, and released contact at points 650 a and 650 b.
  • the input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that the user initiated contact at points 660 a and 662 a, sustained contact along paths 660 b and 662 b in the direction of arrows 660 c and 662 c, then released contact at points 660 d and 662 d. Input module 102 b may make this determination based on a detection of contact on sensors 404 e, 404 c and 404 a simultaneously, and 404 a in sequential order.
  • the sensors 404 a - e may be capable of individually determining spatial information, and may use this information to further differentiate between different types of user interaction. For instance, referring to FIG. 6G , an input module 102 b may determine that a user applied multiple points of contact onto a single sensor 404 c. In another example, referring to FIG. 6H , an input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that two points of contact exist along the same sequence of sensors 404 b, 404 c, and 404 d.
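  • A short illustrative sketch (not from the patent) of recovering a swipe direction purely from the order in which discrete touched/not-touched sensors are contacted, assuming the layout implied by the sequences described above (404 a top, 404 b left, 404 c center, 404 d right, 404 e bottom):

      SWIPE_SEQUENCES = {
          ("404b", "404c", "404d"): "SWIPE_LEFT_TO_RIGHT",
          ("404d", "404c", "404b"): "SWIPE_RIGHT_TO_LEFT",
          ("404e", "404c", "404a"): "SWIPE_BOTTOM_TO_TOP",
          ("404a", "404c", "404e"): "SWIPE_TOP_TO_BOTTOM",
      }

      def classify_sequence(touched_in_order):
          """Each sensor only reports touched/not-touched; the swipe direction is
          recovered from the order in which the sensors were contacted."""
          return SWIPE_SEQUENCES.get(tuple(touched_in_order))

      print(classify_sequence(["404b", "404c", "404d"]))  # SWIPE_LEFT_TO_RIGHT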
  • An input module 102 need not have sensors arranged in a grid-like pattern in order to determine spatial information about a user's interaction.
  • an input module 102 c with eight sensors 406 a - h arranged as sectors of a circle has a sensor 406 a in the 0° position, a sensor 406 b in the 45° position, a sensor 406 c in the 90° position, a sensor 406 d in the 135° position, a sensor 406 e in the 180° position, a sensor 406 f in the 225° position, a sensor 406 g in the 270° position, and a sensor 406 h in the 315° position.
  • each sensor's capacitance measurement reading is converted into X and Y components that provide the finger's location relative to the center of the array of sensors 406 a - h.
  • each sensor 406 is centered on 45° offsets from the unit circle 0°.
  • for a sensor centered at 45°, the firmware multiplies the sensor's capacitance reading by cos(45°) to obtain its X component; for the Y component, the sensor's capacitance reading is multiplied by sin(45°).
  • when a finger is located at (or near) the center of the array, each sensor will have some non-zero, but similar, capacitance reading; because the X components of two oppositely faced sensors (e.g. sensors 406 a and 406 e) have opposite signs, they have a cancelling effect.
  • as the finger moves outward from center, one or two sensors will show increasing capacitance readings, while the readings of the other six or seven sensors will decrease.
  • the result seen in the summed X and Y values tracks the finger away from center and outward in the direction of the one or two sensors that the finger is in contact with.
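  • For illustration, a Python sketch of the X and Y decomposition described above for eight sensors centered at 45° offsets; the capacitance readings in the example are invented:

      import math

      SENSOR_ANGLES_DEG = {
          "406a": 0,   "406b": 45,  "406c": 90,  "406d": 135,
          "406e": 180, "406f": 225, "406g": 270, "406h": 315,
      }

      def finger_position(readings):
          """Sum each sensor's capacitance reading projected onto X and Y.
          Oppositely faced sensors with similar readings cancel; a finger over one
          sector pulls the summed vector toward that sector."""
          x = sum(r * math.cos(math.radians(SENSOR_ANGLES_DEG[s])) for s, r in readings.items())
          y = sum(r * math.sin(math.radians(SENSOR_ANGLES_DEG[s])) for s, r in readings.items())
          return x, y

      # Finger near the 90° sensor: every sensor reads a small baseline, 406c reads high.
      readings = {s: 1.0 for s in SENSOR_ANGLES_DEG}
      readings["406c"] = 6.0
      print(finger_position(readings))  # roughly (0, 5): the vector points toward 406c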
  • a gesture detection algorithm may be included as a part of the firmware of input module 102 c.
  • when a gesture event begins, the algorithm stores the starting location, as determined by the summed X and Y values calculated from the capacitance readings. The algorithm then waits for the finger detection to disappear, at which point the last known location of the finger before it was removed is stored as an ending location. If no timeout is reached, and both a start and stop event have occurred, the gesture algorithm decides which gesture has occurred by analyzing the measured starting point, ending point, slope of the line formed by the two points, distance between the two points, and the change in X and Y.
  • the algorithm may differentiate between multiple gestures.
  • the algorithm determines whether the gesture was horizontal or vertical, then determines whether the motion was forward, backward, up or down.
  • the algorithm may compare the change in X (X2-X1) to the change in Y (Y2-Y1) and select the larger of the two. If the change in Y is larger than the change in X, then the motion is assumed to be vertical.
  • in that case, the algorithm may determine whether Y2-Y1 is positive or negative. For example, if Y2-Y1 is positive, then the motion is assumed to be upward; if Y2-Y1 is negative, then the motion is assumed to be downward. If the change in X is larger than the change in Y, then the motion is assumed to be horizontal.
  • in that case, the algorithm may determine whether X2-X1 is positive or negative. For example, if X2-X1 is positive, then the motion is assumed to be forward; if X2-X1 is negative, then the motion is assumed to be backward.
  • each direction swipe may initiate a separate command.
  • the touch algorithm can also detect that a user is holding the finger on one or more of the sensors of the input module 102 .
  • a “hold” gesture is detected in the event that a timeout occurs prior to an end-of-gesture event (finger lifting off the sensors of input module 102 ). Once it is determined that a finger is not being removed from the sensors, its location is analyzed to determine which command is intended.
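  • A minimal sketch of the start/end-point classification and hold detection described above; the timeout value and gesture labels are assumptions:

      HOLD_TIMEOUT_S = 1.0  # assumed timeout before a sustained press is treated as a hold

      def classify(x1, y1, x2, y2, elapsed_s, finger_lifted):
          # A timeout occurring before the finger lifts off is treated as a hold gesture.
          if not finger_lifted and elapsed_s >= HOLD_TIMEOUT_S:
              return "HOLD"
          dx, dy = x2 - x1, y2 - y1
          if abs(dy) > abs(dx):                      # dominant vertical motion
              return "SWIPE_UP" if dy > 0 else "SWIPE_DOWN"
          return "SWIPE_FORWARD" if dx > 0 else "SWIPE_BACKWARD"

      print(classify(0.0, 0.0, 4.0, 1.0, 0.3, finger_lifted=True))   # SWIPE_FORWARD
      print(classify(0.0, 0.0, 0.2, 0.1, 1.2, finger_lifted=False))  # HOLD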
  • input module 102 c may differentiate between different finger locations when a user interacts with input module 102 c.
  • input module 102 c includes an algorithm that differentiates between four finger locations (e.g. “up”, “down”, “back”, “forward”). This may be done by comparing the capacitance readings of the sensors centered in each cardinal location, for instance sensor 406 a at the 0° position, sensor 406 c at the 90° position, sensor 406 e at the 180° position, and sensor 406 g at the 270° position. The sensor with the highest capacitance reading indicates the position of the finger.
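  • For illustration, a sketch of the four-location discrimination by highest capacitance reading among the cardinal sensors; the mapping of sensors to labels is an assumption:

      CARDINAL_SENSORS = {"406a": "forward", "406c": "up", "406e": "back", "406g": "down"}

      def finger_location(readings):
          """The cardinal sensor with the highest capacitance reading indicates the finger position."""
          best = max(CARDINAL_SENSORS, key=lambda s: readings.get(s, 0.0))
          return CARDINAL_SENSORS[best]

      print(finger_location({"406a": 2.0, "406c": 5.5, "406e": 1.9, "406g": 2.1}))  # up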
  • the input module 102 may detect and differentiate between several different types of user interaction, and control device 100 may use this information to assign a unique command to each of these different user interactions.
  • the user may use one or more actions (e.g. pressing a button, gesturing, waving a hand, etc.) to interact with computing device 150 , without requiring that the user directly interact with computing device 150 .
  • a user may instead use control device 100 to interact with computing device 150 , and may use gestures to replace or supplement the normal commands of computing device 150 .
  • a user may input a forward swiping gesture in order to command the computing device 150 to skip a currently playing content item, and to playback the next content item on a playlist.
  • a user may input a backward swiping gesture in order to command the computing device 150 to playback the previous content item on a playlist.
  • each gesture may correspond to a particular command, and a user may input these gestures into input module 102 rather than manually enter the commands into computing device 150 .
  • a user may also use one or more actions to input commands unrelated to controlling content playback.
  • gestures may be used to input commands related to interacting with other systems on a network, for instance websites and social media sites.
  • a user may input a hold gesture on a forward part of input module 102 in order to command the computing device 150 to share the currently playing content item on a social media site. Sharing may include transmitting data to the social media site that includes identifying information regarding the currently playing content item, a user action relating to the content (e.g. “liking” the content, “linking” the content to other users, etc.), a pre-determined message introducing the content to other users, or any other data related to sharing the content item with others.
  • gestures and other actions may be used to issue any command, including commands to visit a website, send a message (e.g. an email, SMS message, instant message, etc.), purchase an item (e.g. a content item, a physical product, a service, etc.), or any other command that may be performed on the computing device 150 .
  • control system 100 may send commands to computing device 150 based on the proximity of control system 100 to the user.
  • a control system 100 may include an input module 102 with one or more proximity sensors. These sensors may detect when control system 100 is in close proximity to the user, and may issue commands according to this detection.
  • input module 102 may be arranged in such a way that its proximity sensors are positioned to detect the presence of a user when control system 100 is in a typical usage position (e.g. against the body of a user).
  • when control system 100 is moved away from the user and out of a typical usage position, control system 100 may respond by issuing one or more commands to computing system 150 , for instance a command to stop playback of any currently playing content items, a command to send one or more messages indicating that the user is away from the device, or a command to switch computing system 150 into a lower power state to conserve energy. Commands may also be issued when the control system 100 is moved back towards the user and into a typical usage position. For example, when control system 100 is moved back towards the user, control system 100 may respond by issuing commands to computing system 150 to restart playback of a content item, send one or more messages indicating that the user has returned, or a command to switch computing system 150 into an active-use state.
  • control system 100 may send commands to a computing device 150 based on the orientation or motion of control system 100 .
  • a control system 100 may include an input module 102 with one or more accelerometer sensors. These sensors may detect the orientation of control system 100, and may issue commands according to this detection.
  • input module 102 may detect that control system 100 is upright, and send a command to computing system 150 in response.
  • control system 100 may send commands to computing system 150 based on determinations from more than one sensor. For example, control system 100 may determine whether it is being actively used by a user based on determinations from the proximity sensors and the accelerometers. For instance, if the proximity sensors determine that no objects are in proximity to control system 100, and the accelerometers determine that control system 100 is in a non-upright position, control system 100 may determine that it is not being actively used by a user, and will command computing system 150 to enter a lower power state. For all other combinations of proximity and orientation, the system may determine that it is being actively used by a user, and will command computing system 150 to enter an active-use state. In this manner, control system 100 may consider determinations from more than one sensor before issuing a particular command.
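The multi-sensor determination described above can be illustrated with a short sketch. The function and command names below are hypothetical placeholders, not part of the disclosure; the sketch simply shows the combination logic in which only the "nothing nearby and not upright" case triggers a lower power state.

```python
# Sketch of the two-sensor decision described above. The sensor flags and
# command strings are hypothetical placeholders.

def decide_power_command(object_in_proximity: bool, is_upright: bool) -> str:
    """Return the command the control system would send to computing system 150."""
    # Only the combination "nothing nearby AND not upright" is treated as
    # "not actively used"; every other combination keeps the device active.
    if not object_in_proximity and not is_upright:
        return "ENTER_LOWER_POWER_STATE"
    return "ENTER_ACTIVE_USE_STATE"

# Example usage with hypothetical sensor readings:
print(decide_power_command(object_in_proximity=False, is_upright=False))  # ENTER_LOWER_POWER_STATE
print(decide_power_command(object_in_proximity=True, is_upright=False))   # ENTER_ACTIVE_USE_STATE
```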
  • control system 100 may include one or more audio sensors, such as microphones. These sensors may be provided so that control system 100 can detect and interpret auditory data. For instance, an audio sensor may be used to listen for spoken commands from the user. Control system 100 may interpret these spoken commands and translate them into commands to computing system 150. In some implementations, control system 100 may include more than one audio sensor. In some implementations, different audio sensors can be used for different purposes. For example, some implementations may include two audio sensors, one for recording audio used for telephone calls, and one for recording audio for detecting spoken user commands.
  • control system 100 may include one or more display modules in order to display information to a user. These display modules may be, for example, LED lights, incandescent lights, LCD displays, OLED displays, or any other type of display component that can visually present information to a user.
  • control system 100 includes multiple display modules, with either multiple display modules of the same type, or with multiple display modules of more than one type. Display modules can display any type of visual information to a user. For instance, a display module may display information regarding the operation of control system 100 (e.g. power status, pairing status, command-related statuses, etc.), information regarding computing system 150 (e.g. volume level, content item information, email content, Internet content, telephone content, power status, pairing status, command-related statuses, etc.), or other content (e.g. varying aesthetically-pleasing displays, advertisements, frequency spectrum histograms, etc.).
  • control device 100 may be used in conjunction with a variety of user-controllable devices.
  • control device 100 may be used in conjunction with an audio headset 800.
  • Portions of control device 100 may be mounted to headset 800, or mounted within portions of headset 800.
  • portions of control device 100 may be mounted within ear piece 802 , ear piece 804 , or connecting band 806 .
  • portions of input module 102 may be mounted to headset 800 in such a way that they are readily accessible by a user.
  • sensor 402 is mounted on an exterior surface of ear piece 802 , so that a user may interact with control device 100 by touching ear piece 802 .
  • the control device 100 may be connected to a computing device 150 through a connection interface 152 .
  • Connection interface 152 may also provide a connectivity pathway for the transfer of data between headset 800 and the computing device, for instance audio information when computing device 150 plays back a media file.
  • While FIG. 8A illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404a-e and 406a-h), physical controls (for example buttons 408a-d), or combinations of touch sensors and physical controls.
  • control device 100 may be used in conjunction with an audio headset, but the components of control device 100 may be in a separate housing rather than mounted within the headset.
  • a control device 100 may be housed in a casing 820 that is external to a headset 830 . Portions of control device 100 may be mounted to the exterior of the casing 820 .
  • buttons 408 a - d are mounted on an exterior surface of casing 820 , so that a user may interact with control device 100 by touching an exterior surface of casing 820 .
  • Connection interface 152 may be connected on one end to the transmission module 106 of the control device 100 , and may have a detachable connector 824 on the other end.
  • the detachable connector 824 may plug into a computing device 150, and may be repeatedly connected and detached by the user so that control device 100 may be swapped between computing devices. While FIG. 8B illustrates the use of buttons 408a-d, various types of sensors may be used, and in various combinations. For example, one or more touch sensors may be used (for example sensors 402, 404a-e, and 406a-h), physical controls (for example buttons 408a-d), or combinations of touch sensors and physical controls.
  • control device 100 may be used in conjunction with an audio headset, but the components of control device 100 may be in a separate housing that is mounted away from the headset.
  • a control device 100 may be housed in a casing 820 that is separate from a headset 842 . Portions of control device 100 may be mounted to the exterior of the casing 840 .
  • touch sensor 402 is mounted on an exterior surface of casing 840 , so that a user may interact with control device 100 by touching an exterior surface of casing 840 .
  • Connection interface 152 may be connected on one end to the transmission module 106 of the control device 100 , and may have a detachable connector 824 on the other end.
  • the detachable connector 824 may plug into a computing device 150, and may be repeatedly connected and detached by the user so that control device 100 may be swapped between computing devices.
  • a connector port 826 may be provided on the exterior of casing 820 , where the connector port 826 provides detachable data transmission access to control device 100 .
  • connector port 826 provides a connection for data transmission between computing device 150 and headset 842 , such that headset 842 can also communicate with computing device 150 .
  • a user may plug the headset 842 into connector port 826 so that the headset or presentation device can receive audio information from computing device 150 .
  • other devices may be plugged into connector port 826 , either in addition to or instead of headset 842 .
  • a microphone may be plugged into connector port 826, such that audio information from the microphone is transmitted to control system 100, then transmitted to computing device 150.
  • a display device may be plugged into connector port 826 , such that audio and/or video data from computing device 150 is transmitted to the display device for presentation.
  • more than one device may be plugged into connector port 826.
  • a headset and a display device may be plugged into connector port 826 , such that audio information from computing device 150 is played back on the headset, and video information is played back on the display device.
  • a headset and a microphone may be plugged into connector port 826 , such that audio information from computing device 150 is played back on the headset, and audio information from the microphone is transmitted from the microphone to computing device 150 .
  • While FIG. 8C illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404a-e and 406a-h), physical controls (for example buttons 408a-d), or combinations of touch sensors and physical controls.
  • a user may use a control device 100 during a performance in order to share information regarding the performance with a group of pre-determined recipients or recipients belonging to a group.
  • a control device 100 may be connected to a computing device 150, and to one or more other devices, such as a headset, a microphone, or a display device.
  • a user may use computing device 150 to play content items for an audience, for example to play audio and/or video content to an audience as a part of a performance.
  • the user may use the one or more devices connected to control device 100, for instance a headset (e.g. to monitor audio information from computing device 150), a microphone (e.g. to address an audience), and a display device (e.g. to present visual information to the audience).
  • the user may also use control system 100 to send commands to computing device 150, such as to control the playback of content items, and to share information regarding the performance.
  • the user may use control system 100 to command computing device 150 to transmit information regarding the currently playing content item (e.g. the name of a song or video, the name of the creator of the song or video, or any other information) to one or more recipients, such as by posting a message to a social media site, by emailing one or more users, by sending an SMS or instant message to one or more users, or by other such communications methods.
  • a user may use a control device 100 in conjunction with several other devices to render a performance, as well as to share information regarding the performance to one or more recipients.
  • control device 100 may be used in conjunction with other audio and video playback devices, for instance a speaker apparatus 860 .
  • Portions of control device 100 may be mounted to the speaker apparatus 860, or mounted within portions of the speaker apparatus 860.
  • portions of input module 102 may be mounted to speaker apparatus 860 in such a way that they are readily accessible by a user.
  • sensor 402 is mounted on an exterior surface of speaker apparatus 860, so that a user may interact with control device 100 by touching an exterior surface of speaker apparatus 860.
  • speaker apparatus 860 may also include a connection interface (not shown) that provides data connectivity between control device 100 and a computing device 150.
  • speaker apparatus 860 includes a computing device 150 .
  • the computing device 150 includes data storage and data processing capabilities, such that media files may be stored and played back from within the speaker apparatus 860 .
  • the computing device 150 may also include one or more data connection interfaces, such that a user may transmit media files to the computing device 150 for playback.
  • the data connection interfaces may include components for transferring data through local area networks (LANs), wide area networks (WANs), Bluetooth networks, cellular networks, or any other network capable of data transmission.
  • While FIG. 8D illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404a-e and 406a-h), physical controls (for example buttons 408a-d), or combinations of touch sensors and physical controls.
  • control device 100 may be used in conjunction with other devices not normally associated with media playback.
  • a control device 100 may be used in conjunction with a steering wheel 880 .
  • a steering wheel 880 is manipulated by a user to control the direction of a vehicle.
  • portions of control device 100 may be mounted to the steering wheel 880, or mounted within portions of steering wheel 880.
  • portions of input module 102 may be mounted to steering wheel 880 in such a way that they are readily accessible by a user.
  • sensor 402 is mounted on an exterior surface of the steering wheel 880 , so that a user may interact with control device 100 by touching steering wheel 880 .
  • the control device 100 may be connected to a computing device 150 through a connection interface 152. While FIG. 8E illustrates the use of multiple touch sensors 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404a-e and 406a-h), physical controls (for example buttons 408a-d), or combinations of touch sensors and physical controls.
  • control device 100 may be used in conjunction with any user-operated device, and may be used to control any computing device 150 .
  • the control system 100 may include a computing component attached to a headphone 800 (such as one of headphones 800a-e), where the computing component includes a touch pad, a motion sensor, and/or a camera.
  • a user may make a hand gesture near the headphone to provide a control command to the audio source, e.g., swirling the index finger clockwise may indicate a "replay" request; pointing one finger forward may indicate a "forward" request; pointing one finger downward may indicate a "pause" request, and/or the like.
  • control system 100 may capture the user's hand gesture via a camera, a remote control sensor held by the user, or the user making different movements on the touch pad, and/or the like. Such captured movement data may be analyzed by the control system 100 and translated into control commands for the audio source to change the audio playing status.
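As a rough illustration of translating captured gestures into playback commands, the following sketch maps hypothetical gesture labels (assumed names, not defined in the text) to the "replay," "forward," and "pause" requests mentioned above.

```python
# Sketch of mapping recognized gesture labels to audio-source commands.
# Gesture labels and command strings are illustrative assumptions.

GESTURE_TO_COMMAND = {
    "index_finger_swirl_clockwise": "REPLAY",
    "point_forward": "FORWARD",
    "point_downward": "PAUSE",
}

def translate_gesture(gesture_label):
    """Return the control command for a captured gesture, or None if unknown."""
    return GESTURE_TO_COMMAND.get(gesture_label)

print(translate_gesture("point_downward"))  # PAUSE
print(translate_gesture("wave"))            # None (unrecognized gesture)
```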
  • control system 100 may facilitate social sharing between users. For example, a user may make a command so that the control system 100 may automatically post the currently played song to social media, e.g., Tweeting “John Smith is listening to #Scientist #Coldplay,” a Facebook message “John Smith likes Engineer, Coldplay,” and/or the like.
  • a user may make a gesture to share audio content to another user using another control system 100 .
  • the user may make an "S"-shaped gesture on a touch pad of the control system 100 of a headphone 800, which may indicate "sharing" with another control system 100 user in a detectable range (e.g., Bluetooth, etc.).
  • the control system 100 may communicate via Near Field Communication (NFC) handshake.
  • the second control system 100 may receive the sharing message and adjust the audio source to an Internet radio the first control system 100 user is listening to, so that the two users may be able to listen to the same audio content.
  • the sharing may be conducted among two or more control system 100 users.
  • the control system 100 may share the radio frequency from one user to another, so that they can be tuned to the same radio channel.
  • control system 100 may allow a user to configure user preferred “shortcut keys” for a command.
  • the control system 100 may be connected to a second device (e.g., other than a headphone), such as a computer, a smart phone, and/or the like, which may provide a user interface for a user to set up shortcut-key movements.
  • the user may select a one-finger double-tap as sharing the currently played song to a social media platform (e.g., Twitter, Facebook, etc.) as a "like" event, a two-finger double-tap as sharing the currently played song to social media by posting a link to the song, and/or the like.
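A configurable shortcut table of the kind described above might be sketched as follows; the gesture and command identifiers are assumptions chosen only to mirror the double-tap examples.

```python
# Sketch of a user-configurable shortcut table. Gesture and command names are
# hypothetical, chosen to mirror the double-tap examples above.

shortcut_keys = {}  # populated through a companion interface on a second device

def configure_shortcut(gesture, command):
    """Store a user-chosen gesture-to-command association."""
    shortcut_keys[gesture] = command

def handle_gesture(gesture):
    """Look up the command for a gesture; fall back to no action."""
    return shortcut_keys.get(gesture, "NO_ACTION")

configure_shortcut("one_finger_double_tap", "SHARE_CURRENT_SONG_AS_LIKE")
configure_shortcut("two_finger_double_tap", "SHARE_CURRENT_SONG_LINK")

print(handle_gesture("one_finger_double_tap"))  # SHARE_CURRENT_SONG_AS_LIKE
```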
  • the control system 100 may include a headphone with aesthetic designs.
  • the earpad portion may have a transparent design, a colored exterior spin that may feature sponsor information and/or branding logos.
  • the control system 100 may include a headphone with a touch pad and a touch screen that may show social sharing information (e.g., Tweets, Facebook messages, etc.).
  • the control system 100 may include a headphone with a removable headband portion to feature user customized graphics. The user may remove the headband portion from the headphone for cleaning purposes.
  • the control system 100 may include a headphone that may be adaptable to helmets.
  • control system 100 may be engaged in a "DJ display" mode, wherein a digital screen on the headphone may display color visualizations, including varying color bars that illustrate the frequency of the audio content being played.
  • the control system 100 may provide APIs to allow third-party services to integrate with the control system 100.
  • the control system 100 may include a microphone so that a user may speak over a phone call.
  • a user may instantiate a mobile component at the audio source (e.g., a computer, a smart phone, etc.).
  • when the audio source detects an incoming audio communication request (e.g., a Skype call, a phone call, and/or the like), the control system 100 may automatically turn down the volume of the media player, and a user may make a gesture to answer the incoming audio communication request, e.g., by tapping on the touch pad of the headphone if the user has configured one tap as the shortcut key, etc.
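The incoming-call behavior above could be sketched as follows, assuming a hypothetical handler class, a user-configured single-tap answer gesture, and an arbitrary ducked volume level.

```python
# Sketch of the incoming-call behavior: duck the media volume when a call
# arrives, then answer when the user's configured gesture is received.
# The class, gesture name, and volume value are assumptions.

class CallHandler:
    def __init__(self, answer_gesture="single_tap"):
        self.answer_gesture = answer_gesture
        self.media_volume = 1.0

    def on_incoming_call(self):
        # Automatically turn down the media player volume.
        self.media_volume = 0.2

    def on_gesture(self, gesture):
        # Answer only if the gesture matches the configured shortcut.
        return "ANSWER_CALL" if gesture == self.answer_gesture else "IGNORE"

handler = CallHandler()
handler.on_incoming_call()
print(handler.media_volume)              # 0.2
print(handler.on_gesture("single_tap"))  # ANSWER_CALL
```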
  • control system 100 may allow a user to sing and record the user's own singing.
  • the Motion-HP may instantiate a "Karaoke" mode so that the control system 100 may remix the background soundtrack of a song that is being played with the recorded user's singing to make a cover version of the song.
  • the user may make a gesture on the touch pad of the control system 100 to share the “cover” version to social media.
  • control system 100 may provide audio recognition (e.g., a “Shazam” like component, etc.).
  • the control system 100 may identify the audio content via an audio recognition procedure.
  • control system 100 may broadcast audio content it receives from an audio source to other control systems 100 via Bluetooth, NFC, etc.
  • a user may connect his/her control system 100 to a computer to listen to media content, and broadcast the content to other control systems 100 so that other users may hear the same media content via broadcasting without directly connecting to an audio source.
  • control system 100 may include accelerometers to sense the body movement of the user to facilitate game control in a game play environment.
  • the control system 100 may be engaged as a remote game control via Bluetooth, NFC, WiFi, and/or the like, and a user may move his head to create motions which indicate game control commands.
  • control system 100 may automatically send real-time audio listening status of a user to his subscribed followers, e.g., the fan base, etc.
  • control system 100 may be accompanied by a wrist band, which may detect a user's pulse to determine the user's emotional status, so that the control system 100 may automatically select music for the user. For example, when a heavy pulse is sensed, the control system 100 may select soft and soothing music to the user.
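A minimal sketch of the pulse-driven selection might look like the following; the beats-per-minute threshold and playlist names are assumptions, since the text does not specify them.

```python
# Sketch of pulse-driven music selection; the 100 bpm threshold and playlist
# names are assumptions.

def select_music(pulse_bpm):
    """Pick a playlist category based on the wrist band's pulse reading."""
    # A heavy (fast) pulse maps to soft, soothing music, per the example above.
    return "soothing_playlist" if pulse_bpm >= 100 else "default_playlist"

print(select_music(120))  # soothing_playlist
print(select_music(70))   # default_playlist
```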
  • control system 100 may comprise a flash memory to store the user's social media feeds, user's configuration of audio settings, user defined shortcut keys, and/or the like. For example, when the user connects a control system 100 to a different audio source, the user does not need to re-configure the parameters of control system 100 .
  • control system 100 may allow a user to add third party music services, such as but not limited to iTunes, Pandora, Rhapsody, and/or the like, to the control system 100 .
  • the user may configure shortcut keys for selection of music services, control the playlist, and/or the like.
  • control system 100 may provide registration services, and a user may need to register in order to access full usage of the control system 100.
  • a user may access a registration platform via a computer, etc.
  • a user may be allowed to access limited features of the control system 100, e.g., play music, etc., but may not be able to access additional features such as "DJ mode," "Karaoke mode," and/or the like.
  • control system 100 includes analytics for targeting advertisements, revenue sharing between advertising channels and sponsors, music selection and recommendation to a user, and/or the like.
  • FIG. 10 illustrates an exemplary embodiment of the apparatus 1000 .
  • the headphones 1000 are operably connected to a media player 1002 through a connection 1004, for instance a hardwired connection or a wireless connection, such as Bluetooth or Wi-Fi.
  • the media player 1002 communicates with a network gateway 1006 through wireless network connection 1008 , such as through a cellular connection or Wi-Fi connection.
  • the network gateway 1006 provides network connectivity to the Internet 1010 through a network connection 1014 , facilitating access to various content and service providers 1012 a - d connected to the Internet 1010 through network connections 1016 .
  • Content and service providers 1012 a -d may include email servers, social media sites, ad servers, and content servers.
  • the media player 1002 may be one of many types of mobile devices, such as a cellular telephone, a tablet, a computer, a pager, a gaming device, or a media player.
  • the wireless network connection 1008 may be one of many types of communications networks through which data can be transferred, such as a Wi-Fi network, a cellular telephone network, a satellite communications network, a Bluetooth network, or an infrared network.
  • the content and service providers 1012 a - d may also include search engines, digital content merchant sites, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • FIG. 11A illustrates an example embodiment of the headphones 1000 .
  • the headphones include a first earpiece assembly 1102 , a second earpiece assembly 1104 , and a headband assembly 1106 that securely positions the earpieces 1102 and 1104 over the ears of a user.
  • Each earpiece assembly 1102 and 1104 includes one or more externally accessible touch sensor arrays 1108 and 1110 for user interaction.
  • FIG. 11B illustrates the components of the first earpiece assembly 1102 .
  • Mounted on the Main PCB 1112 are a microcontroller 1114, a baseband digital signal processor (DSP) 1116, a Kalimba DSP 1118, an audio/video codec 1120, random access memory (RAM) 1122, and non-volatile "flash" memory 1124.
  • Also connected to the Main PCB 1112 are a USB connector 1126, a wired connector 1128, light emitting diode (LED) indicators 1130, a power switch 1132, an audio driver 1134, and touch sensor array 1108.
  • the first earpiece assembly 1102 is connected to the second earpiece assembly 1104 through a wired connection 1136 passing through the headband assembly 1106.
  • FIG. 11C illustrates the components of the second earpiece assembly 1104 .
  • the Slave PCB 1138 is connected to the Main PCB 1112 of the first earpiece assembly 1102 through a hardwire connection 1136 . Also connected to the Slave PCB 1138 are a battery 1142 , microphone array 1144 , near-field communication (NFC) module 1146 , an audio driver 1148 , and a touch sensor array 1110 .
  • the Main PCB 1112 and Slave PCB 1138 provide connectivity between the various components of the earpiece assemblies.
  • the microcontroller 1114 accepts inputs from the touch sensor arrays 1108 and 1110, USB connector 1126, and wired connector 1128, and if necessary, translates the inputs into machine-compatible commands. Commands and other data are transmitted between the microcontroller and the connected components. For example, audio from the microphone array 1144 and the wired connector 1128 is digitally encoded by the codec 1120 and processed by the baseband DSP 1116 and Kalimba DSP 1118, where it may be modified and mixed with other audio information. Mixed audio is decoded by the codec 1120 into an analog representation and is output to the audio drivers 1134 and 1148 for playback.
  • LEDs 1130 are connected to the microcontroller 1114 and may be illuminated or flashed to indicate the operational status of the headphone apparatus. Power is supplied by the battery 1142 connected to the microcontroller 1114, and power may be toggled by using a power switch 1132. Additional components, such as wireless transceivers 1150, may be connected to and controlled by the microcontroller 1114.
  • the microcontroller 1114 may transmit data to an externally connected computing device, such as a smart phone or media player, via the wireless transceivers 1150, the USB connector 1126, or the wired connector 1128.
  • the data may include data used to identify the specific model, features, and unique identifying information of the headphones.
  • one or more of the touch sensor arrays 1108 and 1110 may instead be physical buttons, switches, or dials. Additional connectors may be provided on the first or second earpiece assemblies 1102 and 1104 , including an audio output port, an optical port, Firewire port, an Ethernet port, a SATA port, a power input port, a Lightning port, or a serial port. Power, digital data, or analog data may be input into the apparatus or output from the apparatus using these ports.
  • the headphone apparatus 1000 may also include a video display unit, such that visual content may be displayed on the device.
  • the video display unit may be a LCD display, or may be a heads-up display (HUD) that overlays visual data over a transparent or translucent viewing element.
  • one or more of the components stored in each of the earpiece assemblies 1102 and 1104 may be relocated to the other earpiece assembly or to an external housing unit.
  • the housing unit may be positioned on the headband 1106 , on one of the wired connections (i.e. connections 1128 and 1136 ), or elsewhere on the headphone apparatus 1000 .
  • the headphone 1000 may have a GPS device that can be used to determine locational data.
  • the battery 1142 is removable.
  • the user may use the touch sensor arrays 1108 and 1110 to input commands into the headphone apparatus 1000 .
  • each of the individual input surfaces of sensor arrays 1108 and 1110 may be programmed to correspond to specific functions, such as play, stop, rewind, fast forward, pause, repeat, skip, volume increase, or volume decrease.
  • Additional commands may include a command to wirelessly “pair” the headphone 1000 to another wireless device, a command to create a post on a social networking site, a command to draft an email, or a command to search for additional information regarding the media content currently being played.
  • the touch sensor arrays 1108 and 1110 may be of a PCB, Flex-PCB, or ITO film based design.
  • Additional commands may be programmed depending on the length of time the button or touch sensor is activated. For example, a brief touch may correspond to a command to fast forward, while a longer touch may correspond to a command to skip forward to the next track. Additional commands may be programmed depending on a sequence of multiple inputs. For example, pressing the touch array 1108 or 1110 twice may correspond to a command to create a post on a social media site, while pressing the touch array 1108 or 1110 three times may correspond to a command to draft an email. In addition, touching the sensor array 1108 or 1110 in a specific order and within a certain timeframe, such as to simulate a gesture, can correspond to a command.
  • touching the bottom, middle, and top sensors of array 1108 in sequence in single sliding motion may correspond to a command to increase the volume.
  • Touching the top, middle, and bottom sensors of array 1108 in sequence in a single sliding motion may correspond to a command to decrease the volume.
  • Other such "gestures" can be recognized as user commands, including a sliding left-to-right motion, a sliding right-to-left motion, a clockwise circular motion, or a counter-clockwise circular motion.
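The duration- and sequence-based commands described above suggest a small recognizer that buffers touches inside a time window and matches them against ordered patterns. The sketch below assumes sensor names ("bottom," "middle," "top"), a half-second window, and command strings that are not specified in the text.

```python
# Sketch of recognizing ordered touch sequences inside a time window, as in
# the sliding volume gestures above. Sensor names, the 0.5 s window, and the
# command strings are assumptions.

SEQUENCE_COMMANDS = {
    ("bottom", "middle", "top"): "VOLUME_UP",
    ("top", "middle", "bottom"): "VOLUME_DOWN",
}
WINDOW_SECONDS = 0.5

class TouchSequenceRecognizer:
    def __init__(self):
        self.events = []  # list of (sensor_name, timestamp) pairs

    def on_touch(self, sensor_name, timestamp):
        """Record a touch and return a command if a full sequence matched."""
        # Drop touches that fall outside the sliding time window.
        self.events = [(s, t) for s, t in self.events
                       if timestamp - t <= WINDOW_SECONDS]
        self.events.append((sensor_name, timestamp))
        command = SEQUENCE_COMMANDS.get(tuple(s for s, _ in self.events))
        if command:
            self.events.clear()
        return command

recognizer = TouchSequenceRecognizer()
recognizer.on_touch("bottom", 0.00)
recognizer.on_touch("middle", 0.15)
print(recognizer.on_touch("top", 0.30))  # VOLUME_UP
```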
  • the headphone 1000 may be “paired” with another device through a wireless connection, such that the headphone will only communicate with the paired device.
  • Example wireless connections may include Bluetooth, enabled through an appropriately provided Bluetooth transceiver.
  • NFC tags, for instance a tag on NFC module 1146, may be used to simplify the "pairing" process.
  • the NFC tag may be pre-programmed from the factory with the unique Bluetooth ID information of the Bluetooth transceiver.
  • a device capable of reading NFC tags can be passed over the NFC tag in order to access the Bluetooth ID information. This information can be used to uniquely identify the Bluetooth transceiver contained within the headphone assembly and to establish the “paired” connection without requiring additional manual entry of the Bluetooth ID by a user.
  • the NFC tag may also contain other information used to identify the specific model, features, and unique identifying information of the headphones 1000 .
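The NFC-assisted pairing flow might be sketched as follows; the tag payload format and the way the paired connection is represented are assumptions, since the text only states that the tag carries the Bluetooth ID and identifying information.

```python
# Sketch of NFC-assisted pairing: read the factory-programmed Bluetooth ID
# from the tag and use it to establish the paired connection without manual
# entry. The tag payload layout and the pairing record are assumptions.

def read_bluetooth_id(nfc_tag_payload):
    """Extract the Bluetooth ID stored on the NFC tag at the factory."""
    return nfc_tag_payload["bluetooth_id"]

def pair_headphones(nfc_tag_payload, paired_devices):
    """Add the headphone's Bluetooth ID to the reader's paired-device set."""
    paired_devices.add(read_bluetooth_id(nfc_tag_payload))
    return paired_devices

tag = {"bluetooth_id": "00:11:22:33:44:55", "model": "headphone-1000"}
print(pair_headphones(tag, set()))  # {'00:11:22:33:44:55'}
```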
  • FIG. 12 illustrates exemplary tasks that may be performed by various implementations of the present technology.
  • a media player (for instance, media player 1002) loads a playlist of media content to be played (block 1202), plays the media content, and recognizes contextual information about the media contents of the playlist (block 1204). Examples of contextual information may include the name of the track. The media player may also determine the location of the user using a built-in GPS sensor, or using a GPS sensor located on the headphone assembly (block 1205).
  • the apparatus may deliver supplemental content to the user.
  • the media player sends a request to content servers (for instance, content servers 1012 a - d ) for supplemental content based on the contextual and location information acquired (block 1206 ).
  • Supplemental content may include information such as biographical information about the artist, album art or other visual data about the artist, social media messages written by or written about the artist, a list of past and previous tour dates by the artist, “remixed” or alternative tracks, a listing of related merchandise, or a list of “similar” artists and tracks.
  • the media player receives the supplemental content from the content servers (block 1208), aggregates the summary information into display templates (block 1210), and displays the aggregated information to the user (block 1212).
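A compact sketch of blocks 1206 through 1212 appears below; the request fields, the mocked server response, and the template layout are illustrative assumptions rather than the actual content-server interface.

```python
# Sketch of blocks 1206-1212: build a request from contextual and location
# information, mock a content-server response, and fill a display template.
# Field names, the mocked response, and the template are assumptions.

def fetch_supplemental_content(context):
    # Placeholder for a request to content servers 1012a-d (blocks 1206/1208).
    return {
        "biography": "About " + context["artist"] + " ...",
        "tour_dates": ["2013-07-01", "2013-07-15"],
        "similar_artists": ["Artist A", "Artist B"],
    }

def aggregate_into_template(context, supplemental):
    # A trivial stand-in for display template 1300 (blocks 1210/1212).
    lines = ["Now playing: {track} by {artist}".format(**context)]
    lines += ["{}: {}".format(key, value) for key, value in supplemental.items()]
    return "\n".join(lines)

context = {"track": "Example Track", "artist": "Example Artist", "location": "Miami, FL"}
print(aggregate_into_template(context, fetch_supplemental_content(context)))
```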
  • An example display template 1300 with aggregated information 1302 is illustrated in FIG. 13.
  • a user may interact with the aggregated data by selecting an item 1306 that he wishes to learn more about.
  • the phone will direct the user to an external site where more detailed information is displayed about the selected item, or to an Internet-based marketplace where merchandise related to the selected item is offered for sale (block 1214 ).
  • the apparatus may also deliver ad content based on the contextual information and location information collected.
  • the media player sends a request to ad servers for ad content based on the contextual and location information acquired (block 1220 ).
  • Ad content may include static images, videos, text, audio recordings, or other forms of media (block 1222 ).
  • the media player receives the ad content from the ad servers, inserts the ads into display templates (block 1224 ), and displays the ads to the user (block 1226 ).
  • An example display template 1300 with ads 1308 is illustrated in FIG. 13 .
  • the user may interact with the ads by selecting an ad that he wishes to learn more about.
  • the phone will direct the user to an external site where more detailed information is displayed about the selected ad, or to an Internet-based marketplace where merchandise related to the selected ad is offered for sale (block 1228 ).
  • the apparatus may also allow the user to share media or other content with one or more users.
  • the media player receives a command from the user to share content with a local second user (block 1240).
  • the command may be a voice command or an input from the touch sensor array.
  • the media player searches for and connects to the local second user's device over a wireless connection (block 1242).
  • A wireless connection can be established over any of several common wireless networks, including Wi-Fi, Bluetooth, or infrared. After establishing a connection, the media player transmits the content to the second user's device over the wireless connection (block 1244).
  • the user may instead share media or other content with one or more users over an Internet connection.
  • the media player may access the Internet and search for a second user or for a content sharing site through the Internet connection. Access to the Internet may be over any of several common wireless networks including Wi-Fi, Bluetooth, infrared, a cellular network, or a satellite network.
  • the media player connects to the second user's device or the content sharing site over the Internet connection, and transmits the content to the second user's device or content sharing site.
  • the media player may also draft and send a message to one or more users, notifying the one or more users of the newly shared content and providing the location from which it can be retrieved.
  • the apparatus may also allow the user to interact with various social media sites based upon the contextual data and locational data acquired.
  • the media player receives a command from the user to interact with a social media site (block 1260 ).
  • the media player generates a message or an action based upon the contextual and location information (block 1262 ). Examples of messages may include “[User Name] is listening to [Track Name] by [Artist Name] at [Location]”, “[User Name] is playing [Album Name] on the way to [Location],” or any similar message identifying contextual and location information in a social media compatible format.
  • Messages and actions may be transmitted to social media sites using established application programming interfaces (APIs) to ensure compatibility (block 1264 ).
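The message-generation step (block 1262) and the API transmission step (block 1264) could be sketched as follows; the template string follows the examples above, while the posting function is a placeholder rather than any real site's API.

```python
# Sketch of blocks 1262-1264: generate a message from contextual and location
# information, then hand it to a placeholder posting function. The template
# follows the examples above; the posting call is not a real site API.

def generate_message(user, track, artist, location):
    return "{} is listening to {} by {} at {}".format(user, track, artist, location)

def post_message(message, site):
    # Stand-in for transmitting the message via the site's published API.
    return {"site": site, "status": "queued", "body": message}

message = generate_message("John Smith", "Scientist", "Coldplay", "Miami, FL")
print(post_message(message, "twitter"))
```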
  • the message may also be modified by the user to allow for personalization.
  • the message may also include photographs, videos, audio, or any other related content, either generated by the user or retrieved from content servers or ad servers.
  • Examples of actions may include “liking” an artist or track and subscribing to an artist's social media page.
  • Example social media sites may include Facebook, Twitter, Google+, Instagram, or any other such site.
  • the apparatus may also send messages or perform other such actions over other networking sites or services, such as email, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • the apparatus may operate in “karaoke mode,” such that it records the user's voice and mixes it with a background audio sound track.
  • the apparatus enters “karaoke mode” after receiving an appropriate command from the user via voice command or touch sensor input (block 1402 ).
  • Audio content from a playlist is played on one side audio channel (i.e. audio driver 1134 ), while audio is recorded from the microphone (i.e. microphone array 1144 ) and played over the other side audio channel (i.e. audio driver 1148 ) (block 1404 ). Audio from the microphone is mixed with the audio track and saved locally, for example on the flash memory 1124 or RAM 1122 (block 1406 ).
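The mixing step of blocks 1404 and 1406 can be sketched as a simple per-sample sum; the sample representation and the voice gain are assumptions, since the text does not describe the mixing math.

```python
# Sketch of the karaoke mixing step (blocks 1404-1406): sum background-track
# samples with microphone samples. The sample lists and 0.8 voice gain are
# assumptions; real audio would be mixed by the DSPs and codec.

def mix_tracks(background, voice, voice_gain=0.8):
    """Mix two lists of audio samples into one track, sample by sample."""
    length = min(len(background), len(voice))
    return [background[i] + voice_gain * voice[i] for i in range(length)]

background_samples = [0.1, 0.2, 0.3, 0.2]   # played on one audio channel
microphone_samples = [0.0, 0.1, 0.1, 0.0]   # recorded from the microphone array
print(mix_tracks(background_samples, microphone_samples))
# approximately [0.1, 0.28, 0.38, 0.2]
```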
  • the mixed audio track may be uploaded to a content sharing site or a social media site via an appropriate Internet connection (block 1410 ).
  • the mixed audio track may be shared using mechanisms described above, such as through the use of a generated message on a social media site, a generated email message, or message through any other such communications network (block 1410 ).
  • This generated message is then transmitted to the recipient, for example transmitted to the social media site using an appropriate API or to an email server for transmission to the recipient (block 1412 ).
  • the mixed audio track may also be retained on local storage for future playback.
  • “karaoke mode” may instead identify the selected audio track using contextual information and access a vocal-free version of the audio track from an appropriate content server.
  • the vocal-free version of the audio track may be used in place of the vocalized version, resulting in a “karaoke” mix that better accentuates the user's own voice without interference from the original vocalizations.
  • the vocal-free version of the audio track may also be mixed with the vocalized version, such that a reduced portion of the original vocalizations remain in the final mix.
  • accessing the vocal-free versions may also include connecting to an Internet-connected marketplace, such that vocal-free versions may be purchased, downloaded, stored, and used for “karaoke” mode using the apparatus.
  • the features of the media player 1002 may be limited or enabled based upon the connected headphone 1000. Identifying information from the headphone 1000 may be transferred from the headphone 1000 to the media player 1002 via the wired connector 1128 or via a wireless connection, such as through a Bluetooth network, Wi-Fi network, NFC, or other such communication connections. Identifying information is validated against a list of authorized devices, and features of the media player may be disabled or enabled as desired. For example, a user may plug in a headphone 1000 as described above. Information identifying the headphone is transmitted to the media player 1002 and is validated against a recognized list of compatible devices, and all features of the media player are enabled as a result. The user may alternatively plug in a headphone that is not recognized by the media player 1002. Certain features, for example "karaoke mode," may be disabled on the media player as a result.
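A sketch of this validation and feature-gating step follows; the device identifiers and feature names are invented for illustration, as the text does not enumerate them.

```python
# Sketch of validating a headphone's identifying information and gating
# features. Device IDs and feature names are invented for illustration.

AUTHORIZED_DEVICES = {"HEADPHONE-1000-REV-A", "HEADPHONE-1000-REV-B"}
ALL_FEATURES = {"playback", "karaoke_mode", "dj_mode", "social_sharing"}
BASIC_FEATURES = {"playback"}

def enabled_features(headphone_id):
    """Return the feature set unlocked by the connected headphone."""
    # Recognized headphones unlock every feature; unrecognized ones are
    # limited to basic playback, as in the example above.
    return ALL_FEATURES if headphone_id in AUTHORIZED_DEVICES else BASIC_FEATURES

print(sorted(enabled_features("HEADPHONE-1000-REV-A")))  # all features
print(sorted(enabled_features("UNKNOWN-HEADSET")))        # ['playback']
```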
  • Various implementations of the present invention allow for the control of, interaction with, and creation of content via a remote device, such as an audio headphone, to a base station such as a mobile device, mp3 player, cell phone, mobile phone, smart phone, tablet computer, e-book reader, laptop computer, smart television, smart video screen, networked video players, game networks and the like.
  • example embodiments of the present invention allow for the programming of short-cut commands, such as hand gestures received at or near the headphones, to initiate a command on a software application running on a smart phone, such as posting a "like" on a social network relating to a song played on the headphone.
  • Previous attempts to control content or content players via remote devices such as headphones and remote controls have allowed user manipulation of the audio visual content as experienced by the user (e.g., adjusting volume, pausing, rewinding, etc.).
  • Implementations of the present invention allow for the user to create additional content from the remote device for distribution over a network, such as comments relating to content, accessing promotional offers, product registration, participation in live promotions, etc.
  • Such layered content creation has previously been done through user input at the base device, such as typing into a smart phone to indicate a favorable response or opinion for a song.
  • a user can program the base device, such as a smart phone, to recognize simple inputs made at the remote device and associate those inputs with a specific command to be executed in programs or applications running on the device or accessible by the device.
  • a user can download a program onto a smartphone that recognizes input made via an input pad on a headphone.
  • the input, such as a circle made by the finger on the input pad (or touch sensor array), can be associated with a command in an mp3 player application.
  • the circle motion can be associated with a command to pull all songs of a related genre from a sponsor's play list.
  • a method of remote access to a hosted application comprises the steps of creating an associated command (block 1500) (e.g., abbreviated inputs at a remote device associated with the execution of a function or step in a hosted application) and receiving a remote command for execution.
  • a method of remote access to a hosted application comprises the steps of: recording a user input from a sensor on a remote device (block 1502 ); associating the recorded user input with a specific command (block 1504 ); storing the command-input association (block 1506 ); receiving a user input on a sensor on a remote device (block 1508 ); transmitting the user input from the remote device to a base device (block 1510 ); receiving at the base device the user input transmitted from the remote device (block 1512 ); comparing the input with the previously recorded inputs for association with a command specific to an application running on or accessible by the base device (block 1514 ); matching the user input to the desired command (block 1516 ) and executing the command (block 1518 ).
  • the execution of the command may initiate certain cloud functionality (block 1520) to allow user interaction with content available over a network, such as the Internet, a web page, a blogosphere, a blog spot, a social network, a shared media network, a closed or private network, and the like.
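The associate/store/match flow of blocks 1502 through 1518 might be sketched as follows; the input-pattern and command strings are hypothetical, chosen to echo the circle-gesture example above.

```python
# Sketch of the associate/store/match flow (blocks 1502-1518). Input-pattern
# and command strings are hypothetical, echoing the circle-gesture example.

class CommandAssociator:
    def __init__(self):
        self.associations = {}  # recorded input pattern -> application command

    def record_association(self, input_pattern, command):
        # Blocks 1502-1506: record the user input and store the association.
        self.associations[input_pattern] = command

    def handle_remote_input(self, input_pattern):
        # Blocks 1512-1518: compare the received input against stored inputs,
        # match it to a command, and return the command to be executed.
        return self.associations.get(input_pattern, "UNRECOGNIZED_INPUT")

base_device = CommandAssociator()
base_device.record_association("circle_on_touch_pad", "PULL_SPONSOR_GENRE_PLAYLIST")
print(base_device.handle_remote_input("circle_on_touch_pad"))  # PULL_SPONSOR_GENRE_PLAYLIST
```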
  • Various implementations of the invention utilize human vital and biological data collected via the external device, such as interactive headphones, to choose music according to mood and/or activity level. For example, when a user is working out in the gym more up-beat music is played while running and more relaxing music is played as the user begins to walk, cool-off and wind down an activity session.
  • This includes a relational database of music, artists, and songs with mood classifications (pumped-up, calm/relax, etc.).
  • the association of content with activity can be made with simple commands entered via the touch pad on the interactive headphones, or the device can include an accelerometer to detect activity levels.
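A sketch of the activity-to-mood mapping follows; the acceleration threshold, mood labels, and playlist contents are assumptions standing in for the relational database of mood-classified music described above.

```python
# Sketch of mapping an accelerometer-derived activity level to a mood class
# and then to mood-tagged tracks. The threshold, mood labels, and track names
# are assumptions standing in for the relational database described above.

MOOD_PLAYLISTS = {
    "pumped_up": ["Up-beat Track 1", "Up-beat Track 2"],
    "calm_relax": ["Relaxing Track 1", "Relaxing Track 2"],
}

def classify_activity(acceleration_magnitude):
    # Higher sustained movement (e.g. running) maps to "pumped_up"; lower
    # levels (walking, cooling off) map to "calm_relax".
    return "pumped_up" if acceleration_magnitude > 1.5 else "calm_relax"

def pick_tracks(acceleration_magnitude):
    return MOOD_PLAYLISTS[classify_activity(acceleration_magnitude)]

print(pick_tracks(2.3))  # up-beat tracks while running
print(pick_tracks(0.8))  # relaxing tracks while walking or winding down
```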
  • the application running on the base device can include GPS or other location determining software as well as logic to correlate location with calendar entries or other data to determine or confirm activity.
  • the software application of some implementations of the device can recognize when headphones are removed via indication from the headphones.
  • a music aggregator such as Pandora would be able to determine when music is played and when it is paused based on whether the interactive headphones are over the ears or not, thereby avoiding unnecessary licensing fees for the music.
  • a user can interact with content, such as just-in-time promotions, targeted marketing, geo-based marketing, and the like, by associating simple commands with registration of the user for participation in a promotional offer, opt-in or opt-out of promotional offers or materials, voting, association, and the like.
  • Implementations of the invention are not limited to headphones, but can be incorporated into dongles, or other external input devices.
  • the methods of creating layered content and interacting with programs and content hosted on a base device via commands entered into a remote device can be implemented in video devices or headphones/video combinations.
  • the operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
  • the term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
  • the apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
  • Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Combustion & Propulsion (AREA)
  • Chemical & Material Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephonic Communication Services (AREA)

Abstract

An implementation of the technology includes a control device that is used in conjunction with a computing device (e.g. a media player or smartphone), that allows a user to control the operation of the computing device without directly handling the computing device itself. This allows the user to interact with the computing device in a more convenient manner.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of priority to U.S. Prov. App. No. 61/660,662; Social User Motion Controlled Headphone Apparatuses, Methods and Systems; filed Jun. 15, 2012; U.S. Prov. App. No. 61/749,710; Interactive Networked Headphones; filed Jan. 7, 2013; and U.S. Prov. App. No. 61/762,605; System and Method of Remote Content Interaction; filed Feb. 8, 2013; all of which are incorporated by reference in their entirety.
  • BACKGROUND
  • This specification relates to remote input devices, and more specifically to input devices integrated with output devices. Computing devices are commonly used by a user to perform a wide variety of functions. A user issues commands to a computing device by interacting with one or more controls; the input is often done through an input device such as a keyboard, touchpad, mouse, or touchscreen. The computing device outputs content in response to the user commands in various forms via a video monitor, speaker, headphones, or other sensory/perceptive device(s). It may be desirable to input controls and commands to the computing device directly from the output device, such as inputting commands to an audio player via a headphone or interacting with a social media channel in real time via a headphone as an audio file is played. With the exception of rudimentary output commands such as "play," "stop," "pause," and "volume," current output devices do not allow for controls or input to software programs running on the computing device.
  • SUMMARY
  • This specification describes technologies relating to interactive remote input devices and interactive output devices, such as, for example and without limitation network connected interactive headphones, interactive dongles, interactive cables, interactive speakers and interactive hand controllers.
  • In general, one innovative aspect of the subject matter described in this specification can be embodied in a headphone apparatus and a media player device that are used in conjunction to provide a user with audio playback of media content, and to allow the user to interact with social media sites, email providers, supplementary content providers, and ad providers based on the media content being played. In an exemplary embodiment of the apparatus, the headphones are operably connected to the media player through a hardwire connection or through a wireless connection, such as Bluetooth or Wi-Fi. The media player communicates with a network gateway through a wireless network connection, such as through a cellular connection or Wi-Fi connection. The network gateway provides network connectivity to the Internet, facilitating access to various content and service providers connected to the Internet. Content and service providers may include email servers, social media sites, ad servers, and content servers.
  • Other implementations are contemplated. For example, the media player may be one of many types of mobile devices, such as a cellular telephone, a tablet, a computer, a pager, a gaming device, or a media player. In other implementations, the wireless network connection may be one of many types of communications networks through which data can be transferred, such as a Wi-Fi network, a cellular telephone network, a satellite communications network, a Bluetooth network, or an infrared network. In other implementations, the content and service providers may also include search engines, digital content merchant sites, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • Implementations of the present invention may include a system for interacting with an application on a processing apparatus, comprising: an input module; a processing module; and a transmission module; wherein the input module is configured to detect a tactile input applied by a user; wherein the processing module is configured to translate the input into an application command; and wherein the transmission module is configured to transmit the command to the processing apparatus.
  • In another implementation of the present invention, a method for providing input to a processing device comprises: providing a networked processing device, wherein the processing device delivers content as an output to an output device; providing an input module configured to detect a tactile input applied by a user; and translating the input into an application command at the processing device.
  • Implementations of the present invention may comprise one or more of the following features. The input component is adjacent the output device. The tactile input on the input component comprises one or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture. The tactile input comprises a series of two or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping motion gesture. The processing component is configured to determine a number of fingers used by the user to apply the input. The processing component is configured to translate the input into an application command based on the number of fingers detected. The system comprises one or more audio speakers. The application is a media management application. The processing apparatus is one of the following: a media player, a smartphone, a gaming device, and a computer. The command comprises a command to control a media playback function of the processing apparatus. The command comprises a command to broadcast a user preference or user indication over a network, such as a social network. The command comprises a command to transmit a message to a recipient through a communications network. The communications network comprises one or more of the following: a LAN, a WAN, the Internet, and a cellular network. The recipient is one of the following: a communications device, a social media website, an email server, and a telephone. The system comprises a user control device for controlling a device unassociated with the system for interacting with an application on a processing apparatus. The control device comprises a steering wheel for controlling a vehicle. The output device comprises one or more of the following: video displays, audio speakers, headphones, and ear buds.
  • The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a schematic diagram of an example control system.
  • FIG. 2 is a flow chart showing an example usage of a control system.
  • FIG. 3 is a flow chart showing an example usage of a control system.
  • FIGS. 4A-D are example embodiments of an input module.
  • FIGS. 5A-F are example user interactions.
  • FIGS. 6A-H are example user interactions.
  • FIG. 7 is an example input module.
  • FIGS. 8A-E are example embodiments of control systems.
  • FIG. 9 shows example embodiments of control systems.
  • FIG. 10 is an example network of the present invention including interactive, networked headphones.
  • FIG. 11A is an example of an interactive, networked headphone of the present invention.
  • FIG. 11B is an example of an implementation of the present invention.
  • FIG. 11C is an example of an implementation of the present invention.
  • FIG. 12 is an example of a method of an implementation of the present invention.
  • FIG. 13 is an example of an implementation of the present invention.
  • FIG. 14 is an example of an implementation of the present invention.
  • FIG. 15 is an example of a method of an implementation of the present invention.
  • Like reference numbers and designations in the various drawings indicate like elements.
  • DETAILED DESCRIPTION
  • Broadly, an implementation of the technology includes a control device that is used in conjunction with a computing device (e.g. a media player or smartphone), that allows a user to control the operation of the computing device without directly handling the computing device itself. In example implementations the computing device may be controlled from a traditional output device, such as a headphone, speaker, ear bud, speaker cable, wearable output display such as heads-up display glasses or visors, a cable comprising an input device such as an interactive input device on a headphone or ear bud cable, or even a remote input device disassociated with the computing device, such as a steering wheel, a dash board panel, a visual or audio kiosk, and the like.
  • Providing input to the computing device from an interactive output device or remote input device allows the user to interact with the computing device in a more convenient manner. For instance, a user may use the control device to interact with the computing device without first having to remove the computing device from a storage location (e.g. a clothing pocket, a carrying bag, a holding bracket, an armband, etc.). In this manner, a user may use the control device to operate the computing device without exposing the computing device to potential damage due to mishandling or environmental factors. The user may also use the control device to operate a computing device that is not readily accessible, for example a device that is secured in a container or a protective housing, or built into a fixed enclosure (e.g. a household audio system or a media system in a vehicle).
  • In another example, a user may use the control device to interact with the computing device without having to look at either the control device or the computing device. In this manner, a user may use the computing device while engaging in other activities, such as walking, running, reading, driving, or any other activity where averting one's attention is undesirable. In addition, a user may use the control device to simplify specific tasks of the computing device, such that the user may issue complex instructions to the computing device using relatively simple inputs on the control device. For example, in one implementation the user may share or “like” content, such as a music recording being played on a mobile phone and delivered to the user via an output device such as headphones, wherein the headphones include an input component such that the user can communicate preferences for the music file with other users over a social network by simple manipulation of the input device on the headset.
  • In yet another example, a user may share content in real time to a predetermined set of additional users (e.g., members of a contact list, attendees to an event, users within a geographic or localized area). Also, multiple users can communicate and share files via a network with a single device, such as a communal audio speaker (e.g., multiple users can share one or more music files to a device to create a just-in-time playlist by simple manipulation of the input component on the user headphones).
  • FIG. 1 illustrates an example implementation of a control device 100 used to control the operation of a computing device 150. A control device 100 includes an input module 102, a processing module 104, a transmission module 106, and a power module 108. Each module 102, 104, 106, and 108 may be interconnected through one or more connection interfaces 110, which may provide a connective pathway for the transfer of power or data between each of the modules. The transmission module 106 is connected to the computing device 150 through another connection interface 152, which provides a connective pathway for the transfer of data between control device 100 and computing device 150.
  • In general, the input module 102 is provided so that a user can physically interact with the control device 100. The input module 102 includes one or more sensors to detect physical interaction from the user, and also includes electronic components necessary to convert the physical interactions into a form that may be interpreted by the other modules of the device (e.g. by digitizing the input so that it may be understood by the processing module 104 and the transmission module 106). The input module 102 may include one or more types of sensors, for instance touch-sensitive sensors, buttons, switches, or dials, or combinations of one or more of these sensors.
  • In general, the processing module 104 is provided so that control device 100 may interpret the user's physical interactions, and translate these interactions into specific commands to the computing device 150. In general, the transmission module 106 is provided so that the control device 100 can transmit commands to the computing device 150. The transmission module 106 may include components to encode and transmit data to computing device 150 in a form recognizable by computing device 150. The transmission module 106 may include, for example, a serial communication module, a universal serial bus (USB) communication module, a Bluetooth networking module, a WiFi networking module, a cellular phone communication module (e.g. a CDMA or GSM radio), or any other module for communicating with computing device 150.
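  • The following sketch, written in Python purely for illustration, shows one way the detect-translate-transmit arrangement of these modules could be organized; the class names, gesture labels, and command strings are hypothetical and are not defined by this specification, and the print statement stands in for whatever connection interface 152 actually carries.

    # Illustrative sketch of the input -> processing -> transmission pipeline
    # of control device 100. All names are hypothetical.

    class InputModule:
        """Digitizes a user's physical interaction (e.g. a swipe) into an event."""
        def read_event(self, raw_gesture):
            # In a real device this would come from touch sensors, buttons, etc.
            return {"type": "gesture", "value": raw_gesture}

    class ProcessingModule:
        """Translates a digitized input into a command for computing device 150."""
        GESTURE_TO_COMMAND = {
            "swipe_forward": "NEXT_TRACK",
            "swipe_backward": "PREVIOUS_TRACK",
            "hold_forward": "SHARE_CURRENT_ITEM",
        }

        def translate(self, event):
            return self.GESTURE_TO_COMMAND.get(event["value"])

    class TransmissionModule:
        """Encodes and sends the command to computing device 150 over interface 152."""
        def send(self, command):
            print("-> computing device 150:", command)

    # Detect, translate, transmit (compare the example usage shown in FIG. 2).
    inp, proc, tx = InputModule(), ProcessingModule(), TransmissionModule()
    event = inp.read_event("swipe_forward")
    command = proc.translate(event)
    if command is not None:
        tx.send(command)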
  • In general, the power module 108 is provided to supply power to each of the other modules of control device 100. The power module 108 may be of any standard form for powering small electronic circuit board devices such as the following power cells: alkaline, lithium hydride, lithium ion, lithium polymer, nickel cadmium, solar cells, and/or the like. Other types of AC or DC power sources may be used as well. In the case of solar cells, in some implementations, the case provides an aperture through which the solar cell may capture photonic energy. In some implementations, power module 108 is located external to system 100, and power for each of the modules of control device 100 is provided through connection interface 110 or another connection interface, which may provide a connective pathway for the transfer of power between the externally located power module 108 and the components of system 100.
  • In general, the connection interface 152 provides a connective pathway for the transfer of data between control device 100 and computing device 150. Connection interface 152 may be a wired connection, a wireless connection, or a combination of both. For instance, connection interface 152 may be a serial connection, a USB connection, a Bluetooth connection, a WiFi connection, a cellular connection (e.g. a connection made through a CDMA or GSM network), or a combination of one or more of these connections.
  • In some implementations, connection interface 152 is established over a wireless connection, and a “paired” relationship must be established between control system 100 and computing device 150 before the user may use control system 100 to issue commands to computing device 150. For example, in some implementations, connection interface 152 is a Bluetooth connection, and a user interacts with computing device 150, first to view a list of active Bluetooth modules in the vicinity, then to select the module representing control system 100. Computing device 150 and control system 100 establish a Bluetooth connection interface 152, and data can be transmitted between the two through this connection interface. In another example, control system 100 may contain a near-field communication (NFC) identification tag that uniquely identifies control system 100, and computing device 150 has an NFC reader that is capable of reading the identification information from the NFC identification tag. In these implementations, a user may pair control system 100 and computing device 150 by placing the NFC identification tag in proximity to the NFC reader, and computing device 150 establishes a connection interface 152 with control system 100. This paired relationship may be retained by control system 100 or computing device 150, such that each device will continue to communicate with the other over subsequent usage sessions. The paired relationship may also be altered, such that control system 100 may be paired with another computing device 150.
  • In some implementations, connection interface 152 may also be used to transmit data unassociated with control system 100. For example, in some implementations, a control system 100 may be used with an audio headset, such that audio information from computing device 150 is transmitted through connection interface 152 to transmission module 106, then transmitted to the audio headset using another connection interface. In these implementations, data associated with control system 100 (i.e. data that relates to the operation of control system 100) and data unassociated with control system 100 may both be transmitted through connection interface 152, either simultaneously or in an alternating manner. In some implementations, connection interface 152 is an analog connection, such as those commonly used to transfer analog audio data to a headset, and may include multiple channels, for instance channels commonly used for left audio, right audio, microphone input, etc. Control system 100 may transfer analog audio data, as well as data associated with control system 100, through such a connection interface 152. For instance, transmission module 106 may encode the data according to patterns of shorted analog channels, then transmit the encoded data by shorting one or more of the channels of connection interface 152. These patterns of shorted channels may be interpreted by computing system 150. In some implementations, transmission module 106 may transfer data by sending patterns of analog voltage waveforms to computing system 150, where they may then be interpreted by computing system 150. Combinations of more than one encoding and transmission technique may also be used.
  • An example usage 200 of a control device 100 for controlling a computing device 150 is illustrated in FIG. 2. The control device 100 first detects an input from the user (block 202). Generally, the input is detected by the input module 102, and may include any form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion. In some implementations, the control device 100 may detect the number of fingers that the user uses to apply the input, in addition to detecting the physical motion of the input.
  • The control device 100 then translates the input into a command to a computing device (block 204). Generally, input module 102 detects and digitizes the user's input, and processing module 104 translates the digitized input into a command. In some implementations, the processing module 104 receives digitized information describing the user's interaction with input module 102 and compares it to a predetermined list of inputs and associated commands. If processing module 104 determines that the user's input matches an input from the predetermined list, it selects the command associated with the matched input from the list. In some implementations, processing module 104 does not require an exact match between the user's input and a listed input, and may instead select the closest match. A match may be based on one or more criteria, for instance based on a temporal similarity, a spatial similarity, or a combination of the two.
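  • As an illustration of the closest-match behavior described above, the following Python sketch scores a detected input against a predetermined list using a weighted combination of spatial and temporal differences; the feature representation (start point, end point, duration), the weights, and the command names are assumptions made for the example, not a prescribed matching criterion.

    import math

    # Predetermined list of inputs and associated commands (illustrative values).
    # Each template records a start point, an end point, and a duration in seconds.
    TEMPLATES = [
        {"start": (0.0, 0.5), "end": (1.0, 0.5), "duration": 0.3, "command": "NEXT_TRACK"},
        {"start": (1.0, 0.5), "end": (0.0, 0.5), "duration": 0.3, "command": "PREVIOUS_TRACK"},
        {"start": (0.5, 0.0), "end": (0.5, 1.0), "duration": 0.3, "command": "VOLUME_UP"},
    ]

    def similarity_distance(observed, template, spatial_weight=1.0, temporal_weight=0.5):
        """Combine spatial and temporal differences into one score (lower = closer)."""
        spatial = (math.dist(observed["start"], template["start"]) +
                   math.dist(observed["end"], template["end"]))
        temporal = abs(observed["duration"] - template["duration"])
        return spatial_weight * spatial + temporal_weight * temporal

    def translate(observed, max_distance=0.75):
        """Return the command of the closest template, or None if nothing is close enough."""
        best = min(TEMPLATES, key=lambda t: similarity_distance(observed, t))
        if similarity_distance(observed, best) > max_distance:
            return None                     # no sufficiently close match
        return best["command"]

    # A slightly imprecise left-to-right swipe still resolves to NEXT_TRACK.
    print(translate({"start": (0.1, 0.6), "end": (0.9, 0.45), "duration": 0.25}))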
  • In general, a command may include any instruction to a computing device to perform a particular task. For instance, example commands may include instructions to the computing device to start the playback of a particular media file, stop the playback of a media file, adjust the volume of playback, jog to a particular time-point within the media file, or select another media file entirely. In another example, commands may include instructions to the computing device to transmit data to a recipient across a network connection. Example network connections include local area networks (LANs), wide area networks (WANs), Bluetooth networks, cellular networks, or any other network where data may be transmitted between two computing devices. Example recipients may include other computing devices, for instance a computer, a smartphone, or a server. For instance, a command may include instructions to send a particular message across a WAN to a server operated by a social media website, such that a user may interact with the social media website. In another example a command may include instructions to send a particular message across a cellular network to another computing device, such that a user may interact with another user through the other user's computing device.
  • The control device 100 then transmits the translated command to the computing device (block 206). In general, the translated command is transmitted through the transmission module 106 to the computing device 150. Upon receipt of the translated command, the computing device 150 may execute the command.
  • Another example usage 300 of control device 100, to revise the association between a user's inputs and the corresponding commands, is illustrated in FIG. 3. The control device 100 first detects an input from the user (block 302). Generally, the input is detected by the input module 102, and may include any form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion. In some implementations, the control device 100 may detect the number of fingers that the user uses to apply the input, in addition to detecting the physical motion of the input.
  • The control device 100 then associates a new command with the detected input (block 304). A user may specify the new command in various ways. For instance, in some implementations, a user may interact with control device 100, computing device 150, or a combination of the two, to instruct control device 100 to associate a new command with a particular user input. In an example, a user may interact with control device 100 to browse through a list of possible commands and select a command from this list. In another example, a user may interact with computing device 150 to browse through a list of possible commands and select a command from this list, where the selected command is transmitted to control device 100 through connection interface 152. The selected command is received by the transmission module 106, then is transmitted to the processing module 104.
  • The control device 100 then stores the association between the new command and the detected input for future retrieval (block 306). In general, the association between the new command and the detected input is saved by processing module 104, and may be recalled whenever a user interacts with control device 100 or computing device 150. In this manner a user can initiate commands to a program with simple remote inputs, for example, a user can set up the control device to recognize that designated user inputs (e.g., a two finger sweep across the control interface or input module) should instruct a music player to share a music file with a designated group of friends or other users. In another example, voting options may be incorporated into traditionally passive content, for example, an audio stream such as that provided by an internet radio provider could include a prompt to initiate a known user input (e.g. hold two fingers on the interface or control module or depress two buttons simultaneously on the control module) to have an e-mail sent about a product advertised, or to enroll the recipient in an additional service.
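  • A minimal sketch of storing such an association for future retrieval is shown below; the JSON file, the gesture labels, and the command string are hypothetical stand-ins for however a given implementation of processing module 104 persists its mapping.

    import json

    MAPPING_FILE = "gesture_commands.json"   # hypothetical storage location

    def load_mapping():
        """Return the stored gesture-to-command associations, or an empty mapping."""
        try:
            with open(MAPPING_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def associate(gesture, command):
        """Store a new gesture-to-command association for future retrieval."""
        mapping = load_mapping()
        mapping[gesture] = command
        with open(MAPPING_FILE, "w") as f:
            json.dump(mapping, f, indent=2)

    # e.g. a two-finger sweep shares the current track with a designated group.
    associate("two_finger_sweep_forward", "SHARE_WITH_GROUP:friends")
    print(load_mapping())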
  • In general, the input module 102 includes one or more sensors to detect physical interaction from the user. Example arrangements of sensors are shown in FIG. 4. Referring to FIG. 4A, an input module 102 a may include a single touch-sensitive sensor 402. The sensor 402 may be of various types, for instance a sensor capable of resistive sensing or a sensor capable of conductance sensing. In some implementations, the sensor 402 may detect interaction from a user in the form of physical interaction, for instance a touching motion, a swiping or sweeping motion, a pressing motion, a toggling motion, or a turning motion. In some implementations the sensor 402 may detect the absolute or relative position of a user's finger upon the sensor 402, in order to provide additional spatial information regarding the nature of the user's interactions. In some implementations, the sensor 402 may detect the period of time in which a user interacts with sensor 402, in order to provide additional temporal information regarding the nature of the user's interactions.
  • Referring to FIG. 4B, another example input module 102 b may include several individual touch-sensitive sensors, for instance five sensors 404 a-e. The sensors 404 a-e may be of various types, for instance a sensor capable of resistive sensing or a sensor capable of capacitive sensing, or a combination of two or more types of sensors. In some implementations the sensors 404 a-e may detect the absolute or relative position of a user's finger upon the sensors 404 a-e, in order to provide additional spatial information regarding the nature of the user's interactions. In some implementations, the sensors 404 a-e may detect the period of time in which a user interacts with sensors 404 a-e, in order to provide additional temporal information regarding the nature of the user's interactions. In some implementations, each of the sensors 404 a-e is discrete, such that input module 102 b is able to discern which of the sensors 404 a-e were touched by the user. While input module 102 b is illustrated as having five sensors, any number of individual sensors may be used. Similarly, while input module 102 b is illustrated as having rectangular sensors arranged in a grid-like pattern, sensors may take any shape, and may be arranged in any pattern. For example, referring to FIG. 4C, an example input module 102 c may include eight sensors 406 a-h arranged in a circle, such that each sensor represents a sector of a circle.
  • Input modules 102 that detect physical interaction from a user through capacitive sensing may be implemented in various ways. For instance, in some embodiments, an input module 102 includes a printed circuit board (PCB) in a two layer stack. A first layer includes one or more conductive surfaces (for instance copper pads) that serve as capacitive elements, where each capacitive element corresponds to a touch-sensitive sensor. The opposing layer houses a microcontroller and support circuitry to enable resistive-capacitive (RC) based capacitive touch sensing in each of the capacitive elements. Firmware loaded into the microcontroller continuously measures the capacitance of each capacitive element and uses the capacitance reading to detect the presence of a finger, track finger swipe motion, and determine the gesture motion of the user.
  • A gesture detection algorithm may be included as a part of the firmware. In some implementations, a gesture event is detected when the capacitance measurement of any capacitive element increases above a finger detection threshold set in firmware. The gesture event ceases when the capacitance measurement drops back below the finger detection threshold or when a timeout is reached.
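  • The threshold-and-timeout behavior described above might be sketched as follows; the threshold value, the timeout, the polling interval, and the caller-supplied read_capacitance function are illustrative assumptions rather than prescribed firmware values.

    import time

    FINGER_THRESHOLD = 30.0   # capacitance counts above which a finger is assumed (illustrative)
    GESTURE_TIMEOUT_S = 1.5   # a touch lasting longer than this is treated as a "hold"

    def wait_for_gesture(read_capacitance, poll_s=0.005):
        """Wait for a gesture event to start, then report how it ended.

        `read_capacitance` is a caller-supplied function returning the current
        reading of the most active capacitive element.
        """
        # The event starts when the reading rises above the finger-detection threshold.
        while read_capacitance() < FINGER_THRESHOLD:
            time.sleep(poll_s)
        started = time.monotonic()

        # The event ceases when the reading drops back below the threshold, or on timeout.
        while read_capacitance() >= FINGER_THRESHOLD:
            if time.monotonic() - started > GESTURE_TIMEOUT_S:
                return "hold"              # timeout reached while still touching
            time.sleep(poll_s)
        return "swipe_or_tap"              # finger lifted before the timeout

    # Scripted readings standing in for the sensor: idle, idle, touch, touch, touch, release.
    fake_readings = iter([10, 12, 45, 47, 44, 9])
    print(wait_for_gesture(lambda: next(fake_readings)))   # -> "swipe_or_tap"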
  • Once a gesture is detected, the microcontroller may communicate to another component, for instance processing module 104, which gesture occurred. This communication may occur through a wired connection, such as connection interface 110, or through a wireless connection, such as a WiFi, Bluetooth, infrared, or near-field communication (NFC) connection.
  • The input module 102 uses a capacitive touch scheme that measures the capacitance of an isolated section of one or more of the capacitive elements by alternately charging and discharging the capacitive elements through a known resistor. The combination of the resistor value and the capacitance of the capacitive elements defines the rate at which the capacitive elements charge and discharge. Since the resistor is a fixed value, the discharge rate has a direct relation to each capacitive element's capacitance. The capacitance of the capacitive elements is measured by recording the amount of time it takes to charge and then discharge the capacitive elements. The capacitance of the capacitive elements in an unchanging environment will remain the same. When a finger comes very close to or touches the capacitive elements, the finger increases the measurable capacitance of the capacitive elements by storing charge, thus causing the charge and discharge events to take longer. Since the measured capacitance increases in the presence of a finger, the firmware may then use the capacitance measurement as a means to decide that a finger is touching a sensor of input module 102.
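  • The charge-timing measurement described above can be illustrated with the following sketch; charge_pin and sense_pin are hypothetical hardware accessors (on a real microcontroller these would be GPIO and ADC operations), and the FakeElement class exists only so the example runs without hardware.

    import time

    def measure_charge_time(charge_pin, sense_pin, level=0.63):
        """Time how long the element takes to charge through the known resistor.

        A larger capacitance (e.g. a finger present) stores more charge and
        therefore takes longer to cross the sensed voltage level.
        """
        charge_pin(False)                    # discharge the element first
        charge_pin(True)                     # begin charging through the resistor
        start = time.perf_counter()
        while sense_pin() < level:           # wait for the RC curve to cross the level
            pass
        return time.perf_counter() - start

    def finger_present(charge_time, baseline, margin=1.2):
        """A finger adds capacitance, so the charge time grows past the idle baseline."""
        return charge_time > baseline * margin

    # Toy stand-in for the hardware: voltage rises a fixed amount per poll, with a
    # smaller rise per poll when the capacitance is larger (finger present).
    class FakeElement:
        def __init__(self, rise_per_poll):
            self.v, self.rise = 0.0, rise_per_poll
        def charge_pin(self, state):
            if not state:
                self.v = 0.0                 # discharge
        def sense_pin(self):
            self.v += self.rise
            return self.v

    idle, touched = FakeElement(0.05), FakeElement(0.02)
    print(measure_charge_time(idle.charge_pin, idle.sense_pin),
          measure_charge_time(touched.charge_pin, touched.sense_pin))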
  • Input module 102 may include sensors other than touch-sensitive sensors. For example, referring to FIG. 4D, an input module 102 d may include several physical buttons 408 a-d. A user may interact with input module 102 by pressing one or more of the buttons 408 a-d. Input module 102 may detect one or more events associated with this interaction, for example by detecting when a button is depressed, how long a button is held, when a button is released, a sequence or pattern of button presses, or a timing between two or more button presses.
  • In some implementations, input module 102 includes one or more proximity sensors. These proximity sensors may be used to detect the motion of objects near the sensor. For example, proximity sensors may be used to detect a user waving his hand close to input module 102. These proximity sensors may also be used to detect the presence of objects near the sensor. For example, a proximity sensor may be used to detect that system 100 is in close proximity to a user.
  • In some implementations, input module 102 includes one or more accelerometer sensors, such that input module 102 may determine the motion or the orientation of control system 100. For instance, an input module 102 with one or more accelerometer sensors may be able to determine whether control system 100 is upright or not upright, or whether control system 100 is moving or stationary.
  • In general, the input modules 102 may detect a broad range of user interactions. Referring to FIG. 5, an input module 102 a that includes a single touch-sensitive sensor 402 may detect and differentiate between several distinct types of user interaction. For instance, referring to FIG. 5A, the input module 102 a may determine that a user applied a horizontal left-to-right motion to the input module by recognizing that the user initiated contact with sensor 402 at point 510 a, sustained contact along path 510 b in the direction of arrow 510 c, then released contact at point 510 d. In another example, referring to FIG. 5B, the input module 102 a may determine that a user applied a vertical bottom-to-top motion to the input module by recognizing that the user initiated contact with sensor 402 at point 520 a, sustained contact along path 520 b in the direction of arrow 520 c, then released contact at point 520 d.
  • The input module 102 is not limited to recognizing straight-line user interactions. For instance, referring to FIG. 5C, the input module 102 a may determine that a user applied an S-shaped motion to the input module by recognizing that the user initiated contact with sensor 402 at point 530 a, sustained contact along path 530 b in the direction of arrow 530 c, then released contact at point 530 d.
  • The input module 102 may also detect touching motions. For instance, referring to FIG. 5D, the input module 102 a may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 402 at point 540 and released contact at point 540. In some implementations, sensor 402 is sensitive to the location of point 540, and can differentiate among different points of contact along sensor 402. In some implementations, sensor 402 is sensitive to the time in between when the user initiated contact with the sensor and when the user released contact with the sensor. Thus, input module 102 may provide both spatial and temporal information regarding a user's interactions.
  • In addition, input module 102 may also detect multiple points of contact, and may differentiate, for example, between an interaction applied by a single finger and an interaction applied by multiple fingers. For instance, referring to FIG. 5E, the input module 102 a may determine that a user applied a touching motion to the input module using two fingers by recognizing that the user initiated contact with the sensor 402 at two points 550 a and 552 a, and released contact from points 550 a and 552 a. In another example, referring to FIG. 5F, the input module 102 a may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that the user initiated contact with sensor 402 at points 560 a and 562 a, sustained contact along paths 560 b and 562 b in the direction of arrows 560 c and 562 c, then released contact at points 560 d and 562 d.
  • In some implementations, the input module 102 may determine spatially and temporally-dependent information about a user's input, even if each sensor is limited only to making a binary determination regarding whether the sensor is being touched, and is otherwise not individually capable of determining more detailed spatial information. For example, referring to FIG. 6, an input module 102 b may include several individual touch-sensitive sensors, for instance five sensors 404 a-e. If the sensors 404 a-e are capable of making only a binary determination regarding the presence or lack of user contact on each of the sensors, and cannot make a determination about the specific location of contact on each sensor, the input module 102 b may still recognize several types of user interaction.
  • For instance, referring to FIG. 6A, the input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module by recognizing that the user initiated contact at point 610 a, sustained contact along path 610 b in the direction of arrow 610 c, then released contact at point 610 d. Input module 102 b may make this determination based on a detection of contact on sensors 404 b, 404 c, and 404 d in sequential order.
  • In another example, referring to FIG. 6B, the input module 102 b may similarly determine that a user applied a vertical bottom-to-top motion to the input module by recognizing that the user initiated contact at point 620 a, sustained contact along path 620 b in the direction of arrow 620 c, then released contact at point 620 d. Input module 102 b may make this determination based on a detection of contact on sensors 404 e, 404 c, and 404 a in sequential order.
  • In another example, referring to FIG. 6C, the input module 102 b may determine that a user initiated contact at point 630 a, sustained contact along path 630 b in the direction of arrow 630 c, then released contact at point 630 d.
  • The input module 102 may also detect touching motions. For instance, referring to FIG. 6D, the input module 102 b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404 c at point 640 and released contact at point 640.
  • The input module 102 may also detect touching motions from multiple points of contact. For instance, referring to FIG. 6E, the input module 102 b may determine that a user applied a touching motion to the input module by recognizing that the user initiated contact with the sensor 404 b at point 650 a and contact with the sensor 404 c at point 650 b, and released contact at points 650 a and 650 b.
  • In another example, referring to FIG. 6F, the input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that the user initiated contact at points 660 a and 662 a, sustained contact along paths 660 b and 662 b in the direction of arrows 660 c and 662 c, then released contact at points 660 d and 662 d. Input module 102 b may make this determination based on detecting two simultaneous points of contact that progress across the sensors in sequential order.
  • In some implementations, the individual sensors, for instance sensors 404 a-e, may be capable of individually determining spatial information, and may use this information to further differentiate between different types of user interaction. For instance, referring to FIG. 6G, an input module 102 b may determine that a user applied multiple points of contact onto a single sensor 404 c. In another example, referring to FIG. 6H, an input module 102 b may determine that a user applied a horizontal left-to-right motion to the input module using two fingers by recognizing that two points of contact exist along the same sequence of sensors 404 b, 404 c, and 404 d.
  • An input module 102 need not have sensors arranged in a grid-like pattern in order to determine spatial information about a user's interaction. For example, referring to FIG. 7A, an input module 102 c with eight sensors 406 a-h arranged as sectors of a circle has a sensor 406 a in the 0° position, a sensor 406 b in the 45° position, a sensor 406 c in the 90° position, a sensor 406 d in the 135° position, a sensor 406 e in the 180° position, a sensor 406 f in the 225° position, a sensor 406 g in the 270° position, and a sensor 406 h in the 315° position. In an example implementation, each sensor's capacitance measurement reading is converted into X and Y components that provide the finger's location relative to the center of the array of sensors 406 a-h. This is possible because the angle of each sensor 406 is known and may be used with cosine and sine functions to extract the components. For simplicity's sake, each sensor 406 is centered on 45° offsets from the unit circle 0°. To calculate the X component of the sensor 406 b located at 45°, for example, the firmware multiplies the sensor's capacitance reading by cos(45°). For the Y component, the sensor's capacitance reading is multiplied by sin(45°). Once the X and Y components are known for all 8 sensors, all X components are summed together, and all Y components are summed together. The resulting point approximates the location of the finger on the array of sensors 406 a-h.
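  • The cosine/sine decomposition described above reduces to a few lines; the capacitance values in the example call are made up, and the angle list simply encodes the sensor positions given in the text.

    import math

    # Sensor angles for the eight sectors: 0° = 406a, 45° = 406b, ..., 315° = 406h.
    SENSOR_ANGLES_DEG = [0, 45, 90, 135, 180, 225, 270, 315]

    def finger_position(capacitance_readings):
        """Approximate the finger location from the eight capacitance readings.

        Each reading is weighted by the cosine/sine of its sensor's angle and the
        components are summed, giving a point relative to the center of the array.
        """
        x = sum(c * math.cos(math.radians(a))
                for c, a in zip(capacitance_readings, SENSOR_ANGLES_DEG))
        y = sum(c * math.sin(math.radians(a))
                for c, a in zip(capacitance_readings, SENSOR_ANGLES_DEG))
        return x, y

    # A finger pressing mostly on the 45° sensor pulls the estimate up and to the right.
    print(finger_position([5, 40, 8, 2, 1, 1, 1, 3]))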
  • If the finger is in the center of the array of sensors 406 a-h, then each sensor will have some non-zero, but similar, capacitance reading. When the X components of two oppositely faced sensors (e.g. sensors 406 a and 406 e) are summed together, they have a cancelling effect. As the finger moves outward from center, one or two sensors will show increasing capacitance readings, while the other 6-7 sensors' readings will decrease. The result seen in the summed X and Y values tracks the finger away from center and outward in the direction of the one or two sensors that the finger is in contact with.
  • A gesture detection algorithm may be included as a part of the firmware of input module 102 c. In some implementations, when a user's finger first touches the input module 102, the algorithm stores the starting location, as determined by the summed X and Y values calculated from the capacitance readings. The algorithm then waits for detection of the finger to cease, at which point the last known location of the finger before it was removed is stored as an ending location. If no timeout is reached, and both a start and a stop event have occurred, the gesture algorithm decides which gesture has occurred by analyzing the measured starting point, ending point, slope of the line formed by the two points, distance between the two points, and the change in X and Y.
  • In general, the algorithm may differentiate between multiple gestures. In some implementations, there are 4 “non-timeout” gestures that must be distinguished from each other (each direction is referenced to headphones on a user's head, so forward is motion from the back of the head towards the face): “up,” “down,” “forward,” and “backward”. First, the algorithm determines whether the gesture was horizontal or vertical, then determines whether the motion was forward, backward, up or down.
  • For example, to differentiate between a horizontal motion and a vertical motion, the algorithm may compare the change in X (X2-X1) to the change in Y (Y2-Y1) and select the larger of the two. If the change in Y is larger than the change in X, then the motion is assumed to be vertical; if the change in X is larger than the change in Y, then the motion is assumed to be horizontal.
  • To differentiate between an upward and a downward motion, the algorithm may determine whether Y2-Y1 is positive or negative. For example, if Y2-Y1 is positive, then the motion is assumed to be upward. If Y2-Y1 is negative, then the motion is assumed to be downward.
  • To differentiate between a forward and a backward motion, the algorithm may determine if X2-X1 is positive or negative. For example, if X2-X1 is positive, then the motion is assumed to be forward. If X2-X1 is negative, then the motion is assumed to be backward.
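  • Taken together, the horizontal/vertical and sign tests above amount to the following classification; the coordinate convention (forward as +X, up as +Y) follows the description above, and the sample points are illustrative.

    def classify_swipe(start, end):
        """Classify a completed gesture as "up", "down", "forward", or "backward".

        `start` and `end` are (x, y) finger-position estimates; forward is taken
        as +X (toward the face) and up as +Y, per the convention described above.
        """
        dx = end[0] - start[0]   # X2 - X1
        dy = end[1] - start[1]   # Y2 - Y1
        if abs(dy) > abs(dx):                        # larger change wins: vertical
            return "up" if dy > 0 else "down"
        return "forward" if dx > 0 else "backward"   # otherwise horizontal

    print(classify_swipe((-10.0, 2.0), (25.0, 4.0)))  # -> "forward"
    print(classify_swipe((3.0, 20.0), (5.0, -15.0)))  # -> "down"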
  • In general, each direction swipe may initiate a separate command. In addition to the swipe based gestures shown above, the touch algorithm can also detect that a user is holding the finger on one or more of the sensors of the input module 102. A “hold” gesture is detected in the event that a timeout occurs prior to an end-of-gesture event (finger lifting off the sensors of input module 102). Once it is determined that a finger is not being removed from the sensors, its location is analyzed to determine which command is intended.
  • In some implementations, input module 102 c may differentiate between different finger locations when a user interacts with input module 102 c. For instance, in some embodiments, input module 102 c includes an algorithm that differentiates between four finger locations (e.g. “up”, “down”, “back”, “forward”). This may be done by comparing the capacitance readings of the sensors centered in each cardinal location, for instance sensor 406 a at the 0° position, sensor 406 c at the 90° position, sensor 406 e at the 180° position, and sensor 406 g at the 270° position. The sensor with the highest capacitance reading indicates the position of the finger.
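  • A sketch of that comparison is below; the mapping of location names to the four cardinal sensors is an assumption for the example, since the description above does not fix which sensor corresponds to which label.

    def hold_location(readings):
        """Pick the cardinal location of a held finger from four sensor readings.

        `readings` maps a location label to the capacitance of the sensor centered
        there; the highest reading indicates the finger's position.
        """
        return max(readings, key=readings.get)

    # Illustrative readings: the sensor at the "up" location sees the finger.
    print(hold_location({"forward": 12, "up": 48, "back": 9, "down": 11}))  # -> "up"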
  • In this manner, the input module 102 may detect and differentiate between several different types of user interaction, and control device 100 may use this information to assign a unique command to each of these different user interactions.
  • The user may use one or more actions (e.g. pressing a button, gesturing, waving a hand, etc.) to interact with computing device 150, without requiring that the user directly interact with computing device 150. For instance, without a control device 100, a user must interact with a computing device 150 by averting his attention from a different activity to ensure that he is sending his intended commands to computing device 150 (i.e. touching the correct area of a touchscreen, inputting the correct sequence of commands, etc.). In contrast, a user may instead use control device 100 to interact with computing device 150, and may use gestures to replace or supplement the normal commands of computing device 150. As an example, a user may input a forward swiping gesture in order to command the computing device 150 to skip a currently playing content item, and to play back the next content item on a playlist. In another example, a user may input a backward swiping gesture in order to command the computing device 150 to play back the previous content item on a playlist. In this manner, each gesture may correspond to a particular command, and a user may input these gestures into input module 102 rather than manually enter the commands into computing device 150.
  • In general, a user may also use one or more actions to input commands unrelated to controlling content playback. For example, in some implementations, gestures may be used to input commands related to interacting with other systems on a network, for instance websites and social media sites. For instance, a user may input a hold gesture on a forward part of input module 102 in order to command the computing device 150 to share the currently playing content item on a social media site. Sharing may include transmitting data to the social media site that includes identifying information regarding the currently playing content item, a user action relating to the content (e.g. “liking” the content, “linking” the content to other users, etc.), a pre-determined message introducing the content to other users, or any other data related to sharing the content item with others. In general, gestures and other actions may be used to issue any command, including commands to visit a website, send a message (e.g. an email, SMS message, instant message, etc.), purchase an item (e.g. a content item, a physical product, a service, etc.), or any other command that may be performed on the computing device 150.
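  • One way such a share command might be assembled is sketched below; the field names, the command string, and the destination are hypothetical and do not reflect any particular social media API, only the kind of identifying information and user action described above.

    import json

    def build_share_command(track, action="like", message=None):
        """Assemble a hypothetical SHARE_CONTENT command for computing device 150.

        The field names and destination are illustrative only; a real system would
        use whatever message format its social media integration expects.
        """
        return {
            "command": "SHARE_CONTENT",
            "destination": "social_media_site",
            "payload": {
                "title": track["title"],
                "artist": track["artist"],
                "action": action,  # e.g. "like" or "link"
                "message": message
                or "Now playing: {} by {}".format(track["title"], track["artist"]),
            },
        }

    command = build_share_command({"title": "Scientist", "artist": "Coldplay"})
    print(json.dumps(command, indent=2))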
  • In some embodiments, control system 100 may send commands to computing device 150 based on the proximity of control system 100 to the user. For example, a control system 100 may include an input module 102 with one or more proximity sensors. These sensors may detect when control system 100 is in close proximity to the user, and may issue commands according to this detection. For example, input module 102 may be arranged in such a way that its proximity sensors are positioned to detect the presence of a user when control system 100 is in a typical usage position (e.g. against the body of a user). When control system 100 is moved away from the user, control system 100 may respond by issuing one or more commands to computing system 150, for instance a command to stop playback of any currently playing content items, a command to send one or more messages indicating that the user is away from the device, or a command to switch computing system 150 into a lower power state to conserve energy. Commands may also be issued when the control system 100 is moved back towards the user and into a typical usage position. For example, when control system 100 is moved back towards the user, control system 100 may respond by issuing commands to computing system 150 to restart playback of a content item, send one or more messages indicating that the user has returned, or a command to switch computing system 150 into an active-use state.
  • In some embodiments, control system 100 may send commands to a computing device 150 based on the orientation or motion of control system 100. For example, a control system 100 may include an input module 102 with one or more accelerometer sensors. These sensors may detect the orientation of control system 100, and control system 100 may issue commands according to this detection. For example, input module 102 may detect that control system 100 is upright, and send a command to computing system 150 in response.
  • In some implementations, control system 100 may send commands to computing system 150 based on determinations from more than one sensor. For example, control system 100 may determine whether control system 100 is being actively used by a user based on determinations from the proximity sensors and the accelerometers. For instance, if the proximity sensors determine that no objects are in proximity to control system 100, and the accelerometers determine that control system 100 is in a non-upright position, control system 100 may determine that it is not being actively used by a user, and will command computing system 150 to enter a lower power state. For all other combinations of proximity and orientation, the system may determine that it is being actively used by a user, and will command computing system 150 to enter an active-use state. In this manner, control system 100 may consider determinations from more than one sensor before issuing a particular command.
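  • The proximity-plus-orientation rule described above can be written directly; the command strings are hypothetical, and the rule mirrors the example in which only the “nothing nearby and not upright” combination triggers the lower power state.

    def select_power_command(object_in_proximity, is_upright):
        """Combine proximity and orientation determinations into one command.

        Only the combination "no object nearby and not upright" is treated as
        inactive; every other combination keeps computing system 150 active.
        The command strings are illustrative.
        """
        if not object_in_proximity and not is_upright:
            return "ENTER_LOWER_POWER_STATE"
        return "ENTER_ACTIVE_USE_STATE"

    print(select_power_command(object_in_proximity=False, is_upright=False))  # lower power
    print(select_power_command(object_in_proximity=True, is_upright=False))   # active use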
  • In some implementations, control system 100 may include one or more audio sensors, such as microphones. These sensors may be provided in order for control system 100 to detect and interpret auditory data. For instance, an audio sensor may be used to listen for spoken commands from the user. Control system 100 may interpret these spoken commands and translate them into commands to computing system 150. In some implementations, control system 100 may include more than one audio sensor. In some implementations, different audio sensors can be used for different purposes. For example, some implementations may include two audio sensors, one for recording audio used for telephone calls, and one for recording audio for detecting spoken user commands.
  • In some implementations, control system 100 may include one or more display modules in order to display information to a user. These display modules may be, for example, LED lights, incandescent lights, LCD displays, OLED displays, or any other type of display component that can visually present information to a user. In some embodiments, control system 100 includes multiple display modules, with either multiple display modules of the same type, or with multiple display modules of more than one type. Display modules can display any type of visual information to a user. For instance, a display module may display information regarding the operation of control system 100 (i.e. power status, pairing status, command-related statuses, etc.), information regarding computing system 150 (i.e. volume level, content item information, email content, Internet content, telephone content, power status, pairing status, command-related statuses, etc.), or other content (i.e. varying aesthetically-pleasing displays, advertisements, frequency spectrum histograms, etc.).
  • In general, control device 100 may be used in conjunction with a variety of user-controllable devices. For instance, referring to FIG. 8A, control device 100 may be used in conjunction with an audio headset 800. Portions of control device 100 may be mounted to headset 800, or mounted within portions of headset 800. For instance, portions of control device 100 may be mounted within ear piece 802, ear piece 804, or connecting band 806. In some implementations, portions of input module 102 may be mounted to headset 800 in such a way that they are readily accessible by a user. For instance, in some implementations, sensor 402 is mounted on an exterior surface of ear piece 802, so that a user may interact with control device 100 by touching ear piece 802. The control device 100 may be connected to a computing device 150 through a connection interface 152. Connection interface 152 may also provide a connective pathway for the transfer of data between headset 800 and the computing device, for instance audio information when computing device 150 plays back a media file. While FIG. 8A illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404 a-e and 406 a-h), physical controls (for example buttons 408 a-d), or combinations of touch sensors and physical controls.
  • In another example, control device 100 may be used in conjunction with an audio headset, but the components of control device 100 may be in a separate housing rather than mounted within the headset. Referring to FIG. 8B, a control device 100 may be housed in a casing 820 that is external to a headset 830. Portions of control device 100 may be mounted to the exterior of the casing 820. For instance, in some implementations, buttons 408 a-d are mounted on an exterior surface of casing 820, so that a user may interact with control device 100 by touching an exterior surface of casing 820. Connection interface 152 may be connected on one end to the transmission module 106 of the control device 100, and may have a detachable connector 824 on the other end. The detachable connector 824 may plug into a computing device 150, and may be repeatedly connected and detached by the user so that control device 100 may be swapped between computing devices. While FIG. 8B illustrates the use of buttons 408 a-d, various types of sensors may be used, and in various combinations. For example, one or more touch sensors may be used (for example sensors 402, 404 a-e, and 406 a-h), physical controls (for example buttons 408 a-d), or combinations of touch sensors and physical controls.
  • In another example, control device 100 may be used in conjunction with an audio headset, but the components of control device 100 may be in a separate housing that is mounted away from the headset. Referring to FIG. 8C, a control device 100 may be housed in a casing 840 that is separate from a headset 842. Portions of control device 100 may be mounted to the exterior of the casing 840. For instance, in some implementations, touch sensor 402 is mounted on an exterior surface of casing 840, so that a user may interact with control device 100 by touching an exterior surface of casing 840. Connection interface 152 may be connected on one end to the transmission module 106 of the control device 100, and may have a detachable connector 824 on the other end. The detachable connector 824 may plug into a computing device 150, and may be repeatedly connected and detached by the user so that control device 100 may be swapped between computing devices. In some implementations, a connector port 826 may be provided on the exterior of casing 840, where the connector port 826 provides detachable data transmission access to control device 100. In some implementations, connector port 826 provides a connection for data transmission between computing device 150 and headset 842, such that headset 842 can also communicate with computing device 150. A user may plug the headset 842 into connector port 826 so that the headset or presentation device can receive audio information from computing device 150. In some implementations, other devices may be plugged into connector port 826, either in addition to or instead of headset 842. For instance, in some implementations, a microphone may be plugged into connector port 826, such that audio information from the microphone is transmitted to control system 100, then transmitted to computing device 150. In some implementations, a display device may be plugged into connector port 826, such that audio and/or video data from computing device 150 is transmitted to the display device for presentation. In some implementations, more than one device may be plugged into connector port 826. For instance, in some implementations, a headset and a display device may be plugged into connector port 826, such that audio information from computing device 150 is played back on the headset, and video information is played back on the display device. In another example, a headset and a microphone may be plugged into connector port 826, such that audio information from computing device 150 is played back on the headset, and audio information from the microphone is transmitted from the microphone to computing device 150. While FIG. 8C illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404 a-e and 406 a-h), physical controls (for example buttons 408 a-d), or combinations of touch sensors and physical controls.
  • In some implementations, a user may use a control device 100 during a performance in order to share information regarding the performance with a group of pre-determined recipients or recipients belonging to a group. In an example, a control device 100 may be connected to a computing device 150, and to one or more other devices, such as a headset, a microphone, or a display device. A user may use computing device 150 to play content items for an audience, for example to play audio and/or video content to an audience as a part of a performance. During this performance, the user may use the one or more devices connected to control device 100, for instance a headset (e.g. to monitor audio information from computing device 150), a microphone (e.g. to address an audience), and a display device (e.g. to present visual data, such as images or video, to an audience). During this performance, the user may also use control system 100 to send commands to computing device 150, such as to control the playback of content items, and to share information regarding the performance. For instance, the user may use control system 100 to command computing device 150 to transmit information regarding the currently playing content item (e.g. the name of a song or video, the name of the creator of the song or video, or any other information) to one or more recipients, such as by posting a message to a social media site, by emailing one or more users, by sending an SMS or instant message to one or more users, or by other such communications methods. In this manner, a user may use a control device 100 in conjunction with several other devices to render a performance, as well as to share information regarding the performance with one or more recipients.
  • In another example, referring to FIG. 8D, control device 100 may be used in conjunction with other audio and video playback devices, for instance a speaker apparatus 860. Portions of control device 100 may be mounted to the speaker apparatus 860, or mounted within portions of the speaker apparatus 860. In some implementations, portions of input module 102 may be mounted to speaker apparatus 860 in such a way that they are readily accessible by a user. For instance, in some implementations, sensor 402 is mounted on an exterior surface of speaker apparatus 860, so that a user may interact with control device 100 by touching an exterior surface of speaker apparatus 860. In some implementations, speaker apparatus 860 may also include a connection interface (not shown) that provides data connectivity between control device 100 and a computing device 150. In some implementations, speaker apparatus 860 includes a computing device 150. In some implementations, the computing device 150 includes data storage and data processing capabilities, such that media files may be stored and played back from within the speaker apparatus 860. In some implementations, the computing device 150 may also include one or more data connection interfaces, such that a user may transmit media files to the computing device 150 for playback. The data connection interfaces may include components for transferring data through local area networks (LANs), wide area networks (WANs), Bluetooth networks, cellular networks, or any other network capable of data transmission. While FIG. 8D illustrates the use of a single touch sensor 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404 a-e and 406 a-h), physical controls (for example buttons 408 a-d), or combinations of touch sensors and physical controls.
  • In another example, control device 100 may be used in conjunction with other devices not normally associated with media playback. For instance, referring to FIG. 8E, a control device 100 may be used in conjunction with a steering wheel 880. Typically, a steering wheel 880 is manipulated by a user to control the direction of a vehicle. In some implementations, portions of control device 100 may be mounted to the steering wheel 880, or mounted within portions of steering wheel 880. In some implementations, portions of input module 102 may be mounted to steering wheel 880 in such a way that they are readily accessible by a user. For instance, in some implementations, sensor 402 is mounted on an exterior surface of the steering wheel 880, so that a user may interact with control device 100 by touching steering wheel 880. The control device 100 may be connected to a computing device 150 through a connection interface 152. While FIG. 8E illustrates the use of multiple touch sensors 402, various types of sensors may be used, and in various combinations. For example, multiple touch sensors may be used (for example sensors 404 a-e and 406 a-h), physical controls (for example buttons 408 a-d), or combinations of touch sensors and physical controls.
  • While example implementations of control device 100 are described above, these examples are not exhaustive. In general, control device 100 may be used in conjunction with any user-operated device, and may be used to control any computing device 150.
  • For instance, referring to FIG. 9, in another example implementation the control system 100 may include a computing component attached to a headphone 800 (such as one of headphones 800 a-e), where the computing component includes a touch pad, a motion sensor and/or a camera. In one implementation, a user may make a hand gesture near the headphone to provide a control command to the audio source, e.g., swirling the index finger clockwise may indicate a "replay" request; pointing one finger forward may indicate a "forward" request; pointing one finger downward may indicate a "pause" request, and/or the like. In one implementation, the control system 100 may capture the user's hand gesture via a camera, via a remote control sensor held by the user, via the user making different movements on the touch pad, and/or the like. Such captured movement data may be analyzed by the control system 100 and translated into control commands for the audio source to change the audio playing status.
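One way the captured movement data might be translated into audio-source commands is a simple lookup from recognized gestures to command strings. The sketch below is a minimal illustration under that assumption; the gesture labels, command names, and the send_to_audio_source callback are assumptions for illustration, not part of the disclosure.

```python
# Minimal sketch of gesture-to-command translation. The gesture labels,
# command names, and transport callback are illustrative assumptions.

GESTURE_COMMANDS = {
    "swirl_clockwise": "replay",
    "point_forward": "forward",
    "point_down": "pause",
}

def translate_gesture(gesture: str):
    """Map a recognized hand gesture to an audio-source control command."""
    return GESTURE_COMMANDS.get(gesture)

def handle_gesture(gesture: str, send_to_audio_source) -> None:
    """Translate a captured gesture and, if recognized, send it to the audio source."""
    command = translate_gesture(gesture)
    if command is not None:
        send_to_audio_source(command)

# Example: a captured "point_down" gesture pauses playback.
handle_gesture("point_down", send_to_audio_source=print)
```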
  • In some implementations, the control system 100 may facilitate social sharing between users. For example, a user may issue a command so that the control system 100 automatically posts the currently played song to social media, e.g., a Tweet "John Smith is listening to #Scientist #Coldplay," a Facebook message "John Smith likes Scientist, Coldplay," and/or the like.
  • In some implementations, a user may make a gesture to share audio content with another user using another control system 100. For example, the user may make an "S"-shaped gesture on a touch pad of the control system 100 of a headphone 800, which may indicate "sharing" with another control system 100 user within a detectable range (e.g., Bluetooth range, etc.). In another implementation, the control system 100 may communicate via a Near Field Communication (NFC) handshake. The second control system 100 may receive the sharing message and adjust its audio source to the Internet radio station the first control system 100 user is listening to, so that the two users may be able to listen to the same audio content. In some implementations, the sharing may be conducted among two or more control system 100 users. In some implementations, the control system 100 may share the radio frequency from one user to another, so that both can be tuned to the same radio channel.
  • In some implementations, the control system 100 may allow a user to configure user-preferred "shortcut keys" for a command. For example, in one implementation, the control system 100 may be connected to a second device (e.g., other than a headphone), such as a computer, a smart phone, and/or the like, which may provide a user interface for the user to set up shortcut-key movements. For example, the user may select a one-finger double-tap to share the currently played song to a social media platform (e.g., Twitter, Facebook, etc.) as a "like" event, a two-finger double-tap to share the currently played song to social media by posting a link to the song, and/or the like.
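A user-configured shortcut mapping could be kept as a small persisted table from tap patterns to actions; the sketch below assumes a JSON-backed store, and the pattern labels, action names, and file path are illustrative assumptions rather than part of the disclosure.

```python
# Sketch of user-configurable "shortcut keys" stored as a JSON mapping from
# tap patterns to sharing actions. All names here are illustrative assumptions.
import json

DEFAULT_SHORTCUTS = {
    "one_finger_double_tap": "share_like_event",  # post a "like" for the current song
    "two_finger_double_tap": "share_song_link",   # post a link to the current song
}

def save_shortcuts(shortcuts: dict, path: str = "shortcuts.json") -> None:
    """Persist the user's shortcut-key configuration."""
    with open(path, "w") as f:
        json.dump(shortcuts, f, indent=2)

def load_shortcuts(path: str = "shortcuts.json") -> dict:
    """Load the stored configuration, falling back to defaults."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return dict(DEFAULT_SHORTCUTS)

def resolve_action(tap_pattern: str, shortcuts: dict):
    """Return the configured action for a detected tap pattern, if any."""
    return shortcuts.get(tap_pattern)
```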
  • In some implementations, the control system 100 may include a headphone with aesthetic designs. For example, the earpad portion may have a transparent design or a colored exterior spin that may feature sponsor information and/or branding logos. In some implementations, the control system 100 may include a headphone with a touch pad or a touch screen that may show social sharing information (e.g., Tweets, Facebook messages, etc.). In some implementations, the control system 100 may include a headphone with a removable headband portion to feature user-customized graphics. The user may remove the headband portion from the headphone for cleaning purposes. In some implementations, the control system 100 may include a headphone that may be adaptable to helmets.
  • In some implementations, the control system 100 may be engaged in a "DJ display" mode, wherein a digital screen on the headphone may display color visualizations, including varying color bars that illustrate the frequency content of the audio being played.
  • In some implementations, the control system 100 may provide APIs to allow third-party services. For example, the control system 100 may include a microphone so that a user may speak over a phone call. In some implementations, a user may instantiate a mobile component at the audio source (e.g., a computer, a smart phone, etc.). When the audio source detects an incoming audio communication request (e.g., a Skype call, a phone call, and/or the like), the control system 100 may automatically turn down the volume of the media player, and the user may make a gesture to answer the incoming audio communication request, e.g., by tapping on the touch pad of the headphone if the user has configured a single tap as the shortcut key, etc.
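The incoming-call behavior just described (duck the media volume, then answer on the configured shortcut gesture) could look roughly like the sketch below; the MediaPlayer class and the answer_call and wait_for_gesture callbacks are assumptions for illustration.

```python
# Sketch of the incoming-call behavior: duck the media volume when a call
# arrives, then answer if the user's configured shortcut gesture is detected.
# MediaPlayer and the callbacks are assumptions for illustration.

class MediaPlayer:
    def __init__(self) -> None:
        self.volume = 1.0

    def duck(self, level: float = 0.2) -> None:
        """Automatically turn down the media volume."""
        self.volume = min(self.volume, level)

    def restore(self) -> None:
        self.volume = 1.0

def on_incoming_call(player: MediaPlayer, answer_call, wait_for_gesture,
                     answer_gesture: str = "single_tap") -> None:
    player.duck()
    if wait_for_gesture() == answer_gesture:
        answer_call()          # user accepted via the configured shortcut gesture
    else:
        player.restore()       # request ignored; resume normal volume

# Example: a single tap on the touch pad answers the call.
on_incoming_call(MediaPlayer(), answer_call=lambda: print("answered"),
                 wait_for_gesture=lambda: "single_tap")
```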
  • In some implementations, the control system 100 may allow a user to sing and record the user's own singing. In one implementation, the Motion-HP may instantiate a "Karaoke" mode so that the control system 100 may mix the background soundtrack of the song being played with the user's recorded singing to make a cover version of the song. In one implementation, the user may make a gesture on the touch pad of the control system 100 to share the "cover" version to social media.
  • In some implementations, the control system 100 may provide audio recognition (e.g., a “Shazam” like component, etc.). In some implementations, when a user is listening to a radio channel without digital identification of the audio content, the control system 100 may identify the audio content via an audio recognition procedure.
  • In some implementations, the control system 100 may broadcast audio content it receives from an audio source to other control systems 100 via Bluetooth, NFC, etc. For example, a user may connect his/her control system 100 to a computer to listen to media content, and broadcast the content to other control systems 100 so that other users may hear the same media content via broadcasting without directly connecting to an audio source.
  • In some implementations, the control system 100 may include accelerometers to sense the body movement of the user to facilitate game control in a game play environment. In one implementation, the control system 100 may be engaged as a remote game control via Bluetooth, NFC, WiFi, and/or the like, and a user may move his head to create motions which indicate game control commands.
  • In some implementations, the control system 100 may automatically send real-time audio listening status of a user to his subscribed followers, e.g., the fan base, etc.
  • In some implementations, the control system 100 may be accompanied by a wrist band, which may detect a user's pulse to determine the user's emotional status, so that the control system 100 may automatically select music for the user. For example, when a heavy pulse is sensed, the control system 100 may select soft and soothing music for the user.
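A pulse-to-music mapping of this kind could be as simple as a thresholded classifier; the sketch below is a minimal illustration in which the threshold values and category labels are assumptions, not values specified above.

```python
# Illustrative mapping from a sensed pulse rate to a music selection category;
# the thresholds and labels are assumptions for illustration only.

def select_music_category(pulse_bpm: int) -> str:
    """Pick a music category from the wrist band's pulse reading."""
    if pulse_bpm >= 100:   # heavy pulse: select soft and soothing music
        return "soothing"
    if pulse_bpm <= 60:    # resting pulse
        return "neutral"
    return "upbeat"

assert select_music_category(110) == "soothing"
```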
  • In some implementations, the control system 100 may comprise a flash memory to store the user's social media feeds, user's configuration of audio settings, user defined shortcut keys, and/or the like. For example, when the user connects a control system 100 to a different audio source, the user does not need to re-configure the parameters of control system 100.
  • In some implementations, the control system 100 may allow a user to add third party music services, such as but not limited to iTunes, Pandora, Rhapsody, and/or the like, to the control system 100. In further implementations, the user may configure shortcut keys for selection of music services, control the playlist, and/or the like.
  • In some implementations, the control system 100 may require registration in order to access full usage of the control system 100. For example, a user may access a registration platform via a computer, etc. An unregistered user may be allowed to access limited features of the control system 100, e.g., playing music, etc., but may not be able to access additional features such as "DJ mode," "Karaoke mode," and/or the like.
  • Further implementations of the control system 100 include analytics for targeting advertisements, revenue sharing between advertising channels and sponsors, music selection and recommendation to a user, and/or the like.
  • Some implementations of the present technology may be fully integrated into a headphone apparatus, and used in conjunction with a media player device to provide a user with audio playback of media content, and to allow the user to interact with social media sites, email providers, supplementary content providers, and ad providers based on the media content being played. FIG. 10 illustrates an exemplary embodiment of the apparatus 1000. The headphones 1000 are operably connected to a media player 1002 through a connection 1004, for instance a hardwired connection or a wireless connection, such as Bluetooth or Wi-Fi. The media player 1002 communicates with a network gateway 1006 through a wireless network connection 1008, such as a cellular connection or a Wi-Fi connection. The network gateway 1006 provides network connectivity to the Internet 1010 through a network connection 1014, facilitating access to various content and service providers 1012 a-d connected to the Internet 1010 through network connections 1016. Content and service providers 1012 a-d may include email servers, social media sites, ad servers, and content servers.
  • Other implementations are contemplated. For example, the media player 1002 may be one of many types of mobile devices, such as a cellular telephone, a tablet, a computer, a pager, a gaming device, or a media player. In some implementations, the wireless network connection 1008 may be one of many types of communications networks through which data can be transferred, such as a Wi-Fi network, a cellular telephone network, a satellite communications network, a Bluetooth network, or an infrared network. In some implementations, the content and service providers 1012 a-d may also include search engines, digital content merchant sites, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • FIG. 11A illustrates an example embodiment of the headphones 1000. The headphones include a first earpiece assembly 1102, a second earpiece assembly 1104, and a headband assembly 1106 that securely positions the earpieces 1102 and 1104 over the ears of a user. Each earpiece assembly 1102 and 1104 includes one or more externally accessible touch sensor arrays 1108 and 1110 for user interaction.
  • FIG. 11B illustrates the components of the first earpiece assembly 1102. Mounted on the Main PCB 1112 are a microcontroller 1114, a baseband digital signal processor (DSP) 1116, a Kalimba DSP 1118, an audio/video codec 1120, random access memory (RAM) 1122, and non-volatile "flash" memory 1124. Also connected to the Main PCB 1112 are a USB connector 1126, a wired connector 1128, light emitting diode (LED) indicators 1130, a power switch 1132, an audio driver 1134, and touch sensor array 1108. The first earpiece assembly 1102 is connected to the second earpiece assembly 1104 through a wired connection 1136 passing through the headband assembly 1106.
  • FIG. 11C illustrates the components of the second earpiece assembly 1104. The Slave PCB 1138 is connected to the Main PCB 1112 of the first earpiece assembly 1102 through a hardwire connection 1136. Also connected to the Slave PCB 1138 are a battery 1142, microphone array 1144, near-field communication (NFC) module 1146, an audio driver 1148, and a touch sensor array 1110.
  • The Main PCB 1112 and Slave PCB 1138 provide connectivity between the various components of the earpiece assemblies. The microcontroller 1114 accepts inputs from the touch sensor arrays 1108 and 1110, USB connector 1126, and wired connector 1128, and, if necessary, translates the inputs into machine-compatible commands. Commands and other data are transmitted between the microcontroller 1114 and the connected components. For example, audio from the microphone array 1144 and the wired connector 1128 is digitally encoded by the codec 1120 and processed by the baseband DSP 1116 and Kalimba DSP 1118, where it may be modified and mixed with other audio information. Mixed audio is decoded by the codec 1120 into an analog representation and is output to the audio drivers 1134 and 1148 for playback. LEDs 1130 are connected to the microcontroller 1114 and may be illuminated or flashed to indicate the operational status of the headphone apparatus. Power is supplied by the battery 1142 connected to the microcontroller 1114, and power may be toggled by using the power switch 1132. Additional components, such as wireless transceivers 1150, may be connected to and controlled by the microcontroller 1114. The microcontroller 1114 may transmit data to an externally connected computing device, such as a smart phone or media player, via the wireless transceivers 1150, the USB connector 1126, or the wired connector 1128. The data may include data used to identify the specific model, features, and unique identifying information of the headphones.
  • Other implementations are contemplated. For example, one or more of the touch sensor arrays 1108 and 1110 may instead be physical buttons, switches, or dials. Additional connectors may be provided on the first or second earpiece assemblies 1102 and 1104, including an audio output port, an optical port, a Firewire port, an Ethernet port, a SATA port, a power input port, a Lightning port, or a serial port. Power, digital data, or analog data may be input into the apparatus or output from the apparatus using these ports. In some implementations, the headphone apparatus 1000 may also include a video display unit, such that visual content may be displayed on the device. The video display unit may be an LCD display, or may be a heads-up display (HUD) that overlays visual data over a transparent or translucent viewing element. In some embodiments, one or more of the components housed in each of the earpiece assemblies 1102 and 1104 may be relocated to the other earpiece assembly or to an external housing unit. The housing unit may be positioned on the headband 1106, on one of the wired connections (i.e. connections 1128 and 1136), or elsewhere on the headphone apparatus 1000. In some implementations, the headphone 1000 may have a GPS device that can be used to determine locational data. In some implementations, the battery 1142 is removable.
  • The user may use the touch sensor arrays 1108 and 1110 to input commands into the headphone apparatus 1000. For example, each of the individual input surfaces of sensor arrays 1108 and 1110 may be programmed to correspond to specific functions, such as play, stop, rewind, fast forward, pause, repeat, skip, volume increase, or volume decrease. Additional commands may include a command to wirelessly "pair" the headphone 1000 to another wireless device, a command to create a post on a social networking site, a command to draft an email, or a command to search for additional information regarding the media content currently being played. The touch sensor arrays 1108 and 1110 may be of a PCB, Flex-PCB, or ITO film based design.
  • Additional commands may be programmed depending on the length of time the button or touch sensor is activated. For example, a brief touch may correspond to a command to fast forward, while a longer touch may correspond to a command to skip forward to the next track. Additional commands may be programmed depending on a sequence of multiple inputs. For example, pressing the touch array 1108 or 1110 twice may correspond to a command to create a post on a social media site, while pressing the touch array 1108 or 1110 three times may correspond to a command to draft an email. In addition, touching the sensor array 1108 or 1110 in a specific order and within a certain timeframe, such as to simulate a gesture, can correspond to a command. For example, touching the bottom, middle, and top sensors of array 1108 in sequence in a single sliding motion may correspond to a command to increase the volume. Touching the top, middle, and bottom sensors of array 1108 in sequence in a single sliding motion may correspond to a command to decrease the volume. Other such "gestures" can be recognized as user commands, including a sliding left-to-right motion, a sliding right-to-left motion, a clockwise circular motion, or a counter-clockwise circular motion.
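The duration-based and sequence-based command recognition described above could be sketched as follows; the press-duration threshold, time window, sensor labels, and command names are illustrative assumptions rather than values taken from the disclosure.

```python
# Sketch of duration- and sequence-based command recognition on a touch
# sensor array. Thresholds, sensor labels, and command names are assumptions.

LONG_PRESS_SECONDS = 0.8

SEQUENCE_COMMANDS = {
    ("bottom", "middle", "top"): "volume_up",     # upward sliding motion
    ("top", "middle", "bottom"): "volume_down",   # downward sliding motion
}

def classify_press(duration_seconds: float) -> str:
    """A brief touch fast-forwards; a longer touch skips to the next track."""
    return "skip_next" if duration_seconds >= LONG_PRESS_SECONDS else "fast_forward"

def classify_sequence(touches, window_seconds: float = 0.5):
    """Recognize an ordered swipe across three sensors completed within the window.

    `touches` is a list of (sensor_name, timestamp) tuples in arrival order.
    """
    if len(touches) < 3:
        return None
    names = tuple(name for name, _ in touches[-3:])
    t_first, t_last = touches[-3][1], touches[-1][1]
    if t_last - t_first <= window_seconds:
        return SEQUENCE_COMMANDS.get(names)
    return None

# Example: bottom -> middle -> top within 0.3 s reads as a volume-up swipe.
print(classify_sequence([("bottom", 0.0), ("middle", 0.15), ("top", 0.3)]))
```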
  • In some implementations, the headphone 1000 may be “paired” with another device through a wireless connection, such that the headphone will only communicate with the paired device. Example wireless connections may include Bluetooth, enabled through an appropriately provided Bluetooth transceiver. Near-field communication (NFC) tags, for instance a tag on NFC module 1146, may be used to simplify the “pairing” process. For example, the NFC tag may be pre-programmed from the factory with the unique Bluetooth ID information of the Bluetooth transceiver. A device capable of reading NFC tags can be passed over the NFC tag in order to access the Bluetooth ID information. This information can be used to uniquely identify the Bluetooth transceiver contained within the headphone assembly and to establish the “paired” connection without requiring additional manual entry of the Bluetooth ID by a user. The NFC tag may also contain other information used to identify the specific model, features, and unique identifying information of the headphones 1000.
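The NFC-assisted pairing flow described above amounts to reading a pre-programmed Bluetooth ID from the tag and handing it to the pairing routine; the sketch below assumes a simple tag payload layout, and the pair() callback is an illustrative stand-in rather than a real Bluetooth API.

```python
# Sketch of NFC-assisted pairing: the NFC tag carries the pre-programmed
# Bluetooth ID of the headphone's transceiver, so the reading device can
# establish the "paired" connection without manual ID entry. The payload
# layout and the pair() callback are assumptions for illustration.

def read_bluetooth_id(tag_payload: dict) -> str:
    """Extract the pre-programmed Bluetooth ID from an NFC tag payload."""
    return tag_payload["bluetooth_id"]

def pair_via_nfc(tag_payload: dict, pair) -> bool:
    """Pair with the transceiver identified by the tag, returning success."""
    return pair(read_bluetooth_id(tag_payload))

# Example tag contents, including other identifying information.
tag = {"bluetooth_id": "00:11:22:33:44:55", "model": "headphone-1000"}
pair_via_nfc(tag, pair=lambda bt_id: True)
```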
  • FIG. 12 illustrates exemplary tasks that may be performed by various implementations of the present technology. A media player (for instance, media player 1002) loads a playlist of media content to be played (block 1202), plays the media content, and recognizes contextual information about the media contents of the playlist (block 1204). Examples of contextual information may include the name of the track. The media player may also determine the location of the user using a built-in GPS sensor, or using a GPS sensor located on the headphone assembly (block 1205).
  • Using the contextual and location information, the apparatus may deliver supplemental content to the user. The media player sends a request to content servers (for instance, content servers 1012 a-d) for supplemental content based on the contextual and location information acquired (block 1206). Supplemental content may include information such as biographical information about the artist, album art or other visual data about the artist, social media messages written by or about the artist, a list of past and upcoming tour dates by the artist, "remixed" or alternative tracks, a listing of related merchandise, or a list of "similar" artists and tracks. The media player receives the supplemental content from the content servers (block 1208), aggregates the summary information into display templates (block 1210), and displays the aggregated information to the user (block 1212). An example display template 1300 with aggregated information 1302 is illustrated in FIG. 13. A user may interact with the aggregated data by selecting an item 1306 that he wishes to learn more about. The phone will direct the user to an external site where more detailed information is displayed about the selected item, or to an Internet-based marketplace where merchandise related to the selected item is offered for sale (block 1214).
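The request-and-aggregate steps of blocks 1206-1212 could be sketched as below, assuming a generic HTTP content server; the endpoint URL, query parameter names, and template format are hypothetical and not taken from the disclosure.

```python
# Sketch of the supplemental-content request and aggregation (blocks 1206-1212),
# assuming a generic HTTP content server; endpoint and parameters are hypothetical.
import json
import urllib.parse
import urllib.request

def fetch_supplemental(track: str, artist: str, lat: float, lon: float,
                       server: str = "https://content.example.com/supplemental"):
    """Request supplemental content based on contextual and location information."""
    params = urllib.parse.urlencode({"track": track, "artist": artist,
                                     "lat": lat, "lon": lon})
    with urllib.request.urlopen(f"{server}?{params}") as resp:
        # e.g. biography, album art, tour dates, similar artists
        return json.load(resp)

def aggregate(supplemental: dict, template: dict) -> dict:
    """Fill a display template's slots with fields returned by the content server."""
    return {slot: supplemental.get(field) for slot, field in template.items()}
```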
  • The apparatus may also deliver ad content based on the contextual information and location information collected. The media player sends a request to ad servers for ad content based on the contextual and location information acquired (block 1220). Ad content may include static images, videos, text, audio recordings, or other forms of media (block 1222). The media player receives the ad content from the ad servers, inserts the ads into display templates (block 1224), and displays the ads to the user (block 1226). An example display template 1300 with ads 1308 is illustrated in FIG. 13. The user may interact with the ads by selecting an ad that he wishes to learn more about. The phone will direct the user to an external site where more detailed information is displayed about the selected ad, or to an Internet-based marketplace where merchandise related to the selected ad is offered for sale (block 1228).
  • The apparatus may also allow the user to share media or other content with one or more users. The media player receives a command from the user to share content with a local second user (block 1240). The command may be a voice command or an input from the touch sensor array. The media player searches for and connects to the local second user's device over a wireless connection (block 1242). The wireless connection can be established over any of several common wireless networks, including Wi-Fi, Bluetooth, or infrared. After establishing a connection, the media player transmits the content to the second user's device over the wireless connection (block 1244).
  • In some implementations, the user may instead share media or other content with one or more users over an Internet connection. In these implementations, the media player may access the Internet and search for a second user or for a content sharing site through the Internet connection. Access to the Internet may be over any of several common wireless networks including Wi-Fi, Bluetooth, infrared, a cellular network, or a satellite network. The media player connects to the second user's device or the content sharing site over the Internet connection, and transmits the content to the second user's device or content sharing site. The media player may also draft and send a message to one or more users, notifying the one or more users of the newly shared content and providing the location from which it can be retrieved.
  • The apparatus may also allow the user to interact with various social media sites based upon the contextual data and locational data acquired. In these implementations, the media player receives a command from the user to interact with a social media site (block 1260). The media player generates a message or an action based upon the contextual and location information (block 1262). Examples of messages may include “[User Name] is listening to [Track Name] by [Artist Name] at [Location]”, “[User Name] is playing [Album Name] on the way to [Location],” or any similar message identifying contextual and location information in a social media compatible format. Messages and actions may be transmitted to social media sites using established application programming interfaces (APIs) to ensure compatibility (block 1264).
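The message generation of block 1262 is essentially template filling from the contextual and location fields; the sketch below mirrors the example messages above, with the post() callback standing in for a social media API as an assumption.

```python
# Sketch of message generation from contextual and location information
# (block 1262); the post() callback is an illustrative stand-in for a
# social media site's API (block 1264).

MESSAGE_TEMPLATE = "{user} is listening to {track} by {artist} at {location}"

def generate_message(user: str, track: str, artist: str, location: str) -> str:
    return MESSAGE_TEMPLATE.format(user=user, track=track,
                                   artist=artist, location=location)

def share_status(context: dict, post) -> None:
    """Generate the message and transmit it via the site's API."""
    post(generate_message(context["user"], context["track"],
                          context["artist"], context["location"]))

# Example with hypothetical context data.
share_status({"user": "John Smith", "track": "Scientist",
              "artist": "Coldplay", "location": "Miami"}, post=print)
```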
  • In some implementations, the message may also be modified by the user to allow for personalization. The message may also include photographs, videos, audio, or any other related content, either generated by the user or retrieved from content servers or ad servers. Examples of actions may include “liking” an artist or track and subscribing to an artist's social media page. Example social media sites may include Facebook, Twitter, Google+, Instagram, or any other such site. In some embodiments, the apparatus may also send messages or perform other such actions over other networking sites or services, such as email, instant messaging providers, SMS message providers, VOIP providers, fax providers, content review sites, and online user forums.
  • Referring to FIG. 14, in some implementations, the apparatus may operate in "karaoke mode," such that it records the user's voice and mixes it with a background audio sound track. The apparatus enters "karaoke mode" after receiving an appropriate command from the user via voice command or touch sensor input (block 1402). Audio content from a playlist is played on one side's audio channel (i.e. audio driver 1134), while audio is recorded from the microphone (i.e. microphone array 1144) and played over the other side's audio channel (i.e. audio driver 1148) (block 1404). Audio from the microphone is mixed with the audio track and saved locally, for example on the flash memory 1124 or RAM 1122 (block 1406). The mixed audio track may be uploaded to a content sharing site or a social media site via an appropriate Internet connection (block 1410). The mixed audio track may be shared using the mechanisms described above, such as through a generated message on a social media site, a generated email message, or a message through any other such communications network (block 1410). This generated message is then transmitted to the recipient, for example transmitted to the social media site using an appropriate API or to an email server for transmission to the recipient (block 1412). The mixed audio track may also be retained on local storage for future playback.
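The mixing step of block 1406 reduces to summing gain-weighted microphone and track samples and clamping the result; the sketch below is a minimal illustration in which the sample representation and gain values are assumptions.

```python
# Minimal sketch of the "karaoke mode" mix (block 1406): microphone samples
# are blended with the backing track and clipped to range, then saved locally.
# Sample format and gains are assumptions for illustration.

def mix_karaoke(track_samples, mic_samples, track_gain=0.6, mic_gain=0.8):
    """Mix two equal-rate streams of float samples into one clipped stream."""
    n = min(len(track_samples), len(mic_samples))
    return [max(-1.0, min(1.0, track_gain * track_samples[i] + mic_gain * mic_samples[i]))
            for i in range(n)]

# The mixed result would be saved to flash memory or RAM and later uploaded.
mixed = mix_karaoke([0.1, 0.2, -0.3], [0.05, -0.1, 0.4])
```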
  • In some embodiments, “karaoke mode” may instead identify the selected audio track using contextual information and access a vocal-free version of the audio track from an appropriate content server. The vocal-free version of the audio track may be used in place of the vocalized version, resulting in a “karaoke” mix that better accentuates the user's own voice without interference from the original vocalizations. The vocal-free version of the audio track may also be mixed with the vocalized version, such that a reduced portion of the original vocalizations remain in the final mix. In some embodiments, accessing the vocal-free versions may also include connecting to an Internet-connected marketplace, such that vocal-free versions may be purchased, downloaded, stored, and used for “karaoke” mode using the apparatus.
  • In some embodiments, the features of the media player 1002 may be limited or enabled based upon the connected headphone 1000. Identifying information from the headphone 1000 may be transferred from the headphone 1000 to the media player 1002 via the wired connector 1128 or via a wireless connection, such as through a Bluetooth network, Wi-Fi network, NFC, or other such communication connections. Identifying information is validated against a list of authorized devices, and features of the media player may be disabled or enabled as desired. For example, a user may plug in a headphone 1000 as described above. Information identifying the headphone is transmitted to the media player 1002 and is validated against a recognized list of compatible devices, and all features of the media player are enabled as a result. The user may alternatively plug in a headphone that is not recognized by the media player 1002. Certain features, for example "karaoke mode," may be disabled on the media player as a result.
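Feature gating based on validated device identity could be expressed as a lookup against an authorized-device list; the sketch below is a minimal illustration in which the model names and feature labels are assumptions.

```python
# Sketch of feature gating based on the connected headphone's identifying
# information validated against a list of authorized devices. Model names
# and feature labels are illustrative assumptions.

AUTHORIZED_MODELS = {"headphone-1000"}
ALL_FEATURES = {"playback", "karaoke_mode", "social_sharing"}
RESTRICTED_FEATURES = {"karaoke_mode"}

def enabled_features(device_info: dict) -> set:
    """Return the media player features enabled for the connected headphone."""
    if device_info.get("model") in AUTHORIZED_MODELS:
        return set(ALL_FEATURES)                  # recognized device: all features
    return ALL_FEATURES - RESTRICTED_FEATURES     # unrecognized: restricted set

assert "karaoke_mode" not in enabled_features({"model": "unknown-brand"})
```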
  • Various implementations of the present invention allow for the control of, interaction with, and creation of content via a remote device, such as an audio headphone, to a base station such as a mobile device, mp3 player, cell phone, mobile phone, smart phone, tablet computer, e-book reader, laptop computer, smart television, smart video screen, networked video player, game network, and the like. For example, example embodiments of the present invention allow for the programming of short-cut commands, such as hand gestures received at or near the headphones, to initiate a command on a software application running on a smart phone, such as posting a "like" on a social network relating to a song played on the headphone.
  • Previous attempts to control content or content players via remote devices such as headphones and remote controls have allowed user manipulation of the audio visual content as experienced by the user (e.g., adjusting volume, pausing, rewinding, etc.). Implementations of the present invention allow for the user to create additional content from the remote device for distribution over a network, such as comments relating to content, accessing promotional offers, product registration, participation in live promotions, etc. Such layered content creation has previously been done through user input at the base device, such as typing into a smart phone to indicate a favorable response or opinion for a song. With various implementations of the present invention, a user can program the base device, such as a smart phone, to recognize simple inputs made at the remote device and associate those inputs with a specific command to be executed in programs or applications running on the device or accessible by the device.
  • By way of example, and without limitation, a user can download a program onto a smartphone that recognizes input made via an input pad on a headphone. The input, such as a circle made by the finger on the input pad (or touch sensor array), can be associated with a command in an mp3 player application. The circle motion can be associated with a command to pull all songs of a related genre from a sponsor's play list.
  • In a broad implementation of the present invention, and with reference to FIG. 15, a method of remote access to a hosted application comprises the steps of creating an associated command (block 1500) (e.g., abbreviated inputs at a remote device associated with the execution of a function or step in a hosted application) and receiving a remote command for execution. More specifically, a method of remote access to a hosted application comprises the steps of: recording a user input from a sensor on a remote device (block 1502); associating the recorded user input with a specific command (block 1504); storing the command-input association (block 1506); receiving a user input on a sensor on a remote device (block 1508); transmitting the user input from the remote device to a base device (block 1510); receiving at the base device the user input transmitted from the remote device (block 1512); comparing the input with the previously recorded inputs for association with a command specific to an application running on or accessible by the base device (block 1514); matching the user input to the desired command (block 1516); and executing the command (block 1518). In some embodiments the execution of the command (block 1518) may initiate certain cloud functionality (block 1520) to allow user interaction with content available over a network, such as the Internet, a web page, a blogosphere, a blog spot, a social network, a shared media network, a closed or private network, and the like.
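The record-associate-store-match-execute flow of FIG. 15 could be sketched as a small associator object; the exact-match comparison and the handler registry below are simplified assumptions standing in for whatever matching logic a real implementation would use.

```python
# Sketch of the FIG. 15 flow: record a user input, associate and store it with
# a command, then match later inputs against the stored associations and
# execute the matched command. Matching and handlers are simplified assumptions.

class CommandAssociator:
    def __init__(self) -> None:
        self.associations = {}      # recorded input signature -> command name

    def record(self, input_signature: str, command: str) -> None:
        """Blocks 1502-1506: record the input and store the command association."""
        self.associations[input_signature] = command

    def match(self, input_signature: str):
        """Blocks 1512-1516: compare a received input against stored associations."""
        return self.associations.get(input_signature)

    def execute(self, input_signature: str, handlers: dict) -> None:
        """Block 1518: run the handler for the matched command, if any."""
        command = self.match(input_signature)
        if command is not None and command in handlers:
            handlers[command]()

# Example: a circle gesture pulls songs of a related genre from a playlist.
assoc = CommandAssociator()
assoc.record("circle_gesture", "pull_sponsor_genre_playlist")
assoc.execute("circle_gesture",
              {"pull_sponsor_genre_playlist": lambda: print("playlist queued")})
```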
  • Various implementations of the invention utilize human vital and biological data collected via the external device, such as interactive headphones, to choose music according to mood and/or activity level. For example, when a user is working out in the gym, more up-beat music is played while running, and more relaxing music is played as the user begins to walk, cool off, and wind down an activity session. This includes a relational database of music, artists, and songs with mood classifications (pumped-up, calm/relax, etc.). The association of content with activity can be made with simple commands entered via the touch pad on the interactive headphones, or the device can include an accelerometer to detect activity levels. The application running on the base device can include GPS or other location-determining software, as well as logic to correlate location with calendar entries or other data to determine or confirm activity.
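An accelerometer-driven selection of this kind could map sensed activity to the mood classifications named above; the sketch below is a minimal illustration in which the acceleration thresholds are assumptions.

```python
# Illustrative mapping from accelerometer-sensed activity to a mood
# classification used for music selection; thresholds are assumptions.

def activity_level(accel_magnitude_g: float) -> str:
    """Classify activity from the magnitude of accelerometer readings (in g)."""
    if accel_magnitude_g > 1.8:
        return "running"
    if accel_magnitude_g > 1.2:
        return "walking"
    return "resting"

ACTIVITY_TO_MOOD = {
    "running": "pumped-up",     # more up-beat music while running
    "walking": "calm/relax",    # more relaxing music while winding down
    "resting": "calm/relax",
}

def pick_mood(accel_magnitude_g: float) -> str:
    return ACTIVITY_TO_MOOD[activity_level(accel_magnitude_g)]
```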
  • In other examples, the software application of some implementations of the device can recognize, via an indication from the headphones, when the headphones are removed. In a particular commercial implementation, a music aggregator, such as Pandora, would be able to determine when music is played and when it is paused based on whether the interactive headphones are over the ears or not, thereby avoiding unnecessary licensing fees for the music.
  • In another example, a user can interact with content, such as just-in-time promotions, targeted marketing, geo-based marketing, and the like, by associating simple commands with registration of the user for participation in a promotional offer, opt-in or opt-out of promotional offers or materials, voting, association, and the like.
  • Implementations of the invention are not limited to headphones, but can be incorporated into dongles, or other external input devices. The methods of creating layered content and interacting with programs and content hosted on a base device via commands entered into a remote device can be implemented in video devices or headphones/video combinations.
  • The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
  • The term “data processing apparatus” encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
  • A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
  • While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
  • Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
  • Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

Claims (13)

What is claimed is:
1. A system for interacting with an application on a processing apparatus, comprising:
an input module;
a processing module; and
a transmission module;
wherein the input module is configured to detect a tactile input applied by a user;
wherein the processing module is configured to translate the input into an application command; and
wherein the transmission module is adapted to transmit the command to the processing apparatus.
2. The system of claim 1, wherein the tactile input comprises one or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping gesture.
3. The system of claim 1, wherein the tactile input comprises a series of two or more of the following: a momentary touching gesture, a sustained touching gesture, and a swiping motion gesture.
4. The system of claim 1, wherein the processing module is configured to determine a number of fingers used by the user to apply the input.
5. The system of claim 4, wherein the processing module is configured to translate the input into an application command based on the number of fingers detected.
6. The system of claim 1, further comprising one or more audio speakers.
7. The system of claim 1, wherein the application is a media management application.
8. The system of claim 1, wherein the processing apparatus is one of the following: a media player, a smartphone, a gaming device, and a computer.
9. The system of claim 1, wherein the command comprises a command to control a media playback function of the processing apparatus.
10. The system of claim 1, wherein the command comprises a command to transmit a message to a recipient through a communications network.
11. The system of claim 10, wherein the communications network comprises one or more of the following: a LAN, a WAN, the internet, and a cellular network.
12. The system of claim 10, wherein the recipient is a communications device, a social media website, an email server, or a telephone.
13. The system of claim 10, further comprising a steering wheel for controlling a vehicle.
US13/918,451 2012-06-15 2013-06-14 Interactive input device Abandoned US20130339850A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US13/918,451 US20130339850A1 (en) 2012-06-15 2013-06-14 Interactive input device
US14/751,952 US20160103511A1 (en) 2012-06-15 2015-06-26 Interactive input device
US15/628,206 US20180048750A1 (en) 2012-06-15 2017-06-20 Audio/video wearable computer system with integrated projector
US16/747,926 US20200162599A1 (en) 2012-06-15 2020-01-21 Audio/Video Wearable Computer System with Integrated Projector
US17/661,421 US20220337693A1 (en) 2012-06-15 2022-04-29 Audio/Video Wearable Computer System with Integrated Projector

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201261660662P 2012-06-15 2012-06-15
US201361749710P 2013-01-07 2013-01-07
US201361762605P 2013-02-08 2013-02-08
US13/918,451 US20130339850A1 (en) 2012-06-15 2013-06-14 Interactive input device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/751,952 Continuation US20160103511A1 (en) 2012-06-15 2015-06-26 Interactive input device

Publications (1)

Publication Number Publication Date
US20130339850A1 true US20130339850A1 (en) 2013-12-19

Family

ID=49757146

Family Applications (9)

Application Number Title Priority Date Filing Date
US13/802,217 Abandoned US20130339859A1 (en) 2012-06-15 2013-03-13 Interactive networked headphones
US13/918,451 Abandoned US20130339850A1 (en) 2012-06-15 2013-06-14 Interactive input device
US14/751,952 Abandoned US20160103511A1 (en) 2012-06-15 2015-06-26 Interactive input device
US15/162,152 Active US9992316B2 (en) 2012-06-15 2016-05-23 Interactive networked headphones
US15/992,421 Active - Reinstated 2033-05-18 US10567564B2 (en) 2012-06-15 2018-05-30 Interactive networked apparatus
US16/783,331 Abandoned US20200252495A1 (en) 2012-06-15 2020-02-06 Interactive networked apparatus
US17/650,546 Active US11924364B2 (en) 2012-06-15 2022-02-10 Interactive networked apparatus
US18/419,412 Pending US20240163359A1 (en) 2012-06-15 2024-01-22 Interactive networked apparatus
US18/442,925 Pending US20240187511A1 (en) 2012-06-15 2024-02-15 Interactive networked apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/802,217 Abandoned US20130339859A1 (en) 2012-06-15 2013-03-13 Interactive networked headphones

Family Applications After (7)

Application Number Title Priority Date Filing Date
US14/751,952 Abandoned US20160103511A1 (en) 2012-06-15 2015-06-26 Interactive input device
US15/162,152 Active US9992316B2 (en) 2012-06-15 2016-05-23 Interactive networked headphones
US15/992,421 Active - Reinstated 2033-05-18 US10567564B2 (en) 2012-06-15 2018-05-30 Interactive networked apparatus
US16/783,331 Abandoned US20200252495A1 (en) 2012-06-15 2020-02-06 Interactive networked apparatus
US17/650,546 Active US11924364B2 (en) 2012-06-15 2022-02-10 Interactive networked apparatus
US18/419,412 Pending US20240163359A1 (en) 2012-06-15 2024-01-22 Interactive networked apparatus
US18/442,925 Pending US20240187511A1 (en) 2012-06-15 2024-02-15 Interactive networked apparatus

Country Status (3)

Country Link
US (9) US20130339859A1 (en)
EP (2) EP2862044A4 (en)
WO (2) WO2013188769A1 (en)

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104902360A (en) * 2014-03-07 2015-09-09 美国戴豪公司 Headphones for receiving and transmitting audio signals
WO2015171228A1 (en) * 2014-05-05 2015-11-12 Usablenet Inc. Methods for facilitating a remote interface and devices thereof
US9298907B2 (en) 2014-04-01 2016-03-29 Snowshoefood, Inc. Methods for enabling real-time digital object and tangible object interactions
US20160124707A1 (en) * 2014-10-31 2016-05-05 Microsoft Technology Licensing, Llc Facilitating Interaction between Users and their Environments Using a Headset having Input Mechanisms
US20160212515A1 (en) * 2015-01-20 2016-07-21 Taction Technology Inc. Apparatus and methods for altering the appearance of wearable devices
US20160216943A1 (en) * 2015-01-25 2016-07-28 Harman International Industries, Inc. Headphones with integral image display
WO2016154590A1 (en) * 2015-03-26 2016-09-29 General Electric Company Detection and usability of personal electronic devices for field engineers
US20160283191A1 (en) * 2009-05-27 2016-09-29 Hon Hai Precision Industry Co., Ltd. Voice command processing method and electronic device utilizing the same
US20160295341A1 (en) * 2012-01-06 2016-10-06 Bit Cauldron Corporation Method and apparatus for providing 3d audio
US20160313973A1 (en) * 2015-04-24 2016-10-27 Seiko Epson Corporation Display device, control method for display device, and computer program
US20160346604A1 (en) * 2015-05-28 2016-12-01 Nike, Inc. Music streaming for athletic activities
US20170216675A1 (en) * 2016-02-03 2017-08-03 Disney Enterprises, Inc. Fitness-based game mechanics
CN107210950A (en) * 2014-10-10 2017-09-26 沐择歌有限责任公司 Equipment for sharing user mutual
US20170330429A1 (en) * 2016-05-10 2017-11-16 Google Inc. LED Design Language for Visual Affordance of Voice User Interfaces
US9832644B2 (en) 2014-09-08 2017-11-28 Snowshoefood, Inc. Systems and methods for hybrid hardware authentication
CN107567634A (en) * 2015-05-22 2018-01-09 惠普发展公司有限责任合伙企业 Media content selects
US20180018965A1 (en) * 2016-07-12 2018-01-18 Bose Corporation Combining Gesture and Voice User Interfaces
CN107666492A (en) * 2016-07-25 2018-02-06 中兴通讯股份有限公司 A kind of control method, service sensor, service unit and terminal
US20180146293A1 (en) * 2016-11-18 2018-05-24 Muzik, Llc Systems, methods and computer program products providing a bone conduction headband with a cross-platform application programming interface
US20180161626A1 (en) * 2016-12-12 2018-06-14 Blue Goji Llc Targeted neurogenesis stimulated by aerobic exercise with brain function-specific tasks
WO2018154327A1 (en) * 2017-02-24 2018-08-30 Guy's And St. Thomas' Nhs Foundation Trust Computer interface system and method
US20190007776A1 (en) * 2015-12-27 2019-01-03 Philip Scott Lyren Switching Binaural Sound
WO2019027912A1 (en) * 2017-07-31 2019-02-07 Bose Corporation Adaptive headphone system
CN109416610A (en) * 2018-09-18 2019-03-01 深圳市汇顶科技股份有限公司 Touch control component, device and touch control method
US10390139B2 (en) 2015-09-16 2019-08-20 Taction Technology, Inc. Apparatus and methods for audio-tactile spatialization of sound and perception of bass
US10402450B2 (en) 2016-05-13 2019-09-03 Google Llc Personalized and contextualized audio briefing
US10448520B2 (en) 2016-10-03 2019-10-15 Google Llc Voice-activated electronic device assembly with separable base
CN110475174A (en) * 2019-08-28 2019-11-19 深圳市索爱创新科技有限公司 A kind of bluetooth headset based on Internet of Things
US10484793B1 (en) * 2015-08-25 2019-11-19 Apple Inc. Electronic devices with orientation sensing
US10509558B2 (en) * 2017-12-08 2019-12-17 Spotify Ab System and method for enabling advertisement interaction with an electronic device
US10535966B2 (en) 2016-10-03 2020-01-14 Google Llc Planar electrical connector for an electronic device
US10573139B2 (en) 2015-09-16 2020-02-25 Taction Technology, Inc. Tactile transducer with digital signal processing for improved fidelity
US10599831B2 (en) 2014-02-07 2020-03-24 Snowshoefood Inc. Increased security method for hardware-tool-based authentication
US10659885B2 (en) 2014-09-24 2020-05-19 Taction Technology, Inc. Systems and methods for generating damped electromagnetically actuated planar motion for audio-frequency vibrations
USD885436S1 (en) 2016-05-13 2020-05-26 Google Llc Panel of a voice interface device
US10678502B2 (en) 2016-10-20 2020-06-09 Qualcomm Incorporated Systems and methods for in-ear control of remote devices
EP3865988A3 (en) * 2020-12-18 2022-01-12 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus for processing touch instruction, electronic device, storage medium and computer program product
US11237635B2 (en) 2017-04-26 2022-02-01 Cognixion Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11252825B2 (en) 2016-10-03 2022-02-15 Google Llc Voice-activated electronic device assembly with separable base
US11402909B2 (en) 2017-04-26 2022-08-02 Cognixion Brain computer interface for augmented reality
EP4096239A1 (en) * 2015-09-30 2022-11-30 Apple Inc. Earbud case with capcitive sensor insert
EP4120062A1 (en) * 2021-07-15 2023-01-18 Nxp B.V. Method and apparatus for audio streaming
US11741979B1 (en) * 2014-03-27 2023-08-29 Amazon Technologies, Inc. Playback of audio content on multiple devices
US11783359B2 (en) 2017-12-04 2023-10-10 Spotify Ab Audio advertising interaction with voice interactive devices

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10019995B1 (en) 2011-03-01 2018-07-10 Alice J. Stiebel Methods and systems for language learning based on a series of pitch patterns
US11062615B1 (en) 2011-03-01 2021-07-13 Intelligibility Training LLC Methods and systems for remote language learning in a pandemic-aware world
KR20130096978A (en) * 2012-02-23 2013-09-02 삼성전자주식회사 User terminal device, server, information providing system based on situation and method thereof
US20130339859A1 (en) 2012-06-15 2013-12-19 Muzik LLC Interactive networked headphones
US9412129B2 (en) 2013-01-04 2016-08-09 Skullcandy, Inc. Equalization using user input
US9231898B2 (en) * 2013-02-08 2016-01-05 Machine Zone, Inc. Systems and methods for multi-user multi-lingual communications
US20140344205A1 (en) * 2013-05-15 2014-11-20 Aliphcom Smart media device ecosystem using local and remote data sources
US20170280221A1 (en) * 2014-03-07 2017-09-28 Wearhaus Inc. Audio emanation device for receiving and transmitting audio signals
US9338514B2 (en) * 2014-03-28 2016-05-10 Sonos, Inc. Account aware media preferences
CN105208056B (en) * 2014-06-18 2020-07-07 腾讯科技(深圳)有限公司 Information interaction method and terminal
CN104065718A (en) * 2014-06-19 2014-09-24 深圳米唐科技有限公司 Method and system for achieving social sharing through intelligent loudspeaker box
SE1451410A1 (en) 2014-11-21 2016-05-17 Melaud Ab Earphones with sensor controlled audio output
US11327711B2 (en) 2014-12-05 2022-05-10 Microsoft Technology Licensing, Llc External visual interactions for speech-based devices
KR102324363B1 (en) * 2014-12-23 2021-11-10 티모시 디그레이 Method and system for audio sharing
EP3101612A1 (en) 2015-06-03 2016-12-07 Skullcandy, Inc. Audio devices and related methods for acquiring audio device use information
CN106468987B (en) * 2015-08-18 2020-05-12 腾讯科技(深圳)有限公司 Information processing method and client
US10289205B1 (en) * 2015-11-24 2019-05-14 Google Llc Behind the ear gesture control for a head mountable device
US10171971B2 (en) 2015-12-21 2019-01-01 Skullcandy, Inc. Electrical systems and related methods for providing smart mobile electronic device features to a user of a wearable device
US10765956B2 (en) 2016-01-07 2020-09-08 Machine Zone Inc. Named entity recognition on chat data
JP2017147652A (en) * 2016-02-18 2017-08-24 ソニーモバイルコミュニケーションズ株式会社 Information processing apparatus
TWI596952B (en) * 2016-03-21 2017-08-21 固昌通訊股份有限公司 In-ear earphone
US10474422B1 (en) 2016-04-18 2019-11-12 Look Sharp Labs, Inc. Music-based social networking multi-media application and related methods
CN110178159A (en) * 2016-10-17 2019-08-27 沐择歌公司 Audio/video wearable computer system with integrated form projector
USD813203S1 (en) * 2016-10-26 2018-03-20 Muzik LLC Hand held controller
US10599785B2 (en) 2017-05-11 2020-03-24 Waverly Labs Inc. Smart sound devices and language translation system
WO2019060353A1 (en) 2017-09-21 2019-03-28 Mz Ip Holdings, Llc System and method for translating chat messages
EP3486915B1 (en) * 2017-11-17 2023-11-08 Siemens Healthcare GmbH Medical device and method for controlling the operation of a medical device, operating device, operating system
US11074906B2 (en) 2017-12-07 2021-07-27 Hed Technologies Sarl Voice aware audio system and method
US10708769B2 (en) * 2017-12-20 2020-07-07 Bose Corporation Cloud assisted accessory pairing
US10873779B1 (en) * 2018-01-22 2020-12-22 Renew World Outreach, Inc. Wireless media server with media and media-access application delivery
TWM575942U (en) * 2018-08-02 2019-03-21 禾伸堂企業股份有限公司 Bluetooth earphone combined with antenna and touch sensor
US11481434B1 (en) 2018-11-29 2022-10-25 Look Sharp Labs, Inc. System and method for contextual data selection from electronic data files
US10812486B2 (en) 2018-12-05 2020-10-20 Bank Of America Corporation Utilizing smart data tags to track and control secure enterprise data
US11264029B2 (en) 2019-01-05 2022-03-01 Starkey Laboratories, Inc. Local artificial intelligence assistant system with ear-wearable device
US11264035B2 (en) 2019-01-05 2022-03-01 Starkey Laboratories, Inc. Audio signal processing for automatic transcription using ear-wearable device
JP7380597B2 (en) * 2019-01-10 2023-11-15 ソニーグループ株式会社 Headphones, acoustic signal processing method, and program
WO2020203425A1 (en) * 2019-04-01 2020-10-08 ソニー株式会社 Information processing device, information processing method, and program
US12028667B2 (en) * 2020-01-14 2024-07-02 Deandre Robateau System and method for interactive microphone
US11803348B1 (en) 2020-09-14 2023-10-31 Apple Inc. Electronic devices for focused listening

Family Cites Families (161)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6010216A (en) 1993-01-19 2000-01-04 Jesiek; Daniel Stephen "Hear speak" two-way voice radio communications eyeglasses
US6061064A (en) 1993-08-31 2000-05-09 Sun Microsystems, Inc. System and method for providing and using a computer user interface with a view space having discrete portions
US5815126A (en) 1993-10-22 1998-09-29 Kopin Corporation Monocular portable communication and display system
US20120105740A1 (en) 2000-06-02 2012-05-03 Oakley, Inc. Eyewear with detachable adjustable electronics module
US8482488B2 (en) 2004-12-22 2013-07-09 Oakley, Inc. Data input management system for wearable electronically enabled interface
AUPR956901A0 (en) * 2001-12-17 2002-01-24 Jayaratne, Neville Real time translator
US7065185B1 (en) 2002-06-28 2006-06-20 Bellsouth Intellectual Property Corp. Systems and methods for providing real-time conversation using disparate communication devices
DE60336499D1 (en) 2002-11-20 2011-05-05 Koninkl Philips Electronics Nv AUDIO-CONTROLLED DATA REPRESENTATION DEVICE AND METHOD
US7312699B2 (en) 2003-04-01 2007-12-25 Chornenky T Eric Ear associated machine-human interface
EP1618759A1 (en) * 2003-04-18 2006-01-25 Koninklijke Philips Electronics N.V. Personal audio system with earpiece remote controller
US20050047629A1 (en) * 2003-08-25 2005-03-03 International Business Machines Corporation System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking
US7295904B2 (en) * 2004-08-31 2007-11-13 International Business Machines Corporation Touch gesture based interface for motor vehicle
US9158975B2 (en) 2005-05-31 2015-10-13 Avigilon Fortress Corporation Video analytics for retail business process monitoring
US9031604B2 (en) 2005-06-02 2015-05-12 Broadcom Corporation Method and apparatus for enabling simultaneous VoWLAN and Bluetooth audio in small form factor handheld devices
US8331603B2 (en) * 2005-06-03 2012-12-11 Nokia Corporation Headset
GB2427733A (en) 2005-06-29 2007-01-03 Symbian Software Ltd Remote control
US9036028B2 (en) 2005-09-02 2015-05-19 Sensormatic Electronics, LLC Object tracking and alerts
US20070052672A1 (en) * 2005-09-08 2007-03-08 Swisscom Mobile Ag Communication device, system and method
CA2622760C (en) 2005-09-16 2012-12-04 Janssen Pharmaceutica N.V. Cyclopropyl amines as modulators of the histamine h3 receptor
US20070130547A1 (en) * 2005-12-01 2007-06-07 Navisense, Llc Method and system for touchless user interface control
US8334841B2 (en) * 2006-03-13 2012-12-18 Navisense Virtual user interface method and system thereof
KR20080004229A (en) * 2006-07-05 2008-01-09 엘지노텔 주식회사 System and method of remotely controlling application programs using a wireless terminal
US7280849B1 (en) 2006-07-31 2007-10-09 At & T Bls Intellectual Property, Inc. Voice activated dialing for wireless headsets
US7978091B2 (en) * 2006-08-24 2011-07-12 Navisense Method and device for a touchless interface
US8564544B2 (en) * 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US8793621B2 (en) * 2006-11-09 2014-07-29 Navisense Method and device to control touchless recognition
US9118990B2 (en) * 2007-01-06 2015-08-25 Apple Inc. Connectors designed for ease of use
US8060841B2 (en) * 2007-03-19 2011-11-15 Navisense Method and device for touchless media searching
JP2008278238A (en) 2007-04-27 2008-11-13 Toshiba Corp Reproducing device and communicating method for the reproducing device
US8855719B2 (en) 2009-05-08 2014-10-07 Kopin Corporation Wireless hands-free computing headset with detachable accessories controllable by motion, body gesture and/or vocal commands
US8224247B2 (en) 2007-05-16 2012-07-17 Texas Instruments Incorporated Controller integrated audio codec for advanced audio distribution profile audio streaming applications
US8331334B2 (en) 2007-07-20 2012-12-11 Broadcom Corporation Method and system for a handheld wireless communication device for configuring connection to and use of local and remote resources
US8825468B2 (en) 2007-07-31 2014-09-02 Kopin Corporation Mobile wireless display providing speech to speech translation and avatar simulating human attributes
US7631811B1 (en) * 2007-10-04 2009-12-15 Plantronics, Inc. Optical headset user interface
US8655004B2 (en) * 2007-10-16 2014-02-18 Apple Inc. Sports monitoring system for headphones, earbuds and/or headsets
US8072393B2 (en) 2007-11-15 2011-12-06 Symbol Technologies, Inc. User interface for a head mounted display
US8180078B2 (en) * 2007-12-13 2012-05-15 At&T Intellectual Property I, Lp Systems and methods employing multiple individual wireless earbuds for a common audio source
US8983093B2 (en) * 2008-01-14 2015-03-17 Apple Inc. Electronic device circuitry for communicating with accessories
US8099289B2 (en) 2008-02-13 2012-01-17 Sensory, Inc. Voice interface and search for electronic devices including bluetooth headsets and remote systems
EP2248271B1 (en) 2008-03-06 2011-12-07 GN Netcom A/S Headset as hub in remote control system
CN102047686B (en) 2008-04-07 2013-10-16 美国高思公司 Wireless earphone that transitions between wireless networks
US8320578B2 (en) * 2008-04-30 2012-11-27 Dp Technologies, Inc. Headset
TW200949618A (en) * 2008-05-16 2009-12-01 Kye Systems Corp Input device and the control method thereof
US8116788B2 (en) 2008-06-10 2012-02-14 Plantronics, Inc. Mobile telephony presence
US20100020998A1 (en) 2008-07-28 2010-01-28 Plantronics, Inc. Headset wearing mode based operation
US20100045928A1 (en) 2008-08-25 2010-02-25 Tri-Specs, Inc. Fashion eyewear frame that houses circuitry to effect wireless audio communication while providing extraneous background noise cancellation capability
US8957835B2 (en) 2008-09-30 2015-02-17 Apple Inc. Head-mounted display apparatus for retaining a portable electronic device with display
JP5141546B2 (en) 2008-12-26 2013-02-13 富士通モバイルコミュニケーションズ株式会社 Information processing device
US8456420B2 (en) 2008-12-31 2013-06-04 Intel Corporation Audible list traversal
US8447609B2 (en) * 2008-12-31 2013-05-21 Intel Corporation Adjustment of temporal acoustical characteristics
US20100184406A1 (en) * 2009-01-21 2010-07-22 Michael Schrader Total Integrated Messaging
KR20110131247A (en) 2009-02-27 2011-12-06 파운데이션 프로덕션, 엘엘씨 Headset-based telecommunications platform
FR2943202A1 (en) 2009-03-13 2010-09-17 St Wireless Sa METHOD OF AUDIO DATA EXCHANGE BETWEEN A MAIN UNIT AND A BLUETOOTH TYPE CONTROLLER
CN102460349A (en) * 2009-05-08 2012-05-16 寇平公司 Remote control of host application using motion and voice commands
US9740977B1 (en) 2009-05-29 2017-08-22 Videomining Corporation Method and system for recognizing the intentions of shoppers in retail aisles based on their trajectories
US8773330B2 (en) 2009-06-25 2014-07-08 The Boeing Company Method and apparatus for a virtual mission control station
US20130169514A1 (en) 2009-06-25 2013-07-04 The Boeing Company Method and apparatus for a virtual mission control station
US8265557B2 (en) 2009-10-06 2012-09-11 Lg Electronics Inc. Mobile terminal capable of being connected to audio output device using short-range communication and method of controlling the operation of the mobile terminal
KR101319264B1 (en) * 2010-01-22 2013-10-18 전자부품연구원 Method for providing UI according to multi touch pressure and electronic device using the same
US20150309316A1 (en) 2011-04-06 2015-10-29 Microsoft Technology Licensing, Llc Ar glasses with predictive control of external device based on event input
US20120194552A1 (en) 2010-02-28 2012-08-02 Osterhout Group, Inc. Ar glasses with predictive control of external device based on event input
US20110213664A1 (en) 2010-02-28 2011-09-01 Osterhout Group, Inc. Local advertising content on an interactive head-mounted eyepiece
US20130278631A1 (en) 2010-02-28 2013-10-24 Osterhout Group, Inc. 3d positioning of augmented reality information
KR101119835B1 (en) * 2010-03-25 2012-02-28 이노디지털 주식회사 Remote controller having user interface of touch pad
US9046999B1 (en) * 2010-06-08 2015-06-02 Google Inc. Dynamic input at a touch-based interface based on pressure
US9361729B2 (en) 2010-06-17 2016-06-07 Microsoft Technology Licensing, Llc Techniques to present location information for social networks using augmented reality
US20110314427A1 (en) * 2010-06-18 2011-12-22 Samsung Electronics Co., Ltd. Personalization using custom gestures
US9916006B2 (en) 2010-07-23 2018-03-13 Telepatheye Inc. Eye-wearable device user interface and method
US9977496B2 (en) 2010-07-23 2018-05-22 Telepatheye Inc. Eye-wearable device user interface and augmented reality method
US20120028682A1 (en) * 2010-07-29 2012-02-02 Chris Danne Steering wheel attached cellular phone interface device for communication with alert system
US8711656B1 (en) 2010-08-27 2014-04-29 Verifone Systems, Inc. Sonic fast-sync system and method for bluetooth
US20120089390A1 (en) 2010-08-27 2012-04-12 Smule, Inc. Pitch corrected vocal capture for telephony targets
EP2437163A1 (en) * 2010-09-09 2012-04-04 Harman Becker Automotive Systems GmbH User interface for a vehicle system
US9122307B2 (en) * 2010-09-20 2015-09-01 Kopin Corporation Advanced remote control of host application using motion and voice commands
US9316827B2 (en) * 2010-09-20 2016-04-19 Kopin Corporation LifeBoard—series of home pages for head mounted displays (HMD) that respond to head tracking
EP2432218B1 (en) * 2010-09-20 2016-04-20 EchoStar Technologies L.L.C. Methods of displaying an electronic program guide
US10013976B2 (en) 2010-09-20 2018-07-03 Kopin Corporation Context sensitive overlays in voice controlled headset computer displays
US9377862B2 (en) 2010-09-20 2016-06-28 Kopin Corporation Searchlight navigation using headtracker to reveal hidden or extra document data
US8706170B2 (en) 2010-09-20 2014-04-22 Kopin Corporation Miniature communications gateway for head mounted display
US20120265827A9 (en) 2010-10-20 2012-10-18 Sony Ericsson Mobile Communications Ab Portable electronic device and method and social network and method for sharing content information
US8677238B2 (en) 2010-10-21 2014-03-18 Sony Computer Entertainment Inc. Navigation of electronic device menu without requiring visual contact
US9348141B2 (en) 2010-10-27 2016-05-24 Microsoft Technology Licensing, Llc Low-latency fusing of virtual and real content
US9237393B2 (en) 2010-11-05 2016-01-12 Sony Corporation Headset with accelerometers to determine direction and movements of user head and method
US8184983B1 (en) 2010-11-12 2012-05-22 Google Inc. Wireless directional identification and subsequent communication between wearable electronic devices
US8177182B1 (en) * 2011-01-07 2012-05-15 Apple Inc. Wireless remote control device for a portable media device
EP2668758A1 (en) 2011-01-25 2013-12-04 Pairasight, Inc. Apparatus and method for streaming live images, audio and meta-data
JP5960796B2 (en) 2011-03-29 2016-08-02 クアルコム,インコーポレイテッド Modular mobile connected pico projector for local multi-user collaboration
US8203502B1 (en) * 2011-05-25 2012-06-19 Google Inc. Wearable heads-up display with integrated finger-tracking input sensor
JP2012253483A (en) 2011-06-01 2012-12-20 Sony Corp Image processing apparatus, image processing method, and program
US8223088B1 (en) * 2011-06-09 2012-07-17 Google Inc. Multimode input field for a head-mounted display
US20120314899A1 (en) 2011-06-13 2012-12-13 Microsoft Corporation Natural user interfaces for mobile image viewing
US20130007672A1 (en) 2011-06-28 2013-01-03 Google Inc. Methods and Systems for Correlating Head Movement with Items Displayed on a User Interface
US9024843B2 (en) * 2011-06-30 2015-05-05 Google Inc. Wearable computer with curved display and navigation tool
US8873147B1 (en) * 2011-07-20 2014-10-28 Google Inc. Chord authentication via a multi-touch interface
US20130021269A1 (en) * 2011-07-20 2013-01-24 Google Inc. Dynamic Control of an Active Input Region of a User Interface
US8217856B1 (en) * 2011-07-27 2012-07-10 Google Inc. Head-mounted display that displays a visual representation of physical interaction with an input interface located outside of the field of view
US9153195B2 (en) 2011-08-17 2015-10-06 Microsoft Technology Licensing, Llc Providing contextual personal information by a mixed reality device
KR101341727B1 (en) 2011-08-29 2013-12-16 주식회사 팬택 Apparatus and Method for Controlling 3D GUI
US9686612B2 (en) 2011-09-12 2017-06-20 Microsoft Technology Licensing, Llc Transference of time sensitive data between a wireless communication device and a computer system
US8941560B2 (en) * 2011-09-21 2015-01-27 Google Inc. Wearable computer with superimposed controls and instructions for external device
US9292082B1 (en) * 2011-11-08 2016-03-22 Google Inc. Text-entry for a computing device
US8866852B2 (en) 2011-11-28 2014-10-21 Google Inc. Method and system for input detection
CN104040631B (en) 2011-12-28 2017-04-12 英特尔公司 Multi-stream-multipoint-jack audio streaming
US9064436B1 (en) * 2012-01-06 2015-06-23 Google Inc. Text input on touch sensitive interface
US20150170418A1 (en) 2012-01-18 2015-06-18 Google Inc. Method to Provide Entry Into a Virtual Map Space Using a Mobile Device's Camera
JP5880115B2 (en) 2012-02-17 2016-03-08 ソニー株式会社 Head mounted display, head mounted display control program, and head mounted display control method
EP2817785B1 (en) * 2012-02-23 2019-05-15 Charles D. Huston System and method for creating an environment and for sharing a location based experience in an environment
US8819697B2 (en) 2012-02-29 2014-08-26 Sap Ag Managing actions that have no end events
US9035878B1 (en) * 2012-02-29 2015-05-19 Google Inc. Input system
US9075249B2 (en) 2012-03-07 2015-07-07 Google Inc. Eyeglass frame with input and output functionality
US20130246199A1 (en) * 2012-03-14 2013-09-19 Mark Carlson Point-of-transaction account feature redirection apparatuses, methods and systems
US20130241805A1 (en) 2012-03-15 2013-09-19 Google Inc. Using Convergence Angle to Select Among Different UI Elements
US8643951B1 (en) 2012-03-15 2014-02-04 Google Inc. Graphical menu and interaction therewith through a viewing window
US20130246967A1 (en) 2012-03-15 2013-09-19 Google Inc. Head-Tracked User Interaction with Graphical Interface
US8929954B2 (en) 2012-04-25 2015-01-06 Kopin Corporation Headset computer (HSC) as auxiliary display with ASR and HT input
US9823742B2 (en) 2012-05-18 2017-11-21 Microsoft Technology Licensing, Llc Interaction and management of devices using gaze detection
US9378028B2 (en) 2012-05-31 2016-06-28 Kopin Corporation Headset computer (HSC) with docking station and dual personality
US9583032B2 (en) 2012-06-05 2017-02-28 Microsoft Technology Licensing, Llc Navigating content using a physical object
US20130339859A1 (en) 2012-06-15 2013-12-19 Muzik LLC Interactive networked headphones
US20140002357A1 (en) 2012-06-28 2014-01-02 Kopin Corporation Enabling and Disabling Features of a Headset Computer Based on Real-Time Image Analysis
JP5358725B1 (en) 2012-07-02 2013-12-04 株式会社アライヘルメット Microphone holding device and open face helmet
TWI498771B (en) 2012-07-06 2015-09-01 Pixart Imaging Inc Gesture recognition system and glasses with gesture recognition function
KR101321157B1 (en) 2012-08-07 2013-10-23 한양대학교 산학협력단 Wearable display device having sliding structure
US9134793B2 (en) 2013-01-04 2015-09-15 Kopin Corporation Headset computer with head tracking input used for inertial control
US9164588B1 (en) 2013-02-05 2015-10-20 Google Inc. Wearable computing device with gesture recognition
US9301085B2 (en) 2013-02-20 2016-03-29 Kopin Corporation Computer headset with detachable 4G radio
US20140253605A1 (en) 2013-03-05 2014-09-11 John N. Border Controlling brightness of a displayed image
CN105453075A (en) * 2013-03-14 2016-03-30 映翰德盖兹有限公司 Wirelessly triggered smart media guides
US11100334B2 (en) 2013-04-19 2021-08-24 James Carey Video identification and analytical recognition system
WO2014144035A1 (en) 2013-03-15 2014-09-18 Brian Adams Ballard Method and system for representing and interacting with augmented reality content
KR102081930B1 (en) 2013-03-21 2020-02-26 엘지전자 주식회사 Display device detecting gaze location and method for controlling thereof
US20140365333A1 (en) 2013-06-07 2014-12-11 Bby Solutions, Inc. Retail store customer natural-gesture interaction with animated 3d images using sensor array
US10025378B2 (en) 2013-06-25 2018-07-17 Microsoft Technology Licensing, Llc Selecting user interface elements via position signal
GB201314984D0 (en) 2013-08-21 2013-10-02 Sony Comp Entertainment Europe Head-mountable apparatus and systems
WO2015030099A1 (en) 2013-08-30 2015-03-05 ブラザー工業株式会社 Image display device, and head-mounted display
JP6209906B2 (en) 2013-09-05 2017-10-11 セイコーエプソン株式会社 Head-mounted display device, method for controlling head-mounted display device, and image display system
US9500867B2 (en) 2013-11-15 2016-11-22 Kopin Corporation Head-tracking based selection technique for head mounted displays (HMD)
US10209955B2 (en) 2013-11-15 2019-02-19 Kopin Corporation Automatic speech recognition (ASR) feedback for head mounted displays (HMD)
US9747007B2 (en) 2013-11-19 2017-08-29 Microsoft Technology Licensing, Llc Resizing technique for display content
WO2015081334A1 (en) 2013-12-01 2015-06-04 Athey James Leighton Systems and methods for providing a virtual menu
US9311718B2 (en) 2014-01-23 2016-04-12 Microsoft Technology Licensing, Llc Automated content scrolling
US9442631B1 (en) 2014-01-27 2016-09-13 Google Inc. Methods and systems for hands-free browsing in a wearable computing device
US20150220142A1 (en) 2014-01-31 2015-08-06 Kopin Corporation Head-Tracking Based Technique for Moving On-Screen Objects on Head Mounted Displays (HMD)
US9401540B2 (en) 2014-02-11 2016-07-26 Osterhout Group, Inc. Spatial location presentation in head worn computing
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
US9477888B1 (en) 2014-03-27 2016-10-25 Amazon Technologies, Inc. Providing computer-based instructions without displays
US9619771B2 (en) 2014-04-05 2017-04-11 Parsable, Inc. Systems and methods for digital workflow and communication
US9547365B2 (en) 2014-09-15 2017-01-17 Google Inc. Managing information display
US10248192B2 (en) 2014-12-03 2019-04-02 Microsoft Technology Licensing, Llc Gaze target application launcher
GB201501510D0 (en) 2015-01-29 2015-03-18 Apical Ltd System
US9996749B2 (en) 2015-05-29 2018-06-12 Accenture Global Solutions Limited Detecting contextual trends in digital video content
US9977493B2 (en) 2015-06-17 2018-05-22 Microsoft Technology Licensing, Llc Hybrid display system
US9588593B2 (en) 2015-06-30 2017-03-07 Ariadne's Thread (Usa), Inc. Virtual reality system with control command gestures
US10101803B2 (en) 2015-08-26 2018-10-16 Google Llc Dynamic switching and merging of head, gesture and touch input in virtual reality
US20170092002A1 (en) 2015-09-30 2017-03-30 Daqri, Llc User interface for augmented reality system
US20180321493A1 (en) 2015-11-11 2018-11-08 Lg Electronics Inc. Hmd and method for controlling same
KR102524641B1 (en) 2016-01-22 2023-04-21 삼성전자주식회사 Head mounted display device and method for controlling the same
US9818126B1 (en) 2016-04-20 2017-11-14 Deep Labs Inc. Systems and methods for sensor data analysis through machine learning
US10620910B2 (en) 2016-12-23 2020-04-14 Realwear, Inc. Hands-free navigation of touch-based operating systems
US20180197218A1 (en) 2017-01-12 2018-07-12 Verizon Patent And Licensing Inc. System and method for object detection in retail environment
US10055853B1 (en) 2017-08-07 2018-08-21 Standard Cognition, Corp Subject identification and tracking using image recognition

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5880743A (en) * 1995-01-24 1999-03-09 Xerox Corporation Apparatus and method for implementing visual animation illustrating results of interactive editing operations
US5872924A (en) * 1995-04-28 1999-02-16 Hitachi, Ltd. Collaborative work support system
US6057845A (en) * 1997-11-14 2000-05-02 Sensiva, Inc. System, method, and apparatus for generation and recognizing universal commands
US6476834B1 (en) * 1999-05-28 2002-11-05 International Business Machines Corporation Dynamic creation of selectable items on surfaces
US6956562B1 (en) * 2000-05-16 2005-10-18 Palmsource, Inc. Method for controlling a handheld computer by entering commands onto a displayed feature of the handheld computer
US7030861B1 (en) * 2001-02-10 2006-04-18 Wayne Carl Westerman System and method for packing multi-touch gestures onto a hand
US20030014615A1 (en) * 2001-06-25 2003-01-16 Stefan Lynggaard Control of a unit provided with a processor
US20040145574A1 (en) * 2003-01-29 2004-07-29 Xin Zhen Li Invoking applications by scribing an indicium on a touch screen
US7004394B2 (en) * 2003-03-25 2006-02-28 Samsung Electronics Co., Ltd. Portable terminal capable of invoking program by sign command and program invoking method therefor
US20050022130A1 (en) * 2003-07-01 2005-01-27 Nokia Corporation Method and device for operating a user-input area on an electronic display device
US20050216867A1 (en) * 2004-03-23 2005-09-29 Marvit David L Selective engagement of motion detection
US20050212751A1 (en) * 2004-03-23 2005-09-29 Marvit David L Customizable gesture mappings for motion controlled handheld devices
US8448083B1 (en) * 2004-04-16 2013-05-21 Apple Inc. Gesture control of multimedia editing applications
US20070098263A1 (en) * 2005-10-17 2007-05-03 Hitachi, Ltd. Data entry apparatus and program therefor
US20070177803A1 (en) * 2006-01-30 2007-08-02 Apple Computer, Inc. Multi-touch gesture dictionary
US20080120576A1 (en) * 2006-11-22 2008-05-22 General Electric Company Methods and systems for creation of hanging protocols using graffiti-enabled devices
US20080168403A1 (en) * 2007-01-06 2008-07-10 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US20090265670A1 (en) * 2007-08-30 2009-10-22 Kim Joo Min User interface for a mobile device using a user's gesture in the proximity of an electronic device
US8677285B2 (en) * 2008-02-01 2014-03-18 Wimm Labs, Inc. User interface of a small touch sensitive display for an electronic data and communication device
US20100293462A1 (en) * 2008-05-13 2010-11-18 Apple Inc. Pushing a user interface to a remote device
US8169414B2 (en) * 2008-07-12 2012-05-01 Lim Seung E Control of electronic games via finger angle using a high dimensional touchpad (HDTP) touch user interface
US20110316797A1 (en) * 2008-10-06 2011-12-29 User Interface In Sweden Ab Method for application launch and system function
US8819597B2 (en) * 2009-04-10 2014-08-26 Google Inc. Glyph entry on computing device
US20110041102A1 (en) * 2009-08-11 2011-02-17 Jong Hwan Kim Mobile terminal and method for controlling the same
US20110102464A1 (en) * 2009-11-03 2011-05-05 Sri Venkatesh Godavari Methods for implementing multi-touch gestures on a single-touch touch surface
US20110227871A1 (en) * 2010-03-22 2011-09-22 Mattel, Inc. Electronic Device and the Input and Output of Data
US20120216152A1 (en) * 2011-02-23 2012-08-23 Google Inc. Touch gestures for remote control operations

Cited By (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD951298S1 (en) 1991-11-29 2022-05-10 Google Llc Panel of a voice interface device
US9836276B2 (en) * 2009-05-27 2017-12-05 Hon Hai Precision Industry Co., Ltd. Voice command processing method and electronic device utilizing the same
US20160283191A1 (en) * 2009-05-27 2016-09-29 Hon Hai Precision Industry Co., Ltd. Voice command processing method and electronic device utilizing the same
US10129682B2 (en) * 2012-01-06 2018-11-13 Bacch Laboratories, Inc. Method and apparatus to provide a virtualized audio file
US20160295341A1 (en) * 2012-01-06 2016-10-06 Bit Cauldron Corporation Method and apparatus for providing 3d audio
US10599831B2 (en) 2014-02-07 2020-03-24 Snowshoefood Inc. Increased security method for hardware-tool-based authentication
CN104902360A (en) * 2014-03-07 2015-09-09 美国戴豪公司 Headphones for receiving and transmitting audio signals
US11741979B1 (en) * 2014-03-27 2023-08-29 Amazon Technologies, Inc. Playback of audio content on multiple devices
US9298907B2 (en) 2014-04-01 2016-03-29 Snowshoefood, Inc. Methods for enabling real-time digital object and tangible object interactions
US9876795B2 (en) 2014-04-01 2018-01-23 Snowshoefood, Inc. Methods for enabling real-time digital object and tangible object interactions
US11064008B2 (en) 2014-05-05 2021-07-13 Usablenet Inc. Methods for facilitating a remote interface and devices thereof
WO2015171228A1 (en) * 2014-05-05 2015-11-12 Usablenet Inc. Methods for facilitating a remote interface and devices thereof
US9832644B2 (en) 2014-09-08 2017-11-28 Snowshoefood, Inc. Systems and methods for hybrid hardware authentication
US10820117B2 (en) 2014-09-24 2020-10-27 Taction Technology, Inc. Systems and methods for generating damped electromagnetically actuated planar motion for audio-frequency vibrations
US10812913B2 (en) 2014-09-24 2020-10-20 Taction Technology, Inc. Systems and methods for generating damped electromagnetically actuated planar motion for audio-frequency vibrations
US10659885B2 (en) 2014-09-24 2020-05-19 Taction Technology, Inc. Systems and methods for generating damped electromagnetically actuated planar motion for audio-frequency vibrations
US10824251B2 (en) 2014-10-10 2020-11-03 Muzik Inc. Devices and methods for sharing user interaction
US20210034176A1 (en) * 2014-10-10 2021-02-04 Muzik Inc. Devices and Methods for Sharing User Interaction
CN107210950A (en) * 2014-10-10 2017-09-26 沐择歌有限责任公司 Equipment for sharing user interaction
US10048835B2 (en) 2014-10-31 2018-08-14 Microsoft Technology Licensing, Llc User interface functionality for facilitating interaction between users and their environments
US9977573B2 (en) * 2014-10-31 2018-05-22 Microsoft Technology Licensing, Llc Facilitating interaction between users and their environments using a headset having input mechanisms
US20160124707A1 (en) * 2014-10-31 2016-05-05 Microsoft Technology Licensing, Llc Facilitating Interaction between Users and their Environments Using a Headset having Input Mechanisms
US9936273B2 (en) * 2015-01-20 2018-04-03 Taction Technology, Inc. Apparatus and methods for altering the appearance of wearable devices
US20160212515A1 (en) * 2015-01-20 2016-07-21 Taction Technology Inc. Apparatus and methods for altering the appearance of wearable devices
US20160216943A1 (en) * 2015-01-25 2016-07-28 Harman International Industries, Inc. Headphones with integral image display
CN105828230A (en) * 2015-01-25 2016-08-03 哈曼国际工业有限公司 Headphones with integral image display
US9933995B2 (en) * 2015-01-25 2018-04-03 Harman International Industries, Incorporated Headphones with integral image display
US9746930B2 (en) 2015-03-26 2017-08-29 General Electric Company Detection and usability of personal electronic devices for field engineers
US10466801B2 (en) 2015-03-26 2019-11-05 General Electric Company Detection and usability of personal electronic devices for field engineers
WO2016154590A1 (en) * 2015-03-26 2016-09-29 General Electric Company Detection and usability of personal electronic devices for field engineers
US20160313973A1 (en) * 2015-04-24 2016-10-27 Seiko Epson Corporation Display device, control method for display device, and computer program
EP3278297A4 (en) * 2015-05-22 2018-08-29 Hewlett-Packard Development Company, L.P. Media content selection
CN107567634A (en) * 2015-05-22 2018-01-09 惠普发展公司有限责任合伙企业 Media content selects
US20160346604A1 (en) * 2015-05-28 2016-12-01 Nike, Inc. Music streaming for athletic activities
US10311462B2 (en) * 2015-05-28 2019-06-04 Nike, Inc. Music streaming for athletic activities
US11601756B2 (en) 2015-08-25 2023-03-07 Apple Inc. Electronic devices with orientation sensing
US10484793B1 (en) * 2015-08-25 2019-11-19 Apple Inc. Electronic devices with orientation sensing
US11263879B2 (en) 2015-09-16 2022-03-01 Taction Technology, Inc. Tactile transducer with digital signal processing for improved fidelity
US10573139B2 (en) 2015-09-16 2020-02-25 Taction Technology, Inc. Tactile transducer with digital signal processing for improved fidelity
US10390139B2 (en) 2015-09-16 2019-08-20 Taction Technology, Inc. Apparatus and methods for audio-tactile spatialization of sound and perception of bass
US11690428B2 (en) 2015-09-30 2023-07-04 Apple Inc. Portable listening device with accelerometer
US11944172B2 (en) 2015-09-30 2024-04-02 Apple Inc. Portable listening device with sensors
EP4096239A1 (en) * 2015-09-30 2022-11-30 Apple Inc. Earbud case with capacitive sensor insert
US10412519B1 (en) * 2015-12-27 2019-09-10 Philip Scott Lyren Switching binaural sound
US20190297442A1 (en) * 2015-12-27 2019-09-26 Philip Scott Lyren Switching Binaural Sound
US20190007776A1 (en) * 2015-12-27 2019-01-03 Philip Scott Lyren Switching Binaural Sound
US20230396945A1 (en) * 2015-12-27 2023-12-07 Philip Scott Lyren Switching Binaural Sound
US10499173B2 (en) * 2015-12-27 2019-12-03 Philip Scott Lyren Switching binaural sound
US20170216675A1 (en) * 2016-02-03 2017-08-03 Disney Enterprises, Inc. Fitness-based game mechanics
US10535343B2 (en) 2016-05-10 2020-01-14 Google Llc Implementations for voice assistant on devices
US11935535B2 (en) 2016-05-10 2024-03-19 Google Llc Implementations for voice assistant on devices
US10304450B2 (en) * 2016-05-10 2019-05-28 Google Llc LED design language for visual affordance of voice user interfaces
US11922941B2 (en) 2016-05-10 2024-03-05 Google Llc Implementations for voice assistant on devices
US20170330429A1 (en) * 2016-05-10 2017-11-16 Google Inc. LED Design Language for Visual Affordance of Voice User Interfaces
US11341964B2 (en) 2016-05-10 2022-05-24 Google Llc Voice-controlled media play in smart media environment
US10332516B2 (en) 2016-05-10 2019-06-25 Google Llc Media transfer among media output devices
US11990126B2 (en) 2016-05-10 2024-05-21 Google Llc Voice-controlled media play in smart media environment
US11355116B2 (en) 2016-05-10 2022-06-07 Google Llc Implementations for voice assistant on devices
US10861461B2 (en) 2016-05-10 2020-12-08 Google Llc LED design language for visual affordance of voice user interfaces
USD927550S1 (en) 2016-05-13 2021-08-10 Google Llc Voice interface device
US11860933B2 (en) 2016-05-13 2024-01-02 Google Llc Personalized and contextualized audio briefing
US10402450B2 (en) 2016-05-13 2019-09-03 Google Llc Personalized and contextualized audio briefing
USD979602S1 (en) 2016-05-13 2023-02-28 Google Llc Panel of a voice interface device
USD885436S1 (en) 2016-05-13 2020-05-26 Google Llc Panel of a voice interface device
US20180018965A1 (en) * 2016-07-12 2018-01-18 Bose Corporation Combining Gesture and Voice User Interfaces
CN107666492A (en) * 2016-07-25 2018-02-06 中兴通讯股份有限公司 Control method, service sensor, service unit and terminal
US11678442B2 (en) 2016-10-03 2023-06-13 Google Llc Voice-activated electronic device assembly with separable base
US11252825B2 (en) 2016-10-03 2022-02-15 Google Llc Voice-activated electronic device assembly with separable base
US10973134B2 (en) 2016-10-03 2021-04-06 Google Llc Voice-activated electronic device assembly with separable base
US10535966B2 (en) 2016-10-03 2020-01-14 Google Llc Planar electrical connector for an electronic device
US10448520B2 (en) 2016-10-03 2019-10-15 Google Llc Voice-activated electronic device assembly with separable base
US10678502B2 (en) 2016-10-20 2020-06-09 Qualcomm Incorporated Systems and methods for in-ear control of remote devices
US20180146293A1 (en) * 2016-11-18 2018-05-24 Muzik, Llc Systems, methods and computer program products providing a bone conduction headband with a cross-platform application programming interface
US20180161626A1 (en) * 2016-12-12 2018-06-14 Blue Goji Llc Targeted neurogenesis stimulated by aerobic exercise with brain function-specific tasks
WO2018154327A1 (en) * 2017-02-24 2018-08-30 Guy's And St. Thomas' Nhs Foundation Trust Computer interface system and method
US11561616B2 (en) 2017-04-26 2023-01-24 Cognixion Corporation Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11402909B2 (en) 2017-04-26 2022-08-02 Cognixion Brain computer interface for augmented reality
US11977682B2 (en) 2017-04-26 2024-05-07 Cognixion Corporation Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11762467B2 (en) 2017-04-26 2023-09-19 Cognixion Corporation Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11237635B2 (en) 2017-04-26 2022-02-01 Cognixion Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio
US11039240B2 (en) 2017-07-31 2021-06-15 Bose Corporation Adaptive headphone system
US10595114B2 (en) 2017-07-31 2020-03-17 Bose Corporation Adaptive headphone system
WO2019027912A1 (en) * 2017-07-31 2019-02-07 Bose Corporation Adaptive headphone system
US11783359B2 (en) 2017-12-04 2023-10-10 Spotify Ab Audio advertising interaction with voice interactive devices
US11435892B2 (en) * 2017-12-08 2022-09-06 Spotify Ab System and method for enabling interaction with an electronic device
US10509558B2 (en) * 2017-12-08 2019-12-17 Spotify Ab System and method for enabling advertisement interaction with an electronic device
KR102462150B1 (en) * 2018-09-18 2022-11-01 선전 구딕스 테크놀로지 컴퍼니, 리미티드 Touch Assemblies, Devices and Touch Methods
KR20200133331A (en) * 2018-09-18 2020-11-27 선전 구딕스 테크놀로지 컴퍼니, 리미티드 Touch assembly, device and touch method
CN109416610A (en) * 2018-09-18 2019-03-01 深圳市汇顶科技股份有限公司 Touch control component, device and touch control method
EP3647921A4 (en) * 2018-09-18 2020-05-13 Shenzhen Goodix Technology Co., Ltd. Touch assembly, apparatus, and touch method
US11334204B2 (en) 2018-09-18 2022-05-17 Shenzhen GOODIX Technology Co., Ltd. Touch component, touch apparatus, and touch-control method
CN110475174A (en) * 2019-08-28 2019-11-19 深圳市索爱创新科技有限公司 Bluetooth headset based on the Internet of Things
US11531463B2 (en) 2020-12-18 2022-12-20 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method for determining touch instruction, electronic device and storage medium
EP3865988A3 (en) * 2020-12-18 2022-01-12 Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. Method and apparatus for processing touch instruction, electronic device, storage medium and computer program product
EP4120062A1 (en) * 2021-07-15 2023-01-18 Nxp B.V. Method and apparatus for audio streaming

Also Published As

Publication number Publication date
US20200252495A1 (en) 2020-08-06
US11924364B2 (en) 2024-03-05
WO2013188769A1 (en) 2013-12-19
US9992316B2 (en) 2018-06-05
US20180338024A1 (en) 2018-11-22
EP2862044A1 (en) 2015-04-22
US20180124224A9 (en) 2018-05-03
US10567564B2 (en) 2020-02-18
US20160269523A1 (en) 2016-09-15
US20220337692A1 (en) 2022-10-20
EP2862039A1 (en) 2015-04-22
US20160103511A1 (en) 2016-04-14
US20240187511A1 (en) 2024-06-06
EP2862039A4 (en) 2016-04-20
US20240163359A1 (en) 2024-05-16
US20130339859A1 (en) 2013-12-19
WO2013188749A1 (en) 2013-12-19
EP2862044A4 (en) 2015-12-23

Similar Documents

Publication Title
US20160103511A1 (en) Interactive input device
US20210034176A1 (en) Devices and Methods for Sharing User Interaction
US11452915B2 (en) User interfaces for workout content
US11687163B2 (en) Apparatus, system, and method for transferring data from a terminal to an electromyography (EMG) device
CN107005612B (en) Digital assistant alarm system
CN105828145B (en) Interaction method and device
CN104685470B (en) Device and method for generating a user interface from a template
CN107402687A (en) Context task shortcut
US20180121432A1 (en) Digital assistant integration with music services
CN103218387B (en) Method and apparatus for integrated management of content in a portable terminal
JP2017523534A (en) Mobile computer system having user-preferred interactive components
KR20160150421A (en) Mobile terminal and method for controlling the same
CN109189953A (en) Multimedia file selection method and device
US20210378038A1 (en) Proximity Based Personalization of a Computing Device
US20180249056A1 (en) Mobile terminal and method for controlling same
CN103488669A (en) Information processing apparatus, information processing method and program
JP2023540256A (en) Personal performance feedback to the workout community
WO2015043239A1 (en) Method and device for playing media data on a terminal
KR20170038569A (en) Mobile terminal and method for controlling the same
Hopmann Content and Context-Aware Interfaces for Smarter Media Control

Legal Events

Date Code Title Description
AS Assignment

Owner name: A&L SERVICES CORPORATION, FLORIDA

Free format text: SECURITY AGREEMENT;ASSIGNOR:MUZIK LLC;REEL/FRAME:031598/0088

Effective date: 20131018

Owner name: A&L SERVICES CORP., FLORIDA

Free format text: SECURITY AGREEMENT;ASSIGNOR:HARDI, JASON;REEL/FRAME:031598/0100

Effective date: 20131018

AS Assignment

Owner name: MUZIK LLC, FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HARDI, JASON;CAWLEY, JOHN;SIGNING DATES FROM 20140401 TO 20140530;REEL/FRAME:033025/0485

AS Assignment

Owner name: A&L SERVICES GROUP, FLORIDA

Free format text: SECURITY INTEREST;ASSIGNOR:MUZIK LLC;REEL/FRAME:034894/0689

Effective date: 20130909

Owner name: MUZIK LLC, FLORIDA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:A&L SERVICES GROUP;REEL/FRAME:034895/0044

Effective date: 20150120

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: FYRST, TIM, WASHINGTON

Free format text: SECURITY INTEREST;ASSIGNOR:MUZIK, INC.;REEL/FRAME:063801/0771

Effective date: 20230410