US20210083715A1 - System and methods for low power consumption by a wireless sensor device - Google Patents
System and methods for low power consumption by a wireless sensor device Download PDFInfo
- Publication number
- US20210083715A1 (U.S. Application No. 17/035,289)
- Authority
- US
- United States
- Prior art keywords
- sensor data
- power
- mode
- sensor
- communication
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3206—Monitoring of events, devices or parameters that trigger a change in power modality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B1/00—Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
- H04B1/38—Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
- H04B1/40—Circuits
- H04B1/403—Circuits using the same oscillator for generating both the transmitter frequency and the receiver local oscillator frequency
- H04B1/406—Circuits using the same oscillator for generating both the transmitter frequency and the receiver local oscillator frequency with more than one transmission mode, e.g. analog and digital modes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04B—TRANSMISSION
- H04B1/00—Details of transmission systems, not covered by a single one of groups H04B3/00 - H04B13/00; Details of transmission systems not characterised by the medium used for transmission
- H04B1/38—Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
- H04B1/40—Circuits
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W52/00—Power management, e.g. TPC [Transmission Power Control], power saving or power classes
- H04W52/02—Power saving arrangements
- H04W52/0209—Power saving arrangements in terminal devices
- H04W52/0251—Power saving arrangements in terminal devices using monitoring of local events, e.g. events related to user activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W72/00—Local resource management
- H04W72/04—Wireless resource allocation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3278—Power saving in modem or I/O interface
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L2015/088—Word spotting
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Definitions
- the subject matter relates to the field of device connectivity. More specifically, but not by way of limitation, the subject matter discloses techniques for reducing power consumption by wireless sensor devices.
- Wireless sensor devices perceived to have “always-on” or “always listening” interface capabilities may remain in a powered state to collect sensor data and to wirelessly transmit the collected sensor data to another device. Remaining in a powered state over long periods of time may unnecessarily drain battery power or require an electrical outlet.
- FIG. 1 is a block diagram illustrating sensor devices communicatively coupled to networks through a network access device, in accordance with various embodiments.
- FIG. 2 is a block diagram illustrating a sensor device, in accordance with embodiments.
- FIG. 3 is a flow diagram illustrating sensor data processing and communication associated with a sensor device, in accordance with embodiments.
- FIG. 4 is a block diagram illustrating a wireless headset communicatively coupled to networks, in accordance with embodiments.
- FIG. 5 is a graph diagram illustrating audio data, in accordance with an embodiment.
- FIG. 6 is a block diagram illustrating an earpiece device of a wireless headset, in accordance with embodiments.
- FIG. 7 is a block diagram illustrating a BT architecture, in accordance with embodiments.
- FIG. 8 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment.
- FIG. 9 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment.
- FIG. 10 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment.
- FIG. 11 is a flow diagram illustrating a method of powering a communication resource, in accordance with embodiments.
- FIG. 12 is a block diagram illustrating an electronic device, in accordance with various embodiments.
- Systems providing “always-on” or “always listening” interface capabilities may include multiple power domains that can each operate in one or more power consumption states.
- a power domain including communication resources (e.g., transceivers and processing systems used to execute communication protocol code) may remain in a low power consumption mode (e.g., off, hibernate, sleep, etc.) until a wireless connection is warranted.
- once wireless communication is warranted, the power domain is transitioned to a higher power consumption mode to establish the wireless connection and communicate wirelessly.
- a wireless headset embodiment described herein includes a power source interface configured to couple with a battery and a microphone to provide audio signals.
- An integrated circuit (IC) of the wireless headset includes signal processing circuitry to generate audio data based on the audio signals and a processor to operate a phrase detector (PD).
- the IC includes a power manager coupled to the PD and Bluetooth (BT) circuitry.
- the wireless headset conserves power that would otherwise be used to communicate until the wireless headset detects speech in the audio data.
- the user utters the wake-up phrase and command to the wireless headset, “Ok helper, turn on the light.”
- the power manager transitions the wireless headset from operation in a first mode (with battery power to operate the BT circuitry turned off) to operation in a second mode (with battery power to operate the BT circuitry turned on).
- the operation in the second mode includes use of the BT circuitry to establish a BT Low Energy (BLE) connection and transmit packets, including a second portion of the audio data (e.g., the command, “turn on the light.”), for speech recognition via the BLE connection as a BLE Generic Attribute Profile (GATT) server.
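The two-mode behavior described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the names `PowerManager` and `handle_audio`, and the use of string frames as a stand-in for real phrase detection, are assumptions.

```python
# Sketch: BT circuitry stays unpowered (first mode) until a wake phrase is
# detected in the audio data; the device then enters the second mode and the
# remaining audio (the command portion) is handed off for transmission.

class PowerManager:
    def __init__(self):
        self.bt_powered = False   # first mode: BT circuitry off

    def enter_second_mode(self):
        self.bt_powered = True    # second mode: BT circuitry powered on

def handle_audio(power_manager, audio_frames, wake_phrase="ok helper"):
    """Scan decoded audio frames; on the wake phrase, power the BT circuitry
    and return the remaining frames for transmission over BLE."""
    for i, frame in enumerate(audio_frames):
        if frame == wake_phrase:             # stand-in for phrase detection
            power_manager.enter_second_mode()
            return audio_frames[i + 1:]
    return []                                # no wake phrase: stay in first mode
```

For example, the utterance "Ok helper, turn on the light" would power the BT circuitry and yield the command portion for transmission, while ambient noise alone leaves the radio unpowered.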
- Embodiments described herein can reduce power consumed by IoT devices by remaining disconnected from a network until sensor data sensed by the IoT device indicates that a network connection should be established to wirelessly communicate in connection with the sensor data in furtherance of an IoT application. Compared to prior techniques that maintain network connection independent of sensor data indications, embodiments can enable the perception of “always on” or “always listening” functionality by an IoT device with lower power consumption. These and other embodiments are described in further detail herein.
- FIG. 1 is a block diagram illustrating sensor devices 102 , 108 , and 109 communicatively coupled to network(s) 114 through a network access device 104 , in accordance with various embodiments.
- Sensor devices 102 , 108 , and 109 are to sense their surrounding environments and communicate corresponding sensor data with one another and the network access device 104 using any communication protocols known in the art.
- One or more of the sensor devices 102 , 108 , and 109 may sense, without limitation, sounds, light, images, temperature, humidity, moisture, device state, bodily functions, etc. for IoT applications which may include those for the smart home, elder care, healthcare, transportation, manufacturing, agriculture, energy management, and/or environmental monitoring.
- the node 106 is to relay to the sensor device 102 , sensor data originating from the sensor devices 108 and 109 .
- the network access device 104 may process the sensor data and/or forward it to the network(s) 114 .
- the node 106 and sensor devices 102 , 108 , 109 are not coupled directly to the network(s) 114 but are coupled to one another and to the network access device 104 through a short range wireless network such as BT, Zigbee, IEEE 802.15.4, and/or a Wi-Fi peer-to-peer (p2p) network.
- each sensor device 102 , 108 , 109 , the node 106 , and the network access device 104 is a node in a mesh network for many-to-many (m:m) device communications.
- the BLE mesh topology can support a network of tens, hundreds, or thousands of devices that need to communicate with one another.
- the sensor devices 108 and 109 may be out of range of direct connection with the network access device 104 but sensor data from sensor devices 108 and 109 can still be transferred to the network access device 104 through the node 106 and the sensor device 102 . In this way, the sensor device 102 can communicate sensor data to the network access device 104 on behalf of the out of range sensor devices 108 and 109 .
- sensor data may be efficiently transferred among mesh network nodes on existing mesh network maintenance packets (e.g., sensor data may piggyback on available fields of maintenance packets), which are regularly communicated to maintain the mesh network.
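The piggybacking idea can be illustrated with a minimal sketch, under the assumption of a fixed-size maintenance payload; the packet format and sizes here are invented for illustration and are not specified by the patent.

```python
# Sketch: sensor bytes ride along in whatever space a periodic maintenance
# (heartbeat) payload leaves free, avoiding the cost of a separate packet.

MAINT_PAYLOAD_SIZE = 16  # assumed fixed maintenance-packet payload, in bytes

def build_maintenance_packet(heartbeat: bytes, sensor_data: bytes):
    """Return (packet, leftover_sensor_data). Sensor bytes fill the space
    the heartbeat leaves free; any remainder waits for the next packet."""
    free = MAINT_PAYLOAD_SIZE - len(heartbeat)
    carried, leftover = sensor_data[:free], sensor_data[free:]
    return heartbeat + carried, leftover
```

Because maintenance packets are sent anyway to keep the mesh alive, the marginal transmission cost of the carried sensor bytes is close to zero.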
- the network access device 104 is to receive sensor data from the sensor device 102 and process the sensor data itself, in support of a particular IoT application, or forward the sensor data through a wired or wireless connection to the network(s) 114 for processing.
- the network access device 104 may be a multi-network capable access point, beacon, and/or voice controlled hub (VCH).
- the network access device 104 may connect with the sensor device 102 over a BT network and connect with the network(s) 114 over an Ethernet based network.
- Network(s) 114 may include one or more types of wired and/or wireless networks for communicatively coupling the network access device 104 , the IoT application(s) 112 , and the device under control 103 to one another.
- network(s) 114 may include a local area network (LAN), wireless LAN (WLAN) (e.g., Wi-Fi, 802.11 compliant), a metropolitan area network (MAN), a wide area network (WAN), a personal area network (PAN) (e.g., BT Special Interest Group (SIG) standard or Zigbee, IEEE 802.15.4 compliant), and/or the Internet.
- IoT application(s) 112 are to use the sensor data from the sensor devices 102 , 108 , 109 in support of an IoT application.
- example IoT application(s) 112 may include, without limitation, the smart home, elder care, healthcare, transportation, manufacturing, agriculture, energy management, and/or environmental monitoring.
- IoT application(s) 112 may reside on one or more computing devices coupled to the network(s) 114 and may use or be implemented using processors, memory, circuitry, arithmetic logic, software, algorithms, and data structures to organize and process attributes of sensor data.
- IoT application(s) 112 operate to recognize patterns of sensor data and associate the recognized patterns with a corresponding meaning.
- attributes of audio data may include pitch, volume, tone, repeating or rhythmic sounds and/or language sounds such as words, phrases, and the like.
- IoT application(s) 112 include Automated Speech Recognition (ASR) technology, which is described further with respect to FIG. 5 .
- the device under control 103 may include any device with a function that can be initiated responsive to the sensor data.
- the network access device 104 controls the device under control 103 based on the results of sensor data processing (e.g., audio pattern recognition) performed by the IoT application(s) 112 .
- Example devices under control may include, without limitation, white goods, thermostats, lighting, automated blinds, automated door locks, automotive controls, windows, industrial controls and actuators.
- devices under control may include any logic, firmware, or software application run by the device under control 103 .
- an “always-on,” “always listening” sensor device uses its power supply (e.g., battery power) to establish and maintain a network connection with the network access device 104 regardless of whether that sensor device is currently sharing or even has sensor data to share with the network access device 104 or the IoT application(s) 112 .
- the sensor device 102 before the sensor device 102 uses its power source to establish and/or maintain a network connection with the network access device 104 , the sensor device 102 first evaluates whether it has sensor data that will be useful in support of an IoT application. For example, the sensor device 102 may analyze attributes of portions of sensor data for indicators (e.g., speech like sounds, presence detection) that the remainder of the sensor data should be transmitted to the network(s) ( 114 ) for use by an IoT application.
- sensor data analysis may be distributed among one or more of the sensor devices 102 , 108 , and 109 .
- the decision on whether the sensor device 102 should forward the audio data to the network access device 104 for pattern recognition processing by the IoT applications 112 may be made based on different steps of evaluation provided by the sensor device 108 (e.g., to provide speech onset detection) and the sensor device 102 (e.g., to provide wake phrase detection).
- FIG. 2 An example sensor device that uses its power source to establish and/or maintain a network connection, based on an indication that the sensor data will be useful in support of an application (e.g., an IoT application) is described with respect to FIG. 2 .
- FIG. 2 is a block diagram illustrating a sensor device 202 , in accordance with embodiments.
- the sensor device 202 is shown to include sensor(s) 222 , sensing circuitry 224 , evaluator 226 , power manager 228 , processor 230 , and communication circuitry 232 coupled to one another over a bus system 227 .
- Sensor(s) 222 are to sense attributes of a condition (e.g., a physical condition and/or a state condition) and provide a corresponding analog and/or digital signal.
- Example sensors may include transducers, image sensors, temperature sensors, humidity sensors, biometric sensors, data sensors, and the like.
- Sensing circuitry 224 is to measure an analog signal provided by sensor(s) 222 to quantify a sensed condition.
- Sensor(s) 222 or sensing circuitry 224 may include one or more analog to digital converters to convert analog signals to digital signals.
- sensor data may include an analog and/or a digital signal corresponding to the sensed condition.
- the evaluator 226 is to analyze a portion of the sensor data to determine whether the sensor data warrants further evaluation (e.g., for pattern recognition) by a remote device to support an IoT application. If the evaluator 226 determines that such further evaluation is warranted, the evaluator 226 provides a signal to the power manager 228 .
- Power manager 228 is to control power delivery to various power domains of the sensor device 202 . In embodiments, the power manager 228 does not deliver power to a power domain including the communication circuitry 232 until after the evaluator 226 determines that sensor data should be further evaluated.
- the evaluator 226 and/or the power manager 228 may be implemented by hardware (e.g., circuitry), software (e.g., instructions, firmware, code) or a combination of the two.
- the evaluator 226 and/or the power manager 228 may be code stored in a memory (not shown) and implemented through execution by the processor 230 .
- Processor 230 may include a memory system (not shown) and one or more processing devices known by those of ordinary skill in the art, such as a microprocessor or central processing unit, an application processor, a host controller, a controller, a special-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like.
- Communication circuitry 232 is to communicate with the sensor devices 102 , 108 , 109 , the node 106 , and/or the network access device 104 of FIG. 1 .
- the communication circuitry 232 includes packet processing and radio capabilities to support the wireless communication protocols discussed with respect to FIG. 1 .
- FIG. 3 is a flow diagram 300 illustrating sensor data processing and communication associated with the sensor device 202 , in accordance with embodiments.
- the operations shown in FIG. 3 can be performed by processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In various embodiments, the operations may be performed as shown and described with respect to FIGS. 1 and 2 .
- the evaluator 226 receives sensor data from the sensing circuitry 224 .
- the evaluator 226 evaluates the sensor data to determine at block 306 whether the sensor data should be further analyzed by a remote device to support an IoT application. In one embodiment, if the evaluator 226 determines that a value of a portion of the sensor data meets or exceeds a predetermined reference value, the evaluator 226 determines at block 306 that the sensor data warrants further evaluation by a remote device (e.g., the IoT application(s) 112 ) to support an IoT application. Otherwise, operation returns to block 302 .
- the evaluator 226 may determine at block 306 that the sensor data warrants the further evaluation. Otherwise, operation returns to block 302 .
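The block-306 decision above amounts to comparing a portion of the sensor data against a predetermined reference value and signaling the power manager only when the threshold is met. A minimal sketch, assuming a simple per-sample comparison (the threshold value and function names are illustrative, not from the patent):

```python
# Sketch of the evaluator's decision: signal the power manager only when a
# value in the evaluated portion meets or exceeds the reference value.

REFERENCE_VALUE = 0.6  # assumed predetermined reference value

def warrants_further_evaluation(sensor_portion, reference=REFERENCE_VALUE):
    """True when any value in the evaluated portion meets or exceeds the
    reference (e.g., speech-like energy in audio samples)."""
    return any(v >= reference for v in sensor_portion)

def evaluate(sensor_portion, signal_power_manager):
    if warrants_further_evaluation(sensor_portion):
        signal_power_manager()   # power up the communication circuitry
        return True
    return False                 # stay in low-power mode; keep sensing
```

Until the threshold is met, the communication power domain never receives the signal and remains unpowered.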
- the evaluator 226 determines that further analysis of the sensor data should be provided and provides a signal to the power manager 228 .
- the power manager 228 switches the communication circuitry 232 from operating in a first mode at block 310 to operating in a second mode at block 312 .
- the first mode is a lower power consumption mode than the second mode.
- the power manager 228 may provide little to no power to the communication circuitry 232 for packet processing and radio (e.g., transceiver) operation.
- the power manager 228 provides enough power to the communication circuitry 232 to transmit the sensor data to the network access device 104 at block 314 .
- the transmission at block 314 may be preceded by establishing a network connection and followed by maintaining the network connection to complete additional communications related to the sensor data.
- the network access device 104 transmits the sensor data (over a second network) to the IoT application 112 for processing or analysis. Alternatively, the network access device 104 may avoid the further transmission by providing the analysis of the sensor data itself.
- the IoT application 112 processes or analyzes the sensor data and at block 320 , transmits the results of its sensor data analysis to the device under control 103 and/or back to the network access device 104 .
- the network access device 104 transmits the results associated with the processed sensor data to a device under control 103 and/or to the communication circuitry 232 of the sensor device 102 , which receives the response at block 324 . Particular examples of the embodiments described in FIGS. 1 and 2 are discussed with respect to the remainder of the figures.
- FIG. 4 is a block diagram 400 illustrating a wireless headset 412 communicatively coupled to network(s) 114 , in accordance with embodiments.
- the headset 412 , the light bulb 403 , mobile device 404 , voice controlled hub (VCH) 406 , access point 408 , cell tower 410 , cloud ASR 416 and the network(s) 114 may be communicatively coupled to one another. Any communication networks known in the art may be used to communicate among the various nodes.
- the headset 412 couples to the mobile device 404 and/or the VCH 406 via BT network protocol.
- the mobile device 404 and the VCH 406 may couple to the light bulb 403 via the BT network protocol and couple to the access point 408 via Wi-Fi network protocol.
- the headset 412 is coupled to the mobile device 404 in a point-to-point configuration available on BT Basic Rate/Enhanced Data Rate (BR/EDR) (e.g., classic BT) for audio streaming and BLE for data transfer.
- the mobile device 404 may also be coupled to the cell tower 410 via cellular communication protocols (e.g., Long-Term Evolution (LTE), 5G).
- the access point 408 may be coupled to larger Ethernet based networks 114 via wired connection, while the network(s) 114 may couple to the cell tower 410 and the cloud ASR 416 .
- the headset 412 is shown to include two earpieces each including a speaker 430 , a microphone 432 , a battery 434 , and a touch interface 436 . Each feature described with respect to the wireless headset 412 may be included in each earpiece of the wireless headset 412 .
- the headset 412 is to be utilized by a user 401 .
- the user 401 may use the wireless headset 412 to listen to audio played back by the speakers 430 of the wireless headset 412 after the audio has been transmitted to the wireless headset 412 from the mobile device 404 or the VCH 406 .
- the user 401 may also use the microphone 432 and the speakers 430 of the wireless headset 412 to convey sounds for voice calls made by the mobile device 404 .
- the user 401 may speak a voice command or query into the microphone 432 of the headset 412 .
- the voice command or query comprises sounds 402 sensed by the microphone (e.g., audio data).
- the headset 412 may evaluate some of the audio data to determine whether it includes a voice command or query that should be forwarded for interpretation to the mobile device 404 and/or to the cloud ASR 416 . If not, battery power to establish or maintain a network connection need not be consumed. If the audio data should be forwarded for interpretation, the headset 412 will use battery power to establish a BT (e.g., a BLE) connection and transfer the audio data to the mobile device 404 and/or the VCH 406 .
- touch interface 436 is a two-dimensional interface that uses a sensor array to detect presence of a touch and/or fingerprints proximate to the surface of the sensor array.
- the user 401 may use the touch interface 436 to control functionality (e.g., on/off, volume control, answer call, end call, etc.) or to gain access to or control the wireless headset 412 , the mobile device 404 , or a device that is communicatively coupled to the mobile device 404 .
- the battery 434 on each earpiece of the wireless headset 412 is the power source for the circuitry on each earpiece and may be recharged using a suitable charging device.
- the mobile device 404 may include any portable wireless communication device.
- the mobile device 404 is a battery powered smart phone.
- the VCH 406 may include any portable or fixed wireless communication device.
- the mobile device 404 and the VCH 406 may be equipped with communication systems capable of communicating using BT communication protocols, Wi-Fi communication protocols, and/or cellular communication protocols.
- the mobile device 404 and the VCH 406 may each include one or more microphones, speakers, and speech detectors to interpret speech and/or enable network transmission of the audio data for speech recognition. Both the mobile device 404 and the VCH 406 may facilitate a response to the voice commands or queries, and for example, may transmit a signal to the light bulb 403 to turn it on or off in response to an interpreted voice command.
- the mobile device 404 and/or the VCH 406 may each be capable of determining an answer to the query and responding by playing back the answer through its own speakers or providing the answer to the speaker 430 of the headset 412 for playback.
- the access point 408 may be a fixed wireless and wired communication device.
- the access point 408 includes communication systems to communicate wirelessly (e.g., via Wi-Fi) with the VCH 406 and mobile device 404 and through a wired medium (e.g., Ethernet LAN) with the network(s) 114 .
- Cell tower 410 is to facilitate voice and data communication between the mobile device 404 and other mobile devices (not shown) and may also be coupled to the network(s) 114 .
- Cloud ASR 416 is to identify predetermined audio patterns and associate them with one another (e.g., using a data structure) and/or with corresponding meaning. Patterns recognizable by Cloud ASR 416 may facilitate, for example and not limitation, music recognition, song recognition, voice recognition, image recognition, and speech recognition, or any other sensed pattern. In embodiments, Cloud ASR 416 may interpret the audio data and provide its results to the mobile device 404 , which may act on the results and/or forward the results back to the headset 412 .
- FIG. 5 is a graph diagram illustrating audio data 500 , in accordance with an embodiment.
- the headset 412 may receive sounds through its microphone 432 and evaluate the audio data 500 to determine whether it should be forwarded to a remote device for speech recognition.
- the audio data 500 is shown to include ambient noise 502 (e.g., background noise), speech onset 504 , a wake phrase 506 , and a query or command 508 .
- the ambient noise 502 is audio data that corresponds to background sounds in the environment.
- the speech onset 504 , the wake phrase 506 , and the query or command 508 are portions of the audio data 500 that correspond to both the sound waves 402 produced by the user (e.g., the speech to be recognized) and the ambient noise 502 .
- Speech onset 504 is the beginning of speech in the audio data 500 and is shown to be a beginning portion or subset of the wake phrase 506 .
- the wake phrase 506 is a predetermined phrase uttered by a user (e.g., “ok phone”). After having uttered the wake phrase 506 , the user utters the query or command 508 (e.g., “turn on the light”) to be acted upon (e.g., by the bulb 403 of FIG. 4 ).
- the headset 412 may only attempt detection of the wake phrase 506 if the headset 412 has already detected speech onset 504 . Similarly, the headset 412 may only consume power to establish a network connection with the mobile device 404 for subsequent speech recognition when the headset 412 has detected an indication (e.g., speech onset 504 , the wake phrase 506 or touch input) that the audio data is a command or query 508 .
- In previous solutions, the headset constantly maintains the network connection even when there is no indication that sensed data should be transmitted onto the network for interpretation. The continuous maintenance of a network connection can involve significant power consumption, which is especially impactful in a battery powered audio processing device.
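The staged gating described above (speech onset gates phrase detection, which in turn gates powering the radio) can be sketched as simple decision logic. This is a minimal illustration; the event names are hypothetical, not from the specification:

```python
def radio_power_decision(events):
    """Walk through sensed events in order; power the radio only after
    speech onset followed by a wake phrase, or after a touch input.
    Otherwise the communication resources stay powered off."""
    onset_seen = False
    for ev in events:
        if ev == "speech_onset":
            onset_seen = True
        elif ev == "wake_phrase" and onset_seen:
            return "power_up"   # indication that audio is a command/query
        elif ev == "touch":
            return "power_up"   # touch input also gates the radio on
    return "stay_off"           # no indication: do not consume radio power
```

A wake phrase with no preceding onset is ignored, mirroring the text's point that phrase detection only runs after onset detection.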
- FIG. 6 is a block diagram illustrating an earpiece device 600 of the wireless headset 412 , in accordance with embodiments.
- the functional blocks of the example earpiece device 600 include a microphone 640 , audio signal processor 642 , sensor array 644 , capacitance sensor 646 , power source interface 664 , battery 663 , and Bluetooth communication IC 660 .
- the Bluetooth communication IC 660 is shown to include speech detector 662 , processor 672 , memory 674 , evaluator 670 , power manager 675 , BT/BLE resources circuitry 665 , and transceiver 668 .
- Each functional block may be coupled to bus system 601 (e.g., I2C, I2S, SPI) and be implemented using hardware (e.g., circuitry), instructions (e.g., software and/or firmware), or a combination of hardware and instructions.
- Although this embodiment shows a set of functional blocks within a BT communication IC 660 , any combination of functional blocks could be implemented on a single integrated circuit substrate or in a single device package without departing from the claimed subject matter.
- the functional blocks of the earpiece device 600 are distributed among multiple integrated circuit devices, device packages, or other circuitry.
- the microphone 640 is to receive sound waves from its surrounding environment and includes a transducer or other mechanisms (e.g., including a diaphragm) to convert the energy of the sound waves into electronic or digital signals (e.g., audio data).
- the microphone 640 may include an array of microphones.
- the microphone 640 may be a digital microphone.
- the microphone 640 may include threshold/hysteresis settings for activity detection and measurement and/or processing logic to determine whether a sound wave received by the microphone 640 meets or exceeds an activation threshold that gates whether corresponding audio data should be passed on to the speech detector 662 (discussed below) for processing.
- the threshold level of activity may be an energy level, an amplitude, a frequency, or any other attribute of a sound wave.
- the microphone 640 may be coupled to a memory (not shown) that stores the activation threshold, which may be dynamically reprogrammable.
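As a rough illustration of an activation threshold with hysteresis, the following sketch gates audio frames on an energy level; the threshold values are invented for illustration and, as noted above, could be stored in reprogrammable memory:

```python
class ActivationGate:
    """Gate audio frames on an energy threshold with hysteresis: activity
    starts only above on_threshold and ends only below off_threshold,
    so a level hovering near one threshold does not toggle the detector."""

    def __init__(self, on_threshold=0.5, off_threshold=0.3):
        self.on_threshold = on_threshold    # level that wakes processing
        self.off_threshold = off_threshold  # lower level that ends activity
        self.active = False

    def process(self, energy):
        if not self.active and energy >= self.on_threshold:
            self.active = True   # pass audio on to the speech detector
        elif self.active and energy < self.off_threshold:
            self.active = False  # drop back to idle, saving power
        return self.active
```

The gap between the two thresholds is what prevents rapid on/off chatter around a single threshold.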
- Audio signal processor 642 includes circuitry to process and analyze the audio data received from the microphone 640 .
- the audio signal processor 642 digitizes (e.g., using an analog to digital converter (ADC)) and encodes the electronic audio signals. Once digitized, the audio signal processor 642 may provide signal processing (e.g., demodulation, mixing, filtering) to analyze or manipulate attributes of the audio data (e.g., phase, wavelength, frequency).
- the audio signal processor 642 includes a pulse density modulator (PDM) front end that is connected to the microphone 640 .
- In the PDM front end, the PDM generates a pulse density modulated bitstream based on an electronic signal from the microphone 640 .
- the PDM provides a clock signal to the microphone 640 that determines the initial sampling rate, then receives a data signal from the microphone 640 representing audio captured from the environment. From the data signal, the PDM generates a PDM bitstream and may provide the bitstream to a decimator, which can generate the audio data provided to the bus system 601 .
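A crude stand-in for the decimator's behavior: averaging each block of 1-bit PDM samples yields lower-rate PCM-like values. A real decimator applies a proper low-pass filter before downsampling; this sketch only conveys the rate-reduction idea:

```python
def decimate_pdm(bitstream, factor):
    """Convert a 1-bit PDM stream to lower-rate samples by averaging each
    block of `factor` bits. Each output value approximates the signal
    level over that block (1.0 = all ones, 0.0 = all zeros)."""
    pcm = []
    for i in range(0, len(bitstream) - factor + 1, factor):
        block = bitstream[i:i + factor]
        pcm.append(sum(block) / factor)   # block average stands in for filtering
    return pcm
```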
- the audio signal processor 642 includes an auxiliary analog to digital converter (AUX ADC) front end to provide the audio data.
- an analog to digital converter converts an analog signal from the microphone 640 to a digital audio signal.
- the digital audio signal may be provided to a decimator to generate the audio data provided to the bus system 601 .
- the earpiece 600 may include a sensor array 644 and a capacitance sensor 646 to implement the touch interface 436 of FIG. 4 .
- the sensor array 644 includes sensor electrodes that are disposed as a two-dimensional matrix (also referred to as an XY matrix).
- the sensor array 644 may be coupled to pins of the capacitance sensor 646 via one or more analog buses.
- the capacitance sensor 646 may include circuitry to excite electrodes of the sensor array 644 and conversion circuitry to convert responsive analog signals into a measured capacitance value.
- the capacitance sensor 646 may also include a counter or timer circuitry to measure the output of the conversion circuitry.
- the capacitance sensor 646 may evaluate other measurements to determine the user interaction. For example, when the capacitance sensor 646 includes a sigma-delta modulator, the capacitance sensor 646 evaluates the ratio of pulse widths of the output instead of whether the raw counts are over or under a certain threshold.
- the capacitance sensor 646 may include or be coupled to software components to convert the count value (e.g., capacitance value) into a sensor electrode detection decision (also referred to as a switch detection decision) or relative magnitude. Based on these count values, the processing logic associated with the capacitance sensor 646 may determine the state of the sensor array 644 , such as whether an object (e.g., a finger) is detected on or in proximity to the sensor array 644 (e.g., determining the presence of the finger), tracking the motion of the object, or detecting features (e.g., fingerprint ridges and valleys).
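The count-to-decision step might be sketched as follows; the baseline-difference thresholds are illustrative, not values from the specification:

```python
def touch_state(raw_count, baseline, finger_threshold=30, noise_threshold=10):
    """Classify a capacitance count relative to the untouched baseline.
    A large increase over baseline indicates a finger on the surface,
    a moderate increase indicates proximity, and small deviations are
    treated as noise."""
    diff = raw_count - baseline
    if diff >= finger_threshold:
        return "touch"        # presence detected on the sensor array
    if diff >= noise_threshold:
        return "proximity"    # object near, but not touching
    return "no_touch"
```

In a real controller the baseline itself is tracked adaptively to follow temperature and humidity drift.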
- the capacitance sensor 646 may send the raw data, partially-processed data, and/or the data indicating the state of the sensor array 644 to the evaluator 670 to evaluate whether that data should be remotely processed to support an IoT application (e.g., remote fingerprint authentication and/or gesture or pattern recognition).
- Speech detector 662 is to detect attributes of speech in the audio data 500 and may include a speech onset detector (SOD) and/or a phrase detector (PD) stored in the memory 674 and operated by the processor 672 .
- the SOD is to determine whether audio data received from the audio signal processor 642 includes the speech onset 504 .
- the microphone 640 wakes up the SOD to execute a speech onset detection algorithm in order to determine whether speech-like signals are present in the audio data.
- the SOD may use any of the speech onset detection algorithms or techniques known to those having ordinary skill in the art.
- audio data with a reduced sample rate (e.g., 2-4 kHz) is sufficient for detecting speech onset (or other sound onset event) while allowing the SOD to be clocked at a lower frequency, thus reducing the power consumption and complexity of the SOD.
- Upon detecting a speech onset event, the SOD asserts a status signal on the bus 601 to wake the PD from a low power consumption state (e.g., sleep state) to a higher power consumption state (e.g., active state) to perform phrase detection.
- the PD is to determine whether the audio data 500 indicated by the SOD includes the predetermined phrase 506 (e.g., a wake-phrase).
- the PD may process pattern recognition algorithms and perform comparisons to expected patterns to determine whether the wake-up word or phrase 506 has been spoken. In embodiments, the PD makes this determination based on the audio data 500 that has been recorded in the memory 674 (e.g., a buffer).
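As a toy stand-in for the PD's comparison against an expected pattern, the following sketch scans the buffered audio (represented here as already-recognized tokens, an assumption made purely for illustration) for the wake phrase:

```python
def detect_wake_phrase(buffered_tokens, template=("ok", "helper")):
    """Scan a buffer of recognized tokens for the wake-phrase template.
    Returns the index where the phrase begins, or -1 if absent. A real
    PD compares acoustic features, not text tokens."""
    n = len(template)
    for i in range(len(buffered_tokens) - n + 1):
        if tuple(buffered_tokens[i:i + n]) == template:
            return i   # wake phrase found in the buffer
    return -1          # no match: PD can return to its sleep state
```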
- Although the speech detector 662 is shown to be implemented on the BT communication IC 660 , it may reside on and be implemented by other hardware of the earpiece 600 .
- the memory 674 may include, for example, random access memory (RAM) and program flash.
- RAM may be static RAM (SRAM)
- program flash may be a non-volatile storage, which may be used to store firmware (e.g., control algorithms executable by processor 672 to implement operations described herein).
- the processor 672 may be the same or similar to the processor 230 described with respect to FIG. 2 .
- the memory 674 may include instructions or code that when executed perform the methods described herein. Portions of the memory 674 may be dynamically allocated to provide caching, buffering, and/or other memory based functionalities.
- the evaluator 670 includes decision logic to evaluate whether the output of the speech detector 662 or the capacitance sensor 646 indicates that the audio data or the touch data, as the case may be, should be processed remotely in support of an IoT application.
- the evaluator 670 is implemented when the processor 672 executes decision logic code stored in the memory 674 .
- decision logic may be implemented using hardware.
- the evaluator 670 is shown as a separate functional block, the evaluator 670 may be implemented entirely or in combination with other functional blocks (e.g., the speech detector 662 or the capacitance sensor 646 ) of the earpiece 600 .
- the evaluator 670 may signal the power manager 675 to power up communication resources when the PD detects the predetermined wake-phrase “Ok, helper” or when the capacitance sensor 646 detects the presence of a finger or fingerprint proximate to the sensor array 644 .
- Power manager 675 is to control power delivery for operation of the BT architecture including, without limitation, the BT/BLE resources 665 and the transceiver 668 .
- the power manager 675 provides power management features that can be invoked by software through power management registers in memory or packet-handling components of the BT architecture.
- the power manager 675 may automatically adjust power delivery based on user activity. For example, the power manager 675 may generate power-down and power-up control signals for circuitry that executes (e.g., software code) or operates the BT architecture such as the processor 672 , the memory 674 , and a transmit path, receive path, phase locked loop (PLL), and power amplifier to the transceiver 668 .
- the power manager 675 may support various BT power consumption modes in compliance with BT standards. For example, the power manager 675 may transition the IC 660 to the next lower power state after a programmable period of user inactivity. When user activity resumes, the power manager 675 may immediately return the IC 660 to an active mode.
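The inactivity-driven stepping between power modes might look like the following sketch; the mode ordering and timeout value are illustrative rather than taken from the BT standards:

```python
# Illustrative ordering from highest to lowest power consumption.
BT_POWER_MODES = ["active", "sniff", "hold", "park"]

def next_mode(current, idle_ms, step_timeout_ms=500, user_active=False):
    """Step to the next lower-power mode after a programmable idle period;
    return to active immediately when user activity resumes."""
    if user_active:
        return "active"                    # resume full power at once
    i = BT_POWER_MODES.index(current)
    if idle_ms >= step_timeout_ms and i < len(BT_POWER_MODES) - 1:
        return BT_POWER_MODES[i + 1]       # descend one power state
    return current                         # stay put until timeout elapses
```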
- the earpiece 600 may use compression and decompression software stored in a memory to reduce the amount of audio data to be transmitted and/or received over network (e.g., BT network) connections.
- Raw pulse code modulation (PCM) samples from the microphone may also be transmitted over BT connections.
- the processor 672 may execute the software to code the audio data using the Opus audio format, or any other appropriate codec known by one having ordinary skill in the art.
- audio formatting software may be executed by a microprocessor (not shown) of the earpiece 600 that is not disposed on the BT communication IC 660 .
- circuitry 665 may include BT communication circuitry and code to implement portions of the BT architecture described with respect to FIG. 7 .
- the BT/BLE resources 665 may implement either or both of the BLE and BR/EDR (e.g., classic BT) systems.
- the transceiver 668 is to couple with an antenna 661 and facilitates transmitting and receiving of radio frequency (RF) signals.
- the transceiver 668 filters and mixes received RF signals with a local oscillator signal to down-convert the desired frequency (e.g., or channel) to an intermediate frequency.
- the down-conversion process provides the intermediate frequency as complex I and Q signals which are sampled and digitized by an analog to digital converter of the transceiver 668 .
- a phase estimator of the transceiver 668 may perform calculations to estimate the phase of the RF signal for the time it was received at the antenna using the I and Q values and forward the phase value to a demodulator of the transceiver, which forwards a decoded sequence of 1s and 0s to a signal processor for further processing (e.g., packet processing).
- When operating as a transmitter, the transceiver 668 generally performs the operations in reverse, receiving a sequence of 1s and 0s from the signal processor, modulating the signal, and outputting an analog signal for transmission by the antenna 661 .
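In its simplest form, the phase-estimation step described above reduces to the arctangent of the digitized Q and I components of the down-converted signal:

```python
import math

def estimate_phase(i_sample, q_sample):
    """Estimate the phase (in radians) of a received RF sample from its
    digitized in-phase (I) and quadrature (Q) components. atan2 keeps
    the correct quadrant, which a plain atan(Q/I) would lose."""
    return math.atan2(q_sample, i_sample)
```

A practical phase estimator would also track phase across consecutive samples to recover the modulated symbols.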
- the Bluetooth communication IC 660 may wirelessly communicate with the mobile device 404 and/or the VCH 406 via the antenna 661 over a BLE link 680 (e.g., to communicate encoded speech data) or a classic BT link 682 (e.g., using the BT Advanced Audio Distribution Profile (A2DP) to encode streamed audio).
- the transmission energy and transmission range of the earpiece 600 can be adjusted (e.g., dynamically) to provide acceptable performance under the particular noise conditions, interference conditions, and considering the proximity of connected devices.
- FIG. 7 is a block diagram illustrating a BT architecture 700 , in accordance with embodiments.
- the application block 702 is the user application running on a device that interfaces with the BT protocol stack 703 of the device.
- the host 704 includes the upper layers of the BT protocol stack 703 and the controller 706 includes the lower layers.
- the BT/BLE resources 665 and/or the transceiver 668 implement portions of the host 704 and the controller 706 of BT architecture 700 using processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software code), and/or any combination thereof.
- the BT protocol stack 703 is used to implement functionality that is compatible with BT specifications including BLE.
- the generic access profile (GAP) 708 defines the generic procedures related to discovery of BT devices and link management aspects of connecting to BT devices.
- the GAP 708 may define the broadcaster role, the observer role, the peripheral role, and/or the central role of the applicable BT specifications.
- a profile describes how devices connect to each other to find or use services and the general expected behavior of a device.
- a service is a collection of data entities called characteristics.
- a service is used to define a function in a profile.
- a service may also define its relationship to other services.
- a service is assigned a universally unique identifier (UUID).
- a characteristic includes a value and descriptor that describes a characteristic value. It is an attribute type for a specific piece of information within a service. Like a service, each characteristic is designated with a UUID.
- the generic attribute profile (GATT) 710 defines a generic service framework using the attribute protocol (ATT) layer 722 .
- This framework defines the procedures and formats of services and their characteristics. It defines the procedures for service, characteristics, and descriptor discovery, reading, writing, notifying, and indicating characteristics, as well as configuring the broadcast of characteristics.
- a GATT client is a device that wants data. It initiates commands and requests towards a GATT server. The GATT client can receive responses, indications, and notification data sent by the GATT server.
- the GATT server is a device that has the data and accepts incoming commands and requests from the GATT client and sends responses, indications, and notifications to a GATT client.
- BT/BLE resources 665 may support both the GATT client and GATT server roles simultaneously.
- the ATT layer 722 can define a client/server architecture that allows a GATT server to expose a set of attributes and their associated values to a GATT client.
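The service/characteristic hierarchy described above can be modeled directly. The example below uses the Bluetooth SIG 16-bit UUIDs commonly assigned to the Battery Service (0x180F) and Battery Level characteristic (0x2A19); the value shown is illustrative:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Characteristic:
    uuid: str        # each characteristic is designated with a UUID
    value: bytes     # the characteristic's value
    descriptor: str  # describes the characteristic value

@dataclass
class Service:
    uuid: str        # each service is assigned a UUID
    characteristics: List[Characteristic] = field(default_factory=list)

# A GATT server could expose this battery service to a GATT client.
battery = Service("180F", [
    Characteristic("2A19", bytes([76]), "Battery Level"),  # 76 percent
])
```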
- the security manager protocol (SMP) 712 can define the procedures and behavior to manage pairing (e.g., pass key and out of band bonding), authentication (e.g., key generation for device identity resolution), and encryption between devices.
- the logical link control adaptation protocol (L2CAP) 714 may provide a connectionless data channel and channel multiplexing for the ATT 722 and SMP 712 layers and for its own layer. L2CAP 714 may provide segmentation and reassembly of packets as well as flow control between two L2CAP 714 entities (e.g., for transferring large chunks of data).
- Host controller interface (HCI) 716 can implement a command, event, and data interface to allow link layer (LL) 718 access from upper layers such as GAP 708 , L2CAP 714 , and SMP 712 .
- the LL 718 may include link layer circuitry that directly interfaces with the physical layer (PHY) 720 and manages the physical connections between devices. It supports LL 718 states including advertising, scanning, initiating, and connecting (e.g., master and slave).
- the LL 718 may implement link control procedures including encryption, connection update, channel update, and ping.
- the PHY 720 includes the analog communications circuitry used for modulating and demodulating analog signals and transforming them into digital symbols.
- BLE can communicate over 40 channels from 2.4000 GHz to 2.4835 GHz (e.g., using the transceiver 668 ).
- 37 of these channels may be used for connection data and the last three channels (37, 38, and 39) may be used as advertising channels to set up connections and send broadcast data.
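The channel plan maps cleanly to center frequencies: the 40 RF channels are spaced 2 MHz apart starting at 2402 MHz, with the three advertising channels (link-layer indices 37, 38, and 39) placed at RF channels 0, 12, and 39:

```python
def rf_channel_frequency_mhz(rf_channel):
    """Center frequency of a BLE RF channel: 40 channels, 2 MHz apart,
    from 2402 MHz (RF channel 0) to 2480 MHz (RF channel 39)."""
    if not 0 <= rf_channel <= 39:
        raise ValueError("BLE defines RF channels 0-39")
    return 2402 + 2 * rf_channel

# Advertising channels sit at RF channels 0, 12, and 39, spread across
# the band to reduce overlap with commonly used Wi-Fi channels.
ADVERTISING_RF_CHANNELS = (0, 12, 39)
```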
- FIGS. 8-10 discuss techniques for conserving power in BT headsets with speech detection in various example use cases.
- FIG. 8 is an interactive flow diagram 800 illustrating operations of the headset 412 and the mobile device 404 , in accordance with an embodiment.
- the headset 412 is in the user's ears but there is no active audio stream or voice call being communicated.
- In previous solutions, headsets consume power (e.g., >100 μA) to maintain a sniff connection with the mobile device even though there is no active session.
- In embodiments, a sniff connection is not maintained; power to the transceiver 668 and/or memory 674 is shut off and the BT/BLE resources 665 are shut off when there is no active session.
- the speech detector 662 receives the audio data corresponding to the wake-up phrase and query “OK helper, what is the current temperature?” If the speech detector 662 recognizes the wake-up phrase, “OK helper” it signals the power manager 675 , which in turn connects power to BT communication resources to power up the transceiver 668 and/or memory 674 at block 804 and to start processing BT/BLE resources 665 at block 806 .
- touch input rather than voice input may indicate to the power manager 675 to power the BT communication resources.
- the BT/BLE resources 665 establish a BLE connection so that the remainder of the audio data corresponding to the query, “what is the current temperature?” can be transmitted in packets to the mobile device 404 over the BLE connection at 810 .
- the mobile device 404 is shown to transmit a response to the query, “The current temperature is 76 degrees Fahrenheit.”
- the power manager 675 may stop processing the BT/BLE resources 665 at block 814 and power off transceiver 668 and/or memory 674 at block 816 to reduce power consumption by BT communication resources.
- FIG. 9 is an interactive flow diagram 900 illustrating operations of the headset 412 and the mobile device 404 , in accordance with an embodiment.
- the headset 412 is in the user's ears but there is no active audio stream or voice call being communicated.
- In embodiments, a sniff connection is not maintained; power to the transceiver 668 and/or memory 674 is shut off and the BT/BLE resources 665 are shut off when there is no active session.
- the mobile device 404 receives an incoming request to connect a call and provides a notification to the user (e.g., ringing, vibration, LED). Having perceived the notification of the request to connect, the user may provide an indication to connect the call.
- the speech detector 662 receives the audio data corresponding to the wake-up phrase and command “OK helper, connect the call.” If the speech detector 662 recognizes the wake-up phrase, “OK helper” it signals the power manager 675 , which in turn connects power to BT communication resources to power up the transceiver 668 and/or memory 674 at block 906 and to start processing BT/BLE resources 665 at block 908 .
- touch input rather than voice input may indicate to the power manager 675 to power the BT communication resources.
- the BT/BLE resources 665 establish a classic BT or BLE connection so that the remainder of the audio data corresponding to the command, “connect the call,” can be transmitted in packets at 912 to the mobile device 404 for interpretation.
- the speech detector 662 may interpret both the wake-up phrase and the command, in which case the headset itself will send a direct command at 912 to connect the voice call. If the classic BT connection has not already been established, it is then established prior to 914 where the voice call is carried out over the classic BT connection.
- FIG. 10 is an interactive flow diagram 1000 illustrating operations of the headset 412 and the mobile device 404 , in accordance with an embodiment.
- the headset 412 is in the user's ears and at 1002 , there is an active audio stream being transmitted over a classic BT A2DP connection.
- BLE resources of the BT/BLE resources 665 are shut off during streaming.
- the user may provide an indication (e.g., touch or voice command) to do so.
- the speech detector 662 receives the audio data corresponding to the wake-up phrase and command “OK helper, volume down.” If the speech detector 662 recognizes the wake-up phrase, “OK helper” it signals the power manager 675 , which in turn connects power to BT communication resources to start processing BLE resources of the BT/BLE resources 665 at block 1006 . At 1008 , the BT/BLE resources 665 establish a BLE connection so that the remainder of the audio data corresponding to the command, “volume down,” can be transmitted in packets at 1010 to the mobile device 404 for interpretation and implementation.
- the speech detector 662 may interpret both the wake-up phrase and the command, in which case the headset 412 itself will send a direct command at 1010 to turn down the volume.
- the power manager 675 may stop processing the BLE resources at block 1012 to reduce power consumption by BT communication resources.
- the headset 412 may control volume locally on the headset 412 , avoiding power consumption to establish the BLE connection.
- FIG. 11 is a flow diagram illustrating a method 1100 of powering a communication resource, in accordance with embodiments.
- the operations shown in FIG. 11 can be performed by processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In various embodiments, the operations may be performed as shown and described with respect to FIGS. 1-10 .
- the method 1100 includes operating a wireless device (e.g., the earpiece 600 ) in a first mode with power to operate a communication resource (e.g., the BT/BLE resources 665 ) of the wireless device turned off.
- the method 1100 includes determining (e.g., by the speech detector 662 ) whether a voice attribute is detected in the audio data.
- the audio data may be provided by the audio signal processor 642 in response to audio input from the microphone 640 .
- the voice attribute may be speech like sounds detected by a SOD of the speech detector 662 or a phrase (e.g., a wake-phrase portion of the audio data) detected by a PD of the speech detector 662 .
- the method 1100 includes responsive to detection of the voice attribute, transitioning (e.g., by the power manager 675 ) to operating the wireless device in a second mode.
- the transitioning to operating the wireless device in the second mode includes the power manager 675 powering up circuitry (e.g., the processor 672 , the memory 674 ) configured to operate the communication resource (e.g., the BT/BLE resources and/or the transceiver 668 ).
- the operating of the wireless device in the first mode with the power turned off consumes less power than the operating of the wireless device in the second mode at block 1108 with the power turned on.
- the communication resource may include code configured to implement a portion of at least one of a controller 706 and a host 704 of a BT architecture 700 , and the transitioning to operating the wireless device in the second mode comprises starting a processing of the code by circuitry including the processor 672 and the memory 674 .
- the method includes using the communication resource to establish a network connection and communicate packets via the network connection, the communication of the packets based on the audio data.
- the BT/BLE resources 665 may establish a BLE connection and with the transceiver 668 transmit packets including the second portion of the audio data (e.g., corresponding to a command or query) via the BLE connection for pattern recognition processing.
- the communicating of the packets may also include receiving packets including a response to the at least one of the command and the query.
- the BT/BLE resources 665 may establish the BLE connection with the mobile device 404 while maintaining a classic BT connection with the mobile device 404 .
- the BT/BLE resource 665 may establish a classic BT connection with the mobile device 404 while maintaining a BLE connection with the mobile device 404 .
- the BT/BLE resources 665 along with the transceiver 668 communicate the packets via the BT connection as a Generic Attribute Profile (GATT) server to a GATT client.
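The overall method 1100 can be condensed into a small state sketch. The mode names follow the claims' first-mode/second-mode wording; the class and method names are hypothetical, invented for illustration:

```python
class WirelessDevice:
    """Sketch of method 1100: start in a low-power first mode with the
    communication resource powered off, and transition to the second
    mode only when a voice attribute is detected in the audio data."""

    def __init__(self):
        self.mode = "first"      # communication resource powered off
        self.sent_packets = []   # packets communicated in the second mode

    def on_audio(self, voice_attribute_detected, payload=b""):
        if self.mode == "first" and voice_attribute_detected:
            self.mode = "second"  # power up circuitry, start BT/BLE code
        if self.mode == "second" and payload:
            # establish the connection and communicate packets based on
            # the audio data (e.g., the command or query portion)
            self.sent_packets.append(payload)
        return self.mode
```

The first mode consumes less power than the second precisely because no connection state or radio circuitry is kept alive while waiting.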
- embodiments described herein can reduce power consumed by IoT devices by remaining disconnected from a network until sensor data sensed by the IoT device indicates that a network connection should be established to wirelessly communicate in connection with the sensor data in furtherance of an IoT application.
- embodiments can enable “always on” or “always listening” functionality by an IoT device with lower power consumption.
- FIG. 12 is a block diagram illustrating an electronic device 1200 , in accordance with various embodiments.
- the electronic device 1200 may fully or partially include and/or operate the example embodiments of the sensor device 102 , the headset 412 , the network access device 104 , the mobile device 404 , the VCH 406 , the device under control, the bulb 403 , the IoT application(s) 112 , the cloud ASR 416 , or the access point 408 of FIGS. 1 and 4 .
- the electronic device 1200 may be in the form of a computer system within which sets of instructions may be executed to cause the electronic device 1200 to perform any one or more of the methodologies discussed herein.
- the electronic device 1200 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the electronic device 1200 may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a P2P (or distributed) network environment.
- the electronic device 1200 may be an Internet of Things (IoT) device, a server computer, a client computer, a personal computer (PC), a tablet, a set-top box (STB), a VCH, a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a network router, switch or bridge, a television, speakers, a remote control, a monitor, a handheld multi-media device, a handheld video player, a handheld gaming device, or a control panel, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the electronic device 1200 is shown to include processor(s) 1202 .
- the electronic device 1200 and/or processors(s) 1202 may include processing device(s) 1205 such as a System on a Chip processing device, developed by Cypress Semiconductor Corporation, San Jose, Calif.
- the electronic device 1200 may include one or more other processing devices known by those of ordinary skill in the art, such as a microprocessor or central processing unit, an application processor, a host controller, a controller, a special-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA, or the like.
- the electronic device 1200 may include a communication block (not shown) to communicate with an internal or external component, such as an embedded controller or an application processor, via communication interface(s) 1209 and/or bus system(s) 1201 .
- Components of the electronic device 1200 may reside on a common carrier substrate such as, for example, an IC die substrate, a multi-chip module substrate, or the like. Alternatively, components of the electronic device 1200 may be one or more separate integrated circuits and/or discrete components.
- the memory system 1204 may include volatile memory and/or non-volatile memory which may communicate with one another via the bus system 1201 .
- the memory system 1204 may include, for example, RAM and program flash.
- the RAM may be SRAM, and the program flash may be non-volatile storage used to store firmware (e.g., control algorithms executable by processor(s) 1202 to implement operations described herein).
- the memory system 1204 may include instructions 1203 that when executed perform the methods described herein. Portions of the memory system 1204 may be dynamically allocated to provide caching, buffering, and/or other memory based functionalities.
- the memory system 1204 may include a drive unit providing a machine-readable medium on which may be stored one or more sets of instructions 1203 (e.g., software) embodying any one or more of the methodologies or functions described herein.
- the instructions 1203 may also reside, completely or at least partially, within the other memory devices of the memory system 1204 and/or within the processor(s) 1202 during execution thereof by the electronic device 1200 , which in some embodiments, constitutes machine-readable media.
- the instructions 1203 may further be transmitted or received over a network via the communication interface(s) 1209 .
- while the machine-readable medium is in some embodiments a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions.
- the term “machine-readable medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the example operations described herein.
- the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- the electronic device 1200 is further shown to include display interface(s) 1206 (e.g., a liquid crystal display (LCD), touchscreen, a cathode ray tube (CRT), and software and hardware support for display technologies) and audio interface(s) 1208 (e.g., microphones, speakers, and software and hardware support for microphone input/output and speaker input/output).
- the electronic device 1200 is also shown to include user interface(s) 1210 (e.g., keyboard, buttons, switches, touchpad, touchscreens, and software and hardware support for user interfaces) and sensing system(s) 1207 .
Description
- The present application is a continuation application of U.S. Non-Provisional application Ser. No. 16/744,358, filed on Jan. 16, 2020, which is a continuation application of U.S. Non-Provisional application Ser. No. 16/448,247, filed on Jun. 21, 2019, now U.S. Pat. No. 10,587,302, issued on Mar. 10, 2020, which claims priority to U.S. Non-Provisional application Ser. No. 16/221,185, filed Dec. 14, 2018, now U.S. Pat. No. 10,367,540, issued on Jul. 30, 2019, which claims the priority and benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application No. 62/632,888, filed Feb. 20, 2018, all of which are incorporated by reference herein in their entirety.
- The subject matter relates to the field of device connectivity. More specifically, but not by way of limitation, the subject matter discloses techniques for reducing power consumption by wireless sensor devices.
- Wireless sensor devices perceived to have “always-on” or “always listening” interface capabilities, such as wireless headsets, health monitors, smart speakers, hands-free interfaces, and other wireless sensors, may remain in a powered state to collect sensor data and to wirelessly transmit the collected sensor data to another device. Remaining in a powered state over long periods of time may unnecessarily drain battery power or require an electrical outlet.
- Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings in which:
- FIG. 1 is a block diagram illustrating sensor devices communicatively coupled to networks through a network access device, in accordance with various embodiments;
- FIG. 2 is a block diagram illustrating a sensor device, in accordance with embodiments;
- FIG. 3 is a flow diagram illustrating sensor data processing and communication associated with a sensor device, in accordance with embodiments;
- FIG. 4 is a block diagram illustrating a wireless headset communicatively coupled to networks, in accordance with embodiments;
- FIG. 5 is a graph diagram illustrating audio data, in accordance with an embodiment;
- FIG. 6 is a block diagram illustrating an earpiece device of a wireless headset, in accordance with embodiments;
- FIG. 7 is a block diagram illustrating a BT architecture, in accordance with embodiments;
- FIG. 8 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment;
- FIG. 9 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment;
- FIG. 10 is an interactive flow diagram illustrating operations of a headset and a mobile device, in accordance with an embodiment;
- FIG. 11 is a flow diagram illustrating a method of powering a communication resource, in accordance with embodiments; and
- FIG. 12 is a block diagram illustrating an electronic device, in accordance with various embodiments.
- Systems and methods for lowering power consumption by wireless sensor devices are described. In the following description, for purposes of explanation, numerous examples and embodiments are set forth in order to provide a thorough understanding of the claimed subject matter. It will be evident to one skilled in the art that the claimed subject matter may be practiced in other embodiments. Some embodiments are now briefly introduced and then discussed in more detail along with other embodiments, beginning with FIG. 1.
- Smart speakers, hearing aids, voice controlled hubs, mobile phones, white goods, and industrial machinery are examples of Internet of Things (IoT) devices tasked with sensing their surroundings and sharing sensor data. Systems providing “always-on” or “always listening” interface capabilities may include multiple power domains that can each operate in one or more power consumption states. In embodiments, a power domain including communication resources (e.g., transceivers and processing systems used to execute communication protocol code) may remain in a low power consumption mode (e.g., off, hibernate, sleep, etc.) until user speech or other sensor data indicates that a wireless connection is required to transmit packets related to the sensor data. At that point, the power domain is transitioned to a higher power consumption mode to establish the wireless connection and communicate wirelessly.
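The power-domain behavior described above can be sketched as a small state machine. This is an illustrative sketch only; the class and method names are assumptions, not identifiers from the application:

```python
from enum import Enum

class PowerMode(Enum):
    OFF = 0      # low power consumption mode: communication resources unpowered
    ACTIVE = 1   # higher power consumption mode: radio powered, connectable

class CommPowerDomain:
    """Toy model of a power domain holding communication resources."""

    def __init__(self):
        self.mode = PowerMode.OFF  # stay low power by default

    def on_sensor_indication(self, transmission_needed: bool) -> PowerMode:
        """Transition to the higher power mode only when sensor data warrants it."""
        if transmission_needed and self.mode is PowerMode.OFF:
            # Power up the transceiver and protocol processing, then connect.
            self.mode = PowerMode.ACTIVE
        return self.mode

domain = CommPowerDomain()
assert domain.on_sensor_indication(False) is PowerMode.OFF   # no speech: stay asleep
assert domain.on_sensor_indication(True) is PowerMode.ACTIVE # speech detected: wake
```

The key design point, per the paragraph above, is that the transition is driven by the sensor data itself rather than by a fixed schedule or a permanently maintained connection.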
- A wireless headset embodiment described herein includes a power source interface configured to couple with a battery and a microphone to provide audio signals. An integrated circuit (IC) of the wireless headset includes signal processing circuitry to generate audio data based on the audio signals and a processor to operate a phrase detector (PD). The IC includes a power manager coupled to the PD and Bluetooth (BT) circuitry. In embodiments, the wireless headset conserves power that would otherwise be used to communicate until the wireless headset detects speech in the audio data.
- In one example, the user utters the wake-up phrase and command to the wireless headset, “Ok helper, turn on the light.” Responsive to a detection by the PD of the wake-up phrase in a first portion of the audio data (e.g., the wake-up phrase, “Ok, helper”), the power manager transitions the wireless headset from operation in a first mode (with battery power to operate the BT circuitry turned off) to operation in a second mode (with battery power to operate the BT circuitry turned on). The operation in the second mode includes use of the BT circuitry to establish a BT Low Energy (BLE) connection and transmit packets, including a second portion of the audio data (e.g., the command, “turn on the light”), for speech recognition via the BLE connection as a BLE Generic Attribute Profile (GATT) server.
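A minimal sketch of this example's split between the two portions of audio data is shown below. The helper function is hypothetical, and the 20-byte payload size is an assumption for illustration (the application does not specify a chunk size):

```python
def split_and_packetize(audio: bytes, wake_len: int, mtu: int = 20) -> list:
    """Split off the command portion of the audio and cut it into
    payload-sized packets for transmission over the established connection.

    Only the second portion (the command) is packetized; the wake phrase
    itself served its purpose by gating the radio power-up.
    """
    command = audio[wake_len:]
    return [command[i:i + mtu] for i in range(0, len(command), mtu)]

# Toy audio buffer: wake phrase followed by the command.
audio = b"ok helper" + b"turn on the light"
packets = split_and_packetize(audio, wake_len=9)
assert b"".join(packets) == b"turn on the light"
assert all(len(p) <= 20 for p in packets)
```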
- Embodiments described herein can reduce power consumed by IoT devices by remaining disconnected from a network until sensor data sensed by the IoT device indicates that a network connection should be established to wirelessly communicate in connection with the sensor data in furtherance of an IoT application. Compared to prior techniques that maintain a network connection independent of sensor data indications, embodiments can enable the perception of “always on” or “always listening” functionality by an IoT device with lower power consumption. These and other embodiments are described in further detail herein.
- The detailed description below includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with embodiments. These embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice embodiments of the claimed subject matter. The embodiments may be combined, other embodiments may be utilized, or structural, logical, and electrical changes may be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
- FIG. 1 is a block diagram illustrating sensor devices communicatively coupled to network(s) 114 through a network access device 104, in accordance with various embodiments. The sensor devices may communicate with the network access device 104 using any communication protocols known in the art.
- The node 106 is to relay, to the sensor device 102, sensor data originating from the other sensor devices. The network access device 104 may process the sensor data and/or forward it to the network(s) 114. In one embodiment, the node 106 and the sensor devices communicate with the network access device 104 through a short range wireless network such as BT, Zigbee, IEEE 802.15.4, and/or a Wi-Fi peer-to-peer (P2P) network.
- In some embodiments, each sensor device, the node 106, and the network access device 104 is a node in a mesh network for many-to-many (m:m) device communications. For example, the BLE mesh topology can support a network of tens, hundreds, or thousands of devices that need to communicate with one another. In embodiments, some sensor devices may be out of range of the network access device 104, but sensor data from those devices may reach the network access device 104 through the node 106 and the sensor device 102. In this way, the sensor device 102 can communicate sensor data to the network access device 104 on behalf of the out-of-range sensor devices.
- The network access device 104 is to receive sensor data from the sensor device 102 and either process the sensor data itself, in support of a particular IoT application, or forward the sensor data through a wired or wireless connection to the network(s) 114 for processing. The network access device 104 may be a multi-network capable access point, beacon, and/or voice controlled hub (VCH). For example, the network access device 104 may connect with the sensor device 102 over a BT network and connect with the network(s) 114 over an Ethernet based network.
- Network(s) 114 may include one or more types of wired and/or wireless networks for communicatively coupling the network access device 104, the IoT application(s) 112, and the device under control 103 to one another. For example, and not limitation, network(s) 114 may include a local area network (LAN), a wireless LAN (WLAN) (e.g., Wi-Fi, 802.11 compliant), a metropolitan area network (MAN), a wide area network (WAN), a personal area network (PAN) (e.g., BT Special Interest Group (SIG) standard or Zigbee, IEEE 802.15.4 compliant), and/or the Internet.
- IoT application(s) 112 are to use the sensor data from the sensor devices, as discussed further with respect to FIG. 5.
- The device under control 103 may include any device with a function that can be initiated responsive to the sensor data. In some embodiments, the network access device 104 controls the device under control 103 based on the results of sensor data processing (e.g., audio pattern recognition) performed by the IoT application(s) 112. Example devices under control may include, without limitation, white goods, thermostats, lighting, automated blinds, automated door locks, automotive controls, windows, and industrial controls and actuators. As used herein, a device under control may include any logic, firmware, or software application run by the device under control 103.
- In some solutions (e.g., previous solutions), an “always-on,” “always listening” sensor device uses its power supply (e.g., battery power) to establish and maintain a network connection with the network access device 104 regardless of whether that sensor device is currently sharing, or even has, sensor data to share with the network access device 104 or the IoT application(s) 112. In embodiments described herein, before the sensor device 102 uses its power source to establish and/or maintain a network connection with the network access device 104, the sensor device 102 first evaluates whether it has sensor data that will be useful in support of an IoT application. For example, the sensor device 102 may analyze attributes of portions of sensor data for indicators (e.g., speech-like sounds, presence detection) that the remainder of the sensor data should be transmitted to the network(s) 114 for use by an IoT application.
- In some embodiments, sensor data analysis may be distributed among one or more of the sensor devices. For example, for audio data sensed by the sensor device 108, the decision on whether the sensor device 102 should forward the audio data to the network access device 104 for pattern recognition processing by the IoT application(s) 112 may be made based on different steps of evaluation provided by the sensor device 108 (e.g., to provide speech onset detection) and the sensor device 102 (e.g., to provide wake phrase detection). An example sensor device that uses its power source to establish and/or maintain a network connection, based on an indication that the sensor data will be useful in support of an application (e.g., an IoT application), is described with respect to FIG. 2.
-
FIG. 2 is a block diagram illustrating a sensor device 202, in accordance with embodiments. The sensor device 202 is shown to include sensor(s) 222, sensing circuitry 224, evaluator 226, power manager 228, processor 230, and communication circuitry 232 coupled to one another over a bus system 227. The sensor(s) 222 are to sense attributes of a condition (e.g., a physical condition and/or a state condition) and provide a corresponding analog and/or digital signal. Example sensors may include transducers, image sensors, temperature sensors, humidity sensors, biometric sensors, data sensors, and the like. Sensing circuitry 224 is to measure an analog signal provided by sensor(s) 222 to quantify a sensed condition. Sensor(s) 222 or sensing circuitry 224 may include one or more analog to digital converters to convert analog signals to digital signals. As used herein, sensor data may include an analog and/or a digital signal corresponding to the sensed condition.
- The evaluator 226 is to analyze a portion of the sensor data to determine whether the sensor data warrants further evaluation (e.g., for pattern recognition) by a remote device to support an IoT application. If the evaluator 226 determines that such further evaluation is warranted, the evaluator 226 provides a signal to the power manager 228. Power manager 228 is to control power delivery to various power domains of the sensor device 202. In embodiments, the power manager 228 does not deliver power to a power domain including the communication circuitry 232 until after the evaluator 226 determines that sensor data should be further evaluated. The evaluator 226 and/or the power manager 228 may be implemented by hardware (e.g., circuitry), software (e.g., instructions, firmware, code), or a combination of the two. For example, the evaluator 226 and/or the power manager 228 may be code stored in a memory (not shown) and implemented through execution by the processor 230.
- Processor 230 may include a memory system (not shown) and include one or more processing devices known by those of ordinary skill in the art, such as a microprocessor or central processing unit, an application processor, a host controller, a controller, a special-purpose processor, DSP, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like. Communication circuitry 232 is to communicate with the sensor devices, the node 106, and/or the network access device 104 of FIG. 1. In embodiments, the communication circuitry 232 includes packet processing and radio capabilities to support the wireless communication protocols discussed with respect to FIG. 1.
-
FIG. 3 is a flow diagram 300 illustrating sensor data processing and communication associated with the sensor device 202, in accordance with embodiments. The operations shown in FIG. 3 can be performed by processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In various embodiments, the operations may be performed as shown and described with respect to FIGS. 1 and 2.
- At block 302, the evaluator 226 receives sensor data from the sensing circuitry 224. At block 304, the evaluator 226 evaluates the sensor data to determine at block 306 whether the sensor data should be further analyzed by a remote device to support an IoT application. In one embodiment, if the evaluator 226 determines that a value of a portion of the sensor data meets or exceeds a predetermined reference value, the evaluator 226 determines at block 306 that the sensor data warrants further evaluation by a remote device (e.g., the IoT application(s) 112) to support an IoT application. Otherwise, operation returns to block 302. Alternatively or additionally, if the evaluator 226 determines that a portion of the sensor data meets or exceeds a predetermined level of similarity to a reference value, the evaluator 226 may determine at block 306 that the sensor data warrants the further evaluation. Otherwise, operation returns to block 302. At block 306, if the evaluator 226 determines that further analysis of the sensor data should be provided, it provides a signal to the power manager 228.
- At block 308, responsive to the signal from the evaluator 226, the power manager 228 switches the communication circuitry 232 from operating in a first mode at block 310 to operating in a second mode at block 312. In embodiments, the first mode is a lower power consumption mode than the second mode. In the first mode (e.g., off), the power manager 228 may provide little to no power to the communication circuitry 232 for packet processing and radio (e.g., transceiver) operation. In the second mode, the power manager 228 provides enough power to the communication circuitry 232 to transmit the sensor data to the network access device 104 at block 314. In this way, power previously used to constantly maintain a network connection, even when not needed, is conserved until sensor data needing processing is ready to be sent. In an embodiment, the transmission at block 314 may be preceded by establishing a network connection and followed by maintaining the network connection to complete additional communications related to the sensor data.
- At block 316, the network access device 104 transmits the sensor data (over a second network) to the IoT application 112 for processing or analysis. Alternatively, the network access device 104 may avoid the further transmission by providing the analysis of the sensor data itself. At block 318, the IoT application 112 processes or analyzes the sensor data and, at block 320, transmits the results of its sensor data analysis to the device under control 103 and/or back to the network access device 104. At block 322, the network access device 104 transmits the results associated with the processed sensor data to the device under control 103 and/or to the communication circuitry 232 of the sensor device 102, which receives the response at block 324. Particular examples of the embodiments described in FIGS. 1 and 2 are discussed with respect to the remainder of the figures.
-
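The evaluation at blocks 304 and 306 can be illustrated with a toy decision function. The reference value, the similarity measure, and the thresholds below are assumptions made for the sketch, not values from the application:

```python
def warrants_remote_analysis(portion, ref_value=0.5, ref_pattern=None,
                             similarity_threshold=0.8) -> bool:
    """Return True if a portion of sensor data warrants remote evaluation.

    Two illustrative tests mirror the description of blocks 304-306:
    a level test against a predetermined reference value, and a similarity
    test against a reference pattern (here, normalized correlation).
    """
    # Level test: any sample at or above the predetermined reference value.
    if max(portion) >= ref_value:
        return True
    # Similarity test: only applicable when a reference pattern is supplied.
    if ref_pattern is not None and len(ref_pattern) == len(portion):
        num = sum(a * b for a, b in zip(portion, ref_pattern))
        den = (sum(a * a for a in portion) * sum(b * b for b in ref_pattern)) ** 0.5
        if den and num / den >= similarity_threshold:
            return True
    return False  # otherwise, operation returns to block 302

assert not warrants_remote_analysis([0.1, 0.2, 0.1])  # ambient levels only
assert warrants_remote_analysis([0.1, 0.7, 0.4])      # meets the reference value
```

When the function returns True, the corresponding step in the flow is the evaluator signaling the power manager so that the communication circuitry can be powered up and the data transmitted.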
FIG. 4 is a block diagram 400 illustrating a wireless headset 412 communicatively coupled to network(s) 114, in accordance with embodiments. The headset 412, the light bulb 403, mobile device 404, voice controlled hub (VCH) 406, access point 408, cell tower 410, cloud ASR 416, and the network(s) 114 may be communicatively coupled to one another. Any communication networks known in the art may be used to communicate among the various nodes. In one embodiment, the headset 412 couples to the mobile device 404 and/or the VCH 406 via BT network protocol. The mobile device 404 and the VCH 406 may couple to the light bulb 403 via the BT network protocol and couple to the access point 408 via Wi-Fi network protocol. In an embodiment, the headset 412 is coupled to the mobile device 404 in a point-to-point configuration available on BT Basic Rate/Enhanced Data Rate (BR/EDR) (e.g., classic BT) for audio streaming and BLE for data transfer. The mobile device 404 may also be coupled to the cell tower 410 via cellular communication protocols (e.g., Long-Term Evolution (LTE), 5G). The access point 408 may be coupled to larger Ethernet based networks 114 via a wired connection, while the network(s) 114 may couple to the cell tower 410 and the cloud ASR 416.
- The headset 412 is shown to include two earpieces, each including a speaker 430, a microphone 432, a battery 434, and a touch interface 436. Each feature described with respect to the wireless headset 412 may be included in each earpiece of the wireless headset 412. The headset 412 is to be utilized by a user 401. For example, the user 401 may use the wireless headset 412 to listen to audio played back by the speakers 430 of the wireless headset 412 after the audio has been transmitted to the wireless headset 412 from the mobile device 404 or the VCH 406. The user 401 may also use the microphone 432 and the speakers 430 of the wireless headset 412 to convey sounds for voice calls made by the mobile device 404. In addition, the user 401 may speak a voice command or query into the microphone 432 of the headset 412. The voice command or query is comprised of sounds 402 sensed by the microphone (e.g., audio data).
- An example anatomy of audio data evaluated by the headset 412 is discussed in more detail with respect to FIG. 5. As will be discussed in more detail with respect to FIG. 6, the headset 412 may evaluate some of the audio data to determine whether it includes a voice command or query that should be forwarded for interpretation to the mobile device 404 and/or to the cloud ASR 416. If not, battery power to establish or maintain a network connection need not be consumed. If the audio data should be forwarded for interpretation, the headset 412 will use battery power to establish a BT (e.g., a BLE) connection and transfer the audio data to the mobile device 404 and/or the VCH 406.
- In an embodiment, the touch interface 436 is a two-dimensional interface that uses a sensor array to detect the presence of a touch and/or fingerprints proximate to the surface of the sensor array. In some embodiments, the user 401 may use the touch interface 436 to control functionality (e.g., on/off, volume control, answer call, end call, etc.) or to gain access to or control the wireless headset 412, the mobile device 404, or a device that is communicatively coupled to the mobile device 404. The battery 434 on each earpiece of the wireless headset 412 is the power source for the circuitry on each earpiece and may be recharged using a suitable charging device.
- The mobile device 404 may include any portable wireless communication device. In a particular embodiment, the mobile device 404 is a battery powered smart phone. The VCH 406 may include any portable or fixed wireless communication device. The mobile device 404 and the VCH 406 may be equipped with communication systems capable of communicating using BT communication protocols, Wi-Fi communication protocols, and/or cellular communication protocols. The mobile device 404 and the VCH 406 may each include one or more microphones, speakers, and speech detectors to interpret speech and/or enable network transmission of the audio data for speech recognition. Both the mobile device 404 and the VCH 406 may facilitate a response to the voice commands or queries, and, for example, may transmit a signal to the light bulb 403 to turn it on or off in response to an interpreted voice command. In another example, when the voice command involves a query, the mobile device 404 and/or the VCH 406 may each be capable of determining an answer to the query and responding by playing back the answer through its own speakers or providing the answer to the speaker 430 of the headset 412 for playback.
- The access point 408 may be a fixed wireless and wired communication device. In an embodiment, the access point 408 includes communication systems to communicate wirelessly (e.g., via Wi-Fi) with the VCH 406 and mobile device 404 and through a wired medium (e.g., Ethernet LAN) with the network(s) 114. Cell tower 410 is to facilitate voice and data communication between the mobile device 404 and other mobile devices (not shown) and may also be coupled to the network(s) 114.
- Cloud ASR 416 is to identify predetermined audio patterns and associate them with one another (e.g., using a data structure) and/or with corresponding meaning. Patterns recognizable by cloud ASR 416 may facilitate, for example and not limitation, music recognition, song recognition, voice recognition, image recognition, and speech recognition, or any other sensed pattern. In embodiments, cloud ASR 416 may interpret the audio data and provide its results to the mobile device 404, which may act on the results and/or forward the results back to the headset 412.
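As a hedged illustration of associating a recognized pattern with a corresponding meaning and action (the "data structure" mentioned above), a lookup from recognized text to a (device, command) pair might look like the following. The table contents and names are hypothetical:

```python
# Hypothetical phrase-to-action table; real systems would use richer
# intent structures, but a mapping captures the associate-and-act idea.
ACTIONS = {
    "turn on the light": ("light_bulb", "on"),
    "turn off the light": ("light_bulb", "off"),
}

def interpret(recognized_text: str):
    """Map recognized speech to a (device, command) pair, or None if the
    phrase has no associated meaning."""
    return ACTIONS.get(recognized_text.strip().lower())

assert interpret("Turn on the light ") == ("light_bulb", "on")
assert interpret("what time is it") is None
```

The result of such a lookup is what the mobile device or VCH could act on, e.g., by signaling the light bulb 403.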
FIG. 5 is a graph diagram illustrating audio data 500, in accordance with an embodiment. As introduced above, the headset 412 may receive sounds through its microphone 432 and evaluate the audio data 500 to determine whether it should be forwarded to a remote device for speech recognition. The audio data 500 is shown to include ambient noise 502 (e.g., background noise), speech onset 504, a wake phrase 506, and a query or command 508. The ambient noise 502 is audio data that corresponds to background sounds in the environment. The speech onset 504, the wake phrase 506, and the query or command 508 are portions of the audio data 500 that correspond to both the sound waves 402 produced by the user (e.g., the speech to be recognized) and the ambient noise 502. Speech onset 504 is the beginning of speech in the audio data 500 and is shown to be a beginning portion or subset of the wake phrase 506. The wake phrase 506 is a predetermined phrase uttered by a user (e.g., “ok phone”). After having uttered the wake phrase 506, the user utters the query or command 508 (e.g., “turn on the light”) to be acted upon (e.g., by the bulb 403 of FIG. 4).
- To conserve power, the headset 412 may only attempt detection of the wake phrase 506 if the headset 412 has already detected speech onset 504. Similarly, the headset 412 may only consume power to establish a network connection with the mobile device 404 for subsequent speech recognition when the headset 412 has detected an indication (e.g., speech onset 504, the wake phrase 506, or touch input) that the audio data includes a command or query 508. In previous “always listening” solutions, the headset constantly maintains the network connection even when there is no indication that sensed data should be transmitted onto the network for interpretation. There can be significant power consumption involved with the continuous maintenance of a network connection in previous solutions, which can be especially impactful in a battery powered audio processing device.
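The staged gating described above (attempt wake-phrase detection only after speech onset has been detected) can be sketched as follows. The detector functions here are toy stand-ins, not the headset's actual algorithms:

```python
def staged_gate(frames, onset_detect, wake_detect) -> bool:
    """Return True when a network connection should be established.

    Stage 1: cheap speech-onset check on each frame; if no onset is found,
    the more expensive wake-phrase detector never runs.
    Stage 2: wake-phrase detection over the frames, run only after onset.
    """
    if not any(onset_detect(f) for f in frames):
        return False            # stay in the low power state: no speech onset
    return wake_detect(frames)  # only now spend cycles on the wake phrase

# Toy detectors: an energy threshold for onset, a string match for the phrase.
onset = lambda f: f["energy"] > 0.3
wake = lambda fs: any(f.get("phrase") == "ok helper" for f in fs)

quiet = [{"energy": 0.1}, {"energy": 0.2}]
speech = [{"energy": 0.6, "phrase": "ok helper"}]
assert staged_gate(quiet, onset, wake) is False
assert staged_gate(speech, onset, wake) is True
```

The design choice mirrors the paragraph above: each stage is cheaper than the one it gates, so ambient noise never triggers the costly steps of wake-phrase detection or connection establishment.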
FIG. 6 is a block diagram illustrating an earpiece device 600 of the wireless headset 412, in accordance with embodiments. The functional blocks of the example earpiece device 600 include a microphone 640, audio signal processor 642, sensor array 644, capacitance sensor 646, power source interface 664, battery 663, and Bluetooth communication IC 660. The Bluetooth communication IC 660 is shown to include speech detector 662, processor 672, memory 674, evaluator 670, power manager 675, BT/BLE resources and circuitry 665, and transceiver 668.
- Each functional block may be coupled to bus system 601 (e.g., I2C, I2S, SPI) and be implemented using hardware (e.g., circuitry), instructions (e.g., software and/or firmware), or a combination of hardware and instructions. Although this embodiment shows a set of functional blocks within a BT communication IC 660, in some embodiments, any combination of functional blocks could be implemented on a single integrated circuit substrate or in a single device package without departing from the claimed subject matter. In other embodiments, the functional blocks of the earpiece device 600 are distributed among multiple integrated circuit devices, device packages, or other circuitry.
- The
microphone 640 is to receive sound waves from its surrounding environment and includes a transducer or other mechanism (e.g., including a diaphragm) to convert the energy of the sound waves into electronic or digital signals (e.g., audio data). The microphone 640 may include an array of microphones. In some embodiments, the microphone 640 may be a digital microphone. In some embodiments, the microphone 640 may include threshold/hysteresis settings for activity detection and measurement and/or processing logic to determine whether a sound wave received by the microphone 640 meets or exceeds an activation threshold that gates whether corresponding audio data should be passed on to the speech detector 662 (discussed below) for processing. In various embodiments, the threshold level of activity may be an energy level, an amplitude, a frequency, or any other attribute of a sound wave. The microphone 640 may be coupled to a memory (not shown) that stores the activation threshold, which may be dynamically reprogrammable.
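A sketch of the activation-threshold gate described above is shown below, with an assumed numeric threshold; in the device the gated attribute could be an energy level, amplitude, frequency, or another attribute of the sound wave:

```python
class GatedMic:
    """Toy model of a microphone front end with an activation threshold
    that gates whether audio frames reach the speech detector."""

    def __init__(self, activation_threshold: float = 0.25):
        # Stored in memory in the device; may be dynamically reprogrammed.
        self.activation_threshold = activation_threshold

    def gate(self, frame_level: float) -> bool:
        """True if the frame should be forwarded to the speech detector."""
        return frame_level >= self.activation_threshold

mic = GatedMic()
assert mic.gate(0.1) is False   # below threshold: detector stays idle
assert mic.gate(0.5) is True    # at/above threshold: pass to speech detector
mic.activation_threshold = 0.6  # threshold is dynamically reprogrammable
assert mic.gate(0.5) is False
```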
Audio signal processor 642 includes circuitry to process and analyze the audio data received from the microphone 640. In embodiments, the audio signal processor 642 digitizes (e.g., using an analog to digital converter (ADC)) and encodes the electronic audio signals. Once digitized, the audio signal processor 642 may provide signal processing (e.g., demodulation, mixing, filtering) to analyze or manipulate attributes of the audio data (e.g., phase, wavelength, frequency).
- In one embodiment, the audio signal processor 642 includes a pulse density modulator (PDM) front end that is connected to the microphone 640. In the PDM front end, the PDM generates a pulse density modulated bitstream based on an electronic signal from the microphone 640. The PDM provides a clock signal to the microphone 640 that determines the initial sampling rate, then receives a data signal from the microphone 640 representing audio captured from the environment. From the data signal, the PDM generates a PDM bitstream and may provide the bitstream to a decimator, which can generate the audio data provided to the bus system 601.
audio signal processor 642 includes an auxiliary analog to digital converter (AUX ADC) front end to provide the audio data. In the auxiliary ADC front end, an analog to digital converter converts an analog signal from the microphone 640 to a digital audio signal. The digital audio signal may be provided to a decimator to generate the audio data provided to the bus system 601. - The
earpiece 600 may include a sensor array 644 and a capacitance sensor 646 to implement the touch interface 436 of FIG. 4. In one embodiment, the sensor array 644 includes sensor electrodes that are disposed as a two-dimensional matrix (also referred to as an XY matrix). The sensor array 644 may be coupled to pins of the capacitance sensor 646 via one or more analog buses. The capacitance sensor 646 may include circuitry to excite electrodes of the sensor array 644 and conversion circuitry to convert responsive analog signals into a measured capacitance value. The capacitance sensor 646 may also include counter or timer circuitry to measure the output of the conversion circuitry. It should be noted that there are various known methods for measuring capacitance, such as current versus voltage phase shift measurement, resistor-capacitor charge timing, capacitive bridge divider, charge transfer, successive approximation, sigma-delta modulators, charge-accumulation circuits, field effect, mutual capacitance, frequency shift, or other capacitance measurement algorithms. It should be noted, however, that instead of evaluating raw counts relative to a threshold, the capacitance sensor 646 may evaluate other measurements to determine the user interaction. For example, in the capacitance sensor 646 having a sigma-delta modulator, the capacitance sensor 646 evaluates the ratio of pulse widths of the output, instead of whether the raw counts are over or under a certain threshold. - The
capacitance sensor 646 may include or be coupled to software components to convert the count value (e.g., capacitance value) into a sensor electrode detection decision (also referred to as a switch detection decision) or relative magnitude. Based on these count values, the processing logic associated with the capacitance sensor 646 may determine the state of the sensor array 644, such as whether an object (e.g., a finger) is detected on or in proximity to the sensor array 644 (e.g., determining the presence of the finger), track the motion of the object, or detect features (e.g., fingerprint ridges and valleys). Alternatively or additionally, the capacitance sensor 646 may send the raw data, partially processed data, and/or the data indicating the state of the sensor array 644 to the evaluator 670 to evaluate whether that data should be remotely processed to support an IoT application (e.g., remote fingerprint authentication and/or gesture or pattern recognition). -
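The count-to-decision conversion above, combined with the threshold/hysteresis settings mentioned earlier, can be sketched as a small state machine. The class and threshold names are illustrative, not taken from the specification; the point is that separate press and release thresholds keep a noisy count near a single threshold from toggling the decision:

```python
class TouchDetector:
    """Convert raw capacitance counts into a touch decision with hysteresis:
    a higher threshold to assert touch, a lower one to release it."""

    def __init__(self, press_threshold, release_threshold):
        assert press_threshold > release_threshold
        self.press = press_threshold
        self.release = release_threshold
        self.touched = False

    def update(self, raw_count):
        """Feed one raw count; return the current touch decision."""
        if not self.touched and raw_count >= self.press:
            self.touched = True        # object detected on/near the array
        elif self.touched and raw_count < self.release:
            self.touched = False       # object removed
        return self.touched
```

A count of 90 between the two thresholds keeps whatever state was last decided, which is the hysteresis behavior.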
Speech detector 662 is to detect attributes of speech in the audio data 500 and may include a speech onset detector (SOD) and/or a phrase detector (PD) stored in the memory 674 and operated by the processor 672. The SOD is to determine whether audio data received from the audio signal processor 642 includes the speech onset 504. In an embodiment, when sounds received by the microphone 640 meet or exceed an activation threshold, the microphone 640 wakes up the SOD to execute a speech onset detection algorithm in order to determine whether speech-like signals are present in the audio data. The SOD may use any of the speech onset detection algorithms or techniques known to those having ordinary skill in the art. In an embodiment, audio data with a reduced sample rate (e.g., 2-4 kHz) is sufficient for detecting speech onset (or other sound onset event) while allowing the SOD to be clocked at a lower frequency, thus reducing the power consumption and complexity of the SOD. - Upon detecting a speech onset event, the SOD asserts a status signal on the
bus 601 to wake the PD from a low power consumption state (e.g., a sleep state) to a higher power consumption state (e.g., an active state) to perform phrase detection. The PD is to determine whether the audio data 500 indicated by the SOD includes the predetermined phrase 506 (e.g., a wake-phrase). The PD may process pattern recognition algorithms and perform comparisons to expected patterns to determine whether the wake-up word or phrase 506 has been spoken. In embodiments, the PD makes this determination based on the audio data 500 that has been recorded in the memory 674 (e.g., a buffer). Although the speech detector 662 is shown to be implemented on the BT communication IC 660, it may reside on and be implemented by other hardware of the earpiece 600. - The
memory 674 may include, for example, random access memory (RAM) and program flash. The RAM may be static RAM (SRAM), and the program flash may be non-volatile storage, which may be used to store firmware (e.g., control algorithms executable by the processor 672 to implement operations described herein). The processor 672 may be the same as or similar to the processor 230 described with respect to FIG. 2. The memory 674 may include instructions or code that when executed perform the methods described herein. Portions of the memory 674 may be dynamically allocated to provide caching, buffering, and/or other memory-based functionalities. - The
evaluator 670 includes decision logic to evaluate whether the output of the speech detector 662 or the capacitance sensor 646 indicates that the audio data or the touch data, as the case may be, should be processed remotely in support of an IoT application. In an embodiment, the evaluator 670 is implemented when the processor 672 executes decision logic code stored in the memory 674. Alternatively or additionally, the decision logic may be implemented using hardware. Although the evaluator 670 is shown as a separate functional block, the evaluator 670 may be implemented entirely within, or in combination with, other functional blocks (e.g., the speech detector 662 or the capacitance sensor 646) of the earpiece 600. In embodiments, the evaluator 670 may signal the power manager 675 to power up communication resources when the PD detects the predetermined wake-phrase “Ok, helper” or when the capacitance sensor 646 detects the presence of a finger or fingerprint proximate to the sensor array 644. -
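The evaluator's decision logic reduces to a simple predicate over the two sensor paths. The following is an illustrative sketch (function and argument names are assumptions, not from the specification); it returns True exactly when the power manager should be signaled to power up communication resources:

```python
def evaluator_decision(speech_result=None, touch_result=None,
                       wake_phrase="Ok, helper"):
    """True when sensor data warrants remote processing: the detected
    phrase matches the wake-phrase, or a finger is present on the array."""
    if speech_result is not None and \
            speech_result.strip().lower() == wake_phrase.lower():
        return True
    if touch_result == "finger_present":
        return True
    return False
```

Either path alone is sufficient to trigger power-up, matching the "voice or touch" behavior described above.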
Power manager 675 is to control power delivery for operation of the BT architecture including, without limitation, the BT/BLE resources 665 and the transceiver 668. The power manager 675 provides power management features that can be invoked by software through power management registers in memory or packet-handling components of the BT architecture. The power manager 675 may automatically adjust power delivery based on user activity. For example, the power manager 675 may generate power-down and power-up control signals for circuitry that executes software code or operates the BT architecture, such as the processor 672, the memory 674, and the transmit path, receive path, phase locked loop (PLL), and power amplifier of the transceiver 668. In addition to the power management functionality that is responsive to indications from the evaluator 670, the power manager 675 may support various BT power consumption modes in compliance with BT standards. For example, the power manager 675 may transition the IC 660 to the next lower power state after a programmable period of user inactivity. When user activity resumes, the power manager 675 may immediately return the IC 660 to an active mode. - In an embodiment, the
earpiece 600 may use compression and decompression software stored in a memory to reduce the amount of audio data to be transmitted and/or received over network (e.g., BT network) connections. Raw pulse code modulation (PCM) samples from the microphone may also be transmitted over BT connections. For example, the processor 672 may execute the software to code the audio data using the Opus audio format, or any other appropriate codec known by one having ordinary skill in the art. In other embodiments, such audio formatting software may be executed by a microprocessor (not shown) of the earpiece 600 that is not disposed on the BT communication IC 660. - BT/BLE resources
circuitry 665 may include BT communication circuitry and code to implement portions of the BT architecture described with respect to FIG. 7. In embodiments, the BT/BLE resources 665 may implement either or both of the BLE and BR/EDR (e.g., classic BT) systems. The transceiver 668 is to couple with an antenna 661 and facilitate transmitting and receiving of radio frequency (RF) signals. In embodiments, when operating as a receiver, the transceiver 668 filters and mixes received RF signals with a local oscillator signal to down-convert the desired frequency (e.g., channel) to an intermediate frequency. In an embodiment, the down-conversion process provides the intermediate frequency as complex I and Q signals, which are sampled and digitized by an analog to digital converter of the transceiver 668. A phase estimator of the transceiver 668 may use the I and Q values to estimate the phase of the RF signal for the time it was received at the antenna and forward the phase value to a demodulator of the transceiver, which forwards a decoded sequence of 1s and 0s for further processing (e.g., packet processing). When operating as a transmitter, the transceiver 668 generally performs these operations in reverse, receiving a sequence of 1s and 0s from the signal processor, modulating the signal, and outputting an analog signal for transmission by the antenna 661. The Bluetooth communication IC 660 may wirelessly communicate with the mobile device 404 and/or the VCH 406 over a BLE link 680 (e.g., to encode speech data) or a classic BT link 682 (e.g., BT Advanced Audio Distribution Profile (A2DP) to encode streamed audio) via the antenna 661. In embodiments, the transmission energy and transmission range of the earpiece 600 can be adjusted (e.g., dynamically) to provide acceptable performance under the particular noise and interference conditions and considering the proximity of connected devices. -
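The phase-estimation step above is, in its simplest form, the four-quadrant arctangent of the digitized Q and I samples. The sketch below illustrates that calculation only; it is not the transceiver 668's actual implementation:

```python
import math

def estimate_phase(i_sample, q_sample):
    """Phase (radians) of a down-converted RF sample from its I and Q
    components; atan2 resolves the correct quadrant for negative values."""
    return math.atan2(q_sample, i_sample)
```

For example, equal I and Q components correspond to a phase of pi/4, and a purely in-phase sample (Q = 0) to a phase of zero.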
FIG. 7 is a block diagram illustrating a BT architecture 700, in accordance with embodiments. In embodiments, the application block 702 is the user application running on a device that interfaces with the BT protocol stack 703 of the device. The host 704 includes the upper layers of the BT protocol stack 703 and the controller 706 includes the lower layers. In embodiments, the BT/BLE resources 665 and/or the transceiver 668 implement portions of the host 704 and the controller 706 of the BT architecture 700 using processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software code), and/or any combination thereof. The BT protocol stack 703 is used to implement functionality that is compatible with BT specifications, including BLE. - In embodiments, the generic access profile (GAP) 708 defines the generic procedures related to discovery of BT devices and link management aspects of connecting to BT devices. The
GAP 708 may define the broadcaster role, the observer role, the peripheral role, and/or the central role of the applicable BT specifications. - For BT compatible devices, data may be organized into concepts called profiles, services, and characteristics. A profile describes how devices connect to each other to find or use services and the general expected behavior of a device. A service is a collection of data entities called characteristics. A service is used to define a function in a profile. A service may also define its relationship to other services. A service is assigned a universally unique identifier (UUID). A characteristic includes a value and descriptor that describes a characteristic value. It is an attribute type for a specific piece of information within a service. Like a service, each characteristic is designated with a UUID.
- The generic attribute profile (GATT) 710 defines a generic service framework using the attribute protocol (ATT)
layer 722. This framework defines the procedures and formats of services and their characteristics. It defines the procedures for service, characteristic, and descriptor discovery; for reading, writing, notifying, and indicating characteristics; and for configuring the broadcast of characteristics. In an embodiment, a GATT client is a device that wants data; it initiates commands and requests towards a GATT server. The GATT client can receive responses, indications, and notification data sent by the GATT server. In an embodiment, the GATT server is a device that has the data; it accepts incoming commands and requests from the GATT client and sends responses, indications, and notifications to the GATT client. The BT/BLE resources 665 may support both the GATT client and GATT server roles simultaneously. The ATT layer 722 can define a client/server architecture that allows a GATT server to expose a set of attributes and their associated values to a GATT client. - The security manager protocol (SMP) 712 can define the procedures and behavior to manage pairing (e.g., pass key and out of band bonding), authentication (e.g., key generation for device identity resolution), and encryption between devices. The logical link control and adaptation protocol (L2CAP) 714 may provide a connectionless data channel and channel multiplexing for the
ATT 722 and SMP 712 layers and for its own layer. L2CAP 714 may provide segmentation and reassembly of packets as well as flow control between two L2CAP 714 entities (e.g., for transferring large chunks of data). - Host controller interface (HCI) 716 can implement a command, event, and data interface to allow link layer (LL) 718 access from upper layers such as
GAP 708, L2CAP 714, and SMP 712. The LL 718 may include link layer circuitry that directly interfaces with the physical layer (PHY) 720 and manages the physical connections between devices. It supports LL 718 states including advertising, scanning, initiating, and connecting (e.g., master and slave). The LL 718 may implement link control procedures including encryption, connection update, channel update, and ping. - In embodiments, the
PHY 720 includes the analog communications circuitry used for modulating and demodulating analog signals and transforming them into digital symbols. For example, BLE can communicate over 40 channels from 2.4000 GHz to 2.4835 GHz (e.g., using the transceiver 668). In some embodiments, 37 of these channels may be used for connection data and the last three channels (37, 38, and 39) may be used as advertising channels to set up connections and send broadcast data. FIGS. 8-10 discuss techniques for conserving power in BT headsets with speech detection in various example use cases. -
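The channel plan mentioned above follows directly from the BLE specification: the 40 RF channels are spaced 2 MHz apart starting at 2402 MHz, and the three link-layer advertising channels (indices 37, 38, and 39) sit at 2402, 2426, and 2480 MHz. A small sketch of that arithmetic:

```python
def ble_rf_channel_frequency(rf_channel):
    """Center frequency (MHz) of a BLE RF channel: 40 channels,
    2 MHz spacing, starting at 2402 MHz."""
    if not 0 <= rf_channel <= 39:
        raise ValueError("BLE defines RF channels 0-39")
    return 2402 + 2 * rf_channel

# Advertising channels 37, 38, 39 map to RF channels 0, 12, and 39,
# deliberately spread across the band to dodge Wi-Fi interference.
ADVERTISING_FREQS_MHZ = (2402, 2426, 2480)
```

The highest channel, 2480 MHz, stays within the 2.4835 GHz band edge cited in the text.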
FIG. 8 is an interactive flow diagram 800 illustrating operations of the headset 412 and the mobile device 404, in accordance with an embodiment. In this example, the headset 412 is in the user's ears but there is no active audio stream or voice call being communicated. In previous solutions, headsets consume power (e.g., >100 µA) to maintain a sniff connection with the mobile device just to maintain the link, although there is no active session. In embodiments, a sniff connection is not maintained, and power to the transceiver 668 and/or memory 674 and the BT/BLE resources 665 is shut off when there is no active session. - At
block 802, the speech detector 662 receives the audio data corresponding to the wake-up phrase and query “OK helper, what is the current temperature?” If the speech detector 662 recognizes the wake-up phrase “OK helper,” it signals the power manager 675, which in turn connects power to the BT communication resources to power up the transceiver 668 and/or memory 674 at block 804 and to start processing the BT/BLE resources 665 at block 806. In an embodiment, touch input, rather than voice input, may indicate to the power manager 675 to power the BT communication resources. At 808, the BT/BLE resources 665 establish a BLE connection so that the remainder of the audio data corresponding to the query, “what is the current temperature?”, can be transmitted in packets to the mobile device 404 over the BLE connection at 810. At 812, the mobile device 404 is shown to transmit a response to the query, “The current temperature is 76 degrees Fahrenheit.” After communications related to the initial query are deemed completed, the power manager 675 may stop processing the BT/BLE resources 665 at block 814 and power off the transceiver 668 and/or memory 674 at block 816 to reduce power consumption by the BT communication resources. -
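The FIG. 8 sequence can be summarized as a linear script: recognize the wake-phrase, power up, connect, exchange packets, then power everything back down. The sketch below uses illustrative step names (not reference numerals from the figure) to make the ordering explicit:

```python
def handle_wake_query(audio, wake_phrase="OK helper"):
    """Return the ordered power/communication steps triggered by a
    wake-phrase-prefixed query; no steps (and no power draw) otherwise."""
    if not audio.startswith(wake_phrase):
        return []                       # stay in the low-power state
    return [
        "power_up_transceiver_and_memory",   # blocks 804
        "start_ble_resources",               # block 806
        "establish_ble_connection",          # 808
        "transmit_query_packets",            # 810
        "receive_response",                  # 812
        "stop_ble_resources",                # block 814
        "power_down_transceiver_and_memory", # block 816
    ]
```

Note that every power-up step is paired with a matching power-down step once the exchange completes, which is what distinguishes this flow from the always-connected sniff approach.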
FIG. 9 is an interactive flow diagram 900 illustrating operations of the headset 412 and the mobile device 404, in accordance with an embodiment. Again, the headset 412 is in the user's ears but there is no active audio stream or voice call being communicated. In embodiments, a sniff connection is not maintained, and power to the transceiver 668 and/or memory 674 and the BT/BLE resources 665 is shut off when there is no active session. - At
block 902, the mobile device 404 receives an incoming request to connect a call and provides a notification to the user (e.g., ringing, vibration, LED). Having perceived the notification of the request to connect, the user may provide an indication to connect the call. At block 904, the speech detector 662 receives the audio data corresponding to the wake-up phrase and command “OK helper, connect the call.” If the speech detector 662 recognizes the wake-up phrase “OK helper,” it signals the power manager 675, which in turn connects power to the BT communication resources to power up the transceiver 668 and/or memory 674 at block 906 and to start processing the BT/BLE resources 665 at block 908. In an embodiment, touch input, rather than voice input, may indicate to the power manager 675 to power the BT communication resources. At 910, the BT/BLE resources 665 establish a classic BT or BLE connection so that the remainder of the audio data corresponding to the command, “connect the call,” can be transmitted in packets at 912 to the mobile device 404 for interpretation. Alternatively, the speech detector 662 may interpret both the wake-up phrase and the command, in which case the headset itself will send a direct command at 912 to connect the voice call. If the classic BT connection has not already been established, it is then established prior to 914, where the voice call is carried out over the classic BT connection. -
FIG. 10 is an interactive flow diagram 1000 illustrating operations of the headset 412 and the mobile device 404, in accordance with an embodiment. In this example, the headset 412 is in the user's ears and, at 1002, there is an active audio stream being transmitted over a classic BT A2DP connection. In embodiments, BLE resources of the BT/BLE resources 665 are shut off during streaming. Wishing to turn down the sound, the user may provide an indication (e.g., a touch or voice command) to do so. At block 1004, the speech detector 662 receives the audio data corresponding to the wake-up phrase and command “OK helper, volume down.” If the speech detector 662 recognizes the wake-up phrase “OK helper,” it signals the power manager 675, which in turn connects power to the BT communication resources to start processing BLE resources of the BT/BLE resources 665 at block 1006. At 1008, the BT/BLE resources 665 establish a BLE connection so that the remainder of the audio data corresponding to the command, “volume down,” can be transmitted in packets at 1010 to the mobile device 404 for interpretation and implementation. Alternatively, the speech detector 662 may interpret both the wake-up phrase and the command, in which case the headset 412 itself will send a direct command at 1010 to turn down the volume. After communications related to the command are deemed completed, the power manager 675 may stop processing the BLE resources at block 1012 to reduce power consumption by the BT communication resources. In one embodiment, rather than transmitting the command, the headset 412 may control volume locally on the headset 412, avoiding the power consumption needed to establish the BLE connection. -
FIG. 11 is a flow diagram illustrating a method 1100 of powering a communication resource, in accordance with embodiments. The operations shown in FIG. 11 can be performed by processing logic comprising hardware (circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computing system or a dedicated machine), firmware (embedded software), or any combination thereof. In various embodiments, the operations may be performed as shown and described with respect to FIGS. 1-10. - At
block 1102, the method 1100 includes operating a wireless device (e.g., the earpiece 600) in a first mode with power to operate a communication resource (e.g., the BT/BLE resources 665) of the wireless device turned off. At block 1104, the method 1100 includes determining (e.g., by the speech detector 662) whether a voice attribute is detected in the audio data. For example, the audio data may be provided by the audio signal processor 642 in response to audio input from the microphone 640. The voice attribute may be speech-like sounds detected by a SOD of the speech detector 662 or a phrase (e.g., a wake-phrase portion of the audio data) detected by a PD of the speech detector 662. - At
block 1106, the method 1100 includes, responsive to detection of the voice attribute, transitioning (e.g., by the power manager 675) to operating the wireless device in a second mode. In embodiments, the transitioning to operating the wireless device in the second mode includes the power manager 675 powering up circuitry (e.g., the processor 672, the memory 674) configured to operate the communication resource (e.g., the BT/BLE resources 665 and/or the transceiver 668). In embodiments, the operating of the wireless device in the first mode with the power turned off consumes less power than the operating of the wireless device in the second mode, at block 1108, with the power turned on. The communication resource may include code configured to implement a portion of at least one of a controller 706 and a host 704 of a BT architecture 700, and the transitioning to operating the wireless device in the second mode comprises starting a processing of the code by circuitry including the processor 672 and the memory 674. - At
block 1110, the method includes using the communication resource to establish a network connection and communicate packets via the network connection, the communication of the packets being based on the audio data. For example, the BT/BLE resources 665 may establish a BLE connection and, with the transceiver 668, transmit packets including the second portion of the audio data (e.g., corresponding to a command or query) via the BLE connection for pattern recognition processing. The communicating of the packets may also include receiving packets including a response to the at least one of the command and the query. In embodiments, the BT/BLE resources 665 may establish the BLE connection with the mobile device 404 while maintaining a classic BT connection with the mobile device 404. Alternatively or additionally, the BT/BLE resources 665 may establish a classic BT connection with the mobile device 404 while maintaining a BLE connection with the mobile device 404. In one embodiment, the BT/BLE resources 665, along with the transceiver 668, communicate the packets via the BT connection as a Generic Attribute Profile (GATT) server to a GATT client. - Thus, embodiments described herein can reduce power consumed by IoT devices by remaining disconnected from a network until sensor data sensed by the IoT device indicates that a network connection should be established to wirelessly communicate in connection with the sensor data in furtherance of an IoT application. Compared to prior techniques that maintain a network connection independent of sensor data indications, embodiments can enable “always on” or “always listening” functionality by an IoT device with lower power consumption.
-
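The two-mode structure of method 1100 can be condensed into a minimal state machine. The class and mode names below are illustrative (the specification simply calls them the first and second modes); the invariant it demonstrates is that communication power is off until a voice attribute is detected:

```python
class WirelessDevice:
    """Sketch of method 1100: start in a low-power first mode with the
    communication resource off; transition to the second mode only on
    detection of a voice attribute (blocks 1102-1106)."""

    def __init__(self):
        self.mode = "first"            # communication resource powered off

    def process_audio(self, voice_attribute_detected):
        """Return the action to take for one evaluation of the audio data."""
        if self.mode == "first" and voice_attribute_detected:
            self.mode = "second"       # power up communication circuitry
            return "establish_connection_and_communicate_packets"  # block 1110
        return None                    # remain in the low-power mode
```

Because the first mode draws less power than the second, the device's average consumption is dominated by how rarely `process_audio` sees a detection, which is the core power-saving claim.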
FIG. 12 is a block diagram illustrating an electronic device 1200, in accordance with various embodiments. The electronic device 1200 may fully or partially include and/or operate the example embodiments of the sensor device 102, the headset 412, the network access device 104, the mobile device 404, the VCH 406, the device under control, the bulb 403, the IoT application(s) 112, the cloud ASR 416, or the access point 408 of FIGS. 1 and 4. The electronic device 1200 may be in the form of a computer system within which sets of instructions may be executed to cause the electronic device 1200 to perform any one or more of the methodologies discussed herein. The electronic device 1200 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the electronic device 1200 may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a P2P (or distributed) network environment. - The
electronic device 1200 may be an Internet of Things (IoT) device, a server computer, a client computer, a personal computer (PC), a tablet, a set-top box (STB), a VCH, a Personal Digital Assistant (PDA), a mobile telephone, a web appliance, a network router, switch, or bridge, a television, speakers, a remote control, a monitor, a handheld multi-media device, a handheld video player, a handheld gaming device, or a control panel, or any other machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single electronic device 1200 is illustrated, the term “device” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. - The
electronic device 1200 is shown to include processor(s) 1202. In embodiments, the electronic device 1200 and/or processor(s) 1202 may include processing device(s) 1205 such as a System on a Chip processing device developed by Cypress Semiconductor Corporation, San Jose, Calif. Alternatively, the electronic device 1200 may include one or more other processing devices known by those of ordinary skill in the art, such as a microprocessor or central processing unit, an application processor, a host controller, a controller, a special-purpose processor, a digital signal processor (DSP), an ASIC, an FPGA, or the like. Bus system(s) 1201 may include a communication block (not shown) to communicate with an internal or external component, such as an embedded controller or an application processor, via communication interface(s) 1209 and/or the bus system 1201. - Components of the
electronic device 1200 may reside on a common carrier substrate such as, for example, an IC die substrate, a multi-chip module substrate, or the like. Alternatively, components of the electronic device 1200 may be one or more separate integrated circuits and/or discrete components. - The
memory system 1204 may include volatile memory and/or non-volatile memory, which may communicate with one another via the bus system 1201. The memory system 1204 may include, for example, RAM and program flash. The RAM may be SRAM, and the program flash may be non-volatile storage, which may be used to store firmware (e.g., control algorithms executable by processor(s) 1202 to implement operations described herein). The memory system 1204 may include instructions 1203 that when executed perform the methods described herein. Portions of the memory system 1204 may be dynamically allocated to provide caching, buffering, and/or other memory-based functionalities. - The
memory system 1204 may include a drive unit providing a machine-readable medium on which may be stored one or more sets of instructions 1203 (e.g., software) embodying any one or more of the methodologies or functions described herein. The instructions 1203 may also reside, completely or at least partially, within the other memory devices of the memory system 1204 and/or within the processor(s) 1202 during execution thereof by the electronic device 1200, which in some embodiments constitutes machine-readable media. The instructions 1203 may further be transmitted or received over a network via the communication interface(s) 1209. - While a machine-readable medium is in some embodiments a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that causes the machine to perform any one or more of the example operations described herein. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories and optical and magnetic media.
- The
electronic device 1200 is further shown to include display interface(s) 1206 (e.g., a liquid crystal display (LCD), touchscreen, a cathode ray tube (CRT), and software and hardware support for display technologies) and audio interface(s) 1208 (e.g., microphones, speakers, and software and hardware support for microphone input/output and speaker input/output). The electronic device 1200 is also shown to include user interface(s) 1210 (e.g., keyboard, buttons, switches, touchpad, touchscreens, and software and hardware support for user interfaces) and sensing system(s) 1207. - The above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (or one or more aspects thereof) may be used in combination with each other. Other embodiments will be apparent to those of skill in the art upon reviewing the above description. In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document supersedes the usage in any incorporated references.
- Although the claimed subject matter has been described with reference to specific embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of what is claimed. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. The scope of the claims should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; a system, device, article, or process that includes elements in addition to those listed after such a term in a claim are still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels and are not intended to impose numerical requirements on their objects.
- The Abstract of the Disclosure is provided to comply with 37 C.F.R. § 1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
Claims (21)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/035,289 US20210083715A1 (en) | 2018-02-20 | 2020-09-28 | System and methods for low power consumption by a wireless sensor device |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862632888P | 2018-02-20 | 2018-02-20 | |
US16/221,185 US10367540B1 (en) | 2018-02-20 | 2018-12-14 | System and methods for low power consumption by a wireless sensor device |
US16/448,247 US10587302B2 (en) | 2018-02-20 | 2019-06-21 | System and methods for low power consumption by a wireless sensor device |
US16/744,358 US10797744B2 (en) | 2018-02-20 | 2020-01-16 | System and methods for low power consumption by a wireless sensor device |
US17/035,289 US20210083715A1 (en) | 2018-02-20 | 2020-09-28 | System and methods for low power consumption by a wireless sensor device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/744,358 Continuation US10797744B2 (en) | 2018-02-20 | 2020-01-16 | System and methods for low power consumption by a wireless sensor device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20210083715A1 true US20210083715A1 (en) | 2021-03-18 |
Family
ID=67394107
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/221,185 Active US10367540B1 (en) | 2018-02-20 | 2018-12-14 | System and methods for low power consumption by a wireless sensor device |
US16/448,247 Active US10587302B2 (en) | 2018-02-20 | 2019-06-21 | System and methods for low power consumption by a wireless sensor device |
US16/744,358 Active US10797744B2 (en) | 2018-02-20 | 2020-01-16 | System and methods for low power consumption by a wireless sensor device |
US17/035,289 Pending US20210083715A1 (en) | 2018-02-20 | 2020-09-28 | System and methods for low power consumption by a wireless sensor device |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/221,185 Active US10367540B1 (en) | 2018-02-20 | 2018-12-14 | System and methods for low power consumption by a wireless sensor device |
US16/448,247 Active US10587302B2 (en) | 2018-02-20 | 2019-06-21 | System and methods for low power consumption by a wireless sensor device |
US16/744,358 Active US10797744B2 (en) | 2018-02-20 | 2020-01-16 | System and methods for low power consumption by a wireless sensor device |
Country Status (4)
Country | Link |
---|---|
US (4) | US10367540B1 (en) |
CN (1) | CN111771337B (en) |
DE (1) | DE112019000884T5 (en) |
WO (1) | WO2019164716A1 (en) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018169379A1 (en) * | 2017-03-17 | 2018-09-20 | LG Electronics Inc. | Method and apparatus for searching for device by using bluetooth low energy (le) technology |
US10367540B1 (en) | 2018-02-20 | 2019-07-30 | Cypress Semiconductor Corporation | System and methods for low power consumption by a wireless sensor device |
CN108428452B (en) * | 2018-03-14 | 2019-12-13 | Baidu Online Network Technology (Beijing) Co., Ltd. | Terminal support and far-field voice interaction system |
US11172293B2 (en) * | 2018-07-11 | 2021-11-09 | Ambiq Micro, Inc. | Power efficient context-based audio processing |
CN109246671B (en) * | 2018-09-30 | 2020-12-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Data transmission method, device and system |
TWI712878B (en) * | 2019-04-17 | 2020-12-11 | Merry Electronics Co., Ltd. | Wearable device and power saving method for wearable device |
US11310594B2 (en) * | 2019-09-18 | 2022-04-19 | Bose Corporation | Portable smart speaker power control |
CN110675873B (en) | 2019-09-29 | 2023-02-07 | Baidu Online Network Technology (Beijing) Co., Ltd. | Data processing method, device and equipment of intelligent equipment and storage medium |
CN111479247A (en) * | 2020-03-16 | 2020-07-31 | Gree Electric Appliances, Inc. of Zhuhai | Network distribution method and device, electronic equipment and computer readable medium |
CN111526241A (en) * | 2020-03-27 | 2020-08-11 | Shenzhen Kuang-Chi Metamaterial Technology Co., Ltd. | Call progress processing method, head-mounted device, storage medium and electronic device |
US11778361B1 (en) | 2020-06-24 | 2023-10-03 | Meta Platforms Technologies, Llc | Headset activation validation based on audio data |
US11884535B2 (en) | 2020-07-11 | 2024-01-30 | xMEMS Labs, Inc. | Device, package structure and manufacturing method of device |
CN112216279A (en) * | 2020-09-29 | 2021-01-12 | Xingluo Intelligent Technology Co., Ltd. | Voice transmission method, intelligent terminal and computer readable storage medium |
ES2852701A1 (en) * | 2021-02-01 | 2021-09-14 | Elparking Internet S L U | TELEMATIC ACCESS CONTROL DEVICE (Machine-translation by Google Translate, not legally binding) |
US11650655B2 (en) * | 2021-03-30 | 2023-05-16 | Rosemount Inc. | Power management for loop-powered field devices with low power wireless communication |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040117727A1 (en) * | 2002-11-12 | 2004-06-17 | Shinya Wada | Method and apparatus for processing files utilizing a concept of weight so as to visually represent the files in terms of whether the weight thereof is heavy or light |
US20090171994A1 (en) * | 2007-12-31 | 2009-07-02 | Eric Sprangle | Device, system, and method for improving processing efficiency by collectively applying operations |
US20160165572A1 (en) * | 2013-07-29 | 2016-06-09 | Fujitsu Limited | Method of transmission scheme switch, ue and base station |
US9437188B1 (en) * | 2014-03-28 | 2016-09-06 | Knowles Electronics, Llc | Buffered reprocessing for multi-microphone automatic speech recognition assist |
US20160357508A1 (en) * | 2015-06-05 | 2016-12-08 | Apple Inc. | Mechanism for retrieval of previously captured audio |
US20180060031A1 (en) * | 2016-08-26 | 2018-03-01 | Bragi GmbH | Voice assistant for wireless earpieces |
US20180098277A1 (en) * | 2016-09-30 | 2018-04-05 | Intel Corporation | Reduced power consuming mobile devices method and apparatus |
US20180108343A1 (en) * | 2016-10-14 | 2018-04-19 | Soundhound, Inc. | Virtual assistant configured by selection of wake-up phrase |
US10143027B1 (en) * | 2017-09-05 | 2018-11-27 | Amazon Technologies, Inc. | Device selection for routing of communications |
US20180353086A1 (en) * | 2017-06-07 | 2018-12-13 | Bragi GmbH | Use of body-worn radar for biometric measurements, contextual awareness and identification |
US20190028500A1 (en) * | 2017-07-24 | 2019-01-24 | Korea University Research And Business Foundation | Ecu identifying apparatus and controlling method thereof |
US20190051307A1 (en) * | 2017-08-14 | 2019-02-14 | Lenovo (Singapore) Pte. Ltd. | Digital assistant activation based on wake word association |
US20190080685A1 (en) * | 2017-09-08 | 2019-03-14 | Amazon Technologies, Inc. | Systems and methods for enhancing user experience by communicating transient errors |
US20190115018A1 (en) * | 2017-10-18 | 2019-04-18 | Motorola Mobility Llc | Detecting audio trigger phrases for a voice recognition session |
US10674552B1 (en) * | 2017-09-05 | 2020-06-02 | Amazon Technologies, Inc. | Routing of communications to a device |
US11138901B1 (en) * | 2017-06-28 | 2021-10-05 | Amazon Technologies, Inc. | Item recognition and analysis |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6292674B1 (en) * | 1998-08-05 | 2001-09-18 | Ericsson, Inc. | One-handed control for wireless telephone |
US7256770B2 (en) * | 1998-09-14 | 2007-08-14 | Microsoft Corporation | Method for displaying information responsive to sensing a physical presence proximate to a computer input device |
US20040198464A1 (en) * | 2003-03-04 | 2004-10-07 | Jim Panian | Wireless communication systems for vehicle-based private and conference calling and methods of operating same |
US7242785B2 (en) * | 2003-08-04 | 2007-07-10 | Creative Technology Ltd | Portable powered speaker |
KR101120020B1 (en) * | 2007-02-26 | 2012-03-28 | Samsung Electronics Co., Ltd. | Method and apparatus for controlling a portable audio device |
US8391503B2 (en) * | 2008-08-22 | 2013-03-05 | Plantronics, Inc. | Wireless headset noise exposure dosimeter |
US8892163B2 (en) * | 2012-03-06 | 2014-11-18 | OmniVision Technologies, Inc. | Image sensor having a pulsed mode of operation |
JP6393021B2 (en) * | 2012-08-28 | 2018-09-19 | Kyocera Corporation | Electronic device, control method, and control program |
US8731912B1 (en) * | 2013-01-16 | 2014-05-20 | Google Inc. | Delaying audio notifications |
US20140270287A1 (en) * | 2013-03-15 | 2014-09-18 | Qualcomm Incorporated | Bluetooth hearing aids enabled during voice activity on a mobile phone |
US9294903B2 (en) * | 2013-04-04 | 2016-03-22 | Nokia Technologies Oy | Method and apparatus for facilitating handover utilizing a predefined attribute protocol |
EP3008923B1 (en) * | 2013-06-14 | 2018-08-08 | Widex A/S | A method of operating a binaural hearing aid system and a binaural hearing aid system |
DK2835985T3 (en) * | 2013-08-08 | 2017-08-07 | Oticon A/S | Hearing aid and feedback reduction method |
US9443508B2 (en) * | 2013-09-11 | 2016-09-13 | Texas Instruments Incorporated | User programmable voice command recognition based on sparse features |
US8915445B1 (en) * | 2013-09-17 | 2014-12-23 | Amazon Technologies, Inc. | Activating device using contacting motion sensor |
US20150289124A1 (en) * | 2014-04-08 | 2015-10-08 | Nokia Corporation | Method, apparatus, and computer program product for seamless switching of communication connection |
US20150382098A1 (en) * | 2014-06-25 | 2015-12-31 | James Aita | Proximity ear buds for earphone listening |
WO2016007528A1 (en) * | 2014-07-10 | 2016-01-14 | Analog Devices Global | Low-complexity voice activity detection |
US9544718B2 (en) * | 2014-09-11 | 2017-01-10 | Lg Electronics Inc. | Method and apparatus for transmitting and receiving audio stream in wireless communication system |
KR102357965B1 (en) * | 2015-01-12 | 2022-02-03 | Samsung Electronics Co., Ltd. | Method of recognizing object and apparatus thereof |
DE102015201945A1 (en) * | 2015-02-04 | 2016-08-04 | Sivantos Pte. Ltd. | Hearing device for binaural supply and method of operation |
US20160284363A1 (en) * | 2015-03-24 | 2016-09-29 | Intel Corporation | Voice activity detection technologies, systems and methods employing the same |
US9998815B2 (en) * | 2015-10-08 | 2018-06-12 | Mediatek Inc. | Portable device and method for entering power-saving mode |
KR102582600B1 (en) * | 2015-12-07 | 2023-09-25 | Samsung Electronics Co., Ltd. | Electronic device and operating method thereof |
US10104486B2 (en) * | 2016-01-25 | 2018-10-16 | Bragi GmbH | In-ear sensor calibration and detecting system and method |
US20170213552A1 (en) * | 2016-01-26 | 2017-07-27 | Motorola Mobility Llc | Detection of audio public announcements by a mobile device |
US10349259B2 (en) * | 2016-09-23 | 2019-07-09 | Apple Inc. | Broadcasting a device state in a wireless communication network |
GB2555659B (en) * | 2016-11-07 | 2020-01-15 | Cirrus Logic Int Semiconductor Ltd | Package for MEMS device and process |
US10706868B2 (en) * | 2017-09-06 | 2020-07-07 | Realwear, Inc. | Multi-mode noise cancellation for voice detection |
US10367540B1 (en) | 2018-02-20 | 2019-07-30 | Cypress Semiconductor Corporation | System and methods for low power consumption by a wireless sensor device |
- 2018
  - 2018-12-14 US US16/221,185 patent/US10367540B1/en active Active
- 2019
  - 2019-02-13 DE DE112019000884.6T patent/DE112019000884T5/en active Pending
  - 2019-02-13 CN CN201980014456.4A patent/CN111771337B/en active Active
  - 2019-02-13 WO PCT/US2019/017767 patent/WO2019164716A1/en active Application Filing
  - 2019-06-21 US US16/448,247 patent/US10587302B2/en active Active
- 2020
  - 2020-01-16 US US16/744,358 patent/US10797744B2/en active Active
  - 2020-09-28 US US17/035,289 patent/US20210083715A1/en active Pending
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040117727A1 (en) * | 2002-11-12 | 2004-06-17 | Shinya Wada | Method and apparatus for processing files utilizing a concept of weight so as to visually represent the files in terms of whether the weight thereof is heavy or light |
US20090171994A1 (en) * | 2007-12-31 | 2009-07-02 | Eric Sprangle | Device, system, and method for improving processing efficiency by collectively applying operations |
US20160165572A1 (en) * | 2013-07-29 | 2016-06-09 | Fujitsu Limited | Method of transmission scheme switch, ue and base station |
US9437188B1 (en) * | 2014-03-28 | 2016-09-06 | Knowles Electronics, Llc | Buffered reprocessing for multi-microphone automatic speech recognition assist |
US20160357508A1 (en) * | 2015-06-05 | 2016-12-08 | Apple Inc. | Mechanism for retrieval of previously captured audio |
US20180060031A1 (en) * | 2016-08-26 | 2018-03-01 | Bragi GmbH | Voice assistant for wireless earpieces |
US20180098277A1 (en) * | 2016-09-30 | 2018-04-05 | Intel Corporation | Reduced power consuming mobile devices method and apparatus |
US20180108343A1 (en) * | 2016-10-14 | 2018-04-19 | Soundhound, Inc. | Virtual assistant configured by selection of wake-up phrase |
US20180353086A1 (en) * | 2017-06-07 | 2018-12-13 | Bragi GmbH | Use of body-worn radar for biometric measurements, contextual awareness and identification |
US11138901B1 (en) * | 2017-06-28 | 2021-10-05 | Amazon Technologies, Inc. | Item recognition and analysis |
US20190028500A1 (en) * | 2017-07-24 | 2019-01-24 | Korea University Research And Business Foundation | Ecu identifying apparatus and controlling method thereof |
US20190051307A1 (en) * | 2017-08-14 | 2019-02-14 | Lenovo (Singapore) Pte. Ltd. | Digital assistant activation based on wake word association |
US10143027B1 (en) * | 2017-09-05 | 2018-11-27 | Amazon Technologies, Inc. | Device selection for routing of communications |
US10674552B1 (en) * | 2017-09-05 | 2020-06-02 | Amazon Technologies, Inc. | Routing of communications to a device |
US20190080685A1 (en) * | 2017-09-08 | 2019-03-14 | Amazon Technologies, Inc. | Systems and methods for enhancing user experience by communicating transient errors |
US20190115018A1 (en) * | 2017-10-18 | 2019-04-18 | Motorola Mobility Llc | Detecting audio trigger phrases for a voice recognition session |
Non-Patent Citations (1)
Title |
---|
Kumar, B., "The Role of Sleep Mode in Embedded System," May 24, 2018 * |
Also Published As
Publication number | Publication date |
---|---|
CN111771337A (en) | 2020-10-13 |
US20190260413A1 (en) | 2019-08-22 |
US10367540B1 (en) | 2019-07-30 |
US10797744B2 (en) | 2020-10-06 |
US20200007183A1 (en) | 2020-01-02 |
DE112019000884T5 (en) | 2020-10-29 |
US10587302B2 (en) | 2020-03-10 |
WO2019164716A1 (en) | 2019-08-29 |
CN111771337B (en) | 2022-06-10 |
US20200212953A1 (en) | 2020-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10797744B2 (en) | System and methods for low power consumption by a wireless sensor device | |
WO2020253715A1 (en) | Voice data processing method, device and system | |
CN105869655B (en) | Audio devices and speech detection method | |
US11264049B2 (en) | Systems and methods for capturing noise for pattern recognition processing | |
WO2018000134A1 (en) | Bluetooth connection method and terminal | |
WO2019015435A1 (en) | Speech recognition method and apparatus, and storage medium | |
KR102097987B1 (en) | Apparatus and method for processing data of bluetooth in a portable terminal | |
WO2021052413A1 (en) | Energy-saving signal monitoring time determination and configuration method, and related device | |
TW201618490A (en) | Low power acoustic apparatus and method of operation | |
CN112954819B (en) | Equipment networking method, electronic equipment and system | |
US11310594B2 (en) | Portable smart speaker power control | |
WO2020124371A1 (en) | Method and device for establishing data channels | |
EP2569992B1 (en) | Wireless personal area network (pan) coordinator implementing power savings by transitioning between active and sleep states | |
CN105430762A (en) | Equipment connection control method and terminal equipment | |
WO2020038157A1 (en) | Nan-based intelligent management method and related product | |
WO2019242538A1 (en) | Information transmission method, network device, and terminal | |
US20200257353A1 (en) | Electronic Device, Method for Reducing Power Consumption, and Apparatus | |
WO2021253235A1 (en) | Voice activity detection method and apparatus | |
WO2023109187A1 (en) | Bluetooth random address generation method and related electronic device | |
WO2023236670A1 (en) | Data transmission management method, electronic device and storage medium | |
CN104021804A (en) | Wireless recording method of wireless recording device | |
CN115022310A (en) | Method and system for acquiring online time of equipment and electronic equipment | |
CN114245443A (en) | Wake-up alignment method, system and related device | |
CN116709476A (en) | Method and device for waking up and keeping alive device, electronic device and storage medium | |
CN116744165A (en) | A multi-functional expansion PCBA mainboard and earphone for earphone |
Legal Events
Code | Title | Description |
---|---|---|
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
AS | Assignment | Owner: CYPRESS SEMICONDUCTOR CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: MEDAPALLI, KAMESH; BEDROSIAN, BRIAN; REEL/FRAME: 058286/0633. Effective date: 20181213 |
STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
STPP | Information on status: patent application and granting procedure in general | FINAL REJECTION MAILED |
STPP | Information on status: patent application and granting procedure in general | ADVISORY ACTION MAILED |
STPP | Information on status: patent application and granting procedure in general | DOCKETED NEW CASE - READY FOR EXAMINATION |
STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |