US11928386B2 - Audio peripheral device selections - Google Patents

Audio peripheral device selections

Info

Publication number
US11928386B2
US11928386B2
Authority
US
United States
Prior art keywords
audio peripheral
peripheral device
processor
computing device
default
Prior art date
Legal status
Active, expires
Application number
US17/297,116
Other versions
US20220342629A1 (en)
Inventor
Srinath Balaraman
Ling Wei Chung
Pradosh Tulsidas Verlekar
Charles J. Stancil
Current Assignee
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BALARAMAN, Srinath, CHUNG, Ling Wei, STANCIL, CHARLES J., VERLEKAR, Pradosh Tulsidas
Publication of US20220342629A1 publication Critical patent/US20220342629A1/en
Application granted granted Critical
Publication of US11928386B2 publication Critical patent/US11928386B2/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/4401 — Bootstrapping
    • G06F 9/4411 — Configuring for operating with peripheral devices; Loading of device drivers
    • G06F 9/4413 — Plug-and-play [PnP]
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 — Sound input; Sound output
    • G06F 3/165 — Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F 9/445 — Program loading or initiating
    • G06F 9/44505 — Configuring for program initiating, e.g. using registry, configuration files
    • G06F 9/4451 — User profiles; Roaming
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 — Network services
    • H04L 67/52 — Network services specially adapted for the location of the user terminal

Definitions

  • The memory 110 includes location instructions 112, which, when executed, cause the processor 108 to determine the location of the computing device 100 based on the network connection.
  • For example, the processor 108 may determine the location based on the network identifier, the access point identifier, or another suitable network identifying characteristic determined by the communications interface 106.
  • The memory 110 further includes default instructions 114, which, when executed, cause the processor 108 to set an audio peripheral device 104 from the plurality of audio peripheral devices 104 as a default audio peripheral device based on the location of the computing device 100.
  • For example, the processor 108 may select the headset 104-2 as the default audio peripheral device when the computing device 100 is at one location, and the speakers 104-1 when the computing device 100 is at a different location.
  • The memory 110 further includes audio signal instructions 116, which, when executed, cause the processor 108 to communicate an audio signal through the default audio peripheral device.
  • The memory 110 may further include repositories storing data for use during the execution of the location instructions 112, the default instructions 114, and the audio signal instructions 116. For example, the memory 110 may store location data and historical data.
  • The location data may include a location identifier, network characteristics identified by the communications interface 106 (e.g. a type of network connection, a network identifier, an access point identifier, and the like), and a count of network connections at that location. The location data may be stored integrally with historical data including an association between a given location and the selected default audio peripheral device.
  • The historical data may further include other parameters of use of the default audio peripheral device, such as a length of use of the default audio peripheral device and the type of audio content. The historical data may also track a change in the audio peripheral device during a session. For example, a session may be initiated with audio signals communicated through the default audio peripheral device and change partway through the session to have audio signals communicated through a further audio peripheral device, as described further herein.
  • The count of network connections at a given location may represent the number of selections of the default audio peripheral device at that location.
  • The memory 110 may further store default audio peripheral device data: an association between a location, other parameters for selecting a default audio peripheral device, and the predicted default audio peripheral device for that combination of location and parameters.
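The stored location data and historical data described above could be modeled, as an illustrative sketch only, with two simple records. The field names and example values here are assumptions for illustration, not taken from the patent:

```python
# Hypothetical shape of the location data and per-session historical data;
# all field names and values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class LocationRecord:
    location_id: str
    connection_type: str       # e.g. "wired" or "wireless"
    network_id: str
    access_point_id: str
    connection_count: int = 0  # selections of a default device at this location

@dataclass
class SessionRecord:
    location_id: str
    default_device: str
    length_of_use_min: float
    device_changes: list = field(default_factory=list)  # mid-session switches

rec = LocationRecord("home", "wireless", "home-net-312", "ap-314")
rec.connection_count += 1  # one more default-device selection recorded here
```

A session that switched devices partway through would append the further device to `device_changes`, preserving the change for later prediction.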
  • FIG. 2 depicts a flowchart of an example method 200 of setting a default audio peripheral device for communicating audio signals. The method 200 is described in conjunction with its performance by the computing device 100, and in particular the processor 108. In other examples, the method 200 may be performed by other suitable devices or systems.
  • The method 200 is initiated at block 202, for example upon initialization of the computing device 100 or in response to a request to communicate an audio signal, such as when a user of the computing device 100 initiates a call or plays audio or video media.
  • At block 202, the processor 108 determines a location of the computing device 100 based on the network connection established by the communications interface 106. Specifically, the processor 108 may determine the location based on characteristics of the network, such as the type of connection (e.g. wired or wireless), a network identifier, and an access point identifier (e.g. an IP address or the like).
  • For example, in FIG. 3A, the computing device 100 is located in a workplace environment 300, and the communications interface 106 may establish a network connection via a wired link to a workplace network 302. The processor 108 may obtain, from the communications interface 106, the type of connection (i.e. wired) and a network identifier of the workplace network 302 to determine that the computing device 100 is located at the workplace environment 300.
  • In FIG. 3B, the computing device 100 is located in a home environment 310, and the communications interface 106 may establish a network connection to a home network 312 via an access point 314 (e.g. a router or the like). The processor 108 may obtain, from the communications interface 106, the type of connection (i.e. wireless), a network identifier of the home network 312, and an access point identifier of the access point 314 to determine that the computing device 100 is located at the home environment 310.
  • In FIG. 3C, the computing device 100 is located in a public environment 320 (e.g. a coffee shop or other public location), and the communications interface 106 may establish a network connection to a public network 322 via an access point 324 (e.g. a public router or the like). The processor 108 may obtain, from the communications interface 106, the type of connection (i.e. wireless), a network identifier of the public network 322, and an access point identifier of the access point 324 to determine that the computing device 100 is located at the public environment 320.
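The location determination across the three environments can be sketched as a simple mapping from network identifying characteristics to a location label. The identifier strings below are assumptions made up for illustration:

```python
# Illustrative sketch: map network identifying characteristics to a location.
# The network/access-point identifier values are assumptions.

def determine_location(conn_type, network_id, access_point_id=None):
    """Return a location label for the given network characteristics."""
    if conn_type == "wired" and network_id == "workplace-net-302":
        return "workplace"                       # wired link, FIG. 3A
    if conn_type == "wireless" and access_point_id == "ap-314":
        return "home"                            # home access point, FIG. 3B
    if conn_type == "wireless" and access_point_id == "ap-324":
        return "public"                          # public access point, FIG. 3C
    return "unknown"                             # no stored association
```

An unrecognized network yields "unknown", which is where the historical-data fallback described later would apply.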
  • At block 202, the processor 108 may also obtain further parameters for setting the default audio peripheral device. For example, the processor 108 may obtain calendar data, such as a date and time, as determined by native clock and calendar applications.
  • The processor 108 may also obtain schedule data associated with a user account. For example, a user of the computing device 100 may have an email account associated with the computing device 100, and the email account may include scheduled events (e.g. meetings, conference calls, or the like), including an event date and time as well as event location data (e.g. an address, a meeting room, a conference call line, or the like). The schedule data obtained by the processor 108 may include the event location data as well as the event date and time.
  • At block 204, the processor 108 may determine whether a threshold number of selections at the determined location has occurred. For example, the processor 108 may utilize the historical data from selections of the default audio peripheral device at a given location to generate a predicted default audio peripheral device. Accordingly, the threshold may be defined as a minimum number of selections at the given location for generating a prediction based on the historical data at that location. In another example, the threshold number of selections may be set across all locations; that is, the processor 108 may evaluate the selections of the default audio peripheral device at all locations to generate a predicted default audio peripheral device at the given location.
  • If the threshold number of selections has not occurred, the processor 108 proceeds to block 206. At block 206, the processor 108 receives input from a user of the computing device 100 as to the default audio peripheral device. For example, the processor 108 may present, at a display of the computing device 100, a user interface for selecting a default audio peripheral device from the audio peripheral devices 104 connected to an interface 102 of the computing device 100. The processor 108 then proceeds to block 210.
  • If the threshold number of selections has occurred, the processor 108 proceeds to block 208.
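The threshold determination described above can be sketched as a count over the historical records, either per location or across all locations. The threshold value and the record shape are assumptions for illustration:

```python
# Illustrative sketch of the threshold check before generating a prediction.
# THRESHOLD and the history record shape are assumptions.

THRESHOLD = 5  # assumed minimum number of prior selections

def can_predict(history, location, per_location=True):
    """Return True when enough selections exist to generate a prediction."""
    if per_location:
        count = sum(1 for h in history if h["location"] == location)
    else:
        count = len(history)  # threshold evaluated across all locations
    return count >= THRESHOLD

history = [{"location": "home", "device": "speakers"}] * 5
```

With too few selections the device would fall back to asking the user, matching the block 206 branch.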
  • At block 208, the processor 108 generates a predicted default audio peripheral device based on the location determined at block 202. For example, the processor 108 may detect the audio peripheral devices 104 connected to an interface 102 of the computing device 100 which are available for selection. The processor 108 may further consider the other parameters obtained at block 202 in generating the predicted default audio peripheral device.
  • For example, in FIG. 3A, the computing device 100 is located in the workplace environment 300. A user of the computing device 100 may generally select, as the default audio peripheral device, the headset 104-2 in order to avoid disturbing his or her neighbors. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
  • However, the user of the computing device 100 may select, as the default audio peripheral device, the speakers 104-1 after 6 pm or on weekends, as there may be fewer or no neighbors to disturb. Accordingly, the processor 108 may additionally consider the calendar data in generating the predicted default audio peripheral device.
  • Similarly, the user may select, as the default audio peripheral device, a wirelessly connected speaker (not shown) available for connection in an office meeting room. That is, when the user is in a meeting in the office meeting room, the default audio peripheral device may be selected as the wirelessly connected speaker. Accordingly, the processor 108 may additionally consider the schedule data, and in particular the event location data, in generating the predicted default audio peripheral device.
  • In FIG. 3B, the computing device 100 is located in the home environment 310. The user of the computing device 100 may generally select, as the default audio peripheral device, the speakers 104-1, as the user may not be concerned about disturbing neighbors and hence may play audio media more freely. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the speakers 104-1.
  • However, the user of the computing device 100 may select, as the default audio peripheral device, the headset 104-2 after 9 pm in order to avoid disturbing sleeping family members. Accordingly, the processor 108 may additionally consider the calendar data in generating the predicted default audio peripheral device.
  • In FIG. 3C, the computing device 100 is located in a public environment 320. The user of the computing device 100 may generally select, as the default audio peripheral device, the headset 104-2 in order to maintain privacy. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
  • In some examples, the processor 108 may generate the predicted default audio peripheral device based on a deterministic model. That is, the memory 110 may store a repository tracking the default audio peripheral device based on the determined location and any other parameters obtained at block 202. Accordingly, the processor 108 may perform a lookup in the repository to generate the predicted default audio peripheral device.
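The deterministic model amounts to a repository lookup keyed on the location and other parameters. As a minimal sketch, with a coarse time-of-day bucket standing in for the calendar and schedule parameters (the keying scheme and values are assumptions):

```python
# Illustrative repository keyed on (location, parameter bucket); the entries
# mirror the examples in the text but the exact scheme is an assumption.

repository = {
    ("workplace", "work_hours"):  "headset",   # avoid disturbing neighbors
    ("workplace", "after_hours"): "speakers",  # neighbors have left
    ("home", "daytime"):          "speakers",  # play audio freely
    ("home", "late_night"):       "headset",   # family members sleeping
}

def predict_default(location, time_bucket):
    """Look up the stored default for a location/parameter combination."""
    return repository.get((location, time_bucket))
```

A missing key returns `None`, in which case the device could fall back to asking the user for a selection.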
  • In other examples, the processor 108 may implement neural network systems or machine learning algorithms to generate the predicted default audio peripheral device. That is, the processor 108 may receive, as input, the determined location and any other parameters obtained at block 202, and output the predicted default audio peripheral device. In particular, the processor 108 may use the historical data to train the neural networks and/or machine learning algorithms, allowing each instance of selecting a default audio peripheral device to provide corrective feedback or verification. For example, the processor 108 may continue to learn and be updated based on user selection of a further audio peripheral device (e.g. to correct the predicted default audio peripheral device or to indicate different parameters for selecting the further audio peripheral device). Additionally, the processor 108 may verify the predicted default audio peripheral device if no further audio peripheral device is selected.
  • The processor 108 may also use the historical data to generate a predicted default audio peripheral device at locations with no previous selections. For example, the user of the computing device 100 may generally select the headset 104-2 as the default audio peripheral device when in public environments, and the computing device 100 may connect to a variety of different networks in public environments. Accordingly, when the determined location has no previous selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
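The feedback loop can be illustrated with a much simpler stand-in for the neural network system the text describes: per-context frequency counts that each session's outcome updates, so user corrections shift future predictions and confirmations reinforce them. Everything here is an illustrative assumption, not the patented implementation:

```python
# Minimal stand-in for the learned predictor: per-context frequency counts
# updated with each session's outcome. This sketch is NOT the neural network
# system described in the text; it only illustrates the feedback loop.
from collections import Counter, defaultdict

class DevicePredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)

    def predict(self, context):
        """Return the most frequently used device for this context, if any."""
        c = self.counts[context]
        return c.most_common(1)[0][0] if c else None

    def feedback(self, context, device_used):
        # device_used is the predicted device (verification) or the
        # user-selected further device (correction)
        self.counts[context][device_used] += 1

p = DevicePredictor()
p.feedback(("work", "evening"), "speakers")
p.feedback(("work", "evening"), "speakers")
p.feedback(("work", "evening"), "headset")
```

After these three sessions the predictor favors the speakers in the work/evening context, and a fourth session using the headset would begin to shift it back.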
  • At block 210, the processor 108 sets the default audio peripheral device. Specifically, the processor 108 sets, as the default audio peripheral device, either the predicted default audio peripheral device generated at block 208 or the default audio peripheral device selected by the user at block 206. In some examples, the processor 108 may present, at a display of the computing device 100, an indication of the selected default audio peripheral device.
  • At block 212, the processor 108 communicates an audio signal through the default audio peripheral device.
  • At block 214, the processor 108 may receive a selection of a further audio peripheral device from a user of the computing device 100, where the further audio peripheral device is different from the default audio peripheral device. If such a selection is received, the processor 108 proceeds to block 216 to set a current audio peripheral device based on the selection of the further audio peripheral device. The processor 108 then proceeds to block 212 to communicate the audio signal through the current audio peripheral device. That is, the audio signal may be communicated through the selected further audio peripheral device rather than the default audio peripheral device set at block 210.
  • For example, the user of the computing device 100 may select a further audio peripheral device if the generated predicted default audio peripheral device is incorrect, such as on the first instance of initializing the method during a different time frame (e.g. after work hours), when the user may decide to select the speakers 104-1 as the current audio peripheral device.
  • The processor 108 may present, at the display of the computing device 100, a user interface to select a further audio peripheral device. The user interface may indicate the audio peripheral devices 104 connected to an interface 102 of the computing device 100 which are available for selection, and may be presented together with the indication of the default audio peripheral device.
  • For example, the user in the workplace environment 300 may notice that his or her neighbors have left for the evening and may switch from the headset 104-2 as the default audio peripheral device to the speakers 104-1 as the current audio peripheral device.
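The blocks 210 to 216 flow, starting a session on the default device and switching when the user selects a further device, can be sketched as follows; the function and variable names are assumptions:

```python
# Illustrative sketch of the session loop across blocks 210-216.

def run_session(default_device, selections):
    """Return the device each audio segment is routed through."""
    current = default_device          # block 210: default device set
    route = []
    for selection in selections:      # None means no change at block 214
        if selection is not None:
            current = selection       # block 216: set current audio device
        route.append(current)         # block 212: communicate audio signal
    return route
```

For example, a session that starts on the headset and switches to the speakers partway through routes the remaining audio through the speakers.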
  • The processor 108 then stores the data pertaining to the session as historical data. For example, the processor 108 may store the location, including the location data (e.g. network identifiers and the like), calendar data, schedule data, parameters of use of the audio peripheral device, the predicted default audio peripheral device, and any further audio peripheral devices selected.
  • Thus, a computing device may determine its location based on a network connection, set a default audio peripheral device based on the location, and communicate an audio signal through the default audio peripheral device. The computing device may further set the default audio peripheral device based on calendar data, schedule data associated with a user account, historical data, and other parameters. The computing device may use the historical data to generate a predicted default audio peripheral device, for example using a neural network system, and may also receive a selection from a user to provide verification and feedback to the neural network system for future predicted default audio peripheral devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

An example computing device includes a plurality of interfaces to connect to a plurality of audio peripheral devices, a communications interface to establish a network connection, and a processor interconnected with the plurality of interfaces and the communications interface. The processor is to determine a location of the computing device based on the network connection. The processor sets an audio peripheral device from the plurality of audio peripheral devices as a default audio peripheral device based on the location. The processor communicates an audio signal through the default audio peripheral device.

Description

BACKGROUND
Computing devices may be connected to audio peripheral devices, such as headsets, microphones, speakers, and other devices for communicating audio signals. The computing device communicates audio signals through the audio peripheral devices to play movies, music, voice calls, and other audio media.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an example computing device for audio peripheral device selection.
FIG. 2 is a flowchart of an example method for audio peripheral device selection.
FIG. 3A is a schematic diagram of an example computing device for audio peripheral device selection in a workplace environment.
FIG. 3B is a schematic diagram of another example computing device for audio peripheral device selection at home.
FIG. 3C is a schematic diagram of another example computing device for audio peripheral device selection in a public environment.
DETAILED DESCRIPTION
Computing devices, such as personal computers, laptops, desktops, or other types of computing devices such as imaging devices and the like, may be connected to audio peripheral devices for communicating audio signals, such as movies, music, voice calls, and other audio media. Such computing devices often operate on a “set and forget” methodology, whereby audio peripheral devices are set for a session, and forgotten upon completion of the session. User preferences are not maintained from session to session.
Some docking stations may be employed to cooperate with computing devices to implement user preferences. For example, some docking stations may store the user preferences and provide the user preferences to the computing device when the computing device is docked at the given docking station. Other docking stations may provide an identifying key to the computing device to enable the computing device to determine what user preferences to implement. However, in both examples, the computing device is docked at the docking station in order to implement the user preferences.
A computing device may store an association between a location of the computing device and a default audio peripheral device to use in that location. The computing device determines the location based on a network connection. For example, the computing device may associate network identifying characteristics, such as a type of connection, a network identifier, and an access point identifier with a location. The computing device may thus set the default audio peripheral device based on user preferences on a home network, on a work network, on a public network, or the like, and communicate an audio signal through the default audio peripheral device.
The computing device may further set the default audio peripheral device based on calendar data, schedule data associated with a user account, or other parameters. The computing device may store historical data including the parameters, the default audio peripheral device, and usage data for the audio peripheral device for each session. The historical data may be used by the computing device to generate a predicted default audio peripheral device after a threshold number of selections. The predicted default audio peripheral device may be generated by implementing a neural network system trained based on the historical data. The computing device may also receive, from a user, a selection of a further audio peripheral device different from the default audio peripheral device and communicate the audio signal through the further audio peripheral device. The selection of the further audio peripheral device may be stored in the historical data to provide further verification or feedback for future predicted default audio peripheral devices.
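The overall selection flow described in the two paragraphs above, determine a location, apply the location-based preference, and honor a user-selected further device, can be sketched as follows. The preference table and device names are illustrative assumptions:

```python
# Illustrative end-to-end sketch of the default-device selection flow.
# The preference table and the fallback device are assumptions.

def choose_device(location, preferences, override=None, fallback="speakers"):
    """Return the device to route audio through for this session."""
    if override is not None:               # a further device selected by the user
        return override
    return preferences.get(location, fallback)

preferences = {"work": "headset", "home": "speakers", "public": "headset"}
```

On the work network the headset is chosen by default, while an unseen location falls back to a generic choice until preferences accumulate for it.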
FIG. 1 shows a block diagram of an example computing device 100, such as a laptop or a notebook computer. The computing device 100 includes a plurality of interfaces 102-1 and 102-2 (referred to herein collectively as interfaces 102 and generically as an interface 102), a communications interface 106, a processor 108, and a memory 110.
The plurality of interfaces 102 are to connect to a plurality of audio peripheral devices, such as speakers 104-1 and a headset 104-2 (also referred to generically as audio peripheral devices 104). The interfaces 102 may include internal interfaces, such as a speaker interface 102-1 to connect to the speakers 104-1 integrally formed with the computing device 100. In other examples, the interfaces 102 may include externally facing interfaces to connect to separate audio peripheral devices. For example, a headset jack 102-2 may connect to the headset 104-2. In other examples, other externally facing interfaces such as USB ports, or other types of ports may be used to connect to other external audio peripheral devices 104, such as speakers or the like. In some examples, the external audio peripheral devices may be connected via a docking station or other suitable intermediary device.
The communications interface 106 is to establish a network connection for the computing device 100. The communications interface 106 includes suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the computing device 100 to establish a network connection and communicate with other computing devices. Specifically, the communications interface 106 may communicate with an access point and may cooperate with the access point to determine a network identifier, an access point identifier, or other suitable network identifying characteristics for determining a location of the computing device 100.
The processor 108 is interconnected with the plurality of interfaces 102 and the communications interface 106. The processor 108 may include a central processing unit (CPU), a microcontroller, a microprocessor, a processing core, a field-programmable gate array (FPGA), or a similar device capable of executing machine-readable instructions. The processor 108 may cooperate with the memory 110 to execute instructions. The memory 110 may include a non-transitory machine-readable storage medium, which may be an electronic, magnetic, optical, or other physical storage device that stores executable instructions. The machine-readable storage medium may include, for example, random access memory (RAM), read-only memory (ROM), electrically-erasable programmable read-only memory (EEPROM), flash memory, a storage drive, an optical disc, and the like. The machine-readable storage medium may be encoded with executable instructions.
In particular, the memory 110 includes location instructions 112, which, when executed, cause the processor 108 to determine the location of the computing device 100 based on the network connection. Specifically, the processor 108 may determine the location based on the network identifier, the access point identifier, or other suitable network identifying characteristic determined by the communications interface 106.
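By way of illustration only, the mapping performed by the location instructions 112 might be sketched as a lookup from network identifying characteristics to a stored location. All identifiers below (NetworkInfo, LOCATION_TABLE, determine_location) and the table entries are hypothetical and not part of the described device:

```python
# Illustrative sketch only: map network identifying characteristics
# (type of connection, network identifier, access point identifier)
# to a previously stored location, as in the location instructions 112.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class NetworkInfo:
    connection_type: str   # e.g. "wired" or "wireless"
    network_id: str        # e.g. an SSID or network name (assumed)
    access_point_id: str   # e.g. a router identifier (assumed)

# Hypothetical stored associations between network characteristics and locations.
LOCATION_TABLE = {
    ("wired", "corp-net", "ap-01"): "workplace",
    ("wireless", "home-wifi", "ap-home"): "home",
}

def determine_location(net: NetworkInfo) -> Optional[str]:
    """Return the stored location for this network connection, if any."""
    return LOCATION_TABLE.get(
        (net.connection_type, net.network_id, net.access_point_id)
    )
```

An unrecognized combination of characteristics simply yields no location, which would correspond to a location with no previous selections.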
The memory 110 further includes default instructions 114, which, when executed, cause the processor 108 to set an audio peripheral device 104 from the plurality of audio peripheral devices 104 as a default audio peripheral device based on the location of the computing device 100. For example, the processor 108 may select the headset 104-2 as the default audio peripheral device when the computing device 100 is at one location, and the speakers 104-1 as the default audio peripheral device when the computing device 100 is at a different location.
The memory 110 further includes audio signal instructions 116, which, when executed, cause the processor 108 to communicate an audio signal through the default audio peripheral device.
The memory 110 may further include repositories storing data for use during the execution of the location instructions 112, the default instructions 114, and the audio signal instructions 116. For example, the memory 110 may store location data and historical data. The location data may include a location identifier, network characteristics identified by the communications interface 106 (e.g. a type of network connection, a network identifier, an access point identifier, and the like), and a count of network connections at that location. In some examples, the location data may be stored integrally with historical data including an association between a given location and the selected default audio peripheral device. The historical data may further include other parameters of use of the default audio peripheral device, such as a length of use of the default audio peripheral device, the type of audio content (e.g. movies, music, voice, or other audio media) communicated through the default audio peripheral device, and the like. In particular, the historical data may track a change in the audio peripheral device during a session. For example, the session may be initiated with audio signals communicated through the default audio peripheral device and may change partway through the session to have audio signals communicated through a further audio peripheral device, as is described further herein. Additionally, the count of the network connections at a given location may represent a number of selections of the default audio peripheral device at the given location.
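The repositories described above might take a shape such as the following sketch. The field names are assumptions introduced for illustration; the patent does not prescribe a particular record layout:

```python
# Illustrative record shapes for the location data and historical data
# repositories stored in the memory 110. Field names are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LocationData:
    location_id: str
    connection_type: str       # e.g. "wired" or "wireless"
    network_id: str
    access_point_id: str
    connection_count: int = 0  # may represent the number of default selections

@dataclass
class SessionRecord:
    location_id: str
    default_device: str                # device set at the start of the session
    content_type: str                  # e.g. "movies", "music", "voice"
    duration_minutes: float            # length of use
    switched_to: Optional[str] = None  # further device chosen mid-session, if any
```

A session that begins on the default device and changes partway through would set switched_to, capturing the change tracked in the historical data.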
In some examples, the memory 110 may further store default audio peripheral device data storing an association between a location, other parameters for selecting a default audio peripheral device, and the predicted default audio peripheral device for that combination of location and parameters.
FIG. 2 depicts a flowchart of an example method 200 of setting a default audio peripheral device for communicating audio signals. The method 200 is described in conjunction with its performance by the computing device 100, and in particular, the processor 108. In other examples, the method 200 may be performed by other suitable devices or systems.
The method 200 is initiated at block 202. The method 200 may be initiated, for example, upon initialization of the computing device 100. In other examples, the method 200 may be initiated in response to a request to communicate an audio signal. For example, a user of the computing device 100 may initiate a call or play audio or video media.
At block 202, the processor 108 determines a location of the computing device 100 based on the network connection established by the communications interface 106. Specifically, the processor 108 may determine the location based on characteristics of the network, such as the type of connection to the network (e.g. wired or wireless), a network identifier, and an access point identifier (e.g. an IP address or the like).
For example, referring to FIG. 3A, the computing device 100 is located in a workplace environment 300, and the communications interface 106 may establish a network connection via a wired link to a workplace network 302. The processor 108 may obtain, from the communications interface 106, the type of connection to the network (i.e. wired) and a network identifier of the workplace network 302 to determine that the computing device 100 is located at the workplace environment 300.
In another example, referring to FIG. 3B, the computing device 100 is located in a home environment 310, and the communications interface 106 may establish a network connection to a home network 312 via an access point 314 (e.g. a router or the like). The processor 108 may obtain, from the communications interface 106, the type of connection to the network (i.e. wireless), a network identifier of the home network 312, and an access point identifier of the access point 314 to determine that the computing device 100 is located at the home environment 310.
In a further example, referring to FIG. 3C, the computing device 100 is located in a public environment 320 (e.g. in a coffee shop or other public location), and the communications interface 106 may establish a network connection to a public network 322 via an access point 324 (e.g. a public router or the like). The processor 108 may obtain, from the communications interface 106, the type of connection to the network (i.e. wireless), a network identifier of the public network 322, and an access point identifier of the access point 324 to determine that the computing device 100 is located at the public environment 320.
Returning to FIG. 2 , at block 202, the processor 108 may also obtain further parameters for setting the default audio peripheral device. For example, the processor 108 may obtain calendar data, such as a date and time, as determined by native clock and calendar applications. The processor 108 may also obtain schedule data associated with a user account. For example, a user of the computing device 100 may have an email account associated with the computing device 100. The email account may include scheduled events (e.g. meetings, conference calls, or the like) including an event date and time, as well as event location data (e.g. an address, a meeting room, a conference call line, or the like). Accordingly, the schedule data obtained by the processor 108 may include said event location data, as well as the event date and time.
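Gathering such further parameters might be sketched as follows. The dictionary keys and the 6 pm cutoff are illustrative assumptions drawn from the examples later in the description, not requirements of the method:

```python
# Illustrative sketch of obtaining calendar data and schedule data at block 202.
from datetime import datetime

def gather_parameters(now, scheduled_events=()):
    """Collect calendar data and any event location active at 'now'.

    scheduled_events is an assumed list of dicts with "start", "end",
    and "location" keys, as might be obtained from a user's email account.
    """
    active = [e for e in scheduled_events if e["start"] <= now <= e["end"]]
    return {
        "weekday": now.weekday() < 5,   # Monday through Friday
        "after_hours": now.hour >= 18,  # assumed 6 pm cutoff
        "event_location": active[0]["location"] if active else None,
    }
```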
At block 204, the processor 108 may determine whether a threshold number of selections at the determined location have occurred. For example, the processor 108 may utilize the historical data from the selection of the default audio peripheral device at a given location to generate a predicted default audio peripheral device. Accordingly, the threshold may be defined according to a minimum number of selections at the given location to generate a prediction based on the historical data at the given location. In another example, the threshold number of selections may be set across all locations. That is, the processor 108 may evaluate the selections of the default audio peripheral device at all locations to generate a predicted default audio peripheral device at the given location.
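The decision at block 204 might be sketched as follows, covering both the per-location and the across-all-locations variants described above; the function and parameter names are illustrative:

```python
# Illustrative sketch of the block 204 threshold check: a prediction is
# only attempted once enough default-device selections have accumulated.
def threshold_met(selections_by_location, location, threshold, per_location=True):
    """Return True when enough selections exist to generate a prediction."""
    if per_location:
        # Threshold defined per the given location.
        return selections_by_location.get(location, 0) >= threshold
    # Threshold evaluated across selections at all locations.
    return sum(selections_by_location.values()) >= threshold
```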
If the threshold number of selections at the determined location have not occurred, the processor 108 proceeds to block 206. At block 206, the processor 108 receives input from a user of the computing device 100 as to the default audio peripheral device. For example, the processor 108 may present, at a display of the computing device 100, a user interface for selecting a default audio peripheral device from the audio peripheral devices 104 connected to an interface 102 of the computing device 100. Upon receiving a selection from a user, the processor 108 proceeds to block 210.
If, at block 204, the threshold number of selections have occurred, the processor 108 proceeds to block 208. At block 208, the processor 108 generates a predicted default audio peripheral device based on the location determined at block 202. In particular, the processor 108 may detect the audio peripheral devices 104 connected to an interface 102 of the computing device 100 which are available for selection. In some examples, the processor 108 may further consider the other parameters obtained at block 202 in generating the predicted default audio peripheral device.
For example, referring again to FIG. 3A, the computing device 100 is located in the workplace environment 300. In practice, a user of the computing device 100 may generally select, as the default audio peripheral device, the headset 104-2 in order to avoid disturbing his or her neighbors. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
In other examples, the user of the computing device 100 may select, as the default audio peripheral device, the speakers 104-1 after 6 pm or on weekends, as there may be fewer or no neighbors to disturb. Accordingly, the processor 108 may additionally consider the calendar data in generating the predicted default audio peripheral device.
In still further examples, the user may select, as the default audio peripheral device, a wirelessly connected speaker (not shown) available for connection in an office meeting room. That is, when the user is in a meeting in the office meeting room, the default audio peripheral device may be selected as the wirelessly connected speaker. Accordingly, the processor 108 may additionally consider the schedule data, and in particular, the event location data, in generating the predicted default audio peripheral device.
Referring to FIG. 3B, the computing device 100 is located in the home environment 310. In practice, the user of the computing device 100 may generally select, as the default audio peripheral device, the speakers 104-1 as the user may not be concerned about disturbing neighbors and hence may play audio media more freely. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the speakers 104-1.
In other examples, the user of the computing device 100 may select, as the default audio peripheral device, the headset 104-2 after 9 pm in order to avoid disturbing sleeping family members. Accordingly, the processor 108 may additionally consider the calendar data in generating the predicted default audio peripheral device.
Referring to FIG. 3C, the computing device 100 is located in a public environment 320. The user of the computing device 100 may generally select, as the default audio peripheral device, the headset 104-2 in order to maintain privacy. Accordingly, after the threshold number of selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
The processor 108 may generate the predicted default audio peripheral device based on a deterministic model. That is, the memory 110 may store a repository tracking the default audio peripheral device based on the determined location and any other parameters obtained at block 202. Accordingly, the processor 108 may perform a lookup in the repository to generate the predicted default audio peripheral device.
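One way to sketch such a deterministic model is a lookup that returns the most frequent past selection for a given combination of location and parameters. The key structure here is an assumption for illustration:

```python
# Illustrative sketch of the deterministic model: look up past selections
# keyed by (location, parameter) and return the most common choice.
from collections import Counter

def predict_default(history, location, time_of_day):
    """Return the most common past selection for this key, or None."""
    choices = history.get((location, time_of_day))
    if not choices:
        return None  # no prediction possible for an unseen combination
    return Counter(choices).most_common(1)[0][0]
```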
In other examples, the processor 108 may implement neural network systems or machine learning algorithms to generate the predicted default audio peripheral device. That is, the processor 108 may receive, as input, the determined location and any other parameters obtained at block 202 and output the predicted default audio peripheral device. In particular, the processor 108 may use the historical data to train the neural networks and/or machine learning algorithms to allow each instance of selecting a default audio peripheral device to provide corrective feedback or verification. For example, the processor 108 may continue to learn and be updated based on user selection of a further audio peripheral device (e.g. to correct the predicted default audio peripheral device or to indicate different parameters for selecting the further audio peripheral device). Additionally, the processor 108 may verify the predicted default audio peripheral device if no further audio peripheral device is selected.
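The corrective-feedback loop described above can be illustrated with a far simpler stand-in than a neural network: an online vote table that is reinforced when a prediction is verified and corrected when the user selects a further device. This sketch is an assumption-laden simplification, not the trained model of the description:

```python
# Simplified stand-in for the trained predictor: per-feature vote counts
# updated by each session's outcome (verification or correction).
from collections import defaultdict, Counter

class DevicePredictor:
    def __init__(self):
        # votes[feature][device] counts observed selections for that feature.
        self.votes = defaultdict(Counter)

    def predict(self, features):
        """Tally votes across the session's features (location, parameters)."""
        tally = Counter()
        for f in features:
            tally.update(self.votes[f])
        return tally.most_common(1)[0][0] if tally else None

    def feedback(self, features, actual_device):
        """Reinforce or correct the model with the device actually used."""
        for f in features:
            self.votes[f][actual_device] += 1
```

Each call to feedback plays the role of the verification or correction described above: if no further device was selected, the default device is fed back; otherwise the user's selection is.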
In other examples, the processor 108 may use the historical data to generate a predicted default audio peripheral device at locations with no previous selections. For example, in practice, the user of the computing device 100 may generally select the headset 104-2 as the default audio peripheral device when in public environments. Additionally, the computing device 100 may connect to a variety of different networks in public environments. Accordingly, when the determined location has no previous selections, the processor 108 may generate, as the predicted default audio peripheral device, the headset 104-2.
At block 210, the processor 108 sets the default audio peripheral device. In particular, the processor 108 sets, as the default audio peripheral device, either the predicted default audio peripheral device generated at block 208, or the default audio peripheral device as selected by the user at block 206. In some examples, the processor 108 may present, at a display of the computing device 100, an indication of the selected default audio peripheral device.
At block 212, the processor 108 communicates an audio signal through the default audio peripheral device.
At block 214, the processor 108 may receive a selection of a further audio peripheral device from a user of the computing device 100, where the further audio peripheral device is different from the default audio peripheral device. If, at block 214, such a selection is received, the processor 108 proceeds to block 216 to set a current audio peripheral device based on the selection of the further audio peripheral device. The processor 108 then proceeds to block 212 to communicate the audio signal through the current audio peripheral device. That is, the audio signal may thus be communicated through the selected further audio peripheral device rather than the default audio peripheral device set at block 210.
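The flow through blocks 210 to 216 might be sketched as follows: audio is routed through the default device until a user override sets a current device. The function name and return shape are illustrative only:

```python
# Illustrative sketch of blocks 210-216: start on the default device and
# switch to a user-selected further device mid-session, if one is chosen.
def run_session(default_device, user_override=None):
    """Return the sequence of devices audio was routed through."""
    routed = [default_device]           # block 212: start on the default
    if user_override and user_override != default_device:
        routed.append(user_override)    # blocks 214/216: switch mid-session
    return routed
```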
In practice, the user of the computing device 100 may select a further audio peripheral device if the generated predicted default audio peripheral device is incorrect. For example, the first instance of initialization of the method during a different time frame (e.g. after work hours) may generate a predicted default audio peripheral device of the headset 104-2; however, the user of the computing device 100 may decide to select the speakers 104-1 as the current audio peripheral device. In particular, the processor 108 may present, at the display of the computing device 100, a user interface to select a further audio peripheral device. For example, the user interface may indicate audio peripheral devices 104 connected to an interface 102 of the computing device 100 which are available for selection. In particular, the user interface may be presented together with the indication of the default audio peripheral device. In other examples, the user may select a further audio peripheral device based on changing parameters or uses of the computing device 100. For example, the user in the workplace environment 300 may notice that his or her neighbors have left for the evening and may switch from the headset 104-2 as the default audio peripheral device to the speakers 104-1 as the current audio peripheral device.
At block 218, the processor 108 stores the data pertaining to the session as historical data. For example, the processor 108 may store the location, including the location data (e.g. network identifiers and the like), calendar data, schedule data, parameters of use of the audio peripheral device, the predicted default audio peripheral device, and any further audio peripheral devices.
As described above, a computing device may determine a location of the computing device based on a network connection, set a default audio peripheral device based on the location, and communicate an audio signal through the default audio peripheral device. The computing device may further set the default audio peripheral device based on calendar data, schedule data associated with a user account, historical data, and other parameters. The computing device may use the historical data to generate a predicted audio peripheral device, for example, using a neural network system. The computing device may also receive a selection from a user to provide verification and feedback to the neural network system for future predicted default audio peripheral devices.
The scope of the claims should not be limited by the above examples, but should be given the broadest interpretation consistent with the description as a whole.

Claims (13)

The invention claimed is:
1. A computing device comprising:
a plurality of interfaces to connect to a plurality of audio peripheral devices;
a communications interface to establish a network connection;
a memory to store historical data including an association between the network connection and a default audio peripheral device; and
a processor interconnected with the plurality of interfaces and the communications interface, the processor to:
generate a predicted default audio peripheral device from the plurality of the audio peripheral devices after a threshold number of selections at a historical network connection;
determine an audio peripheral device from the plurality of audio peripheral devices based on the historical data;
set the audio peripheral device from the plurality of the audio peripheral devices as a default audio peripheral device; and
communicate an audio signal through the default audio peripheral device.
2. The computing device of claim 1, wherein the processor is to:
determine calendar data; and
further set the default audio peripheral device based on the calendar data.
3. The computing device of claim 1, wherein the processor is to:
determine schedule data associated with a user account of the computing device; and
further set the default audio peripheral device based on the schedule data.
4. The computing device of claim 1, wherein the processor is to generate a predicted default audio peripheral device with a neural network system trained based on the historical data.
5. The computing device of claim 1, wherein the processor is to:
receive a selection of a further audio peripheral device from the plurality of the audio peripheral devices from a user; and
communicate the audio signal through the further audio peripheral device.
6. The computing device of claim 1, wherein the processor is to determine the audio peripheral device in response to a request to communicate the audio signal.
7. A non-transitory machine-readable storage medium storing a plurality of machine-readable instructions when executed cause a processor of a computing device to:
determine an audio peripheral device from a plurality of audio peripheral devices based on a network connection and stored historical data including a list of associations between historical network connections and historical audio peripheral devices;
generate a predicted default audio peripheral device from the plurality of audio peripheral devices after a threshold number of selections at the network connection;
set the audio peripheral device from a plurality of audio peripheral devices as a default audio peripheral device; and
communicate an audio signal through the default audio peripheral device.
8. The non-transitory machine-readable storage medium of claim 7, wherein further execution of the instructions is to:
determine calendar data; and
further set the default audio peripheral device based on the calendar data.
9. The non-transitory machine-readable storage medium of claim 7, wherein further execution of the instructions is to:
determine schedule data associated with a user account of the computing device; and
further set the default audio peripheral device based on the schedule data.
10. The non-transitory machine-readable storage medium of claim 7, wherein further execution of the instructions is to generate a predicted default audio peripheral device with a neural network system trained based on the historical data including the association between the network connection and the default audio peripheral device.
11. A computing device comprising:
a plurality of interfaces to connect to a plurality of audio peripheral devices;
a communications interface to establish a connection to a network;
a memory to store historical data including a list of associations between historical network connections and historical audio peripheral devices; and
a processor interconnected with the plurality of interfaces and the communications interface, the processor to:
generate a predicted default audio peripheral device from the plurality of audio peripheral devices after a threshold number of selections at the network connection;
obtain, from the communications interface, network identifying characteristics of the network;
set an audio peripheral device from the plurality of the audio peripheral devices as a default audio peripheral device based on the network identifying characteristics and the historical data; and
communicate an audio signal through the default audio peripheral device.
12. The computing device of claim 11, wherein the processor is to generate a predicted default audio peripheral device with a neural network system trained based on the historical data.
13. The computing device of claim 11, wherein the processor is to set the default audio peripheral device in response to a request to communicate the audio signal.
US17/297,116 2019-07-17 2019-07-17 Audio peripheral device selections Active 2039-09-28 US11928386B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2019/042142 WO2021010993A1 (en) 2019-07-17 2019-07-17 Audio peripheral device selections

Publications (2)

Publication Number Publication Date
US20220342629A1 US20220342629A1 (en) 2022-10-27
US11928386B2 true US11928386B2 (en) 2024-03-12

Family

ID=74210519

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/297,116 Active 2039-09-28 US11928386B2 (en) 2019-07-17 2019-07-17 Audio peripheral device selections

Country Status (2)

Country Link
US (1) US11928386B2 (en)
WO (1) WO2021010993A1 (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002322B2 (en) 2011-09-29 2015-04-07 Apple Inc. Authentication with secondary approver
WO2015183366A1 (en) 2014-05-30 2015-12-03 Apple, Inc. Continuity
WO2016036541A2 (en) 2014-09-02 2016-03-10 Apple Inc. Phone user interface
DK201670622A1 (en) 2016-06-12 2018-02-12 Apple Inc User interfaces for transactions
US12526361B2 (en) 2017-05-16 2026-01-13 Apple Inc. Methods for outputting an audio output in accordance with a user being within a range of a device
CN111343060B (en) 2017-05-16 2022-02-11 苹果公司 Method and interface for home media control
KR102436985B1 (en) 2019-05-31 2022-08-29 애플 인크. User interface for controlling audio media
US11010121B2 (en) 2019-05-31 2021-05-18 Apple Inc. User interfaces for audio media control
DK201970533A1 (en) 2019-05-31 2021-02-15 Apple Inc Methods and user interfaces for sharing audio
US11392291B2 (en) 2020-09-25 2022-07-19 Apple Inc. Methods and interfaces for media control with dynamic feedback
US11847378B2 (en) 2021-06-06 2023-12-19 Apple Inc. User interfaces for audio routing
CN119376677A (en) * 2021-06-06 2025-01-28 苹果公司 User interface for audio routing

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100185787A1 (en) * 2009-01-21 2010-07-22 Anton Krantz Dynamic call handling from multiple attached devices
US20120054613A1 (en) * 2010-08-30 2012-03-01 Samsung Electronics Co., Ltd. Method and apparatus to process audio signal
US8260999B2 (en) 2009-10-28 2012-09-04 Google Inc. Wireless communication with a dock
US8527688B2 (en) 2008-09-26 2013-09-03 Palm, Inc. Extending device functionality amongst inductively linked devices
US8554045B2 (en) 2004-11-12 2013-10-08 Ksc Industries Incorporated Docking station for portable entertainment devices
US8613385B1 (en) 2011-06-02 2013-12-24 Digecor I.P. And Assets Pty. Ltd. Audio-visual entertainment system and docking systems associated therewith
US20140096092A1 (en) * 2011-03-20 2014-04-03 William J. Johnson System and Method for Indirect Manipulation of User Interface Object(s)
US20140337492A1 (en) * 2007-10-18 2014-11-13 Lenovo (Singapore) Pte. Ltd. Autonomic computer configuration based on location
US8914559B2 (en) * 2006-12-12 2014-12-16 Apple Inc. Methods and systems for automatic configuration of peripherals
US9081746B1 (en) * 2012-10-16 2015-07-14 Teradici Corporation Method for client configuration management in remote computing
US9207713B1 (en) 2012-03-15 2015-12-08 Amazon Technologies, Inc. Location-based device docking
US20160057526A1 (en) * 2014-04-08 2016-02-25 Doppler Labs, Inc. Time heuristic audio control
US20160254954A1 (en) 2009-12-31 2016-09-01 Apple Inc. Location-based dock for a computing device
US20170308121A1 (en) 2013-12-31 2017-10-26 Henge Docks Llc Selectable Audio Device for Docking Station
US10080089B2 (en) 2006-08-31 2018-09-18 Bose Corporation System with speaker, transceiver and related devices
US20200288247A1 (en) * 2019-03-07 2020-09-10 Bose Corporation Systems and methods for controlling electronic devices

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8554045B2 (en) 2004-11-12 2013-10-08 Ksc Industries Incorporated Docking station for portable entertainment devices
US10080089B2 (en) 2006-08-31 2018-09-18 Bose Corporation System with speaker, transceiver and related devices
US8914559B2 (en) * 2006-12-12 2014-12-16 Apple Inc. Methods and systems for automatic configuration of peripherals
US20140337492A1 (en) * 2007-10-18 2014-11-13 Lenovo (Singapore) Pte. Ltd. Autonomic computer configuration based on location
US8527688B2 (en) 2008-09-26 2013-09-03 Palm, Inc. Extending device functionality amongst inductively linked devices
US20100185787A1 (en) * 2009-01-21 2010-07-22 Anton Krantz Dynamic call handling from multiple attached devices
US8260999B2 (en) 2009-10-28 2012-09-04 Google Inc. Wireless communication with a dock
US20160254954A1 (en) 2009-12-31 2016-09-01 Apple Inc. Location-based dock for a computing device
US20120054613A1 (en) * 2010-08-30 2012-03-01 Samsung Electronics Co., Ltd. Method and apparatus to process audio signal
US20140096092A1 (en) * 2011-03-20 2014-04-03 William J. Johnson System and Method for Indirect Manipulation of User Interface Object(s)
US20160342779A1 (en) * 2011-03-20 2016-11-24 William J. Johnson System and method for universal user interface configurations
US8613385B1 (en) 2011-06-02 2013-12-24 Digecor I.P. And Assets Pty. Ltd. Audio-visual entertainment system and docking systems associated therewith
US9207713B1 (en) 2012-03-15 2015-12-08 Amazon Technologies, Inc. Location-based device docking
US9081746B1 (en) * 2012-10-16 2015-07-14 Teradici Corporation Method for client configuration management in remote computing
US20170308121A1 (en) 2013-12-31 2017-10-26 Henge Docks Llc Selectable Audio Device for Docking Station
US20160057526A1 (en) * 2014-04-08 2016-02-25 Doppler Labs, Inc. Time heuristic audio control
US20200288247A1 (en) * 2019-03-07 2020-09-10 Bose Corporation Systems and methods for controlling electronic devices

Also Published As

Publication number Publication date
WO2021010993A1 (en) 2021-01-21
US20220342629A1 (en) 2022-10-27

Similar Documents

Publication Publication Date Title
US11928386B2 (en) Audio peripheral device selections
US11119723B2 (en) User-adaptive volume selection
US9271117B2 (en) Computing system with configuration update mechanism and method of operation thereof
US9451584B1 (en) System and method for selection of notification techniques in an electronic device
US10158960B1 (en) Dynamic multi-speaker optimization
US20150358767A1 (en) Intelligent device connection for wireless media in an ad hoc acoustic network
US10764442B1 (en) Muting an audio device participating in a conference call
US11251987B2 (en) Modification of device settings based on user abilities
US11070880B2 (en) Customized recommendations of multimedia content streams
US12041424B2 (en) Real-time adaptation of audio playback
US20150067787A1 (en) Mechanism for facilitating dynamic adjustments to computing device characteristics in response to changes in user viewing patterns
CN103929692B (en) Audio information processing method and electronic equipment
US20150212657A1 (en) Recommending Mobile Device Settings Based on Input/Output Event History
WO2021068764A1 (en) Information processing method and device
US10172141B2 (en) System, method, and storage medium for hierarchical management of mobile device notifications
EP2933990B1 (en) Method and device for prompting a user
CN111756604A (en) A device coordination method, device and system
US20150056967A1 (en) System and method for community based mobile device profiling
US11914537B2 (en) Techniques for load balancing with a hub device and multiple endpoints
CN114721710A (en) Version control method, device and storage medium
WO2022005701A1 (en) Audio anomaly detection in a speech signal
US11681571B2 (en) Managing device group configurations across workspaces based on context
WO2019225109A1 (en) Information processing device, information processing method, and information processing program
JP2023128935A (en) Web conference system, web conference server, control method and control program of web conference server, and web conference application program
US20220058933A1 (en) Electronic system and method for improving human interaction and activities

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BALARAMAN, SRINATH;CHUNG, LING WEI;VERLEKAR, PRADOSH TULSIDAS;AND OTHERS;REEL/FRAME:056357/0028

Effective date: 20190712

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE