CN110998711A - Dynamic audio data transmission masking - Google Patents

Dynamic audio data transmission masking

Info

Publication number
CN110998711A
Authority
CN
China
Prior art keywords
sound
file
encoded audio
computer
masking
Legal status
Pending
Application number
CN201880053363.8A
Other languages
Chinese (zh)
Inventor
A.马登
A.古普塔
S.格沃尔亚尼
M.阿拉瓦特
H.卡纳
R.莱什拉姆
Current Assignee
Google LLC
Original Assignee
Google LLC
Application filed by Google LLC
Publication of CN110998711A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B11/00 Transmission systems employing sonic, ultrasonic or infrasonic waves
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00 Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/1752 Masking
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K11/00 Methods or devices for transmitting, conducting or directing sound in general; Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/16 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general
    • G10K11/175 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
    • G10K11/178 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
    • G10K11/1781 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions
    • G10K11/17821 Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase characterised by the analysis of input or output signals, e.g. frequency range, modes, transfer functions characterised by the analysis of the input signals only
    • G10K11/17827 Desired external signals, e.g. pass-through audio such as music or speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Soundproofing, Sound Blocking, And Sound Damping (AREA)

Abstract

The technology herein provides a computer-implemented method to dynamically mask audio-based data transmissions. A computing device encodes the data to be transmitted into an audio file that produces a sound that is unpleasant to the human ear. The computing device determines frequency points and amplitudes of the encoded audio file and creates a masking sound file based on the determined frequency points and amplitudes. The masking sound file may include a masking sound that is pleasant to the human ear. The computing device plays the encoded audio file and the masking sound file. In an example, the computing device combines the encoded audio file and the masking sound file into a single sound file and plays the single sound file. In another example, the computing device plays the encoded audio file and the masking sound file simultaneously as two separate sound files.

Description

Dynamic audio data transmission masking
Cross Reference to Related Applications
This patent application claims priority to U.S. patent application No. 62/546,133, entitled "Dynamic Audio Data Transfer Masking," filed August 16, 2017. The above application is incorporated herein by reference in its entirety.
Technical Field
The techniques disclosed herein relate to the dynamic creation of an optimal audio output to mask audio-based data during transmission.
Background
Mobile computing devices typically exchange data via the internet. When an internet connection is unavailable or undesired, data may be transferred over a peer-to-peer connection, such as Bluetooth or near field communication. However, these peer-to-peer solutions require specific hardware and APIs to function. Accordingly, there is a need to exchange data using features and hardware typically found on mobile computing devices.
Each phone or mobile communication device has, by definition, a microphone and a speaker. Using the speaker of one mobile communication device and the microphone of another, data can be transmitted by sound waves. A sound comprises many frequencies, and each frequency produces its own sound wave. There is less ambient noise at high audio frequencies, which makes high-frequency audio ideal for data transmission. However, transmitting data over a high-frequency sound wave produces an unpleasant sound.
A second sound may be played to mask the unpleasant sound generated by the data transmission. However, because different data is encoded for each audio transmission, the resulting frequency and amplitude of the encoded audio may fluctuate from one transmission to another.
Disclosure of Invention
The present technology provides a computer-implemented method of dynamically masking audio-based data transmissions. In an example, a computing device encodes data to be transmitted as an audio file for audio-based transmission, where the encoded audio file produces sound that is audible to the human ear (which may or may not be objectionable to the human ear). The computing device determines a frequency point for the encoded audio file and an amplitude for the encoded audio file, and creates a masking sound file based on the determined frequency point and amplitude for the encoded audio file. The computing device plays the encoded audio file and the masking sound file. In an example, a computing device combines an encoded audio file and a masking sound file into a single sound file and plays the single sound file. In another example, the computing device simultaneously plays the encoded audio file and the masking sound file as two separate sound files.
The ideal masking sound that can effectively mask the unpleasant sounds produced by the data transmission may depend on the particular frequency and amplitude of the encoded audio. Accordingly, a masking sound that changes depending on data encoded for transmission can be dynamically generated.
The masking sound file may, for example, include masking sounds that are pleasant to the human ear. The masking sound may thus mask the unpleasant sound produced by the encoded audio file.
In certain other example aspects described herein, systems and computer program products are provided for dynamic audio-based data transmission masking.
These and other aspects, objects, features and advantages of the examples will become apparent to those skilled in the art upon consideration of the following detailed description of the examples shown.
Drawings
Fig. 1 is a block diagram depicting a system for dynamic audio-based data transfer masking, according to some examples.
Fig. 2 is a block flow diagram depicting a method for dynamic audio-based data transfer masking, according to some examples.
Fig. 3 is a block flow diagram depicting a method for creating a masking sound, according to some examples.
FIG. 4 is a block diagram depicting a computing machine and modules, according to some examples.
Detailed Description
SUMMARY
Examples described herein provide computer-implemented techniques for dynamic audio-based data transfer masking. Using and relying on the methods and systems described herein, broadcast computing devices and account management computing systems provide the ability to communicate data over audio communication channels in a manner that is more pleasing to the human ear. As such, the systems and methods described herein enable data to be transmitted via an audio communication channel by a broadcasting computing device, wherein a second sound is generated to mask the unpleasant nature of the transmitted data.
In an example, the account management computing system generates a set of rules that may be applied by the broadcasting computing device to create a desired masking sound. In an example, the rule set includes a function or algorithm that, when applied to known data points from the encoded sound, produces an ideal masking sound. In this example, a masking sound is dynamically generated for each encoded audio transmission. In an example, the account management computing system communicates rules for creating masking sounds to the broadcast computing device. In another example, the account management computing system pushes the rules as application updates.
In an example, a broadcast computing device encodes data for audio-based data transmission. In an example, data is encoded in an acoustic wave via modulation by varying one or more properties (e.g., amplitude, frequency, and/or phase) of the carrier acoustic wave. For example, the encoded audio has a known frequency. In this example, the known frequency includes a tone or note that is above a threshold tone or note, which results in an unpleasant sound.
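As a concrete illustration of this kind of modulation, the sketch below encodes a bit sequence as a series of high-frequency tones using a simple frequency-shift keying (FSK) scheme. This is a minimal sketch, not the patented encoding: the symbol frequencies (18 kHz and 18.5 kHz), sample rate, and symbol duration are hypothetical values chosen for illustration.

```python
# Minimal FSK-style encoding sketch (illustrative, not the patented
# scheme): each bit becomes a short high-frequency tone.
import numpy as np

SAMPLE_RATE = 44100      # samples per second (assumed)
SYMBOL_SECONDS = 0.05    # duration of one symbol tone (assumed)
FREQ_0 = 18000.0         # hypothetical carrier frequency for bit 0
FREQ_1 = 18500.0         # hypothetical carrier frequency for bit 1

def encode_bits_to_audio(bits):
    """Encode a bit sequence as a sequence of high-frequency tones."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    tones = [np.sin(2 * np.pi * (FREQ_1 if bit else FREQ_0) * t)
             for bit in bits]
    return np.concatenate(tones)

encoded = encode_bits_to_audio([1, 0, 1, 1, 0])
```

Tones near 18 kHz sit at the upper edge of typical human hearing, consistent with the description of the encoded audio as unpleasant when audible.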
Once the data for the audio-based data transmission is encoded, the broadcast computing device creates a masking sound. In an example, the broadcast computing device retrieves the rules for creating masking sounds and applies the rules to the known frequency points and amplitudes of the encoded audio. In this example, the ideal masking sound is played at the correct frequency points and the correct amplitude to mask the objectionable sound of the encoded audio. Based on the particular frequency points and amplitude of the encoded audio, the broadcast computing device creates a masking sound to be played at the ideal frequency points and amplitude. In an example, the rules for creating the masking sound include a function that takes the frequency points and amplitudes of the encoded audio as inputs and produces a masking sound as output. In an example, the broadcast computing device encodes the output masking sound into a sound file.
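One possible shape for such a rule function is sketched below: the encoded audio's frequency points and amplitudes go in, and a masking waveform comes out. The harmonic placement (masking tones on a shared 100 Hz grid, echoing the 200/300/400 Hz example given later in the processing description) and the amplitude scaling are assumptions for illustration, since the text does not fix them.

```python
# Sketch of a masking-sound rule function (assumptions noted above).
import numpy as np

SAMPLE_RATE = 44100
FUNDAMENTAL_HZ = 100.0   # assumed fundamental shared by mask and encoded audio

def create_masking_sound(freq_points, amplitudes, duration_seconds):
    """Encoded frequency points and amplitudes in, masking waveform out."""
    t = np.arange(int(SAMPLE_RATE * duration_seconds)) / SAMPLE_RATE
    mask = np.zeros_like(t)
    for freq, amp in zip(freq_points, amplitudes):
        # Keep the masking tone harmonically related to the encoded tone:
        # snap the encoded frequency to the 100 Hz grid, then map it down
        # into a pleasant low band (200, 300, or 400 Hz). The mapping is
        # an arbitrary illustrative choice.
        grid_index = max(1, round(freq / FUNDAMENTAL_HZ))
        mask_freq = FUNDAMENTAL_HZ * (2 + grid_index % 3)
        # Loud enough to mask the encoded tone, quiet enough not to
        # interfere with the data transfer (scaling is an assumption).
        mask_amp = min(0.8 * amp, 0.5)
        mask += mask_amp * np.sin(2 * np.pi * mask_freq * t)
    return mask
```

Under this mapping, encoded tones at 18000 Hz and 18500 Hz would be masked by 200 Hz and 400 Hz tones, respectively.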
The example broadcast computing device is capable of playing two separate sounds as separate files simultaneously on separate streams. In this example, the broadcast computing device plays the encoded audio file and the encoded masking sound file simultaneously. In an example, a broadcast computing device simultaneously plays an encoded audio file and an encoded masking sound file through an audio component. In this example, the masking sound blocks the unpleasant sound of the encoded audio, resulting in a more pleasant sound to the human ear.
In another example, the broadcast computing device may not be able to play two separate sounds as separate files simultaneously on separate streams. In this example, the broadcast computing device combines the encoded audio file and the encoded masking sound file to create a single sound file. The broadcast computing device then plays the single sound file. In an example, a broadcast computing device plays a single sound file through an audio component. In this example, the masking sound blocks the unpleasant sound of the encoded audio, resulting in a more pleasant sound to the human ear.
Using and relying on the methods and systems described herein, the broadcast computing device and the account management computing system enable a user to communicate relevant information directly from the broadcast computing device without having to listen to the unpleasant sound produced by the encoded data. The masking sound is dynamically generated for each data transmission encoded as audio. Because the masking sounds are dynamically generated, the methods and systems described herein reduce the input required from a user to transmit information via a broadcasting computing device.
In this way, the systems and methods described herein may be used to proactively find the best masking sound without requiring the user to manually adjust the audio configuration. The system communicates with an account management computing system that creates the masking-sound rules, which can be pushed to all similar computing devices, thereby saving time and resources. The automatic and dynamic nature of the system operates during audio-based data transmission. Thus, the system acts in the context of a transmission at a rate faster than a person could achieve by performing similar actions manually.
The dynamic rule scheme may be shared among all different computing devices that support audio-based data transfer. Such a dynamic rule scheme is beneficial, for example, when new computing devices are used for audio-based data transmission. In this example, when a new computing device model is introduced on the market, laboratory testing may be used to derive and share the best rules among computing devices of the same model.
In another example, the dynamic rule scheme may be beneficial when there is a shift in the expected audio-based data transmission scheme, for example, a change in the frequency band on which all computing devices must transmit and receive. In this example, the determined configuration may be communicated to all computing devices. The change may be timed so that all computing devices move to the new rule scheme at a predetermined time. In another example, the determined rules are communicated to groups of similarly behaving devices (e.g., those that are the same model or related models from the same manufacturer). In this example, a determined rule from a single computing device may trigger an audio rule change in a large number of devices belonging to the same group.
Various examples will be explained in more detail in the following description in conjunction with the figures showing the program flow.
Examples are now described in detail with reference to the drawings, in which like numerals indicate like (but not necessarily identical) elements throughout the several views.
Example System architecture
Fig. 1 is a block diagram depicting a system for dynamic audio-based data transfer masking, according to some examples. As shown in FIG. 1, the example operating environment 100 includes network computing systems 110, 120, and 130 configured to communicate with one another via one or more networks 140. In another example, two or more of these computing systems (including systems 110, 120, and 130) are integrated into the same system. In some examples, a user associated with a computing device must install an application and/or make a feature selection to obtain the benefits of the techniques described herein.
Each network 140 includes a wired or wireless telecommunication mechanism by which network computing systems (including systems 110, 120, and 130) can communicate and exchange data. For example, each network 140 may be implemented as, or may be part of, a storage area network (SAN), a personal area network (PAN), a metropolitan area network (MAN), a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, the internet, a mobile phone network, a card network, Bluetooth Low Energy (BLE), a near field communication (NFC) network, any form of standardized radio frequency, infrared, sound (e.g., audible sounds, melodies, and ultrasound), another short-range communication channel, any combination thereof, or any other suitable architecture or system that facilitates the communication of signals, data, and/or messages (generally referred to as data). Throughout this specification, it should be understood that the terms "data" and "information" are used interchangeably herein to refer to text, images, audio, video, or any other form of information that may exist in a computer-based environment.
In an example, each network computing system (including systems 110, 120, and 130) includes a computing device having a communication module capable of transmitting and receiving data over network 140. For example, each network computing system (including systems 110, 120, and 130) may include a server, a personal computer, a mobile device (e.g., a notebook computer, a tablet computer, a netbook computer, a Personal Digital Assistant (PDA), a video game device, a GPS locator device, a cellular telephone, a smart phone, or other mobile device), a television having one or more processors embedded therein and/or coupled thereto, or other suitable technology that includes or is coupled to a web browser or other application for communicating via network 140. In the example shown in FIG. 1, the network computing systems (including systems 110, 120, and 130) are operated by users and account management computing system operators, respectively.
The example broadcast computing device 110 includes a user interface 111, an application 113, an audio component 117, and a data storage unit 119. In an example, the broadcast computing device 110 may be a personal computer, a mobile device (e.g., a notebook computer, a tablet computer, a netbook computer, a Personal Digital Assistant (PDA), a video game device, a GPS locator device, a cellular telephone, a smart phone, or other mobile device), a television, a wearable computing device (e.g., a watch, ring, or glasses), or other suitable technology that includes or is coupled to a web server (or other suitable application that interacts with web page files) or that includes or is coupled to the application 113.
The user may use the broadcast computing device 110 to broadcast audio-based data via the audio component 117 using the user interface 111 and the application 113. For example, user interface 111 includes a touch screen, a voice-based interface, or any other interface that allows a user to provide input and receive output from application 113. In an example, a user interacts with the application 113 via the user interface 111 to select or instruct the broadcast computing device 110 to broadcast audio-based data via the audio component 117.
The application 113 is a program, function, routine, applet, or similar entity that resides on and performs its operations on the broadcast computing device 110. For example, the application 113 may be one or more of an audio application, a data application, an account management computing system 130 application, an internet browser, a user interface 111 application, or other suitable application operating on the broadcast computing device 110. In some examples, the user must install the application 113 and/or make a selection of functions on the broadcast computing device 110 to obtain the benefits of the techniques described herein.
In an example, the data storage unit 119 and the application 113 may be implemented in a secure element or other secure memory (not shown) on the broadcast computing device 110. In another example, the data storage unit 119 may be a separate storage unit that resides on the broadcast computing device 110. The example data storage unit 119 enables storage of the rules for creating an optimal masking sound. In an example, the data storage unit 119 may include a local or remote data storage structure accessible to the broadcast computing device 110 that is suitable for storing information. In an example, the data storage unit 119 stores encrypted information, such as HTML5 local storage.
In an example, audio component 117 includes a speaker device or other device capable of producing an audio output. Example sound outputs include ultrasound outputs. In an example, audio component 117 communicates with application 113 to receive instructions for broadcast sound output. In an example, the audio component 117 is a component of the broadcast computing device 110. In another example, the audio component 117 is communicatively coupled to the broadcast computing device 110.
The example broadcast computing device 110 communicates with the receiving computing device 120 via an audio communication channel. Example communications via an audio communication channel include transmission of audio-based data. In an example, data is transmitted from the broadcasting computing device 110 to the receiving computing device 120 by sound waves.
The example receiving computing device 120 includes a user interface 121, an application 123, a microphone component 125, and a data storage unit 129. In an example, the receiving computing device 120 may be a personal computer, a mobile device (e.g., a notebook computer, a tablet computer, a netbook computer, a Personal Digital Assistant (PDA), a video game device, a GPS locator device, a cellular telephone, a smartphone, or other mobile device), a television, a wearable computing device (e.g., a watch, ring, or glasses), or other suitable technology that includes or is coupled to a web server (or other suitable application that interacts with web page files) or that includes or is coupled to the application 123.
The user may use the receiving computing device 120 to receive audio-based data via the microphone component 125 using the user interface 121 and the application 123. For example, the user interface 121 includes a touch screen, a voice-based interface, or any other interface that allows a user to provide input and receive output from the application 123. In an example, a user interacts with the application 123 via the user interface 121 to receive, read, or interact with audio-based data received via the microphone component 125.
The application 123 is a program, function, routine, applet, or similar entity that resides on and performs its operations on the receiving computing device 120. For example, the application 123 may be one or more of an audio application, a data application, an account management computing system 130 application, an internet browser, a user interface 121 application, or other suitable application operating on the receiving computing device 120. In some examples, the user must install the application 123 and/or make a feature selection on the receiving computing device 120 to obtain the benefits of the techniques described herein.
In an example, the data storage unit 129 and the application 123 may be implemented in a secure element or other secure memory (not shown) on the receiving computing device 120. In another example, the data storage unit 129 may be a separate storage unit that resides on the receiving computing device 120. In an example, the data storage unit 129 may include a local or remote data storage structure accessible to the receiving computing device 120 that is suitable for storing information. In an example, the data storage unit 129 stores encrypted information, such as HTML5 local storage.
In an example, the microphone component 125 includes a microphone device that is capable of receiving sound input from the environment of the receiving computing device 120. In an example, the microphone component 125 communicates with the application 123 to receive instructions to transition from the passive mode to the active mode and listen for sound input. In an example, the microphone component 125 receives sound input when in the active mode and communicates the received sound input to the application 123.
The example receiving computing device 120 and the broadcasting computing device 110 are in communication with an account management computing system 130. The example account management computing system 130 includes an account management component 131, an audio configuration component 133, and a data storage unit 137.
In an example, the receiving computing device 120 and the broadcasting computing device 110 are registered with the account management computing system 130 or associated with the account management computing system 130. In this example, the account management computing system 130 can identify the receiving computing device 120 and the broadcasting computing device 110, and transmit a hardware configuration, instructions, updates, or other form of data transmission to each computing device 110 and 120. In another example, the account management computing system 130 can identify communications or transmissions received from the receiving computing device 120 and the broadcasting computing device 110. In an example, each device has a unique or otherwise identifiable code associated therewith. In an example, each computing device (including 110 and 120) downloads or authorizes an application (including 113 and 123) associated with the account management computing system 130 onto the device to perform the techniques described herein. In an example, this information is maintained within account management component 131.
In an example, the broadcast computing device 110 includes rules for creating an optimal masking sound. In an example, the account management computing system 130 communicates with the broadcast computing device 110 to provide the rules for creating optimal masking sounds.
In an example, the audio configuration component 133 determines rules for creating optimal masking sounds for a plurality of broadcast computing devices (including 110). For example, the rule set utilizes harmonic frequencies to produce an ideal masking sound. In this example, the rules utilize multiples of the same fundamental frequency to produce a more pleasant masking sound. For example, the masking sound and the encoded audio are played at frequencies that are multiples of a certain predetermined frequency. In another example, the ideal masking sound includes an amplitude high enough to mask the encoded audio. However, the ideal masking sound also includes a sufficiently low amplitude so as not to interfere with data transmission. In an example, the objectionable sound includes a sound having one or more characteristics or attributes outside of a predetermined acceptable range.
The rules for creating the optimal masking sound are stored in the data storage unit 137. In an example, the data storage unit 137 may include any local or remote data storage structure accessible to the account management computing system 130 that is suitable for storing information. In an example, the data storage unit 137 stores encrypted information, such as HTML5 local storage.
In another example, the computing devices (including 110 and 120) perform some or all of the functions of the account management computing system 130.
It will be appreciated that the network connections shown are examples and other means of establishing a communications link between the computer and the device may be used. In addition, those having ordinary skill in the art and the benefit of the present disclosure will appreciate that the computing device illustrated in FIG. 1 may have any of several other suitable computer system configurations. For example, a receiving computing device 120 or a broadcast computing device 110 embodied as a mobile phone or handheld computer may not include all of the components described above.
In an example, the network computing devices and any other computers associated with the techniques presented herein may be any type of computer, such as, but not limited to, those discussed in more detail with reference to fig. 4. Further, any function, application, or component associated with any of these computing machines, such as those described herein or any other (e.g., script, Web content, software, firmware, hardware, or module) associated with the techniques presented herein may be any of the components discussed in more detail with reference to fig. 4. The computing machines discussed herein may communicate with each other, as well as with other computing machines or communication systems, over one or more networks, such as network 140. Network 140 may include any type of data or communication network, including any of the network technologies discussed with reference to fig. 4.
Example processing
The components of the exemplary operating environment 100 are described below with reference to the exemplary methods illustrated in figs. 2-3. The example methods of figs. 2-3 may also be performed with other systems and in other environments. The operations described with reference to any of figs. 2-3 may be implemented as executable code stored on a computer- or machine-readable non-transitory tangible storage medium (e.g., floppy disk, hard disk, ROM, EEPROM, non-volatile RAM, CD-ROM, etc.), the operations being completed based on execution of the code by processor circuitry implemented using one or more integrated circuits; the operations described herein may also be implemented as executable logic (e.g., a programmable logic array or device, a field programmable gate array, programmable array logic, an application specific integrated circuit, etc.) encoded for execution in one or more non-transitory tangible media.
Fig. 2 is a block flow diagram depicting a method for dynamic audio-based data transfer masking, according to some examples. The method 200 is described with reference to the components shown in FIG. 1. In an example, the transmission of data encoded as an audio file produces a sound that is unpleasant to the human ear. In this example, the broadcast computing device 110 generates a second sound to mask the unpleasant character of the data sent in the encoded audio file.
In block 210, the account management computing system 130 determines the rules for creating the masking sound. In an example, the masking sound is a second sound played simultaneously with or combined with the encoded audio file to mask an objectionable sound of the encoded audio file.
In an example, the account management computing system generates a set of rules that may be applied by the broadcasting computing device to create a desired masking sound. In an example, the rule set includes a function or algorithm that, when applied to known data points from the encoded sound, produces an ideal masking sound. In this example, a masking sound is dynamically generated for each encoded audio transmission.
In an example, the ideal masking sound includes lower frequencies than the encoded audio transmission. In this example, the encoded audio transmission is played at higher, more objectionable frequencies, while the ideal masking sound includes lower, more pleasant frequencies. In another example, the rule set utilizes harmonic frequencies to produce an ideal masking sound. In this example, the rules use multiples of the same fundamental frequency to produce a more pleasant masking sound. For example, the masking sound and the encoded audio are played at frequencies (e.g., 200 Hz, 300 Hz, and 400 Hz) that are multiples of a particular predetermined frequency (e.g., multiples of 100 Hz). In yet another example, the ideal masking sound includes an amplitude high enough to mask the encoded audio. However, the ideal masking sound also includes an amplitude low enough so as not to interfere with the data transmission. In an example, an objectionable sound includes a sound having one or more characteristics or attributes outside of a predetermined acceptable range. For example, the frequency, amplitude, volume, or other attribute of the objectionable sound is outside of a predetermined acceptable range of frequencies, amplitudes, volumes, or other attributes of sound. In another example, the objectionable sound includes a sound perceptible to the average human ear. In an example, a pleasant sound includes a frequency, amplitude, volume, and/or other attribute within a predetermined acceptable range.
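The amplitude constraint described above amounts to a window: the masking amplitude must sit above a masking threshold but below an interference threshold. A minimal sketch follows, with hypothetical threshold values and scaling, since the text specifies none.

```python
def masking_amplitude(encoded_amp, mask_floor=0.3, interference_ceiling=0.6):
    """Clamp a candidate masking amplitude into the acceptable window.

    The 0.8 scaling and both threshold defaults are illustrative
    assumptions, not values taken from the description.
    """
    candidate = 0.8 * encoded_amp
    return min(max(candidate, mask_floor), interference_ceiling)
```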
In an example, the account management computing system 130 creates a function that, when fed the known frequency and amplitude of the encoded data, produces an ideal masking sound for that encoded data. In this example, the masking sound changes depending on the data to be transmitted. This gives the masking sound the maximum ability to mask the unpleasant sound generated by the encoded data.
In block 220, the account management computing system 130 transmits the rules for creating the masking sound to the broadcast computing device 110. In an example, the rule applies to a plurality of different types of broadcast computing devices 110. For example, the rule is sent to all broadcast computing devices 110 that include the same make or model. In another example, the rules are specific to the device. In an example, the account management computing system 130 sends the rules for creating the masking sound to the broadcast computing device 110 via the network 140.
In block 225, the broadcast computing device 110 receives the rule for creating the masking sound. In an example, the account management computing system 130 pushes the rules to the broadcast computing device 110 when the application 113 updates.
In block 230, the broadcast computing device 110 saves the rules for creating the masking sound. In an example, the rules are saved by the application 113 in the data storage unit 119.
In block 240, the broadcast computing device 110 encodes the data for audio-based data transmission. In an example, the application 113 on the broadcast computing device 110 encodes data for audio-based data transmission. In an example, data is encoded in an acoustic wave via modulation by varying one or more properties of the carrier acoustic wave. Example varying properties of the carrier acoustic wave include amplitude, frequency, and/or phase.
In an example, the encoded audio has a known frequency. The known frequencies include tones or notes above a threshold tone or note, which results in an unpleasant sound. In another example, the encoded audio has a known amplitude. In an example, the audio component 117 of the broadcasting computing device 110 can broadcast (and the microphone component 125 of the receiving computing device 120 can receive) a limited spectrum, which results in a limited bandwidth.
In block 250, the broadcast computing device 110 creates a masking sound. The method for creating a masking sound is described in more detail hereinafter with reference to fig. 3.
The characteristics of sound include pitch and loudness, which are determined by the frequency and amplitude of the sound wave, respectively. The pitch of a sound depends on the frequency of the wave: the higher the frequency of the sound wave, the higher the pitch. A higher pitch results in a perceived sound that is more obtrusive. The loudness of a sound depends on the amplitude of the vibrations that produce the sound: the higher the amplitude of the vibration, the louder the sound. A greater amplitude results in a higher perceived sound intensity.
In an example, the ideal masking sound is played at the correct frequency points and the correct amplitude to mask the unpleasant sound of the encoded audio. Based on the particular frequency points and amplitude of the encoded audio, the broadcast computing device 110 creates a masking sound that is played at the ideal frequency points and amplitude.
Fig. 3 is a block flow diagram depicting a method 250 for creating a masking sound, as referenced in block 250 of fig. 2, according to some examples. The method 250 is described with reference to the components shown in FIG. 1.
In block 310, the broadcast computing device 110 retrieves the rules for creating the masking sound. In an example, the rules for creating the masking sound are determined by the account management computing system 130 in block 210 of fig. 2 and saved by the broadcast computing device 110 in block 230 of fig. 2. In this example, the application 113 retrieves the rules for creating the masking sound from the data storage unit 119.
In block 320, the broadcast computing device 110 determines frequency points of the encoded audio. In an example, the audio-based data has a known frequency when encoded, and the broadcast computing device 110 retrieves the known frequency points. In another example, frequency points are measured or calculated.
In block 330, the broadcast computing device 110 determines the amplitude of the encoded audio. In an example, the audio-based data has a known amplitude when encoded, and the broadcast computing device 110 retrieves the known amplitude. In another example, the amplitude is measured or calculated.
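For the measured-or-calculated case in blocks 320 and 330, one plausible approach is an FFT peak search over the encoded waveform, as sketched below. The amplitude normalization and the peak threshold are illustrative assumptions.

```python
# Sketch: estimate the frequency points and amplitudes of the encoded
# audio from its samples (assumptions noted above).
import numpy as np

def measure_frequency_points(samples, sample_rate=44100, threshold=0.1):
    """Return (frequency, amplitude) pairs for spectral bins above threshold."""
    # Magnitude spectrum, normalized so a unit-amplitude sine yields ~1.0.
    spectrum = np.abs(np.fft.rfft(samples)) / (len(samples) / 2)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    strong = spectrum > threshold
    return list(zip(freqs[strong], spectrum[strong]))
```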
In block 340, the broadcast computing device 110 applies the rules for creating the masking sound to the determined frequency points and amplitude. In an example, the ideal masking sound is played at the correct frequency points and the correct amplitude to mask the unpleasant sound of the encoded audio. Based on the particular frequency points and amplitude of the encoded audio, the broadcast computing device 110 creates a masking sound that is played at the ideal frequency points and amplitude. In an example, the rules for creating the masking sound include a function that takes the frequency points and amplitudes of the encoded audio as inputs and produces a masking sound as output. In an example, the broadcast computing device 110 encodes the output masking sound into a sound file.
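Encoding the output masking sound into a sound file could look like the sketch below, which writes the waveform with Python's standard wave module; the 16-bit mono PCM format is an illustrative choice.

```python
# Sketch: write a float waveform in [-1, 1] to a 16-bit mono WAV file.
import wave
import numpy as np

def write_sound_file(samples, path, sample_rate=44100):
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(pcm.tobytes())
```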
the method 250 then proceeds to block 260 in fig. 2.
In block 260, the broadcast computing device 110 determines whether it is capable of playing two separate sounds as separate files simultaneously on separate streams. In an example, the broadcast computing device 110 includes hardware that can play two sounds simultaneously. For example, the audio component 117 may play two sounds simultaneously.
If the broadcast computing device 110 cannot play the two separate sounds as separate files simultaneously on separate streams, the method 200 proceeds to block 270 in fig. 2. In block 270, the broadcast computing device 110 combines the encoded audio file and the encoded masking sound file to create a single sound file. In an example, the application 113 merges or combines the audio files to create the single sound file.
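A minimal sketch of such merging follows, assuming both sounds are available as sample arrays at the same sample rate; the zero-padding of the shorter sound and the peak normalization (to avoid clipping) are assumptions.

```python
import numpy as np

def combine_sounds(encoded, mask):
    """Mix two waveforms of possibly different lengths into one."""
    mixed = np.zeros(max(len(encoded), len(mask)))
    mixed[:len(encoded)] += encoded
    mixed[:len(mask)] += mask
    peak = np.max(np.abs(mixed))
    # Scale down only if the sum would clip.
    return mixed / peak if peak > 1.0 else mixed
```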
In block 275, the broadcast computing device 110 plays the single sound file. In an example, the broadcast computing device 110 plays a single sound file through the audio component 117. In this example, the masking sound blocks the unpleasant sound of the encoded audio, resulting in a more pleasant sound to the human ear.
Returning to block 260, if the broadcast computing device 110 is capable of playing two separate sounds as separate files simultaneously on separate streams, the method 200 proceeds to block 280 in fig. 2. In block 280, the broadcast computing device 110 plays the encoded audio file and the encoded masking sound file simultaneously. In an example, the broadcast computing device 110 simultaneously plays the encoded audio file and the encoded masking sound file through the audio component 117. In this example, the masking sound blocks the unpleasant sound of the encoded audio, resulting in a more pleasant sound to the human ear.
In block 290, the receiving computing device 120 receives the encoded audio file. In an example, receiving computing device 120 receives an encoded audio file via microphone component 125. In an example, the application 123 of the receiving computing device 120 can decode the encoded audio file.
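On the receiving side, decoding inverts the encoding. Continuing the illustrative FSK scheme sketched earlier (not the patented scheme), the recording is split into symbol-length windows, and the dominant high-band frequency in each window selects the bit; the 17 kHz band edge is an assumption that keeps the low-frequency masking tones from being mistaken for symbols.

```python
# Sketch: decode an FSK-style recording back into bits (illustrative
# scheme and constants, matching the earlier encoding sketch).
import numpy as np

SAMPLE_RATE = 44100
SYMBOL_SECONDS = 0.05
FREQ_0, FREQ_1 = 18000.0, 18500.0

def decode_audio_to_bits(samples):
    window = int(SAMPLE_RATE * SYMBOL_SECONDS)
    freqs = np.fft.rfftfreq(window, d=1.0 / SAMPLE_RATE)
    band = freqs > 17000.0        # ignore the masking band (assumed edge)
    bits = []
    for start in range(0, len(samples) - window + 1, window):
        spectrum = np.abs(np.fft.rfft(samples[start:start + window]))
        dominant = freqs[band][np.argmax(spectrum[band])]
        bits.append(1 if abs(dominant - FREQ_1) < abs(dominant - FREQ_0) else 0)
    return bits
```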
Other examples
Fig. 4 depicts a computing machine 2000 and a module 2050, according to some examples. The computing machine 2000 may correspond to any of the various computers, servers, mobile devices, embedded systems, or computing systems set forth herein. The module 2050 may include one or more hardware or software elements configured to facilitate the computing machine 2000 in performing the various methods and processing functions set forth herein. The computing machine 2000 may include various internal or attached components, such as a processor 2010, a system bus 2020, a system memory 2030, a storage medium 2040, an input/output interface 2060, and a network interface 2070 for communicating with a network 2080.
The computing machine 2000 may be implemented as a conventional computer system, an embedded controller, a laptop computer, a server, a mobile device, a smartphone, a set-top box, a kiosk, a router or other network node, a vehicle information system, one or more processors associated with a television, a customized machine, any other hardware platform, or any combination or multiple thereof. The computing machine 2000 may be a distributed system configured to operate with multiple computing machines interconnected via a data network or bus system.
Processor 2010 may be configured to execute code or instructions to perform the operations and functions described herein, to manage request flow and address mapping, and to perform computations and generate commands. The processor 2010 may be configured to monitor and control the operation of the components in the computing machine 2000. The processor 2010 may be a general purpose processor, a processor core, a multiprocessor, a reconfigurable processor, a microcontroller, a digital signal processor ("DSP"), an application specific integrated circuit ("ASIC"), a graphics processing unit ("GPU"), a field programmable gate array ("FPGA"), a programmable logic device ("PLD"), a controller, a state machine, gated logic, discrete hardware components, any other processing unit, or any combination or plurality thereof. Processor 2010 may be a single processing unit, multiple processing units, a single processing core, multiple processing cores, a dedicated processing core, a coprocessor, or any combination thereof. According to some examples, the processor 2010, along with other components of the computing machine 2000, may be a virtualized computing machine executing within one or more other computing machines.
System memory 2030 may include a non-volatile memory such as a read only memory ("ROM"), a programmable read only memory ("PROM"), an erasable programmable read only memory ("EPROM"), a flash memory, or any other device capable of storing program instructions or data with or without power applied. The system memory 2030 may also include volatile memory such as random access memory ("RAM"), static random access memory ("SRAM"), dynamic random access memory ("DRAM"), and synchronous dynamic random access memory ("SDRAM"). Other types of RAM may also be used to implement system memory 2030. The system memory 2030 may be implemented using a single memory module or a plurality of memory modules. Although the system memory 2030 is depicted as being part of the computing machine 2000, those skilled in the art will recognize that the system memory 2030 may be separate from the computing machine 2000 without departing from the scope of the present technology. It should also be appreciated that the system memory 2030 may include or operate in conjunction with a non-volatile storage device, such as the storage media 2040.
The storage medium 2040 may include a hard disk, a floppy disk, a compact disk read only memory ("CD-ROM"), a digital versatile disk ("DVD"), a blu-ray disk, a tape, a flash memory, other non-volatile storage, a solid state drive ("SSD"), any magnetic storage device, any optical storage device, any electrical storage device, any semiconductor storage device, any physical-based storage device, any other data storage device, or any combination or plurality thereof. The storage media 2040 may store one or more operating systems, application programs, and program modules, such as modules 2050, data, or any other information. The storage medium 2040 may be part of the computing machine 2000 or connected to the computing machine 2000. The storage media 2040 may also be part of one or more other computing machines in communication with the computing machine 2000, such as a server, database server, cloud storage, network attached storage, and so forth.
The module 2050 may include one or more hardware or software elements configured to facilitate the computing machine 2000 in performing the various methods and processing functions set forth herein. The module 2050 may include one or more sequences of instructions stored as software or firmware in association with the system memory 2030, the storage medium 2040, or both. Thus, the storage medium 2040 may represent an example of a machine or computer-readable medium on which instructions or code may be stored for execution by the processor 2010. A machine or computer readable medium may generally refer to any medium or media used to provide instructions to processor 2010. Such machine or computer-readable media associated with the module 2050 may include a computer software product. It should be appreciated that a computer software product including the module 2050 may also be associated with one or more processes or methods for delivering the module 2050 to the computing machine 2000 via the network 2080, any signal-bearing medium, or any other communication or delivery technique. The module 2050 may also include hardware circuitry or information (such as microcode) used to configure the hardware circuitry or configuration information for an FPGA or other PLD.
The input/output ("I/O") interface 2060 may be configured to couple to, receive data from, and transmit data to one or more external devices. Such external devices as well as various internal devices may also be referred to as peripheral devices. The I/O interface 2060 may include electrical and physical connections for operatively coupling various peripheral devices to the computing machine 2000 or the processor 2010. The I/O interface 2060 may be configured to communicate data, addresses, and control signals between a peripheral device, the computing machine 2000, or a processor. The I/O interface 2060 may be configured to implement any standard interface, such as small computer system interface ("SCSI"), serial attached SCSI ("SAS"), fibre channel, peripheral component interconnect ("PCI"), PCI Express (PCIe), serial bus, parallel bus, advanced technology attached ("ATA"), serial ATA ("SATA"), universal serial bus ("USB"), Thunderbolt, FireWire, various video buses, and the like. The I/O interface 2060 may be configured to implement only one interface or bus technology. Alternatively, the I/O interface 2060 may be configured to implement multiple interface or bus technologies. The I/O interface 2060 may be configured as part of the system bus 2020, all or operational with the system bus 2020. The I/O interface 2060 may comprise one or more buffers for buffering transmissions between one or more external devices, internal devices, the computing machine 2000, or the processor 2010.
The I/O interface 2060 may couple the computing machine 2000 to various input devices, including a mouse, a touch screen, a scanner, an electronic digitizer, a sensor, a receiver, a touchpad, a trackball, a camera, a microphone, a keyboard, any other pointing device, or any combination thereof. The I/O interface 2060 may couple the computing machine 2000 to various output devices, including video displays, speakers, printers, projectors, haptic feedback devices, automation controls, robotic components, actuators, motors, fans, solenoids, valves, pumps, transmitters, signal transmitters, lights, and so forth.
The computing machine 2000 may operate in a networked environment using logical connections to one or more other systems or computers on the network 2080 through a network interface 2070. The network 2080 may include a Wide Area Network (WAN), a Local Area Network (LAN), an intranet, the internet, a wireless access network, a wired network, a mobile network, a telephone network, an optical network, or a combination thereof. The network 2080 may be packet-switched, circuit-switched, have any topology, and may use any communication protocol. The communication links within the network 2080 may involve various digital or analog communication media such as fiber optic cables, free-space optics, waveguides, electrical conductors, wireless links, antennas, radio frequency communications, and so forth.
The processor 2010 may be coupled to the computing machine 2000 or other elements of the various peripheral devices discussed herein by a system bus 2020. It is to be appreciated that the system bus 2020 can be internal to the processor 2010, external to the processor 2010, or both. According to certain examples, the processor 2010, other elements of the computing machine 2000, or any of the various peripherals discussed herein may be integrated into a single device, such as a system on chip ("SOC"), system on package ("SOP"), or ASIC device.
Where the systems discussed herein collect personal information about a user, or may make use of personal information, the user may be provided with an opportunity or choice to control whether programs or functions collect user information (e.g., information about the user's social network, social actions or activities, profession, the user's preferences, or the user's current location), or to control whether and/or how to receive content from a content server that may be more relevant to the user. In addition, some data may be processed in one or more ways to remove personally identifiable information before it is stored or used. For example, the identity of the user may be processed so that no personally identifiable information can be determined for the user, or the user's geographic location may be generalized to where location information is obtained (such as to a city, ZIP code, or state level) so that a particular location of the user cannot be determined. Thus, the user may control how information about the user is collected and used by a content server.
Examples may include a computer program embodying the functionality described and illustrated herein, wherein the computer program is implemented in a computer system including instructions stored in a machine-readable medium and a processor executing the instructions. It will be apparent, however, that there are many different ways in which examples can be implemented in computer programming, and these examples should not be construed as limited to any one set of computer program instructions. Furthermore, a skilled programmer would be able to write such a computer program to implement the disclosed examples based on the accompanying flow charts and associated description in the application text. Therefore, it is not believed that a particular set of program code instructions need be disclosed for a sufficient understanding of how the examples are made and used. Furthermore, those skilled in the art will recognize that one or more aspects of the examples described herein may be performed by hardware, software, or a combination thereof, as may be embodied in one or more computing systems. Moreover, any reference to an action being performed by a computer should not be construed as being performed by a single computer, as more than one computer may perform the action.
The examples described herein may be used with computer hardware and software that perform the methods and processing functions described herein. The systems, methods, and procedures described herein may be embodied in a programmable computer, computer-executable software, or digital circuitry. The software may be stored on computer-readable media. For example, computer-readable media may include a floppy disk, RAM, ROM, a hard disk, removable media, flash memory, a memory stick, optical media, magneto-optical media, a CD-ROM, and the like. Digital circuitry may include integrated circuits, gate arrays, building block logic, field programmable gate arrays (FPGAs), and the like.
The example systems, methods, and acts described in the examples previously presented are illustrative, and in alternative examples some acts may be performed in a different order, in parallel with each other, omitted entirely, and/or combined between different examples, and/or some additional acts may be performed without departing from the scope and spirit of the various examples. Accordingly, such alternative examples are included within the scope of the appended claims, which scope should be accorded the broadest interpretation so as to encompass such alternative examples.
Although specific examples have been described in detail above, this description is for illustrative purposes only. It should be understood, therefore, that many of the aspects described above are not intended as required or essential elements unless explicitly described as such. In addition to the aspects described above, modifications of the disclosed aspects of the examples, and equivalents or acts corresponding thereto, may occur to those skilled in the art without departing from the spirit and scope of the examples as defined by the appended claims, which scope is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures.

Claims (26)

1. A computer-implemented method for dynamically masking an audio-based data transmission, comprising:
encoding, by a computing device, data to be transmitted as an audio file for audio-based transmission, wherein the encoded audio file produces sound that is audible to a human ear;
determining, by a computing device, frequency points of an encoded audio file;
determining, by a computing device, an amplitude of an encoded audio file;
creating, by the computing device, a masking sound file based on the determined frequency points and amplitudes of the encoded audio file, the masking sound file producing a masking sound; and
playing, by the computing device, the encoded audio file and a masking sound file, wherein the masking sound masks a sound produced by the encoded audio file.
2. The computer-implemented method of claim 1, wherein creating the masking sound file comprises applying a function using the determined frequency points and amplitudes of the encoded audio file.
3. The computer-implemented method of claim 1 or 2, wherein creating a masking sound file comprises encoding a separate audio file.
4. The computer-implemented method of any of claims 1-3, wherein the masking sound comprises lower frequency points than the encoded audio file.
5. The computer-implemented method of any of the preceding claims, wherein the masking sound frequency points and the frequency points of the encoded audio file comprise harmonic frequencies.
6. The computer-implemented method of claim 5, wherein masking sound frequencies are generated using multiples of the same fundamental frequency.
7. The computer-implemented method of claim 6, wherein multiples of 100 hertz are used to produce masking sound frequencies.
8. The computer-implemented method of any of the preceding claims, wherein the masking sound comprises a masking sound amplitude that is less than a threshold amplitude so as not to interfere with data transmission of the encoded audio file.
9. The computer-implemented method of any of the preceding claims, wherein the attribute of the masking sound depends on the data to be transmitted.
10. The computer-implemented method of any of the preceding claims, further comprising: combining, by the computing device, the encoded audio file and the masking sound file into a single sound file, wherein playing the encoded audio file and the masking sound file comprises playing the single sound file.
11. The computer-implemented method of any of claims 1-7, wherein playing the encoded audio file and the masking sound file comprises playing two separate sound files simultaneously.
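A brief sketch of the two playback variants: claim 10's single-file option mixes the encoded signal and the masker into one buffer before writing it out, whereas claim 11 would instead start two playback streams at the same moment. The helper below uses only the Python standard library and NumPy; the file name is illustrative.

```python
# Sketch of claim 10: combine the two signals into a single sound file.
import wave
import numpy as np

def write_combined_wav(path, encoded, mask, rate=44100):
    mixed = (encoded + mask) / 2.0                          # simple equal mix
    pcm = (np.clip(mixed, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(rate)
        f.writeframes(pcm.tobytes())

# write_combined_wav("masked_transfer.wav", encoded, mask)  # hypothetical usage
```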
12. The computer-implemented method of any of the preceding claims, wherein the encoded audio file produces an unpleasant sound that is audible to the human ear.
13. The computer-implemented method of any of the preceding claims, wherein the masking sound comprises an audibly pleasing sound to the human ear.
14. The computer-implemented method of any of the preceding claims, further comprising: determining, by the computing device, that the encoded audio file produces an unpleasant sound that is audible to the human ear.
15. The computer-implemented method of claim 14, wherein determining that the encoded audio file produces an unpleasant sound that is audible to the human ear comprises determining that at least one of a sound frequency, a sound amplitude, and a volume is outside a predetermined range of sound frequencies, sound amplitudes, or volumes.
16. The computer-implemented method of any of the preceding claims, further comprising: determining, by the computing device, that the masking sound comprises an audibly pleasing sound to the human ear.
17. The computer-implemented method of claim 16, wherein determining that the masking sound comprises an audibly pleasing sound to the human ear comprises determining that at least one of a frequency, an amplitude, and a volume of the sound is within a predetermined range of frequencies, amplitudes, or volumes of the sound.
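Claims 15 and 17 reduce "pleasant" and "unpleasant" to range tests. The sketch below shows one way such a test could look; the numeric ranges are invented for illustration, since the claims only require some predetermined ranges.

```python
# Assumed ranges -- the claims do not fix these values.
PLEASANT_FREQ_HZ = (200.0, 8000.0)
PLEASANT_AMPLITUDE = (0.05, 0.7)

def is_pleasant(dominant_freq_hz, peak_amplitude):
    # A sound passes only if every measured property falls inside its range.
    in_freq = PLEASANT_FREQ_HZ[0] <= dominant_freq_hz <= PLEASANT_FREQ_HZ[1]
    in_amp = PLEASANT_AMPLITUDE[0] <= peak_amplitude <= PLEASANT_AMPLITUDE[1]
    return in_freq and in_amp

is_pleasant(18500.0, 0.9)   # -> False: a near-ultrasonic data tone fails the test
```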
18. The computer-implemented method of any of the preceding claims, wherein the data to be transmitted is encoded in a sound wave via modulation, by varying one or more properties of a carrier sound wave.
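Claim 18 leaves the modulation scheme open. Purely as one example of varying a property of the carrier sound wave, the sketch below encodes bits with binary frequency-shift keying; the carrier frequencies and baud rate are assumptions.

```python
# Binary FSK as one illustrative modulation; parameters are assumed.
import numpy as np

def fsk_encode(bits, f0=18000.0, f1=18500.0, baud=100, rate=44100):
    # Map each bit to a short burst at one of two carrier frequencies.
    samples_per_bit = rate // baud
    t = np.arange(samples_per_bit) / rate
    return np.concatenate(
        [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

signal = fsk_encode([1, 0, 1, 1, 0])    # ~50 ms of audio at 100 baud
```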
19. A system for dynamically masking an audio-based data transmission, comprising:
a storage device; and
a processor communicatively coupled to the storage device, wherein execution of the application code instructions stored in the storage device by the processor causes the system to:
encoding data to be transmitted into an audio file for audio-based transmission, wherein the encoded audio file produces sound that is audible to the human ear;
determining frequency points of the encoded audio file;
determining an amplitude of the encoded audio file;
creating a masking sound file based on the determined frequency points and amplitudes of the encoded audio file, the masking sound file producing a masking sound; and
playing the encoded audio file and a masking sound file, wherein the masking sound masks a sound generated by the encoded audio file.
20. The system of claim 19, wherein the masking sound comprises lower frequency points than the encoded audio file, and wherein the masking sound frequency points and the frequency points of the encoded audio file comprise harmonic frequencies.
21. The system of claim 19 or 20, wherein the masking sound comprises a masking sound amplitude that is less than a threshold amplitude so as not to interfere with data transmission of the encoded audio file.
22. The system of any of claims 19-21, wherein the processor is further configured to execute application code instructions stored in the storage device to cause the system to combine the encoded audio file and the masking sound file into a single sound file, wherein playing the encoded audio file and the masking sound file comprises playing the single sound file.
23. A computer program product, comprising:
a non-transitory computer-readable storage device having computer-executable program instructions embodied therein that, when executed by a computer, cause the computer to dynamically mask audio-based data transmissions, the computer-readable program instructions comprising:
computer-readable program instructions to determine frequency points of an encoded audio file;
computer-readable program instructions to determine an amplitude of the encoded audio file;
computer readable program instructions to create a masking sound file based on the determined frequency points and amplitudes of the encoded audio file, the masking sound file producing a masking sound; and
computer readable program instructions to play the encoded audio file and a masking sound file, wherein the masking sound masks a sound produced by the encoded audio file.
24. The computer program product of claim 23, wherein the masking sound comprises lower frequency points than the encoded audio file, and wherein the masking sound frequency points and the frequency points of the encoded audio file comprise harmonic frequencies.
25. The computer program product of claim 23 or 24, wherein the computer-executable program instructions further cause the computer to combine the encoded audio file and the masking sound file into a single sound file, wherein playing the encoded audio file and the masking sound file comprises playing the single sound file.
26. The computer program product of claim 23 or 24, further comprising computer-readable program instructions to combine the encoded audio file and the masking sound file into a single sound file, wherein playing the encoded audio file and the masking sound file comprises playing the single sound file.
CN201880053363.8A 2017-08-16 2018-06-08 Dynamic audio data transmission masking Pending CN110998711A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201762546133P 2017-08-16 2017-08-16
US62/546,133 2017-08-16
PCT/US2018/036783 WO2019036092A1 (en) 2017-08-16 2018-06-08 Dynamic audio data transfer masking

Publications (1)

Publication Number Publication Date
CN110998711A 2020-04-10

Family

ID=62815149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880053363.8A Pending CN110998711A (en) 2017-08-16 2018-06-08 Dynamic audio data transmission masking

Country Status (2)

Country Link
CN (1) CN110998711A (en)
WO (1) WO2019036092A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113765609 2017-04-10 2021-12-07 Google LLC Mobile service request for any sound emitting device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100735557B1 * 2005-10-12 2007-07-04 Samsung Electronics Co., Ltd. Method and apparatus for disturbing voice signal by sound cancellation and masking
US8160271B2 (en) * 2008-10-23 2012-04-17 Continental Automotive Systems, Inc. Variable noise masking during periods of substantial silence
EP3005344A4 (en) * 2013-05-31 2017-02-22 Nokia Technologies OY An audio scene apparatus

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102652337A * 2009-12-10 2012-08-29 Samsung Electronics Co., Ltd. Device and method for acoustic communication
CN103189912A * 2010-10-21 2013-07-03 Yamaha Corporation Voice processor and voice processing method
US20140006017A1 * 2012-06-29 2014-01-02 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for generating obfuscated speech signal
CN104505096A * 2014-05-30 2015-04-08 South China University of Technology Method and device using music to transmit hidden information
CN107637095A * 2015-05-11 2018-01-26 Microsoft Technology Licensing, LLC Privacy-preserving, energy-efficient speaker for personal sound
CN205028649U * 2015-09-29 2016-02-10 Suzhou Yitian Acoustics Technology Co., Ltd. Multi-channel sound masking device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113593602A * 2021-07-19 2021-11-02 Shenzhen Leiniao Network Media Co., Ltd. Audio processing method and device, electronic equipment and storage medium
CN113593602B * 2021-07-19 2023-12-05 Shenzhen Leiniao Network Media Co., Ltd. Audio processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2019036092A1 (en) 2019-02-21

Similar Documents

Publication Publication Date Title
KR102660922B1 (en) Management layer for multiple intelligent personal assistant services
US20200008003A1 (en) Presence-based volume control system
CN108874144B (en) Sound-to-haptic effect conversion system using mapping
JP2020173821A (en) System for converting stream-independent sound to haptic effect
US20190196779A1 (en) Intelligent personal assistant interface system
US8874448B1 (en) Attention-based dynamic audio level adjustment
CN110062309B (en) Method and device for controlling intelligent loudspeaker box
TWI703877B (en) Audio processing device, audio processing method, and computer program product
US20180352359A1 (en) Remote personalization of audio
JP6906584B2 (en) Methods and equipment for waking up devices
US20180270175A1 (en) Method, apparatus, system, and non-transitory computer readable medium for chatting on mobile device using an external device
WO2020108102A1 (en) Vibration method, electronic device and storage medium
WO2019129127A1 (en) Method for multi-terminal cooperative playback of audio file and terminal
CN110998711A (en) Dynamic audio data transmission masking
CN110164443B (en) Voice processing method and device for electronic equipment and electronic equipment
US20170178636A1 (en) Method and electronic device for jointly playing high-fidelity sounds of multiple players
CN111066264B (en) Dynamic calibration for audio data transfer
US8494206B2 (en) Electronic device and method thereof
JP2022058215A (en) Method, computer program and computer system (voice command execution) for communicating between a plurality of computing devices based on voice command
CN111052639B (en) Audio-based service set identifier
CN112788004B (en) Method, device and computer readable medium for executing instructions by virtual conference robot
US11102606B1 (en) Video component in 3D audio
US11317289B2 (en) Audio communication tokens
CN113709506A (en) Multimedia playing method, device, medium and program product based on cloud mobile phone
CN109841224B (en) Multimedia playing method, system and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination