CN107182011B - Audio playing method and system, mobile terminal and WiFi earphone - Google Patents

Info

Publication number
CN107182011B
CN107182011B
Authority
CN
China
Prior art keywords
wifi
audio file
headset
volume
earphone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710602055.3A
Other languages
Chinese (zh)
Other versions
CN107182011A (en
Inventor
桂明建
张玉磊
鄢明智
曹军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Taihengnuo Technology Co ltd Shanghai Branch
Original Assignee
Shenzhen Taihengnuo Technology Co ltd Shanghai Branch
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Taihengnuo Technology Co ltd Shanghai Branch filed Critical Shenzhen Taihengnuo Technology Co ltd Shanghai Branch
Priority to CN201710602055.3A
Publication of CN107182011A
Application granted
Publication of CN107182011B
Legal status: Active

Classifications

    • H04R 3/12: Circuits for transducers, loudspeakers or microphones, for distributing signals to two or more loudspeakers
    • G06F 3/0346: Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • H04R 5/04: Stereophonic circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers
    • H04W 84/12: WLAN [Wireless Local Area Networks]
    • H04R 2420/01: Input selection or mixing for amplifiers or loudspeakers

Abstract

The application provides an audio playing method and system, a mobile terminal, and a WiFi earphone. The audio playing method includes: after the mobile terminal and the WiFi earphone are connected to the same WiFi network, acquiring the current position parameters of the WiFi earphone, performing algorithm fusion on the audio file of the mobile terminal and the current position parameters of the WiFi earphone, and playing the fused audio file through the WiFi earphone. When the position of the user's head changes, the audio file is fused with the earphone's current position parameters before being played through the WiFi earphone, so that the position of the sound source in the virtual scene simulated by the audio file remains fixed in the user's perception. This matches the way the user hears sound in the real environment, makes listening more comfortable, and improves the playing quality of the audio file.

Description

Audio playing method and system, mobile terminal and WiFi earphone
Technical Field
The present disclosure relates to the field of mobile communications technologies, and in particular, to an audio playing method and system, a mobile terminal, and a WiFi headset.
Background
At present, mobile terminals are becoming increasingly popular and are developing toward greater intelligence, integration, and functionality. Audio playback is an indispensable function of almost every mobile terminal. The terminal's performance during audio playback directly affects the user experience, which in turn affects the terminal's position in the market.
Existing mobile terminals play audio mainly in two ways: through a loudspeaker or through an earphone. When playing through a loudspeaker, several users can hear the audio at the same time, but the playing quality is easily degraded by environmental noise, the user's privacy is poor, and other people nearby are easily disturbed. When playing through an earphone, environmental noise does not affect the playing quality, the user's privacy is protected, and other people are not disturbed. However, whether an existing mobile terminal connects to an earphone by wire through the earphone jack or wirelessly through Bluetooth, one terminal corresponds to one wired earphone or one Bluetooth earphone; only one person can receive the audio played by the terminal, and the need for several people to listen at the same time cannot be met.
In addition, because the output cable of a wired earphone has a limited length, the user can move only within a small range around the mobile terminal, which greatly restricts movement while listening to audio. Bluetooth transmits only about ten meters in an open environment, so a Bluetooth earphone also limits the user's range of motion. Moreover, Bluetooth's transmission rate is limited: audio transmission over Bluetooth is inefficient and unstable, so the quality of audio played through a Bluetooth earphone is not high.
Therefore, how to improve the quality of audio played by a mobile terminal, and how to enable one mobile terminal to play audio through several earphones simultaneously, are among the problems that those skilled in the art need to solve.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present application is to provide an audio playing method and system, a mobile terminal, and a WiFi headset, which are used for solving the problem in the prior art that the quality of audio played by the mobile terminal is poor.
To achieve the above and other related objects, a first aspect of the present application provides an audio playing method, applied to a mobile terminal, including: under a WiFi network accessed by the mobile terminal, connecting with a WiFi earphone accessed to the same WiFi network; acquiring the current position parameters of the WiFi earphone and performing algorithm fusion on them with an audio file; and sending the fused audio file to the connected WiFi earphone through the WiFi network, so that the WiFi earphone receives the audio file and plays it.
In certain embodiments of the first aspect of the present application, under a WiFi network to which a mobile terminal is connected, the mobile terminal is connected to two or more WiFi headphones connected to the WiFi network.
In certain implementations of the first aspect of the present application, obtaining the current location parameter of the WiFi headset includes: receiving current position parameters of the WiFi headset from the WiFi headset through a WiFi network; the current position parameter is obtained through sensing of a position sensor arranged in the WiFi headset.
In certain embodiments of the first aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain implementations of the first aspect of the present application, the algorithmic fusion of the current location parameters of the WiFi headset with the audio file includes: calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset; calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
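The four fusion steps above can be sketched in Python. The patent specifies only the steps (calibrate, compute a position offset, derive left/right volume offsets, set the channel volumes); the constant-power cosine panning law and all parameter names below are illustrative assumptions, not the patent's formulas.

```python
import math

def fuse_position_with_audio(ref_yaw_deg, cur_yaw_deg, ref_volume,
                             left_samples, right_samples):
    """Illustrative sketch of the four-step algorithm fusion.

    ref_yaw_deg : reference head yaw captured when the sensor is calibrated
    cur_yaw_deg : current yaw reported by the headset's position sensor
    ref_volume  : reference play volume (0.0 .. 1.0)

    The cosine panning law here is an assumption; the patent only states
    that left/right volume offsets are derived from the position offset.
    """
    # Step 2: position offset parameter relative to the calibrated reference
    offset_deg = cur_yaw_deg - ref_yaw_deg

    # Step 3: left/right volume offsets from the position offset.
    # A constant-power pan: turning the head right (+offset) boosts the
    # left channel and attenuates the right, so the virtual sound source
    # stays fixed in the user's perception.
    pan = math.radians(offset_deg) / 2.0
    left_gain = ref_volume * math.cos(math.pi / 4 - pan)
    right_gain = ref_volume * math.cos(math.pi / 4 + pan)

    # Step 4: apply the per-channel volumes to form the fused audio
    fused_left = [s * left_gain for s in left_samples]
    fused_right = [s * right_gain for s in right_samples]
    return fused_left, fused_right
```

With zero offset both channels play at the reference level; turning the head right raises the left channel above the right, which is the behavior the patent attributes to the fused audio file.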
In a second aspect of the present application, an audio playing method is provided, which is applied to a mobile terminal, and includes: under a WiFi network accessed by the mobile terminal, connecting with a WiFi earphone accessed to the WiFi network; and sending the audio file to the connected WiFi earphone through a WiFi network so that the WiFi earphone can perform algorithm fusion on the audio file and the acquired current position parameter of the WiFi earphone and play the audio file.
In certain embodiments of the second aspect of the present application, under a WiFi network to which a mobile terminal is connected, the mobile terminal is connected to two or more WiFi headphones connected to the WiFi network.
In certain embodiments of the second aspect of the present application, the current position parameter of the WiFi headset is sensed by a position sensor disposed within the WiFi headset.
In certain embodiments of the second aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain embodiments of the second aspect of the present application, performing algorithm fusion on the audio file and the obtained current position parameter of the WiFi headset includes: calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset; calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
In a third aspect of the present application, an audio playing method is provided, which is applied to a WiFi headset, and includes: under a WiFi network accessed by the WiFi earphone, connecting with a mobile terminal accessed to the WiFi network; acquiring the current position parameter of the WiFi earphone, and transmitting the current position parameter of the WiFi earphone to the mobile terminal through a WiFi network so that the mobile terminal can carry out algorithm fusion on the current position parameter of the WiFi earphone and an audio file; and receiving the audio file subjected to the algorithm fusion from the mobile terminal through a WiFi network and playing the audio file.
In certain embodiments of the third aspect of the present application, under the WiFi network to which the WiFi headset is connected, two or more WiFi headsets are connected to a mobile terminal accessing the same WiFi network.
In certain embodiments of the third aspect of the present application, the current position parameter of the WiFi headset is obtained through a position sensor disposed in the WiFi headset.
In certain embodiments of the third aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain implementations of the third aspect of the present application, the performing, by the mobile terminal, the algorithm fusion of the current location parameter of the WiFi headset with the audio file includes: calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset; calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
In a fourth aspect of the present application, an audio playing method is provided, which is applied to a WiFi headset, and includes: under a WiFi network accessed by the WiFi earphone, connecting with a mobile terminal accessed to the WiFi network; receiving an audio file from the mobile terminal through a WiFi network; acquiring the current position parameter of the WiFi earphone, and carrying out algorithm fusion on the current position parameter of the WiFi earphone and the audio file; and playing the audio file subjected to algorithm fusion.
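A hypothetical Python sketch of the headset-side flow in this fourth aspect follows; all callables are illustrative stand-ins for the patent's modules (WiFi receiver, position sensor, fusion step, playing module), not a specified interface.

```python
def headset_play_loop(recv_frame, read_sensor, fuse, play, ref_params):
    """Hypothetical headset-side loop for the fourth-aspect method.

    recv_frame  : returns the next stereo frame received over the WiFi
                  network, or None when the stream ends
    read_sensor : returns the current position parameter from the
                  headset's built-in position sensor
    fuse        : the algorithm-fusion step (see the first-aspect sketch)
    play        : hands the fused frame to the audio playing module
    ref_params  : reference position/volume fixed at calibration
    """
    frames_played = 0
    while True:
        frame = recv_frame()                      # audio file over WiFi
        if frame is None:
            break
        position = read_sensor()                  # current position parameter
        play(fuse(frame, position, ref_params))   # fuse locally, then play
        frames_played += 1
    return frames_played
```

Reading the sensor once per frame keeps the fusion tracking head movement at the audio frame rate, which is the locally-fused variant the fourth aspect describes.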
In certain embodiments of the fourth aspect of the present application, under a WiFi network to which the WiFi headset is connected, two or more WiFi headsets are connected to a mobile terminal that is connected to the WiFi network.
In certain embodiments of the fourth aspect of the present application, the current position parameter of the WiFi headset is obtained through a position sensor disposed in the WiFi headset.
In certain embodiments of the fourth aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain embodiments of the fourth aspect of the present application, algorithmically fusing the current location parameters of the WiFi headset with the audio file includes: calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset; calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
In a fifth aspect of the present application, there is provided a mobile terminal, comprising: the first WiFi module is used for being connected with a WiFi earphone under the same WiFi network; the information receiving module is used for receiving the current position parameters of the WiFi earphone through the first WiFi module; the first audio file processing module is used for carrying out algorithm fusion on the current position parameters of the WiFi earphone and the audio file; and the audio sending module is used for sending the audio file subjected to algorithm fusion to the connected WiFi earphone through the first WiFi module so as to be played after the WiFi earphone receives the audio file.
In certain embodiments of the fifth aspect of the present application, the first WiFi module is connected to two or more WiFi headsets.
In certain embodiments of the fifth aspect of the present application, the current position parameter of the WiFi headset is sensed by a position sensor disposed within the WiFi headset.
In certain embodiments of the fifth aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain embodiments of the fifth aspect of the present application, the first audio file processing module includes: the first reference information acquisition unit is used for calibrating a position sensor in the WiFi earphone and determining a reference position parameter and a reference playing volume of the WiFi earphone; the first position deviation calculation unit is used for calculating the position deviation parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; the first volume offset calculation unit is used for calculating the left volume offset and the right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and the first audio volume calibration unit is used for determining the left channel volume and the right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
In certain embodiments of the fifth aspect of the present application, further comprising: and the first storage module is used for storing the audio file.
In a sixth aspect of the present application, there is provided a WiFi headset, comprising: the second WiFi module is used for being connected with the mobile terminal under the same WiFi network; the audio receiving module is used for receiving an audio file from the mobile terminal through the second WiFi module; the position sensor is used for acquiring the current position parameters of the WiFi earphone; the second audio file processing module is used for carrying out algorithm fusion on the current position parameters of the WiFi earphone and the audio files received from the mobile terminal through the second WiFi module; and the audio playing module is used for playing the audio file subjected to the algorithm fusion.
In certain embodiments of the sixth aspect of the present application, the position sensor is a gyroscope; alternatively, the position sensor is a gyroscope combined with an acceleration sensor.
In certain embodiments of the sixth aspect of the present application, the second audio file processing module includes: the second reference information acquisition unit is used for calibrating the position sensor and determining the reference position parameter and the reference play volume of the WiFi earphone; the second position deviation calculating unit is used for calculating the position deviation parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset; the second volume offset calculation unit is used for calculating the left volume offset and the right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and the second audio volume calibration unit is used for determining the left channel volume and the right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
In certain embodiments of the sixth aspect of the present application, the WiFi headset further includes: and the second storage module is used for storing the audio files received from the mobile terminal through the second WiFi module.
In a seventh aspect of the present application, an audio playing system is provided, including a mobile terminal and a WiFi headset that establish a WiFi network connection under the same WiFi network; the mobile terminal is used for: acquiring current position parameters of a WiFi headset through a WiFi network, carrying out algorithm fusion on the current position parameters of the WiFi headset and an audio file, and sending the audio file subjected to the algorithm fusion to the connected WiFi headset through the WiFi network; the WiFi headset is used for: and acquiring the current position parameters of the WiFi earphone, transmitting the current position parameters of the WiFi earphone to the mobile terminal through a WiFi network, acquiring the audio file subjected to algorithm fusion from the mobile terminal through the WiFi network, and playing the audio file.
In an eighth aspect of the present application, an audio playing system is provided, including a mobile terminal and a WiFi headset that establish a WiFi network connection under the same WiFi network; the mobile terminal is used for: sending an audio file to the connected WiFi headset through a WiFi network; the WiFi headset is used for: acquiring current position parameters of the WiFi earphone, carrying out algorithm fusion on the acquired current position parameters of the WiFi earphone and an audio file received from the mobile terminal through a WiFi network, and playing the audio file subjected to the algorithm fusion.
As described above, the audio playing method and system, the mobile terminal and the WiFi earphone have the following beneficial effects:
After the mobile terminal and the WiFi earphone are connected to the same WiFi network, the current position parameters of the WiFi earphone are acquired, the audio file of the mobile terminal is algorithmically fused with these position parameters, and the fused audio file is played through the WiFi earphone. Because the current position parameters of the WiFi earphone track the head position of the user wearing it, they change whenever the user's head position changes. Fusing the audio file with the current position parameters before playback keeps the position of the sound source in the virtual scene simulated by the audio file fixed in the user's perception, which matches the way the user hears sound in the real environment, makes listening more comfortable, and improves the playing quality of the audio file. Moreover, because a WiFi network has a higher data transmission rate, transmitting the audio file over WiFi is more stable than the existing method of transmitting it over Bluetooth, and playback is smoother.
Further, when the mobile terminal is connected to two or more WiFi headsets under the same WiFi network, its audio file can be played in several WiFi headsets at the same time. Compared with playing the audio file through a loudspeaker so that several users can listen together, this scheme effectively protects the users' privacy and avoids disturbing other people in the environment.
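The terminal-side loop serving several connected headsets might be sketched as below. This is an illustrative sketch only: the callable-per-headset structure and all names are assumptions, standing in for the patent's WiFi module, information receiving module, audio file processing module, and audio sending module.

```python
def serve_audio_frame(frame, headsets):
    """Serve one shared audio frame to every connected WiFi headset.

    headsets maps a headset id to three illustrative callables:
      get_position : fetch that headset's current position parameter
                     (received over the WiFi network)
      send         : transmit the fused audio frame back to the headset
      fuse         : the algorithm-fusion step for that headset
    """
    for headset_id, (get_position, send, fuse) in headsets.items():
        position = get_position()     # per-headset position parameter
        send(fuse(frame, position))   # personalised, fused audio back out
```

Because fusion is performed per headset, each listener receives channel volumes matched to their own head position even though all headsets share one audio file.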
Furthermore, performing the algorithm fusion of the current position parameters of the WiFi earphone with the audio file specifically includes: calibrating the position sensor in the WiFi earphone and determining the reference play volume and reference position parameters of the WiFi earphone; calculating the position offset parameter of the WiFi earphone from the reference position parameter and the current position parameter; calculating the left volume offset and the right volume offset of the WiFi earphone from the position offset parameter; and finally calibrating the audio file according to the reference play volume, the left volume offset and the right volume offset to form the fused audio file. In this way, the volumes of the left and right earpieces are adjusted as the user's head position changes, which adjusts the position of the sound source in the simulated scene formed in the user's perception. For audio played together with images, the user can accurately judge the sound source position in the virtual scene, and, as in a real scene, the perceived source position changes correspondingly when the user's position changes. This makes listening more realistic, improves the accuracy of the scene simulation, and gives the user a better listening experience.
Drawings
Fig. 1 is a schematic structural diagram of a mobile terminal according to the present application.
Fig. 2 is a schematic structural diagram of a first audio file processing module in fig. 1 of the present application.
Fig. 3 is a schematic structural diagram of a WiFi headset according to the present application.
Fig. 4 is a schematic structural diagram of a second audio file processing module in fig. 3 of the present application.
Fig. 5 is a flowchart of an embodiment of an audio playing method of the present application.
Fig. 6 is a schematic flow chart of step S13 in fig. 5.
Fig. 7 is a flowchart of another embodiment of an audio playing method according to the present application.
Fig. 8 is a flowchart of another embodiment of an audio playing method according to the present application.
Fig. 9 is a schematic flow chart of a further embodiment of the audio playing method of the present application.
Fig. 10 is a schematic structural diagram of an embodiment of an audio playing system of the present application.
Fig. 11 is a schematic structural diagram of another embodiment of the audio playing system of the present application.
Detailed Description
Further advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein, in which the embodiments of the present application are described by way of specific examples.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments is defined only by the claims of the issued patent. Spatially relative terms, such as "upper," "lower," "left," "right," and the like, may be used herein to describe one element's or feature's relationship to another element or feature as illustrated in the figures.
Although the terms first, second, etc. may be used herein to describe various elements in some examples, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, the first preset threshold may be referred to as the second preset threshold, and similarly the second preset threshold may be referred to as the first preset threshold, without departing from the scope of the various described embodiments. The first preset threshold and the second preset threshold are both preset thresholds, but unless the context clearly indicates otherwise they are not the same preset threshold. The same applies to a first volume and a second volume.
Furthermore, as used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context indicates otherwise. As used herein, a list such as "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; or A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps or operations is in some way inherently mutually exclusive.
For purposes of this application, an "audio file" describes an electrical representation of sound that, when played through an earphone, is converted into air pressure waves audible to the user.
In this application, the mobile terminal includes, but is not limited to, a notebook computer, a tablet computer, a mobile phone, a smart phone, a media player, a personal digital assistant (PDA), and the like, as well as combinations of two or more of these. It should be understood that the mobile terminal described in the embodiments of the present application is only one example; a mobile terminal may have more or fewer components than illustrated, or a different configuration of components. The illustrated components may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits. In the specific embodiments of the present application, a smart phone is taken as the example of the mobile terminal.
The mobile terminal includes memory, a memory controller, one or more Central Processing Units (CPUs), peripheral interfaces, RF circuits, audio circuits, speakers, microphones, input/output (I/O) subsystems, touch screens, other output or control devices, and external ports. The components communicate via one or more communication buses or signal lines. The mobile terminal also includes a power supply system for powering the various components. The power system may include a power management system, one or more power sources (e.g., battery, ac), a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., light emitting diode, LED), and any other components associated with power generation, management, and distribution in the portable device.
The mobile terminal supports various applications such as one or more of the following: drawing applications, presentation applications, word processing applications, website creation applications, disk editing applications, spreadsheet applications, gaming applications, telephony applications, video conferencing applications, email applications, instant messaging applications, fitness support applications, photo management applications, digital camera applications, digital video camera applications, web browsing applications, digital music player applications, and/or digital video player applications.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a mobile terminal of the present application, and as shown in the drawing, the mobile terminal 10 includes a first WiFi module 11, an information receiving module 12, a first audio file processing module 13, and an audio transmitting module 14. The first WiFi module 11 is used for connecting with a WiFi headset under the same WiFi network. The information receiving module 12 is configured to receive, via the first WiFi module 11, a current position parameter of the WiFi headset. The first audio file processing module 13 is configured to algorithmically fuse the current position parameter of the WiFi headset with an audio file. The audio sending module 14 is configured to send the audio file subjected to the algorithm fusion to the connected WiFi headset through the first WiFi module 11, so that the WiFi headset receives the audio file and plays the audio file.
The audio files in the mobile terminal 10 may be independent audio information (e.g., music, recordings, etc.), or may be audio information combined with other information (e.g., audio information in a movie multimedia file), and the audio information in the audio file may be in a format in which the mobile terminal stores audio, such as MP3 format, WMA format, WAV format, ASF format, AAC format, VQF format, FLAC format, APE format, OGG format, etc.
In this embodiment, the first WiFi module 11 is a wireless network card supporting a wireless communication standard protocol, operating at the bottom two layers of the OSI model: the physical layer and the data link layer. The physical layer defines the electrical and optical signals, line states, clock bases, data encoding and circuitry required for data transmission and reception, and provides a standard interface to data link layer devices; the physical-layer chip is called the PHY. The data link layer provides the addressing mechanism, data frame construction, data error checking and transmission control, and presents a standard data interface to the network layer; in Ethernet, the data-link-layer chip is called the MAC controller. In many network cards the PHY and the MAC controller are connected together: the PCI bus connects to the MAC controller, the MAC controller connects to the PHY, and the PHY connects to the network.
In this embodiment, the first audio file processing module 13 may be a Central Processing Unit (CPU), which may communicate with the first WiFi module 11 via one or more PCI (Peripheral Component Interconnect) buses. It should be understood that the mobile terminal of fig. 1 is merely an example and may have more or fewer components than shown, or a different configuration of components. The various components shown in fig. 1 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
In this embodiment, the current position parameter of the WiFi headset is obtained by a position sensor disposed in the WiFi headset. Specifically, the position sensor is a gyroscope, for example a three-axis gyroscope. The three-axis gyroscope is a core sensitive device of an inertial navigation system; its chief role is to measure angular velocity and thereby judge the motion state of an object, so it is also called a motion sensor, and it suits rotation detection that requires high resolution and quick response. When the user moves only within a small range, the gyroscope can position the WiFi headset accurately at low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor may be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems. It measures acceleration: when an object is static, gravity produces components on the three coordinate axes; quantifying these components and applying trigonometric functions yields the object's inclination angles relative to the three axes. The gyroscope, in turn, measures the angular velocity of motion about one or more axes, so the two sensors complement each other. Used in combination, they track and capture the complete motion of the WiFi headset in three-dimensional space, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
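The complementary roles described above can be sketched as follows. This is a minimal illustration of gravity-based tilt recovery and gyroscope/accelerometer fusion, not the patent's actual algorithm; the function names and the 0.98 blend factor are assumptions.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Recover pitch and roll (radians) from the gravity components the
    static accelerometer reports on its three axes, using trigonometry."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def complementary_filter(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse the gyroscope (accurate over short intervals, but drifts) with
    the accelerometer (noisy, but drift-free) into one angle estimate."""
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

With the device at rest and gravity entirely on the z-axis, both tilt angles come out zero; the filter then blends each new gyroscope reading against that accelerometer baseline.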
In a specific implementation scenario in which a user listens to an audio file on the mobile terminal through the WiFi headset, the current position parameter obtained from the position sensor determines the angle and direction through which the user's head has rotated, and the distance and direction through which it has moved. From the virtual sound source position of the audio file in the user's brain, and from the change in distance between each of the left and right earpieces and that sound source caused by the head movement, the volumes of the left and right earpieces of the WiFi headset are adjusted separately, so that the volume change matches the change in distance between each earpiece and the sound source. The listening scene of the audio file is thus simulated realistically.
In this embodiment, after the mobile terminal and the WiFi headset are connected under the same WiFi network through the first WiFi module 11, the current position parameter of the WiFi headset is obtained through the information receiving module 12; the audio file of the mobile terminal and the current position parameter are then algorithmically fused by the first audio file processing module 13; finally the fused audio file is sent through the audio sending module 14 and played by the WiFi headset. Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the parameter changes whenever the head position changes. By fusing the audio file with the current position parameter before playback, the sound source position of the virtual scene that the audio file simulates in the user's brain stays fixed, which matches how the user hears in the real world, makes listening more comfortable, and improves the playing quality of the audio file. Moreover, because a WiFi network has a higher data transmission rate, transmitting the audio file over WiFi is more stable than the existing method of transmitting it over Bluetooth, and playback is smoother.
With continued reference to fig. 1, in another embodiment, the first WiFi module 11 in fig. 1 is connected to two or more WiFi headsets, so that the audio file of the mobile terminal 10 can be played in multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to the same audio file at the same time, the audio playing scheme of this embodiment effectively protects the users' privacy and avoids disturbing others in the environment.
Referring to fig. 2, a schematic structural diagram of the first audio file processing module in fig. 1 of the present application is shown, and as shown in the drawing, the first audio file processing module 13 includes: a first reference information acquisition unit 131, a first positional deviation calculation unit 132, a first volume deviation calculation unit 133, and a first audio volume calibration unit 134. Wherein:
the first reference information obtaining unit 131 is configured to calibrate a position sensor in the WiFi headset, and determine a reference position parameter and a reference play volume of the WiFi headset.
The first position offset calculating unit 132 is configured to calculate a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset.
The first volume offset calculating unit 133 is configured to calculate a left volume offset and a right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset.
The first audio volume calibration unit 134 is configured to determine a left channel volume and a right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi headset, so as to form an audio file after algorithm fusion.
In this embodiment, the first reference information obtaining unit 131 calibrates the position sensor in the WiFi headset and determines the reference position parameter and the reference play volume of the WiFi headset; next, the first position offset calculating unit 132 calculates the position offset parameter of the WiFi headset from the reference position parameter and the current position parameter; then, the first volume offset calculating unit 133 calculates the left volume offset and the right volume offset of the WiFi headset from the position offset parameter; finally, the first audio volume calibration unit 134 calibrates the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi headset, forming the algorithm-fused audio file. In this way the volumes of the left and right earpieces are adjusted as the user's head position changes, which adjusts the sound source position of the simulated scene formed in the user's brain. For audio playback that accompanies image viewing, the user can accurately judge the sound source position in the virtual scene, consistent with the real-world scene in which the sound source changes correspondingly relative to the user when the user's position changes; the listening experience is therefore more realistic, the accuracy of the audio file's scene simulation improves, and the user's listening experience is better.
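As an illustration only, the chain of units 131–134 might map a head-rotation offset to channel volumes as below. The linear mapping and the yaw-only position parameter are assumptions; the patent does not specify the fusion formula.

```python
import math

def position_offset(reference_yaw_deg, current_yaw_deg):
    # Unit 132: offset of the current head orientation from the
    # calibrated reference orientation (yaw only, for simplicity).
    return current_yaw_deg - reference_yaw_deg

def volume_offsets(offset_deg):
    # Unit 133: turning the head right moves the virtual source toward
    # the left ear, so the left channel gains what the right loses.
    k = math.sin(math.radians(offset_deg))
    return k, -k  # (left_volume_offset, right_volume_offset)

def calibrate(reference_volume, offset_deg):
    # Unit 134: apply the offsets to the reference play volume and
    # clamp each channel to the valid 0..1 range.
    left_off, right_off = volume_offsets(offset_deg)
    clamp = lambda v: max(0.0, min(1.0, v))
    return (clamp(reference_volume * (1.0 + left_off)),
            clamp(reference_volume * (1.0 + right_off)))
```

With no head rotation both channels stay at the reference play volume; a 90° turn pushes the full signal into one ear under this simplified law.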
In this embodiment, the audio file of the mobile terminal is derived from a real-time audio file acquired by the APP in the mobile terminal. In another embodiment, the mobile terminal of fig. 1 may further include a first storage module (not shown), where the first storage module is configured to store the audio file. At this time, the audio file of the mobile terminal is derived from the audio file stored in the mobile terminal.
Referring to fig. 3, fig. 3 is a schematic structural diagram of a WiFi headset according to the present application. As shown in fig. 3, the WiFi headset 20 includes: a second WiFi module 21, an audio receiving module 22, a second audio file processing module 23, a position sensor 24 and an audio playing module 25. Wherein,
the second WiFi module 21 is configured to connect to a mobile terminal under the same WiFi network.
The audio receiving module 22 is configured to receive an audio file from the mobile terminal through the second WiFi module.
The position sensor 24 is used to obtain the current position parameters of the WiFi headset.
The second audio file processing module 23 is configured to algorithmically fuse the current position parameter of the WiFi headset with an audio file received from the mobile terminal through the second WiFi module.
The audio playing module 25 is used for playing the audio file subjected to the algorithm fusion.
In this embodiment, the second WiFi module 21 is a wireless network card supporting a wireless communication standard protocol, operating at the bottom two layers of the OSI model: the physical layer and the data link layer. The physical layer defines the electrical and optical signals, line states, clock bases, data encoding and circuitry required for data transmission and reception, and provides a standard interface to data link layer devices; the physical-layer chip is called the PHY. The data link layer provides the addressing mechanism, data frame construction, data error checking and transmission control, and presents a standard data interface to the network layer; in Ethernet, the data-link-layer chip is called the MAC controller. In many network cards the PHY and the MAC controller are connected together: the PCI bus connects to the MAC controller, the MAC controller connects to the PHY, and the PHY connects to the network.
In this embodiment, the second audio file processing module 23 may be a Central Processing Unit (CPU), which may communicate with the second WiFi module 21 via one or more PCI (Peripheral Component Interconnect) buses.
In this embodiment, the position sensor 24 is a gyroscope, for example a three-axis gyroscope. The three-axis gyroscope is a core sensitive device of an inertial navigation system; its chief role is to measure angular velocity and thereby judge the motion state of an object, so it is also called a motion sensor, and it suits rotation detection that requires high resolution and quick response. When the user moves only within a small range, the gyroscope can position the WiFi headset accurately at low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor 24 may be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems. It measures acceleration: when an object is static, gravity produces components on the three coordinate axes; quantifying these components and applying trigonometric functions yields the object's inclination angles relative to the three axes. The gyroscope, in turn, measures the angular velocity of motion about one or more axes, so the two sensors complement each other. Used in combination, they track and capture the complete motion of the WiFi headset in three-dimensional space, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
In this embodiment, after the mobile terminal and the WiFi headset are connected under the same WiFi network through the second WiFi module, the audio receiving module 22 receives an audio file from the mobile terminal through the second WiFi module, the position sensor 24 obtains the current position parameter of the WiFi headset, the second audio file processing module 23 algorithmically fuses that parameter with the received audio file, and finally the audio playing module 25 plays the fused audio file. As a result, the sound source position of the virtual scene simulated by the audio file in the user's brain stays fixed, which matches how the user hears in the real world, makes listening more comfortable, and improves the playing quality of the audio file. Moreover, because a WiFi network has a higher data transmission rate, transmitting the audio file over WiFi is more stable than the existing method of transmitting it over Bluetooth, and playback is smoother.
Fig. 4 is a schematic structural diagram of the second audio file processing module in fig. 3. As shown in fig. 4, the second audio file processing module 23 includes: a second reference information acquisition unit 231, a second position offset calculation unit 232, a second volume offset calculation unit 233, and a second audio volume calibration unit 234. Wherein,
the second reference information acquiring unit 231 is configured to calibrate the position sensor 24, and determine a reference position parameter and a reference play volume of the WiFi headset.
The second position offset calculating unit 232 is configured to calculate a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset.
The second volume offset calculating unit 233 is configured to calculate a left volume offset and a right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset.
The second audio volume calibration unit 234 is configured to determine a left channel volume and a right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi headset, so as to form an audio file after algorithm fusion.
In this embodiment, the second audio file processing module 23 algorithmically fuses the current position parameter of the WiFi headset with the audio file as follows: the second reference information acquisition unit 231 calibrates the position sensor in the WiFi headset and determines the reference play volume and the reference position parameter of the WiFi headset; next, the second position offset calculating unit 232 calculates the position offset parameter of the WiFi headset from the reference position parameter and the current position parameter; then, the second volume offset calculating unit 233 calculates the left volume offset and the right volume offset of the WiFi headset from the position offset parameter; finally, the second audio volume calibration unit 234 calibrates the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi headset, forming the algorithm-fused audio file. In this way the volumes of the left and right earpieces are adjusted as the user's head position changes, which adjusts the sound source position of the simulated scene formed in the user's brain. For audio playback that accompanies image viewing, the user can accurately judge the sound source position in the virtual scene, consistent with the real-world scene in which the sound source changes correspondingly relative to the user when the user's position changes; the listening experience is therefore more realistic, the accuracy of the audio file's scene simulation improves, and the user's listening experience is better.
In another embodiment, the WiFi headset further includes a second storage module (not shown). The second storage module is used for storing the audio files received from the mobile terminal through the second WiFi module.
Referring to fig. 5, a flowchart of an embodiment of an audio playing method of the present application is shown, which is applied to a mobile terminal. As shown in the figure, the audio playing method includes:
step S11, under the WiFi network accessed by the mobile terminal, connecting with a WiFi headset that has accessed the same WiFi network.
Step S12, acquiring current position parameters of the WiFi headset.
Step S13, carrying out algorithm fusion on the current position parameters of the WiFi headset and the audio file.
Step S14, sending the audio file subjected to algorithm fusion to the connected WiFi headset through the WiFi network, so that the WiFi headset receives and plays the audio file.
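Once the S11 connection exists, steps S12–S14 could be sketched as below. The wire format (three big-endian floats for yaw/pitch/roll, a length-prefixed audio frame) is purely an assumption for illustration; the patent does not define a protocol.

```python
import struct

def stream_frames(sock, frames, fuse):
    """For each audio frame: read the headset's current position (S12),
    algorithmically fuse it with the frame (S13), and send the fused
    frame back to the headset, length-prefixed (S14)."""
    for frame in frames:
        raw = sock.recv(12)                                  # S12
        yaw, pitch, roll = struct.unpack("!fff", raw)
        fused = fuse(frame, yaw, pitch, roll)                # S13
        sock.sendall(struct.pack("!I", len(fused)) + fused)  # S14
```

Taking the socket as a parameter keeps the S11 connection step separate and makes the loop easy to exercise against a stand-in connection object.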
In this embodiment, step S12 obtains the current position parameters of the WiFi headset, including: receiving current position parameters of the WiFi headset from the WiFi headset through a WiFi network; the current position parameter is obtained through sensing of a position sensor arranged in the WiFi headset.
Specifically, the position sensor is a gyroscope; alternatively, the position sensor is a combination of a gyroscope and an acceleration sensor. In this embodiment, the position sensor is a gyroscope, for example a three-axis gyroscope. The three-axis gyroscope is a core sensitive device of an inertial navigation system; its chief role is to measure angular velocity and thereby judge the motion state of an object, so it is also called a motion sensor, and it suits rotation detection that requires high resolution and quick response. When the user moves only within a small range, the gyroscope can position the WiFi headset accurately at low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor may be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems. It measures acceleration: when an object is static, gravity produces components on the three coordinate axes; quantifying these components and applying trigonometric functions yields the object's inclination angles relative to the three axes. The gyroscope, in turn, measures the angular velocity of motion about one or more axes, so the two sensors complement each other. Used in combination, they track and capture the complete motion of the WiFi headset in three-dimensional space, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
In this embodiment, after the mobile terminal and the WiFi headset establish a connection under the same WiFi network, the current position parameter of the WiFi headset is obtained first, the audio file of the mobile terminal and the current position parameter are then algorithmically fused, and the fused audio file is played through the WiFi headset. Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the parameter changes whenever the head position changes. By fusing the audio file with the current position parameter before playback, the sound source position of the virtual scene that the audio file simulates in the user's brain stays fixed, which matches how the user hears in the real world, makes listening more comfortable, and improves the playing quality of the audio file. Moreover, because a WiFi network has a higher data transmission rate, transmitting the audio file over WiFi is more stable than the existing method of transmitting it over Bluetooth, and playback is smoother.
In another embodiment, under the WiFi network to which the mobile terminal is connected, the mobile terminal connects to two or more WiFi headsets on the same network, so that the audio file of the mobile terminal can be played in multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to the same audio file at the same time, this audio playing scheme effectively protects the users' privacy and avoids disturbing others in the environment.
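Playing one audio file on several connected headsets is then just a fan-out over the per-headset connections. A minimal sketch, assuming each connection is a socket-like object with a `sendall` method:

```python
def broadcast(frame, headset_socks):
    """Send the same fused audio frame to every connected WiFi headset,
    letting several users listen simultaneously without a loudspeaker."""
    for sock in headset_socks:
        sock.sendall(frame)
```

A production version would also drop disconnected headsets from the list instead of letting one failed `sendall` stop the fan-out.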
Referring to fig. 6, a flow chart of step S13 in fig. 5 is shown. As shown in the figure, in step S13, performing algorithm fusion on the current position parameter of the WiFi headset and the audio file specifically includes:
step S131, calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset.
Step S132, calculating the position deviation parameter of the WiFi earphone according to the reference position parameter and the current position parameter of the WiFi earphone.
Step S133, calculating the left volume offset and the right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset.
Step S134, the left channel volume and the right channel volume of the audio file are determined according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone, and the audio file after algorithm fusion is formed.
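Steps S133–S134 admit many concrete volume laws; one common choice is a constant-power pan, sketched below under the assumption that the position offset reduces to a single yaw angle. This is an illustrative substitute, not the patent's formula.

```python
import math

def channel_volumes(reference_volume, yaw_offset_deg):
    """Constant-power pan: total acoustic power stays fixed while
    shifting between ears. A head turned right (positive yaw) puts the
    virtual source nearer the left ear, so the left channel rises."""
    pan = -math.sin(math.radians(yaw_offset_deg))  # -1 = full left
    angle = (pan + 1.0) * math.pi / 4.0            # 0 .. pi/2
    left = reference_volume * math.cos(angle)
    right = reference_volume * math.sin(angle)
    return left, right
```

Unlike the simple linear offset, this law keeps the sum of squared channel volumes constant, so perceived loudness does not dip or spike as the head turns.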
In this embodiment, the position sensor in the WiFi headset is first calibrated, determining the reference position parameter and the reference play volume of the WiFi headset; the position offset parameter of the WiFi headset is then calculated from the reference position parameter and the current position parameter; next, the left volume offset and the right volume offset of the WiFi headset are calculated from the position offset parameter; finally, the audio file is calibrated according to the reference play volume, the left volume offset and the right volume offset of the WiFi headset, forming the algorithm-fused audio file. In this way the volumes of the left and right earpieces are adjusted as the user's head position changes, which adjusts the sound source position of the simulated scene formed in the user's brain. For audio playback that accompanies image viewing, the user can accurately judge the sound source position in the virtual scene, consistent with the real-world scene in which the sound source changes correspondingly relative to the user when the user's position changes; the listening experience is therefore more realistic, the accuracy of the audio file's scene simulation improves, and the user's listening experience is better.
Referring to fig. 7, a flowchart of another embodiment of the audio playing method of the present application is shown. As shown in the figure, an audio playing method is provided, which is applied to a mobile terminal and includes:
Step S21, under the WiFi network accessed by the mobile terminal, connecting with a WiFi headset that has accessed the same WiFi network.
Step S22, the audio file is sent to the connected WiFi earphone through the WiFi network, so that the WiFi earphone can perform algorithm fusion on the audio file and the acquired current position parameters of the WiFi earphone and play the audio file.
In this embodiment, under the WiFi network to which the mobile terminal is connected, the mobile terminal connects to a WiFi headset on the same network and sends an audio file to it through the WiFi network, so that the WiFi headset algorithmically fuses the audio file with its own current position parameter and then plays it. Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the parameter changes whenever the head position changes. By fusing the audio file with the current position parameter before playback, the sound source position of the virtual scene that the audio file simulates in the user's brain stays fixed, which matches how the user hears in the real world, makes listening more comfortable, and improves the playing quality of the audio file. Moreover, because a WiFi network has a higher data transmission rate, transmitting the audio file over WiFi is more stable than the existing method of transmitting it over Bluetooth, and playback is smoother.
In another embodiment, under the WiFi network to which the mobile terminal is connected, the mobile terminal connects to two or more WiFi headsets on the same network, so that the audio file of the mobile terminal can be played in multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to the same audio file at the same time, this audio playing scheme effectively protects the users' privacy and avoids disturbing others in the environment.
In yet another embodiment, the current position parameter of the WiFi headset is obtained by a position sensor disposed within the WiFi headset. The position sensor is a gyroscope; alternatively, it is a combination of a gyroscope and an acceleration sensor. In this embodiment, the position sensor is a gyroscope, for example a three-axis gyroscope. The three-axis gyroscope is a core sensitive device of an inertial navigation system; its chief role is to measure angular velocity and thereby judge the motion state of an object, so it is also called a motion sensor, and it suits rotation detection that requires high resolution and quick response. When the user moves only within a small range, the gyroscope can position the WiFi headset accurately at low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor may be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems. It measures acceleration: when an object is static, gravity produces components on the three coordinate axes; quantifying these components and applying trigonometric functions yields the object's inclination angles relative to the three axes. The gyroscope, in turn, measures the angular velocity of motion about one or more axes, so the two sensors complement each other. Used in combination, they track and capture the complete motion of the WiFi headset in three-dimensional space, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
In yet another embodiment, algorithmically fusing the audio file with the acquired current position parameter of the WiFi headset includes: calibrating a position sensor in the WiFi headset and determining a reference position parameter and a reference playback volume of the WiFi headset; calculating a position offset parameter of the WiFi headset from the reference position parameter and the current position parameter of the WiFi headset; calculating a left volume offset and a right volume offset of the WiFi headset from the position offset parameter of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file from the reference playback volume, the left volume offset and the right volume offset of the WiFi headset to form the algorithmically fused audio file.
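A minimal sketch of these four steps in Python. The mapping from position offset to volume offsets is not specified in the patent, so the sine panning rule, the function name, and the parameter names below are illustrative assumptions only:

```python
import math

def fuse_volume(ref_yaw_deg, cur_yaw_deg, ref_volume):
    """Map the headset's yaw offset from its calibrated reference
    position to per-channel volumes (illustrative model only)."""
    # Step 2: position offset parameter = current minus reference position.
    offset = cur_yaw_deg - ref_yaw_deg
    # Step 3: left/right volume offsets; a simple sine panning law is
    # assumed here - turning the head right boosts the left channel,
    # which keeps the virtual sound source fixed in space.
    pan = math.sin(math.radians(offset))
    left_offset = +0.5 * pan * ref_volume
    right_offset = -0.5 * pan * ref_volume
    # Step 4: final channel volumes from reference volume plus offsets,
    # clamped to a valid [0, 1] range.
    left = min(max(ref_volume + left_offset, 0.0), 1.0)
    right = min(max(ref_volume + right_offset, 0.0), 1.0)
    return left, right

# Head at the reference position: both channels at the reference volume.
print(fuse_volume(0.0, 0.0, 0.6))  # (0.6, 0.6)
# Head turned 30 degrees to the right: left channel becomes louder.
left, right = fuse_volume(0.0, 30.0, 0.6)
print(left > right)  # True
```

The calibration step (step 1) would simply record `ref_yaw_deg` and `ref_volume` when the user faces the nominal sound source.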
Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the current position parameter changes when the user's head position changes. When the head position changes, the audio file is algorithmically fused with the current position parameter of the WiFi headset and then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file. Moreover, because the data transmission rate of a WiFi network is higher, transmitting the audio file over the WiFi network is more stable than the existing practice of transmitting it over Bluetooth, and playback is smoother.
Referring to fig. 8, a flowchart of another embodiment of the audio playing method is shown. As shown in the figure, an audio playing method is provided, which is applied to a WiFi headset, and includes:
Step S31: under the WiFi network accessed by the WiFi headset, connect to a mobile terminal that has accessed the same WiFi network.
Step S32: acquire the current position parameter of the WiFi headset and send it to the mobile terminal over the WiFi network, so that the mobile terminal can algorithmically fuse the current position parameter of the WiFi headset with an audio file.
Step S33: receive the algorithmically fused audio file from the mobile terminal over the WiFi network and play it.
In this embodiment, under the WiFi network accessed by the WiFi headset, the WiFi headset connects to a mobile terminal that has accessed the WiFi network. The WiFi headset acquires its current position parameter and sends it to the mobile terminal over the WiFi network, so that the mobile terminal can algorithmically fuse the current position parameter with the audio file. Finally, the WiFi headset receives the algorithmically fused audio file from the mobile terminal over the WiFi network and plays it. Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the current position parameter changes when the user's head position changes. When the head position changes, the audio file is algorithmically fused with the current position parameter of the WiFi headset and then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file. Moreover, because the data transmission rate of a WiFi network is higher, transmitting the audio file over the WiFi network is more stable than the existing practice of transmitting it over Bluetooth, and playback is smoother.
In another embodiment, under the WiFi network accessed by the WiFi headset, two or more WiFi headsets connect to a mobile terminal that has accessed the WiFi network, so that the audio file of the mobile terminal can be played on multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to it at the same time, this audio playing scheme effectively protects user privacy and avoids disturbing others in the environment.
In yet another embodiment, the current position parameter of the WiFi headset is obtained by a position sensor disposed within the WiFi headset. Specifically, the position sensor is a gyroscope; alternatively, the position sensor is a combination of a gyroscope and an acceleration sensor.
In this embodiment, the position sensor is a gyroscope, for example a tri-axis gyroscope. The tri-axis gyroscope is a core sensitive device of an inertial navigation system; its primary role is to measure angular velocity in order to determine the motion state of an object, so it is also called a motion sensor and is suitable for rotation detection that requires high resolution and fast response. When the user moves within a small range, the gyroscope can accurately position the WiFi headset at relatively low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor may also be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems and measures acceleration: when an object is static, gravity produces components on the three coordinate axes; by quantifying these gravity components and applying trigonometric functions, the tilt angles of the object relative to the three coordinate axes can be calculated. The gyroscope, in turn, measures the angular velocity of motion about one or more axes and thus complements the acceleration sensor. When the two sensors are used in combination, the complete three-dimensional motion of the WiFi headset can be tracked and captured more fully, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
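One common way to realize the complementary combination described above is a complementary filter: the gyroscope's fast but drifting angle integral is blended with the accelerometer's slow but absolute tilt estimate. The sketch below is an illustrative assumption (the patent does not name a fusion algorithm); the function name, the 0.98 blend weight, and the sample data are invented for demonstration.

```python
import math

def complementary_filter(pitch_deg, gyro_rate_dps, accel, dt, alpha=0.98):
    """Fuse one gyroscope axis with an accelerometer tilt estimate.

    gyro_rate_dps : angular velocity about the pitch axis (deg/s)
    accel         : (ax, ay, az) gravity components at rest (g)
    alpha         : weight of the fast-but-drifting gyro integral
    """
    ax, ay, az = accel
    # Accelerometer gives an absolute but noisy tilt estimate.
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    # Gyro integration is smooth and responsive but drifts over time.
    gyro_pitch = pitch_deg + gyro_rate_dps * dt
    # Blend: gyro dominates short-term, accelerometer corrects long-term.
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch

# Start with a 10-degree estimation error; with no rotation and level
# gravity, the accelerometer term pulls the estimate back toward 0.
pitch = 10.0
for _ in range(200):
    pitch = complementary_filter(pitch, 0.0, (0.0, 0.0, 1.0), 0.01)
print(pitch < 1.0)  # True: the drift error has been corrected away
```

This is why the combined sensors track large movements better than a gyroscope alone: the accelerometer continuously removes the integration drift.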
In yet another embodiment, the algorithmic fusion of step S32, in which the mobile terminal fuses the current position parameter of the WiFi headset with an audio file, includes: calibrating a position sensor in the WiFi headset and determining a reference position parameter and a reference playback volume of the WiFi headset; calculating a position offset parameter of the WiFi headset from the reference position parameter and the current position parameter of the WiFi headset; calculating a left volume offset and a right volume offset of the WiFi headset from the position offset parameter of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file from the reference playback volume, the left volume offset and the right volume offset of the WiFi headset to form the algorithmically fused audio file. In this way, the volumes of the left and right earpieces of the WiFi headset are adjusted as the user's head position changes, which adjusts the sound source position of the simulated scene formed in the user's mind. For audio playback that accompanies image viewing, the user can accurately judge the sound source position in the virtual scene, and that position changes relative to the user just as a real sound source would when the user moves, so the listening experience is more realistic and the accuracy of the audio file's scene simulation is improved.
Referring to fig. 9, a flowchart of still another embodiment of the audio playing method of the present application is shown. As shown in the figure, an audio playing method is provided, which is applied to a WiFi headset, and includes:
Step S41: under the WiFi network accessed by the WiFi headset, connect to a mobile terminal that has accessed the same WiFi network.
Step S42: receive an audio file from the mobile terminal over the WiFi network.
Step S43: acquire the current position parameter of the WiFi headset and algorithmically fuse it with the audio file.
Step S44: play the algorithmically fused audio file.
In this embodiment, under the WiFi network accessed by the WiFi headset, the WiFi headset connects to a mobile terminal that has accessed the WiFi network; receives an audio file from the mobile terminal over the WiFi network; acquires its current position parameter and algorithmically fuses it with the audio file; and plays the algorithmically fused audio file.
Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the current position parameter changes when the user's head position changes. When the head position changes, the audio file is algorithmically fused with the current position parameter of the WiFi headset and then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file. Moreover, because the data transmission rate of a WiFi network is higher, transmitting the audio file over the WiFi network is more stable than the existing practice of transmitting it over Bluetooth, and playback is smoother.
In another embodiment, under the WiFi network accessed by the mobile terminal, the mobile terminal connects to two or more WiFi headsets that have accessed the WiFi network, so that the audio file of the mobile terminal can be played on multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to it at the same time, this audio playing scheme effectively protects user privacy and avoids disturbing others in the environment.
In yet another embodiment, the current position parameter of the WiFi headset is obtained by a position sensor disposed within the WiFi headset. The position sensor is a gyroscope; alternatively, the position sensor is a combination of a gyroscope and an acceleration sensor. In this embodiment, the position sensor is a gyroscope, for example a tri-axis gyroscope. The tri-axis gyroscope is a core sensitive device of an inertial navigation system; its primary role is to measure angular velocity in order to determine the motion state of an object, so it is also called a motion sensor and is suitable for rotation detection that requires high resolution and fast response. When the user moves within a small range, the gyroscope can accurately position the WiFi headset at relatively low cost, and the acquired current position parameter of the WiFi headset is accurate.
In other embodiments, the position sensor may also be a combination of a gyroscope and an acceleration sensor. The acceleration sensor is one of the basic measuring elements of inertial navigation and inertial guidance systems and measures acceleration: when an object is static, gravity produces components on the three coordinate axes; by quantifying these gravity components and applying trigonometric functions, the tilt angles of the object relative to the three coordinate axes can be calculated. The gyroscope, in turn, measures the angular velocity of motion about one or more axes and thus complements the acceleration sensor. When the two sensors are used in combination, the complete three-dimensional motion of the WiFi headset can be tracked and captured more fully, so the current position parameter of the WiFi headset can be determined accurately even when the user moves over a large range.
In yet another embodiment, algorithmically fusing the audio file with the acquired current position parameter of the WiFi headset includes: calibrating a position sensor in the WiFi headset and determining a reference position parameter and a reference playback volume of the WiFi headset; calculating a position offset parameter of the WiFi headset from the reference position parameter and the current position parameter of the WiFi headset; calculating a left volume offset and a right volume offset of the WiFi headset from the position offset parameter of the WiFi headset; and determining the left channel volume and the right channel volume of the audio file from the reference playback volume, the left volume offset and the right volume offset of the WiFi headset to form the algorithmically fused audio file.
Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the current position parameter changes when the user's head position changes. When the head position changes, the audio file is algorithmically fused with the current position parameter of the WiFi headset and then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file. Moreover, because the data transmission rate of a WiFi network is higher, transmitting the audio file over the WiFi network is more stable than the existing practice of transmitting it over Bluetooth, and playback is smoother.
Referring to fig. 10, a schematic structural diagram of an embodiment of an audio playing system of the present application is shown. As shown, the audio playing system 1 includes a mobile terminal 10 and a WiFi headset 30 that establish a WiFi network connection under the same WiFi network;
the mobile terminal 10 is configured to obtain a current position parameter of a WiFi headset through a WiFi network, perform algorithm fusion on the current position parameter of the WiFi headset and an audio file, and send the audio file after the algorithm fusion to the connected WiFi headset through the WiFi network;
The WiFi headset 30 is configured to obtain a current position parameter of the WiFi headset and send the current position parameter of the WiFi headset to the mobile terminal through a WiFi network, and obtain the audio file after the algorithm fusion from the mobile terminal through the WiFi network and play the audio file.
In this embodiment, the current position parameter of the WiFi headset 30 may be obtained by a position sensor (not shown) disposed in the WiFi headset 30. The position sensor is a gyroscope, or a combination of a gyroscope and an acceleration sensor. For the method of algorithmically fusing the current position parameter of the WiFi headset with the audio file, reference may be made to the corresponding parts of the other embodiments, which are not repeated here.
In this embodiment, one WiFi headset is connected to the mobile terminal in the audio playing system 1, but the number is not limited thereto: more than one WiFi headset may be connected, meeting the need for simultaneous listening by multiple users.
In the audio playing system of this embodiment, when the user's head position changes, the mobile terminal 10 algorithmically fuses the audio file with the current position parameter of the WiFi headset, and the fused audio file is then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file.
Referring to fig. 11, a schematic structural diagram of another embodiment of the audio playing system of the present application is shown. As shown, an audio playing system 2 includes a mobile terminal 40 and WiFi headsets 50, 60 and 70 that establish WiFi connections under the same WiFi network. The mobile terminal 40 is configured to send an audio file to the connected WiFi headsets over the WiFi network. The WiFi headsets 50, 60 and 70 are configured to obtain their respective current position parameters, algorithmically fuse those parameters with the audio file received from the mobile terminal 40 over the WiFi network, and play the fused audio file.
In this embodiment, three WiFi headsets are connected to the mobile terminal 40 in the audio playing system 2, allowing three users to listen to the audio at the same time. The number is not limited thereto: there may be one, two, or four or more WiFi headsets.
In this embodiment, the current position parameters of WiFi headsets 50, 60 and 70 are acquired by position sensors (not shown) disposed in the respective headsets. The position sensor is a gyroscope, or a combination of a gyroscope and an acceleration sensor. For the method of algorithmically fusing the current position parameter of a WiFi headset with the audio file, reference may be made to the corresponding parts of the other embodiments, which are not repeated here.
In the audio playing system of this embodiment, when a user's head position changes, WiFi headsets 50, 60 and 70 each algorithmically fuse the audio file with their own current position parameters and then play it. Because each headset corresponds to a different user position, the fused audio file has different volumes in the left and right earpieces of each WiFi headset, so the sound source position of the virtual scene simulated by the audio file remains fixed in each user's mind. This matches the way each user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file.
The audio playing system of the present embodiment is described below with reference to specific scenarios.
Assume that the users wearing WiFi headset 50, WiFi headset 60 and WiFi headset 70 are named A, B and C, respectively. While the audio plays, user A is resting in bed, user B is tidying the room, and user C is dancing. The specific working process of the audio playing system 2 is as follows:
First, the mobile terminal 40 accesses a WiFi network, searches for WiFi headsets under that network, and connects with the headsets it finds. Specifically, the mobile terminal 40 may connect to the WiFi headsets under the WiFi network through an APP on the mobile terminal 40 or by sharing a hotspot.
Next, the mobile terminal 40 transmits the audio file to WiFi headsets 50, 60 and 70, worn by users A, B and C respectively. During playback, user A's position does not change, user B's position changes occasionally, and user C's position changes continuously.
While headset 50 plays the audio file, the volumes of its left and right earpieces remain unchanged. When headset 60 plays the audio file and the user's position changes, the volumes of the left and right earpieces change correspondingly. When headset 70 plays the audio file, the user's position changes continuously, and the volumes of the left and right earpieces change continuously with it.
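The three behaviors above can be sketched as independent per-headset volume updates. This is an illustrative assumption only: the patent does not specify the fusion rule, so the simple sine panning function, the reference volume of 0.5, and the sample yaw offsets for users A, B and C are invented for demonstration.

```python
import math

# Illustrative sketch: each WiFi headset in the system applies its own
# position-based volume fusion independently of the others.
def channel_volumes(ref_vol, yaw_offset_deg):
    # Assumed sine panning rule: a larger head-position offset shifts
    # more of the reference volume toward one earpiece.
    pan = math.sin(math.radians(yaw_offset_deg))
    return ref_vol * (1 + 0.5 * pan), ref_vol * (1 - 0.5 * pan)

# User A rests (zero offset), B moves a little, C dances (large offset).
headsets = {"A": 0.0, "B": 15.0, "C": 60.0}
for user, yaw in headsets.items():
    left, right = channel_volumes(0.5, yaw)
    print(user, round(left, 3), round(right, 3))
```

User A's left and right volumes stay equal, while B's and C's diverge in proportion to how far each head position has moved from its reference.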
In summary, the audio playing method and system, the mobile terminal and the WiFi earphone have the following beneficial effects:
After the mobile terminal and the WiFi headset are connected under the same WiFi network, the current position parameter of the WiFi headset is acquired; then the mobile terminal or the WiFi headset algorithmically fuses the audio file of the mobile terminal with the current position parameter of the WiFi headset, and the fused audio file is played through the WiFi headset. Because the current position parameter of the WiFi headset corresponds to the head position of the user wearing it, the current position parameter changes when the user's head position changes. When the head position changes, the audio file is algorithmically fused with the current position parameter of the WiFi headset and then played through the WiFi headset, so that the sound source position of the virtual scene simulated by the audio file remains fixed in the user's mind. This matches the way the user hears in the real world, makes listening more comfortable, and improves the playback quality of the audio file. Moreover, because the data transmission rate of a WiFi network is higher, transmitting the audio file over the WiFi network is more stable than the existing practice of transmitting it over Bluetooth, and playback is smoother.
Further, the mobile terminal connects with two or more WiFi headsets under the same WiFi network, so that the audio file of the mobile terminal can be played on multiple WiFi headsets simultaneously. Compared with playing the audio file through a loudspeaker so that multiple users listen to it at the same time, this audio playing scheme effectively protects user privacy and avoids disturbing others in the environment.
Furthermore, the algorithmic fusion of the current position parameter of the WiFi headset with the audio file to form the fused audio file specifically includes: first calibrating the position sensor in the WiFi headset and determining the reference position parameter and reference playback volume of the WiFi headset; then calculating the position offset parameter of the WiFi headset from the reference position parameter and the current position parameter; then calculating the left volume offset and right volume offset of the WiFi headset from the position offset parameter; and finally determining the left channel volume and right channel volume of the audio file from the reference playback volume, the left volume offset and the right volume offset to form the algorithmically fused audio file. In this way, the volumes of the left and right earpieces of the WiFi headset are adjusted as the user's head position changes, which adjusts the sound source position of the simulated scene formed in the user's mind. For audio playback that accompanies image viewing, the user can accurately judge the sound source position in the virtual scene, and that position changes relative to the user just as a real sound source would when the user moves, so the listening experience is more realistic and the accuracy of the audio file's scene simulation is improved.
The foregoing embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Any person skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the present application. Accordingly, all equivalent modifications and variations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (9)

1. An audio playing method applied to a mobile terminal is characterized by comprising the following steps:
under the WiFi network accessed by the mobile terminal, the mobile terminal is connected with two or more WiFi earphones that have accessed the WiFi network;
acquiring a current position parameter of the WiFi earphone, and carrying out algorithm fusion on the current position parameter of the WiFi earphone and an audio file to adjust the left channel volume and the right channel volume of the audio file, wherein the current position parameter is used for representing the rotation angle and direction of the head of a user and the moving distance and direction; and
the audio file subjected to algorithm fusion is sent to the connected WiFi earphone through a WiFi network so that the WiFi earphone can receive the audio file and play the audio file;
The step of algorithmically fusing the current position parameters of the WiFi headset with the audio file further includes:
calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset;
calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and
and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
2. The audio playing method according to claim 1, wherein,
obtaining the current position parameter of the WiFi headset comprises the following steps: receiving current position parameters of the WiFi headset from the WiFi headset through a WiFi network; the current position parameter is obtained through sensing of a position sensor arranged in the WiFi headset.
3. An audio playing method applied to a WiFi earphone is characterized by comprising the following steps:
under the WiFi network accessed by the WiFi earphone, two or more WiFi earphones are connected with a mobile terminal that has accessed the WiFi network;
receiving an audio file from the mobile terminal through a WiFi network;
acquiring current position parameters of the WiFi earphone, and carrying out algorithm fusion on the current position parameters of the WiFi earphone and the audio file to adjust the left channel volume and the right channel volume of the audio file, wherein the current position parameters are used for representing the rotating angle and direction of the head of a user and the moving distance and direction; and
playing the audio file subjected to algorithm fusion;
the step of algorithmically fusing the current position parameters of the WiFi headset with the audio file further includes:
calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference play volume of the WiFi headset;
calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
calculating left volume offset and right volume offset of the WiFi headset according to the position offset parameters of the WiFi headset; and
and determining the left channel volume and the right channel volume of the audio file according to the reference playing volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
4. The audio playing method according to claim 3, wherein the current position parameter of the WiFi headset is obtained through a position sensor disposed in the WiFi headset.
5. A mobile terminal, comprising:
the first WiFi module is used for connecting with two or more WiFi headphones under the same WiFi network;
the information receiving module is used for receiving the current position parameter of the WiFi earphone through the first WiFi module, wherein the current position parameter is used for representing the rotating angle and direction of the head of the user and the moving distance and direction;
the first audio file processing module is used for carrying out algorithm fusion on the current position parameter of the WiFi earphone and the audio file so as to adjust the left channel volume and the right channel volume of the audio file; and
the audio sending module is used for sending the audio file subjected to algorithm fusion to the connected WiFi earphone through the first WiFi module so as to be played after the WiFi earphone receives the audio file;
the first audio file processing module includes:
the first reference information acquisition unit is used for calibrating a position sensor in the WiFi earphone and determining a reference position parameter and a reference playing volume of the WiFi earphone;
The first position deviation calculation unit is used for calculating the position deviation parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
the first volume offset calculation unit is used for calculating the left volume offset and the right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and
and the first audio volume calibration unit is used for determining the left channel volume and the right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
6. The mobile terminal of claim 5, wherein the current location parameter of the WiFi headset is sensed by a location sensor disposed within the WiFi headset.
7. A WiFi headset, comprising:
the second WiFi module is used for connecting with the mobile terminal under the same WiFi network; under the WiFi network accessed by the mobile terminal, the mobile terminal is connected with two or more WiFi headphones that have accessed the WiFi network;
the audio receiving module is used for receiving an audio file from the mobile terminal through the second WiFi module;
The position sensor is used for acquiring current position parameters of the WiFi earphone, wherein the current position parameters are used for representing the rotation angle and direction of the head of the user and the movement distance and direction;
the second audio file processing module is used for carrying out algorithm fusion on the current position parameter of the WiFi earphone and the audio file received from the mobile terminal through the second WiFi module so as to adjust the left channel volume and the right channel volume of the audio file; and
the audio playing module is used for playing the audio files subjected to algorithm fusion;
wherein the second audio file processing module includes:
the second reference information acquisition unit is used for calibrating the position sensor and determining the reference position parameter and the reference play volume of the WiFi earphone;
the second position deviation calculating unit is used for calculating the position deviation parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
the second volume offset calculation unit is used for calculating the left volume offset and the right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and
and the second audio volume calibration unit is used for determining the left channel volume and the right channel volume of the audio file according to the reference play volume, the left volume offset and the right volume offset of the WiFi earphone to form the audio file after algorithm fusion.
8. An audio playing system, comprising a mobile terminal and two or more WiFi headsets, wherein the mobile terminal and the two or more WiFi headsets are connected to the same WiFi network;
the mobile terminal is configured to: acquire the current position parameter of a WiFi headset through the WiFi network, perform algorithm fusion on the current position parameter of the WiFi headset and an audio file to adjust the left-channel volume and the right-channel volume of the audio file, and send the algorithm-fused audio file to the connected WiFi headset through the WiFi network, wherein the current position parameter represents the rotation angle and direction, and the movement distance and direction, of the user's head;
the WiFi headset is configured to: acquire its current position parameter, transmit the current position parameter to the mobile terminal through the WiFi network, acquire the algorithm-fused audio file from the mobile terminal through the WiFi network, and play the audio file;
wherein the step of performing algorithm fusion on the current position parameter of the WiFi headset and the audio file further comprises:
calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference playback volume of the WiFi headset;
calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
calculating a left volume offset and a right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and
determining the left-channel volume and the right-channel volume of the audio file according to the reference playback volume, the left volume offset, and the right volume offset of the WiFi headset, thereby forming the algorithm-fused audio file.
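The four fusion steps recited above (calibrate a reference, compute the position offset, derive per-channel volume offsets, apply them to the reference volume) can be sketched as follows. The patent does not disclose the actual offset formulas, so the sine-based panning mapping, the function name `fuse_position_with_audio`, and all parameter names below are illustrative assumptions only, not the claimed implementation.

```python
import math

def fuse_position_with_audio(current_yaw_deg, base_yaw_deg, base_volume,
                             left_sample, right_sample):
    """Illustrative sketch of the claimed 'algorithm fusion'.

    Derives left/right channel gains from the head's rotation away from
    a calibrated reference position and applies them to one stereo sample.
    """
    # Step 2: position offset = current sensor reading minus the
    # reference established at calibration time.
    offset_deg = current_yaw_deg - base_yaw_deg
    # Step 3: map the rotation to opposite left/right volume offsets
    # (assumed mapping: turning the head right boosts the left channel
    # and attenuates the right, proportional to the sine of the angle).
    pan = math.sin(math.radians(offset_deg))
    left_offset = 0.5 * pan * base_volume
    right_offset = -0.5 * pan * base_volume
    # Step 4: apply the reference playback volume plus the per-channel
    # offsets, clamping so gains never go negative.
    left_gain = max(0.0, base_volume + left_offset)
    right_gain = max(0.0, base_volume + right_offset)
    return left_sample * left_gain, right_sample * right_gain
```

With the head at the calibrated position the offsets vanish and both channels play at the reference volume; as the head turns, the channel volumes diverge, which is the behavior the claims describe.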
9. An audio playing system, comprising a mobile terminal and two or more WiFi headsets, wherein the mobile terminal and the two or more WiFi headsets are connected to the same WiFi network;
the mobile terminal is configured to: send an audio file to the connected WiFi headset through the WiFi network;
the WiFi headset is configured to: acquire its current position parameter, perform algorithm fusion on the acquired current position parameter and the audio file received from the mobile terminal through the WiFi network to adjust the left-channel volume and the right-channel volume of the audio file, and play the algorithm-fused audio file, wherein the current position parameter represents the rotation angle and direction, and the movement distance and direction, of the user's head;
wherein the step of performing algorithm fusion on the current position parameter of the WiFi headset and the audio file further comprises:
calibrating a position sensor in the WiFi headset, and determining a reference position parameter and a reference playback volume of the WiFi headset;
calculating a position offset parameter of the WiFi headset according to the reference position parameter and the current position parameter of the WiFi headset;
calculating a left volume offset and a right volume offset of the WiFi headset according to the position offset parameter of the WiFi headset; and
determining the left-channel volume and the right-channel volume of the audio file according to the reference playback volume, the left volume offset, and the right volume offset of the WiFi headset, thereby forming the algorithm-fused audio file.
CN201710602055.3A 2017-07-21 2017-07-21 Audio playing method and system, mobile terminal and WiFi earphone Active CN107182011B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710602055.3A CN107182011B (en) 2017-07-21 2017-07-21 Audio playing method and system, mobile terminal and WiFi earphone

Publications (2)

Publication Number Publication Date
CN107182011A CN107182011A (en) 2017-09-19
CN107182011B (en) 2024-04-05

Family

ID=59838569

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710602055.3A Active CN107182011B (en) 2017-07-21 2017-07-21 Audio playing method and system, mobile terminal and WiFi earphone

Country Status (1)

Country Link
CN (1) CN107182011B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111050271B (en) * 2018-10-12 2021-01-29 北京微播视界科技有限公司 Method and apparatus for processing audio signal
CN111224693B (en) * 2019-11-27 2021-08-06 展讯通信(上海)有限公司 Audio data transmission method and device for wireless earphone, storage medium and terminal
CN112399306B (en) * 2020-11-16 2022-05-31 联想(北京)有限公司 Control method and device
CN112565973B (en) * 2020-12-21 2023-08-01 Oppo广东移动通信有限公司 Terminal, terminal control method, device and storage medium
CN112351364B (en) * 2021-01-04 2021-04-16 深圳千岸科技股份有限公司 Voice playing method, earphone and storage medium

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20060112566A (en) * 2005-04-27 2006-11-01 주식회사 팬택 Wireless telecommunication headset which having the neck microphone for mobile terminal
JP2008092193A (en) * 2006-09-29 2008-04-17 Japan Science & Technology Agency Sound source selecting device
CN101662720A (en) * 2008-08-26 2010-03-03 索尼株式会社 Sound processing apparatus, sound image localized position adjustment method and video processing apparatus
CN102082991A (en) * 2010-11-24 2011-06-01 蔡庸成 Method specially designed for earphone audition and used for simulating field holographic audio frequency
WO2012151839A1 (en) * 2011-07-18 2012-11-15 中兴通讯股份有限公司 Method and device for automatically adjusting sound volume
CN104661081A (en) * 2015-03-18 2015-05-27 飞狐信息技术(天津)有限公司 Audio data transmission method and device
CN104994237A (en) * 2015-07-15 2015-10-21 北京译随行资讯有限公司 Audio access method, equipment and WIFI headphone
CN105049977A (en) * 2015-07-30 2015-11-11 努比亚技术有限公司 Automatic earphone volume adjusting method and device
CN105657609A (en) * 2016-01-26 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Method and device for playing control of bone conduction headsets and bone conduction headset equipment
JP2016146576A (en) * 2015-02-09 2016-08-12 角元 純一 Measuring method and measuring tool and correction method of reproduction characteristics of earphone and application program of measurement and application program of correction
CN106027809A (en) * 2016-07-27 2016-10-12 维沃移动通信有限公司 Volume adjusting method and mobile terminal
CN106341546A (en) * 2016-09-29 2017-01-18 广东欧珀移动通信有限公司 Audio playing method, device and mobile terminal
CN106888419A (en) * 2015-12-16 2017-06-23 华为终端(东莞)有限公司 The method and apparatus for adjusting earpiece volume
CN208079373U (en) * 2017-07-21 2018-11-09 深圳市泰衡诺科技有限公司上海分公司 Audio frequency broadcast system, mobile terminal, WiFi earphones

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8559651B2 (en) * 2011-03-11 2013-10-15 Blackberry Limited Synthetic stereo on a mono headset with motion sensing
US10129682B2 (en) * 2012-01-06 2018-11-13 Bacch Laboratories, Inc. Method and apparatus to provide a virtualized audio file




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant