CN107707742B - Audio file playing method and mobile terminal - Google Patents


Info

Publication number
CN107707742B
Authority
CN
China
Prior art keywords
preset
audio file
user
sound
audio signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710832388.5A
Other languages
Chinese (zh)
Other versions
CN107707742A (en)
Inventor
孙逊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201710832388.5A priority Critical patent/CN107707742B/en
Publication of CN107707742A publication Critical patent/CN107707742A/en
Application granted granted Critical
Publication of CN107707742B publication Critical patent/CN107707742B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72442User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Stereophonic System (AREA)
  • Telephone Function (AREA)

Abstract

The invention provides an audio file playing method and a mobile terminal, relating to the technical field of electronic devices. The method includes: receiving, while a first audio file is playing, a switching instruction to switch to a second audio file; in a first time period from the current time to a preset switching time, controlling a sound source to move away from the user along a preset first path, performing first signal processing on the audio signal of the first audio file, and playing the processed audio signal; and in a preset second time period after the switching time, controlling the sound source to approach the user along a preset second path, performing second signal processing on the audio signal of the second audio file, and playing the processed audio signal. The invention solves the problem that existing electronic devices apply only a single, uniform sound effect when switching between audio files.

Description

Audio file playing method and mobile terminal
Technical Field
The invention relates to the technical field of electronic equipment, in particular to an audio file playing method and a mobile terminal.
Background
With the development of electronic device technology, users increasingly depend on various electronic devices, for example for communication or for playing music. To improve the listening experience, playback devices often provide various built-in sound effects for the user to select. However, when the user switches audio files, the sound-effect processing applied by the electronic device is limited: usually only a common fade-out/fade-in is applied to the audio files, which does little to improve the user experience or to convey a sense of technological sophistication.
Disclosure of Invention
The invention provides an audio file playing method and a mobile terminal, aiming to solve the problem that existing electronic devices apply only a single sound effect during audio file switching.
In one aspect, an embodiment of the present invention provides an audio file playing method, where the method includes:
receiving, while a first audio file is playing, a switching instruction to switch to a second audio file;
in a first time period from the current time to a preset switching time, controlling a sound source to move away from the user along a preset first path, thereby performing first signal processing on the audio signal of the first audio file, and playing the processed audio signal;
and in a preset second time period after the switching time, controlling the sound source to approach the user along a preset second path, thereby performing second signal processing on the audio signal of the second audio file, and playing the processed audio signal.
On the other hand, an embodiment of the present invention further provides a mobile terminal, including:
an instruction receiving module, configured to receive, while a first audio file is playing, a switching instruction to switch to a second audio file;
a first processing module, configured to, in a first time period from the current time to a preset switching time, control a sound source to move away from the user along a preset first path, perform first signal processing on the audio signal of the first audio file, and play the processed audio signal;
and a second processing module, configured to, in a preset second time period after the switching time, control the sound source to approach the user along a preset second path, perform second signal processing on the audio signal of the second audio file, and play the processed audio signal.
In another aspect, an embodiment of the present invention further provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the above audio file playing method.
In still another aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the audio file playing method are implemented.
Therefore, with the audio file playing method and the mobile terminal provided by the invention, when a switching instruction to switch to a second audio file is received while a first audio file is playing, the sound source is controlled, in the first time period from the current time to the switching time, to move away from the user along a preset first path while first signal processing is performed on the audio signal of the first audio file, so that the user perceives the first audio file as receding in space. In the second time period after the switching time, the sound source is controlled to approach the user along a preset second path while second signal processing is performed on the audio signal of the second audio file, so that the user perceives the second audio file as approaching. Sound-effect processing is thereby added during audio file switching, creating a spatial sense of transition and improving the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating an audio file playing method according to an embodiment of the present invention;
FIG. 2 shows a first exemplary scenario diagram of an embodiment of the present invention;
FIG. 3 shows a second exemplary scenario diagram of an embodiment of the present invention;
FIG. 4 shows a third exemplary scenario diagram of an embodiment of the present invention;
FIG. 5 shows a fourth exemplary scenario diagram of an embodiment of the present invention;
FIG. 6 illustrates one of the block diagrams of a mobile terminal of an embodiment of the present invention;
fig. 7 shows a block diagram of an electronic device provided by a third example of an embodiment of the invention;
fig. 8 is a second block diagram of a mobile terminal according to an embodiment of the present invention;
fig. 9 is a third block diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an audio file playing method, including:
step 101, when a first audio file is played, a switching instruction for switching to a second audio file is received.
Here, the audio file may be music or another audio file. While the mobile terminal plays the first audio file, it receives a switching instruction to switch to playing a second audio file, such as an instruction to play the next song. The switching instruction may be actively triggered by the user or automatically generated by the mobile terminal; for example, the mobile terminal automatically generates the switching instruction when it detects that the first audio file is about to finish playing.
And step 102, in a first time period from the current time to a preset switching time, controlling a sound source to perform first signal processing on an audio signal of the first audio file according to a mode that a preset first path is far away from a user, and playing the processed audio signal.
When the switching instruction is received at the current time, a preset switching time is determined as the end of a first time period after the current time. In this first time period, from the current time to the switching time, the audio signal of the first audio file is played after first signal processing. Specifically, the first signal processing controls the sound source to move away from the user along a preset first path, i.e., it simulates the sound effect of a sound source receding along the preset first path, so that the user perceives the first audio file as moving away in auditory space.
And 103, in a preset second time period after the switching moment, controlling a sound source to perform second signal processing on the audio signal of the second audio file according to a mode that a preset second path is close to the user, and playing the processed audio signal.
At the switching time, playback switches to the second audio file, and in a preset second time period after the switching time, the audio signal of the second audio file is played after second signal processing. Specifically, the second signal processing controls the sound source to approach the user along a preset second path, i.e., it simulates the sound effect of a sound source approaching along the preset second path, so that the user perceives the second audio file as drawing closer in auditory space.
Specifically, the first signal processing and the second signal processing are mainly realized by virtual auditory reproduction, an important experimental technique in binaural hearing research. The method processes signals with a head-related transfer function (HRTF): based on the localization mechanism of spatial hearing, the original sound signal is filtered with an HRTF so that the listener's sense of the sound-source position is emphasized and the spatial information of the sound signal is reproduced. By adjusting the HRTF as needed, sound can be reproduced from any direction: a small number of real loudspeakers can virtualize a desired sound-source position, yielding a good sense of space and reproducing a virtual sound image in three-dimensional space, while simplifying the system, saving cost and space, and being convenient to implement.
The HRTF captures the filtering effect of the outer ears, head, torso, and so on upon sound arriving from different directions during sound-wave propagation. It contains not only spectral characteristics but also interaural difference characteristics, and is therefore an important directional cue. After the HRTF for a given direction is convolved with the original sound and played back to the listener's ears in a suitable way, the listener perceives the directional effect of a free sound field. The essence of the pinna effect is to change the spectral characteristics of sound arriving from different spatial directions, so the whole auditory system acts like a filter that treats sound from different directions differently, allowing the ear to perceive differences in sound-source direction. This overall processing can be modeled as a filter and described by a single transfer function — the head-related transfer function — whose impulse response is called the head-related impulse response (HRIR). By using the HRIR to control (i.e., simulate) the sound source moving away from the user along the preset first path, or approaching the user along the preset second path, the auditory sense of direction perceived by the user can be changed.
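The convolution-based directional rendering described above can be sketched as follows. This is a minimal toy illustration, not the patent's implementation: the `toy_hrir` helper, the sample rate, and the interaural time/level difference values are invented for illustration, and a real system would convolve with measured HRIRs rather than this two-tap stand-in.

```python
import numpy as np

SR = 44100  # assumed sample rate, illustrative only

def toy_hrir(azimuth_deg, n_taps=64):
    """Build a crude left/right impulse-response pair carrying only an
    interaural time difference (ITD) and level difference (ILD) — a toy
    stand-in for a measured HRIR, not a real head-related response."""
    az = np.deg2rad(azimuth_deg)
    itd = 0.0007 * np.sin(az)                 # ~0.7 ms maximum ear-to-ear delay
    lag = int(round(abs(itd) * SR))
    near, far = np.zeros(n_taps), np.zeros(n_taps)
    near[0] = 1.0                             # near ear: undelayed, full level
    far[lag] = 0.6                            # far ear: delayed, attenuated
    return (near, far) if azimuth_deg >= 0 else (far, near)

def render(mono, azimuth_deg):
    """Convolve a mono signal with the per-ear impulse responses to get a
    two-channel signal with a simple directional cue."""
    h_r, h_l = toy_hrir(azimuth_deg)          # positive azimuth: right ear near
    left = np.convolve(mono, h_l)
    right = np.convolve(mono, h_r)
    return np.stack([left, right], axis=1)
```

Played over headphones, the delayed and attenuated far-ear channel shifts the perceived source toward the near ear, which is the same localization mechanism the HRIR exploits in full.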
In a free sound field, the HRTF is defined as the ratio, in the frequency domain, of the sound pressure in the ear canal to the free-field sound pressure from the corresponding direction. The specific forms are expressions 1 and 2 below.

Expression 1:

H_L(r, θ, φ, ω) = P_L(r, θ, φ, ω) / P_0(r, ω)

Expression 2:

H_R(r, θ, φ, ω) = P_R(r, θ, φ, ω) / P_0(r, ω)

Here expression 1 is the head-related transfer function of the listener's (i.e., user's) left ear, and expression 2 that of the right ear. P_L and P_R are the complex sound pressures produced at the left ear and right ear, respectively, by a sound wave emitted from a source in a given direction; and P_0 is the free-field complex sound pressure that the source in the corresponding direction would produce at the head-center position with the listener absent.

r is the distance from the sound source to the head center, ω is the angular frequency, and the azimuth θ and elevation φ of the sound wave indicate the direction of the source. In general, H_L and H_R are functions of the azimuth θ, the elevation φ, the distance r, and the angular frequency ω. When the sound source is far from the head center, e.g. r > 1.4 m, H_L and H_R are essentially independent of distance. Furthermore, H_L and H_R differ between individuals, since they depend on the size of the listener's head and the shapes of the head and pinnae.
Expressions 1 and 2 are frequency-domain forms of the HRTF; they can equivalently be written in the time domain as follows.

Time-domain form of expression 1, expression 11:

p_L(r, θ, φ, t) = h_L(r, θ, φ, t) * p_0(r, t)

Time-domain form of expression 2, expression 21:

p_R(r, θ, φ, t) = h_R(r, θ, φ, t) * p_0(r, t)

Here, to distinguish from the capital letters that denote frequency-domain quantities in expressions 1 and 2, the corresponding time-domain quantities in expressions 11 and 21 are written in lowercase, and the letter t merely marks the time-domain side of the time/frequency conversion. p_0, p_L, and p_R form Fourier-transform pairs with P_0, P_L, and P_R respectively and denote the corresponding time-domain sound pressures; h_L and h_R are the corresponding impulse responses (HRIRs), forming Fourier-transform pairs with H_L and H_R respectively. As the expressions show, this is a convolution process.

Accordingly, the following frequency-domain relationships (expressions 12 and 22) hold:

Expression 12:

P_L(r, θ, φ, ω) = H_L(r, θ, φ, ω) · P_0(r, ω)

Expression 22:

P_R(r, θ, φ, ω) = H_R(r, θ, φ, ω) · P_0(r, ω)

Expression 13 follows from expression 12, and expression 23 from expression 22:

Expression 13:

h_L(r, θ, φ, t) = F⁻¹{H_L(r, θ, φ, ω)}

Expression 23:

h_R(r, θ, φ, t) = F⁻¹{H_R(r, θ, φ, ω)}

where h_L and H_L, and h_R and H_R, are Fourier-transform pairs. Expressions 13 and 23 give the HRIRs of the listener's left and right ears, respectively. After the sound signal p_0 is convolved with the HRIR for a given direction, a sound signal with a sense of direction is obtained — this is the sound pressure adjustment formula used below. Played back, such a signal produces in the listener's brain the directional effect of a sound source in a free sound field.
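The equivalence between the time-domain convolution (expression 11) and the frequency-domain product (expression 12) can be checked numerically. This is an illustrative sketch only, with random data standing in for a measured HRIR; it verifies the convolution theorem underlying the derivation, not any specific HRTF.

```python
import numpy as np

# Expression 12 says P_L = H_L · P_0 in the frequency domain; expression 11
# says p_L = h_L * p_0 (convolution) in the time domain. With zero-padding
# to the full linear-convolution length, the two routes must agree exactly.
rng = np.random.default_rng(0)
h_true = rng.standard_normal(8)        # toy stand-in for a left-ear HRIR h_L
p0 = rng.standard_normal(32)           # toy source signal p_0(t)

n = len(p0) + len(h_true) - 1          # full linear-convolution length
H = np.fft.rfft(h_true, n)             # H_L(omega)
P0 = np.fft.rfft(p0, n)                # P_0(omega)
PL = H * P0                            # expression 12: P_L = H_L · P_0
p_l_freq = np.fft.irfft(PL, n)         # inverse transform back to time domain
p_l_time = np.convolve(p0, h_true)     # expression 11: p_L = h_L * p_0
assert np.allclose(p_l_freq, p_l_time)
```

The zero-padding to length n is what makes the circular (FFT) convolution coincide with the linear one; without it the two results would differ at the block edges.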
Preferably, step 102 comprises:
acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time;
and inputting each first audio signal into a preset sound pressure adjusting formula, and increasing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a first preset frequency to obtain a processed audio signal.
After the switching instruction is received, the switching time is determined, and the first audio signal of each channel to be played in the first time period between the current time and the switching time is acquired; taking two channels as an example, these include a left-channel first audio signal and a right-channel first audio signal. The first audio signal of each channel is input into the corresponding preset sound pressure adjustment formula, and the azimuth distance between the sound source in the formula and the center of the user's head is increased at a first preset frequency, so that the user perceives the first audio file as gradually receding during playback.
As a first example, referring to fig. 2 and fig. 3, fig. 3 is a schematic diagram of the scene without signal processing, where E_L is the user's left ear and E_R is the user's right ear. A switching instruction is received at time T1 in fig. 2, and T2 is the switching time; during the period from T1 to T2, the first signal processing is performed on the first audio signal of each channel of the first audio file. Referring to fig. 4, fig. 4 is a schematic diagram of the scene after the first signal processing:
1) Before time T1, song A's signal S_A is played normally.
2) During the period from T1 to T2, song A's signal S_A is convolved with a time-varying head-related impulse response HRIR and then played. Starting from the HRIR at the position marked T1 in fig. 3, the azimuth θ and elevation φ are held fixed while r is gradually increased as time runs from T1 to T2, until at time T2 the head-related impulse responses (H_L and H_R in the figure) have become the HRIR at the position marked T2 in fig. 4.
3) At time T2, playback of song A's signal S_A stops.
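The receding effect of this first example (θ and φ fixed, r growing from T1 to T2) can be sketched as follows. This is a hypothetical simplification: instead of stepping through distance-dependent HRIRs as the patent describes, it applies only the free-field 1/r level attenuation per block, and the function name and parameters are illustrative.

```python
import numpy as np

def recede(signal, sr, duration_s, r_start=0.5, r_end=5.0, block=1024):
    """Approximate the T1->T2 receding effect: hold azimuth/elevation
    fixed and grow the distance r block by block, scaling each block by
    the free-field 1/r attenuation (a crude stand-in for selecting
    distance-dependent HRIRs from a measured database)."""
    n = min(len(signal), int(sr * duration_s))
    out = signal[:n].astype(float).copy()
    for start in range(0, n, block):
        frac = start / max(n - 1, 1)
        r = r_start + (r_end - r_start) * frac   # r grows over T1..T2
        out[start:start + block] *= r_start / r  # full level near, quiet far
    return out
```

A full implementation would also update the interaural cues per block (as in the HRIR convolution above); the amplitude ramp alone already conveys most of the "moving away" impression.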
Preferably, step 103 comprises:
acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment;
and inputting each second audio signal into a preset sound pressure adjusting formula, and reducing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a second preset frequency to obtain a processed audio signal.
At the switching time, playback switches to the second audio file, and in the preset second time period after the switching time, the audio signal of the second audio file is played after second signal processing; taking two channels as an example, the second audio signals of the channels include a left-channel second audio signal and a right-channel second audio signal. The second audio signal of each channel is input into the corresponding preset sound pressure adjustment formula, and the azimuth distance between the sound source in the formula and the center of the user's head is decreased at a second preset frequency, so that the user perceives the second audio file as gradually approaching during playback.
As a second example, again referring to fig. 2 and fig. 3 (fig. 3 being the scene without signal processing, with E_L the user's left ear and E_R the user's right ear), a switching instruction is received at time T1 in fig. 2 and T2 is the switching time; during the period from T2 to T3, the second signal processing is performed on the second audio signal of each channel of the second audio file. Referring to fig. 5, fig. 5 is a schematic diagram of the scene after the second signal processing:
1) At time T2, as shown in fig. 2, song B's signal S_B is sound-effect processed: S_B is convolved with the HRIR at the position marked T2 in fig. 2 and then played.
2) During the period from T2 to T3, song B's signal S_B is convolved with a time-varying head-related impulse response and then played. Starting from the HRIR at the position marked T2 in fig. 2, the azimuth θ and elevation φ are held fixed while r is gradually decreased as time runs from T2 to T3, until at time T3 the head-related impulse responses (H_L and H_R in the figure) have become the HRIR at the position marked T3 in fig. 5.
3) After time T3, song B's signal S_B is played normally.
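The approaching effect of this second example (θ and φ fixed, r shrinking from T2 to T3) can be sketched symmetrically. As with the receding sketch, this is a hypothetical simplification using only 1/r level scaling in place of distance-dependent HRIRs; names and defaults are illustrative.

```python
import numpy as np

def approach(signal, sr, duration_s, r_start=5.0, r_end=0.5, block=1024):
    """Approximate the T2->T3 approaching effect for the second audio
    file: shrink the distance r block by block and scale by 1/r, so the
    second song seems to draw toward the listener (a crude stand-in for
    stepping through distance-dependent HRIRs)."""
    n = min(len(signal), int(sr * duration_s))
    out = signal[:n].astype(float).copy()
    for start in range(0, n, block):
        frac = start / max(n - 1, 1)
        r = r_start + (r_end - r_start) * frac   # r shrinks over T2..T3
        out[start:start + block] *= r_end / r    # quiet when far, full when near
    return out
```

Concatenating the receding first file with the approaching second file yields the overall switching effect: song A fades away along the first path, then song B arrives along the second.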
Preferably, the sound pressure adjustment formula is:

p(t) = p_0(t) * h(r, θ, φ, t)

where r is the azimuth distance; θ is the preset azimuth angle between the sound source and the center of the user's head; φ is the preset elevation angle between the sound source and the center of the user's head; p_0(t) is the time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head; and h(r, θ, φ, t) is the impulse response of the head-related function between the sound source and the center of the user's head.

The first signal processing and the second signal processing each adjust the azimuth distance r; θ and φ are preset fixed values, and the letter t is used only for the time-domain/frequency-domain conversion in the formula. p_0(t) is the time-domain free-field complex sound pressure between the sound source and the head center, i.e., the complex sound pressure that the source in the corresponding direction would produce at the head-center position with no listener present. h(r, θ, φ, t) is the impulse response of the head-related function between the sound source and the head center, i.e., the sound signal p_0 is convolved with the HRIR for the given direction.

That is, the sound pressure adjustment formula is the convolution of p_0(t) and h(r, θ, φ, t). Note that the sound pressure adjustment formulas of the channels of the same audio file may differ: each channel's formula is the convolution of p_0(t) with the HRIR corresponding to that channel.
Preferably, the step of playing the processed audio signal further includes:
and synthesizing the processed audio signals of each sound channel to obtain synthesized audio signals and playing the synthesized audio signals.
Here, in the time domain the sound signal p comprises the left-channel signal p_L and the right-channel signal p_R. Because the sound pressure adjustment formulas of the channels may differ, the channels are processed separately and the processed audio signals are then synthesized.
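The per-channel processing and synthesis step can be sketched as follows. The `synthesize` helper is a hypothetical name; the per-channel inputs are assumed to have already been convolved with their respective HRIRs, so their lengths may differ and are padded to a common length before stacking.

```python
import numpy as np

def synthesize(processed_left, processed_right):
    """Combine per-channel processed signals into one stereo buffer.
    Each channel may have been convolved with a different HRIR, so
    zero-pad both to the longer length before stacking."""
    n = max(len(processed_left), len(processed_right))
    out = np.zeros((n, 2))
    out[:len(processed_left), 0] = processed_left
    out[:len(processed_right), 1] = processed_right
    return out
```

The resulting (n, 2) array is the synthesized audio signal handed to playback.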
Alternatively, the above embodiments of the present invention can also be applied to other electronic devices, such as computers.
In the above embodiment of the present invention, when a switching instruction to switch to a second audio file is received while a first audio file is playing, the sound source is controlled, in the first time period from the current time to the switching time, to move away from the user along the preset first path while first signal processing is performed on the audio signal of the first audio file, so that the user perceives the first audio file as receding in space. In the second time period after the switching time, the sound source is controlled to approach the user along the preset second path while second signal processing is performed on the audio signal of the second audio file, so that the user perceives the second audio file as approaching. Virtual auditory space technology and HRTF signal processing thus add sound-effect processing during audio file switching, creating a spatial sense of transition and improving the user experience. The invention solves the problem that existing electronic devices apply only a single sound effect when switching audio files.
Referring to fig. 6, an embodiment of the present invention further provides a mobile terminal 600, including:
the instruction receiving module 601 is configured to receive a switching instruction for switching to a second audio file when the first audio file is played.
Here, the audio file may be music or another audio file. While the mobile terminal 600 plays the first audio file, it receives a switching instruction and switches to playing a second audio file, for example the next song. The switching instruction may be actively triggered by the user or automatically generated by the mobile terminal 600; for example, the mobile terminal 600 automatically generates the switching instruction when it detects that the first audio file is about to finish playing.
The first processing module 602 is configured to, in a first time period from a current time to a preset switching time, control a sound source to perform first signal processing on an audio signal of the first audio file in a manner that a preset first path is away from a user, and play the processed audio signal.
When the switching instruction is received at the current time, a preset switching time is determined as the end of a first time period after the current time; in this first time period, from the current time to the switching time, the audio signal of the first audio file is played after first signal processing. Specifically, the first signal processing simulates the sound effect of a sound source moving away from the user along the preset first path, so that the user perceives the first audio file as receding in auditory space.
A second processing module 603, configured to control, in a preset second time period after the switching time, a sound source to perform second signal processing on the audio signal of the second audio file according to a manner that a preset second path approaches the user, and play the processed audio signal.
At the switching time, playback switches to the second audio file, and in the preset second time period after the switching time, the audio signal of the second audio file is played after second signal processing. Specifically, the second signal processing simulates the sound effect of a sound source approaching the user along the preset second path, so that the user perceives the second audio file as drawing closer in auditory space.
Alternatively, referring to fig. 7, as a third example, the electronic device 700 shown in fig. 7 includes:
the display interaction module 701: the user can perform the song switching operation.
The control processing module 702: according to the information obtained by the display interaction module, the control equipment switches the action of playing songs and controls the time sequence of the action of switching the playing songs at the same time,
the calculation processing module 703: and synthesizing the appointed song and the sound effect at the appointed time according to the instruction of the control processing module, and sending the data after calculation processing to the sound playing module.
The sound playing module 704: and playing the song signal output by the calculation processing module according to the instruction of the control processing module.
The sound playing module 704 may be a speaker or an earphone, preferably an earphone, because when playing through an earphone the sound effect has a stronger sense of space, making it easier for the user to perceive the movement of the sound source.
Optionally, the first processing module 602 is configured to:
acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time;
and inputting each first audio signal into a preset sound pressure adjustment formula, and increasing the azimuth distance between the sound source in the formula and the center of the user's head at a first preset frequency, to obtain the processed audio signal.
Optionally, the second processing module 603 is configured to:
acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment;
and inputting each second audio signal into a preset sound pressure adjustment formula, and decreasing the azimuth distance between the sound source in the formula and the center of the user's head at a second preset frequency, to obtain the processed audio signal.
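Both optional steps above adjust the same quantity, the azimuth distance r, in opposite directions at a preset update frequency. A minimal sketch of such a distance ramp follows; the step size, update rate, and names are illustrative assumptions, not values from the patent:

```python
def distance_ramp(duration_s, update_rate_hz, r_start, step=0.5,
                  receding=True, r_min=0.2):
    """Azimuth distances r for each update of the sound pressure formula.
    receding=True increases r (first signal processing: source moves away);
    receding=False decreases r (second signal processing: source approaches).
    r is clamped at r_min so the virtual source never reaches the head center."""
    n = int(duration_s * update_rate_hz)
    sign = 1.0 if receding else -1.0
    return [max(r_min, r_start + sign * step * k) for k in range(n)]

away = distance_ramp(2.0, 10, r_start=1.0)                     # 1.0 m .. 10.5 m
toward = distance_ramp(2.0, 10, r_start=10.5, receding=False)  # 10.5 m .. 1.0 m
```

Each value of the ramp would be fed into the sound pressure adjustment formula as the current source distance for that update interval.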
Optionally, the sound pressure adjustment formula is as follows:

p(r, θ, φ, t) = p0(t) * h(r, θ, φ, t)

wherein r is the azimuth distance, θ is a preset azimuth angle between the sound source and the center of the user's head, and φ is a preset elevation angle between the sound source and the center of the user's head;

p0(t) is a time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head;

h(r, θ, φ, t) is the impulse response function of the head-related function between the sound source and the center of the user's head, and * denotes time-domain convolution.
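The sound pressure adjustment described above, combining the free-field sound pressure p0(t) with the impulse response of the head-related function, amounts in signal terms to a convolution. A minimal sketch of that convolution, with a toy two-tap impulse response standing in for a measured HRIR (all names and values are illustrative assumptions):

```python
import numpy as np

def binaural_pressure(p0, hrir):
    """Sound pressure at the ear: free-field pressure p0(t) convolved with
    the head-related impulse response h(r, theta, phi, t) for the current
    source position. Looking up the HRIR for a given (r, theta, phi) is
    assumed to be done against a measured HRTF data set."""
    return np.convolve(p0, hrir)[:len(p0)]  # truncate to the input length

p0 = np.array([1.0, 0.0, 0.0, 0.0])  # unit impulse as the source signal
hrir = np.array([0.5, 0.25])         # toy 2-tap impulse response
out = binaural_pressure(p0, hrir)    # -> [0.5, 0.25, 0.0, 0.0]
```

As the ramp of azimuth distances progresses, a different HRIR would be selected for each update, so the convolved output gradually gains the distance cues of a receding or approaching source.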
Optionally, the mobile terminal 600 further includes:
and the signal synthesis module is used for synthesizing the processed audio signals of each sound channel to obtain the synthesized audio signals and playing the synthesized audio signals.
In the above embodiment of the present invention, when a first audio file is played and a switching instruction for switching to a second audio file is received, in a first time period from the current time to the switching time, the sound source is controlled to move away from the user along a preset first path while first signal processing is performed on the audio signal of the first audio file, so that the user obtains a spatial orientation effect of the first audio file receding; in a second time period after the switching moment, the sound source is controlled to approach the user along a preset second path while second signal processing is performed on the audio signal of the second audio file, so that the user obtains a spatial orientation effect of the second audio file approaching. Virtual auditory space technology and HRTF signal processing are used to add sound effects during audio file switching, creating a sense of spatial transition and improving the user experience. The invention solves the problem that the sound effect processing during switching is monotonous when existing electronic devices play audio files.
On the other hand, an embodiment of the present invention further provides a mobile terminal, including: the audio file playing method comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein when the computer program is executed by the processor, each process of the audio file playing method embodiment is realized, the same technical effect can be achieved, and the details are not repeated here to avoid repetition.
On the other hand, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process in the audio file playing method, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
Referring to fig. 8, yet another embodiment of the present invention provides a mobile terminal 800. The mobile terminal 800 shown in fig. 8 includes: at least one processor 801, memory 802, at least one network interface 804, and other user interfaces 803. The various components in the mobile terminal 800 are coupled together by a bus system 805. It is understood that the bus system 805 is used to enable communications among the components connected. The bus system 805 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 805 in fig. 8.
The user interface 803 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen, among others).
It will be appreciated that the memory 802 in embodiments of the invention may be either volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 802 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof: an operating system 8021 and application programs 8022.
The operating system 8021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application program 8022 includes various application programs, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services, and a program for implementing the method according to the embodiment of the present invention may be included in the application program 8022.
In this embodiment of the present invention, the mobile terminal 800 further includes: a computer program stored on the memory 802 and executable on the processor 801, the computer program when executed by the processor 801 implementing the steps of: receiving a switching instruction for switching to a second audio file when a first audio file is played; in a first time period from the current time to a preset switching time, controlling a sound source to be away from a user according to a preset first path, performing first signal processing on an audio signal of the first audio file, and playing the processed audio signal; and in a preset second time period after the switching moment, controlling a sound source to perform second signal processing on the audio signal of the second audio file according to a mode that a preset second path is close to the user, and playing the processed audio signal.
The methods disclosed in the embodiments of the present invention described above may be implemented in the processor 801 or implemented by the processor 801. The processor 801 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 801. The Processor 801 may be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 802, and the processor 801 reads the information in the memory 802 and, in combination with its hardware, completes the steps of the method.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Alternatively, as another embodiment, the computer program may further implement the following steps when being executed by the processor 801: acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time; and inputting each first audio signal into a preset sound pressure adjusting formula, and increasing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a first preset frequency to obtain a processed audio signal.
Alternatively, as another embodiment, the computer program may further implement the following steps when being executed by the processor 801: acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment; and inputting each second audio signal into a preset sound pressure adjusting formula, and reducing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a second preset frequency to obtain a processed audio signal.
Alternatively, as another embodiment, the sound pressure adjustment formula is the following formula:

p(r, θ, φ, t) = p0(t) * h(r, θ, φ, t)

wherein r is the azimuth distance, θ is a preset azimuth angle between the sound source and the center of the user's head, and φ is a preset elevation angle between the sound source and the center of the user's head;

p0(t) is a time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head;

h(r, θ, φ, t) is the impulse response function of the head-related function between the sound source and the center of the user's head, and * denotes time-domain convolution.
Alternatively, as another embodiment, the computer program may further implement the following steps when being executed by the processor 801: and synthesizing the processed audio signals of each sound channel to obtain synthesized audio signals and playing the synthesized audio signals.
The mobile terminal 800 can implement each process implemented by the mobile terminal in the foregoing embodiments, and details are not repeated here to avoid repetition.
In the mobile terminal 800 of the embodiment of the present invention, when the processor 801 plays a first audio file and receives a switching instruction for switching to a second audio file, in a first time period from a current time to a switching time, a sound source is controlled to move away from a user according to a preset first path, so as to perform first signal processing on an audio signal of the first audio file, and the user obtains a spatial orientation effect of moving away the first audio file; in a second time period after the switching moment, controlling a sound source to approach the user according to a preset second path, and performing second signal processing on the audio signal of the second audio file to enable the user to obtain an approaching spatial orientation effect of the second audio file; the virtual auditory space technology and the HRTF signal processing are utilized to add sound effect processing during audio file switching, a switching space sense is created, and user experience is improved. The invention solves the problem that the sound effect processing in the switching process is single when the existing electronic equipment plays the audio file.
Referring to fig. 9, yet another embodiment of the present invention provides a mobile terminal 900. Specifically, the mobile terminal 900 in fig. 9 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), or a vehicle-mounted computer.
The mobile terminal 900 of fig. 9 includes a Radio Frequency (RF) circuit 910, a memory 920, an input unit 930, a display unit 940, a processor 950, a Wi-Fi (Wireless Fidelity) module 960, an audio circuit 970, and a power supply 980.
The input unit 930 may be used, among other things, to receive numeric or character information input by a user and to generate signal inputs related to user settings and function control of the mobile terminal 900.
Specifically, in the embodiment of the present invention, the input unit 930 may include a touch panel 931. The touch panel 931, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 931 (for example, a user may operate the touch panel 931 by using a finger, a stylus pen, or any other suitable object or accessory), and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 931 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 950, and can receive and execute commands sent from the processor 950. In addition, the touch panel 931 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. In addition to the touch panel 931, the input unit 930 may also include other input devices 932, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
Among other things, the display unit 940 may be used to display information input by the user or information provided to the user and various menu interfaces of the mobile terminal 900. The display unit 940 may include a display panel 941, and the display panel 941 may be optionally configured in the form of an LCD or an Organic Light-Emitting Diode (OLED).
It should be noted that the touch panel 931 may cover the display panel 941 to form a touch display screen, and when the touch display screen detects a touch operation on or near the touch display screen, the touch display screen is transmitted to the processor 950 to determine the type of the touch event, and then the processor 950 provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen comprises an application program interface display area and a common control display area. The arrangement of these two display areas is not limited; any arrangement that distinguishes the two, such as a vertical or left-right arrangement, may be used. The application interface display area may be used to display the interface of an application. Each interface may contain at least one interface element, such as an icon and/or a widget desktop control of an application. The application interface display area may also be an empty interface that does not contain any content. The common control display area is used for displaying frequently used controls, such as setting buttons, interface numbers, scroll bars, and application icons like the phone book icon.
In this embodiment of the present invention, the mobile terminal 900 further includes: a computer program stored on the memory 920 and executable on the processor 950, the computer program when executed by the processor 950 implementing the steps of: receiving a switching instruction for switching to a second audio file when a first audio file is played; in a first time period from the current time to a preset switching time, controlling a sound source to be away from a user according to a preset first path, performing first signal processing on an audio signal of the first audio file, and playing the processed audio signal; and in a preset second time period after the switching moment, controlling a sound source to perform second signal processing on the audio signal of the second audio file according to a mode that a preset second path is close to the user, and playing the processed audio signal.
Alternatively, as another embodiment, the computer program when executed by the processor 950 may further implement the following steps: acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time; and inputting each first audio signal into a preset sound pressure adjusting formula, and increasing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a first preset frequency to obtain a processed audio signal.
Alternatively, as another embodiment, the computer program when executed by the processor 950 may further implement the following steps: acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment; and inputting each second audio signal into a preset sound pressure adjusting formula, and reducing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of the user according to a second preset frequency to obtain a processed audio signal.
Alternatively, as another embodiment, the sound pressure adjustment formula is the following formula:

p(r, θ, φ, t) = p0(t) * h(r, θ, φ, t)

wherein r is the azimuth distance, θ is a preset azimuth angle between the sound source and the center of the user's head, and φ is a preset elevation angle between the sound source and the center of the user's head;

p0(t) is a time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head;

h(r, θ, φ, t) is the impulse response function of the head-related function between the sound source and the center of the user's head, and * denotes time-domain convolution.
Alternatively, as another embodiment, the computer program when executed by the processor 950 may further implement the following steps: and synthesizing the processed audio signals of each sound channel to obtain synthesized audio signals and playing the synthesized audio signals.
The mobile terminal 900 can implement the processes implemented by the mobile terminal in the foregoing embodiments, and in order to avoid repetition, the details are not described here.
In the mobile terminal 900 according to the embodiment of the present invention, when the processor 950 plays the first audio file and receives the switching instruction for switching to the second audio file, in the first time period from the current time to the switching time, the sound source is controlled to move away from the user according to the preset first path, so as to perform the first signal processing on the audio signal of the first audio file, and the user obtains the spatial orientation effect of moving away the first audio file; in a second time period after the switching moment, controlling a sound source to approach the user according to a preset second path, and performing second signal processing on the audio signal of the second audio file to enable the user to obtain an approaching spatial orientation effect of the second audio file; the virtual auditory space technology and the HRTF signal processing are utilized to add sound effect processing during audio file switching, a switching space sense is created, and user experience is improved. The invention solves the problem that the sound effect processing in the switching process is single when the existing electronic equipment plays the audio file.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: u disk, removable hard disk, ROM, RAM, magnetic disk, optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. An audio file playing method, comprising:
receiving a switching instruction for switching to a second audio file when a first audio file is played;
in a first time period from the current time to a preset switching time, controlling a sound source to move away from the user according to a preset first path, performing first signal processing on the audio signal of the first audio file, so that the user feels a sound effect of the first audio file receding in auditory space, and playing the processed audio signal;
and in a preset second time period after the switching moment, controlling the sound source to approach the user according to a preset second path, performing second signal processing on the audio signal of the second audio file, so that the user feels a sound effect of the second audio file approaching in auditory space, and playing the processed audio signal;
wherein in the first time period from the current time to the preset switching time, controlling the sound source to move away from the user according to the preset first path and performing the first signal processing on the audio signal of the first audio file, so that the user feels the sound effect of the first audio file receding in auditory space, comprises the following steps:
acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time;
inputting each first audio signal into a preset sound pressure adjusting formula, and increasing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of a user according to a first preset frequency, so that the user obtains the technical effect that a first audio file is gradually far away in the playing process, and a processed audio signal is obtained;
and wherein in the preset second time period after the switching moment, controlling the sound source to approach the user according to the preset second path and performing the second signal processing on the audio signal of the second audio file, so that the user feels the sound effect of the second audio file approaching in auditory space, comprises the following steps:
acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment;
inputting each second audio signal into a preset sound pressure adjusting formula, and reducing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of a user according to a second preset frequency, so that the user obtains the technical effect that a second audio file gradually approaches in the playing process, and a processed audio signal is obtained;
the sound pressure adjustment formula is as follows:

p(r, θ, φ, t) = p0(t) * h(r, θ, φ, t)

wherein r is the azimuth distance, θ is a preset azimuth angle between the sound source and the center of the user's head, and φ is a preset elevation angle between the sound source and the center of the user's head;

p0(t) is a time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head;

h(r, θ, φ, t) is the impulse response function of the head-related function between the sound source and the center of the user's head, and * denotes time-domain convolution.
2. The method of claim 1, wherein the step of playing the processed audio signal further comprises:
and synthesizing the processed audio signals of each sound channel to obtain synthesized audio signals and playing the synthesized audio signals.
3. A mobile terminal, comprising:
the instruction receiving module is used for receiving a switching instruction for switching to a second audio file when the first audio file is played;
the first processing module is used for controlling a sound source to be away from a user according to a preset first path in a first time period from the current time to a preset switching time, performing first signal processing on an audio signal of the first audio file, enabling the user to feel a sound effect of the first audio file which is away from the user in an auditory sense, and playing the processed audio signal;
the second processing module is used for controlling the sound source to perform second signal processing on the audio signal of the second audio file in a mode that a preset second path is close to the user in a preset second time period after the switching moment, so that the user can feel the sound effect of the second audio file approaching in the sense of auditory space, and the processed audio signal is played;
the first processing module is configured to:
acquiring a first audio signal of each sound channel to be played by the first audio file within a first time period from the current time to a preset switching time;
inputting each first audio signal into a preset sound pressure adjusting formula, and increasing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of a user according to a first preset frequency, so that the user obtains the technical effect that a first audio file is gradually far away in the playing process, and a processed audio signal is obtained;
the second processing module is configured to:
acquiring a second audio signal of each sound channel, which is to be played by the second audio file within a preset second time period after the switching moment;
inputting each second audio signal into a preset sound pressure adjusting formula, and reducing the azimuth distance between a sound source in the sound pressure adjusting formula and the center of the head of a user according to a second preset frequency, so that the user obtains the technical effect that a second audio file gradually approaches in the playing process, and a processed audio signal is obtained;
the sound pressure adjustment formula is:

p(r, θ, φ, t) = p₀(t) * h(r, θ, φ, t)

wherein r is the azimuth distance; θ is a preset azimuth angle between the sound source and the center of the user's head; φ is a preset elevation angle between the sound source and the center of the user's head; p₀(t) is the time-domain expression of the free-field complex sound pressure between the sound source and the center of the user's head; h(r, θ, φ, t) is the impulse response of the head-related transfer function between the sound source and the center of the user's head; and * denotes convolution.
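The distance-ramp behaviour of the two processing modules can be sketched as follows. This is a simplified, illustrative stand-in, not the patented implementation: the claims convolve each channel with the head-related impulse response h(r, θ, φ, t), whereas this sketch models only the 1/r free-field pressure falloff as the azimuth distance r is ramped; all function and parameter names are the sketch's own.

```python
import numpy as np

def distance_ramp(signal, sr, r_start, r_end, period):
    """Ramp the source distance r from r_start to r_end over `period`
    seconds, scaling each sample by the 1/r free-field pressure falloff.
    A stand-in for the full HRIR convolution in the claims."""
    n = min(int(sr * period), len(signal))
    r = np.linspace(r_start, r_end, n)   # per-sample source distance
    r_ref = min(r_start, r_end)          # normalize so peak gain is 1
    out = signal.astype(float).copy()
    out[:n] *= r_ref / r                 # 1/r attenuation
    return out

sr = 8000
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
# first audio file recedes before the switching time...
receding = distance_ramp(tone, sr, r_start=0.25, r_end=2.5, period=1.0)
# ...second audio file approaches after it
approaching = distance_ramp(tone, sr, r_start=2.5, r_end=0.25, period=1.0)
```

Playing the tail of `receding` up to the switching time and the head of `approaching` after it reproduces the recede/approach transition the claims describe.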
4. The mobile terminal of claim 3, further comprising:
and the signal synthesis module is used for synthesizing the processed audio signals of the sound channels to obtain a synthesized audio signal and playing the synthesized audio signal.
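The signal synthesis module of claim 4 can be sketched as a plain channel mix. The claim only requires that the processed per-channel signals be combined and played; the summing and peak-normalization rule below is an illustrative choice, not the patent's.

```python
import numpy as np

def synthesize_channels(channels):
    """Sum the processed per-channel signals into one playable buffer,
    peak-normalizing only if the mix would clip."""
    mixed = np.vstack(channels).sum(axis=0)   # (n_channels, n) -> (n,)
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 1.0 else mixed
```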
5. A mobile terminal, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the audio file playing method according to any one of claims 1 to 2.
6. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the audio file playing method according to any one of claims 1 to 2.
CN201710832388.5A 2017-09-15 2017-09-15 Audio file playing method and mobile terminal Active CN107707742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710832388.5A CN107707742B (en) 2017-09-15 2017-09-15 Audio file playing method and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710832388.5A CN107707742B (en) 2017-09-15 2017-09-15 Audio file playing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107707742A CN107707742A (en) 2018-02-16
CN107707742B true CN107707742B (en) 2020-01-03

Family

ID=61171709

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710832388.5A Active CN107707742B (en) 2017-09-15 2017-09-15 Audio file playing method and mobile terminal

Country Status (1)

Country Link
CN (1) CN107707742B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109445739B * 2018-09-30 2020-05-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Audio playing method and device, electronic equipment and computer readable medium
CN109408664A * 2018-10-30 2019-03-01 Nubia Technology Co., Ltd. Audio recommendation method, terminal and computer-readable storage medium
EP4068798A4 (en) * 2019-12-31 2022-12-28 Huawei Technologies Co., Ltd. Signal processing apparatus, method and system
CN114758560B (en) * 2022-03-30 2023-06-06 厦门大学 Humming pitch evaluation method based on dynamic time warping

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4573130B2 (en) * 2006-07-21 2010-11-04 ソニー株式会社 REPRODUCTION DEVICE, RECORDING MEDIUM, REPRODUCTION METHOD, AND REPRODUCTION PROGRAM
EP2642407A1 (en) * 2012-03-22 2013-09-25 Harman Becker Automotive Systems GmbH Method for retrieving and a system for reproducing an audio signal
CN105120418B (en) * 2015-07-17 2017-03-22 武汉大学 Double-sound-channel 3D audio generation device and method
CN106856094B (en) * 2017-03-01 2021-02-09 北京牡丹电子集团有限责任公司数字电视技术中心 Surrounding type live broadcast stereo method

Also Published As

Publication number Publication date
CN107707742A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107707742B (en) Audio file playing method and mobile terminal
JP6086188B2 (en) SOUND EFFECT ADJUSTING DEVICE AND METHOD, AND PROGRAM
US11812252B2 (en) User interface feedback for controlling audio rendering for extended reality experiences
EP2922313B1 (en) Audio signal processing device and audio signal processing system
EP3629145B1 (en) Method for processing 3d audio effect and related products
EP3364638B1 (en) Recording method, recording playing method and apparatus, and terminal
CN111913682B (en) Enhancing control sound with spatial audio cues
US9769585B1 (en) Positioning surround sound for virtual acoustic presence
KR20200070110A (en) Spatial repositioning of multiple audio streams
JP5703807B2 (en) Signal processing device
KR20150117797A (en) Method and Apparatus for Providing 3D Stereophonic Sound
CN111492342A (en) Audio scene processing
CN109121069B (en) 3D sound effect processing method and related product
CN106303841B (en) Audio playing mode switching method and mobile terminal
CN108924705B (en) 3D sound effect processing method and related product
CN113115175B (en) 3D sound effect processing method and related product
CN114520950B (en) Audio output method, device, electronic equipment and readable storage medium
WO2006107074A1 (en) Portable terminal
CN106886388B (en) Method and terminal for playing audio data
CN111756929A (en) Multi-screen terminal audio playing method and device, terminal equipment and storage medium
CN113194400B (en) Audio signal processing method, device, equipment and storage medium
WO2024011937A1 (en) Audio processing method and system, and electronic device
JP2011166256A (en) Acoustic reproducing device
JP6281606B2 (en) SOUND EFFECT ADJUSTING DEVICE AND METHOD, AND PROGRAM
CN116193196A (en) Virtual surround sound rendering method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant