WO2015024409A1 - Audio calling method and device thereof - Google Patents

Audio calling method and device thereof

Info

Publication number
WO2015024409A1
Authority
WO
WIPO (PCT)
Prior art keywords
jump
voice
voice unit
time
played
Prior art date
2013-08-20
Application number
PCT/CN2014/080232
Other languages
English (en)
French (fr)
Inventor
Xiayu WU
Jianye Li
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2013-08-20
Filing date
2014-06-18
Publication date
2015-02-26
Application filed by Tencent Technology (Shenzhen) Company Limited filed Critical Tencent Technology (Shenzhen) Company Limited
Publication of WO2015024409A1 publication Critical patent/WO2015024409A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • the present disclosure relates to the field of Internet technologies, and particularly to an application voice playback switching method and apparatus.
  • policies may be selectively configured for different voice types.
  • Embodiments of the present disclosure provide an application voice playback switching method and apparatus, aimed at improving flexibility of voice playback switching in an application, and improving playback effects of the application.
  • the embodiments of the present disclosure propose an application voice playback switching method.
  • the method includes: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • the embodiments of the present disclosure further propose an application voice playback switching apparatus.
  • the apparatus includes a hardware processor and a non-transitory storage medium accessible to the hardware processor.
  • the non-transitory storage medium is configured to store modules including: an acquisition module, a judgment module, and a switching module.
  • the acquisition module is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the judgment module is configured to determine a category of the jump state according to the jump state information.
  • the switching module is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
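  • As a hedged illustration (not part of the disclosure), the following minimal Python sketch shows how the three modules above might cooperate; the class names, dictionary keys, and the voice-unit attributes and methods (fade_out, play, total_s, cut_in_s, mandatory_fade_s) are illustrative assumptions.

```python
from enum import Enum


class JumpCategory(Enum):
    HARD = "hard"  # current content interrupted by another player or character
    SOFT = "soft"  # local player switches between the same character's own skills


class AcquisitionModule:
    """Collects jump state information when the bound application content jumps."""

    def acquire(self, app_content: dict) -> dict:
        return app_content.get("jump_state_info", {})


class JudgmentModule:
    """Maps the jump state information to a jump category."""

    def categorize(self, info: dict) -> JumpCategory:
        return JumpCategory.HARD if info.get("interrupted_by_other") else JumpCategory.SOFT


class SwitchingModule:
    """Selects and applies a voice switching policy for the jump category."""

    def switch(self, category: JumpCategory, pre_voice, post_voice) -> None:
        if category is JumpCategory.HARD:
            # Hard jump: interrupt the current voice with its mandatory fade-out, then play the new one.
            pre_voice.fade_out(duration_s=pre_voice.mandatory_fade_s)
        else:
            # Soft jump: fade over the post-jump voice's unplayed time plus the mandatory fade time.
            remaining_s = post_voice.total_s - post_voice.cut_in_s
            pre_voice.fade_out(duration_s=remaining_s + pre_voice.mandatory_fade_s)
        post_voice.play()
```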
  • FIG. 1 is a schematic view of a flow of a first embodiment of an application voice playback switching method according to the present disclosure
  • FIG. 2 is a schematic view of a flow of a second embodiment of the application voice playback switching method according to the present disclosure
  • FIG. 3 is a schematic view of a flow of a third embodiment of the application voice playback switching method according to the present disclosure
  • FIG. 4 is a schematic view of functional modules of a first embodiment of an application voice playback switching apparatus according to the present disclosure
  • FIG. 5 is a schematic view of functional modules of a second embodiment of the application voice playback switching apparatus according to the present disclosure.
  • FIG. 6 is a schematic view of an example embodiment of a terminal according to embodiments of the present disclosure.
  • module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
  • ASIC Application Specific Integrated Circuit
  • FPGA field programmable gate array
  • processor shared, dedicated, or group
  • the term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
  • the exemplary environment may include a server, a client, and a communication network.
  • the server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc.
  • although only one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
  • the communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients.
  • communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless.
  • the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
  • the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device.
  • the client may include a network access device.
  • the client may be stationary or mobile.
  • a server may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines.
  • a server may also include one or more processors to execute computer programs in parallel.
  • a first embodiment of the present disclosure proposes an application voice playback switching method, which includes the following steps.
  • Step S101. Acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the operating environment of the method in this embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
  • the jump of the application content bound to a voice may be graphics jump, duplicate content jump and the like.
  • Step S102 Determine a category of the jump state according to the jump state information.
  • Step S103 Select a corresponding voice switching policy according to the category of the jump state and dynamically perform voice playback switch.
  • This embodiment classifies all the switching jump states in the application in advance; as one implementation manner, they may be divided into hard jumps and soft jumps.
  • a soft jump refers to switching between the local player's own skills;
  • a hard jump refers to another player interrupting the local player's current skill.
  • for example, a local player operates a character, and the character can trigger skills that the player plays back; each skill includes a voice and graphics bound to the voice. Suppose two skills are to be played and each skill takes 1.5 seconds to complete playback. If the player triggers the first skill and then triggers the second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
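  • As a hedged illustration of the skill example above (the 1.5 second duration comes from that example; the function and parameter names are assumptions), a jump can be tagged as soft when the local player triggers the second skill before the first finishes, and as hard when another player interrupts it:

```python
SKILL_DURATION_S = 1.5  # each skill takes 1.5 seconds to play back, as in the example above


def classify_jump(elapsed_s: float, triggered_by_local_player: bool) -> str:
    """Classify a skill switch that occurs while a skill voice is still playing."""
    if elapsed_s >= SKILL_DURATION_S:
        return "no jump"  # the first skill has already finished; nothing is interrupted
    return "soft" if triggered_by_local_player else "hard"


# The local player fires the second skill 0.9 s into the first one: soft jump.
print(classify_jump(0.9, triggered_by_local_player=True))   # soft
# Another player interrupts the local player's skill at 0.9 s: hard jump.
print(classify_jump(0.9, triggered_by_local_player=False))  # hard
```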
  • the voice unit may include an audio file that represents an action or a move of a game character in a video game.
  • different voice units may correspond to different moves of a game character.
  • different game characters may have different voice units when performing the same action or movement.
  • a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; when a suspend command is received, the voice unit is faded out to negative infinity within the set time and then recovered.
  • the voice units all come with a mandatory fade-out command, so that a hard jump is performed as a fade-out switch, achieving a natural-sounding transition.
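  • A minimal sketch of such a hard-jump policy, assuming hypothetical fade_to, release and play methods on the voice units and an illustrative 0.2 second mandatory fade time (the disclosure does not specify a value):

```python
NEG_INF = float("-inf")  # "negative infinity" here means full attenuation, i.e. silence


def hard_jump_switch(pre_voice, post_voice, mandatory_fade_s: float = 0.2) -> None:
    """Interrupt the pre-jump voice unit with its mandatory fade-out command,
    then start the post-jump voice unit (all interfaces are assumed)."""
    pre_voice.fade_to(NEG_INF, duration_s=mandatory_fade_s)  # mandatory fade-out to silence
    pre_voice.release()                                      # recover the faded-out unit
    post_voice.play()
```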
  • a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis automatically runs synchronously, in milliseconds, when the voice unit is played back.
  • the time-axis position of the voice unit being played before the jump may be acquired; from this position, playback of the pre-jump voice unit begins fading out to negative infinity and is then recovered, while the voice unit after the jump is played back.
  • the time over which playback of the pre-jump voice unit fades out to negative infinity may be calculated from the time cut-in position of the bound content after the jump and the total playback time of the post-jump voice unit, in combination with the preset mandatory fade-out time of the pre-jump voice unit.
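  • The calculation described above can be summarized in a short sketch; the function and parameter names, and the numbers in the comments, are illustrative assumptions rather than values given in the disclosure:

```python
def soft_jump_fade_s(total_after_s: float, cut_in_s: float, mandatory_fade_s: float) -> float:
    """Time over which the pre-jump voice unit fades out to negative infinity."""
    unplayed_after_s = total_after_s - cut_in_s  # part of the post-jump voice not yet played
    return unplayed_after_s + mandatory_fade_s


# Example with assumed numbers: a 1.5 s post-jump voice cut in at 0.9 s and a 0.2 s
# mandatory fade give a fade-out time of (1.5 - 0.9) + 0.2, i.e. about 0.8 s.
print(soft_jump_fade_s(1.5, 0.9, 0.2))  # about 0.8 (up to floating-point rounding)
```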
  • the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating an action jump of the player's leading role as a soft jump and an interruption by a non-leading role as a hard jump is only the default jump state classification in current game development; different soft and hard jump rules may be set for different game types. That is to say, in other implementation manners, the jump states may be classified in other ways, with a corresponding voice switching policy set for each jump state category, so as to improve the flexibility of voice switching and thereby the playback effects of the application.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump, determining the category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switching, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and flexibly switch or suspend any voice, thereby improving the flexibility of voice playback switching in the application.
  • a second embodiment of the present disclosure proposes an application voice playback switching method; compared with the first embodiment, this embodiment further details step S103 of the above embodiment, i.e., selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switching, while the other steps are the same as those in the first embodiment.
  • the method in this embodiment includes the following steps implemented by a terminal device.
  • Step S101 The terminal device acquires jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the step S101 is the same as that in the first embodiment, i.e., when monitoring that application content bound to a voice currently played by an application performs a jump, first acquire jump state information of the application content, so as to obtain the jump state of the bound application content from the jump state information.
  • Step S102. The terminal device determines a category of the jump state according to the jump state information. There are two categories of jump state: a hard jump and a soft jump. When the category of the jump state is a hard jump, perform step S1031; when the category is a soft jump, perform step S1032.
  • Step S1031. The terminal device interrupts playback of the voice unit being played before the jump, fades it out to negative infinity and recovers it, and plays back the voice unit after the jump.
  • Step S1032. The terminal device acquires a time axis position of a voice unit being played before the jump, to serve as a fade-out time point of the voice unit being played before the jump.
  • Step S1033. The terminal device acquires a time cut-in position of the application content after the jump, to serve as a time point when the voice unit after the jump starts to play.
  • Step S1034. The terminal device subtracts the time point when the voice unit after the jump starts to play from the total playback time of the voice unit after the jump, to obtain the remaining time in which the voice unit after the jump is not yet played.
  • Step S1035. The terminal device adds the remaining time in which the voice unit after the jump is not yet played to the preset mandatory fade-out time of the voice unit being played before the jump, to obtain the time in which the voice unit being played before the jump fades out to negative infinity.
  • Step S1036. Starting from the fade-out time point of the voice unit being played before the jump, the terminal device fades that voice unit out to negative infinity and recovers it within the time obtained in step S1035, and plays back the voice unit after the jump from the time point when it starts to play.
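  • Steps S1031 to S1036 can be combined into one hedged sketch; the voice-unit methods used here (position_s, total_s, fade_to_silence, release, play_from) are assumed interfaces, not an API defined by the disclosure:

```python
def switch_voice_on_jump(category: str, pre_voice, post_voice,
                         cut_in_s: float, mandatory_fade_s: float) -> None:
    """Apply the hard-jump or soft-jump voice switching policy (sketch)."""
    if category == "hard":
        # S1031: interrupt the pre-jump voice, fade it out within the mandatory time, recover it.
        pre_voice.fade_to_silence(duration_s=mandatory_fade_s)
        pre_voice.release()
        post_voice.play_from(0.0)
        return

    # Soft jump:
    fade_start_s = pre_voice.position_s()            # S1032: time-axis position of the pre-jump voice
    start_s = cut_in_s                               # S1033: cut-in position of the post-jump content
    unplayed_s = post_voice.total_s() - start_s      # S1034: time the post-jump voice is not yet played
    fade_duration_s = unplayed_s + mandatory_fade_s  # S1035: fade-out time for the pre-jump voice
    # S1036: fade the pre-jump voice out from fade_start_s over fade_duration_s, recover it,
    # and play the post-jump voice from its cut-in point.
    pre_voice.fade_to_silence(duration_s=fade_duration_s, start_at_s=fade_start_s)
    pre_voice.release()
    post_voice.play_from(start_s)
```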
  • This embodiment classifies all the switching jump states in the application in advance into two types, i.e., hard jump and soft jump, and sets a corresponding voice switching policy for each of the two jump state categories.
  • in a hard jump, an action of the game character is interrupted by another game character or player.
  • in a soft jump, the action of the game character is interrupted by the same game character itself or by the game player who controls that character.
  • each voice unit has a time axis, and the time axis automatically operates synchronously in milliseconds when the voice unit is played.
  • the voice units all come with a mandatory fade-out command, so that a hard jump is performed as a fade-out switch, achieving a natural-sounding transition.
  • the voice switching operation of the soft jump is essentially the same as that of the hard jump. Even if the switching content after the soft jump cuts in at any time point of the content, in the case of a jump switch the switching time may be automatically and flexibly determined, so that the voice unit after the jump is accessed smoothly.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump, determining the category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switching, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and flexibly switch or suspend any voice, thereby improving the flexibility of voice playback switching in the application.
  • a third embodiment of the present disclosure proposes an application voice playback switching method, and on the basis of the first embodiment, before the step S101: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, the method further includes:
  • Step S100. Setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application.
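  • A minimal sketch of step S100 under an assumed voice-unit interface (set_fade_command and the voice_units collection are hypothetical):

```python
def set_mandatory_fade(voice_units, fade_s: float) -> None:
    """Attach a mandatory fade-out-to-silence duration to every voice unit of the application."""
    for unit in voice_units:
        unit.set_fade_command(duration_s=fade_s)
```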
  • this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. By setting such a mandatory fade-out command for the voice units, when the category of the jump state is determined to be a hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be a soft jump, the time over which the voice unit being played before the jump fades out to negative infinity can be calculated from the preset mandatory fade-out time of that voice unit in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.
  • Through the above solution, that is, setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump; determining the category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, in which playback of the voice unit being played before the jump is interrupted and faded out to negative infinity within the set time of the mandatory command when the category is a hard jump, and the fade-out time of the pre-jump voice unit is calculated from its preset mandatory fade-out time in combination with the time point when the post-jump voice unit starts to play and the total playback time of the post-jump voice unit when the category is a soft jump, this embodiment attenuates the voice unit being played before the jump to negative infinity within the calculated time, thereby improving the flexibility of voice playback switching in the application.
  • the first embodiment of the present disclosure proposes an application voice playback switching apparatus 200.
  • the apparatus includes a hardware processor 210 and a non-transitory storage medium 220 accessible to the hardware processor 210.
  • the non-transitory storage medium 220 is configured to store modules including: an acquisition module 201, a judgment module 202 and a switching module 203.
  • the apparatus may be a user terminal.
  • the acquisition module 201 is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the judgment module 202 is configured to determine a category of the jump state according to the jump state information.
  • the switching module 203 is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • This embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
  • the jump of the application content bound to a voice may be graphics jump, duplicate content jump and the like.
  • the acquisition module 201 acquires jump state information of the application content, so that the judgment module 202 can obtain the jump state of the bound application content from the jump state information.
  • the switching module 203 selects a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • This embodiment classifies all the switching jump states in the application in advance; as one implementation manner, they may specifically be divided into hard jumps and soft jumps.
  • a soft jump refers to switching between the local player's own skills;
  • a hard jump refers to another player interrupting the local player's current skill.
  • for example, a local player operates a character, and the character can trigger skills that the player plays back; each skill includes a voice and graphics bound to the voice. Suppose two skills are to be played and each skill takes 1.5 seconds to complete playback. If the player triggers the first skill and then triggers the second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
  • playback of the voice unit before the jump may be directly interrupted, and the voice unit after the jump is played back.
  • the voice unit is represented in frames.
  • a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; when a suspend command is received, the voice unit is faded out to negative infinity within the set time and then recovered.
  • a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis automatically runs synchronously, in milliseconds, when the voice unit is played.
  • the time-axis position of the voice unit being played before the jump may be acquired; from this position, playback of the pre-jump voice unit begins fading out to negative infinity and is then recovered, while the voice unit after the jump is played back.
  • the time over which playback of the pre-jump voice unit fades out to negative infinity may be calculated from the time cut-in position of the bound content after the jump and the total playback time of the post-jump voice unit, in combination with the preset mandatory fade-out time of the pre-jump voice unit.
  • the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating an action jump of the player's leading role as a soft jump and an interruption by a non-leading role as a hard jump is only the default jump state classification in current game development; different soft and hard jump rules may be set for different game types. That is to say, in other implementation manners, the jump states may be classified in other ways, with a corresponding voice switching policy set for each jump state category, so as to improve the flexibility of voice switching and thereby the playback effects of the application.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump, determining the category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switching, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and flexibly switch or suspend any voice, thereby improving the flexibility of voice playback switching in the application.
  • each voice unit has a time axis, and the time axis automatically operates synchronously in milliseconds when the voice unit is played.
  • the voice units all come with a mandatory fade-out command, so that a hard jump is performed as a fade-out switch, achieving a natural-sounding transition.
  • the specific calculation process may include the following acts implemented by a terminal device:
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump, determining the category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switching, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and flexibly switch or suspend any voice, thereby improving the flexibility of voice playback switching in the application.
  • the second embodiment of the present disclosure proposes an application voice playback switching apparatus, and on the basis of the first embodiment, the apparatus may further include:
  • this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. By setting such a mandatory fade-out command for the voice units, when the category of the jump state is determined to be a hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be a soft jump, the time over which the voice unit being played before the jump fades out to negative infinity can be calculated from the preset mandatory fade-out time of that voice unit in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.
  • Through the above solution, that is, setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquiring jump state information of application content when monitoring that the application content bound to a currently played voice executes a jump; determining the category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, in which playback of the voice unit being played before the jump is interrupted and faded out to negative infinity within the set time of the mandatory command when the category is a hard jump, and the fade-out time of the pre-jump voice unit is calculated from its preset mandatory fade-out time in combination with the time point when the post-jump voice unit starts to play and the total playback time of the post-jump voice unit when the category is a soft jump, this embodiment attenuates the voice unit being played before the jump to negative infinity within the calculated time, thereby improving the flexibility of voice playback switching in the application.
  • FIG. 6 shows a block diagram of an example embodiment of the terminal.
  • the terminal includes a radio frequency (RF) circuit 20, a memory 21 including one or more computer-readable storage media, an input unit 22, a display unit 23, a sensor 24, an audio circuit 25, a wireless fidelity (WiFi) module 26, a processor 27 including one or more cores, a power supply 28, and the like.
  • RF radio frequency
  • the structure of the terminal shown in FIG. 6 is not limiting; the terminal can include fewer or more components, or other combinations or arrangements of components.
  • the RF circuit 20 can be used for receiving and sending signals during a call or during the process of receiving and sending messages. Specifically, the RF circuit 20 receives downlink information from the base station and sends it to the processor 27, or sends uplink data to the base station.
  • the RF circuit 20 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a diplexer, and the like.
  • the RF circuit 20 can communicate with the network or other devices via wireless communication.
  • Such wireless communication can use any communication standard or protocol, including, but not limited to, Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, or Short Messaging Service (SMS).
  • GSM Global System of Mobile communication
  • GPRS General Packet Radio Service
  • CDMA Code Division Multiple Access
  • WCDMA Wideband Code Division Multiple Access
  • LTE Long Term Evolution
  • SMS Short Messaging Service
  • the memory 21 is configured to store software programs and modules to be run by the processor 27, so as to perform multiple functional applications of the mobile phone and data processing.
  • the memory 21 mainly includes a program storage area and a data storage area.
  • the program storage area can store the operating system and at least one application program with a required function (such as a sound playing function, an image playing function, etc.).
  • the data storage area can store data created according to the actual use of the mobile phone (such as audio data, a phonebook, etc.).
  • the memory 21 can be a high-speed random access memory, or a non-volatile memory such as magnetic disk storage, a flash memory device, or other non-volatile solid-state memory devices.
  • the memory 21 may include a memory controller to provide the processor 27 and the input unit 22 with access to the memory 21.
  • the input unit 22 is configured to receive entered number or character information, and key signal input related to user settings and function control.
  • the input unit 22 includes a touch-sensitive surface 221 or other input devices 222.
  • the touch-sensitive surface 221, also called a touch screen or touch panel, can collect the user's touch operations on or near it (for example, operations performed on or near the touch-sensitive surface 221 with a finger, a stylus, or the like), and drive the corresponding connection device according to a preset program.
  • the touch-sensitive surface 221 includes two portions: a touch detection device and a touch controller.
  • the touch detection device is configured to detect the user's touch position and the signals generated by the touch operation, and then send the signals to the touch controller.
  • the touch controller receives the touch information from the touch detection device, converts it into contact coordinates to be sent to the processor 27, and then receives and executes commands sent by the processor 27.
  • the input unit 22 can also include, but is not limited to, other input devices 222, such as one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, and an operating lever.
  • the display unit 23 is configured to display information entered by the user or information supplied to the user, and menus of the mobile phone.
  • the display unit 23 includes a display panel 231, such as a Liquid Crystal Display (LCD), or an Organic Light-Emitting Diode (OLED).
  • the display panel 231 can be covered by the touch-sensitive surface 221; after touch operations are detected on or near the touch-sensitive surface 221, they are sent to the processor 27 to determine the type of the touch event, and the processor 27 then supplies the corresponding visual output to the display panel 231 according to the type of the touch event.
  • the touch-sensitive surface 221 and the display panel 231 are shown as two separate components implementing input and output, but in some embodiments they can be integrated to implement both input and output.
  • the terminal may include at least one sensor 24, such as light sensors, motion sensors, or other sensors.
  • the light sensors include an ambient light sensor for adjusting the brightness of the display panel 231 according to the ambient light, and a proximity sensor for turning off the display panel 231 and/or the backlight when the terminal is moved close to the ear.
  • an accelerometer, as one of the motion sensors, can detect the magnitude of acceleration in each direction (generally triaxial) and detect the magnitude and direction of gravity when stationary, which is applicable to applications that identify the attitude of the mobile phone (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and to vibration recognition related functions (such as a pedometer or tap detection).
  • the terminal can also be configured with other sensors (such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc.), whose detailed descriptions are omitted here.
  • the audio circuit 25, the speaker 251 and the microphone 252 provide an audio interface between the user and the terminal. Specifically, the audio circuit 25 receives audio data, converts it into electrical signals and transmits them to the speaker 251, which converts them into a sound signal for output. Conversely, the sound signal collected by the microphone 252 is converted into electrical signals, which are received by the audio circuit 25 and converted into audio data. The audio data is then output to the processor 27 for processing and sent to another mobile phone via the RF circuit 20, or sent to the memory 21 for further processing.
  • the audio circuit 25 may further include an earphone jack to provide communication between an external earphone and the terminal.
  • WiFi is a short-range wireless transmission technology providing wireless broadband Internet access, through which the mobile phone can help the user receive and send email, browse the web, access streaming media, and the like.
  • although the WiFi module 26 is illustrated in FIG. 6, it should be understood that the WiFi module 26 is not essential to the terminal and can be omitted according to actual demand without changing the essence of the present disclosure.
  • the processor 27 is the control center of the mobile phone, which connects to every part of the mobile phone through various interfaces and circuits, and performs various functions and processes data by running or executing the software programs and/or modules stored in the memory 21 and invoking data stored in the memory 21.
  • the processor 27 may include one or more processing units.
  • the processor 27 can integrate an application processor and a modem processor; the application processor mainly handles the operating system, the user interface, applications, and the like, while the modem processor handles wireless communication. It can be understood that the modem processor may optionally not be integrated into the processor 27.
  • the terminal may include a power supply 28 (such as a battery) supplying power to each component; preferably, the power supply can be connected to the processor 27 through a power management system, so as to manage charging, discharging and power consumption.
  • the power supply 28 may include one or more AC or DC power sources, recharging systems, power failure detection circuits, power converters or inverters, power status indicators, and the like.
  • the terminal may include a camera, and a Bluetooth module, etc., which are not illustrated.
  • the processor 27 of the terminal executes an executable file stored in the memory 21 according to one or more programs of the application, so as to perform the following steps.
  • the terminal is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the terminal is configured to determine a category of the jump state according to the jump state information.
  • the terminal is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • the methods in the above embodiments may be implemented by software running on a necessary general-purpose hardware platform, and certainly may also be implemented by hardware; however, in most circumstances, the former is the preferred implementation manner.
  • the technical solution of the present disclosure or the part that makes contributions to the prior art can be substantially embodied in the form of a software product.
  • the computer software product may be stored in a storage medium (for example, a ROM/RAM, a magnetic disk, or an optical disk), and contain several instructions to instruct a terminal device (for example, a mobile phone, a computer, a server, or a device) to perform the methods as described in the embodiments of the present disclosure.
  • program instructions corresponding to the application voice playback switching apparatuses in FIG. 4 and FIG. 5 can be stored in a readable storage medium of a computer, a server or other terminals, and are executed by at least one processor therein, so as to implement the application voice playback switching methods in FIG. 1 to FIG. 3.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Studio Circuits (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • User Interface Of Digital Computer (AREA)
PCT/CN2014/080232 2013-08-20 2014-06-18 Audio calling method and device thereof WO2015024409A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310364343.1A CN104423924B (zh) 2013-08-20 2013-08-20 Application voice playback switching method and apparatus
CN201310364343.1 2013-08-20

Publications (1)

Publication Number Publication Date
WO2015024409A1 true WO2015024409A1 (en) 2015-02-26

Family

ID=52483035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080232 WO2015024409A1 (en) 2013-08-20 2014-06-18 Audio calling method and device thereof

Country Status (2)

Country Link
CN (1) CN104423924B (zh)
WO (1) WO2015024409A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107233734B (zh) * 2017-06-07 2020-08-11 Zhuhai Kingsoft Online Game Technology Co., Ltd. Method and apparatus for controlling sound playback of a game application and other applications
CN110265017B (zh) * 2019-06-27 2021-08-17 Baidu Online Network Technology (Beijing) Co., Ltd. Voice processing method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007044329A2 (en) * 2005-10-04 2007-04-19 Run-Tech Llc System and method for selecting music to guide a user through an activity
US20070208770A1 (en) * 2006-01-23 2007-09-06 Sony Corporation Music content playback apparatus, music content playback method and storage medium
WO2008150340A1 (en) * 2007-05-31 2008-12-11 Sony Computer Entertainment America Inc. System and method for taking control of a system during a commercial break
WO2011069357A1 (zh) * 2009-12-10 2011-06-16 腾讯科技(深圳)有限公司 一种音量动态调节的方法及装置


Also Published As

Publication number Publication date
CN104423924B (zh) 2019-01-29
CN104423924A (zh) 2015-03-18


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14837251

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 130716)

122 Ep: pct application non-entry in european phase

Ref document number: 14837251

Country of ref document: EP

Kind code of ref document: A1