WO2015184959A2 - Method and apparatus for playing behavior event - Google Patents

Method and apparatus for playing behavior event

Info

Publication number
WO2015184959A2
WO2015184959A2 (PCT/CN2015/080100)
Authority
WO
WIPO (PCT)
Prior art keywords
event
sound
playback
behavior event
behavior
Prior art date
Application number
PCT/CN2015/080100
Other languages
French (fr)
Other versions
WO2015184959A3 (en)
Inventor
Xiaorong Chen
Longfeng WEI
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to SG11201605960WA
Priority to MYPI2016703177A (MY196865)
Publication of WO2015184959A2
Publication of WO2015184959A3

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07FCOIN-FREED OR LIKE APPARATUS
    • G07F17/00Coin-freed apparatus for hiring articles; Coin-freed facilities or services
    • G07F17/32Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
    • G07F17/3202Hardware aspects of a gaming system, e.g. components, construction, architecture thereof
    • G07F17/3204Player-machine interfaces

Definitions

  • the present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for playing a behavior event.
  • As the number of online game applications increases, the types of online game applications also become diversified. Poor user experience of an online game application leads to a decline in the number of users.
  • the number of users is an important index for measuring the performance of an online game application, while the quality of behavior event playback of the online game application influences the number of users of the online game application, and therefore, how to play a behavior event so as to improve the number of users of the online game application becomes a concern to those skilled in the art.
  • a sound event is set for each behavior event of each object in an online game application, where each sound event includes a sound effect; and then, during playback of a behavior event of a given object in the online game application, the corresponding sound event is played.
  • each behavior event of each object corresponds to one sound event, which leads to a monotonous sound effect during playback of the behavior event.
  • embodiments of the present invention provide a method and an apparatus for playing a behavior event.
  • the technical solutions are as follows:
  • the behavior event played in the method corresponding to at least two sound events, the method including:
  • an apparatus for playing a behavior event including:
  • a first determining module configured to determine at least two sound events corresponding to a current to-be-played behavior event
  • an acquiring module configured to acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration;
  • a second determining module configured to determine playback time information of each sound event according to the state of the behavior event
  • a playback module configured to play each sound event according to the playback time information of each sound event during playback of the current behavior event.
  • At least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event is determined according to the state of the behavior event, and each sound event is played according to the playback time information of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
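The four steps of the summarized method can be sketched as follows; every name here is an illustrative placeholder, not an identifier from the patent:

```python
# Minimal sketch of the claimed method; the callbacks stand in for the
# components described in the embodiments (correspondence table, state
# query, per-state timing table, and sound playback).
def play_behavior_event(behavior_event, lookup_sounds, get_state,
                        timing_for, schedule_sound):
    sounds = lookup_sounds(behavior_event)   # step 1: at least two sound events
    state = get_state(behavior_event)        # step 2: acquire the event's state
    for sound in sounds:                     # step 3: state -> playback time info
        playback_point, duration = timing_for(sound, state)
        schedule_sound(sound, playback_point, duration)  # step 4: play at its point
```

Each callback corresponds to one module of the apparatus described below, so the sketch doubles as an outline of the module structure.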
  • FIG. 1 is a flowchart of a method for playing a behavior event according to an embodiment of the present invention
  • FIG. 2 is a flowchart of a method for playing a behavior event according to another embodiment of the present invention.
  • FIG. 3 is a schematic diagram of sound events and playback duration corresponding to a behavior event according to another embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a terminal according to another embodiment of the present invention.
  • an embodiment of the present invention provides a method for playing a behavior event, where a behavior event played by using this method corresponds to at least two sound events.
  • a process of the method provided in this embodiment includes:
  • before the determining at least two sound events corresponding to a current to-be-played behavior event, the method further includes:
  • the determining at least two sound events corresponding to a current to-be-played behavior event includes:
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • the state of the behavior event is a current playback frequency of the behavior event
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • before the determining playback time information of each sound event according to the current playback frequency of the behavior event, the method further includes:
  • the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
  • At least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event is determined according to the state of the behavior event, and each sound event is played according to the playback time information of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
  • this embodiment of the present invention provides a method for playing a behavior event, where the behavior event may be a behavior event configured in a web application, for example, a behavior event configured for a virtual character in the web application, and none of the embodiments of the present invention defines a specific behavior event.
  • a behavior event played by using the method provided in this embodiment corresponds to at least two sound events. Referring to FIG. 2, a process of the method provided in this embodiment includes:
  • At least two sound events may be set in advance during setting of a web application; at least two sound events corresponding to each behavior event are determined among the preset sound events, and a correspondence between each behavior event and sound events is stored.
  • the number of preset sound events may be 10, 20, 50, or the like, and this embodiment does not specifically limit the number of preset sound events.
  • This embodiment does not specifically limit the manner of determining, among the preset sound events, at least two sound events corresponding to each behavior event.
  • at least two sound events corresponding to each behavior event may be determined among the preset sound events according to behavior content of each behavior event.
  • the behavior event is Lü Bu riding on his horse and holding a halberd; in this case, according to the content of the behavior event, four sound events may be set for the behavior event: a sound event of horse neighing, a sound event of halberd slashing, a sound event of character yelling, and a sound event of horseshoes landing on the ground.
  • the behavior event is Guan Yu riding on his horse and holding a broadsword; in this case, according to the content of the behavior event, three sound events may be set for the behavior event: a sound event of horse neighing, a sound event of broadsword slashing, and a sound event of character yelling.
  • the manner of storing a correspondence between each behavior event and sound events includes, but is not limited to, storing the correspondence between each behavior event and sound events into a web server.
  • a form for storing the correspondence between each behavior event and sound events includes, but is not limited to, storing the correspondence between each behavior event and sound events in a form of a table.
  • the at least two sound events corresponding to the current to-be-played behavior event may be determined according to the correspondence between each behavior event and sound events.
  • As shown in Table 1, if the current to-be-played behavior event is behavior event 1, it can be determined, according to the stored correspondence between each behavior event and sound events, that the sound events corresponding to behavior event 1 are: sound event A, sound event B, and sound event C.
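Stored in the form of a table, the determining step amounts to a lookup. A hypothetical reconstruction of Table 1 (the table itself is not reproduced in this text) could look like:

```python
# Hypothetical contents of Table 1: behavior event -> its sound events.
BEHAVIOR_TO_SOUNDS = {
    "behavior event 1": ["sound event A", "sound event B", "sound event C"],
}

def sounds_for(behavior_event):
    """Determine the sound events corresponding to a to-be-played behavior event."""
    return BEHAVIOR_TO_SOUNDS[behavior_event]
```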
  • the state of the behavior event includes, but is not limited to, a current playback frequency of the behavior event, and this embodiment does not specifically limit the state of the behavior event.
  • in different states, the behavior event is played in different forms, and different playback forms of the behavior event bring different experience effects to users; therefore, in order to improve experience effects for the users, the method provided in this embodiment needs to acquire the state of the behavior event.
  • each sound event corresponding to a behavior event has different playback time information in a sound track when the behavior event is in different states.
  • the playback time information includes, but is not limited to, a playback point and playback duration, and this embodiment does not specifically limit the playback time information.
  • the playback point is a playback start time point when the sound event corresponding to the behavior event starts to be played during a playback time of the behavior event. When the playback time of the behavior event reaches the playback point of the sound event corresponding to the behavior event, the sound event corresponding to the behavior event is played.
  • the position of the playback point of the sound event may be at different positions of the playback duration of the behavior event, for example at 1/2 or 1/3 of the playback duration of the behavior event, and this embodiment does not specifically limit the position of the playback point of the sound event in the playback duration of the behavior event.
  • the playback duration of the sound event is the length of playback of the sound event, and the playback duration of the sound event may be 1 minute, 2 minutes, 3 minutes, or the like; this embodiment does not specifically limit the playback duration of the sound event.
  • Taking behavior event 1 in Table 1 as an example, playback points and playback duration of the sound events corresponding to behavior event 1 are shown in FIG. 3, and it can be learned from FIG. 3 that the states of behavior event 1 include state a and state b.
  • the playback point of sound event A in the sound track is at 1/4 of the playback duration of the behavior event, and the playback duration of sound event A is 1 second; the playback point of sound event B in the sound track is at 1/2 of the playback duration of the behavior event, and the playback duration of sound event B is 2 seconds; and the playback point of sound event C in the sound track is at 3/4 of the playback duration of the behavior event, and the playback duration of sound event C is 2.5 seconds.
  • the playback point of sound event A in the sound track is at 1/2 of the playback duration of the behavior event, and the playback duration of sound event A is 3 seconds; the playback point of sound event B in the sound track is at 1/3 of the playback duration of the behavior event, and the playback duration of sound event B is 1 second; and the playback point of sound event C in the sound track is at 3/5 of the playback duration of the behavior event, and the playback duration of sound event C is 0.5 seconds.
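The values just listed for states a and b can be arranged as a per-state table, with each entry holding (playback point as a fraction of the behavior event's duration, playback duration in seconds):

```python
# Playback time information for behavior event 1, per state, as described
# for FIG. 3 above.
FIG3_TIMING = {
    "state a": {"A": (1 / 4, 1.0), "B": (1 / 2, 2.0), "C": (3 / 4, 2.5)},
    "state b": {"A": (1 / 2, 3.0), "B": (1 / 3, 1.0), "C": (3 / 5, 0.5)},
}

def timing_for(sound_event, state):
    """Look up (playback point fraction, playback duration) for a given state."""
    return FIG3_TIMING[state][sound_event]
```

Because the outer key is the state, the same sound event resolves to different playback time information in different states, which is the property the acquiring step relies on.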
  • Each sound event corresponding to the behavior event has a corresponding playback point in the behavior event, and when the playback time of the behavior event reaches the playback point of each sound event corresponding to the behavior event, each sound event corresponding to the behavior event is played. Further, the state of the behavior event determines the playback time information of each sound event corresponding to the behavior event, and when the behavior event is in different states, the playback time information of each sound event corresponding to the behavior event is different. Therefore, during determining of the playback time information of each sound event corresponding to the behavior event, the playback time information may be determined according to the state of the behavior event.
  • the manner of determining playback time information of each sound event according to the state of the behavior event includes, but is not limited to: determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
  • states of behavior event 1 include state a and state b, and sound events corresponding to behavior event 1 are sound event A and sound event B; moreover, the playback point of sound event A is at 1/3 of the playback duration of the behavior event, and the playback duration of sound event A is 10 seconds; the playback point of sound event B is at 2/3 of the playback duration of the behavior event, and the playback duration of sound event B is 20 seconds.
  • the playback duration of the behavior event is 3 minutes, and in this case, it is determined according to state a of the behavior event that the playback point of sound event A is at 1 minute in the playback time of the behavior event, and when the playback time of the behavior event reaches 1 minute, sound event A is played for 10 seconds; and it is determined according to state a of the behavior event that the playback point of sound event B is at 2 minutes in the playback time of the behavior event, and when the playback time of the behavior event reaches 2 minutes, sound event B is played for 20 seconds.
  • the playback duration of the behavior event is 1 minute, and in this case, it is determined according to state b of the behavior event that the playback point of sound event A is at 20 seconds in the playback time of the behavior event, and when the playback time of the behavior event reaches 20 seconds, sound event A is played for 10 seconds; and it is determined according to state b of the behavior event that the playback point of sound event B is at 40 seconds in the playback time of the behavior event, and when the playback time of the behavior event reaches 40 seconds, sound event B is played for 20 seconds.
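Converting a playback point expressed as a fraction into an absolute start time is a single multiplication; integer seconds are used here so the worked numbers from the two examples above come out exactly:

```python
def playback_point_s(numerator, denominator, behavior_duration_s):
    """Absolute start time: the sound begins at numerator/denominator of
    the behavior event's playback duration (all times in whole seconds)."""
    return behavior_duration_s * numerator // denominator

# State a: the behavior event lasts 3 minutes (180 seconds).
assert playback_point_s(1, 3, 180) == 60    # sound event A starts at 1 minute
assert playback_point_s(2, 3, 180) == 120   # sound event B starts at 2 minutes
# State b: the behavior event lasts 1 minute (60 seconds).
assert playback_point_s(1, 3, 60) == 20     # sound event A starts at 20 seconds
assert playback_point_s(2, 3, 60) == 40     # sound event B starts at 40 seconds
```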
  • the state of the behavior event includes, but is not limited to, the current playback frequency of the behavior event
  • the manner of determining playback time information of each sound event according to the state of the behavior event includes, but is not limited to:
  • the method provided in this embodiment needs to set and store in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events.
  • This embodiment does not specifically limit the manner of setting in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events.
  • the correspondence may be determined according to a proportional relation between playback frequencies of each behavior event and playback time information of sound events. Because the playback time information not only includes the playback point but also includes the playback duration, when the correspondence between playback frequencies of each behavior event and playback time information of sound events is set in advance, descriptions with respect to the playback point and playback duration in the playback time information are separately made below.
  • the behavior event is behavior event 2
  • the sound event corresponding to behavior event 2 is sound event C
  • a correspondence between playback frequencies of the behavior event and playback time information of the sound event is set in advance; if it is known that when the playback frequency of the behavior event is a, the playback point of sound event C is at 1/m of the playback duration of the behavior event, it can be determined that when the playback frequency of behavior event 2 is b, the playback point of sound event C is at b/(am) of the playback duration of the behavior event.
  • the behavior event is behavior event 2
  • the sound event corresponding to behavior event 2 is sound event C
  • a correspondence between playback frequencies of the behavior event and playback time information of the sound event is set in advance; if it is known that when the playback frequency of the behavior event is a, the playback duration of sound event C is t1, it can be determined that when the playback frequency of behavior event 2 is b, the playback duration of sound event C is t1·b/a.
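The proportional rule in the two statements above can be written down directly; the symbols a, b, m, and t1 follow the text:

```python
def scaled_point_fraction(a, b, m):
    """Playback point fraction at frequency b, given a point of 1/m at frequency a."""
    return b / (a * m)

def scaled_duration(a, b, t1):
    """Playback duration at frequency b, given a duration of t1 at frequency a."""
    return t1 * b / a
```

For example, doubling the playback frequency (b = 2a) doubles both the playback point fraction and the playback duration.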
  • the manner of storing a correspondence between playback frequencies of each behavior event and playback time information of sound events includes, but is not limited to, storing the correspondence between playback frequencies of each behavior event and playback time information of sound events into a server.
  • a form for storing the correspondence between playback frequencies of each behavior event and playback time information of sound events includes, but is not limited to, storing the correspondence between playback frequencies of each behavior event and playback time information of sound events in a form of a table.
  • For details of the correspondence between playback frequencies of each behavior event and playback time information of sound events stored in the form of a table, reference may be made to Table 2.
  • playback time information of each sound event may be determined according to the correspondence between playback frequencies of each behavior event and playback time information of sound events.
  • the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
  • In step 203, the playback time information of each sound event has been determined according to the state of the behavior event, and the playback time information of each sound event includes the playback point and playback duration of that sound event. Therefore, in this step, on the basis of step 203, each sound event is played according to the playback time information of each sound event during playback of the current behavior event.
  • a process of playing each sound event according to the playback time information of each sound event during playback of the current behavior event includes, but is not limited to, playing each sound event according to the playback duration of each sound event when the playback time of the current behavior event reaches the playback point in the playback time information of each sound event.
  • the current behavior event is behavior event 1
  • the sound events corresponding to behavior event 1 are sound event A and sound event B
  • the playback duration of the current behavior event is 5 minutes
  • the playback point of sound event A is at 1/5 of the playback duration of the behavior event and the playback duration of sound event A is 20 seconds
  • the playback point of sound event B is at 3/5 of the playback duration of the behavior event and the playback duration of sound event B is 30 seconds; in this case, when the playback time of the behavior event reaches 1 minute, sound event A is played for 20 seconds, and when the playback time of the behavior event reaches 3 minutes, sound event B is played for 30 seconds.
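The trigger rule of this step — a sound event starts once the behavior event's playback time reaches that sound's playback point — can be sketched as a predicate over a schedule (names are illustrative; a 5-minute event puts 1/5 at the 60-second mark and 3/5 at the 180-second mark):

```python
def sounds_playing(elapsed_s, schedule):
    """Sound events whose playback point has been reached and whose
    playback duration has not yet elapsed.

    schedule maps a sound event name to (start time in s, duration in s).
    """
    return sorted(name for name, (start_s, duration_s) in schedule.items()
                  if start_s <= elapsed_s < start_s + duration_s)

# 5-minute behavior event: A at 1/5 (60 s) for 20 s, B at 3/5 (180 s) for 30 s.
schedule = {"sound event A": (60, 20), "sound event B": (180, 30)}
assert sounds_playing(70, schedule) == ["sound event A"]
assert sounds_playing(190, schedule) == ["sound event B"]
assert sounds_playing(150, schedule) == []
```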
  • At least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event corresponding to the behavior event is determined according to the state of the behavior event, and each sound event is played when a playback time of the current behavior event reaches a playback point of each sound event. Because the current to-be-played behavior event corresponds to at least two sound events, sounds during playback of the behavior event are enriched.
  • an embodiment of the present invention provides an apparatus for playing a behavior event, where a behavior event played by the apparatus corresponds to at least two sound events, and the apparatus includes:
  • a first determining module 401 configured to determine at least two sound events corresponding to a current to-be-played behavior event
  • an acquiring module 402 configured to acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration;
  • a second determining module 403 configured to determine playback time information of each sound event according to the state of the behavior event
  • a playback module 404 configured to play each sound event according to the playback time information of each sound event during playback of the current behavior event.
  • the apparatus further includes:
  • a third determining module 405, configured to determine, among preset sound events, at least two sound events corresponding to each behavior event;
  • a first storage module 406 configured to store a correspondence between each behavior event and sound events, where
  • the first determining module 401 is configured to determine, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
  • the second determining module 403 is configured to determine, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determine the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
  • the state of the behavior event is a current playback frequency of the behavior event
  • the second determining module 403 is configured to determine playback time information of each sound event according to the current playback frequency of the behavior event.
  • the apparatus further includes:
  • a second storage module 407 configured to store in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events,
  • the second determining module 403 is configured to search the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
  • the apparatus determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event when a playback time of the current behavior event reaches a playback point of each sound event. Because the current to-be-played behavior event corresponds to at least two sound events, sounds during playback of the behavior event are enriched.
  • FIG. 7 shows a schematic structural diagram of a terminal involved in the embodiments of the present invention, and the terminal may be used to implement the method for playing a behavior event provided in the foregoing embodiment.
  • the terminal 700 may include components such as a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (WiFi) module 170, a processor 180 including one or more processing cores, and a power supply 190.
  • the RF circuit 110 may be configured to receive and send signals during an information receiving and sending process or a call process. Particularly, the RF circuit 110 receives downlink information from a base station, then delivers the downlink information to one or more processors 180 for processing, and sends related uplink data to the base station.
  • the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer.
  • the RF circuit 110 may also communicate with a network and other devices by wireless communication.
  • the wireless communication may use any communications standard or protocol, which includes, but is not limited to, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 120 may be configured to store a software program and module.
  • the processor 180 runs the software program and module stored in the memory 120, to implement various functional applications and data processing.
  • the memory 120 may mainly include a program storage area and a data storage area.
  • the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function and an image display function), and the like.
  • the data storage area may store data (such as audio data and an address book) created according to use of the terminal 700, and the like.
  • the memory 120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory, or another non-volatile solid-state storage device.
  • the memory 120 may further include a memory controller, so as to provide access of the processor 180 and the input unit 130 to the memory 120.
  • the input unit 130 may be configured to receive input digit or character information, and generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • the input unit 130 may include a touch-sensitive surface 131 and another input device 132.
  • the touch-sensitive surface 131, which may also be referred to as a touch screen or a touch panel, may collect a touch operation of a user on or near the touch-sensitive surface (such as an operation of a user on or near the touch-sensitive surface 131 by using any suitable object or accessory, such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program.
  • the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller.
  • the touch detection apparatus detects a touch position of the user, detects a signal generated by the touch operation, and transfers the signal to the touch controller.
  • the touch controller receives the touch signal from the touch detection apparatus, converts the touch signal into touch point coordinates, and sends the touch point coordinates to the processor 180.
  • the touch controller can receive and execute a command sent from the processor 180.
  • the touch-sensitive surface 131 may be a resistive, capacitive, infrared, or surface acoustic wave type touch-sensitive surface.
  • the input unit 130 may further include another input device 132.
  • the other input device 132 may include, but is not limited to, one or more of a physical keyboard, a functional key (such as a volume control key or a switch key), a trackball, a mouse, and a joystick.
  • the display unit 140 may be configured to display information input by the user or information provided for the user, and various graphical user interfaces of the terminal 700.
  • the graphical user interfaces may be formed by a graph, a text, an icon, a video, or any combination thereof.
  • the display unit 140 may include a display panel 141.
  • the display panel 141 may be configured by using a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch-sensitive surface 131 may cover the display panel 141. After detecting a touch operation on or near the touch-sensitive surface 131, the touch-sensitive surface 131 transfers the touch operation to the processor 180, so as to determine the type of the touch event.
  • the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • although the touch-sensitive surface 131 and the display panel 141 are used as two separate parts to implement input and output functions, in some embodiments, the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
  • the terminal 700 may further include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors.
  • the optical sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor can adjust luminance of the display panel 141 according to brightness of the ambient light.
  • the proximity sensor may switch off the display panel 141 and/or backlight when the terminal 700 is moved to the ear.
  • a gravity acceleration sensor can detect the magnitude of acceleration in various directions (generally on three axes), may detect the magnitude and direction of gravity when static, and may be applied to applications that recognize the attitude of the terminal (for example, switching between landscape orientation and portrait orientation, related games, and magnetometer attitude calibration), functions related to vibration recognition (such as a pedometer and knock detection), and the like.
  • Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be configured in the terminal 700, are not further described herein.
  • the audio circuit 160, a loudspeaker 161, and a microphone 162 may provide audio interfaces between the user and the terminal 700.
  • the audio circuit 160 may convert received audio data into an electric signal and transmit the electric signal to the loudspeaker 161.
  • the loudspeaker 161 converts the electric signal into a sound signal for output.
  • the microphone 162 converts a collected sound signal into an electric signal.
  • the audio circuit 160 receives the electric signal and converts the electric signal into audio data, and outputs the audio data to the processor 180 for processing. Then, the processor 180 sends the audio data to, for example, another terminal by using the RF circuit 110, or outputs the audio data to the memory 120 for further processing.
  • the audio circuit 160 may further include an earphone jack, so as to provide communication between a peripheral earphone and the terminal 700.
  • WiFi is a short distance wireless transmission technology.
  • the terminal 700 may help, by using the WiFi module 170, the user to receive and send e-mails, browse a webpage, access streaming media, and so on, which provides wireless broadband Internet access for the user.
  • although FIG. 7 shows the WiFi module 170, it may be understood that the WiFi module 170 is not a necessary component of the terminal 700, and when required, the WiFi module 170 may be omitted as long as the scope of the essence of the present disclosure is not changed.
  • the processor 180 is the control center of the terminal 700, and is connected to various parts of the terminal 700 by using various interfaces and lines. By running or executing the software program and/or module stored in the memory 120, and invoking data stored in the memory 120, the processor 180 performs various functions and data processing of the terminal 700, thereby performing overall monitoring on the terminal 700.
  • the processor 180 may include one or more processing cores.
  • the processor 180 may integrate an application processor and a modem.
  • the application processor mainly processes an operating system, a user interface, an application program, and the like.
  • the modem mainly processes wireless communication. It may be understood that the modem may alternatively not be integrated into the processor 180.
  • the terminal 700 further includes the power supply 190 (such as a battery) for supplying power to the components.
  • the power supply may be logically connected to the processor 180 by using a power management system, thereby implementing functions such as charging, discharging and power consumption management by using the power management system.
  • the power supply 190 may further include one or more of a direct current or alternating current power supply, a re-charging system, a power failure detection circuit, a power supply converter or inverter, a power supply state indicator, and any other components.
  • the terminal 700 may further include a camera, a Bluetooth module, and the like, which are not further described herein.
  • the display unit of the terminal 700 is a touch screen display, and the terminal 700 further includes a memory and one or more programs.
  • the one or more programs are stored in the memory and configured to be executed by one or more processors.
  • the one or more programs contain instructions used for implementing the following operations:
  • the memory of the terminal further contains an instruction for implementing the following operation: before the determining at least two sound events corresponding to a current to-be-played behavior event,
  • the determining at least two sound events corresponding to a current to-be-played behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation:
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation: the state of the behavior event includes a current playback frequency of the behavior event, where
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation: before the determining playback time information of each sound event according to the current playback frequency of the behavior event:
  • the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
  • the terminal determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
  • An embodiment of the present invention further provides a computer readable storage medium, where the computer readable medium may be a computer readable storage medium contained in the memory in the foregoing embodiment, or may be a separate computer readable storage medium that is not installed in a terminal.
  • the computer readable storage medium has one or more programs stored therein, and the one or more programs are executed by one or more processors to implement the method for playing a behavior event, where the method includes:
  • the memory of the terminal further contains an instruction for implementing the following operation: before the determining at least two sound events corresponding to a current to-be-played behavior event:
  • the determining at least two sound events corresponding to a current to-be-played behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation:
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation: the state of the behavior event includes a current playback frequency of the behavior event, where
  • the determining playback time information of each sound event according to the state of the behavior event includes:
  • the memory of the terminal further contains an instruction for implementing the following operation: before the determining playback time information of each sound event according to the current playback frequency of the behavior event:
  • the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
  • the computer readable storage medium determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
  • An embodiment of the present invention provides a graphical user interface, where the graphical user interface is used on a display terminal for playing a behavior event, and the display terminal includes a touch screen display, a memory, and one or more processors for executing one or more programs.
  • the display terminal executes operations:
  • the graphical user interface determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
  • when the apparatus for playing a behavior event provided in the foregoing embodiment plays a behavior event, the above division of functional modules is described only as an example.
  • the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus for playing a behavior event is divided into different functional modules to complete all or some of the above described functions.
  • the apparatus for playing a behavior event provided by the foregoing embodiment is based on the same concept as the method for playing a behavior event. For the specific implementation process, refer to the method embodiment, and the details are not described herein again.
  • the program may be stored in a computer readable storage medium.
  • the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Abstract

The present disclosure relates to the field of computer technologies, and discloses a method and an apparatus for playing a behavior event. The method includes: determining at least two sound events corresponding to a current to-be-played behavior event; acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration; and determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event. In the present disclosure, after at least two sound events corresponding to a current to-be-played behavior event are determined, playback time information of each sound event is determined according to a state of the behavior event, and each sound event is played according to the playback time information of each sound event during playback of the current behavior event, thereby enriching sound effects during playback of the behavior event.

Description

METHOD AND APPARATUS FOR PLAYING BEHAVIOR EVENT
FIELD OF THE TECHNOLOGY
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for playing a behavior event.
BACKGROUND OF THE DISCLOSURE
With the development of computer technologies, the number of online game applications increases and the types of online game applications also become diversified. Poor user experience of an online game application leads to a decline in the number of users. The number of users is an important index for measuring the performance of an online game application, and the quality of behavior event playback of the online game application influences the number of users; therefore, how to play a behavior event so as to increase the number of users of the online game application becomes a concern to those skilled in the art.
In a related technology, during playback of a behavior event, a sound event is set for each behavior event of each object in an online game application, where each sound event includes a sound effect; and then, during playback of a behavior event of a given object in the online game application, the corresponding sound event is played.
During the implementation of the present disclosure, the inventor finds that the related technology has at least the following problem:
During playback of a behavior event in the related technology, each behavior event of each object corresponds to one sound event, which leads to a monotonous sound effect during playback of the behavior event.
SUMMARY
To solve the problem in the existing technology, embodiments of the present invention provide a method and an apparatus for playing a behavior event. The technical solutions are as follows:
According to one aspect, a method for playing a behavior event is provided, a behavior event played in the method corresponding to at least two sound events, the method including:
determining at least two sound events corresponding to a current to-be-played behavior event;
acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration; and
determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
According to another aspect, an apparatus for playing a behavior event is provided, a behavior event played by the apparatus corresponding to at least two sound events, the apparatus including:
a first determining module, configured to determine at least two sound events corresponding to a current to-be-played behavior event;
an acquiring module, configured to acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration;
a second determining module, configured to determine playback time information of each sound event according to the state of the behavior event; and
a playback module, configured to play each sound event according to the playback time information of each sound event during playback of the current behavior event.
The technical solutions provided by the embodiments of the present invention have the following beneficial effects:
At least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event is determined according to the state of the behavior event,  and each sound event is played according to the playback time information of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions of the embodiments of the present invention or the existing technology more clearly, the following briefly introduces the accompanying drawings required for describing the embodiments or the existing technology. Apparently, the accompanying drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a flowchart of a method for playing a behavior event according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for playing a behavior event according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of sound events and playback duration corresponding to a behavior event according to another embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for playing a behavior event according to another embodiment of the present invention; and
FIG. 7 is a schematic structural diagram of a terminal according to another embodiment of the present invention.
DESCRIPTION OF EMBODIMENTS
To make the objectives, technical solutions, and advantages of the present disclosure clearer, implementation manners of the present disclosure are described in further detail below with reference to the accompanying drawings.
To enrich sounds during playback of a behavior event and improve audio and visual experience of users, an embodiment of the present invention provides a method for playing a behavior event, where a behavior event played by using this method corresponds to at least two sound events. Referring to FIG. 1, a process of the method provided in this embodiment includes:
101: Determine at least two sound events corresponding to a current to-be-played behavior event.
As an optional embodiment, before the determining at least two sound events corresponding to a current to-be-played behavior event, the method further includes:
determining, among preset sound events, at least two sound events corresponding to each behavior event, and storing a correspondence between each behavior event and sound events, where
the determining at least two sound events corresponding to a current to-be-played behavior event includes:
determining, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
102: Acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration.
103: Determine playback time information of each sound event according to the state of the behavior event, and play each sound event according to the playback time information of each sound event during playback of the current behavior event.
As an optional embodiment, the determining playback time information of each sound event according to the state of the behavior event includes:
determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each  sound event during the playback time of the behavior event as the playback time information of each sound event.
As an optional embodiment, the state of the behavior event is a current playback frequency of the behavior event, where
the determining playback time information of each sound event according to the state of the behavior event includes:
determining playback time information of each sound event according to the current playback frequency of the behavior event.
As an optional embodiment, before the determining playback time information of each sound event according to the current playback frequency of the behavior event, the method further includes:
storing in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, where
the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
In the method provided in this embodiment of the present invention, at least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event is determined according to the state of the behavior event, and each sound event is played according to the playback time information of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
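As a non-authoritative sketch, the three steps above (determining the sound events for a to-be-played behavior event, acquiring its state, and resolving each sound event's playback time information) could be organized as follows; the function name, data layout, and all sample values are illustrative assumptions, not part of the filing:

```python
# Hypothetical sketch of steps 101-103; names and data layout are assumptions.

def schedule_sound_events(behavior_event, state, correspondence, time_info):
    """Return (sound event, playback point, playback duration) triples.

    `correspondence` maps a behavior event to its at-least-two sound events
    (step 101); `time_info` maps (behavior event, state, sound event) to a
    (playback point, playback duration) pair (steps 102-103).
    """
    sound_events = correspondence[behavior_event]           # step 101
    return [(s,) + time_info[(behavior_event, state, s)]    # steps 102-103
            for s in sound_events]

correspondence = {"behavior event 1": ["sound event A", "sound event B"]}
time_info = {
    ("behavior event 1", "state a", "sound event A"): (0.25, 1.0),
    ("behavior event 1", "state a", "sound event B"): (0.50, 2.0),
}
print(schedule_sound_events("behavior event 1", "state a", correspondence, time_info))
# → [('sound event A', 0.25, 1.0), ('sound event B', 0.5, 2.0)]
```

In an actual playback loop, each returned triple would then be handed to the audio layer when the behavior event's playback time reaches the playback point.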
With reference to the content of the foregoing embodiment, this embodiment of the present invention provides a method for playing a behavior event, where the behavior event may be a behavior event configured in a web application, for example, a behavior event configured for a virtual character in the web application, and the embodiments of the present invention do not limit the specific behavior event. To enrich sound effects during playback  of a behavior event, a behavior event played by using the method provided in this embodiment corresponds to at least two sound events. Referring to FIG. 2, a process of the method provided in this embodiment includes:
201: Determine at least two sound events corresponding to a current to-be-played behavior event.
To enrich sound effects of the behavior event, in the method provided in this embodiment, at least two sound events may be set in advance during setting of a web application; at least two sound events corresponding to each behavior event are determined among the preset sound events, and a correspondence between each behavior event and sound events is stored. The number of preset sound events may be 10, 20, 50, or the like, and this embodiment does not specifically limit the number of preset sound events.
This embodiment does not specifically limit the manner of determining, among the preset sound events, at least two sound events corresponding to each behavior event. During specific implementation, at least two sound events corresponding to each behavior event may be determined among the preset sound events according to behavior content of each behavior event. For example, the behavior event is Lü Bu riding on his horse and holding a halberd; in this case, according to the content of the behavior event, four sound events may be set for the behavior event: a sound event of horse neighing, a sound event of halberd slashing, a sound event of character yelling, and a sound event of horseshoes landing on the ground. For another example, the behavior event is Guan Yu riding on his horse and holding a broadsword; in this case, according to the content of the behavior event, three sound events may be set for the behavior event: a sound event of horse neighing, a sound event of broadsword slashing, and a sound event of character yelling.
The manner of storing a correspondence between each behavior event and sound events includes, but is not limited to, storing the correspondence between each behavior event and sound events into a web server. A form for storing the correspondence between each behavior event and sound events includes, but is not limited to, storing the correspondence between each behavior event and sound events in a form of a table.
For details of the correspondence between each behavior event and sound events stored in the form of a table, refer to Table 1.
Table 1
(Table 1 is reproduced as an image in the original filing; it maps each behavior event to its corresponding sound events, for example behavior event 1 to sound event A, sound event B, and sound event C.)
Further, after the correspondence between each behavior event and sound events is stored, in determining at least two sound events corresponding to the current to-be-played behavior event, the at least two sound events corresponding to the current to-be-played behavior event may be determined according to the correspondence between each behavior event and sound events. Using Table 1 as an example, if the current to-be-played behavior event is behavior event 1, it can be determined, according to the stored correspondence between each behavior event and sound events, that sound events corresponding to behavior event 1 are: sound event A, sound event B, and sound event C.
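A minimal sketch of this lookup, assuming the correspondence is held in an in-memory mapping (the dict layout and the helper function are illustrative, not from the filing; the names come from Table 1 and the Lü Bu example above):

```python
# Hypothetical in-memory form of the stored correspondence between behavior
# events and sound events (names from Table 1 and the description's example).
CORRESPONDENCE = {
    "behavior event 1": ["sound event A", "sound event B", "sound event C"],
    "Lü Bu riding on his horse and holding a halberd": [
        "horse neighing",
        "halberd slashing",
        "character yelling",
        "horseshoes landing on the ground",
    ],
}

def determine_sound_events(behavior_event):
    """Step 201: look up the sound events for the to-be-played behavior event."""
    return CORRESPONDENCE[behavior_event]

print(determine_sound_events("behavior event 1"))
# → ['sound event A', 'sound event B', 'sound event C']
```

In the filing the correspondence may instead be stored on a web server; the lookup logic is the same.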
202: Acquire a state of the behavior event.
The state of the behavior event includes, but is not limited to, a current playback frequency of the behavior event, and this embodiment does not specifically limit the state of the behavior event. Corresponding to different states of the behavior event, the behavior event is played in different forms, while different playback forms of the behavior event bring different experience effects to users; therefore, in order to improve experience effects for the users, the method provided in this embodiment needs to acquire the state of the behavior event.
Further, during playback of a behavior event in different states, the corresponding sound effects should also be different; therefore, each sound event corresponding to a behavior event has different playback time information in a sound track when the behavior event is in different states. The playback time information includes, but is not limited to, a playback point and playback duration, and this embodiment does not specifically limit the playback time information. Specifically, the playback point is the time point at which the sound event corresponding to the behavior event starts to be played during a playback time of the behavior event. When the playback time of the behavior event reaches the playback point of the sound event corresponding to the behavior event, the sound event corresponding to the behavior event is played. The playback point of the sound event may fall at different positions within the playback duration of the behavior event, for example at 1/2 or 1/3 of the playback duration of the behavior event, and this embodiment does not specifically limit the position of the playback point of the sound event in the playback duration of the behavior event. The playback duration of the sound event is the length of playback of the sound event, and may be 1 minute, 2 minutes, 3 minutes, or the like; this embodiment does not specifically limit the playback duration of the sound event. Using behavior event 1 in Table 1 as an example, playback points and playback duration of the sound events corresponding to behavior event 1 are shown in FIG. 3, and it can be learned from FIG. 3 that the states of behavior event 1 include state a and state b.
When the state of behavior event 1 is state a, the playback point of sound event A in the sound track is at 1/4 of the playback duration of the behavior event, and the playback duration of sound event A is 1 second; the playback point of sound event B in the sound track is at 1/2 of the playback duration of the behavior event, and the playback duration of sound event B is 2 seconds; and the playback point of sound event C in the sound track is at 3/4 of the playback duration of the behavior event, and the playback duration of sound event C is 2.5 seconds. When the state of behavior event 1 is state b, the playback point of sound event A in the sound track is at 1/2 of the playback duration of the behavior event, and the playback duration of sound event A is 3 seconds; the playback point of sound event B in the sound track is at 1/3 of the playback duration of the behavior event, and the playback duration of sound event B is 1 second; and the playback point of sound event C in the sound track is at 3/5 of the playback duration of the behavior event, and the playback duration of sound event C is 0.5 seconds.
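The FIG. 3 values just described can be tabulated as a nested mapping. The data structure itself is only an illustrative assumption; exact fractions are used for the playback points so the arithmetic stays exact:

```python
from fractions import Fraction as F

# Playback time information for behavior event 1 (values as described for FIG. 3):
# state -> sound event -> (playback point as a fraction of the behavior event's
# playback duration, playback duration in seconds).
BEHAVIOR_EVENT_1 = {
    "state a": {
        "sound event A": (F(1, 4), 1.0),
        "sound event B": (F(1, 2), 2.0),
        "sound event C": (F(3, 4), 2.5),
    },
    "state b": {
        "sound event A": (F(1, 2), 3.0),
        "sound event B": (F(1, 3), 1.0),
        "sound event C": (F(3, 5), 0.5),
    },
}

def playback_time_info(state, sound_event):
    """Resolve a sound event's (playback point, playback duration) for a state."""
    return BEHAVIOR_EVENT_1[state][sound_event]
```

This makes concrete how the same sound event (e.g. sound event A) gets different playback time information depending on the state of the behavior event.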
203: Determine playback time information of each sound event according to the state of the behavior event.
Each sound event corresponding to the behavior event has a corresponding playback point in the behavior event, and when the playback time of the behavior event reaches the playback point of each sound event corresponding to the behavior event, each sound event corresponding to the behavior event is played. Further, the state of the behavior event determines the playback time information of each sound event corresponding to the behavior event, and when the behavior event is in different states, the playback time information of each sound event corresponding to the behavior event is different. Therefore, when determining the playback time information of each sound event corresponding to the behavior event, the playback time information may be determined according to the state of the behavior event. Specifically, the manner of determining playback time information of each sound event according to the state of the behavior event includes, but is not limited to: determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
For ease of comprehension, the foregoing process is illustrated in detail below by using a specific example.
It is set that the behavior event is behavior event 1, states of behavior event 1 include state a and state b, and sound events corresponding to behavior event 1 are sound event A and sound event B; moreover, the playback point of sound event A is at 1/3 of the playback duration of the behavior event, and the playback duration of sound event A is 10 seconds; the playback point of sound event B is at 2/3 of the playback duration of the behavior event, and the playback duration of sound event B is 20 seconds. When the state of the behavior event is state a, the playback duration of the behavior event is 3 minutes, and in this case, it is determined according to state a of the behavior event that the playback point of sound event A is at 1 minute in the playback time of the behavior event, and when the playback time of the behavior event reaches 1 minute, sound event A is played for 10 seconds; and it is determined according to state a of the behavior event that the playback point of sound event B is at 2 minutes in the playback time of the behavior event, and when the playback time of the behavior event reaches 2 minutes, sound event B is played for 20 seconds. When the state of the behavior event is state b, the playback duration of the behavior event is 1 minute, and in this case, it is determined according to state b of the behavior event that the playback point of sound event A is at 20 seconds in the playback time of the behavior event, and when the playback time of the behavior event reaches 20 seconds, sound event A is played for 10 seconds; and it is determined according to state b of the behavior event that the playback point of sound event B is at 40 seconds in the playback time of the behavior event, and when the playback time of the behavior event reaches 40 seconds, sound event B is played for 20 seconds.
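The conversion in this worked example is a simple proportional calculation; a hedged sketch using exact fractions (the helper name is an assumption):

```python
from fractions import Fraction

def absolute_start_seconds(point_fraction, behavior_duration_seconds):
    """Convert a playback point, expressed as a fraction of the behavior
    event's playback duration, into the second at which the sound event
    starts to be played."""
    return point_fraction * behavior_duration_seconds

# State a: the behavior event plays for 3 minutes (180 s).
assert absolute_start_seconds(Fraction(1, 3), 180) == 60    # sound event A: 1 min
assert absolute_start_seconds(Fraction(2, 3), 180) == 120   # sound event B: 2 min

# State b: the behavior event plays for 1 minute (60 s).
assert absolute_start_seconds(Fraction(1, 3), 60) == 20     # sound event A: 20 s
assert absolute_start_seconds(Fraction(2, 3), 60) == 40     # sound event B: 40 s
```

The playback duration of each sound event (10 s and 20 s in the example) is unchanged across states here; only the absolute start time shifts with the behavior event's duration.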
Optionally, because the state of the behavior event includes, but is not limited to, the current playback frequency of the behavior event, when the state of the behavior event is the current playback frequency, the manner of determining playback time information of each sound event according to the state of the behavior event includes, but is not limited to:
determining playback time information of each sound event according to the current playback frequency of the behavior event.
Further, before playback time information of each sound event can be determined according to the current playback frequency of the behavior event, the method provided in this embodiment needs to set and store in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events.
This embodiment does not specifically limit the manner of setting in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events. During specific implementation, the correspondence may be determined according to a proportional relation between playback frequencies of each behavior event and playback time information of sound events. Because the playback time information not only includes the playback point but also includes the playback duration, when the correspondence between playback frequencies of each behavior event and playback time information of sound events is set in advance, descriptions with respect to the playback point and playback duration in the playback time information are separately made below.
Using an example in which the behavior event is behavior event 2, and the sound event corresponding to behavior event 2 is sound event C, when a correspondence between playback frequencies of the behavior event and playback time information of the sound event is set in advance, if it is known that when the playback frequency of the behavior event is a, the playback point of sound event C is at 1/m of the playback duration of the behavior event, it can be determined that when the playback frequency of behavior event 2 is b, the playback point of sound event C is at b/(a·m) of the playback duration of the behavior event.
Still using the example in which the behavior event is behavior event 2, and the sound event corresponding to behavior event 2 is sound event C, when a correspondence between playback frequencies of the behavior event and playback time information of the sound event is set in advance, if it is known that when the playback frequency of the behavior event is a, the playback duration of sound event C is t1, it can be determined that when the playback frequency of behavior event 2 is b, the playback duration of sound event C is t1·b/a.
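The proportional rule described above can be sketched as follows, with hypothetical names and example numbers that are not from the source:

```python
# Scale a sound event's playback time information from a reference
# playback frequency to the current one: if at frequency a the playback
# point is 1/m of the behavior event's duration and the playback duration
# is t1, then at frequency b the point becomes b/(a*m) and the duration
# becomes t1*b/a.

def scaled_time_info(ref_freq, cur_freq, point_fraction, ref_duration_s):
    """Return (scaled playback-point fraction, scaled playback duration)."""
    scale = cur_freq / ref_freq
    return point_fraction * scale, ref_duration_s * scale

# Reference frequency a=2: point at 1/4 of the duration, plays for 8 s.
# At frequency b=3 the point scales to 3/8 and the duration to 12 s.
print(scaled_time_info(2, 3, 0.25, 8))  # (0.375, 12.0)
```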
The manner of storing a correspondence between playback frequencies of each behavior event and playback time information of sound events includes, but is not limited to, storing the correspondence between playback frequencies of each behavior event and playback time information of sound events into a server. A form for storing the correspondence between playback frequencies of each behavior event and playback time information of sound events includes, but is not limited to, storing the correspondence between playback frequencies of each behavior event and playback time information of sound events in a form of a table.
Using behavior event A as an example, for details of the correspondence between playback frequencies of each behavior event and playback time information of sound events stored in the form of a table, reference may be made to Table 2.
Table 2
[Table 2 — rendered as an image in the original: for behavior event A at playback frequency a, the playback point of sound event A is q and the playback duration of sound event A is t1; the playback point of sound event B is w and the playback duration of sound event B is t2; the playback point of sound event C is e and the playback duration of sound event C is t3.]
Further, after the correspondence between playback frequencies of each behavior event and playback time information of sound events is stored, in the method provided in this embodiment, playback time information of each sound event may be determined according to the correspondence between playback frequencies of each behavior event and playback time information of sound events. Specifically, the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event. Using Table 2 as an example, if the current playback frequency of behavior event A is a, it is found, in the stored correspondence between playback frequencies of each behavior event and playback time information of sound events, that the playback point of sound event A corresponding to the current playback frequency of the behavior event is q, and the playback duration of sound event A is t1; the playback point of sound event B is w, and the playback duration of sound event B is t2; the playback point of sound event C is e, and the playback duration of sound event C is t3.
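One possible in-memory representation of this lookup, sketched with illustrative names (the source does not prescribe a data structure), is a nested mapping from behavior event to playback frequency to per-sound-event playback time information:

```python
# Hypothetical rendering of the stored correspondence of Table 2:
# behavior event -> playback frequency -> sound event ->
# (playback point, playback duration). Symbolic values mirror the table.

CORRESPONDENCE = {
    "behavior_event_A": {
        "a": {
            "sound_A": ("q", "t1"),
            "sound_B": ("w", "t2"),
            "sound_C": ("e", "t3"),
        },
    },
}

def lookup_time_info(behavior_event, current_frequency):
    """Search the stored correspondence for the playback time information
    of each sound event at the current playback frequency."""
    return CORRESPONDENCE[behavior_event][current_frequency]

print(lookup_time_info("behavior_event_A", "a")["sound_A"])  # ('q', 't1')
```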
204: Play each sound event according to the playback time information of each sound event during playback of the current behavior event.
In step 203, the playback time information of each sound event has been determined according to the state of the behavior event, and the playback time information of each sound event includes the playback point and playback duration of that sound event. Therefore, in this step, on the basis of step 203, each sound event is played according to the playback time information of each sound event during playback of the current behavior event.
A process of playing each sound event according to the playback time information of each sound event during playback of the current behavior event includes, but is not limited to, playing each sound event according to the playback duration of each sound event when the playback time of the current behavior event reaches the playback point in the playback time information of each sound event.
For example, the current behavior event is behavior event 1, and the sound events corresponding to behavior event 1 are sound event A and sound event B, where the playback duration of the current behavior event is 5 minutes, the playback point of sound event A is at 1/5 of the playback duration of the behavior event and the playback duration of sound event A is 20 seconds; the playback point of sound event B is at 3/5 of the playback duration of the behavior event and the playback duration of sound event B is 30 seconds; in this case, when the playback time of the behavior event reaches 1 minute, sound event A is played for 20 seconds, and when the playback time of the behavior event reaches 3 minutes, sound event B is played for 30 seconds.
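A minimal event-loop sketch of this step follows. The `play` callback stands in for the actual audio engine, which the source does not specify; all names and the one-second tick are assumptions.

```python
# Fire each sound event once the behavior event's playback time reaches
# that sound event's playback point (step 204).

def run_behavior_event(behavior_duration_s, schedule, play, tick_s=1):
    """schedule: {name: (playback_point_s, playback_duration_s)};
    invokes play(name, playback_duration_s) once per sound event."""
    fired = set()
    for now in range(0, behavior_duration_s + 1, tick_s):
        for name, (point_s, duration_s) in schedule.items():
            if name not in fired and now >= point_s:
                play(name, duration_s)
                fired.add(name)

# Behavior event 1 from the example: 5-minute playback, sound event A at
# 60 s for 20 s, sound event B at 180 s for 30 s.
log = []
run_behavior_event(300, {"A": (60, 20), "B": (180, 30)},
                   lambda name, dur: log.append((name, dur)))
print(log)  # [('A', 20), ('B', 30)]
```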
In the method provided in this embodiment of the present invention, at least two sound events corresponding to a current to-be-played behavior event are determined, and a state of the behavior event is acquired; further, playback time information of each sound event corresponding to the behavior event is determined according to the state of the behavior event, and each sound event is played when a playback time of the current behavior event reaches a playback point of each sound event. Because the current to-be-played behavior event corresponds to at least two sound events, sounds during playback of the behavior event are enriched.
Referring to FIG. 4, an embodiment of the present invention provides an apparatus for playing a behavior event, where a behavior event played by the apparatus corresponds to at least two sound events, and the apparatus includes:
a first determining module 401, configured to determine at least two sound events corresponding to a current to-be-played behavior event;
an acquiring module 402, configured to acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration;
a second determining module 403, configured to determine playback time information of each sound event according to the state of the behavior event; and
a playback module 404, configured to play each sound event according to the playback time information of each sound event during playback of the current behavior event.
Referring to FIG. 5, the apparatus further includes:
a third determining module 405, configured to determine, among preset sound events, at least two sound events corresponding to each behavior event; and
a first storage module 406, configured to store a correspondence between each behavior event and sound events, where
the first determining module 401 is configured to determine, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
As an optional embodiment, the second determining module 403 is configured to determine, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determine the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
As an optional embodiment, the state of the behavior event is a current playback frequency of the behavior event, where
the second determining module 403 is configured to determine playback time information of each sound event according to the current playback frequency of the behavior event.
Referring to FIG. 6, the apparatus further includes:
a second storage module 407, configured to store in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, where
the second determining module 403 is configured to search the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
In conclusion, the apparatus provided in this embodiment of the present invention determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event when a playback time of the current behavior event reaches a playback point of each sound event. Because the current to-be-played behavior event corresponds to at least two sound events, sounds during playback of the behavior event are enriched.
Referring to FIG. 7, FIG. 7 shows a schematic structural diagram of a terminal involved in the embodiments of the present invention, and the terminal may be used to implement the method for playing a behavior event provided in the foregoing embodiment.
Specifically, the terminal 700 may include components such as a radio frequency (RF) circuit 110, a memory 120 including one or more computer readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (WiFi) module 170, a processor 180 including one or more processing cores, and a power supply 190. A person skilled in the art may understand that the structure of the terminal shown in FIG. 7 does not constitute a limitation to the terminal, and the terminal may include more components or fewer components than those shown in the figure, or some components may be combined, or a different component deployment may be used.
The RF circuit 110 may be configured to receive and send signals during an information receiving and sending process or a call process. Particularly, the RF circuit 110 receives downlink information from a base station, then delivers the downlink information to one or more processors 180 for processing, and sends related uplink data to the base station. Generally, the RF circuit 110 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), and a duplexer. In addition, the RF circuit 110 may also communicate with a network and another device by wireless communication. The wireless communication may use any communications standard or protocol, which includes, but is not limited to, Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 120 may be configured to store a software program and module. The processor 180 runs the software program and module stored in the memory 120, to implement various functional applications and data processing. The memory 120 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playback function and an image display function), and the like. The data storage area may store data (such as audio data and an address book) created according to use of the terminal 700, and the like. In addition, the memory 120 may include a high speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory, or another non-volatile solid-state storage device. Correspondingly, the memory 120 may further include a memory controller, so as to provide access of the processor 180 and the input unit 130 to the memory 120.
The input unit 130 may be configured to receive input digit or character information, and generate a keyboard, mouse, joystick, optical, or track ball signal input related to the user setting and function control. Specifically, the input unit 130 may include a touch-sensitive surface 131 and another input device 132. The touch-sensitive surface 131, which may also be referred to as a touch screen or a touch panel, may collect a touch operation of a user on or near the touch-sensitive surface (such as an operation of a user on or near the touch-sensitive surface 131 by using any suitable object or accessory, such as a finger or a stylus), and drive a corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface 131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects a touch position of the user, detects a signal generated by the touch operation, and transfers the signal to the touch controller. The touch controller receives the touch signal from the touch detection apparatus, converts the touch signal into touch point coordinates, and sends the touch point coordinates to the processor 180. Moreover, the touch controller can receive and execute a command sent from the processor 180. In addition, the touch-sensitive surface 131 may be a resistive, capacitive, infrared, or surface sound wave type touch-sensitive surface. In addition to the touch-sensitive surface 131, the input unit 130 may further include another input device 132. Specifically, the other input device 132 may include, but is not limited to, one or more of a physical keyboard, a functional key (such as a volume control key or a switch key), a track ball, a mouse, and a joystick.
The display unit 140 may be configured to display information input by the user or information provided for the user, and various graphical user interfaces of the terminal 700. The graphical user interfaces may be formed by a graph, a text, an icon, a video, or any combination thereof. The display unit 140 may include a display panel 141. Optionally, the display panel 141 may be configured by using a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141. After detecting a touch operation on or near the touch-sensitive surface 131, the touch-sensitive surface 131 transfers the touch operation to the processor 180, so as to determine the type of the touch event. Then, the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although, in FIG. 7, the touch-sensitive surface 131 and the display panel 141 are used as two separate parts to implement input and output functions, in some embodiments, the touch-sensitive surface 131 and the display panel 141 may be integrated to implement the input and output functions.
The terminal 700 may further include at least one sensor 150, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor can adjust luminance of the display panel 141 according to brightness of the ambient light. The proximity sensor may switch off the display panel 141 and/or backlight when the terminal 700 is moved to the ear. As one type of motion sensor, a gravity acceleration sensor can detect the magnitude of accelerations in various directions (generally on three axes), may detect the magnitude and direction of gravity when static, and may be applied to an application that recognizes the attitude of the mobile phone (for example, switching between landscape orientation and portrait orientation, a related game, and magnetometer attitude calibration), a function related to vibration recognition (such as a pedometer and a knock), and the like. Other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which may be configured in the terminal 700, are not further described herein.
The audio circuit 160, a loudspeaker 161, and a microphone 162 may provide audio interfaces between the user and the terminal 700. The audio circuit 160 may convert received audio data into an electric signal and transmit the electric signal to the loudspeaker 161. The loudspeaker 161 converts the electric signal into a sound signal for output. On the other hand, the microphone 162 converts a collected sound signal into an electric signal. The audio circuit 160 receives the electric signal and converts the electric signal into audio data, and outputs the audio data to the processor 180 for processing. Then, the processor 180 sends the audio data to, for example, another terminal by using the RF circuit 110, or outputs the audio data to the memory 120 for further processing. The audio circuit 160 may further include an earplug jack, so as to provide communication between a peripheral earphone and the terminal 700.
WiFi is a short distance wireless transmission technology. The terminal 700 may help, by using the WiFi module 170, the user to receive and send e-mails, browse a webpage, access streaming media, and so on, which provides wireless broadband Internet access for the user. Although FIG. 7 shows the WiFi module 170, it may be understood that the WiFi module 170 is not a necessary component of the terminal 700, and when required, the WiFi module 170 may be omitted as long as the scope of the essence of the present disclosure is not changed.
The processor 180 is the control center of the terminal 700, and is connected to various parts of the mobile phone by using various interfaces and lines. By running or executing the software program and/or module stored in the memory 120, and invoking data stored in the memory 120, the processor 180 performs various functions and data processing of the terminal 700, thereby performing overall monitoring on the mobile phone. Optionally, the processor 180 may include one or more processing cores. Optionally, the processor 180 may integrate an application processor and a modem. The application processor mainly processes an operating system, a user interface, an application program, and the like. The modem mainly processes wireless communication. It may be understood that the foregoing modem may also not be integrated into the processor 180.
The terminal 700 further includes the power supply 190 (such as a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the processor 180 by using a power management system, thereby implementing functions such as charging, discharging and power consumption management by using the power management system. The power supply 190 may further include one or more of a direct current or alternating current power supply, a re-charging system, a power failure detection circuit, a power supply converter or inverter, a power supply state indicator, and any other components.
Although not shown in the figure, the terminal 700 may further include a camera, a Bluetooth module, and the like, which are not further described herein. Specifically, in this embodiment, the display unit of the terminal 700 is a touch screen display, and the terminal 700 further includes a memory and one or more programs. The one or more programs are stored in the memory and configured to be executed by one or more processors. The one or more programs contain instructions used for implementing the following operations:
determining at least two sound events corresponding to a current to-be-played behavior event;
acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration; and
determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
Assuming the foregoing is a first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further contains an instruction for implementing the following operation: before the determining at least two sound events corresponding to a current to-be-played behavior event,
determining, among preset sound events, at least two sound events corresponding to each behavior event, and storing a correspondence between each behavior event and sound events, where
the determining at least two sound events corresponding to a current to-be-played behavior event includes:
determining, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the memory of the terminal further contains an instruction for implementing the following operation:
the determining playback time information of each sound event according to the state of the behavior event includes:
determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the memory of the terminal further contains an instruction for implementing the following operation: the state of the behavior event includes a current playback frequency of the behavior event, where
the determining playback time information of each sound event according to the state of the behavior event includes:
determining playback time information of each sound event according to the current playback frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the memory of the terminal further contains an instruction for implementing the following operation: before the determining playback time information of each sound event according to the current playback frequency of the behavior event:
storing in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, where
the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
The terminal provided in this embodiment of the present invention determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
An embodiment of the present invention further provides a computer readable storage medium, where the computer readable medium may be a computer readable storage medium contained in the memory in the foregoing embodiment, or may be a separate computer readable storage medium that is not installed in a terminal. The computer readable storage medium has one or more programs stored therein, and the one or more programs are executed by one or more processors to implement the method for playing a behavior event, where the method includes:
determining at least two sound events corresponding to a current to-be-played behavior event;
acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration; and
determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
Assuming the foregoing is a first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further contains an instruction for implementing the following operation: before the determining at least two sound events corresponding to a current to-be-played behavior event:
determining, among preset sound events, at least two sound events corresponding to each behavior event, and storing a correspondence between each behavior event and sound events, where
the determining at least two sound events corresponding to a current to-be-played behavior event includes:
determining, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
In a third possible implementation manner provided on the basis of the first possible implementation manner or the second possible implementation manner, the memory of the terminal further contains an instruction for implementing the following operation:
the determining playback time information of each sound event according to the state of the behavior event includes:
determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
In a fourth possible implementation manner provided on the basis of the first to third possible implementation manners, the memory of the terminal further contains an instruction for implementing the following operation: the state of the behavior event includes a current playback frequency of the behavior event, where
the determining playback time information of each sound event according to the state of the behavior event includes:
determining playback time information of each sound event according to the current playback frequency of the behavior event.
In a fifth possible implementation manner provided on the basis of the first to fourth possible implementation manners, the memory of the terminal further contains an instruction for implementing the following operation: before the determining playback time information of each sound event according to the current playback frequency of the behavior event:
storing in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, where
the determining playback time information of each sound event according to the current playback frequency of the behavior event includes:
searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
The computer readable storage medium provided in this embodiment of the present invention determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
An embodiment of the present invention provides a graphical user interface, where the graphical user interface is used on a display terminal for playing a behavior event, and the display terminal includes a touch screen display, a memory, and one or more processors for executing one or more programs. The display terminal executes operations:
determining at least two sound events corresponding to a current to-be-played behavior event;
acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information including at least a playback point and playback duration; and
determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
The graphical user interface provided in this embodiment of the present invention determines at least two sound events corresponding to a current to-be-played behavior event, and acquires a state of the behavior event; and further, determines, according to the state of the behavior event, playback time information of each sound event corresponding to the behavior event, and plays each sound event according to a playback time of each sound event during playback of the current behavior event. Because the current to-be-played behavior event corresponds to at least two sound events, sound effects during playback of the behavior event are enriched.
It should be noted that the above division of functional modules is described only for exemplary purposes when the apparatus for playing a behavior event provided in the foregoing embodiment plays a behavior event. In actual applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the apparatus for playing a behavior event is divided into different functional modules to complete all or some of the functions described above. In addition, the apparatus for playing a behavior event provided in the foregoing embodiment is based on the same concept as the method for playing a behavior event. For the specific implementation process, refer to the method embodiment; the details are not described herein again.
The sequence numbers of the foregoing embodiments of the present invention are merely for the convenience of description, and do not imply the preference among the embodiments.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by using hardware, or may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely preferred embodiments of the present invention, but are not intended to limit the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.

Claims (15)

  1. A method for playing a behavior event, a behavior event played in the method corresponding to at least two sound events, the method comprising:
    at an intelligent terminal having one or more processors and memory storing program modules to be executed by the one or more processors:
    determining at least two sound events corresponding to a current to-be-played behavior event;
    acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information comprising at least a playback point and playback duration; and
    determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
  2. The method according to claim 1, before the determining at least two sound events corresponding to a current to-be-played behavior event, further comprising:
    determining, among preset sound events, at least two sound events corresponding to each behavior event, and storing a correspondence between each behavior event and sound events, wherein
    the determining at least two sound events corresponding to a current to-be-played behavior event comprises:
    determining, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
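Claim 2 stores, ahead of playback, a correspondence between each behavior event and its sound events, and consults that stored correspondence at playback time. A minimal sketch of such a store, under the assumption of an in-memory dictionary (the class and method names are illustrative, not from the disclosure):

```python
class SoundCorrespondence:
    """Hypothetical store for the behavior-event -> sound-events
    correspondence of claim 2. All names here are assumptions."""

    def __init__(self):
        self._table = {}

    def register(self, behavior_event, sound_events):
        # The claims require at least two sound events per behavior event.
        if len(sound_events) < 2:
            raise ValueError("each behavior event needs at least two sound events")
        self._table[behavior_event] = list(sound_events)

    def lookup(self, behavior_event):
        # Determine the sound events for the current to-be-played event
        # from the stored correspondence.
        return self._table[behavior_event]
```

An actual implementation might load this table from game asset configuration rather than registering entries in code; the claim only requires that the correspondence be stored and searched.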
  3. The method according to claim 1, wherein the determining playback time information of each sound event according to the state of the behavior event comprises:
    determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
  4. The method according to any one of claims 1 to 3, wherein the state of the behavior event is a current playback frequency of the behavior event; and
    the determining playback time information of each sound event according to the state of the behavior event comprises:
    determining playback time information of each sound event according to the current playback frequency of the behavior event.
  5. The method according to claim 4, before the determining playback time information of each sound event according to the current playback frequency of the behavior event, further comprising:
    storing in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, wherein
    the determining playback time information of each sound event according to the current playback frequency of the behavior event comprises:
    searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
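Claims 4 and 5 specialize the state to the behavior event's current playback frequency and search a prestored frequency-indexed correspondence. A hedged sketch, assuming a nested dictionary keyed by frequency (the events, frequencies, and timing values are invented for illustration):

```python
# Hypothetical prestored correspondence (claim 5): for each behavior event,
# playback frequency -> per-sound (playback_point_ms, duration_ms).
FREQ_TIMING = {
    "footstep": {
        1: {"step": (0, 400), "cloth": (100, 300)},  # played once: full sounds
        4: {"step": (0, 150), "cloth": (50, 100)},   # repeated quickly: shortened
    },
}

def timing_for_frequency(event, frequency):
    # Search the stored correspondence for the playback time information
    # matching this behavior event's current playback frequency.
    return FREQ_TIMING[event][frequency]
```

The effect this enables is that a behavior event replayed at a higher frequency can be given tighter, shorter sound windows instead of overlapping copies of the full-length sounds.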
  6. An apparatus for playing a behavior event, a behavior event played by the apparatus corresponding to at least two sound events, the apparatus comprising:
    one or more processors;
    memory; and
    a plurality of program modules that, when executed by the one or more processors, cause the apparatus to perform predefined functions, the plurality of program modules further comprising:
    a first determining module, configured to determine at least two sound events corresponding to a current to-be-played behavior event;
    an acquiring module, configured to acquire a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information comprising at least a playback point and playback duration;
    a second determining module, configured to determine playback time information of each sound event according to the state of the behavior event; and
    a playback module, configured to play each sound event according to the playback time information of each sound event during playback of the current behavior event.
  7. The apparatus according to claim 6, wherein the plurality of program modules further comprises:
    a third determining module, configured to determine, among preset sound events, at least two sound events corresponding to each behavior event; and
    a first storage module, configured to store a correspondence between each behavior event and sound events, wherein
    the first determining module is configured to determine, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
  8. The apparatus according to claim 6, wherein the second determining module is configured to determine, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determine the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
  9. The apparatus according to any one of claims 6 to 8, wherein the state of the behavior event is a current playback frequency of the behavior event; and
    the second determining module is configured to determine playback time information of each sound event according to the current playback frequency of the behavior event.
  10. The apparatus according to claim 9, wherein the plurality of program modules further comprises:
    a second storage module, configured to store in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, wherein
    the second determining module is configured to search the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
  11. A non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform operations comprising:
    determining at least two sound events corresponding to a current to-be-played behavior event;
    acquiring a state of the behavior event, each sound event that corresponds to a behavior event having different playback time information in a sound track when the behavior event is in different states, and the playback time information comprising at least a playback point and playback duration; and
    determining playback time information of each sound event according to the state of the behavior event, and playing each sound event according to the playback time information of each sound event during playback of the current behavior event.
  12. The non-transitory computer-readable medium according to claim 11, before the determining at least two sound events corresponding to a current to-be-played behavior event, further comprising:
    determining, among preset sound events, at least two sound events corresponding to each behavior event, and storing a correspondence between each behavior event and sound events, wherein
    the determining at least two sound events corresponding to a current to-be-played behavior event comprises:
    determining, according to the stored correspondence between each behavior event and sound events, at least two sound events corresponding to the current to-be-played behavior event.
  13. The non-transitory computer-readable medium according to claim 11, wherein the determining playback time information of each sound event according to the state of the behavior event comprises:
    determining, according to the state of the behavior event, a playback point and playback duration that correspond to each sound event during a playback time of the behavior event, and determining the playback point and playback duration that correspond to each sound event during the playback time of the behavior event as the playback time information of each sound event.
  14. The non-transitory computer-readable medium according to any one of claims 11 to 13, wherein the state of the behavior event is a current playback frequency of the behavior event; and
    the determining playback time information of each sound event according to the state of the behavior event comprises:
    determining playback time information of each sound event according to the current playback frequency of the behavior event.
  15. The non-transitory computer-readable medium according to claim 14, before the determining playback time information of each sound event according to the current playback frequency of the behavior event, further comprising:
    storing in advance a correspondence between playback frequencies of each behavior event and playback time information of sound events, wherein
    the determining playback time information of each sound event according to the current playback frequency of the behavior event comprises:
    searching the stored correspondence between playback frequencies of each behavior event and playback time information of sound events for playback time information of each sound event corresponding to the current playback frequency of the behavior event.
PCT/CN2015/080100 2014-05-28 2015-05-28 Method and apparatus for playing behavior event WO2015184959A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
SG11201605960WA SG11201605960WA (en) 2014-05-28 2015-05-28 Method and apparatus for playing behavior event
MYPI2016703177A MY196865A (en) 2014-05-28 2015-05-28 Method and apparatus for playing behavior event

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410229383.XA CN105159655B (en) 2014-05-28 2014-05-28 Behavior event playing method and device
CN201410229383.X 2014-05-28

Publications (2)

Publication Number Publication Date
WO2015184959A2 true WO2015184959A2 (en) 2015-12-10
WO2015184959A3 WO2015184959A3 (en) 2016-01-28

Family

ID=54767515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/080100 WO2015184959A2 (en) 2014-05-28 2015-05-28 Method and apparatus for playing behavior event

Country Status (4)

Country Link
CN (1) CN105159655B (en)
MY (1) MY196865A (en)
SG (1) SG11201605960WA (en)
WO (1) WO2015184959A2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109173259B (en) * 2018-07-17 2022-01-21 派视觉虚拟现实(深圳)软件技术有限公司 Sound effect optimization method, device and equipment in game
CN109246580B (en) * 2018-09-25 2022-02-11 Oppo广东移动通信有限公司 3D sound effect processing method and related product
CN111135572A (en) * 2019-12-24 2020-05-12 北京像素软件科技股份有限公司 Game sound effect management method and device, storage medium and electronic equipment

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
JP3917630B2 (en) * 2005-07-06 2007-05-23 株式会社コナミデジタルエンタテインメント GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
AU2008201210B2 (en) * 2007-04-02 2010-05-27 Aristocrat Technologies Australia Pty Ltd Gaming machine with sound effects
AU2012201105A1 (en) * 2007-12-21 2012-03-15 Aristocrat Technologies Australia Pty Limited A gaming system, a sound controller, and a method of gaming
CN100511240C (en) * 2007-12-28 2009-07-08 腾讯科技(深圳)有限公司 Audio document calling method and system
CN102542129A (en) * 2010-12-08 2012-07-04 杭州格诚网络科技有限公司 Three-dimensional (3D) scene display system
US20140091897A1 (en) * 2012-04-10 2014-04-03 Net Power And Light, Inc. Method and system for measuring emotional engagement in a computer-facilitated event

Also Published As

Publication number Publication date
CN105159655A (en) 2015-12-16
CN105159655B (en) 2020-04-24
SG11201605960WA (en) 2016-08-30
MY196865A (en) 2023-05-05
WO2015184959A3 (en) 2016-01-28

Similar Documents

Publication Publication Date Title
US10165309B2 (en) Method and apparatus for live broadcast of streaming media
US10525353B2 (en) Method, apparatus and terminal for displaying prompt information
US10635449B2 (en) Method and apparatus for running game client
WO2015172704A1 (en) To-be-shared interface processing method, and terminal
US10173135B2 (en) Data processing method, terminal and server
WO2015180652A1 (en) Method for acquiring interactive information, terminal, server and system
CN106254910B (en) Method and device for recording image
WO2015176680A1 (en) Information display method and apparatus
US20170064352A1 (en) Method and system for collecting statistics on streaming media data, and related apparatus
US20160292946A1 (en) Method and apparatus for collecting statistics on network information
US9824476B2 (en) Method for superposing location information on collage, terminal and server
US20200336875A1 (en) Scenario-based sound effect control method and electronic device
US20200212701A1 (en) Method for controlling multi-mode charging, mobile terminal, and storage medium
US20160119695A1 (en) Method, apparatus, and system for sending and playing multimedia information
CN106919458B (en) Method and device for Hook target kernel function
WO2015184959A2 (en) Method and apparatus for playing behavior event
EP2869233B1 (en) Method, device and terminal for protecting application program
US9621674B2 (en) Method and apparatus for associating online accounts
US20160274754A1 (en) Method and apparatus for controlling presentation of multimedia data
US10073957B2 (en) Method and terminal device for protecting application program
US10419816B2 (en) Video-based check-in method, terminal, server and system
WO2015124060A1 (en) Login interface displaying method and apparatus
US9913055B2 (en) Playback request processing method and apparatus
WO2015124095A1 (en) Information release method, apparatus, and system

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: IDP00201605684

Country of ref document: ID

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/04/2017)

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15803878

Country of ref document: EP

Kind code of ref document: A2

122 Ep: pct application non-entry in european phase

Ref document number: 15803878

Country of ref document: EP

Kind code of ref document: A2