CN116570919A - Sound effect playing method, sound effect playing device, medium and electronic equipment
- Publication number
- CN116570919A CN116570919A CN202310468650.8A CN202310468650A CN116570919A CN 116570919 A CN116570919 A CN 116570919A CN 202310468650 A CN202310468650 A CN 202310468650A CN 116570919 A CN116570919 A CN 116570919A
- Authority
- CN
- China
- Prior art keywords
- target
- sound effect
- target note
- determining
- note
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/54—Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F2300/00—Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
- A63F2300/60—Methods for processing data by generating or executing the game program
- A63F2300/6063—Methods for processing data by generating or executing the game program for sound processing
- A63F2300/6081—Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization
Abstract
The disclosure provides a sound effect playing method, a sound effect playing device, a medium, and an electronic device. The method includes: determining the current beat point of background music played in a virtual scene; if a sound effect playing event is triggered within the time range of the current beat point, determining a target note corresponding to the sound effect playing event according to a target note sequence; and playing the target note. Because the playing of the target note coincides with the beat points of the background music, the target note and the background music remain musically harmonious, bringing a better game sound experience to the game player.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular to a sound effect playing method, a sound effect playing device, a medium, and an electronic device.
Background
Game sound effects not only provide necessary event feedback in a game, but also bring better immersion to the player. How to improve the playback of game sound effects has therefore become increasingly important. In particular, in a music rhythm game or a running game, each level may have different background music, and the player jumps or dodges along with that music. To improve the player's game experience, how to keep the game sound effects musically in harmony with the background music is a technical problem to be solved.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, the present disclosure provides a sound effect playing method, including:
determining the current beat point of background music played by a virtual scene;
if the sound effect playing event is triggered in the time range corresponding to the current beat point, determining a target note corresponding to the sound effect playing event according to a target note sequence, wherein the target note sequence comprises a plurality of notes determined based on the background music;
and playing the target notes.
In a second aspect, the present disclosure provides a sound effect playing apparatus, including:
the beat determining module is configured to determine the current beat point of the background music played by the virtual scene;
the note determining module is configured to determine a target note corresponding to the sound effect playing event according to a target note sequence if the sound effect playing event is triggered in a time range corresponding to the current beat point, wherein the target note sequence comprises a plurality of notes determined based on the background music;
And the playing module is configured to play the target notes.
In a third aspect, the present disclosure provides a computer readable medium having stored thereon a computer program which when executed by a processing device performs the steps of the method of the first aspect.
In a fourth aspect, the present disclosure provides an electronic device comprising:
a storage device having a computer program stored thereon;
processing means for executing said computer program in said storage means to carry out the steps of the method of the first aspect.
Based on the above technical solution, the current beat point of the background music played in the virtual scene is determined; if the sound effect playing event is triggered within the time range of the current beat point, the target note corresponding to the sound effect playing event is determined according to the target note sequence and played. The playing of the target note thus coincides with the beat points of the background music, ensuring that the target note and the background music remain musically harmonious and bringing a better game sound effect experience to game players.
Additional features and advantages of the present disclosure will be set forth in the detailed description which follows.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale. In the drawings:
fig. 1 is a flow chart illustrating a method of playing sound effects according to some embodiments.
Fig. 2 is a flowchart of an audio playback method according to further embodiments.
Fig. 3 is a schematic diagram illustrating module connection of an audio playback apparatus according to some embodiments.
Fig. 4 is a schematic structural diagram of an electronic device shown according to some embodiments.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "a", "an", and "a plurality of" in this disclosure are intended to be illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
It will be appreciated that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require acquiring and using the user's personal information. The user can thus autonomously decide, according to the prompt information, whether to provide personal information to the software or hardware, such as an electronic device, application program, server, or storage medium, that executes the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of, for example, a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control for the user to choose "agree" or "disagree" to providing personal information to the electronic device.
It will be appreciated that the above-described notification and user authorization process is merely illustrative and not limiting of the implementations of the present disclosure, and that other ways of satisfying relevant legal regulations may be applied to the implementations of the present disclosure.
Meanwhile, it can be understood that the data (including but not limited to the data itself, the acquisition or the use of the data) related to the technical scheme should conform to the requirements of the corresponding laws and regulations and related regulations.
Fig. 1 is a flow chart illustrating a sound effect playing method according to some embodiments. As shown in fig. 1, an embodiment of the present disclosure provides a sound effect playing method, which may be executed by an electronic device, and in particular by a sound effect playing apparatus, where the apparatus may be implemented by software and/or hardware and configured in the electronic device. As shown in fig. 1, the method may include the following steps.
In step 110, a current beat point of background music played by the virtual scene is determined.
Here, the virtual scene is a scene presented by the electronic device, and may be a game scene (such as a two-dimensional game scene or a three-dimensional game scene) or a Virtual Reality (VR) scene. The background music played in a virtual scene is the accompaniment music of the virtual scene and is used to create the rhythm and mood of the corresponding virtual scene. For example, in a music rhythm game, each level has different background music, and the player plays along with the beat of that music. For another example, in a parkour-style running game, each level may have different background music to create a tense and exciting running experience.
The current beat point of the background music refers to the music beat point that the background music currently played in the virtual scene has reached. A music meter is a pattern of strong and weak beats, specified by the total note length of each bar in a score. For example, common time signatures include 1/4, 2/4, 3/4, 4/4, 3/8, 6/8, 7/8, 9/8, and 12/8. Taking 3/4 time as an example, a quarter note carries one beat, each bar contains 3 beats (three quarter notes), and the rhythm pattern is strong-weak-weak. Moreover, the duration of each beat depends on the tempo of the background music. For example, when the tempo of the background music is 60 BPM and the beat unit is a quarter note, 60 quarter notes occur per minute and one quarter-note beat lasts 1 second; when the tempo is 120 BPM and the beat unit is a quarter note, 120 quarter notes occur per minute and one quarter-note beat lasts 1/2 second.
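For illustration only (not part of the claimed method), the relationship between tempo, beat unit, and beat duration described above can be sketched as follows; the function name and parameters are hypothetical:

```python
def beat_length_seconds(bpm: float, beat_unit: int = 4, reference_unit: int = 4) -> float:
    """Duration of one beat in seconds.

    bpm counts how many reference-unit notes (e.g. quarter notes when
    reference_unit is 4) occur per minute; beat_unit is the note value
    carrying one beat in the time signature (e.g. 4 in 3/4 time).
    """
    seconds_per_reference_note = 60.0 / bpm
    return seconds_per_reference_note * (reference_unit / beat_unit)

# Examples matching the description:
print(beat_length_seconds(60))    # 60 BPM, quarter-note beat -> 1.0 second
print(beat_length_seconds(120))   # 120 BPM, quarter-note beat -> 0.5 second
```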
During playback of the background music, each time playback reaches a beat point, the electronic device takes that beat point as the current beat point.
In step 120, if the sound effect playing event is triggered within the time range corresponding to the current beat point, the target note corresponding to the sound effect playing event is determined according to the target note sequence.
Here, the time range of the current beat point refers to the beat duration corresponding to the current beat point. Taking background music in 3/4 time at 60 BPM as an example, a quarter note carries one beat, one bar comprises 3 beats, and 60 BPM means that 60 quarter notes occur per minute, so one quarter-note beat lasts 1 second. Thus, the beat duration corresponding to each beat point is related to the tempo of the background music.
If the sound effect playing event is triggered within the time range corresponding to the current beat point, the electronic equipment responds to the sound effect playing event and determines the corresponding target notes according to the target note sequence. The sound effect playing event is used for indicating the electronic equipment to play the target notes corresponding to the sound effect playing event so as to bring event feedback to the player through the target notes.
For example, a sound effect playing event may be triggered by a virtual character in the virtual scene contacting a target virtual item in the virtual scene. For example, in a parkour-style running game, the game player may trigger a sound effect playing event by controlling the virtual character to contact a gold coin in the virtual scene. Of course, the sound effect playing event may also be triggered by releasing a game skill in the virtual scene or by triggering a target control in the virtual scene. For example, in a music rhythm game, the player may trigger a sound effect playing event by tapping a note in the virtual scene.
Wherein, the target notes corresponding to the sound effect playing event can be determined according to the target note sequence. A plurality of notes determined based on the background music may be included in the target note sequence, and a target note is determined among the plurality of notes in the target note sequence each time an audio play event is triggered.
In step 130, the target note is played.
Here, the electronic device may control the audio engine to play the target note. Wherein the target note may be played at the time the sound effect play event is triggered.
It should be appreciated that, since the target note is determined from the target note sequence in response to the sound effect playing event within the time range of the current beat point of the background music played in the virtual scene, and the target note is then played, the playing timing of the target note falls within the time range of the current beat point. That is, the playing of the target note is tied to the current beat point of the background music, so that the target note remains musically harmonious with the background music.
For example, in a parkour-style running game, the game player may trigger a sound effect playing event by controlling the virtual character to contact a gold coin in the virtual scene, the sound effect playing event being used to instruct the playing of a gold coin sound, and the electronic device plays the gold coin sound at the current beat point of the background music in response to the sound effect playing event, so that the gold coin sound remains musically harmonious with the background music.
Therefore, by determining the current beat point of the background music played in the virtual scene and, when the sound effect playing event is triggered within the time range of the current beat point, determining the target note corresponding to the sound effect playing event according to the target note sequence and playing it, the playing of the target note coincides with the beat points of the background music, ensuring that the target note and the background music remain musically harmonious and bringing a better game sound effect experience to game players.
In some implementations, in step 120, the target note corresponding to the sound effect playing event may be determined according to a pre-configured target note sequence, where the target note is the note following, in the target note sequence, the note determined for the last triggered sound effect playing event.
Here, the target note sequence includes a plurality of notes determined based on background music. Wherein a plurality of notes can be determined from the pitch of the background music, the pitch of the music formed by the plurality of notes in sequence remaining in harmony with the pitch of the background music. For example, the target note sequence may be [ do, re, mi, do, re, mi ].
In some embodiments, one note sequence may be determined from a plurality of pre-configured note sequences as the target note sequence. Wherein the plurality of preconfigured note sequences are determined based on background music played by the virtual scene. It should be appreciated that for different virtual scenes, the plurality of preconfigured note sequences corresponding to different virtual scenes may be different due to the different background music being played. Of course, in the case where the tones of the background music played in different virtual scenes are similar, the note sequences between the different virtual scenes can be shared.
The target note determined in the pre-configured target note sequence may be the note following, in the target note sequence, the note determined for the last triggered sound effect playing event. For example, if the target note sequence is [do, re, mi, do, re, mi] and the note determined in the target note sequence for the last triggered sound effect playing event is the first "do" note, then the target note corresponding to the sound effect playing event at the current beat point is the first "re" note. It should be noted that if the note determined for the last triggered sound effect playing event is the second "mi" note, i.e., the last note of the sequence, then the target note corresponding to the sound effect playing event at the current beat point wraps back to the first "do" note.
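As a minimal sketch of the note selection just described (illustrative only; the class and method names are assumptions, not part of the disclosure), the target note sequence can be traversed cyclically, one note per sound effect playing event:

```python
class TargetNoteSequence:
    """Cycles through a preconfigured note sequence, one note per sound effect playing event."""

    def __init__(self, notes):
        self.notes = list(notes)   # e.g. ["do", "re", "mi", "do", "re", "mi"]
        self.index = -1            # index of the note used for the last triggered event

    def next_target_note(self) -> str:
        # Advance to the next note; wrap back to the first note after the last one.
        self.index = (self.index + 1) % len(self.notes)
        return self.notes[self.index]


sequence = TargetNoteSequence(["do", "re", "mi", "do", "re", "mi"])
print(sequence.next_target_note())   # first "do"
print(sequence.next_target_note())   # first "re"
```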
It should be appreciated that, since the target note sequence includes a plurality of notes determined based on the background music, by selecting target notes from the target note sequence in order as sound effect playing events occur, the sequentially played target notes can form a complete phrase that remains musically harmonious with the background music. For example, in a parkour-style running game with the target note sequence [do, re, mi, do, re, mi], if the virtual character touches 2 gold coins at the first beat point, 3 gold coins at the second beat point, and 5 gold coins at the third beat point, and the corresponding target notes are the first "do" note, the first "re" note, and the first "mi" note respectively, then the three target notes form the phrase [do, re, mi], which stays musically in harmony with the background music.
Therefore, by determining the target notes according to the pre-configured target note sequence, the notes which are sequentially played can be kept harmonious on the music theory, and better game sound effect experience is brought to game players.
In some implementations, in step 130, the target note may be played according to the trigger time of the sound effect playing event.
Here, in the time range of the current beat point, if the sound effect playing event is triggered, the target note may be played at the trigger time of triggering the sound effect playing event. For example, the time point corresponding to the current beat point is 11:20:38, and the beat time length corresponding to each beat of the background music is 1/4 second, if the sound effect playing event is triggered at the time of 11:20:38.20 in the time range of 11:20:38-11:20:38.25, the target note is played at the time of 11:20:38.20.
Therefore, by playing the target notes at the trigger time of the sound effect playing event, timely game feedback can be brought to the game player, and the game player can have good game experience.
It should be noted that, within the time range of the current beat point, the sound effect playing event may be triggered only once. Accordingly, the target note is played only once within the time range of the current beat point. For example, within the time range of 11:20:38-11:20:38.25, no matter how many sound effect playing events are triggered, only the target note corresponding to the first triggered sound effect playing event is played. Taking the parkour-style running game as an example, no matter how many times the virtual character touches gold coins within the time range of 11:20:38-11:20:38.25, or how many gold coins are contacted at the same time, the gold coin sound is played only once, at the moment the gold coins are first contacted within that beat point of the background music. In this way the gold coin sound falls on a beat point of the background music, and by playing the gold coin sound only once per beat point, the target note and the background music remain musically harmonious.
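The timing behavior described above (play at the trigger time, but at most once per beat window) can be sketched as follows; this is an illustrative assumption about how such gating might be coded, not the claimed implementation, and the callback names are hypothetical:

```python
class BeatWindowPlayer:
    """Plays at most one target note per beat window, at the moment the
    sound effect playing event is first triggered inside that window."""

    def __init__(self, sequence, play_note):
        self.sequence = sequence            # e.g. a TargetNoteSequence as sketched earlier
        self.play_note = play_note          # callback into the audio engine (assumed)
        self.in_beat_window = False
        self.played_in_current_beat = False

    def on_beat_point(self, beat_start_time: float):
        # Called each time background-music playback reaches a beat point.
        self.in_beat_window = True
        self.played_in_current_beat = False

    def on_sound_effect_event(self, trigger_time: float):
        # Called when the virtual character touches a target item (e.g. a gold coin).
        if not self.in_beat_window or self.played_in_current_beat:
            return                          # ignore further triggers within the same beat window
        note = self.sequence.next_target_note()
        self.play_note(note, trigger_time)  # play immediately at the trigger time
        self.played_in_current_beat = True
```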
In some implementations, it may be determined that the sound effect playing event is triggered when a virtual character in the virtual scene contacts a target virtual item in the virtual scene, where the sound effect playing event is triggered once within the time range corresponding to one beat point.
Here, the virtual character in the virtual scene may be a game character operated by a game player, and the target virtual article may refer to a virtual article having a bonus effect in the virtual scene. Taking a running game as an example, the target virtual object may be a virtual object such as a gold coin and/or a box, and in a game, the score of the game player in the game may be calculated based on the gold coin and/or box obtained by the game player.
When the virtual character in the virtual scene contacts the target virtual item in the virtual scene, it is determined that the sound effect playing event is triggered, where the sound effect playing event is used to indicate that the sound effect of the virtual character contacting the target virtual item is to be played.
It should be noted that, in the time range of one beat point, one audio play event is triggered. For example, the time point corresponding to the current beat point is 11:20:38, the time range corresponding to the current beat point is 11:20:38-11:20:38.25, if the virtual character touches the target virtual object in the virtual scene for the first time at the time of 11:20:38.20, the sound effect playing event is determined to be triggered so as to play the target note at the time of 11:20:38.20, and even if the virtual character touches the target virtual object in the virtual scene again at the time of 11:20:38.22, the sound effect playing event is not triggered any more so as to ensure that the target note is played only once within the time range of one beat point.
Therefore, by triggering one sound effect playing event in the time range of one beat point, only one target note can be played in the time range of one beat point, and accordingly harmony between the target note and background music can be guaranteed.
Fig. 2 is a flowchart of a sound effect playing method according to further embodiments. As shown in fig. 2, in some implementations, the sound effect playing method may include the following steps.
In step 210, the current beat point sent by the audio engine is received by a target module of the game engine, where the audio engine is configured to send a beat point to the target module each time playback of the background music reaches that beat point.
Here, during playback of the background music, each time the game's audio engine reaches a beat point of the background music, it sends that beat point to the target module of the game engine, and the target module of the game engine receives the current beat point sent by the audio engine. It should be understood that the target module of the game engine refers to the module in the game engine used for setting game rules and tracking the progress of the game and the player. For example, when the game engine is Unreal Engine 5, the target module may be a GamePlay module.
For example, by means of a callback, the audio engine may send a message to the target module of the game engine each time a beat point of the background music is reached during playback, so that the target module learns from the message that playback has reached a new beat point.
It should be noted that, the audio engine sends the current beat point to the target module of the game engine, so as to inform the target module to determine whether the target note needs to be played at the current beat point.
In step 220, the virtual scene is detected by the target module within the time range of the current beat point, and it is determined that the sound effect playing event is triggered when a virtual character in the virtual scene contacts a target virtual item in the virtual scene.
Here, when the target module receives the current beat point, the target module detects the virtual scene within the time range of the current beat point, and determines to trigger the sound effect playing event under the condition that the virtual character in the virtual scene is detected to contact the target virtual object in the virtual scene.
For example, the target module may detect a game screen corresponding to the virtual scene, and if a virtual character in the game screen contacts a target virtual object in the virtual scene, determine to trigger a sound effect playing event.
In step 230, the target module is controlled to send a play instruction to the audio engine in response to the sound effect playing event.
Here, in the case where it is determined that the sound effect play event is triggered, the target module of the game engine transmits a play instruction to the audio engine in response to the sound effect play event. The playing instruction is used for indicating the audio engine to determine a target note in the target note sequence and play the target note at the trigger time of the sound effect playing event.
In step 240, the audio engine is controlled to determine a target note corresponding to the audio playback event according to the target note sequence in response to the playback instruction.
Here, the audio engine may determine, when receiving the play instruction, a target note corresponding to the audio play event according to a pre-configured target note sequence.
It should be noted that, the operation of determining the target note sequence and how to determine the target note has been described in detail in the above embodiments, and will not be described in detail here.
In step 250, the audio engine is controlled to play the target note.
Here, the electronic device may control the audio engine to play the target note at the trigger time of the sound effect playing event, so that the target note is played within the time range of the current beat point and the played target note remains musically harmonious with the background music.
Therefore, the playing of the target notes in the game is aligned with the beat points of the background music, so that the target notes remain musically harmonious with the background music, bringing a better game sound effect experience to game players.
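Purely as an illustrative sketch of the interaction described in steps 210 to 250 (the class names, callbacks, and per-frame scene check are assumptions; an actual game engine integration, e.g. in Unreal Engine 5, would differ):

```python
class AudioEngine:
    """Illustrative stand-in for the game's audio engine."""

    def __init__(self, sequence):
        self.sequence = sequence           # preconfigured target note sequence
        self.on_beat = None                # callback registered by the target module

    def notify_beat(self, beat_time: float):
        # Invoked internally each time background-music playback reaches a beat point.
        if self.on_beat:
            self.on_beat(beat_time)

    def handle_play_instruction(self, trigger_time: float):
        # Steps 240/250: determine the target note and play it at the trigger time.
        note = self.sequence.next_target_note()
        print(f"playing {note} at t={trigger_time:.3f}s")


class GamePlayModule:
    """Illustrative target module of the game engine."""

    def __init__(self, audio_engine, scene):
        self.audio = audio_engine
        self.scene = scene                 # object exposing character/item contact checks (assumed)
        self.audio.on_beat = self.on_beat_point
        self.instruction_sent_this_beat = False

    def on_beat_point(self, beat_time: float):
        # Step 210: a new beat point arrives; allow one play instruction in this window.
        self.instruction_sent_this_beat = False

    def on_frame(self, now: float):
        # Steps 220/230: detect contact within the current beat window and send one instruction.
        if self.scene.character_touches_target_item() and not self.instruction_sent_this_beat:
            self.audio.handle_play_instruction(now)
            self.instruction_sent_this_beat = True
```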
In some implementations, the new target note sequence is determined from the at least one preconfigured note sequence if a preset condition is met, wherein the preset condition includes an interval between a current time and a time at which the target note was last played being greater than or equal to a preset time threshold, or a scene switch occurs in the virtual scene.
Here, in case that an interval between the current time and the time at which the target note was played last is greater than or equal to a preset time threshold value, a new target note sequence may be determined from at least one preconfigured note sequence to determine a target note corresponding to a subsequently triggered sound effect playing event based on the new target note sequence.
The current time may refer to the real time of the virtual scene. An interval between the current time and the time at which the target note was last played that is greater than or equal to the preset time threshold indicates that, even if a target note were played at the current time, it could not form a complete phrase, to the game player's ear, with the target note that was last played, because too much time has elapsed. Accordingly, when the interval between the current time and the time at which the target note was last played is greater than or equal to the preset time threshold, a new target note sequence may be re-determined.
For example, the target note sequence is [ do, re, mi, do, re, mi ], the last played target note is the first "do" note, and the playing time is 11:20:38. If the current time is 11:20:40, even if the first "re" note is triggered to be played at the current time, the first "do" note and the first "re" note cannot form a complete phrase in the sense of hearing of the game player due to the 2 second time interval. At this point, a new sequence of target notes may be redetermined to enable subsequent triggering of the played plurality of target notes to form a complete phrase in the listening sense of the game player.
Illustratively, the preset time threshold may be set to a value of 2 seconds. Of course, the value of the preset time threshold can also be set according to the actual use condition.
When a scene switch occurs in the virtual scene, a new target note sequence is determined from the at least one preconfigured note sequence. A scene switch in the virtual scene may mean that the entire virtual scene changes, for example, switching from virtual scene A to virtual scene B. Of course, a scene switch may also mean that the virtual character in the virtual scene enters a new sub-level. For example, a virtual scene may be divided into a plurality of sub-levels whose scenes are continuous, and when the virtual character moves from one sub-level to another, it is determined that a scene switch occurs in the virtual scene. Taking a parkour-style running game as an example, when the virtual character jumps from one platform to another, it is determined that a scene switch occurs in the virtual scene.
It should be appreciated that in the event of a scene switch in the virtual scene, the background music played by the virtual scene may change, or during the scene switch, an audio playback event may not be triggered, and in order to ensure that a plurality of target notes that are subsequently triggered to play can form a complete phrase in the listening sense of the game player, a new sequence of target notes needs to be redetermined.
Thus, by determining a new sequence of target notes, a plurality of target notes that are subsequently triggered to play can be enabled to form a complete phrase in the listening sense of the game player.
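A minimal sketch of the sequence-switching condition described above, assuming the 2-second threshold from the example; the selection policy (picking any preconfigured sequence) is an assumption:

```python
import random

def maybe_switch_sequence(current_sequence, preconfigured_sequences,
                          now, last_play_time, scene_switched,
                          time_threshold=2.0):
    """Return a new target note sequence when the preset condition is met,
    otherwise keep the current sequence."""
    timed_out = last_play_time is not None and (now - last_play_time) >= time_threshold
    if (timed_out or scene_switched) and preconfigured_sequences:
        return random.choice(preconfigured_sequences)
    return current_sequence
```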
In some implementations, the new target note sequence may be determined based on background music of the current virtual scene in combination with a mapping relationship between the background music and the note sequence.
Here, the current virtual scene may refer to a virtual scene currently displayed by the electronic device. In different virtual scenes, the type of background music that it plays may be different. For example, in virtual scene a, background music with a cheerful tune may be played, while in virtual scene B, background music with a quiet tune may be played.
By determining a new target note sequence from background music of the current virtual scene in combination with a mapping relationship between the background music and the note sequence, a target note sequence that remains harmonious with a tune of the background music can be determined from the tune of the background music.
In some embodiments, at least one note sequence may be constructed according to the tone of the background music, where the note sequence includes a plurality of notes, and the tone corresponding to the music formed by the plurality of notes sequentially is harmonious with the tone of the background music, and the background music and the at least one note sequence are associated to obtain the mapping relationship.
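As an illustrative sketch of such a mapping (the track identifiers and note spellings below are hypothetical examples, not data from the disclosure):

```python
# Mapping from background music to the note sequences constructed for it.
NOTE_SEQUENCES_BY_MUSIC = {
    "level_1_upbeat_theme": [
        ["do", "re", "mi", "do", "re", "mi"],
        ["so", "mi", "re", "do"],
    ],
    "level_2_calm_theme": [
        ["la", "so", "mi", "la", "so", "mi"],
    ],
}

def new_target_sequence(background_music_id: str):
    """Pick a note sequence associated with the current background music."""
    candidates = NOTE_SEQUENCES_BY_MUSIC.get(background_music_id, [])
    return candidates[0] if candidates else None
```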
Therefore, by determining the new target note sequence according to the background music of the current virtual scene, the finally played target notes can be kept harmonious with the tone of the background music in tone, and better game sound effect experience is brought to game players.
Fig. 3 is a schematic diagram illustrating module connection of an audio playback apparatus according to some embodiments. As shown in fig. 3, an embodiment of the present disclosure provides an audio playback apparatus, and the apparatus 300 may include:
a beat determining module 301 configured to determine a current beat point of background music played by the virtual scene;
the note determining module 302 is configured to determine, in a time range corresponding to the current beat point, a target note corresponding to an audio playing event according to a target note sequence if the audio playing event is triggered, where the target note sequence includes a plurality of notes determined based on the background music;
A playing module 303 configured to play the target note.
Optionally, the note determination module 302 is specifically configured to:
and determining a target note corresponding to the sound effect playing event according to a pre-configured target note sequence, wherein the target note is the next note of the notes determined according to the sound effect playing event triggered last time in the target note sequence.
Optionally, the playing module 303 is specifically configured to:
and playing the target notes according to the trigger time of the sound effect playing event.
Optionally, the note determination module 302 is specifically configured to:
and under the condition that the virtual character in the virtual scene is contacted with the target virtual object in the virtual scene, determining to trigger the sound effect playing event, wherein the sound effect playing event is triggered once in a time range corresponding to one beat point.
Optionally, the beat determining module 301 is specifically configured to:
receiving, by a target module of a game engine, the current beat point sent by an audio engine, wherein the audio engine is configured to send one beat point to the target module every time the audio engine plays to the background music;
The note determination module 302 is specifically configured to:
detecting the virtual scene through the target module, and determining to trigger the sound effect playing event under the condition that a virtual character in the virtual scene is contacted with a target virtual object in the virtual scene;
the playing module 303 is specifically configured to:
controlling the target module to respond to the sound effect playing event and sending a playing instruction to the audio engine;
and controlling the audio engine to respond to the playing instruction, and determining a target note corresponding to the sound effect playing event according to the target note sequence.
Optionally, the apparatus 300 further includes:
and the switching module is configured to determine a new target note sequence from at least one preset note sequence under the condition that a preset condition is met, wherein the preset condition comprises that the interval between the current time and the time of playing the target note last time is greater than or equal to a preset time threshold value, or the virtual scene is subjected to scene switching.
Optionally, the switching module is specifically configured to:
and determining a new target note sequence according to the background music of the current virtual scene and combining the mapping relation between the background music and the note sequence.
The logic of the method executed by each functional module in the apparatus 300 may refer to the portions of the method in the foregoing embodiments, which are not described herein.
Referring now to fig. 4, a schematic diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a car-mounted terminal (e.g., car navigation terminal), etc., and a fixed terminal such as a digital TV, a desktop computer, etc., or the electronic device may be a Virtual Reality (VR) device. The electronic device shown in fig. 4 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 4, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
In general, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, magnetic tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 4 shows an electronic device 600 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via communication means 609, or from storage means 608, or from ROM 602. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 601.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the electronic device and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining the current beat point of background music played by a virtual scene; if the sound effect playing event is triggered in the time range corresponding to the current beat point, determining a target note corresponding to the sound effect playing event according to a target note sequence, wherein the target note sequence comprises a plurality of notes determined based on the background music; and playing the target notes.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or combinations thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not in some cases define the module itself.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to herein is not limited to technical solutions formed by the specific combinations of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present disclosure (but not limited thereto).
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims. The specific manner in which the various modules of the apparatus in the above embodiments perform operations has been described in detail in the embodiments of the method and will not be described in detail here.
Claims (10)
1. A method of playing sound effects, comprising:
determining the current beat point of background music played by a virtual scene;
if the sound effect playing event is triggered in the time range corresponding to the current beat point, determining a target note corresponding to the sound effect playing event according to a target note sequence, wherein the target note sequence comprises a plurality of notes determined based on the background music;
and playing the target notes.
2. The method of claim 1, wherein determining the target note corresponding to the audio playback event based on the target note sequence comprises:
And determining a target note corresponding to the sound effect playing event according to a pre-configured target note sequence, wherein the target note is the next note of the notes determined according to the sound effect playing event triggered last time in the target note sequence.
3. The method of claim 1, wherein the playing the target note comprises:
and playing the target notes according to the trigger time of the sound effect playing event.
4. A method according to any one of claims 1 to 3, wherein the audio playback event is determined by:
and under the condition that the virtual character in the virtual scene is contacted with the target virtual object in the virtual scene, determining to trigger the sound effect playing event, wherein the sound effect playing event is triggered once in a time range corresponding to one beat point.
5. The method of claim 4, wherein determining the current beat point of the background music played by the virtual scene comprises:
receiving, by a target module of a game engine, the current beat point sent by an audio engine, wherein the audio engine is configured to send one beat point to the target module every time the audio engine plays to the background music;
And under the condition that the virtual character in the virtual scene is contacted with the target virtual object in the virtual scene, determining to trigger the sound effect playing event comprises the following steps:
detecting the virtual scene through the target module, and determining to trigger the sound effect playing event under the condition that a virtual character in the virtual scene is contacted with a target virtual object in the virtual scene;
the determining, according to the target note sequence, the target note corresponding to the sound effect playing event includes:
controlling the target module to respond to the sound effect playing event and sending a playing instruction to the audio engine;
and controlling the audio engine to respond to the playing instruction, and determining a target note corresponding to the sound effect playing event according to the target note sequence.
6. The method according to claim 2, wherein the method further comprises:
and determining a new target note sequence from at least one preconfigured note sequence under the condition that a preset condition is met, wherein the preset condition comprises that the interval between the current time and the last time of playing the target note is greater than or equal to a preset time threshold value, or the virtual scene is subjected to scene switching.
7. The method of claim 6, wherein determining a new target note sequence from a plurality of preconfigured note sequences comprises:
and determining a new target note sequence according to the background music of the current virtual scene and combining the mapping relation between the background music and the note sequence.
8. An audio playback apparatus, comprising:
the beat determining module is configured to determine the current beat point of the background music played by the virtual scene;
the note determining module is configured to determine a target note corresponding to the sound effect playing event according to a target note sequence if the sound effect playing event is triggered in a time range corresponding to the current beat point, wherein the target note sequence comprises a plurality of notes determined based on the background music;
and the playing module is configured to play the target notes.
9. A computer readable medium on which a computer program is stored, characterized in that the program, when being executed by a processing device, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a storage device having a computer program stored thereon;
Processing means for executing said computer program in said storage means to carry out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310468650.8A CN116570919A (en) | 2023-04-26 | 2023-04-26 | Sound effect playing method, sound effect playing device, medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116570919A true CN116570919A (en) | 2023-08-11 |
Family
ID=87538887
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310468650.8A Pending CN116570919A (en) | 2023-04-26 | 2023-04-26 | Sound effect playing method, sound effect playing device, medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116570919A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | |