CN113318432B - Music control method in game, nonvolatile storage medium and electronic device - Google Patents


Info

Publication number
CN113318432B
CN113318432B (granted from application CN202110636792.1A)
Authority
CN
China
Prior art keywords
sound
game
virtual
music
performance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110636792.1A
Other languages
Chinese (zh)
Other versions
CN113318432A (en)
Inventor
丁磊
黄智文
徐雨帆
邱锦瑜
张涛
栗殿懿
李也
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202110636792.1A priority Critical patent/CN113318432B/en
Publication of CN113318432A publication Critical patent/CN113318432A/en
Application granted granted Critical
Publication of CN113318432B publication Critical patent/CN113318432B/en
Legal status: Active

Classifications

    • A63F13/2145: Input arrangements for video game devices for locating contacts on a surface, the surface being also a display device, e.g. touch screens
    • A63F13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F13/814: Special adaptations for executing a specific game genre or mode: musical performances, e.g. by evaluating the player's ability to follow a notation
    • A63F2300/1075: Input arrangements for converting player-generated signals into game device control signals, specially adapted to detect the point of contact of the player on a surface using a touch screen
    • A63F2300/6072: Methods for processing data by generating or executing the game program for sound processing of an input signal, e.g. pitch and rhythm extraction, voice recognition
    • A63F2300/6081: Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization
    • A63F2300/8047: Music games
    • A63F2300/8082: Virtual reality

Abstract

The invention discloses a music control method in a game, a nonvolatile storage medium, and an electronic device. The method comprises the following steps: obtaining a performance musical piece corresponding to a virtual musical instrument model; continuously adjusting the current positions of performance identifiers moving along preset tracks on the graphical user interface as the performance progress of the musical piece changes; in response to a touch operation on an operation control, determining the current position of the performance identifier corresponding to that control; and determining a touch response result according to the degree of coincidence between that current position and a preset reference position, while playing the sound effect of the channel corresponding to the performance identifier. The invention solves the technical problem that sound effects recorded in the related art, captured only by picking up sound directly at the source, leave the player without a sense of proximity and envelopment.

Description

Music control method in game, nonvolatile storage medium and electronic device
Technical Field
The present invention relates to the field of computers, and more particularly, to a music control method in a game, a nonvolatile storage medium, and an electronic device.
Background
Game sound effects generally refer to background music played when a particular action occurs or a particular operation is performed in a game scene, and may include, but are not limited to: game theme songs, scene-switching background music, combat background music, and the like. By audio format, game sound effects fall mainly into the following forms:
(1) Single sound effects, i.e., one audio file constitutes one independent sound effect and is played by a program call;
(2) Composite sound effects, i.e., multiple sound elements synthesized in real time by the program, which can be reused;
(3) Musical sound effects, i.e., a short piece of music, typically produced by a music producer.
However, the game sound effects provided in the related art are recorded only by picking up sound directly at the source. As a result, the player lacks a sense of envelopment and clear sound relations of left and right, depth, and distance.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
At least some embodiments of the present invention provide a music control method in a game, a nonvolatile storage medium, and an electronic device, so as to at least solve the technical problem that game sound effects in the related art, recorded only by picking up sound directly at the source, leave the player without a sense of envelopment.
According to one embodiment of the present invention, there is provided a music control method in a game, which provides a graphical user interface through a first terminal device, the music control method in the game including:
obtaining a performance musical piece corresponding to a virtual musical instrument model; in response to the continuous change of the performance progress of the musical piece, continuously adjusting the current positions of performance identifiers moving along preset tracks on the graphical user interface, wherein the graphical user interface also displays operation controls corresponding to the respective performance identifiers, and the performance identifiers correspond to sound effects of different channels; in response to a touch operation on an operation control, determining the current position of the performance identifier corresponding to that control; and determining a touch response result according to the degree of coincidence between that current position and a preset reference position, while playing the sound effect of the channel corresponding to the performance identifier.
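The core touch-response step can be sketched as follows. This is a minimal illustration, not the patent's implementation: the marker width, the overlap thresholds, and all function names are assumptions.

```python
# Hypothetical sketch: score a touch by how closely the moving performance
# identifier overlaps the fixed reference position when its control is tapped.

def coincidence_ratio(marker_pos: float, reference_pos: float,
                      marker_width: float) -> float:
    """Fraction of the marker that overlaps the reference zone (0.0 to 1.0)."""
    offset = abs(marker_pos - reference_pos)
    return max(0.0, 1.0 - offset / marker_width)

def touch_response(marker_pos: float, reference_pos: float,
                   marker_width: float = 40.0) -> str:
    """Classify a tap; a non-miss result also triggers the channel's sound effect."""
    ratio = coincidence_ratio(marker_pos, reference_pos, marker_width)
    if ratio >= 0.8:
        return "perfect"   # (near-)complete coincidence
    if ratio > 0.0:
        return "good"      # partial overlap
    return "miss"          # no overlap with the reference position
```

In an actual game loop, `marker_pos` would be sampled at the moment the touch event arrives, since the identifier moves continuously with the performance progress.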
Optionally, the performance identifier includes: a left direction identifier and a right direction identifier, wherein the left direction identifier corresponds to the sound effect of the left channel, and the right direction identifier corresponds to the sound effect of the right channel; the operation control comprises: left and right direction controls.
Optionally, the music control method in the game further includes: adjusting sound attributes of the performance musical piece based on the touch response result, wherein the sound attributes include: sound distance, volume, and frequency.
Optionally, adjusting the sound attributes of the performance musical piece based on the touch response result includes: when the touch response result indicates that the current position partially or completely coincides with the preset reference position, making the sound distance closer and the volume larger while keeping the frequency unchanged; when the touch response result indicates that the current position does not coincide with the preset reference position, making the sound distance farther, the volume smaller, and the frequency weaker.
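A rough sketch of these attribute rules follows; the scaling factors and field names are illustrative assumptions, not values given by the patent.

```python
# Hypothetical sketch: a hit pulls the sound closer and louder with the
# frequency unchanged; a miss pushes it farther, quieter, and weakens it.
from dataclasses import dataclass

@dataclass
class SoundAttributes:
    distance: float   # perceived distance of the sound source
    volume: float     # 0.0 to 1.0
    frequency: float  # relative frequency/brightness factor

def apply_touch_result(attrs: SoundAttributes, hit: bool) -> SoundAttributes:
    if hit:  # current position (partially) coincides with the reference position
        return SoundAttributes(distance=attrs.distance * 0.5,
                               volume=min(1.0, attrs.volume * 1.5),
                               frequency=attrs.frequency)  # unchanged
    # no coincidence: farther, quieter, weakened frequency
    return SoundAttributes(distance=attrs.distance * 2.0,
                           volume=attrs.volume * 0.5,
                           frequency=attrs.frequency * 0.8)
```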
Optionally, the content displayed on the graphical user interface includes a game scene picture captured by a preset virtual camera shooting the three-dimensional game scene, the picture including the virtual musical instrument model being played and a virtual character controlled through a second terminal device. The music control method in the game further includes: determining a distance and/or a relative orientation between the virtual character and the virtual musical instrument model; and determining, according to the distance and/or the relative orientation, the volume and/or the channel at which the performance musical piece corresponding to the virtual musical instrument model is played on the second terminal device.
Optionally, determining, according to the distance, the volume at which the performance musical piece corresponding to the virtual musical instrument model is played on the second terminal device includes: describing the relationship between distance and volume with an attenuation curve, wherein the attenuation curve indicates that the volume remains constant within a preset range around the position of the virtual musical instrument model and, outside that range, decays continuously as the distance increases; and determining the playback volume on the second terminal device according to the distance and the attenuation curve.
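The attenuation curve described above can be sketched as a simple function; the radii, the linear falloff shape, and the function name are assumptions (a production curve might be logarithmic or hand-authored in the audio middleware).

```python
# Hypothetical sketch of the attenuation curve: constant volume inside a
# preset radius around the instrument model, then continuous decay.

def attenuated_volume(distance: float,
                      inner_radius: float = 5.0,
                      max_radius: float = 50.0,
                      base_volume: float = 1.0) -> float:
    if distance <= inner_radius:
        return base_volume            # flat region near the source
    if distance >= max_radius:
        return 0.0                    # inaudible beyond the maximum range
    # linear falloff between the two radii
    t = (distance - inner_radius) / (max_radius - inner_radius)
    return base_volume * (1.0 - t)
```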
Optionally, determining, according to the relative orientation, the channel through which the performance musical piece corresponding to the virtual musical instrument model is played on the second terminal device includes: when the virtual musical instrument model is located to the right of the virtual character, determining that the piece is played through the right channel on the second terminal device; when the virtual musical instrument model is located to the left of the virtual character, determining that the piece is played through the left channel on the second terminal device.
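One common way to decide which side the instrument model is on, assuming 2D positions and a facing direction for the character, is the sign of a cross product. This is an illustrative sketch; the patent does not specify the geometry, and all names here are assumptions.

```python
# Hypothetical sketch: the z-component of the 2D cross product of the
# character's facing direction and the direction to the instrument tells
# whether the instrument is to the character's left or right.

def pick_channel(character_pos, facing_dir, instrument_pos) -> str:
    to_instrument = (instrument_pos[0] - character_pos[0],
                     instrument_pos[1] - character_pos[1])
    cross = facing_dir[0] * to_instrument[1] - facing_dir[1] * to_instrument[0]
    return "left" if cross > 0 else "right"
```

For example, with the character at the origin facing "north" `(0, 1)`, an instrument to the east resolves to the right channel.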
Optionally, the sound effect is a single-shot audio clip obtained by slicing audio data recorded in advance with a dummy-head binaural stereo microphone.
Optionally, the performance musical piece is recorded with a dummy-head binaural stereo microphone, with different kinds of virtual musical instruments recorded on separate tracks, so that each virtual musical instrument has its own individual track.
Optionally, the music control method in the game further includes: combining the individual tracks into a single-channel point sound source for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware; and adjusting the width of the point sound source by means of the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
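The track-combining step can be sketched as below. The patent names only a generic "game audio middleware" (e.g. a Wwise-like engine), so the middleware side is stubbed here; `PointSource`, its fields, and the sample data are all illustrative assumptions.

```python
# Hypothetical sketch: downmix per-instrument tracks into one mono buffer,
# then attach it to a stubbed 3D point source whose width a surround-sound
# plug-in's reverberation would adjust in real middleware.

def downmix_to_mono(tracks):
    """Average equal-length sample tracks into a single mono track."""
    n = len(tracks)
    return [sum(samples) / n for samples in zip(*tracks)]

class PointSource:
    """Stub for a middleware 3D point sound source."""
    def __init__(self, samples, position, width=0.0):
        self.samples = samples
        self.position = position  # (x, y, z) in the game scene
        self.width = width        # adjusted via the plug-in's reverberation

guzheng = [0.5, 1.0, 0.5]   # illustrative per-instrument sample tracks
flute   = [0.5, 0.0, 0.5]
source = PointSource(downmix_to_mono([guzheng, flute]), position=(3.0, 0.0, 1.0))
```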
According to one embodiment of the present invention, there is also provided another music control method in a game, including:
obtaining game music to be played, the game music comprising a plurality of instrument sounds recorded on different tracks; obtaining the positions, in the virtual game scene, of the virtual instrument models corresponding to the plurality of instrument sounds, and setting the sound source positions of the instrument sounds of the different tracks according to the positions of those models; and obtaining the positional relationship between the virtual character and each sound source position, and controlling the playing of the instrument sounds of the different tracks according to that relationship.
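The per-track control step above can be sketched as follows: each track's sound source sits at its instrument model's position, and the character's position drives a per-track volume. The falloff radius and all names are illustrative assumptions.

```python
# Hypothetical sketch: map each track to a volume from the distance between
# the virtual character and that track's sound source position.
import math

def track_volumes(character_pos, instrument_positions, max_radius=50.0):
    """instrument_positions: {track_name: (x, y)} -> {track_name: volume}."""
    volumes = {}
    for name, pos in instrument_positions.items():
        d = math.dist(character_pos, pos)
        volumes[name] = max(0.0, 1.0 - d / max_radius)  # simple linear falloff
    return volumes
```

Recomputing this mapping whenever the character moves implements the position-update behaviour described in the following paragraphs.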
Optionally, the music control method in the game further includes: updating the positional relationship between the virtual character and each sound source in response to an adjustment of the virtual character's position in the virtual game scene; and adjusting the playing of the instrument sounds of the different tracks according to the updated relationship.
Optionally, obtaining the positional relationship between the virtual character and each sound source position, and controlling the playing of the instrument sounds of the different tracks according to that relationship, includes: determining a distance and/or a relative orientation between the virtual character and each sound source position; and controlling the volume and/or channel of the instrument sounds of the different tracks according to the distance and/or relative orientation.
Optionally, controlling the volume of the instrument sounds of the different tracks according to the distance includes: describing the relationship between distance and volume with an attenuation curve, wherein the attenuation curve indicates that the volume remains constant within a preset range around the position of the virtual instrument model and, outside that range, decays continuously as the distance increases; and determining the volume of the instrument sounds of the different tracks according to the distance and the attenuation curve.
Optionally, controlling the channels of the instrument sounds of the different tracks according to the relative orientation includes: when a virtual instrument model is located to the right of the virtual character, playing the corresponding instrument sound through the right channel; when a virtual instrument model is located to the left of the virtual character, playing the corresponding instrument sound through the left channel.
Optionally, the game music is recorded with a dummy-head binaural stereo microphone, with different kinds of virtual musical instruments recorded on separate tracks, so that each virtual instrument has its own individual track.
Optionally, the music control method in the game further includes: combining the individual tracks into a single-channel point sound source for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware; and adjusting the width of the point sound source by means of the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
According to one embodiment of the present invention, there is also provided a music control apparatus in a game, which provides a graphical user interface through a first terminal device, the music control apparatus in a game including:
the acquisition module, configured to acquire a performance musical piece corresponding to a virtual musical instrument model; the adjusting module, configured to continuously adjust, in response to the continuous change of the performance progress of the musical piece, the current positions of performance identifiers moving along preset tracks on the graphical user interface, wherein the graphical user interface also displays operation controls corresponding to the respective performance identifiers, and the performance identifiers correspond to sound effects of different channels; the determining module, configured to determine, in response to a touch operation on an operation control, the current position of the performance identifier corresponding to that control; and the control module, configured to determine a touch response result according to the degree of coincidence between that current position and a preset reference position, while playing the sound effect of the channel corresponding to the performance identifier.
Optionally, the performance identifier includes: a left direction identifier and a right direction identifier, wherein the left direction identifier corresponds to the sound effect of the left channel, and the right direction identifier corresponds to the sound effect of the right channel; the operation control comprises: left and right direction controls.
Optionally, the adjusting module is further configured to adjust sound attributes of the performance musical piece based on the touch response result, wherein the sound attributes include: sound distance, volume, and frequency.
Optionally, the adjusting module is further configured to, when the touch response result indicates that the current position partially or completely coincides with the preset reference position, make the sound distance closer and the volume larger while keeping the frequency unchanged; and, when the touch response result indicates that the current position does not coincide with the preset reference position, make the sound distance farther, the volume smaller, and the frequency weaker.
Optionally, the content displayed on the graphical user interface includes a game scene picture captured by a preset virtual camera shooting the three-dimensional game scene, the picture including the virtual musical instrument model being played and a virtual character controlled through a second terminal device. The determining module is further configured to determine a distance and/or a relative orientation between the virtual character and the virtual musical instrument model, and to determine, according to the distance and/or the relative orientation, the volume and/or the channel at which the performance musical piece corresponding to the virtual musical instrument model is played on the second terminal device.
Optionally, the determining module is further configured to describe the relationship between distance and volume with an attenuation curve, wherein the attenuation curve indicates that the volume remains constant within a preset range around the position of the virtual musical instrument model and, outside that range, decays continuously as the distance increases; and to determine the playback volume on the second terminal device according to the distance and the attenuation curve.
Optionally, the determining module is further configured to determine, when the virtual musical instrument model is located to the right of the virtual character, that the performance musical piece is played through the right channel on the second terminal device; and, when the model is located to the left of the virtual character, that the piece is played through the left channel on the second terminal device.
Optionally, the sound effect is a single-shot audio clip obtained by slicing audio data recorded in advance with a dummy-head binaural stereo microphone.
Optionally, the performance musical piece is recorded with a dummy-head binaural stereo microphone, with different kinds of virtual musical instruments recorded on separate tracks, so that each virtual musical instrument has its own individual track.
Optionally, the music control apparatus in the game further includes: a processing module, configured to combine the individual tracks into a single-channel point sound source for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware, and to adjust the width of the point sound source by means of the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
According to one embodiment of the present invention, there is also provided another music control apparatus in a game, including:
the acquisition module, configured to acquire game music to be played, the game music comprising a plurality of instrument sounds recorded on different tracks; the processing module, configured to obtain the positions, in the virtual game scene, of the virtual instrument models corresponding to the instrument sounds, and to set the sound source positions of the instrument sounds of the different tracks according to the positions of those models; and the control module, configured to obtain the positional relationship between the virtual character and each sound source position, and to control the playing of the instrument sounds of the different tracks according to that relationship.
Optionally, the music control apparatus in the game further includes: an adjusting module, configured to update the positional relationship between the virtual character and each sound source in response to an adjustment of the virtual character's position in the virtual game scene, and to adjust the playing of the instrument sounds of the different tracks according to the updated relationship.
Optionally, the control module is configured to determine a distance and/or a relative orientation between the virtual character and each sound source position, and to control the volume and/or channel of the instrument sounds of the different tracks according to the distance and/or relative orientation.
Optionally, the control module is configured to describe the relationship between distance and volume with an attenuation curve, wherein the attenuation curve indicates that the volume remains constant within a preset range around the position of the virtual instrument model and, outside that range, decays continuously as the distance increases; and to determine the volume of the instrument sounds of the different tracks according to the distance and the attenuation curve.
Optionally, the control module is configured to determine, when a virtual instrument model is located to the right of the virtual character, that the corresponding instrument sound is played through the right channel; and, when a virtual instrument model is located to the left of the virtual character, that the corresponding instrument sound is played through the left channel.
Optionally, the game music is recorded with a dummy-head binaural stereo microphone, with different kinds of virtual musical instruments recorded on separate tracks, so that each virtual instrument has its own individual track.
Optionally, the processing module is further configured to combine the individual tracks into a single-channel point sound source for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware, and to adjust the width of the point sound source by means of the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
According to an embodiment of the present invention, there is also provided a nonvolatile storage medium storing a computer program, wherein the computer program is configured to execute, when run, the music control method in the game of any one of the above items.
According to an embodiment of the present invention, there is also provided an electronic device including a memory storing a computer program and a processor configured to run the computer program to perform the music control method in the game of any one of the above items.
In at least some embodiments of the present invention, a performance musical piece corresponding to a virtual musical instrument model is obtained; in response to the continuous change of the performance progress of the musical piece, the current positions of performance identifiers moving along preset tracks are continuously adjusted on the graphical user interface, which also displays operation controls corresponding to the respective performance identifiers, each identifier corresponding to the sound effect of a different channel. In response to a touch operation on an operation control, the current position of the corresponding performance identifier is determined; a touch response result is then determined according to the degree of coincidence between that position and a preset reference position, while the sound effect of the corresponding channel is played. The player is thus given clear left-right and near-far sound relations, overcoming the lack of envelopment caused by sound effects recorded only at the source.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiments of the invention and together with the description serve to explain the invention and do not constitute a limitation on the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a music control method in a game according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of music control in a game according to one embodiment of the present invention;
FIG. 3 is a schematic illustration of an in-game music control process according to an alternative embodiment of the present invention;
FIG. 4 is a schematic illustration of an attenuation curve in accordance with an alternative embodiment of the present invention;
FIG. 5 is a flow chart of another method of music control in a game according to one embodiment of the present invention;
FIG. 6 is a schematic illustration of the playing of a plurality of instrument sounds for adjusting different audio tracks based on distance variation in accordance with an alternative embodiment of the present invention;
FIG. 7 is a schematic illustration of the playing of a plurality of instrument sounds for adjusting different tracks based on relative orientation changes in accordance with an alternative embodiment of the present invention;
fig. 8 is a block diagram of a music control device in a game according to one embodiment of the present invention;
FIG. 9 is a block diagram of another in-game music control device according to an alternative embodiment of the present invention;
fig. 10 is a block diagram of a music control device in another game according to an embodiment of the present invention;
fig. 11 is a block diagram of another music control device in a game according to an alternative embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
According to one embodiment of the present invention, an embodiment of a music control method in a game is provided. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, such as one executing a set of computer-executable instructions, and that although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from that described herein.
The music control method in the game in one embodiment of the present invention may be run on the terminal device or the server. The terminal device may be a local terminal device. When the music control method in the game runs on the server, the method can be realized and executed based on a cloud interaction system, wherein the cloud interaction system comprises the server and the client device.
In an alternative embodiment, various cloud applications, such as cloud games, may run under the cloud interaction system. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the body that runs the game program is separated from the body that presents the game picture: the storage and execution of the in-game music control method are completed on a cloud game server, while the client device only receives and sends data and presents the game picture. For example, the client device may be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a palm computer; the terminal device that actually processes the information, however, is the cloud game server. When playing, the player operates the client device to send operation instructions to the cloud game server; the cloud game server runs the game according to the operation instructions, encodes and compresses data such as game pictures, and returns the data to the client device through the network; finally, the client device decodes the data and outputs the game picture.
In an alternative embodiment, the terminal device may be a local terminal device. Taking a game as an example, the local terminal device stores the game program and presents the game picture. The local terminal device interacts with the player through a graphical user interface; that is, the game program is conventionally downloaded, installed, and run on the electronic device. The local terminal device may provide the graphical user interface to the player in a variety of ways; for example, the interface may be rendered on a display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting a graphical user interface including the game picture, and a processor for running the game, generating the graphical user interface, and controlling display of the graphical user interface on the display screen.
In a possible implementation manner, the embodiment of the invention provides a music control method in a game, and a graphical user interface is provided through a first terminal device, wherein the first terminal device can be the aforementioned local terminal device or the aforementioned client device in the cloud interaction system.
Taking a mobile terminal running in a local terminal device as an example, the mobile terminal can be a terminal device such as a smart phone (e.g. an Android mobile phone, an iOS mobile phone, etc.), a tablet computer, a palm computer, a mobile internet device (Mobile Internet Devices, abbreviated as MID), a PAD, a game console, etc. Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a music control method in a game according to an embodiment of the present invention. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processors 102 may include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processor (GPU), a Digital Signal Processing (DSP) chip, a Microprocessor (MCU), a programmable logic device (FPGA), a neural Network Processor (NPU), a Tensor Processor (TPU), an Artificial Intelligence (AI) type processor, etc.) and a memory 104 for storing data. Optionally, the mobile terminal may further include a transmission device 106, an input-output device 108, and a display device 110 for communication functions. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a music control method in a game in an embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, that is, implements the music control method in a game as described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
The input in the input-output device 108 may come from a plurality of human interface devices (Human Interface Device, abbreviated as HID), for example: a keyboard and mouse, a gamepad, or other special game controllers (e.g., a steering wheel, fishing rod, dance mat, or remote control). Some human interface devices may provide output functions in addition to input functions, such as force feedback and vibration of a gamepad, or audio output of a controller.
The display device 110 may be, for example, a head-up display (HUD) or a touch-sensitive liquid crystal display (LCD), also referred to as a "touch screen" or "touch display". The liquid crystal display enables the user to interact with the user interface of the mobile terminal. In some embodiments, the mobile terminal has a graphical user interface (GUI), and the user may interact with the GUI through finger contacts and/or gestures on the touch-sensitive surface. The human-computer interaction functions optionally include the following interactions: creating web pages, drawing, word processing, making electronic documents, games, video conferencing, instant messaging, sending and receiving electronic mail, call interfaces, playing digital video, playing digital music, and/or web browsing. The executable instructions for performing these human-computer interaction functions are configured/stored in a computer program product or readable storage medium executable by one or more processors.
In this embodiment, there is provided a music control method in a game running on the above mobile terminal, and fig. 2 is a flowchart of a music control method in a game according to one embodiment of the present invention, as shown in fig. 2, the method includes the steps of:
step S20, obtaining performance music corresponding to the virtual musical instrument model;
A performed musical composition capable of giving the listener an autonomous sensory meridian response (Autonomous Sensory Meridian Response, abbreviated as ASMR) provides an immersive experience in which hearing unconsciously induces resonances in the other senses (synesthesia). For particularly sensitive listeners, the sensation may even extend to the neck, back, and limbs.
In order to set the performance apart from the game sound effects currently used in games and give the game player an entirely new audio-visual experience, ASMR effect processing can be applied to the performance musical composition. ASMR effect processing typically involves two aspects: dummy-head recording and ear attachment. To achieve it, the music may be recorded using a dummy-head binaural stereo microphone. Unlike conventional fixed-position recording, the instruments are played while moving dynamically during recording; combined with the implementation means in the game, this lets the game player feel an immersive sense of surround, with clear left-right, depth, and distance relationships in the sound.
ASMR audio provided in the related art generally occurs in non-musical forms, for example human speech or natural environmental sounds, all of which exhibit similar characteristics: a very close sense of distance, distinct sound localization, wide-band energy distribution, and the like. The extremely close distance and distinct sound image create the effect of sound seeming to enter the ear canal, bringing a vivid auditory stimulus. Considering that the sound a game player actually hears is also transmitted to the ear through media such as skin and bone (for example, when sound propagates to the player's face, part of it is reflected and then travels to the ear, while another part is absorbed and transmitted into the ear through resonance), the hearing experience needs to be restored through dummy-head recording. The dummy-head binaural stereo microphone simulates the auditory state of the player's ears in order to give the listener an immersive sensation. The dummy head not only reproduces a player's head at a 1:1 ratio, but also keeps the interior details and overall materials of the ears highly consistent with a real person. With such a microphone, an instrument can move around the head in real time, up, down, left, right, near, and far, and the listener perceives the sound captured by the dummy-head binaural stereo microphone in a full 360 degrees.
After analyzing the above non-musical sounds that readily produce an ASMR effect, the instrumentation of the musical composition can be determined as instruments such as the zither (guzheng), lute (pipa), flowerpot drum, cymbals, rain drum, fruit shells, and bells. Both the zither and the lute are melodic instruments with very rich playing techniques; the string techniques of vibrato, bending, sliding, and harmonics are especially delicate, and recording with the dummy-head binaural stereo microphone can magnify these sounds. The flowerpot drum typically serves as the main rhythmic support of the composition. The cymbals also have rich playing techniques; in particular, rubbing the two cymbals against each other produces a metallic shimmer. The rain drum is mainly used to simulate the sound of rain. The fruit shells produce the sound of wood friction. The bells assist the rhythm and add metallic brightness. It should be noted that the above instrumentation can be determined according to the art style used in the game scene, and can be replaced with other types of instruments for different styles.
Virtual characters belonging to the same game faction (such as a guild) can participate in the music ensemble without being limited by character level or the time at which they joined the faction. After a virtual character enters a specific scene area (such as a square area) in the game scene, a virtual instrument model can be selected for playing, or the character can simply listen.
Fig. 3 is a schematic diagram of a music control process in a game according to an alternative embodiment of the present invention, as shown in fig. 3, when there are no virtual characters playing or the number of virtual characters participating in playing does not reach a preset number, if the virtual characters are close to the idle virtual instrument model, a performance icon is popped up on the graphical user interface, and the performance interface can be accessed by performing a click touch operation on the performance icon. Assuming that 2 virtual characters are needed for ensemble of the current music, but when no virtual character is playing or only 1 virtual character participates in playing, if the other 1 virtual character is close to the idle virtual musical instrument model, a playing icon is popped up on the graphical user interface, and the playing interface can be accessed by performing click touch operation on the playing icon. The primary role of the performance interface is to select performance tracks within the game scene.
When virtual characters are playing and the number of virtual characters participating in the performance reaches the preset number, the remaining virtual characters in the specific scene area can hear the performed musical composition. Virtual instrument models available for a virtual character to select in the game scene may include, but are not limited to: koto (guzheng), lute (pipa), cymbals, flowerpot drum, handpan, singing bowl, thumb piano, tuning fork, guqin, xun, and the like.
Step S21, in response to the continuous change of the performance progress of the performance musical composition, continuously adjusting the current position of a performance identifier moving along a preset track on the graphical user interface, wherein the graphical user interface also displays operation controls respectively corresponding to the performance identifiers, and the performance identifiers respectively correspond to the sound effects of different channels;
by responding to the continuous change of the performance progress of the performance musical composition, the current position of the performance identifier along the preset track can be continuously adjusted on the graphical user interface. The preset trajectory may be represented by any geometric shape, for example: straight line, semi-closed curve, fully-closed curve. In addition, operation controls respectively corresponding to the performance identifiers are also displayed on the graphical user interface, and the performance identifiers respectively correspond to sound effects of different channels. As also shown in fig. 3, the performance identifier may include a left direction identifier and a right direction identifier. The left direction identifier corresponds to the sound effect of the left channel and the right direction identifier corresponds to the sound effect of the right channel. Correspondingly, the operation controls can include a left direction control and a right direction control.
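The continuous mapping from performance progress to identifier position on a preset track can be sketched as follows. The track shapes (a straight line and a fully closed curve) come from the text above, but the parameterization, names, and default values are illustrative assumptions:

```python
import math

def identifier_position(progress: float, track: str = "line",
                        length: float = 1.0) -> tuple:
    """Map performance progress (0.0 to 1.0) to a 2-D position on a
    preset track, so the identifier moves continuously as the
    performance progresses."""
    if track == "line":
        # straight-line track from (0, 0) to (length, 0)
        return (progress * length, 0.0)
    if track == "circle":
        # fully closed curve: one lap of the unit circle per piece
        angle = 2 * math.pi * progress
        return (math.cos(angle), math.sin(angle))
    raise ValueError(f"unknown track shape: {track}")
```

Each frame, the UI would re-query this function with the latest progress value and redraw the left and right direction identifiers at the returned positions.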
It should be noted that the content contained in the performance identifier and the operation control described above is only an exemplary illustration, and does not constitute an undue limitation on the present invention. For example: the performance identifier may include an upper direction identifier and a lower direction identifier. The upper direction identifier corresponds to the sound effect of the left channel and the lower direction identifier corresponds to the sound effect of the right channel. Correspondingly, the operation controls can include an upper direction control and a lower direction control.
In an alternative embodiment, the above-mentioned sound effects are single-shot sound effects obtained by slicing audio data recorded in advance with the dummy-head binaural stereo microphone. The performance musical composition is recorded with the dummy-head binaural stereo microphone with the different kinds of virtual instruments separated into tracks, yielding an independent track for each virtual instrument. The separated instrument tracks (i.e., the individual tracks of the different instruments) and the interface (UI) sound effects are obtained through dummy-head recording and imported into the game audio middleware. Instrument tracks may include, but are not limited to: koto, lute, flowerpot drum, cymbals, rain drum, and fruit shells. UI sound effects may include, but are not limited to: bells and cymbals. The game player can trigger a UI sound effect by clicking a UI icon.
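Slicing a long dummy-head recording into single-shot sound effects might look like the following sketch. The onset times are assumed to have been marked elsewhere (manually or by an onset detector); the sample representation and names are illustrative:

```python
def slice_oneshots(samples, onsets, sample_rate):
    """Cut one long recording (a flat sample sequence) into single-shot
    sound effects at the given onset times, in seconds.

    Returns one slice per onset, each running up to the next onset
    (the last slice runs to the end of the recording)."""
    cuts = [int(t * sample_rate) for t in onsets] + [len(samples)]
    return [samples[a:b] for a, b in zip(cuts[:-1], cuts[1:])]
```

Each resulting slice would then be imported into the game audio middleware as an independent one-shot event.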
The sound source positions of a plurality of instrument sounds of different tracks may be set in a game scene according to the positions of the respective virtual instrument models, which may be achieved by presetting functions of the game engine and the above-described game audio middleware. Different placement locations between different virtual instrument models may produce different sound localization. At the same time, the game player is visually affected, and the sound of the virtual instrument model is considered to be transmitted from the placement position of the virtual instrument model.
Ear attachment means that the sound feels extremely close, as if it were sounding right next to the ear. Ear attachment is rarely used in the mixing of conventional music, because spatial unity and coordination among the different instruments is usually emphasized, and an obviously ear-attached sound easily breaks the auditory cohesion. Therefore, controlling the degree of ear attachment (the sense of distance) becomes the key point of ASMR effect processing. If the performed sound produces an unnatural ear-attached sensation, it will sound jarring. To avoid this drawback, the sense of distance must be controlled.
The perceived processing of sound space can be enhanced by the addition of a three-dimensional surround sound plug-in to the game audio middleware. The three-dimensional surround sound insert may provide additional acoustic reflection, thereby enhancing the perception of sound distance by a game player. More importantly, the balance of the virtual instrument model can be controlled by adjusting the spatial parameters in the three-dimensional surround sound plug-in.
In addition, after the three-dimensional spatialization function of the game audio middleware is enabled (with spatialization, sound-source movement in the game is rendered as sound heard from different speakers), the stereo tracks are placed into the game scene as point sources combined into mono, because mono is more conducive to sound localization. However, the sound width of a mono point source is greatly reduced; for this reason, the width needs to be broadened through the reverberation function built into the three-dimensional surround sound plug-in, which also blends the sound of each instrument together more completely.
Of course, enhancing the spatial impression of the sound weakens the ear-attached sensation. To emphasize the ASMR effect, sounds that bypass the above three-dimensional spatialization, use no attenuation curve, and use no three-dimensional surround sound plug-in may be used, namely the rain drum, the fruit shells, and the UI sound effects. The rain drum and the fruit shells are tracks that play along during the performance, and the UI sound effects are single-shot sound effects triggered when the game player taps the screen.
Among the imported tracks, the rain drum and the fruit shells are special in that they have no corresponding instruments placed in the game scene. Their role is to contrast with the instruments that use the three-dimensional surround sound plug-in (i.e., the virtual instrument models that virtual characters can play in the game scene) and to provide the most ear-attached sounds, achieving an accompaniment-like effect. Thus, different degrees of ASMR effect are formed in order from near to far: rain drum + fruit shells -> instruments controlled by the player's own virtual character -> instruments controlled by other virtual characters, providing game players with a richer and more interesting listening experience through vivid layering.
For the UI sound effects, in order to blend in rhythm-game play, instrument sounds such as bells and cymbals obtained through dummy-head recording can be used and randomly assigned to the performance identifiers in several different directions (such as the left direction identifier and the right direction identifier). Because a UI sound effect is a two-channel audio file, the position of the instrument is continuously changed while the bells and cymbals are recorded, producing different auditory localizations; the recording is then sliced into multiple single-shot UI sound effects. Since the sense of distance and localization of the virtual instrument models does not change after the virtual character begins playing, this random ASMR feedback also adds considerable sonic interest.
Step S22, responding to the touch operation of the operation control, and determining the current position of the performance identifier corresponding to the operation control at the current time;
the touch operation may include, but is not limited to: click touch operation, heavy press touch operation, long press touch operation, and the like. In an alternative embodiment, the current position of the performance identifier corresponding to the operation control at the current time may be determined by responding to the click touch operation of the operation control. As also shown in fig. 3, the current position of the left direction identifier corresponding to the left direction control at the current time may be determined by responding to a click touch operation on the left direction control.
Step S23, determining a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and the preset reference position, and simultaneously playing sound effects of different channels corresponding to the performance identifier.
The above-mentioned degree of coincidence between the current position and the preset reference position may include, but is not limited to: complete coincidence, partial coincidence, and no coincidence. The touch response result corresponding to the degree of coincidence may include, but is not limited to: perfect, good, and miss. Meanwhile, the sound effects of the different channels corresponding to the performance identifiers can be played. Still as shown in fig. 3, when the current position completely coincides with the preset reference position, the touch response result is determined to be perfect, and the left-channel sound effect corresponding to the left direction identifier is played.
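The mapping from degree of coincidence to a touch response result can be sketched as a simple grading function. The identifier width and the thresholds separating full, partial, and no overlap are illustrative assumptions, not values from the embodiment:

```python
def grade_touch(current_pos: float, reference_pos: float,
                identifier_width: float = 0.1) -> str:
    """Grade a touch by how far the identifier's current position is
    from the preset reference position along the track: full overlap
    yields "perfect", partial overlap "good", no overlap "miss"."""
    offset = abs(current_pos - reference_pos)
    if offset <= identifier_width * 0.25:   # essentially complete coincidence
        return "perfect"
    if offset <= identifier_width:          # partial coincidence
        return "good"
    return "miss"                           # no coincidence
```

On a "perfect" or "good" result, the game would also fire the channel-specific sound effect bound to the identifier that was hit.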
Through the above steps, the current position at the current moment of the performance identifier corresponding to an operation control is determined in response to a touch operation on that control, and the touch response result is determined according to the degree of coincidence between that current position and the preset reference position; because the performance identifiers correspond to the sound effects of different channels, stereo sound is provided during the performance. This achieves the technical effects of an immersive sense of surround and clear left-right, depth, and distance relationships for the game player, and solves the technical problem that the game-sound recording methods provided in the related art only pick up sound directly at the sound source and cannot give the game player a sense of surround.
Optionally, the music control method in the game may further include the following execution steps:
step S24, adjusting sound attributes of the performance musical composition based on the touch response result, wherein the sound attributes include: sound distance, volume, and frequency content.
In the process of adjusting the sound attributes of the performance musical composition based on the touch response result: when the touch response result indicates that the current position partially or completely coincides with the preset reference position, the sound distance becomes closer, the volume becomes larger, and the frequency content remains unchanged; when the touch response result indicates that the current position does not coincide with the preset reference position, the sound distance becomes farther, the volume becomes smaller, and the high frequencies are weakened. Specifically, by the nature of sound propagation, distance between objects causes a decrease in volume and a loss of high frequencies. Using this physical law, a change in perceived distance can be produced according to the accuracy of the performance. When the game player fails to tap the operation control accurately according to the prompt, a low-pass filter and a volume change are triggered; that is, the sound distance becomes farther, the volume becomes smaller, and the high frequencies are weakened. When the game player taps the operation control accurately according to the prompt, the sound whose high frequencies and volume were reduced is restored; that is, the sound distance becomes closer, the volume becomes larger, and the high frequencies are no longer attenuated. This feedback can be heard not only by the game player but also by the other game players participating in the ensemble, further increasing the layering of the ASMR sound.
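The attribute adjustment described above (restoration on a hit, a distance, volume, and low-pass penalty on a miss) can be sketched as a small state update. The step sizes, bounds, and filter cutoffs are illustrative assumptions:

```python
def adjust_sound(result: str, sound: dict) -> dict:
    """Return a new sound-attribute state after one touch response.

    A "perfect" or "good" result pulls the sound closer, raises the
    volume, and disables the low-pass filter; a "miss" pushes it away,
    lowers the volume, and engages the low-pass filter so high
    frequencies are weakened."""
    adjusted = dict(sound)  # leave the caller's state untouched
    if result in ("perfect", "good"):
        adjusted["distance"] = max(1.0, sound["distance"] - 1.0)
        adjusted["volume"] = min(1.0, sound["volume"] + 0.1)
        adjusted["lowpass_cutoff_hz"] = 20000   # filter effectively off
    else:
        adjusted["distance"] = sound["distance"] + 1.0
        adjusted["volume"] = max(0.0, sound["volume"] - 0.1)
        adjusted["lowpass_cutoff_hz"] = 4000    # high frequencies weakened
    return adjusted
```

In a real implementation these values would be forwarded to the audio middleware (e.g., as bus volume and filter parameters) and would also be replicated to the other players in the ensemble.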
Optionally, the content displayed by the graphical user interface includes a game scene picture obtained by shooting a three-dimensional game scene by a preset virtual camera, the game scene picture includes a virtual musical instrument model being played and a virtual character controlled by the second terminal device, and the music control method in the game may further include the following execution steps:
step S25, determining the distance and/or the relative orientation between the virtual character and the virtual musical instrument model;
step S26, according to the distance and/or the relative direction, the volume and/or the sound channel of the playing music corresponding to the virtual musical instrument model played on the second terminal device are determined.
By acquiring the distance between the virtual musical instrument model being played and the virtual character controlled by the second terminal device, the volume of the played musical piece corresponding to the virtual musical instrument model being played on the second terminal device can be determined, and by acquiring the relative orientation between the virtual musical instrument model being played and the virtual character controlled by the second terminal device, the channel of the played musical piece corresponding to the virtual musical instrument model being played on the second terminal device can be determined.
Optionally, in step S26, determining the volume of playing the performance musical piece corresponding to the virtual musical instrument model on the second terminal device according to the distance may include performing the steps of:
Step S260, describing the relation between the distance and the sound volume by adopting an attenuation curve, wherein the attenuation curve is used for indicating that the sound volume is kept unchanged in a preset range of the position of the virtual musical instrument model, and the sound volume continuously attenuates along with the continuous increase of the distance outside the preset range;
step S261, determining, according to the distance and the attenuation curve, the volume at which the performance musical composition corresponding to the virtual instrument model is played on the second terminal device.
The relationship between distance and volume can be described using an attenuation curve. The attenuation curve indicates that the volume remains unchanged within a preset range of the position of the virtual instrument model, and that outside the preset range the volume attenuates continuously as the distance increases. Thus, the volume at which the performance musical composition corresponding to the virtual instrument model is played on the second terminal device can be determined based on the distance and the attenuation curve.
Fig. 4 is a schematic view of an attenuation curve according to an alternative embodiment of the present invention. As shown in fig. 4, since a virtual character participating in the performance is closest to the virtual instrument model it plays, adjusting the attenuation curve causes the output bus volume (Output Bus Volume) to attenuate continuously as the Distance increases, so that the performing virtual character hears its own virtual instrument model at a larger volume and a wider sound width, while the virtual instrument models played by other virtual characters are heard at smaller volumes and narrower sound widths. The starting point of the attenuation curve is the instrument position. Since the virtual character's standing position cannot overlap the virtual instrument model but must keep a certain distance from it, the curve does not attenuate at first (i.e., within the preset range, the instrument sound remains constant and is not attenuated), so that the virtual instrument model played by the virtual character itself sounds more ear-attached than those played by other virtual characters; outside the preset range, the attenuation speed can be judged by whether the sound feels natural.
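The attenuation behavior described above, a flat region around the instrument followed by continuous falloff, can be sketched as a simple function. The hold radius, maximum audible distance, and linear falloff shape are illustrative assumptions; the embodiment only requires a constant region followed by continuous attenuation:

```python
def attenuated_volume(distance: float, hold_radius: float = 2.0,
                      max_distance: float = 30.0) -> float:
    """Output-bus volume (0.0 to 1.0) as a function of the listener's
    distance from the instrument position.

    Volume stays constant inside hold_radius (the virtual character
    cannot overlap the instrument, so there is no attenuation at
    first), then falls off linearly to silence at max_distance."""
    if distance <= hold_radius:
        return 1.0
    if distance >= max_distance:
        return 0.0
    return 1.0 - (distance - hold_radius) / (max_distance - hold_radius)
```

A middleware-driven implementation would instead author this curve in the audio tool and let the engine evaluate it per sound source each frame.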
Optionally, in step S26, determining, based on the relative orientation, a channel for playing the performance musical piece corresponding to the virtual musical instrument model on the second terminal device may include the following performing steps:
step S262, when the virtual musical instrument model is positioned on the right side of the virtual character, determining to play the playing music corresponding to the virtual musical instrument model through the right channel on the second terminal device;
in step S263, when the virtual musical instrument model is located on the left side of the virtual character, it is determined that the performance musical piece corresponding to the virtual musical instrument model is played through the left channel on the second terminal device.
By acquiring the relative orientation between the virtual musical instrument model being played and the virtual character controlled by the second terminal device, the channel of the playing musical piece corresponding to the virtual musical instrument model played on the second terminal device can be determined. With the virtual character as a reference target, the relative orientations between the virtual instrument model and the virtual character may include, but are not limited to: the virtual instrument model is located to the right of the virtual character, the virtual instrument model is located to the left of the virtual character, etc. When the virtual musical instrument model is positioned on the right side of the virtual character, determining that playing music corresponding to the virtual musical instrument model is played on the second terminal equipment through a right channel; when the virtual instrument model is located on the left side of the virtual character, it is determined that the performance musical piece corresponding to the virtual instrument model is played through the left channel on the second terminal device.
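The left/right channel decision described in steps S262 and S263 can be sketched with a two-dimensional cross product between the character's facing direction and the direction toward the instrument. The function name and the 2D top-down coordinate convention are illustrative assumptions; a game engine would normally supply this via its own transform API.

```python
def instrument_channel(character_pos, character_facing, instrument_pos):
    """Return 'left' or 'right': the channel on which the performance
    music of the instrument should be played for this character."""
    dx = instrument_pos[0] - character_pos[0]
    dy = instrument_pos[1] - character_pos[1]
    fx, fy = character_facing
    # 2D cross product of facing vector and direction-to-instrument:
    # positive means the instrument lies to the character's left.
    cross = fx * dy - fy * dx
    return "left" if cross > 0 else "right"
```

For example, a character facing north (`(0, 1)`) with an instrument due east of it would hear that instrument on the right channel.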
In this embodiment, there is also provided another method for controlling music in a game running on the mobile terminal, and fig. 5 is a flowchart of another method for controlling music in a game according to an embodiment of the present invention, as shown in fig. 5, the method includes the steps of:
step S50, obtaining game music to be played, wherein the game music comprises a plurality of musical instrument sounds recorded on different sound tracks;
the instrument tracks (namely the individual tracks of different instruments) and the UI sound effects can be obtained through a head recording mode, and the game audio middleware is imported. Instrument tracking may include, but is not limited to: koto, lute, flowerpot drum, cymbal, rain drum and fruit shell. UI sound effects may include, but are not limited to: bells, cymbals. The gamer can trigger the UI sound effect by clicking the UI icon. The game music can be recorded by adopting the dummy head binaural stereo microphone to divide the tracks of different types of virtual musical instruments, so that the individual tracks corresponding to the virtual musical instruments are obtained.
Step S51, the positions of the virtual instrument models corresponding to the instrument sounds in the virtual game scene are obtained, and the sound source positions of the instrument sounds of different tracks are set according to the positions of the virtual instrument models;
The sound source positions of the plurality of instrument sounds of different tracks may be set in the game scene according to the positions of the respective virtual instrument models, which can be achieved through preset functions of the game engine and the above-described game audio middleware. Different placement positions of different virtual instrument models produce different sound localizations. At the same time, the game player is influenced visually and perceives the sound of a virtual instrument model as coming from the position at which that model is placed.
Being close to the ear means that the sound is perceived as very near, as if it sounded right next to the ear. Such close-to-ear placement is rare in the mixing of regular music, because the different instruments usually need to share a unified, coordinated sense of space, and an obviously close-to-ear instrument easily breaks the mix apart perceptually. Controlling the degree of closeness to the ear (the sense of distance) therefore becomes the key point of ASMR effect processing: if a played sound produces an unintended close-to-ear sensation, an audible incongruity results. To overcome this technical drawback, the sense of distance needs to be controlled.
The perceived spatialization of sound can be enhanced by adding a three-dimensional surround sound plug-in to the game audio middleware. The plug-in can provide additional acoustic reflections, thereby enhancing the game player's perception of sound distance. More importantly, the balance of the virtual instrument models can be controlled by adjusting the spatial parameters in the three-dimensional surround sound plug-in.
In addition, after the three-dimensional spatialization function of the game audio middleware is enabled (with spatialization, a moving sound source in the game is heard as coming from different speakers), the stereo tracks are placed into the game scene as point sound sources down-mixed to mono, because mono is more favorable for sound localization. However, a mono point source has a greatly reduced sound width; for this reason the width needs to be widened again with the reverberation function built into the three-dimensional surround sound plug-in, which also blends the sounds of the individual instruments together more smoothly. That is, the individual tracks can be combined into mono point sound sources for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware, and the width of each point source can be adjusted by means of the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
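The down-mix from a recorded stereo track to a mono point source amounts to averaging the two channels per sample. This is a minimal sketch under that assumption; real middleware (the document does not name which) would do this on its own audio buffers.

```python
def stereo_to_mono(samples):
    """Combine a stereo track, given as a list of (left, right) sample
    pairs, into a mono buffer suitable for a point sound source by
    averaging the two channels of each frame."""
    return [(left + right) / 2.0 for left, right in samples]
```

The width lost in this step is then restored perceptually by the plug-in's reverberation, as described above, rather than by keeping the original stereo file.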
Of course, enhancing the spatial impression of sound weakens the close-to-ear perception. To emphasize the ASMR effect processing, sounds that bypass the three-dimensional spatialization described above, use no attenuation curve and use no three-dimensional surround sound plug-in may be employed, namely the rain drum, the fruit shell and the UI sound effects. The rain drum and fruit shell are tracks that can be played during a performance, and the UI sound effects are single-shot sounds triggered when the game player taps the screen.
Among the imported tracks, the rain drum and the fruit shell are special in that they have no corresponding instruments that can be placed in the game scene. Their effect is, in contrast to the instruments that use the three-dimensional surround sound plug-in (i.e., the virtual instrument models that virtual characters can play in the game scene), to provide the sounds closest to the ear, thereby achieving an accompaniment-like effect. In this way, different degrees of ASMR effect are formed in order from near to far: rain drum + fruit shell → the instrument controlled by the player's own virtual character → the instruments controlled by other virtual characters. This vivid layering further provides the game player with a richer and more interesting listening experience.
As for the UI sound effects, in order to blend them into the rhythm-based playing method, instrument sounds such as bells and cymbals obtained through dummy-head recording can be used, with performance marks in several different directions (such as left-direction marks and right-direction marks) arranged randomly. Since a UI sound effect is a two-channel audio file, the position of the instrument is changed continuously while the bell and cymbal are recorded, producing different aural sound localizations, and the recording is then sliced into a number of single-shot UI sound effects. The sense of distance and localization of a virtual instrument model does not change after the virtual character begins playing, so this random ASMR feedback adds considerable sonic interest.
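The slicing of one continuous recording into single-shot UI sound effects can be sketched as cutting the sample buffer at chosen boundaries. The function name and the idea of pre-chosen boundary indices are illustrative assumptions; in practice an editor or a silence detector would pick the cut points.

```python
def slice_single_shots(samples, boundaries):
    """Cut one long recorded take (a flat sample buffer) into single-shot
    UI sound effects at the given sample-index boundaries."""
    cuts = [0] + list(boundaries) + [len(samples)]
    return [samples[a:b] for a, b in zip(cuts, cuts[1:])]
```

Each resulting slice keeps the binaural localization it was recorded with, which is what lets randomly chosen slices produce the varied left/right ASMR feedback described above.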
Step S52, the position relation between the virtual character and each sound source position is obtained, and the playing of the plurality of instrument sounds of different sound tracks is controlled according to the position relation.
By acquiring the positional relationship between the virtual character and each sound source position, the playback of a plurality of instrument sounds of different tracks can be controlled based on the positional relationship. For example: when the virtual zither model is positioned on the right side of the virtual character, the sound of the virtual zither model can be controlled to be played in the right channel based on the position relation; when the virtual harp model is located to the left of the virtual character, the left channel may be controlled to play the sound of the virtual harp model based on the positional relationship.
Through the above steps, the game music to be played can be obtained, the game music comprising a plurality of instrument sounds recorded on different tracks; the positions of the virtual instrument models corresponding to the plurality of instrument sounds in the virtual game scene are obtained, and the sound source positions of the plurality of instrument sounds of the different tracks are set according to the positions of the respective virtual instrument models; the positional relationship between the virtual character and each sound source position is obtained, and the playing of the plurality of instrument sounds of the different tracks is controlled according to that positional relationship. This achieves the purpose of providing a three-dimensional stereo sound effect from the virtual instrument models at different positions in the three-dimensional game scene as the virtual character moves through it, giving the game player an immersive sense of being surrounded by sound, with clear left-right and near-far sound relationships, and thereby solves the technical problem that the game sound recording approach provided in the related art records directly from the sound source position only, leaving the game player without a sense of envelopment.
Optionally, the music control method in the game may further include the following execution steps:
step S53, the position relation between the virtual character and each sound source is updated in response to the position adjustment of the virtual character in the virtual game scene;
step S54, adjusting the playing of the plurality of instrument sounds of different tracks according to the updated positional relationship.
When the virtual character continuously moves in the three-dimensional game scene, the position of the virtual character changes, and for this purpose, the positional relationship between the virtual character and each sound source needs to be updated in response to the position adjustment of the virtual character, so that the playing of a plurality of instrument sounds of different tracks is adjusted according to the updated positional relationship. The above-described positional adjustment may cause a change in both the distance between the virtual character and each of the sound source positions and a change in the relative orientation between the virtual character and each of the sound source positions.
When the distance between the virtual character and each sound source position changes, it is necessary to adjust the volumes of the plurality of instrument sounds of different tracks in synchronization. Fig. 6 is a schematic diagram of the playing of a plurality of instrument sounds for adjusting different tracks based on distance variation according to an alternative embodiment of the present invention, as shown in fig. 6, with the virtual character moving from position 1 to position 2. When the virtual character is located at the position 1, a distance between the virtual character and the virtual zither model is D1, a distance between the virtual character and the virtual lute model is D2, and D1 is smaller than D2. At this time, the virtual zither model heard by the virtual character has a volume greater than that of the virtual lute model. When the virtual character is located at the position 2, the distance between the virtual character and the virtual zither model is D3, the distance between the virtual character and the virtual lute model is D4, and D3 is greater than D4. At this time, the virtual zither model heard by the virtual character has a smaller volume than the virtual lute model.
When the relative orientation between the virtual character and the respective sound source positions changes, it is necessary to adjust the channels of the plurality of instrument sounds of different tracks in synchronization. Fig. 7 is a schematic diagram of the playing of a plurality of instrument sounds for adjusting different tracks based on relative orientation changes according to an alternative embodiment of the present invention, as shown in fig. 7, with the virtual character being adjusted from direction 1 to direction 2. When the virtual character moves along the direction 1, the virtual zither model is positioned at the left side of the virtual character, and the left sound channel is controlled to play the sound of the virtual zither model; the virtual lute model is positioned on the right side of the virtual character, and the sound of the virtual lute model is controlled to be played in the right channel. When the virtual character moves along the direction 2, the virtual zither model is positioned on the right side of the virtual character, and the right sound channel is controlled to play the sound of the virtual zither model; the virtual lute model is positioned at the left side of the virtual character, and the sound of the virtual lute model is controlled to be played by the left sound channel.
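The recomputation performed in steps S53 and S54 after the virtual character moves can be sketched by combining the distance-based volume and the orientation-based channel for every placed instrument. All names, the flat radius and the linear decay are illustrative assumptions consistent with the attenuation and channel rules described earlier.

```python
import math

def update_instrument_playback(character_pos, character_facing, instruments,
                               flat_radius=2.0, max_distance=20.0):
    """For each instrument (name -> 2D position), recompute volume and
    channel after the character moves or turns."""
    result = {}
    for name, pos in instruments.items():
        dx, dy = pos[0] - character_pos[0], pos[1] - character_pos[1]
        dist = math.hypot(dx, dy)
        # Distance-based attenuation: flat zone, then linear decay.
        if dist <= flat_radius:
            vol = 1.0
        elif dist >= max_distance:
            vol = 0.0
        else:
            vol = 1.0 - (dist - flat_radius) / (max_distance - flat_radius)
        # Orientation-based channel: sign of the 2D cross product.
        fx, fy = character_facing
        channel = "left" if fx * dy - fy * dx > 0 else "right"
        result[name] = (round(vol, 3), channel)
    return result
```

Calling this once per movement update reproduces the behaviour of Figs. 6 and 7: the zither grows quieter and switches channel as the character walks past it toward the lute.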
Alternatively, in step S52, acquiring the positional relationship between the virtual character and the respective sound source positions, and controlling the playback of the plurality of instrument sounds of different tracks according to the positional relationship may include the following performing steps:
step S520, determining a distance and/or a relative azimuth between the virtual character and each sound source position;
Step S521 controls the volume and/or channel of a plurality of instrument sounds of different tracks according to the distance and/or relative orientation.
By acquiring the distance between the virtual character and each sound source position, the volume of the plurality of instrument sounds of different tracks can be controlled, and by acquiring the relative orientation between the virtual character and each sound source position, the channels of the plurality of instrument sounds of different tracks can be controlled.
Alternatively, in step S521, controlling the volumes of the plurality of instrument sounds of different tracks according to the distance may include performing the steps of:
step S5210, describing the relation between the distance and the sound volume by adopting an attenuation curve, wherein the attenuation curve is used for indicating that the sound volume is kept unchanged in a preset range of the position of the virtual musical instrument model, and the sound volume continuously attenuates along with the continuous increase of the distance outside the preset range;
in step S5211, the volumes of the plurality of instrument sounds of different tracks are determined according to the distance and the attenuation curve.
The relationship between distance and volume can be described using an Attenuation (Attenuation) curve. The attenuation curve is used for indicating that the volume is kept unchanged in a preset range of the position of the virtual musical instrument model, and outside the preset range, the volume continuously attenuates along with the continuous increase of the distance. Thus, the volume of a plurality of instrument sounds for different tracks can be determined based on the distance and the decay curve.
Optionally, in step S521, controlling the channels of the plurality of instrument sounds of different soundtracks according to the relative orientation may include performing the steps of:
step S5212, when the virtual musical instrument model is located on the right of the virtual character, determining to play the musical instrument sound corresponding to the virtual musical instrument model through the right channel;
in step S5213, when the virtual instrument model is located to the left of the virtual character, it is determined that the instrument sound corresponding to the virtual instrument model is played through the left channel.
By acquiring the relative orientation between the virtual character and each sound source position, the channels of a plurality of instrument sounds of different soundtracks can be controlled. With the virtual character as a reference target, the relative orientations between the virtual instrument model and the virtual character may include, but are not limited to: the virtual instrument model is located to the right of the virtual character, the virtual instrument model is located to the left of the virtual character, etc. Playing the instrument sound corresponding to the virtual instrument model through a right sound channel when the virtual instrument model is positioned on the right of the virtual character; when the virtual instrument model is positioned at the left side of the virtual character, playing the instrument sound corresponding to the virtual instrument model through the left sound channel.
Through the above steps, the purpose of providing a three-dimensional stereo sound effect from the virtual instrument models at different positions in the three-dimensional game scene as the virtual character moves through it is achieved, so that the game player experiences an immersive sense of being surrounded by sound, with clear left-right and near-far sound relationships.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiment also provides a music control device in a game, which is used for implementing the above embodiment and the preferred implementation, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of a music control apparatus in a game according to an embodiment of the present invention, which provides a graphic user interface through a first terminal device, as shown in fig. 8, comprising: an acquisition module 10 for acquiring performance musical compositions corresponding to the virtual musical instrument models; the adjusting module 12 is configured to continuously adjust, on the graphical user interface, a current position of the performance identifier moving along the preset track in response to a continuous change of a performance progress of the performance music, where the graphical user interface further displays operation controls corresponding to the performance identifiers respectively, and the performance identifiers respectively correspond to sound effects of different channels; the determining module 14 is configured to determine, in response to a touch operation on the operation control, a current position of a performance identifier corresponding to the operation control at a current time; and the control module 16 is used for determining a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and the preset reference position, and playing the sound effects of different channels corresponding to the performance identifier.
Optionally, the performance identifier includes: a left direction identifier and a right direction identifier, wherein the left direction identifier corresponds to the sound effect of the left channel, and the right direction identifier corresponds to the sound effect of the right channel; the operation control comprises: left and right direction controls.
Optionally, the adjusting module 12 is further configured to adjust sound attributes of the playing music based on the touch response result, where the sound attributes include: sound distance, volume, audio.
Optionally, the adjusting module 12 is further configured to, when the touch response result indicates that the current position partially or completely coincides with the preset reference position, make the sound distance closer, make the volume larger, and keep the audio unchanged; when the touch response result indicates that the current position does not coincide with the preset reference position, make the sound distance farther, make the volume smaller, and weaken the audio.
Optionally, the content displayed by the graphical user interface includes a game scene picture obtained by shooting a three-dimensional game scene by a preset virtual camera, the game scene picture including a virtual musical instrument model being played and a virtual character controlled by the second terminal device, and the determining module 14 is further configured to determine a distance and/or a relative orientation between the virtual character and the virtual musical instrument model; and determining the volume and/or the sound channel of playing music corresponding to the virtual musical instrument model on the second terminal equipment according to the distance and/or the relative direction.
Optionally, the determining module 14 is further configured to describe a relationship between the distance and the volume by using an attenuation curve, where the attenuation curve is used to indicate that the volume remains unchanged within a preset range of the position of the virtual musical instrument model, and outside the preset range, the volume continuously attenuates with the continuous increase of the distance; and determining the volume of the playing music corresponding to the virtual musical instrument model played on the second terminal equipment according to the distance and the attenuation curve.
Optionally, the determining module 14 is further configured to determine, when the virtual musical instrument model is located on the right of the virtual character, to play the playing musical piece corresponding to the virtual musical instrument model through the right channel on the second terminal device; when the virtual instrument model is located on the left side of the virtual character, it is determined that the performance musical piece corresponding to the virtual instrument model is played through the left channel on the second terminal device.
Optionally, the audio is a single-shot audio obtained by slicing the audio data, and the audio data is recorded by a dummy head binaural stereo microphone in advance.
Optionally, the playing music adopts a mode of recording the dummy head binaural stereo microphone to divide the tracks of different kinds of virtual musical instruments, so as to obtain the individual tracks corresponding to the virtual musical instruments.
Alternatively, fig. 9 is a block diagram of a music control apparatus in another game according to an alternative embodiment of the present invention. As shown in fig. 9, in addition to all the modules shown in fig. 8, the music control apparatus further includes: a processing module 18, configured to combine the individual audio tracks into mono point sound sources for use in the three-dimensional game scene by using the three-dimensional spatialization function of the game audio middleware, and to adjust the width of each point sound source by using the reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
In this embodiment, another music control device in a game is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, and will not be described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 10 is a block diagram showing a configuration of another music control apparatus in a game according to an embodiment of the present invention, as shown in fig. 10, the apparatus including: an acquisition module 20, configured to acquire game music to be played, where the game music includes a plurality of instrument sounds recorded in different audio tracks; the processing module 22 is configured to obtain positions of virtual instrument models corresponding to the plurality of instrument sounds in the virtual game scene, and set sound source positions of the plurality of instrument sounds of different tracks according to the positions of the respective virtual instrument models; the control module 24 is configured to obtain a positional relationship between the virtual character and each sound source position, and control playing of a plurality of instrument sounds of different audio tracks according to the positional relationship.
Alternatively, fig. 11 is a block diagram showing a construction of another in-game music control apparatus according to an alternative embodiment of the present invention, as shown in fig. 11, which includes, in addition to all the blocks shown in fig. 10, the in-game music control apparatus described above including: an adjustment module 26 for updating the positional relationship between the virtual character and each sound source in response to the positional adjustment of the virtual character in the virtual game scene; and adjusting the playing of the plurality of instrument sounds of different tracks according to the updated position relation.
Optionally, a control module 24 for determining the distance and/or relative position between the virtual character and each sound source location; the volume and/or channel of a plurality of instrument sounds of different tracks are controlled according to the distance and/or relative orientation.
Optionally, the control module 24 is configured to describe a relationship between the distance and the volume by using an attenuation curve, where the attenuation curve is used to indicate that the volume remains unchanged within a preset range of the position of the virtual musical instrument model, and outside the preset range, the volume continuously attenuates with the continuous increase of the distance; the volume of a plurality of instrument sounds for different tracks is determined based on the distance and the decay curve.
Optionally, the control module 24 is configured to determine, when the virtual musical instrument model is located on the right of the virtual character, to play the musical instrument sound corresponding to the virtual musical instrument model through the right channel; when the virtual instrument model is positioned at the left side of the virtual character, the instrument sound corresponding to the virtual instrument model is determined to be played through the left channel.
Optionally, the game music adopts a mode of recording by the dummy head binaural stereo microphone to divide the tracks of different kinds of virtual musical instruments, so as to obtain the individual tracks corresponding to the virtual musical instruments.
Optionally, the processing module 22 is further configured to combine the individual audio tracks into a mono point sound source to be used in the three-dimensional game scene by using a three-dimensional spatialization function of the game audio middleware; the width of the point sound source is adjusted by utilizing the reverberation function built-in the three-dimensional surround sound plug-in of the game audio middleware.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a non-volatile storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in the present embodiment, the above-described nonvolatile storage medium may be configured to store a computer program for performing the steps of:
S1, obtaining performance music corresponding to a virtual musical instrument model;
s2, continuously adjusting the current position of the performance identifier moving along the preset track on the graphical user interface in response to the continuous change of the performance progress of the performance music, wherein operation controls respectively corresponding to the performance identifiers are also displayed on the graphical user interface, and the performance identifiers respectively correspond to sound effects of different channels;
s3, responding to touch operation of the operation control, and determining the current position of the performance identifier corresponding to the operation control at the current time;
and S4, determining a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and the preset reference position, and simultaneously playing the sound effects of different channels corresponding to the performance identifier.
Alternatively, in the present embodiment, the above-described nonvolatile storage medium may be further configured to store a computer program for performing the steps of:
s1, obtaining game music to be played, wherein the game music comprises a plurality of musical instrument sounds recorded on different sound tracks;
s2, acquiring positions of virtual instrument models corresponding to the plurality of instrument sounds in the virtual game scene, and setting sound source positions of the plurality of instrument sounds of different sound tracks according to the positions of the virtual instrument models;
S3, acquiring the position relation between the virtual character and each sound source position, and controlling the playing of a plurality of instrument sounds of different sound tracks according to the position relation.
Alternatively, in the present embodiment, the above-described nonvolatile storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic device may further include a transmission device and an input/output device, each of which is connected to the processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps by means of a computer program:
S1, obtaining performance music corresponding to a virtual musical instrument model;
S2, in response to the continuous change of the performance progress of the performance music, continuously adjusting the current position of each performance identifier moving along a preset track on the graphical user interface, wherein operation controls respectively corresponding to the performance identifiers are also displayed on the graphical user interface, and the performance identifiers respectively correspond to sound effects of different channels;
S3, in response to a touch operation on an operation control, determining the current position, at the current time, of the performance identifier corresponding to that operation control;
S4, determining a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and a preset reference position, and simultaneously playing the sound effects of the different channels corresponding to the performance identifier.
Optionally, in this embodiment, the above processor may be further configured to execute the following steps by means of a computer program:
S1, obtaining game music to be played, wherein the game music comprises a plurality of instrument sounds recorded on different sound tracks;
S2, obtaining the positions, in the virtual game scene, of the virtual instrument models corresponding to the plurality of instrument sounds, and setting the sound source positions of the plurality of instrument sounds of the different sound tracks according to the positions of the virtual instrument models;
S3, obtaining the positional relation between the virtual character and each sound source position, and controlling the playing of the plurality of instrument sounds of the different sound tracks according to the positional relation.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, which are not repeated here.
The foregoing embodiment numbers of the present invention are merely for description and do not imply any preference among the embodiments.
In the foregoing embodiments of the present invention, each embodiment is described with its own emphasis; for any part not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other manners. The apparatus embodiments described above are merely exemplary. For example, the division of the units is merely a logical function division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces, units, or modules, and may be electrical or take other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The foregoing is merely a preferred embodiment of the present invention. It should be noted that several modifications and refinements may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and refinements shall also be regarded as falling within the protection scope of the present invention.

Claims (18)

1. A music control method in a game, characterized in that a graphical user interface is provided by a first terminal device, the music control method in a game comprising:
obtaining performance music corresponding to the virtual musical instrument model;
in response to the continuous change of the performance progress of the performance music, continuously adjusting the current position of each performance identifier moving along a preset track on the graphical user interface, wherein operation controls respectively corresponding to the performance identifiers are also displayed on the graphical user interface, and the performance identifiers respectively correspond to sound effects of different channels;
in response to a touch operation on an operation control, determining the current position, at the current time, of the performance identifier corresponding to that operation control;
determining a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and a preset reference position, and simultaneously playing the sound effects of the different channels corresponding to the performance identifier;
wherein the in-game music control method further comprises: when the touch response result indicates that the current position partially or completely coincides with the preset reference position, making the sound distance of the performance music become closer, the volume of the performance music become larger, and the audio frequency of the performance music remain unchanged;
and when the touch response result indicates that the current position does not coincide with the preset reference position, making the sound distance become farther, the volume become smaller, and the audio frequency be weakened.
2. The in-game music control method according to claim 1, wherein the performance identifiers include a left direction identifier and a right direction identifier, the left direction identifier corresponding to the sound effect of the left channel and the right direction identifier corresponding to the sound effect of the right channel; and the operation controls include a left direction control and a right direction control.
3. The in-game music control method according to claim 1, characterized in that the in-game music control method further comprises:
and adjusting sound attributes of the performance music based on the touch response result, wherein the sound attributes include: sound distance, volume, and audio frequency.
4. The in-game music control method according to claim 1, wherein the content displayed on the graphical user interface comprises a game scene picture obtained by shooting a three-dimensional game scene with a preset virtual camera, the game scene picture comprising the virtual musical instrument model being played and a virtual character controlled by a second terminal device, and the in-game music control method further comprises:
determining a distance and/or a relative orientation between the virtual character and the virtual instrument model;
and determining, according to the distance and/or the relative orientation, the volume and/or the channel at which the performance music corresponding to the virtual musical instrument model is played on the second terminal device.
5. The in-game music control method according to claim 4, wherein determining, according to the distance, the volume at which the performance music corresponding to the virtual musical instrument model is played on the second terminal device comprises:
describing the relation between the distance and the volume by an attenuation curve, wherein the attenuation curve indicates that the volume remains unchanged within a preset range around the position of the virtual musical instrument model, and that outside the preset range the volume continuously attenuates as the distance increases;
and determining, according to the distance and the attenuation curve, the volume at which the performance music corresponding to the virtual musical instrument model is played on the second terminal device.
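The attenuation curve described in claim 5 could, for instance, be a piecewise function that is flat inside the preset range and then decays. The linear decay shape and all parameter values below are assumptions for illustration; a real implementation might use a logarithmic or designer-authored curve:

```python
# Sketch of an attenuation curve: volume stays constant within a preset
# range around the instrument, then attenuates continuously with distance.

def attenuated_volume(distance, preset_range=10.0, cutoff=60.0, base_volume=1.0):
    if distance <= preset_range:
        return base_volume                 # unchanged inside the preset range
    if distance >= cutoff:
        return 0.0                         # inaudible beyond the cutoff
    t = (distance - preset_range) / (cutoff - preset_range)
    return base_volume * (1.0 - t)         # continuous attenuation outside
```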
6. The in-game music control method according to claim 4, wherein determining, according to the relative orientation, the channel at which the performance music corresponding to the virtual musical instrument model is played on the second terminal device comprises:
when the virtual musical instrument model is located on the right side of the virtual character, determining that the performance music corresponding to the virtual musical instrument model is played on the second terminal device through the right channel;
and when the virtual musical instrument model is located on the left side of the virtual character, determining that the performance music corresponding to the virtual musical instrument model is played on the second terminal device through the left channel.
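The left/right decision in claim 6 reduces to determining which side of the character's facing direction the instrument lies on. One hypothetical way is the sign of a 2-D cross product; the function name and the "both" fallback for an instrument directly ahead or behind are illustrative assumptions:

```python
# Sketch: pick the playback channel from the instrument's side
# relative to the virtual character's facing direction.

def channel_for_instrument(char_pos, char_facing, instrument_pos):
    """char_facing is a 2-D direction vector; a negative cross product
    means the instrument is to the character's right, positive means left."""
    to_inst = (instrument_pos[0] - char_pos[0], instrument_pos[1] - char_pos[1])
    cross = char_facing[0] * to_inst[1] - char_facing[1] * to_inst[0]
    if cross < 0:
        return "right"
    if cross > 0:
        return "left"
    return "both"  # directly ahead or behind: assumed to use both channels
```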
7. The in-game music control method according to claim 1, wherein the sound effect is a single-shot sound effect obtained by slicing audio data pre-recorded with a dummy-head binaural stereo microphone.
8. The in-game music control method according to claim 1, wherein the performance music is recorded with dummy-head binaural stereo microphones, and track-separation processing is performed on different kinds of virtual musical instruments to obtain an individual sound track corresponding to each virtual musical instrument.
9. The in-game music control method according to claim 8, wherein the in-game music control method further comprises:
combining the individual sound tracks into single-track point sound sources for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware;
and adjusting the width of each point sound source by means of a reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
10. A music control method in a game, wherein the content displayed on a graphical user interface comprises a game scene picture obtained by shooting a three-dimensional game scene with a preset virtual camera, the game scene picture comprising a virtual instrument model being played and a virtual character controlled by a second terminal device, the method comprising:
obtaining game music to be played, wherein the game music comprises a plurality of instrument sounds recorded on different sound tracks, and the game music comprises performance music played by a first terminal and corresponding to the virtual instrument model being played;
determining a distance and/or a relative orientation between the virtual character and each virtual instrument model;
determining, according to the distance and/or the relative orientation, the volume and/or the channel at which the performance music corresponding to the virtual instrument model is played on the second terminal device;
wherein the performance music has the following characteristics: when a touch response result, corresponding to a touch operation on an operation control of the first terminal, indicates that the current position of the performance identifier corresponding to the operation control partially or completely coincides with a preset reference position, the sound distance of the performance music becomes closer, the volume of the performance music becomes larger, and the audio frequency of the performance music remains unchanged;
and when the touch response result indicates that the current position does not coincide with the preset reference position, the sound distance becomes farther, the volume becomes smaller, and the audio frequency is weakened, wherein the operation controls respectively correspond to performance identifiers, and the performance identifiers respectively correspond to sound effects of different channels.
11. The in-game music control method according to claim 10, further comprising:
updating the positional relation between the virtual character and each virtual musical instrument model in response to an adjustment of the position of the virtual character in the virtual game scene;
and adjusting the playing of the plurality of instrument sounds of the different sound tracks according to the updated positional relation.
12. The in-game music control method according to claim 10, wherein controlling the volumes of the plurality of instrument sounds of the different sound tracks according to the distance comprises:
describing the relation between the distance and the volume by an attenuation curve, wherein the attenuation curve indicates that the volume remains unchanged within a preset range around the position of the virtual musical instrument model, and that outside the preset range the volume continuously attenuates as the distance increases;
and determining the volumes of the plurality of instrument sounds of the different sound tracks according to the distance and the attenuation curve.
13. The in-game music control method according to claim 10, wherein the game music is recorded with dummy-head binaural stereo microphones, and track-separation processing is performed on different kinds of virtual musical instruments to obtain an individual sound track corresponding to each virtual musical instrument.
14. The in-game music control method according to claim 13, further comprising:
combining the individual sound tracks into single-track point sound sources for use in the three-dimensional game scene by means of the three-dimensional spatialization function of the game audio middleware;
and adjusting the width of each point sound source by means of a reverberation function built into the three-dimensional surround sound plug-in of the game audio middleware.
15. A music control apparatus in a game, wherein a graphical user interface is provided by a first terminal device, the music control apparatus comprising:
an acquisition module, configured to obtain performance music corresponding to a virtual musical instrument model;
an adjusting module, configured to continuously adjust, in response to the continuous change of the performance progress of the performance music, the current position of each performance identifier moving along a preset track on the graphical user interface, wherein operation controls respectively corresponding to the performance identifiers are also displayed on the graphical user interface, and the performance identifiers respectively correspond to sound effects of different channels;
a determining module, configured to determine, in response to a touch operation on an operation control, the current position, at the current time, of the performance identifier corresponding to that operation control;
and a control module, configured to determine a touch response result according to the coincidence degree between the current position of the performance identifier corresponding to the operation control and a preset reference position, and to simultaneously play the sound effects of the different channels corresponding to the performance identifier;
wherein the apparatus is further configured to: when the touch response result indicates that the current position partially or completely coincides with the preset reference position, make the sound distance of the performance music become closer, the volume of the performance music become larger, and the audio frequency of the performance music remain unchanged; and when the touch response result indicates that the current position does not coincide with the preset reference position, make the sound distance become farther, the volume become smaller, and the audio frequency be weakened.
16. A music control apparatus in a game, wherein the content displayed on a graphical user interface comprises a game scene picture obtained by shooting a three-dimensional game scene with a preset virtual camera, the game scene picture comprising a virtual instrument model being played and a virtual character controlled by a second terminal device, the apparatus comprising:
an acquisition module, configured to obtain game music to be played, wherein the game music comprises a plurality of instrument sounds recorded on different sound tracks, and the game music comprises performance music played by a first terminal;
and a control module, configured to determine a distance and/or a relative orientation between the virtual character and each sound source position, and to determine, according to the distance and/or the relative orientation, the volume and/or the channel at which the performance music corresponding to the virtual instrument model is played on the second terminal device;
wherein the apparatus is further configured to: when a touch response result, corresponding to a touch operation on an operation control of the first terminal, indicates that the current position of the performance identifier corresponding to the operation control partially or completely coincides with a preset reference position, make the sound distance of the performance music become closer, the volume of the performance music become larger, and the audio frequency of the performance music remain unchanged;
and when the touch response result indicates that the current position does not coincide with the preset reference position, make the sound distance become farther, the volume become smaller, and the audio frequency be weakened, wherein the operation controls respectively correspond to performance identifiers, and the performance identifiers respectively correspond to sound effects of different channels.
17. A nonvolatile storage medium, wherein a computer program is stored in the storage medium, and the computer program is arranged to, when run, execute the in-game music control method of any one of claims 1 to 9 or the in-game music control method of any one of claims 10 to 14.
18. An electronic device comprising a memory and a processor, characterized in that the memory has stored therein a computer program, the processor being arranged to run the computer program to perform the in-game music control method of any one of claims 1 to 9 or the in-game music control method of any one of claims 10 to 14.
CN202110636792.1A 2021-06-07 2021-06-07 Music control method in game, nonvolatile storage medium and electronic device Active CN113318432B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110636792.1A CN113318432B (en) 2021-06-07 2021-06-07 Music control method in game, nonvolatile storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110636792.1A CN113318432B (en) 2021-06-07 2021-06-07 Music control method in game, nonvolatile storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN113318432A CN113318432A (en) 2021-08-31
CN113318432B (en) 2024-02-23

Family

ID=77421125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110636792.1A Active CN113318432B (en) 2021-06-07 2021-06-07 Music control method in game, nonvolatile storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN113318432B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1770257A (en) * 2005-10-21 2006-05-10 李全成 Method for making 360-degree surround stereo music
JP2010131113A (en) * 2008-12-03 2010-06-17 Copcom Co Ltd Sound output processor, game program for achieving the sound output processor, and recording medium
CN109165005A (en) * 2018-09-04 2019-01-08 Oppo广东移动通信有限公司 Audio Enhancement Method, device, electronic equipment and storage medium
CN111111167A (en) * 2019-12-05 2020-05-08 腾讯科技(深圳)有限公司 Sound effect playing method and device in game scene and electronic device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5437317B2 (en) * 2011-06-10 2014-03-12 株式会社スクウェア・エニックス Game sound field generator


Also Published As

Publication number Publication date
CN113318432A (en) 2021-08-31

Similar Documents

Publication Publication Date Title
US11778412B2 (en) Head pose mixing of audio files
US7732694B2 (en) Portable music player with synchronized transmissive visual overlays
JP2018129035A (en) Haptic broadcast with select haptic metadata
WO2020224322A1 (en) Method and device for processing music file, terminal and storage medium
CN108701369A (en) For the making and encapsulation of the recreation data of virtual reality
CN113823250B (en) Audio playing method, device, terminal and storage medium
US10540820B2 (en) Interactive virtual reality system for experiencing sound
CN110915240B (en) Method for providing interactive music composition to user
CN113318432B (en) Music control method in game, nonvolatile storage medium and electronic device
Goodwin Beep to boom: the development of advanced runtime sound systems for games and extended reality
JP2001517814A (en) Sound effect system
KR20060054678A (en) Apparatus and method for implementing character video synchronized with sound
US20230199420A1 (en) Real-world room acoustics, and rendering virtual objects into a room that produce virtual acoustics based on real world objects in the room
Hamilton Perceptually coherent mapping schemata for virtual space and musical method
CN106952637B (en) Interactive music creation method and experience device
Nuora Introduction to sound design for virtual reality games: a look into 3D sound, spatializer plugins and their implementation in Unity game engine
Kapralos et al. Advanced sound integration for toy-based computing
Prandoni A virtual reality environment with personalized spatial audio rendering
CN117379787A (en) Music control method and device in game, readable storage medium and electronic equipment
Catalano Virtual Reality In Interactive Environments: A Comparative Analysis Of Spatial Audio Engines
CN113986191A (en) Audio playing method and device, terminal equipment and storage medium
CN115373626A (en) Method and device for switching audio modes in game scene
Sound Game Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant