CN111714889A - Sound source control method, sound source control device, computer equipment and medium

Sound source control method, sound source control device, computer equipment and medium

Info

Publication number
CN111714889A
Authority
CN
China
Legal status
Pending
Application number
CN202010568352.2A
Other languages
Chinese (zh)
Inventor
叶甫盖尼·切尔尼
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Application filed by Netease Hangzhou Network Co Ltd
Priority to CN202010568352.2A
Publication of CN111714889A

Classifications

    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50: Controlling the output signals based on the game progress
    • A63F13/54: Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42: Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)

Abstract

The application provides a sound source control method, a sound source control device, a computer device and a medium. The method includes: in response to a movement control instruction, controlling a virtual character to move in a game scene, and determining whether the virtual character is inside a first target volume sound source according to the current position of the virtual character and the position of the first target volume sound source; if the virtual character is inside the first target volume sound source, setting the sound emission position of the first target volume sound source on the virtual character and adjusting the point sound sources inside the first target volume sound source to a working state; and if the virtual character is not inside the first target volume sound source, adjusting the point sound sources inside the first target volume sound source to a closed state.

Description

Sound source control method, sound source control device, computer equipment and medium
Technical Field
The present application relates to the field of sound control, and in particular, to a method, an apparatus, a computer device, and a medium for sound source control.
Background
With the rapid development of science and technology, the terminal devices available to people have become increasingly intelligent, and the network games built on these devices have become increasingly diverse. In their leisure time, many people pass the time by playing network games. To make a network game attract more users, game operators design it to be as realistic as possible, so that users feel as if they were personally on the scene.
A network game gives the user this sense of presence mainly through visual and auditory effects. In terms of the auditory effect, ideally every object in the game should be able to produce sound, which improves the simulation effect and makes the user's experience more realistic; however, having every object produce sound requires a large amount of computing resources to control, and the resource consumption becomes excessive.
Disclosure of Invention
In view of the above, an object of the present application is to provide a sound source control method, apparatus, computer device and medium, which are used to solve the problem of excessive consumption of system resources in the prior art.
In a first aspect, an embodiment of the present application provides a method for controlling a sound source, where a part of a game scene is displayed through a graphical user interface provided by a terminal device, where the game scene includes a virtual character, a volumetric sound source, and a point sound source, and the method includes:
responding to a movement control instruction, controlling the virtual character to move in the game scene, and judging whether the virtual character is in a first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source;
if the virtual character is in the first target volume sound source, setting the sounding position of the first target volume sound source on the virtual character, and adjusting a point sound source in the first target volume sound source to be in a working state;
and if the virtual character is not in the first target volume sound source, adjusting the point sound source in the first target volume sound source to be in a closed state.
Optionally, if the virtual character is not located in the first target volume sound source, the method further includes:
and setting the sound production position of the first target volume sound source at one side of the first target volume sound source close to the virtual character.
Optionally, the setting of the sound emission position of the first target volume sound source at a side of the first target volume sound source close to the virtual character includes:
setting the sound emission position of the first target volume sound source at a target position of the first target volume sound source; wherein the target position is the position of the intersection of a target connecting line and the boundary of the first target volume sound source; and the target connecting line is the line between the position of the physical center of the first target volume sound source and the position of the virtual character.
Optionally, before the virtual character is controlled to move in the game scene in response to the movement control instruction, the method further includes:
determining a candidate volume sound source, of the at least two candidate volume sound sources, of which the distance from the virtual character is smaller than a first preset distance according to the current position of the virtual character and the positions of the at least two candidate volume sound sources, and taking the candidate volume sound source as the first target volume sound source;
and adjusting the first target volume sound source to be in a working state.
Optionally, determining, according to the current position of the virtual character and the positions of at least two candidate volume sound sources, a candidate volume sound source, of the at least two candidate volume sound sources, whose distance from the virtual character is smaller than a first preset distance, as the first target volume sound source, includes:
for each reference area, determining a candidate volume sound source, of the candidate volume sound sources in the reference area, of which the distance to the virtual character is smaller than a first preset distance according to the position of the candidate volume sound source in the reference area and the current position of the virtual character, as the first target volume sound source; wherein each of the reference areas is located in a different direction of the virtual character.
Optionally, before the virtual character is controlled to move in the game scene in response to the movement control instruction, the method further includes:
selecting a candidate volume sound source from the at least two candidate volume sound sources as the first target volume sound source by using a preset proximity algorithm according to the current position of the virtual character and the positions of the at least two candidate volume sound sources;
and adjusting the first target volume sound source to be in a working state.
Optionally, the method further includes:
adjusting other candidate volume sound sources except the first target volume sound source in the at least two candidate volume sound sources to be in an off state.
Optionally, the method further includes:
and if the virtual character is in the first target volume sound source, adjusting a point sound source which is positioned outside the first target volume sound source and has a distance with the virtual character greater than a second preset distance to be in a closed state.
Optionally, the method further includes:
and if the virtual character is in the first target volume sound source, adjusting the point sound source which is positioned in the first target volume sound source and has a distance with the virtual character greater than a second preset distance to be in a closed state.
Optionally, the method further includes:
if the virtual character is not in the first target volume sound source, calculating the distance between the position of the virtual character and the position of a point sound source outside the first target volume sound source according to the current position of the virtual character and the position of the point sound source outside the first target volume sound source;
and if the distance between the position of the point sound source outside the first target volume sound source and the position of the virtual character is greater than a third preset distance, adjusting the point sound source outside the first target volume sound source, the distance between which and the position of the virtual character is greater than the third preset distance, to be in a closed state.
Optionally, the method further includes:
calculating the distance between each point sound source and the virtual character according to the position of the virtual character and the position of the point sound source;
determining the attenuation coefficient of the corresponding point sound source according to the distance between each point sound source and the virtual character;
and controlling the sounding strategy of the corresponding point sound source according to the attenuation coefficient of each point sound source.
Optionally, the method further includes:
if the virtual character is not in the first target volume sound source, calculating the distance between the first target volume sound source and the virtual character according to the position of the virtual character and the position of the first target volume sound source;
determining an attenuation coefficient of the first target volume sound source according to the distance between the first target volume sound source and the virtual character;
and controlling the sound production strategy of the first target volume sound source according to the attenuation coefficient of the first target volume sound source.
In a second aspect, an embodiment of the present application provides an apparatus for controlling a sound source, which displays a part of a game scene including a virtual character, a volumetric sound source, and a point sound source through a graphical user interface provided by a terminal device, including:
the moving module is used for responding to a moving control instruction and controlling the virtual character to move in the game scene;
the judging module is used for judging whether the virtual character is in the first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source;
the first adjusting module is used for setting the sounding position of the first target volume sound source on the virtual character and adjusting a point sound source positioned in the first target volume sound source to be in a working state if the virtual character is in the first target volume sound source;
and the second adjusting module is used for adjusting the point sound source positioned in the first target volume sound source to be in a closed state if the virtual character is not in the first target volume sound source.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, performs the steps of the above method.
The method for controlling the sound source provided by the embodiment of the application comprises the steps of responding to a movement control instruction, controlling the virtual character to move in a game scene, and judging whether the virtual character is in a first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source; then, if the virtual character is in the first target volume sound source, setting the sounding position of the first target volume sound source on the virtual character, and adjusting a point sound source in the first target volume sound source to be in a working state; or if the virtual character is not in the first target volume sound source, adjusting the point sound source in the first target volume sound source to be in a closed state.
In an embodiment, the method provided by the present application determines the sound emission position of the first target volume sound source according to the position of the virtual character and adjusts the working state of the point sound sources inside the first target volume sound source. Adjusting the working state in this way means that the point sound sources do not have to produce sound at all times, thereby reducing the consumption of computing resources in the system.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a schematic flowchart of a method for controlling a sound source according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a game scenario provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of another game scenario provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for controlling a sound source according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application;
fig. 6 is a schematic view of another game scenario provided in the embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
In the prior art, in order to make the game scene of an online game more realistic, a game operator may give every element in the scene (e.g., a mountain, an airship, a river, a forest, wind, rain, etc.) a sound production effect. An element may be large or small, and one large element may be composed of a great many small elements. A large element may have its own sound effect, and the small elements that make it up may also have corresponding sound effects. For example, a forest is made up of numerous trees; the sound of the forest is different from the sound of a single tree and is a combination of the sounds of many trees. Whether large or small, any element capable of producing sound can serve as a sound source. In the present application, an element with a large volume is defined as a volume sound source, whose sound emission position is located at the physical center inside the volume sound source; an element with a small volume is defined as a point sound source, whose sound emission position coincides with the position of the point sound source itself.
However, the sounds emitted in an online game are controlled by the system. When too many elements in the game scene produce sound, more computing resources are consumed; the computing resources occupied by sound become excessive, the computing resources left for controlling other aspects are reduced, and the fluency of the game is affected.
For the above reasons, an embodiment of the present application provides a method for controlling a sound source, as shown in fig. 1, where a part of a game scene including a virtual character, a volume sound source, and a point sound source is displayed through a graphical user interface provided by a terminal device, the method includes:
s101, responding to a movement control instruction, controlling a virtual character to move in a game scene, and judging whether the virtual character is in a first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source;
s102, if the virtual character is in the first target volume sound source, setting the sounding position of the first target volume sound source on the virtual character, and adjusting a point sound source in the first target volume sound source to be in a working state;
and S103, if the virtual character is not in the first target volume sound source, adjusting the point sound source in the first target volume sound source to be in a closed state.
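To make the flow of steps S101 to S103 concrete, the following is a minimal Python sketch under simplifying assumptions: the boundary of the volume sound source is modelled as a sphere, and the data structures, field names (such as `emission_position` and `active`) and the `on_character_moved` function are illustrative, not the application's actual implementation.
```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class PointSource:
    position: Vec3
    active: bool = False          # working state (True) or closed state (False)


@dataclass
class VolumeSource:
    center: Vec3                  # physical center of the volume sound source
    radius: float                 # boundary approximated as a sphere
    emission_position: Vec3 = (0.0, 0.0, 0.0)
    point_sources: List[PointSource] = field(default_factory=list)

    def contains(self, pos: Vec3) -> bool:
        return sum((p - c) ** 2 for p, c in zip(pos, self.center)) <= self.radius ** 2


def on_character_moved(char_pos: Vec3, target: VolumeSource) -> None:
    """Steps S101-S103: after each move, decide the emission position and point-source states."""
    if target.contains(char_pos):            # S102: character is inside the first target volume sound source
        target.emission_position = char_pos  # attach the emission position to the character
        for ps in target.point_sources:
            ps.active = True                 # interior point sources enter the working state
    else:                                    # S103: character is outside
        for ps in target.point_sources:
            ps.active = False                # interior point sources are closed
```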
In this embodiment of the present application, the terminal device mainly refers to an intelligent device that displays the game screen and on which control operations can be performed on the virtual character. The terminal device may include any one of the following: a smart phone, a tablet computer, a notebook computer, a desktop computer, and the like. Terminal devices may be divided into devices that can compute control instructions themselves and devices that cannot. A device that can compute control instructions computes a control instruction after receiving it from the user and then controls the virtual character accordingly; a device that cannot compute control instructions uploads the received control instruction to a cloud server and, after receiving the processing result from the cloud server, controls the virtual character according to that result. The graphical user interface is the interface that displays the game screen on the display of the terminal device. The game scene is the virtual game space that carries the virtual character during the game; within it, the virtual character can move, release skills and perform other actions under the control of operation instructions issued by the user to the terminal device. The game scene may include any one or more of the following elements: game virtual character elements, game sound elements, and the like. The game virtual character element is the virtual character controlled by the user, and the sound source state corresponding to the virtual character in the game scene is always the working state. A sound source has two states, a working state and a closed state: in the working state a sound source (point sound source or volume sound source) can emit its corresponding sound (even if the emitted volume is attenuated to 0 at times), while in the closed state it cannot emit sound at all. The sound-producing elements may include volume sound sources and point sound sources. A volume sound source may be a large element capable of producing sound, such as a mountain, an airship, a river, a forest, wind or rain. The sound emission position of a volume sound source is typically located at its physical center position (e.g., its center of gravity). A point sound source may be a small sound-producing element, such as a loudspeaker, a raindrop, a tree, a bird or a stone. The sound emission position of a point sound source coincides with the position of the point sound source. A volume sound source may be composed of a plurality of point sound sources. The game scene presented on the graphical user interface is the partial scene of the virtual world viewed from a specified viewing angle (e.g., the viewing angle of the eyes of the virtual character controlled by the user); what is displayed in the graphical user interface is the game scene that can be seen through the eyes of the virtual character.
The game scene is a virtual scene (e.g., a certain space in a game) used in a normal game playing process, a large number of virtual characters controlled by different users may exist in the scene, and the scene may also include a plurality of elements (e.g., mountains, airships, rivers, forests, winds, rains, trees, birds, stones, etc.) capable of producing sounds.
In step S101, the movement control instruction may be issued by the terminal device or by the cloud server, and the movement control instruction may control the virtual character to move in the game scene. The first target volume sound source may be a volume sound source from which emitted sound can be heard by the virtual character. Since the volume of the first target volume sound source is large, it is necessary to use a relatively representative point to represent the position of the first target volume sound source, and for a large element, the position of the physical center can represent the position of the first target volume sound source, wherein the position of the physical center can be any one of the following positions: the position of the center of mass, the position of the center of gravity, etc. Alternatively, the position of the first target volume sound source is characterized by a boundary position of the first target volume sound source.
In a specific implementation, when the boundary position of the first target volume sound source is used to represent the position of the first target volume sound source, because the volume of the first target volume sound source is large, in a certain situation, the virtual character can enter the first target volume sound source, and therefore, when the virtual character controlled by the movement control instruction moves in a game scene, whether the virtual character moves into the first target volume sound source or not should be judged in real time. Whether or not the virtual character enters the first target volume sound source needs to be determined based on the current position of the virtual character and the position of the first target volume sound source (the boundary of the first target volume sound source). The specific calculation can be determined according to whether the virtual character is positioned in the boundary of the target volume sound source.
Alternatively, when the physical center represents the position of the first target volume sound source, determining whether the virtual character has entered the first target volume sound source proceeds as follows: first, the distance between the virtual character and the first target volume sound source is calculated from the current position of the virtual character and the position of the first target volume sound source (the physical center of the first target volume sound source); then, if the calculated distance is small enough (for example, smaller than the distance between the boundary of the first target volume sound source and its physical center), the position of the virtual character is determined to be within the boundary of the first target volume sound source, and the virtual character is determined to be inside the first target volume sound source.
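The two inside-tests described above can be sketched as follows; the axis-aligned bounding box and the single boundary radius are illustrative assumptions about how the boundary and the physical center might be represented.
```python
from typing import Tuple

Vec3 = Tuple[float, float, float]


def inside_boundary(char_pos: Vec3, box_min: Vec3, box_max: Vec3) -> bool:
    """Boundary-based test: the character is inside if its position lies within the boundary."""
    return all(lo <= c <= hi for c, lo, hi in zip(char_pos, box_min, box_max))


def inside_by_center_distance(char_pos: Vec3, center: Vec3, boundary_radius: float) -> bool:
    """Center-based test: compare the character-to-center distance with the
    center-to-boundary distance (a single radius here for simplicity)."""
    dist_sq = sum((c - o) ** 2 for c, o in zip(char_pos, center))
    return dist_sq <= boundary_radius ** 2
```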
After performing step S101, it may be determined whether to perform step S102 or step S103 according to whether the determined virtual character is within the first target volume sound source.
In the above steps S102 and S103, the sound-emitting position of the first target volume sound source refers to a virtual spatial point in the game scene, that is, the volume sound source is attached to the point, and the sound-emitting position may be used to calculate the distance between the virtual character and the sound-emitting position, and calculate the sound level of the volume sound source heard by the player (the user operating the virtual character) according to the distance.
In specific implementation, when the virtual character is inside the first target volume sound source, the sound generating position of the first target volume sound source can be set on the virtual character (that is, the sound generating position of the first target volume sound source is set at the visual angle position of the virtual character), so that the sound generating position of the first target volume sound source moves along with the movement of the virtual character.
When the first target volume sound source produces sound, the point sound sources inside it can also produce sound. The sound emitted by a point sound source may be simpler, and the user controlling the virtual character can hear the sound emitted by each point sound source in addition to the sound emitted by the first target volume sound source; this is more realistic and gives the user a sense of being personally on the scene. Therefore, when the virtual character moves inside the first target volume sound source, all the point sound sources inside the first target volume sound source are adjusted to the working state so that they produce sound. Unlike the first target volume sound source, a point sound source does not need to move as the virtual character moves; the position of a point sound source is always fixed.
In a specific implementation, when the virtual character is outside the first target volume sound source, the sound of the first target volume sound source can cover the sound of the point sound sources, so the user controlling the virtual character may not be able to hear the point sound sources anyway; letting them produce sound would waste the computing resources used to control them. Therefore, in order to reduce this waste, the point sound sources inside the first target volume sound source can be adjusted to the closed state so that they do not produce sound.
In the embodiment of the application, the sound production position of the first target volume sound source is determined according to the position of the virtual character, and the working state of the point sound source inside the first target volume sound source is adjusted.
Because the volume of the first target volume sound source is large and its sound emission position is only a single point, when the virtual character is not inside the first target volume sound source the audio of the sound emitted by the first target volume sound source does not change with the position of the virtual character, and the sound emission position cannot be set on the virtual character. To make the sound heard by the user more realistic, the sound emission position of the first target volume sound source needs to be set on the first target volume sound source itself, so that the sound is emitted from the corresponding first target volume sound source and the user's experience is more authentic. The sound emitted by the first target volume sound source is the combined sound effect of all the point sound sources inside it, so to ensure authenticity the placement of the sound emission position of the first target volume sound source needs to be carefully considered. Further, the sound emission position of the first target volume sound source may be set as follows:
and step 1031, setting the sound production position of the first target volume sound source at the side of the first target volume sound source close to the virtual character.
In step 1031, when the virtual character is not inside the first target volume sound source, the user controlling the virtual character hears the sound of the first target volume sound source from a position that is not far from it. To make the sound of the first target volume sound source heard by the user more realistic, its sound emission position cannot be set on the side of the first target volume sound source away from the virtual character; it should be set on the side facing the virtual character.
For example, as shown in fig. 2, a game scene contains a forest and a virtual character, and the forest is divided into two areas: area A is the side of the forest far from the virtual character, and area B is the side of the forest near the virtual character. If the sound emission position of the forest is set in area A, the sound heard by the user controlling the virtual character appears to come from a distant position, yet in the picture seen by the user the virtual character is not far from the forest, so the sound heard by the user does not match the picture seen by the user. If the sound emission position of the forest is set in area B, the sound heard by the user comes from a closer position, and in the picture seen by the user the virtual character is also close to the forest, so the sound heard by the user matches the picture seen by the user. Therefore, when the virtual character is outside the forest, the sound emission position of the forest needs to be set in area B.
When the virtual character is located outside the first target volume sound source, the sound production position of the first target volume sound source needs to be set at a side close to the virtual character, and more specifically, step 1031 includes:
step 10311, setting the sound production position of the first target volume sound source at the target position of the first target volume sound source; wherein the target position is the position of the intersection point of the target connecting line and the boundary of the first target volume sound source; the target link is a link between the position of the physical center of the first target volume sound source and the position of the virtual character.
In the above step 10311, the target position is a position at which an intersection of the target link and the boundary of the first target volume sound source is located; the target link is a link between the position of the physical center of the first target volume sound source and the position of the virtual character.
In implementation, if the boundary of the volume sound source is a complete sphere (the sphere center of the sphere is the center of gravity of the volume sound source), the target position is the position closest to the virtual character in the first target volume sound source; if the first target volume sound source is irregularly shaped (as shown in fig. 3), the target position may be a position closer to the virtual character but not the closest position in the first target volume sound source.
When the boundary of the first target volume sound source is a complete sphere, the sound production position of the first target volume sound source is set at the position closest to the virtual character, and the sound of the first target volume sound source heard by the user is clearer.
When the first target volume sound source is irregularly shaped, if its sound emission position is set at the position in the first target volume sound source closest to the virtual character, the sound heard by the user appears louder and closer than the body of the first target volume sound source seen in the picture, which does not match the picture and reduces the user's sense of immersion. If instead the sound emission position is set at the target position (i.e., the position of the intersection of the target connecting line and the boundary of the first target volume sound source), the sound heard by the user is essentially emitted from the main body of the first target volume sound source and matches the picture seen by the user, so the sound is more realistic. Therefore, the sound emission position of the first target volume sound source can be set at the target position.
For example, as shown in fig. 3, there is a mountain and a virtual character in the game scene; the centroid position of the mountain is A, the position on the mountain closest to the virtual character is C, the position of the virtual character is B, and the intersection of the line connecting A and B with the boundary of the mountain is D. If the sound emission position of the mountain is set at C, the sound of the mountain heard by the user appears louder and closer than the mountain shown in the picture, which reduces the user's sense of immersion; whereas if the sound emission position of the mountain is set at D, the sound of the mountain heard by the user is emitted from the main body of the mountain, which essentially matches the picture seen by the user and enhances the sense of immersion.
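For a spherical boundary, the target position D of step 10311 can be computed directly: it is the point where the line from the physical center A to the character position B crosses the boundary. The sketch below assumes a spherical boundary; an irregular shape such as the mountain in fig. 3 would instead require an intersection test against the actual boundary geometry.
```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def emission_point_on_sphere(center: Vec3, radius: float, char_pos: Vec3) -> Vec3:
    """Return the boundary point on the line from the physical center towards the character."""
    direction = tuple(c - a for c, a in zip(char_pos, center))
    length = math.sqrt(sum(d * d for d in direction))
    if length == 0.0:
        return center  # character is exactly at the center; any boundary point would do
    return tuple(a + radius * d / length for a, d in zip(center, direction))


# Example: center A at the origin, radius 10, character B at (30, 0, 0)
# gives D = (10, 0, 0), the side of the volume sound source facing the character.
print(emission_point_on_sphere((0.0, 0.0, 0.0), 10.0, (30.0, 0.0, 0.0)))
```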
In a game scene there are usually a plurality of volume sound sources, and in order to reduce the consumption of computing resources in the system it is necessary to limit the number of volume sound sources in the working state. Therefore, before the step of controlling the virtual character to move in the game scene in response to the movement control instruction, the method includes:
step 1011, determining a candidate volume sound source, of the at least two candidate volume sound sources, of which the distance from the virtual character is smaller than a first preset distance according to the current position of the virtual character and the positions of the at least two candidate volume sound sources, as the first target volume sound source;
step 1012, the first target volume sound source is adjusted to an operating state.
In step 1011 above, a candidate volume sound source is any one of the volume sound sources located in the game scene, and the game scene includes at least two candidate volume sound sources. The first preset distance is a manually set value, and may be the maximum distance between the virtual character and a volume sound source at which the user can still hear the sound of that volume sound source.
In specific implementation, a first target volume sound source which can be heard by a user from a plurality of candidate volume sound sources can be screened out through a first preset distance.
In step 1012, after the first target volume sound source is determined, the first target volume sound source is adjusted to be in an operating state (i.e., a volume sound source that is not the first target volume sound source is adjusted to be in an off state), so that the number of the first target volume sound sources in the operating state is limited, and the calculation resources are saved. Of course, it is also necessary to determine whether the virtual character is located inside the first target volume sound source, and further determine whether to control whether the point sound source inside the first target volume sound source is sounded, and therefore, it is also necessary to determine whether the virtual character is located inside the first target volume sound source according to the distance between the first target volume sound source and the virtual character.
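Steps 1011-1012, together with the closing of the remaining candidates described later in step 104, can be sketched as follows; the candidate list of (name, position) pairs and the returned state map are illustrative assumptions about the data model.
```python
import math
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]


def update_candidate_volume_sources(char_pos: Vec3,
                                    candidates: List[Tuple[str, Vec3]],
                                    first_preset_distance: float) -> Dict[str, bool]:
    """Candidates closer than the first preset distance become first target volume
    sound sources (working state, True); all others are closed (False)."""
    return {name: math.dist(char_pos, pos) < first_preset_distance
            for name, pos in candidates}
```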
More specifically, in another implementation, before the step of controlling the virtual character to move in the game scene in response to the movement control instruction, the method includes:
step 1013, selecting a candidate volume sound source from the at least two candidate volume sound sources as a first target volume sound source by using a preset proximity algorithm according to the current position of the virtual character and the positions of the at least two candidate volume sound sources;
step 1014, adjusting the first target volume sound source to be in a working state.
In step 1013, the proximity algorithm may determine a specified number of volumetric sound sources adjacent to the virtual character. The specified number is specified manually.
In a specific implementation, the proximity algorithm is specifically kNN (K-nearest neighbor classification algorithm), where K is a specified number. And screening the first target volume sound sources through a kNN algorithm, namely screening K first target volume sound sources with the shortest distance to the virtual character from the candidate volume sound sources. Of course, other distance screening methods may be used to determine the first target volumetric sound source.
In step 1014, after the first target volume sound source is determined, the first target volume sound source is adjusted to the working state (i.e., volume sound sources that are not the first target volume sound source are adjusted to the closed state), so that the number of first target volume sound sources in the working state is limited and computing resources are saved. Of course, it is still necessary to determine, according to the distance between the first target volume sound source and the virtual character, whether the virtual character is inside the first target volume sound source, and then decide whether the point sound sources inside the first target volume sound source should produce sound.
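Steps 1013-1014 can be sketched as a plain nearest-K selection, which is the part of the kNN idea that matters here: sort the candidate volume sound sources by distance to the character and keep the K closest. The function and parameter names are illustrative.
```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


def k_nearest_volume_sources(char_pos: Vec3,
                             candidates: List[Tuple[str, Vec3]],
                             k: int) -> List[str]:
    """Return the names of the K candidate volume sound sources closest to the character;
    these are set to the working state and the remaining candidates to the closed state."""
    ranked = sorted(candidates, key=lambda item: math.dist(char_pos, item[1]))
    return [name for name, _ in ranked[:k]]
```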
In a game scene there are volume sound sources that should produce sound and volume sound sources that should not, so the scheme in the present application further includes:
and step 104, adjusting other candidate volume sound sources except the first target volume sound source in the at least two candidate volume sound sources to be in a closed state.
In the above step 104, a volume sound source other than the first target volume sound source among the at least two candidate volume sound sources may be a volume sound source having a distance from the virtual character greater than a first preset distance. The candidate volume sound sources other than the first target volume sound source may be adjusted to the off state in order to reduce the consumption of resources because the candidate volume sound sources other than the first target volume sound source are not audible to the user due to the attenuation of the volume even if they make a sound.
In a real scene, if a sound-producing object is far from a person, the person cannot hear the sound it produces. Therefore, in implementation, the candidate volume sound sources whose distance from the virtual character is greater than the first preset distance are screened out according to the distance between the virtual character and each candidate volume sound source; these are the candidate volume sound sources other than the first target volume sound source. Because they are far from the virtual character, the user cannot hear them even if they produce sound, so they can be adjusted to the closed state and the computing resources of the system are saved.
Similarly, in a game scene, when a point sound source is too far from the virtual character, the user controlling the virtual character cannot hear the sound it produces, so step S102 includes:
step 1021, if the virtual character is in the first target volume sound source and the first candidate point sound source is outside the first target volume sound source, adjusting the point sound source outside the first target volume sound source, which is located at a distance greater than a second preset distance from the position of the virtual character, to be in a closed state.
In step 1021, the second preset distance is the farthest distance that the virtual character can hear the sound generated by the point sound source when the virtual character is located inside the first target volume sound source.
In specific implementation, if the virtual character is in the first target volume sound source, the sound production position of the first target volume sound source is set on the virtual character, and a point sound source which is located outside the first target volume sound source and has a distance with the position of the virtual character greater than a second preset distance is adjusted to be in a closed state. And the point sound sources positioned outside the first target volume sound source are screened, and the point sound sources positioned outside the first target volume sound source which do not need to generate sound are adjusted to be in a closed state, so that the computing resources are saved.
When the virtual character is inside the first target volume sound source, some point sound sources inside the first target volume sound source may still be far away from the current position of the virtual character. Therefore, if the virtual character is inside the first target volume sound source, the step of adjusting the point sound sources inside the first target volume sound source to the working state includes:
step 1022, if the virtual character is in the first target volume sound source and the second candidate point sound source is located inside the first target volume sound source, adjusting the point sound source located inside the first target volume sound source, whose distance from the virtual character is greater than the second preset distance, to be in a closed state.
In step 1022, the second preset distance is a value set manually, and is the farthest distance that the virtual character can hear the sound generated by the point sound source when the virtual character is located inside the first target volume sound source.
In specific implementation, the user cannot hear the sound generated by the sound source located inside the first target volume sound source and having the distance to the virtual character greater than the second preset distance, so that even if the point sound source having the distance to the virtual character greater than the second preset distance is located inside the first target volume sound source, the point sound source can be adjusted to be in a closed state, and thus, the consumption of computing resources is reduced.
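When the virtual character is inside the first target volume sound source, steps 1021 and 1022 apply the same second preset distance to point sound sources outside and inside the volume respectively, so the net effect can be sketched in one pass; the `PointSource` fields below are illustrative assumptions.
```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]


@dataclass
class PointSource:
    position: Vec3
    active: bool = False


def update_point_sources_character_inside(char_pos: Vec3,
                                          point_sources: List[PointSource],
                                          second_preset_distance: float) -> None:
    """Steps 1021-1022: with the character inside the first target volume sound source,
    any point source farther than the second preset distance is closed, whether it lies
    inside or outside the volume; the nearer ones stay in the working state."""
    for ps in point_sources:
        ps.active = math.dist(char_pos, ps.position) <= second_preset_distance
```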
Similarly, when the point sound source is located outside the first target volume sound source, whether the point sound source operates or not may also be controlled according to the distance between the virtual character and the point sound source, that is, the method provided by the present application further includes:
step 106, if the virtual character is not in the first target volume sound source, calculating the distance between the position of the virtual character and the position of the point sound source outside the first target volume sound source according to the current position of the virtual character and the position of the point sound source outside the first target volume sound source;
step 107, if the distance between the position of the point sound source outside the first target volume sound source and the position of the virtual character is greater than the third preset distance, adjusting the point sound source outside the first target volume sound source, whose distance between the point sound source outside the first target volume sound source and the position of the virtual character is greater than the third preset distance, to be in a closed state.
In step 106, in a specific implementation, each point sound source located outside the first target volume sound source has corresponding position information, and a distance between the virtual character and each point sound source located outside the first target volume sound source can be calculated according to the position of the point sound source located outside the first target volume sound source and the position of the virtual character.
In step 107, the third preset distance is a manually set value, and is the farthest distance at which the virtual character can hear the sound of a point sound source when the virtual character is outside the first target volume sound source. When the virtual character is inside the first target volume sound source, many point sound sources around it may produce sound and the sound environment is noisy; when the virtual character is outside the first target volume sound source, fewer point sound sources surround it and the sound environment is simpler. Therefore, the farthest distance at which the user can hear a point sound source may differ between the case where the virtual character is inside the first target volume sound source and the case where it is outside (i.e., the third preset distance may be different from the second preset distance).
In a specific implementation, the point sound sources outside the first target volume sound source whose distance from the virtual character is greater than the third preset distance are screened out according to the distance between the virtual character and each such point sound source. These point sound sources are far from the virtual character and, as a general rule, the user cannot hear them; therefore, in order to save the computing resources of the system, they can be directly adjusted to the closed state.
In a real scene, for the same object, the closer the distance to the object, the clearer the sound emitted by the object is heard, and the farther the distance to the object, the more blurred the sound emitted by the object is heard. Therefore, the scheme further comprises:
step 201, calculating the distance between each point sound source and the virtual character according to the position of the virtual character and the position of the point sound source;
step 202, determining the attenuation coefficient of the corresponding point sound source according to the distance between each point sound source and the virtual character;
and step 203, controlling the sounding strategy of the corresponding point sound source according to the attenuation coefficient of each point sound source.
In step 201, as long as the position of the virtual character and the position of each point sound source are known, the distance between each point sound source and the virtual character can be calculated.
In the above step 202, the attenuation coefficient is a coefficient for controlling the volume level, and there is a correlation between the attenuation coefficient and the volume level, and the larger the attenuation coefficient, the smaller the volume, and the smaller the attenuation coefficient, the larger the volume. The attenuation coefficient is correlated with the distance, the farther the distance, the larger the attenuation coefficient, the more blurred the sound is heard, and the closer the distance, the smaller the attenuation coefficient, the sharper the sound is heard.
In step 203, the sound emission strategy mainly refers to adjusting the volume of the emitted sound.
In specific implementation, for each point sound source in a working state, the sound generation strategy of the point sound source may be controlled according to the attenuation coefficient corresponding to the point sound source, that is, the volume of the sound generated by the point sound source is adjusted according to the attenuation coefficient. The volume of the point sound source which is farther away from the virtual character is adjusted to be smaller, and the adjustment mode is closer to the real scene.
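Steps 201-203 relate distance, attenuation coefficient and played volume. The text does not specify the attenuation curve, so the linear mapping below is only an illustrative assumption of the "farther means larger coefficient, larger coefficient means lower volume" relationship.
```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]


def attenuation_coefficient(char_pos: Vec3, source_pos: Vec3, max_distance: float) -> float:
    """Steps 201-202: the coefficient grows with distance, clamped to [0, 1]."""
    return min(math.dist(char_pos, source_pos) / max_distance, 1.0)


def played_volume(base_volume: float, coefficient: float) -> float:
    """Step 203: a larger attenuation coefficient yields a smaller played volume."""
    return base_volume * (1.0 - coefficient)


# Example: a point source 25 units away, with a maximum audible distance of 100,
# plays at 75% of its base volume under this assumed linear model.
print(played_volume(1.0, attenuation_coefficient((0.0, 0.0, 0.0), (25.0, 0.0, 0.0), 100.0)))
```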
Similarly, when the virtual character is inside the first target volume sound source, the sound emission position of the first target volume sound source is attached directly to the virtual character and the sound it produces is not attenuated. When the virtual character is outside the first target volume sound source, then for the same volume sound source, the closer the virtual character is to it, the clearer the sound it emits is heard, and the farther away, the more blurred the sound is heard. Therefore, the scheme further includes:
step 301, if the virtual character is not in the first target volume sound source, calculating the distance between each first target volume sound source and the virtual character according to the position of the virtual character and the position of the first target volume sound source;
step 302, determining an attenuation coefficient of a first target volume sound source according to the distance between the first target volume sound source and the virtual character;
step 303, controlling the sound production strategy of the first target volume sound source according to the attenuation coefficient of the first target volume sound source.
In step 301, when the virtual character is located outside the first target volume sound source, the distance between each first target volume sound source and the virtual character can be calculated as long as there is the position of the virtual character and the position of the first target volume sound source (the sound emission position of the first target volume sound source).
In the above step 302, the attenuation coefficient is a coefficient for controlling the volume level, and there is a correlation between the attenuation coefficient and the volume level, and the larger the attenuation coefficient, the smaller the volume, and the smaller the attenuation coefficient, the larger the volume. The attenuation coefficient is correlated with the distance, the farther the distance, the larger the attenuation coefficient, the more blurred the sound is heard, and the closer the distance, the smaller the attenuation coefficient, the sharper the sound is heard.
In the above step 303, the sound emission strategy mainly refers to adjusting the volume of the emitted sound.
In specific implementation, for each first target volume sound source in the working state, the sound generation strategy of the first target volume sound source may be controlled according to the attenuation coefficient corresponding to the first target volume sound source, that is, the volume of the sound generated by the first target volume sound source is adjusted according to the attenuation coefficient. The volume of the sound source of the first target volume which is farther away from the virtual character is adjusted to be smaller, and the adjustment mode is closer to the real scene.
Even if the first target volume sound sources are screened using the preset distance, the user may still hear the sounds emitted by several first target volume sound sources at once, making what the user hears cluttered. In order to let the user hear clearer sound, step 1011 includes:
step 1013, determining, for each reference region, a candidate volume sound source, of which the distance to the virtual character is smaller than a first preset distance, in the candidate volume sound sources in the reference region, as a first target volume sound source, according to the position of the candidate volume sound source in the reference region and the current position of the virtual character; wherein, each reference area is respectively positioned in different directions of the virtual character.
In the above step 1013, the reference areas are respectively located in different directions of the virtual character, that is, the areas around the virtual character are divided into different reference areas, such as an east area, a south area, a north area, a west area, and the like, in different orientations around the virtual character.
In specific implementation, the candidate volume sound sources within the first preset distance are not regularly arranged around the virtual character: some candidate volume sound sources may be closer to the virtual character and some farther away, several candidate volume sound sources may be clustered in one reference area, and in another reference area even the candidate volume sound source closest to the virtual character may lie outside the first preset distance. To make the sound heard by the user more realistic, in each reference area the candidate volume sound source that is closest to the virtual character and within the first preset distance may be allowed to produce sound, and this candidate volume sound source is therefore taken as a first target volume sound source. In this way the user can hear the sound emitted by a candidate volume sound source in each direction, and the listening effect for the user is better and more realistic.
For example, as shown in fig. 6, there is a virtual character in the game scene and 12 volume sound sources in total, of which 9 are located within the preset distance, namely A1, A2, A3, A4, B1, B2, B3, B4 and B5. A1 is closest to the virtual character in the east area, A2 is closest in the south area, A3 is closest in the north area, and A4 is closest in the west area. To make the sound heard by the user more realistic, only A1, A2, A3 and A4 are used as the first target volume sound sources.
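A minimal sketch of the region-based screening described above, written in Python under several assumptions: the reference areas are taken to be four angular quadrants (east, north, west, south) around the virtual character in the horizontal plane, positions are 2D tuples, and the helper names are illustrative rather than taken from the disclosure.

```python
import math

def region_of(character_pos, source_pos):
    # Assign a candidate to one of four reference areas by its bearing from the
    # virtual character (the quadrant split is an illustrative choice).
    dx = source_pos[0] - character_pos[0]
    dz = source_pos[1] - character_pos[1]
    angle = math.degrees(math.atan2(dz, dx)) % 360
    if angle < 90:
        return "east"
    if angle < 180:
        return "north"
    if angle < 270:
        return "west"
    return "south"

def select_first_targets(character_pos, candidates, first_preset_distance):
    # For each reference area, keep the candidate volume sound source that is
    # closest to the virtual character and within the first preset distance.
    best = {}
    for name, pos in candidates.items():
        d = math.dist(character_pos, pos)
        if d >= first_preset_distance:
            continue
        region = region_of(character_pos, pos)
        if region not in best or d < best[region][1]:
            best[region] = (name, d)
    return {region: name for region, (name, _) in best.items()}

# Example: only the nearest in-range candidate in each direction is kept.
candidates = {"A1": (3, 1), "B1": (8, 2), "A2": (1, -4), "A3": (-2, 3), "A4": (-5, -1)}
print(select_first_targets((0, 0), candidates, first_preset_distance=10))
```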
In the above method, the sound production position of the first target volume sound source is determined according to the current position of the virtual character, and the working state of the point sound sources inside the first target volume sound source is adjusted accordingly. Of course, to make the sound heard by the user more realistic, when the virtual character is located outside the first target volume sound source, the sound production position of the first target volume sound source is set at the target position of the first target volume sound source, so that the sound heard by the user matches the picture the user sees. The user cannot hear the sound emitted by a point sound source that is more than the second preset distance away from the virtual character, so, in order to reduce the consumption of computing resources, such a point sound source is adjusted to a closed state. Similarly, when the virtual character is located outside the first target volume sound source, the user cannot hear the sound produced by a second target volume sound source whose distance from the virtual character is greater than the second preset distance, so that sound source may likewise be adjusted to a closed state in order to reduce the consumption of computing resources. Because distance determines the volume heard by the user, for a point sound source the attenuation coefficient may be determined according to the distance between the point sound source and the virtual character, and the sound production of the point sound source may be controlled according to that attenuation coefficient, so that the sound heard by the user is more realistic. Similarly, when the virtual character is located outside the first target volume sound source, the sound production strategy of the first target volume sound source may also be controlled by an attenuation coefficient. Of course, the volume sound source closest to the virtual character may also be screened out in each reference area as the first target volume sound source, so that the user can hear sound from each direction as much as possible, which improves realism.
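As an illustration of how the sound production position might be placed on the side of a volume sound source nearest the virtual character, the sketch below assumes a spherical volume sound source; the target position is taken where the line from the physical center to the virtual character crosses the boundary. The spherical shape and the helper names are assumptions for illustration only, not the disclosed implementation.

```python
import math

def emission_position(volume_center, radius, character_pos):
    # Place the sounding position at the intersection of the boundary with the
    # line connecting the volume sound source's physical center and the character.
    direction = [c - v for c, v in zip(character_pos, volume_center)]
    length = math.sqrt(sum(d * d for d in direction))
    if length <= radius:
        # The character is inside the volume sound source: attach the
        # sounding position to the character itself.
        return tuple(character_pos)
    unit = [d / length for d in direction]
    return tuple(v + radius * u for v, u in zip(volume_center, unit))

# Example: a spherical volume sound source of radius 5 centered at the origin.
print(emission_position((0.0, 0.0, 0.0), 5.0, (10.0, 0.0, 0.0)))  # (5.0, 0.0, 0.0)
```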
As shown in fig. 4, an embodiment of the present application provides a sound source control apparatus in which a part of a game scene including a virtual character, a volume sound source, and a point sound source is displayed through a graphical user interface provided by a terminal device; a minimal control-flow sketch follows the module descriptions below. The apparatus includes:
a moving module 501, configured to respond to a movement control instruction and control a virtual character to move in a game scene;
a judging module 502, configured to judge whether the virtual character is in the first target volume sound source according to the current position of the virtual character and the position of the first target volume sound source;
a first adjusting module 503, configured to set a sound generating position of the first target volume sound source on the virtual character if the virtual character is in the first target volume sound source, and adjust a point sound source located inside the first target volume sound source to be in a working state;
a second adjusting module 504, configured to adjust a point sound source located inside the first target volume sound source to an off state if the virtual character is not located inside the first target volume sound source.
Optionally, the second adjusting module 504 includes:
and the adjusting unit is used for setting the sound production position of the first target volume sound source at one side of the first target volume sound source close to the virtual character.
Optionally, the adjusting unit includes:
an adjustment subunit configured to set a sound emission position of the first target volume sound source at a target position of the first target volume sound source; wherein the target position is the position of the intersection point of the target connecting line and the boundary of the first target volume sound source; the target connecting line is a line between the position of the physical center of the first target volume sound source and the position of the virtual character.
Optionally, the apparatus further comprises:
the determining module is used for determining, from the at least two candidate volume sound sources, a candidate volume sound source whose distance from the virtual character is smaller than a first preset distance as a first target volume sound source, according to the current position of the virtual character and the positions of the at least two candidate volume sound sources;
and the third adjusting module is used for adjusting the first target volume sound source to be in a working state.
Optionally, the determining module includes:
a first determining unit, configured to determine, for each reference area, from the candidate volume sound sources in the reference area, a candidate volume sound source whose distance from the virtual character is smaller than a first preset distance as a first target volume sound source, according to the position of the candidate volume sound sources in the reference area and the current position of the virtual character; wherein the reference areas are respectively located in different directions of the virtual character.
Optionally, the apparatus further comprises:
the second determining unit is used for selecting one candidate volume sound source from the at least two candidate volume sound sources as a first target volume sound source by utilizing a preset proximity algorithm according to the current position of the virtual character and the positions of the at least two candidate volume sound sources;
and the fourth adjusting unit is used for adjusting the first target volume sound source to be in a working state.
Optionally, the apparatus further comprises:
a fifth adjusting unit, configured to adjust other candidate volume sound sources except the first target volume sound source in the at least two candidate volume sound sources to an off state.
Optionally, the apparatus further comprises:
and a module configured to adjust, if the virtual character is in the first target volume sound source, a point sound source that is located outside the first target volume sound source and whose distance from the virtual character is greater than a second preset distance, to a closed state.
Optionally, the first adjusting module 503 includes:
and a sixth adjusting unit, configured to adjust, if the virtual character is in the first target volume sound source, a point sound source that is located inside the first target volume sound source and whose distance from the virtual character is greater than a second preset distance, to an off state.
Optionally, the apparatus further comprises:
the first screening module is used for calculating the distance between the position of the virtual character and the position of a point sound source outside the first target volume sound source according to the current position of the virtual character and the position of the point sound source outside the first target volume sound source if the virtual character is not in the first target volume sound source;
and the seventh adjusting module is used for adjusting a point sound source that is located outside the first target volume sound source to a closed state if the distance between that point sound source and the virtual character is greater than a third preset distance.
Optionally, the apparatus further comprises:
the first calculation module is used for calculating the distance between each point sound source and the virtual character according to the position of the virtual character and the position of the point sound source;
the first coefficient determining module is used for determining the attenuation coefficient of the corresponding point sound source according to the distance between each point sound source and the virtual character;
and the first strategy determining module is used for controlling the sounding strategy of the corresponding point sound source according to the attenuation coefficient of each point sound source.
Optionally, the apparatus further comprises:
the second calculation module is used for calculating the distance between each first target volume sound source and the virtual character according to the position of the virtual character and the position of the first target volume sound source if the virtual character is not in the first target volume sound source;
the second coefficient determining module is used for determining the attenuation coefficient of the first target volume sound source according to the distance between the first target volume sound source and the virtual character;
and the second strategy determination module is used for controlling the sound production strategy of the first target volume sound source according to the attenuation coefficient of the first target volume sound source.
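The following is a minimal Python sketch of how the moving, judging, and adjusting modules described above could cooperate during a single update; the class and attribute names, the spherical inside/outside test, and the second-preset-distance filter are illustrative assumptions rather than the claimed apparatus.

```python
import math
from dataclasses import dataclass, field

@dataclass
class VolumeSoundSource:
    center: tuple
    radius: float
    point_sources: list = field(default_factory=list)  # positions of interior point sources
    active_points: list = field(default_factory=list)  # interior point sources in working state
    emission_pos: tuple = None

class SoundSourceController:
    def __init__(self, second_preset_distance=30.0):
        self.second_preset_distance = second_preset_distance

    def on_move(self, character_pos, target: VolumeSoundSource):
        # Judging module: is the character inside the first target volume sound source?
        inside = math.dist(character_pos, target.center) <= target.radius
        if inside:
            # First adjusting module: attach the sounding position to the character and
            # put interior point sources into the working state (here the interior point
            # sources beyond the second preset distance are additionally left off).
            target.emission_pos = character_pos
            target.active_points = [
                p for p in target.point_sources
                if math.dist(character_pos, p) <= self.second_preset_distance
            ]
        else:
            # Second adjusting module: interior point sources are switched to the closed state.
            target.active_points = []
        return target

# Example usage with one volume sound source containing two interior point sources.
src = VolumeSoundSource(center=(0, 0, 0), radius=10.0, point_sources=[(1, 0, 0), (4, 4, 0)])
SoundSourceController().on_move((2, 0, 0), src)
print(src.active_points)  # both points: the character is inside and both are in range
```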
Corresponding to the sound source control method in fig. 1, an embodiment of the present application further provides a computer device 1000. As shown in fig. 5, the device includes a memory 1001, a processor 1002, and a computer program stored in the memory 1001 and executable on the processor 1002, and the processor 1002 implements the sound source control method when executing the computer program.
Specifically, the memory 1001 and the processor 1002 can be general-purpose memory and processor, which are not limited in particular, and when the processor 1002 runs the computer program stored in the memory 1001, the sound source control method can be executed, which solves the problem of excessive power consumption of the system in the prior art.
Corresponding to the sound source control method in fig. 1, an embodiment of the present application further provides a computer-readable storage medium having a computer program stored thereon, where the computer program is executed by a processor to perform the steps of the sound source control method.
Specifically, the storage medium may be a general-purpose storage medium such as a removable disk or a hard disk. When the computer program on the storage medium is run, the sound source control method can be executed, which solves the problem of excessive system energy consumption in the prior art: the sound production position of the first target volume sound source is determined according to the position of the virtual character, and the working state of the point sound sources inside the first target volume sound source is adjusted, so the point sound sources do not need to produce sound in real time and the consumption of computing resources in the system is reduced.
In the embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions for some of the technical features, within the technical scope disclosed in the present application; such modifications, changes or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. A method of sound source control, wherein a part of a game scene including a virtual character, a volume sound source, and a point sound source is displayed through a graphical user interface provided by a terminal device, comprising:
responding to a movement control instruction, controlling the virtual character to move in the game scene, and judging whether the virtual character is in a first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source;
if the virtual character is in the first target volume sound source, setting the sounding position of the first target volume sound source on the virtual character, and adjusting a point sound source in the first target volume sound source to be in a working state;
and if the virtual character is not in the first target volume sound source, adjusting the point sound source in the first target volume sound source to be in a closed state.
2. The method of claim 1, wherein if the avatar is not within the first target volumetric sound source, the method further comprises:
and setting the sound production position of the first target volume sound source at one side of the first target volume sound source close to the virtual character.
3. The method according to claim 2, wherein said setting an utterance position of the first target volume sound source at a side of the first target volume sound source close to the virtual character comprises:
setting an utterance position of the first target volume sound source at a target position of the first target volume sound source; wherein the target position is a position at which an intersection of a target link and the first target volume sound source boundary is located; the target link is a link between a position of a physical center of the first target volume sound source and a position of the virtual character.
4. The method of claim 1, wherein prior to controlling the virtual character to move within the game scene in response to a movement control instruction, the method further comprises:
determining a candidate volume sound source, of the at least two candidate volume sound sources, of which the distance from the virtual character is smaller than a first preset distance according to the current position of the virtual character and the positions of the at least two candidate volume sound sources, and taking the candidate volume sound source as the first target volume sound source;
and adjusting the first target volume sound source to be in a working state.
5. The method according to claim 4, wherein the determining, as the first target volume sound source, a candidate volume sound source whose distance from the virtual character is less than a first preset distance from among the at least two candidate volume sound sources according to the current position of the virtual character and the positions of the at least two candidate volume sound sources comprises:
for each reference area, determining a candidate volume sound source, of the candidate volume sound sources in the reference area, of which the distance to the virtual character is smaller than a first preset distance according to the position of the candidate volume sound source in the reference area and the current position of the virtual character, as the first target volume sound source; wherein each of the reference areas is located in a different direction of the virtual character.
6. The method of claim 1, wherein prior to controlling the virtual character to move within the game scene in response to a movement control instruction, the method further comprises:
selecting a candidate volume sound source from the at least two candidate volume sound sources as the first target volume sound source by using a preset proximity algorithm according to the current position of the virtual character and the positions of the at least two candidate volume sound sources;
and adjusting the first target volume sound source to be in a working state.
7. The method of claim 4, further comprising:
adjusting other candidate volume sound sources except the first target volume sound source in the at least two candidate volume sound sources to be in an off state.
8. The method of claim 1, further comprising:
and if the virtual character is in the first target volume sound source, adjusting a point sound source which is located outside the first target volume sound source and has a distance with the position of the virtual character larger than a second preset distance to be in a closed state.
9. The method of claim 1, wherein adjusting the point sound source located inside the first target volume sound source to be in an operating state if the virtual character is inside the first target volume sound source comprises:
and if the virtual character is in the first target volume sound source, adjusting the point sound source which is positioned in the first target volume sound source and has a distance with the position of the virtual character larger than a second preset distance to be in a closed state.
10. The method of claim 1, further comprising:
if the virtual character is not in the first target volume sound source, calculating the distance between the position of the virtual character and the position of a point sound source outside the first target volume sound source according to the current position of the virtual character and the position of the point sound source outside the first target volume sound source;
and if the distance between the position of the point sound source outside the first target volume sound source and the position of the virtual character is greater than a third preset distance, adjusting the point sound source outside the first target volume sound source to be in a closed state.
11. The method of claim 1, further comprising:
calculating the distance between each point sound source and the virtual character according to the position of the virtual character and the position of the point sound source;
determining the attenuation coefficient of the corresponding point sound source according to the distance between each point sound source and the virtual character;
and controlling the sounding strategy of the corresponding point sound source according to the attenuation coefficient of each point sound source.
12. The method of claim 1, further comprising:
if the virtual character is not in the first target volume sound source, calculating the distance between the first target volume sound source and the virtual character according to the position of the virtual character and the position of the first target volume sound source;
determining an attenuation coefficient of the first target volume sound source according to the distance between the first target volume sound source and the virtual character;
and controlling the sound production strategy of the first target volume sound source according to the attenuation coefficient of the first target volume sound source.
13. An apparatus for sound source control, wherein a part of a game scene including a virtual character, a volume sound source, and a point sound source is displayed through a graphical user interface provided by a terminal device, comprising:
the moving module is used for responding to a moving control instruction and controlling the virtual character to move in the game scene;
the judging module is used for judging whether the virtual character is in the first target volume sound source or not according to the current position of the virtual character and the position of the first target volume sound source;
the first adjusting module is used for setting the sounding position of the first target volume sound source on the virtual character and adjusting a point sound source positioned in the first target volume sound source to be in a working state if the virtual character is in the first target volume sound source;
and the second adjusting module is used for adjusting the point sound source positioned in the first target volume sound source to be in a closed state if the virtual character is not in the first target volume sound source.
14. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any one of claims 1-12 are implemented by the processor when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of the claims 1-12.