CN110538456B - Sound source setting method, device and equipment in virtual environment and storage medium - Google Patents

Sound source setting method, device and equipment in virtual environment and storage medium

Info

Publication number
CN110538456B
CN110538456B
Authority
CN
China
Prior art keywords
environment
sound source
boundary
character
target
Prior art date
Legal status
Active
Application number
CN201910849482.0A
Other languages
Chinese (zh)
Other versions
CN110538456A
Inventor
李裕逵
郑金鑫
何晓平
Current Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN201910849482.0A
Publication of CN110538456A
Application granted
Publication of CN110538456B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 Controlling the output signals based on the game progress
    • A63F 13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F 13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F 2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60 Methods for processing data by generating or executing the game program
    • A63F 2300/6063 Methods for processing data by generating or executing the game program for sound processing
    • A63F 2300/6081 Methods for processing data by generating or executing the game program for sound processing generating an output signal, e.g. under timing constraints, for spatialization

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Stereophonic System (AREA)

Abstract

The application provides a sound source setting method, device and equipment in a virtual environment, and a storage medium. The method comprises the following steps: acquiring a target environment boundary and a character position; judging whether the character is in the target environment based on the target environment boundary and the character position; if the character is in the target environment, setting, for the virtual sound source, the same position data as the character; if the character is not in the target environment, calculating the distance between the character and the target environment boundary, and setting position data for the virtual sound source based on the calculated distance; and setting the virtual sound source according to the position data of the virtual sound source. With the sound source setting method, device, equipment and storage medium in the virtual environment, the continuity of the virtual sound source's sound can be effectively improved, the computational load on the system can be effectively reduced, and game performance can be greatly improved.

Description

Sound source setting method, device and equipment in virtual environment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a computer readable storage medium for setting a sound source in a virtual environment.
Background
In games, sound is an indispensable element; a rich in-game sound configuration can narrow the gap between the virtual world and the real world. Configuring different types of sound for different scenes can increase the realism, detail and richness of a game scene and its fit with the real world.
Currently, scene-related sound design in games typically sets a plurality of virtual sound sources at corresponding positions in the game scene, each virtual sound source matched to the sound to be provided at its position. However, this approach requires designing multiple sound sources and considering the spatial distribution among them. When a character moves, the relative orientation of the multiple sound sources with respect to the character changes, so discontinuous changes in sound easily occur. In addition, setting up multiple sound sources consumes considerable system computation and degrades game performance.
Disclosure of Invention
In view of the foregoing, embodiments of the present application provide a method, an apparatus, a device, and a computer-readable storage medium for setting a sound source in a virtual environment, so as to solve the technical drawbacks in the prior art.
The embodiment of the application discloses a sound source setting method in a virtual environment, which comprises the following steps:
acquiring a target environment boundary and a character position;
judging whether the character is in the target environment or not based on the target environment boundary and the character position;
if the character is in the target environment, setting, for the virtual sound source, the same position data as the character;
if the character is not in the target environment, calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance;
and setting the virtual sound source according to the position data of the virtual sound source.
Further, before the target environment boundary and the character position are acquired, the method further comprises:
establishing a target environment model;
the acquiring the target environment boundary includes: acquiring a target environment boundary according to the target environment model.
Further, the acquiring the target environment boundary and the character position includes:
and acquiring the target environment boundary and the character position periodically according to a preset time interval.
Further, the calculating a distance between the character and the target environment boundary, and setting position data for the virtual sound source based on the calculated distance, includes:
calculating the shortest distance between the character and the target environment boundary;
based on the calculated shortest distance, obtaining the target position closest to the character on the target environment boundary;
and setting the target position as the position data of the virtual sound source.
Further, the setting the target position as the position data of the virtual sound source includes:
in the case where the target position includes two or more positions, position data identical to any one of the target positions is set for the virtual sound source.
Further, the setting the target position as the position data of the virtual sound source includes:
under the condition that the target position comprises two or more positions, converting the target environment boundary to obtain a conversion environment boundary;
and calculating the distance between the character and the boundary of the conversion environment, and setting position data for the virtual sound source based on the calculated distance.
Further, the calculating a distance between the character and the transition environment boundary, and setting position data for the virtual sound source based on the calculated distance, includes:
calculating the shortest distance between the character and the conversion environment boundary;
based on the calculated shortest distance, obtaining the position closest to the character on the conversion environment boundary;
and setting the position closest to the character on the conversion environment boundary as the position data of the virtual sound source.
The embodiment of the application discloses a sound source setting device in a virtual environment, comprising:
an acquisition module configured to acquire a target environment boundary and a character position;
a judging module configured to judge whether a character is in a target environment based on the target environment boundary and the character position;
if the character is in the target environment, setting, for the virtual sound source, the same position data as the character;
if the character is not in the target environment, calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance;
and the setting module is configured to set the virtual sound source according to the position data of the virtual sound source.
Optionally, the sound source setting device in the virtual environment further includes:
the building module is configured to build a target environment model;
the acquisition module is specifically configured to: acquire a target environment boundary according to the target environment model.
Optionally, the acquisition module is further configured to:
and acquiring the target environment boundary and the character position periodically according to a preset time interval.
Optionally, the computing module is further configured to:
calculating the shortest distance between the character and the target environment boundary;
based on the calculated shortest distance, obtaining the target position closest to the character on the target environment boundary;
and setting the target position as the position data of the virtual sound source.
Optionally, the computing module is further configured to:
in the case where the target position includes two or more positions, position data identical to any one of the target positions is set for the virtual sound source.
Optionally, the computing module is further configured to:
under the condition that the target position comprises two or more positions, converting the target environment boundary to obtain a conversion environment boundary;
and calculating the distance between the character and the boundary of the conversion environment, and setting position data for the virtual sound source based on the calculated distance.
Optionally, the computing module is further configured to:
calculating the shortest distance between the character and the conversion environment boundary;
based on the calculated shortest distance, obtaining the position closest to the character on the conversion environment boundary;
and setting the position closest to the character on the conversion environment boundary as the position data of the virtual sound source.
A computing device, comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the sound source setting method in the virtual environment.
A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the sound source setting method in the virtual environment.
According to the sound source setting method, device, equipment and computer readable storage medium in the virtual environment, a plurality of virtual sound sources in a single scene are combined into one virtual sound source whose position changes as the character position changes; therefore the continuity of the virtual sound source's sound can be effectively improved, the computational load on the system is effectively reduced, and game performance is greatly improved.
Drawings
FIG. 1 is a schematic structural diagram of a computing device according to an embodiment of the present application;
fig. 2 is a flowchart of a method for setting a sound source in a virtual environment according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for setting a sound source in a virtual environment according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a sound source setting device in a virtual environment according to an embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, this application can be embodied in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; therefore, the application is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that, although the terms first, second, etc. may be used in one or more embodiments of this specification to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of one or more embodiments of the present description, a first may also be referred to as a second, and similarly, a second may also be referred to as a first. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the present application, a method, an apparatus, a device, and a storage medium for setting a sound source in a virtual environment are provided, and the following embodiments are described in detail one by one.
Fig. 1 is a block diagram illustrating a configuration of a computing device 100 according to an embodiment of the present description. The components of the computing device 100 include, but are not limited to, a memory 110 and a processor 120. Processor 120 is coupled to memory 110 via bus 130 and database 150 is used to store data.
Computing device 100 also includes an access device 140 that enables computing device 100 to communicate via one or more networks 160. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the Internet. The access device 140 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE 802.11 Wireless Local Area Network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 100, as well as other components not shown in FIG. 1, may also be connected to each other, such as by a bus. It should be understood that the block diagram of the computing device shown in FIG. 1 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 100 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 100 may also be a mobile or stationary server.
Wherein the processor 120 may perform the steps of the method shown in fig. 2.
As shown in fig. 2, fig. 2 shows a flowchart of a method for setting a sound source in a virtual environment according to an embodiment of the present application, including steps S210 to S230.
Step S210: and acquiring the boundary of the target environment and the position of the role.
In an embodiment of the present application, the target environment may be any of various environments in the virtual scene, such as a snow mountain, a lake, an ocean, a street or an amusement park, which is not limited in the present application. The target environment may be determined in various ways. For example, it may be judged according to whether the distance between the character and the center of an environment area is greater than a preset threshold: if the distance is greater than the preset threshold, the environment area does not belong to the target environment; if the distance is less than or equal to the preset threshold, the environment area belongs to the target environment. Alternatively, since the scene shown in the game interface changes continuously as the character moves, the environment areas displayed in the game interface at the moment, or in the frame, at which the character position is acquired may be taken as the target environment. The target environment may also be determined by other methods, which is not limited in the present application.
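As an illustration only (not a limitation of the embodiment), the threshold-based determination described above can be sketched in code. The environment-area structure, the helper names and the threshold value below are assumptions made for the example:

```python
import math

def candidate_target_environments(character_pos, environment_areas, threshold):
    """Treat an environment area as a target environment when the distance
    between the character and the area's center is at most the preset threshold.

    `environment_areas` is assumed to be a list of dicts with a 'center' key
    holding an (x, y) tuple; `character_pos` is an (x, y) tuple.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    return [area for area in environment_areas
            if dist(character_pos, area["center"]) <= threshold]
```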
The boundary is a marking line that delimits an environment area, and may be the outer contour of an environment area of any shape. The target environment boundary may be the outer boundary of the virtual environment model concerned, determined in connection with the character position. The character position may be the location and orientation at which the character is situated.
For example, suppose three environment areas, a forest, a waterfall and a street, are set in the game. If the distances between the character and the three environment areas are all less than or equal to the preset threshold, the forest, the waterfall and the street all belong to the target environment, and their boundaries are all target environment boundaries. If the character is located between the forest environment area and the waterfall environment area, and the distance between the character and the street environment area is greater than the preset threshold, the street environment does not belong to the target environment and the boundary of the street environment area is not a target environment boundary; when the character is at the midpoint between the forest environment area and the waterfall environment area, both the forest environment and the waterfall environment belong to the target environment, and the boundaries of both areas are target environment boundaries.
In practical application, the target environment boundary and the character position may be acquired periodically according to a preset time interval.
The preset time interval may be determined according to actual requirements. For example, the target environment boundary and the character position may be acquired n times per second (n being an integer greater than or equal to 1), or once every x frames (x being an integer greater than or equal to 1), which is not limited in the present application.
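A minimal sketch of the once-every-x-frames acquisition mentioned above is given below; the interval value and the two callback parameters are hypothetical placeholders for whatever the game engine actually provides:

```python
ACQUIRE_EVERY_X_FRAMES = 5  # example value for x; chosen per actual requirements

def maybe_acquire(frame_index, get_character_position, get_target_boundaries):
    """Acquire the character position and target environment boundaries once
    every ACQUIRE_EVERY_X_FRAMES frames; on other frames return None so the
    caller keeps using the previously acquired data."""
    if frame_index % ACQUIRE_EVERY_X_FRAMES == 0:
        return get_character_position(), get_target_boundaries()
    return None
```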
Step S220: based on the target environment boundary and the character position, it is determined whether the character is in the target environment, if so, step S221 is executed, and if not, step S222 is executed.
Step S221: setting, for the virtual sound source, the same position data as the character. After the position data is set, execution continues with step S230.
Step S222: calculating the distance between the character and the target environment boundary, and setting position data for the virtual sound source based on the calculated distance. After the position data is set, execution continues with step S230.
There are various methods for judging whether the character is in the target environment. For example, a coordinate system may be established based on the target environment, with a center point of the target environment as the coordinate origin, and the target environment boundary and the character position expressed in coordinate form; the distance between the target environment boundary and the character position is then calculated. If that distance is greater than a preset value, the character is judged to be outside the target environment; if it is less than the preset value, the character is judged to be inside the target environment; and if it is equal to the preset value, the character is judged to be on the target environment boundary. Whether the character is in the target environment may also be judged by other methods, which is not limited in the present application.
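Since the paragraph above leaves the concrete inside/outside test open ("other methods" are allowed), one common choice for a polygonal target environment boundary is the even-odd ray-crossing test sketched below; this is an assumed illustration rather than the judgment rule prescribed by the embodiment:

```python
def is_inside(point, boundary):
    """Even-odd (ray-crossing) test: True if `point` lies inside the closed
    polygon `boundary`, given as an ordered list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        # Count edges whose crossing with a horizontal ray from `point` lies to its right.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```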
If the character is in the target environment, the virtual sound source is given position data identical to that of the character, and if the character moves within the target environment, the virtual sound source moves along with it. If the character moves from inside the target environment to outside it, the virtual sound source still follows the character while the character remains inside; once the character has moved outside the target environment, the distance between the character and the target environment is calculated, and position data is set for the virtual sound source based on that distance.
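Combining the two branches of this paragraph, a per-update placement routine could look like the sketch below; `is_inside` and `closest_point_on_boundary` are the illustrative helpers sketched elsewhere in this description, and the single polygonal boundary in the signature is an assumption made for brevity:

```python
def update_virtual_source(character_pos, boundary):
    """Return the position data to set for the virtual sound source."""
    if is_inside(character_pos, boundary):
        # Inside the target environment: the source follows the character exactly.
        return character_pos
    # Outside: place the source at the boundary point closest to the character.
    target_position, _shortest_distance = closest_point_on_boundary(character_pos, boundary)
    return target_position
```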
There are various methods for calculating the distance between the character and the target environment boundary. For example, any ray that starts at the character position and intersects the target environment boundary may be taken, the intersection point of that ray with the target environment boundary obtained, the straight-line distance between the character position and the intersection point calculated, and that straight-line distance used as the distance between the character and the target environment. The distance between the character and the target environment may also be calculated by other methods, which is not limited in the present application.
Further, the shortest distance between the character and the target environment boundary is calculated.
Based on the calculated shortest distance, the target position closest to the character on the target environment boundary is obtained.
The target position is then set as the position data of the virtual sound source.
There are various methods for calculating the shortest distance between the character and the target environment boundary. For example, a plurality of rays intersecting the target environment boundary may be generated with the character position as their starting point, the intersection points of these rays with the target environment boundary obtained, and the straight-line distance between the character position and each intersection point calculated; the smallest of the calculated distances is taken as the shortest distance between the character and the target environment, and the point on the target environment boundary at that shortest distance from the character is taken as the target position. The shortest distance between the character and the target environment may also be calculated by other methods, which is not limited in the present application.
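The ray-sampling approach just described is one option; an equivalent geometric sketch, offered purely as an assumed illustration, projects the character position onto each edge of a polygonal boundary and keeps the nearest projection, which yields both the shortest distance and the target position at once:

```python
import math

def closest_point_on_boundary(point, boundary):
    """Return (target_position, shortest_distance) for a closed polygonal
    boundary given as an ordered list of (x, y) vertices."""
    px, py = point
    best_point, best_dist = None, float("inf")
    n = len(boundary)
    for i in range(n):
        ax, ay = boundary[i]
        bx, by = boundary[(i + 1) % n]
        # Project the character position onto segment AB, clamped to the segment.
        abx, aby = bx - ax, by - ay
        length_sq = abx * abx + aby * aby
        if length_sq == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / length_sq))
        cx, cy = ax + t * abx, ay + t * aby
        d = math.hypot(px - cx, py - cy)
        if d < best_dist:
            best_point, best_dist = (cx, cy), d
    return best_point, best_dist
```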
The target position is the position of the nearest point to the character on the boundary of the target environment, and the target position moves along with the movement of the character and changes along with the change of the position of the character.
In practical applications, in the case where the target position includes two or more positions, the virtual sound source may be set with the same position data as any one of the target positions.
Specifically, if there are two or more points on the target environment boundary that are closest to the character, and the distances between these points and the character are equal, the positions of all of these points are target positions. Because the character is equidistant from these target positions, the virtual sound source may be given the same position data as any one of them; in this case the choice of target position does not affect the sound effect finally presented.
For example, suppose the target environment boundary has a concave shape and there is an implicit center line that divides the boundary into two equal halves. When the character moves onto that center line within the concave region, there are two or more points on the target environment boundary that are closest to the character; the positions of all of these points are target positions, any one of them is selected, and the virtual sound source is given the same position data as the selected target position.
Step S230: and setting the virtual sound source according to the position data of the virtual sound source.
Specifically, the virtual sound source may be an audio resource in the virtual game scene. Its sound may be of many types, such as wind, rain or thunder, or may be a mix of wind, rain, thunder and other sounds, determined according to the target environment and the game scene.
For example, if the target environment is a lawn, the sound of the virtual sound source may include insect chirping, sounds of play, and the like: if the specific game scene is a barren lawn, the sound of the virtual sound source may be insect chirping, and if the specific game scene is a lawn suitable for outings and picnics, the sound of the virtual sound source may be a mix of insect chirping, laughter, sounds of play and speech. If the target environment is an ocean, the virtual sound source may include wind, waves and the like: if the specific game scene is an ocean on a clear day, the sound of the virtual sound source may be waves, ship horns and the like, or a mix including waves and ship horns, and if the specific game scene is an ocean in a thunderstorm, the sound of the virtual sound source may be a mix including thunder, wind, waves and the like.
If the target environment is a railway station, the sound of the virtual sound source may be speech, train whistles, station announcements and the like. When the specific game scene is a station at which a train has just arrived, the sound of the virtual sound source may include vendors' calls, crowd chatter and station announcements, and the volume ratio of each of these in the virtual sound source's sound may be determined according to actual requirements; for example, the station announcements may account for 40% of the volume and the vendors' calls and crowd chatter for 30% each, which is not limited in the present application.
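Purely as an illustration of the volume-ratio idea in this paragraph (the component names and the 40%/30%/30% split are the example values above, not required ones), the per-environment mix could be kept as a simple configuration:

```python
# Hypothetical sound mix for the railway-station target environment; ratios sum to 1.0.
STATION_SOUND_MIX = {
    "station_announcements": 0.40,
    "vendor_calls": 0.30,
    "crowd_chatter": 0.30,
}

def component_volumes(master_volume, mix=STATION_SOUND_MIX):
    """Return the playback volume of each component of the virtual sound source."""
    return {name: master_volume * ratio for name, ratio in mix.items()}
```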
The position data of the virtual sound source may include one or more sets, and there may likewise be one or more virtual sound sources, which may be determined as the case requires and is not limited in the present application.
The above embodiments are further described below with reference to specific examples.
For example, assume that when a character enters the game scene, an initial character position is obtained, and that with the initial character position at the center of the game interface, the game interface also contains four environment areas: a brook, a pasture, a bamboo forest and a flower field. The brook, pasture, bamboo forest and flower field environment areas are all target environment areas, and their boundaries are all target environment boundaries.
Assume that a two-dimensional coordinate system is established with the initial character position as the origin, that each extracted target environment boundary and the character position in each frame are expressed in coordinate form, and that the target environment boundaries and the character position are acquired once per frame. Suppose the character position acquired in the 10th frame is (x_10, y_10) and (x_10, y_10) is within the boundary of the pasture environment area; then the position data of the virtual sound source at the 10th frame is (x_10, y_10), and the virtual sound source plays the cattle and sheep sounds of the pasture environment area. Suppose the character position acquired in the 12th frame is (x_12, y_12) and (x_12, y_12) is still within the boundary of the pasture environment area; then the position data of the virtual sound source at the 12th frame is updated to (x_12, y_12), and the virtual sound source still plays the cattle and sheep sounds of the pasture environment area.
Suppose the character position acquired in the 15th frame is (x_15, y_15) and (x_15, y_15) is in none of the target environments. The shortest distance between the character position (x_15, y_15) and each target environment is calculated; if the distance between the character position (x_15, y_15) and point a (x_a, y_a) on the boundary of the pasture environment area is the shortest, the position data of the virtual sound source is updated to (x_a, y_a), and the virtual sound source plays the cattle and sheep sounds of the pasture environment area. Suppose the character position acquired in the 18th frame is (x_18, y_18) and (x_18, y_18) is in none of the target environments; if the distance between the character position (x_18, y_18) and point b (x_b, y_b) on the boundary of the pasture environment area is the shortest, the position data of the virtual sound source is updated to (x_b, y_b), and the virtual sound source still plays the cattle and sheep sounds of the pasture environment area.
Suppose the character position acquired in the 22nd frame is (x_22, y_22) and (x_22, y_22) is in none of the target environments. The shortest distance between the character position (x_22, y_22) and each target environment is calculated; if the distances from the character position (x_22, y_22) to point c (x_c, y_c) and to point d (x_d, y_d) on the boundary of the brook environment area are the shortest and are equal to each other, the position data of the virtual sound source may be updated to either (x_c, y_c) or (x_d, y_d), and the virtual sound source plays the brook sound of the brook environment area.
Suppose the character position acquired in the 30th frame is (x_30, y_30) and (x_30, y_30) is in none of the target environments. The shortest distance between the character position (x_30, y_30) and each target environment is calculated; if the distances from the character position (x_30, y_30) to point e (x_e, y_e) on the boundary of the brook environment area and to point f (x_f, y_f) on the boundary of the flower field environment area are the shortest and are equal to each other, virtual sound sources are set at point e (x_e, y_e) and point f (x_f, y_f) respectively and played simultaneously: the virtual sound source at point e (x_e, y_e) plays the brook sound of the brook environment area, and the virtual sound source at point f (x_f, y_f) plays the birdsong of the flower field environment area.
Suppose the character position acquired in the 36th frame is (x_36, y_36) and (x_36, y_36) is in none of the target environments. The shortest distance between the character position (x_36, y_36) and each target environment is calculated; if the distances from the character position (x_36, y_36) to point g (x_g, y_g) and point h (x_h, y_h) on the boundary of the bamboo forest environment area and to point i (x_i, y_i) on the boundary of the flower field environment area are the shortest and are equal to one another, a virtual sound source is set at point g (x_g, y_g) or point h (x_h, y_h), another virtual sound source is set at point i (x_i, y_i), and the two virtual sound sources are played simultaneously: the virtual sound source at point g (x_g, y_g) or point h (x_h, y_h) plays the wind sound of the bamboo forest environment area, and the virtual sound source at point i (x_i, y_i) plays the birdsong of the flower field environment area.
According to the sound source setting method in the virtual environment described above, when the character is within the target environment area, the position of the virtual sound source is the same as the position of the character and moves as the character moves; when the character is outside the target environment, the position of the virtual sound source is the point on the target environment boundary closest to the character position and likewise moves as the character moves. This effectively ensures the continuity and clarity of the virtual sound source's sound, avoids the sound becoming intermittent or unclear because the character has moved too far from the virtual sound source, improves the detail and realism of the virtual sound source in the scene, and enhances the player's experience.
As shown in fig. 3, fig. 3 shows a schematic flowchart of a sound source setting method in a virtual environment according to an embodiment of the present application, including steps S310 to S340.
Step S310: and establishing a target environment model.
The target environment may be any of various environment areas in the virtual scene, such as a forest, a lake, an ocean, a street, a market or a bar, which is not limited in the present application. Various tools, such as 3ds Max, may be used to build the target environment model, which is not limited in the present application.
Step S320: and acquiring a target environment boundary and a role position according to the target environment model.
The target environment boundary may be acquired from the target environment model by various methods, such as edge extraction, which is not limited in the present application.
Step S330: and judging whether the character is in the target environment or not based on the target environment boundary and the character position. If yes, go to step S331, otherwise go to step S332.
Step S331: setting, for the virtual sound source, the same position data as the character. After the position data is set, execution continues with step S340.
Step S332: calculating the distance between the character and the target environment boundary, and setting position data for the virtual sound source based on the calculated distance. After the position data is set, execution continues with step S340.
Further, the shortest distance between the character and the target environment boundary is calculated.
Based on the calculated shortest distance, the target position closest to the character on the target environment boundary is obtained.
The target position is then set as the position data of the virtual sound source.
If the target position includes two or more positions, the target environment boundary may be converted to obtain a converted environment boundary; and calculating the distance between the character and the boundary of the conversion environment, and setting position data for the virtual sound source based on the calculated distance.
Specifically, if there exists a ray that passes through the target environment model and produces more than two intersection points with the target environment boundary of the model, the target environment model is a concave (female) model; if every ray passing through the target environment model produces only two intersection points with the target environment boundary of the model, the target environment model is a convex (male) model.
In the case where the target position includes two or more positions, the target environment model is a female model, and when determining the position data of the virtual sound source, a unique target position can be obtained by converting between the female model and the male model. Specifically, the target environment model is converted into a male model, the boundary of the converted model is extracted by edge extraction or a similar method to obtain the conversion environment boundary, the distance between the character and the conversion environment boundary is calculated, and position data is set for the virtual sound source accordingly. The method of converting a female model into a male model is not limited here. The distance between the character and the conversion environment boundary may be calculated by various methods; reference may be made to step S220 above, and details are not repeated here.
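The female-to-male (concave-to-convex) conversion itself is left open above. One assumed way to realize it for a polygonal boundary is to take the convex hull of the boundary vertices, as in the monotone-chain sketch below, and then reuse the same closest-point computation on the resulting conversion environment boundary:

```python
def convex_boundary(boundary):
    """Monotone-chain convex hull: returns a convex (male-model) boundary for
    a list of (x, y) vertices of a possibly concave (female-model) boundary."""
    points = sorted(set(boundary))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in points:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # vertices of the convex hull, counter-clockwise
```

With this helper, a unique target position in the two-or-more-positions case could be obtained as closest_point_on_boundary(character_pos, convex_boundary(boundary)), under the same assumptions as the earlier sketches.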
Further, the shortest distance between the character and the conversion environment boundary is calculated; based on the calculated shortest distance, the position closest to the character on the conversion environment boundary is obtained; and the position closest to the character on the conversion environment boundary is set as the position data of the virtual sound source.
For the various methods of calculating the shortest distance between the character and the conversion environment boundary, reference may be made to step S220 above, and details are not repeated here.
For example, assume that after edge extraction is performed on a target environment model, an "M"-shaped target environment boundary closed at the bottom is obtained. When the character is inside the target environment boundary, the virtual sound source is given position data identical to the character's position. When the character is outside the target environment boundary, the shortest distance between the character and the target environment boundary is calculated; since a horizontal ray passing through this target environment boundary can intersect it at more than two points, the target environment model is a female model, so the target environment model is converted to obtain a convex model. Assuming that edge extraction on the converted target environment model yields a rectangular conversion environment boundary, the shortest distance between the character and this rectangular conversion environment boundary is calculated, and the position data of the position closest to the character on the conversion environment boundary can then be configured for the virtual sound source.
Step S340: and setting the virtual sound source according to the position data of the virtual sound source.
The position data of the virtual sound source may include one or more sets, and there may be one or more virtual sound sources, as the case requires; reference may be made to step S230 above, and details are not repeated here.
The above embodiments are further described below with reference to specific examples.
For example, assume that when a character enters the game scene, an initial character position is obtained, and that with the initial character position at the center of the game interface, the game interface also contains two environment areas: a snack street and a bar. The snack street and bar environment areas are both target environment areas, and their boundaries are both target environment boundaries.
Assume that a two-dimensional coordinate system is established with the initial character position as the coordinate origin, that each extracted target environment boundary and the character position in each frame are expressed in coordinate form, and that the target environment boundaries and the character position are acquired once per frame.
Assume that the character position acquired in the 6th frame is (x_6, y_6) and (x_6, y_6) is in none of the target environments. The shortest distance between the character position (x_6, y_6) and each target environment is calculated; if the distances from the character position (x_6, y_6) to point a (x_a, y_a) and to point b (x_b, y_b) on the boundary of the bar environment area are the shortest and are equal to each other, the bar environment model is converted to obtain a conversion model, the boundary of the conversion model is acquired to obtain the conversion environment boundary, and the distance between the character position (x_6, y_6) and the conversion environment boundary is calculated. If the distance between the character position (x_6, y_6) and point c (x_c, y_c) on the conversion environment boundary is the shortest, the position data of the virtual sound source is set to point c (x_c, y_c), and the virtual sound source plays the music of the bar environment area.
Assume that the character position acquired in the 18th frame is (x_18, y_18) and (x_18, y_18) is in none of the target environments. The shortest distance between the character position (x_18, y_18) and each target environment is calculated; if the distances from the character position (x_18, y_18) to point d (x_d, y_d) and point e (x_e, y_e) on the boundary of the bar environment area and to point q (x_q, y_q) on the boundary of the snack street environment area are the shortest and are equal to one another, the bar environment model is converted to obtain a conversion model, the boundary of the conversion model is acquired to obtain the conversion environment boundary, and the distance between the character position (x_18, y_18) and the conversion environment boundary is calculated to obtain the point f (x_f, y_f) at the shortest distance. Virtual sound sources are then set at point f (x_f, y_f) on the conversion environment boundary of the bar environment area and at point q (x_q, y_q) respectively and played simultaneously: the virtual sound source at point f (x_f, y_f) plays the music of the bar environment area, and the virtual sound source at point q (x_q, y_q) plays the vendors' sales calls of the snack street environment area.
According to the sound source setting method in the virtual environment described above, when there are two or more positions on the target environment boundary that are closest to the character, the target environment model and the target environment boundary are converted so that only one closest position is determined and used to place the virtual sound source. This effectively improves the accuracy of the virtual sound source's position and the resulting sound effect, giving the player a better experience.
As shown in fig. 4, fig. 4 shows a schematic structural diagram of a sound source setting device in a virtual environment according to an embodiment of the present application.
A sound source setting apparatus in a virtual environment, comprising:
An acquisition module 410 configured to acquire a target environment boundary and a character position.
A judging module 420 configured to judge whether the character is in the target environment based on the target environment boundary and the character position.
And if the character is in the target environment, setting, for the virtual sound source, the same position data as the character.
And if the character is not in the target environment, calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance.
And a setting module 430 configured to set a virtual sound source according to the position data of the virtual sound source.
Optionally, the sound source setting device in the virtual environment further includes:
and the building module is configured to build a target environment model.
The acquisition module 410 is specifically configured to: acquire a target environment boundary according to the target environment model.
Optionally, the obtaining module 410 is further configured to:
and acquiring the target environment boundary and the character position periodically according to a preset time interval.
Optionally, the computing module is further configured to:
A shortest distance between the character and the target environment boundary is calculated.
And obtaining the target position closest to the character on the target environment boundary based on the calculated shortest distance.
And setting the target position as the position data of the virtual sound source.
Optionally, the computing module is further configured to:
in the case where the target position includes two or more positions, position data identical to any one of the target positions is set for the virtual sound source.
Optionally, the computing module is further configured to:
and under the condition that the target position comprises two or more positions, converting the target environment boundary to obtain a conversion environment boundary.
And calculating the distance between the character and the boundary of the conversion environment, and setting position data for the virtual sound source based on the calculated distance.
Optionally, the computing module is further configured to:
a shortest distance between the character and the transition environment boundary is calculated.
And obtaining the position closest to the character on the conversion environment boundary based on the calculated shortest distance.
And setting the position closest to the character on the conversion environment boundary as the position data of the virtual sound source.
According to the sound source setting device in the virtual environment described above, the computational load that virtual sound sources impose on the system can be effectively reduced, and game performance is greatly improved.
An embodiment of the present application also provides a computing device including a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the processor implements the following steps when executing the instructions:
and acquiring the boundary of the target environment and the position of the role.
And judging whether the character is in the target environment or not based on the target environment boundary and the character position.
And if the character is in the target environment, setting, for the virtual sound source, the same position data as the character.
And if the character is not in the target environment, calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance.
And setting the virtual sound source according to the position data of the virtual sound source.
An embodiment of the present application also provides a computer-readable storage medium storing computer instructions that, when executed by a processor, implement the steps of the sound source setting method in the virtual environment.
The above is an exemplary solution of the computer readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the sound source setting method in the virtual environment belong to the same concept; for details of the storage medium solution not described here, reference may be made to the description of the sound source setting method in the virtual environment.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be added to or removed as appropriate according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of combinations of actions, but those skilled in the art should understand that the present application is not limited by the order of actions described, since some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily all required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above-disclosed preferred embodiments of the present application are provided only as an aid to the elucidation of the present application. Alternative embodiments are not intended to be exhaustive or to limit the invention to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. This application is to be limited only by the claims and the full scope and equivalents thereof.

Claims (14)

1. A method for setting a sound source in a virtual environment, comprising:
acquiring a target environment boundary and a character position;
judging whether the character is in the target environment or not based on the target environment boundary and the character position;
if the character is in the target environment, setting, for the virtual sound source, the same position data as the character;
if the character is not in the target environment, calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance;
setting a virtual sound source according to the position data of the virtual sound source, wherein the position of the virtual sound source changes along with the change of the position of the character;
the calculating the distance between the character and the boundary of the target environment, and setting position data for the virtual sound source based on the calculated distance, includes: calculating the shortest distance between the character and the target environment boundary; based on the calculated shortest distance, obtaining a target position closest to the character on the target environment boundary; and setting the target position as position data of the virtual sound source.
2. The method for setting sound sources in a virtual environment according to claim 1, further comprising, before the acquiring the target environment boundary and the character position:
establishing a target environment model;
the acquiring the target environment boundary includes:
and acquiring a target environment boundary according to the target environment model.
3. The sound source setting method in a virtual environment according to claim 1, wherein the acquiring the target environment boundary and the character position includes:
and acquiring the target environment boundary and the character position periodically according to a preset time interval.
4. The sound source setting method in a virtual environment according to claim 1, wherein the setting the target position as the position data of the virtual sound source includes:
when the target position comprises two or more positions, setting, for the virtual sound source, position data identical to any one of the target positions.
5. The sound source setting method in a virtual environment according to claim 1, wherein the setting the target position as the position data of the virtual sound source includes:
when the target position comprises two or more positions, converting the target environment boundary to obtain a converted environment boundary;
and calculating a distance between the character and the converted environment boundary, and setting position data for the virtual sound source based on the calculated distance.
6. The sound source setting method in a virtual environment according to claim 5, wherein the calculating a distance between the character and the converted environment boundary and setting position data for the virtual sound source based on the calculated distance comprises:
calculating a shortest distance between the character and the converted environment boundary;
obtaining, based on the calculated shortest distance, a position on the converted environment boundary that is closest to the character;
and setting the position on the converted environment boundary that is closest to the character as the position data of the virtual sound source.
7. A sound source setting apparatus in a virtual environment, comprising: an acquisition module configured to acquire a target environment boundary and a character position;
a judging module configured to determine, based on the target environment boundary and the character position, whether the character is in the target environment, wherein:
if the character is in the target environment, position data identical to that of the character is set for a virtual sound source;
if the character is not in the target environment, a distance between the character and the target environment boundary is calculated, and position data is set for the virtual sound source based on the calculated distance;
a setting module configured to set the virtual sound source according to the position data of the virtual sound source, the position of the virtual sound source changing as the position of the character changes;
a computing module configured to: calculate a shortest distance between the character and the target environment boundary; obtain, based on the calculated shortest distance, a target position on the target environment boundary that is closest to the character; and set the target position as the position data of the virtual sound source.
8. The sound source setting apparatus in a virtual environment according to claim 7, further comprising:
a building module configured to build a target environment model;
wherein the acquisition module is specifically configured to: acquire the target environment boundary according to the target environment model.
9. The sound source setting apparatus in a virtual environment according to claim 7, wherein the acquisition module is further configured to:
acquire the target environment boundary and the character position periodically at a preset time interval.
10. The sound source setting apparatus in a virtual environment according to claim 7, wherein the computing module is further configured to:
set, for the virtual sound source, position data identical to any one of the target positions when the target position comprises two or more positions.
11. The sound source setting apparatus in a virtual environment according to claim 10, wherein the computing module is further configured to:
when the target position comprises two or more positions, convert the target environment boundary to obtain a converted environment boundary;
and calculate a distance between the character and the converted environment boundary, and set position data for the virtual sound source based on the calculated distance.
12. The sound source setting apparatus in a virtual environment according to claim 11, wherein the computing module is further configured to:
calculate a shortest distance between the character and the converted environment boundary;
obtain, based on the calculated shortest distance, a position on the converted environment boundary that is closest to the character;
and set the position on the converted environment boundary that is closest to the character as the position data of the virtual sound source.
13. A computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the method of any one of claims 1 to 6.
14. A computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1 to 6.
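
The following sketches are illustrative only and form no part of the claims. The first assumes, purely for illustration, that the target environment boundary is a closed two-dimensional polygon given as a list of vertices, and shows one way the method of claim 1 could be realized: while the character is inside the target environment the virtual sound source is given the same position data as the character; otherwise the point on the boundary closest to the character is used. All identifiers (point_in_polygon, closest_point_on_boundary, and so on) are hypothetical.

```python
# Illustrative sketch only; not part of the claims.
# Assumption: the target environment boundary is a closed 2D polygon
# given as a list of (x, y) vertices. All names are hypothetical.
from typing import List, Tuple

Vec2 = Tuple[float, float]


def point_in_polygon(p: Vec2, boundary: List[Vec2]) -> bool:
    """Ray-casting test: is the character position inside the target environment?"""
    x, y = p
    inside = False
    n = len(boundary)
    for i in range(n):
        x1, y1 = boundary[i]
        x2, y2 = boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal line through p
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def closest_point_on_segment(p: Vec2, a: Vec2, b: Vec2) -> Vec2:
    """Projection of p onto segment ab, clamped to the segment."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return a
    t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / seg_len_sq
    t = max(0.0, min(1.0, t))
    return (a[0] + t * dx, a[1] + t * dy)


def closest_point_on_boundary(p: Vec2, boundary: List[Vec2]) -> Vec2:
    """Target position on the environment boundary with the shortest distance to p."""
    best, best_d2 = boundary[0], float("inf")
    n = len(boundary)
    for i in range(n):
        q = closest_point_on_segment(p, boundary[i], boundary[(i + 1) % n])
        d2 = (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2
        if d2 < best_d2:
            best, best_d2 = q, d2
    return best


def virtual_sound_source_position(character: Vec2, boundary: List[Vec2]) -> Vec2:
    """Claim-1 style logic: same position data as the character while inside the
    environment, otherwise the nearest point on the target environment boundary."""
    if point_in_polygon(character, boundary):
        return character
    return closest_point_on_boundary(character, boundary)
```

Re-evaluating this function whenever the character position is sampled (for example, periodically at the preset time interval of claim 3) makes the virtual sound source follow the character, as required by claim 1.

Claims 5 and 6 handle the case where the shortest distance yields two or more equally close target positions by converting the target environment boundary and recomputing the closest point on the converted boundary. The claims do not fix a particular conversion; the sketch below assumes, purely for illustration, that the conversion shrinks the boundary toward its vertex centroid, which generally yields a unique closest position.

```python
# Illustrative sketch only; the choice of conversion (shrinking toward the
# vertex centroid) is an assumption, not something fixed by the claims.
from typing import Callable, List, Tuple

Vec2 = Tuple[float, float]


def convert_boundary(boundary: List[Vec2], factor: float = 0.5) -> List[Vec2]:
    """Derive a converted environment boundary by scaling each vertex toward the centroid."""
    cx = sum(v[0] for v in boundary) / len(boundary)
    cy = sum(v[1] for v in boundary) / len(boundary)
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor) for x, y in boundary]


def resolve_target_position(character: Vec2,
                            boundary: List[Vec2],
                            candidates: List[Vec2],
                            closest_point_fn: Callable[[Vec2, List[Vec2]], Vec2]) -> Vec2:
    """If the shortest distance gives a single target position, use it (claim 1);
    otherwise recompute the closest point on the converted boundary (claims 5-6)."""
    if len(candidates) == 1:
        return candidates[0]
    return closest_point_fn(character, convert_boundary(boundary))
```

Here closest_point_fn would be a routine such as closest_point_on_boundary in the previous sketch.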
CN201910849482.0A 2019-09-09 2019-09-09 Sound source setting method, device and equipment in virtual environment and storage medium Active CN110538456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910849482.0A CN110538456B (en) 2019-09-09 2019-09-09 Sound source setting method, device and equipment in virtual environment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910849482.0A CN110538456B (en) 2019-09-09 2019-09-09 Sound source setting method, device and equipment in virtual environment and storage medium

Publications (2)

Publication Number Publication Date
CN110538456A CN110538456A (en) 2019-12-06
CN110538456B (en) 2023-08-08

Family

ID=68713099

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910849482.0A Active CN110538456B (en) 2019-09-09 2019-09-09 Sound source setting method, device and equipment in virtual environment and storage medium

Country Status (1)

Country Link
CN (1) CN110538456B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111135572A (en) * 2019-12-24 2020-05-12 北京像素软件科技股份有限公司 Game sound effect management method and device, storage medium and electronic equipment
CN111714889B (en) * 2020-06-19 2024-06-25 网易(杭州)网络有限公司 Sound source control method, device, computer equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005046270A (en) * 2003-07-31 2005-02-24 Konami Computer Entertainment Yokyo Inc Game device, control method of computer and program
CN107281753A (en) * 2017-06-21 2017-10-24 网易(杭州)网络有限公司 Scene audio reverberation control method and device, storage medium and electronic equipment
CN107885417A (en) * 2017-11-03 2018-04-06 腾讯科技(深圳)有限公司 Object localization method, device and computer-readable recording medium in virtual environment
CN108579084A (en) * 2018-04-27 2018-09-28 腾讯科技(深圳)有限公司 Method for information display, device, equipment in virtual environment and storage medium
CN108939535A (en) * 2018-06-25 2018-12-07 网易(杭州)网络有限公司 The sound effect control method and device of virtual scene, storage medium, electronic equipment

Also Published As

Publication number Publication date
CN110538456A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110602554B (en) Cover image determining method, device and equipment
WO2018049979A1 (en) Animation synthesis method and device
CN106710003B OpenGL ES-based three-dimensional photographing method and system
CN110390704A (en) Image processing method, device, terminal device and storage medium
CN110538456B (en) Sound source setting method, device and equipment in virtual environment and storage medium
CN116797684B (en) Image generation method, device, electronic equipment and storage medium
CN110163054A (en) A kind of face three-dimensional image generating method and device
CN113630615B (en) Live broadcast room virtual gift display method and device
CN107437272B (en) Interactive entertainment method and device based on augmented reality and terminal equipment
CN103856390A (en) Instant messaging method and system, messaging information processing method and terminals
CN103745497B (en) Plant growth modeling method and system
CN110570499B (en) Expression generating method, device, computing equipment and storage medium
CN111127624A (en) Illumination rendering method and device based on AR scene
US20150092038A1 (en) Editing image data
KR20210113948A (en) Method and apparatus for generating virtual avatar
CN111951156B (en) Method for drawing photoelectric special effect of graph
CN110188600B (en) Drawing evaluation method, system and storage medium
CN114245099B (en) Video generation method and device, electronic equipment and storage medium
CN110298925B (en) Augmented reality image processing method, device, computing equipment and storage medium
Zhang et al. Urban landscape design based on data fusion and computer virtual reality technology
WO2019218773A1 (en) Voice synthesis method and device, storage medium, and electronic device
CN110097615B (en) Stylized and de-stylized artistic word editing method and system
CN113936086A (en) Method and device for generating hair model, electronic equipment and storage medium
CN112604279A (en) Special effect display method and device
CN115690280B (en) Three-dimensional image pronunciation mouth shape simulation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 001, Floor 03, Floor 3, No. 33, Xiaoying West Road, Haidian District, Beijing 100085

Applicant after: Beijing Jinshan Shiyou Interactive Entertainment Technology Co.,Ltd.

Address before: 2f04, No. 33, Xiaoying West Road, Haidian District, Beijing 100085

Applicant before: Beijing Xishanju Interactive Entertainment Technology Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20221101

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: Room 001, Floor 03, Floor 3, No. 33, Xiaoying West Road, Haidian District, Beijing 100085

Applicant before: Beijing Jinshan Shiyou Interactive Entertainment Technology Co.,Ltd.

GR01 Patent grant