CN112316427B - Voice playing method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112316427B
CN112316427B
Authority
CN
China
Prior art keywords
virtual character
target
voice
voice data
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011223741.8A
Other languages
Chinese (zh)
Other versions
CN112316427A (en)
Inventor
曹木勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202011223741.8A priority Critical patent/CN112316427B/en
Publication of CN112316427A publication Critical patent/CN112316427A/en
Application granted granted Critical
Publication of CN112316427B publication Critical patent/CN112316427B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50Controlling the output signals based on the game progress
    • A63F13/54Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiments of the present application disclose a voice playing method and apparatus, a computer device, and a storage medium. The method can acquire a target user voice data packet corresponding to a target virtual character in a game scene and acquire distance information between a candidate virtual character and the target virtual character in the game scene; determine, according to the distance information, a voice energy level corresponding to the candidate virtual character and the target virtual character; acquire position information of the candidate virtual character and target position information of the target virtual character; determine azimuth information of the candidate virtual character relative to the target virtual character according to the two pieces of position information; and play the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information. The flexibility and effect of voice playing are thereby improved.

Description

Voice playing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for playing a voice, a computer device, and a storage medium.
Background
Currently, during game play, a player may communicate with other players through voice. Specifically, voice data generated by the player is collected by a sending end and uploaded to a server; the server then broadcasts the voice data to the receiving ends corresponding to the other players, and each receiving end plays the received voice data according to a unified standard.
Because the server broadcasts the voice data to the receiving ends corresponding to all other players, every receiving end plays the same voice data, which cannot simulate a natural voice conversation between people. Moreover, since different receiving ends play the received voice data according to the same unified standard, playback is undifferentiated across receiving ends and cannot approach what a human ear would really hear, which reduces the flexibility and effect of voice playing.
Disclosure of Invention
The embodiment of the application provides a voice playing method, a voice playing device, computer equipment and a storage medium, which can improve the flexibility and effect of voice playing.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
the embodiment of the application provides a voice playing method, which comprises the following steps:
acquiring a target user voice data packet corresponding to a target virtual character in a game scene, and acquiring distance information between a candidate virtual character and the target virtual character in the game scene, wherein the candidate virtual character is a virtual character of which the distance from the target virtual character is smaller than a preset distance threshold;
determining a corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character;
acquiring position information of the candidate virtual character and target position information of the target virtual character;
determining the azimuth information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character;
and playing the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
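The client-side steps above can be illustrated with a minimal Python sketch. All names (`voice_energy_level`, `azimuth_of`), the discrete level mapping, and the maximum audible distance are hypothetical assumptions for illustration only; the patent does not specify a concrete formula for the energy level or the azimuth.

```python
import math

# Hypothetical sketch of the claimed client-side steps; the level mapping
# and the 50-unit audible range below are illustrative assumptions.

def voice_energy_level(distance, max_distance=50.0, levels=10):
    """Map a scene distance to a discrete energy level: the closer the
    speaker, the higher the level (louder playback)."""
    distance = min(max(distance, 0.0), max_distance)
    return max(1, levels - int(distance / max_distance * levels))

def azimuth_of(listener_pos, speaker_pos):
    """Horizontal bearing (degrees, 0-360) of the speaker relative to
    the listener, from (x, y, z) scene coordinates."""
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

A playback engine could then, for example, feed the energy level into a volume gain and the azimuth into a stereo or 3D panner when rendering the received voice.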
According to an aspect of the present application, there is also provided a voice playing method, including:
acquiring a target user voice data packet corresponding to a target virtual character in a game scene, and acquiring the target virtual character and position information of each virtual character in the game scene, wherein each virtual character is a virtual character except the target virtual character in the game scene;
determining distance information between the target virtual character and each virtual character based on the position information of the target virtual character and each virtual character in the game scene;
screening out the virtual characters whose distance is smaller than a preset distance threshold according to the distance information, to obtain candidate virtual characters;
and sending the target user voice data packet to a terminal corresponding to the candidate virtual character, so that the terminal determines the voice energy level and the azimuth information corresponding to the candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
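The server-side screening described above can be sketched as follows. The function name, the data shapes, and the threshold value are illustrative assumptions, not the patent's protocol.

```python
import math

# Hypothetical sketch of the server-side screening: keep only the virtual
# characters whose distance to the speaking (target) character is below
# the preset distance threshold.

def screen_candidates(target_pos, character_positions, threshold):
    """Return the ids of virtual characters close enough to the target
    character to receive its voice data packet."""
    candidates = []
    for char_id, pos in character_positions.items():
        dist = math.dist(target_pos, pos)  # Euclidean distance in the scene
        if dist < threshold:
            candidates.append(char_id)
    return candidates
```

The server would then forward the target user voice data packet only to the terminals of the returned candidates, rather than broadcasting it to every receiving end.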
According to an aspect of the present application, there is also provided a voice playing apparatus, including:
the device comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is used for acquiring a target user voice data packet corresponding to a target virtual character in a game scene and acquiring distance information between a candidate virtual character and the target virtual character in the game scene, and the candidate virtual character is a virtual character of which the distance between the candidate virtual character and the target virtual character is smaller than a preset distance threshold;
the first determining unit is used for determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character;
a second obtaining unit configured to obtain position information of the candidate virtual character and target position information of the target virtual character;
a second determining unit, configured to determine, according to the position information of the candidate virtual character and the target position information of the target virtual character, azimuth information of the candidate virtual character relative to the target virtual character;
and the playing unit is used for playing the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
According to an aspect of the present application, there is also provided a voice playing apparatus, including:
a third obtaining unit, configured to obtain a target user voice data packet corresponding to a target virtual character in a game scene, and obtain the target virtual character and position information of each virtual character in the game scene, where each virtual character is a virtual character in the game scene except the target virtual character;
a third determination unit configured to determine distance information between the target virtual character and each virtual character based on the target virtual character and position information of each virtual character in the game scene;
the screening unit is used for screening out the virtual characters whose distance is smaller than a preset distance threshold according to the distance information, to obtain candidate virtual characters;
and the sending unit is used for sending the target user voice data packet to a terminal corresponding to the candidate virtual character, so that the terminal determines the voice energy level and the azimuth information corresponding to the candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
According to an aspect of the present application, there is also provided a computer device, including a processor and a memory, where the memory stores a computer program, and the processor executes any one of the voice playing methods provided by the embodiments of the present application when calling the computer program in the memory.
According to an aspect of the present application, there is also provided a storage medium for storing a computer program, which is loaded by a processor to execute any one of the voice playing methods provided by the embodiments of the present application.
According to the embodiment of the application, the voice data packet of the target user corresponding to the target virtual character in the game scene can be obtained, the distance information between the candidate virtual character and the target virtual character in the game scene can be obtained, and then the voice energy level corresponding to the candidate virtual character and the target virtual character can be determined according to the distance information between the candidate virtual character and the target virtual character. The position information of the candidate virtual character and the target position information of the target virtual character are obtained, and the azimuth information of the candidate virtual character relative to the target virtual character is determined according to the position information of the candidate virtual character and the target position information of the target virtual character. At the moment, the voice in the target user voice data packet corresponding to the target virtual character can be played to the candidate virtual character according to the voice energy level and the azimuth information, so that the voice interaction between the target virtual character and the candidate virtual character approaches to a natural voice conversation scene, the voice playing is closer to the real feeling of human ears, and the flexibility and the effect of the voice playing are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic scene diagram of a voice playing system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a voice playing method provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of distance information calculation provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of orientation information determination provided by an embodiment of the present application;
FIG. 5 is another schematic diagram of orientation information determination provided by an embodiment of the present application;
fig. 6 is a schematic diagram of interaction between a sending end, a server, and a receiving end according to an embodiment of the present application;
fig. 7 is another schematic flowchart of a voice playing method provided in an embodiment of the present application;
fig. 8 is another schematic flowchart of a voice playing method provided in an embodiment of the present application;
fig. 9 is a schematic diagram of a voice playing apparatus provided in an embodiment of the present application;
fig. 10 is another schematic diagram of a voice playing apparatus provided in an embodiment of the present application;
fig. 11 is a schematic structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides a voice playing method and device, computer equipment and a storage medium.
Referring to fig. 1, fig. 1 is a schematic view of a scenario of a voice playing system according to an embodiment of the present disclosure, where the voice playing system may include a voice playing device, the voice playing device may be specifically integrated in a terminal 10, and the terminal 10 may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, or a wearable device. The terminal 10 may communicate with the server 20, wherein the terminal 10 and the server 20 may be directly or indirectly connected through wired or wireless communication, and the application is not limited herein.
The server 20 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, but is not limited thereto.
When the terminal 10 is a terminal corresponding to a candidate virtual character, the terminal 10 may be configured to obtain a target user voice data packet corresponding to a target virtual character in a game scene, obtain distance information between the candidate virtual character and the target virtual character in the game scene, and then determine a corresponding voice energy level between the candidate virtual character and the target virtual character according to that distance information. Position information of the candidate virtual character and target position information of the target virtual character are obtained, and azimuth information of the candidate virtual character relative to the target virtual character is determined from the two pieces of position information. At this time, the voice in the target user voice data packet corresponding to the target virtual character can be played to the candidate virtual character according to the voice energy level and the azimuth information. The voice interaction between the target virtual character and the candidate virtual character thus approaches a natural voice conversation scene, voice playing is closer to the real feeling of human ears, and the flexibility and effect of voice playing are improved.
The terminal 10 may also be configured to obtain original user voice data from the terminal corresponding to the target virtual character, pre-process the original user voice data of the target virtual character to obtain target user voice data, and encode the target user voice data to obtain an encoded voice data packet. Target position information of the target virtual character in the game scene can be acquired and written into a preset position of the encoded voice data packet to obtain a target voice data packet, and the target voice data packet is sent to the server 20, so that the server 20 sends the target voice data packet to the terminal corresponding to the candidate virtual character.
The server 20 may be configured to obtain a target user voice data packet corresponding to a target virtual character in a game scene, and obtain the target virtual character and position information of each virtual character in the game scene, where each virtual character is a virtual character in the game scene except the target virtual character. And then, based on the position information of the target virtual character and each virtual character in the game scene, determining the distance information between the target virtual character and each virtual character, and screening out the virtual characters with the distance less than a preset distance threshold value according to the distance information to obtain candidate virtual characters. At this time, the target user voice data packet may be sent to the terminal corresponding to the candidate virtual character, so that the terminal determines the voice energy level and the azimuth information corresponding to the candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
It should be noted that the scene schematic diagram of the voice playing system shown in fig. 1 is merely an example, and the voice playing system and the scene described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application.
The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
In this embodiment, a description will be given from the perspective of a voice playing apparatus, where the voice playing apparatus may be specifically integrated in a computer device such as a terminal, and the terminal may be a terminal corresponding to any virtual character in a game scene.
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating a voice playing method according to an embodiment of the present application. The voice playing method can comprise the following steps:
s101, acquiring a target user voice data packet corresponding to a target virtual character in a game scene, and acquiring distance information between a candidate virtual character and the target virtual character in the game scene.
The specific type of the game can be flexibly set according to actual needs; for example, the game may be an MMORPG, a shooting game, or any other game in which a plurality of virtual characters participate.
The virtual character may be a character played by a game player, for example, the virtual character may be a virtual character generated by the game player logging in a game application through a user account, wherein a target virtual character may be a character played by the game player generating user voice data, and the candidate virtual character may be a virtual character whose distance from the target virtual character in a game scene is smaller than a preset distance threshold, and the preset distance threshold may be flexibly set according to actual needs. For example, the position information of the target virtual character and each virtual character in the game scene may be acquired, each virtual character is a virtual character except the target virtual character in the game scene, the distance information between the target virtual character and each virtual character is determined based on the position information of the target virtual character and each virtual character in the game scene, and the virtual character with the distance smaller than the preset distance threshold value is screened out according to the distance information, so as to obtain the candidate virtual character. The position information may be a position coordinate of the virtual character in a three-dimensional space of the game scene, for example, the position information of the virtual character a may be represented as (X, Y, Z), where X may represent a coordinate value in an X-axis direction in the three-dimensional space coordinate system, Y may represent a coordinate value in a Y-axis direction in the three-dimensional space coordinate system, and Z may represent a coordinate value in a Z-axis direction in the three-dimensional space coordinate system.
The target user voice data packet may be generated by encoding user voice data corresponding to the target virtual character, where the user voice data may be data for a game player to perform voice interaction, for example, the user voice data corresponding to the target virtual character refers to data for the target virtual character to perform voice interaction with another virtual character.
In one embodiment, acquiring user voice data corresponding to a target virtual character in a game scene may include: receiving a voice data packet of a target virtual role sent by a server; and analyzing the voice data packet to obtain user voice data corresponding to the target virtual role.
The user voice data can be obtained from the server, so that it can be acquired quickly and conveniently. Specifically, the voice data packet of the target virtual character sent by the server may be received; for example, a voice data acquisition request may be sent to the server in response to a trigger operation on a voice data acquisition control in the game interface, and the voice data packet of the target virtual character sent by the server based on that request may be received; or, when the server receives a voice data packet, the terminal may receive the voice data packet of the target virtual character actively pushed by the server; and so on. The server may send the voice data packet through a communication protocol such as the HyperText Transfer Protocol (HTTP), the Transmission Control Protocol (TCP), or the User Datagram Protocol (UDP).
The voice data packet may be obtained by encoding the user voice data corresponding to the target virtual character. In addition to that user voice data, the voice data packet may include target position information of the target virtual character, position information of each virtual character in the game scene other than the target virtual character, and other information; that is, in the process of generating the voice data packet corresponding to the target virtual character, the position information of each other virtual character in the game scene may be added to the voice data packet. For example, the terminal where each virtual character other than the target virtual character is located may obtain the position information of that virtual character in real time or at preset intervals and report it to the server, and the server may then add the received position information of each virtual character to the voice data packet. Encoding schemes for generating the voice data packet from the user voice data may include Advanced Audio Coding (AAC), Moving Picture Experts Group Audio Layer III (MP3), Adaptive Transform Acoustic Coding (ATRAC), the Free Lossless Audio Codec (FLAC), and Windows Media Audio (WMA).
Then, the terminal that acquires the voice packet may analyze the voice packet, where the analysis may be a decoding method corresponding to the encoding method of the voice packet, for example, may decode the voice packet according to AAC, MP3, ATRAC, FLAC, WMA, or the like, and since the voice packet may include user voice data, position information of the target virtual character in the game scene, and position information of each virtual character in the game scene, the user voice data of the target virtual character, position information of the target virtual character in the game scene, and position information of each virtual character in the game scene may be obtained after analyzing the voice packet.
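One way such a packet could carry position information alongside the encoded audio is sketched below. The byte layout (three little-endian floats prepended to the payload) is purely an assumption for illustration; the patent only states that position information is written into a preset position of the encoded voice data packet.

```python
import struct

# Illustrative layout only: the patent does not specify the byte format
# used to write the speaker's position into the encoded voice packet.

HEADER = struct.Struct("<3f")  # x, y, z as little-endian 32-bit floats

def build_voice_packet(position, encoded_audio):
    """Prepend the target character's (x, y, z) position to the encoded
    audio payload (e.g. AAC or FLAC frames)."""
    return HEADER.pack(*position) + encoded_audio

def parse_voice_packet(packet):
    """Split a packet back into its position and audio payload."""
    x, y, z = HEADER.unpack_from(packet)
    return (x, y, z), packet[HEADER.size:]
```

A receiving terminal would parse the position from the preset header offset before handing the remaining bytes to the matching audio decoder.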
It should be noted that the user voice data may also be obtained by the terminal where the target virtual character is located (which may be referred to as the sending terminal) sending it directly to the terminals where the other virtual characters in the game are located. For example, when the sending terminal is close to a terminal where another virtual character is located (which may be referred to as a receiving terminal), a communication connection may be established between the sending terminal and the receiving terminal via Bluetooth or Wi-Fi; when the sending terminal collects the user voice data, it may encode the user voice data to generate a voice data packet and send the voice data packet to the receiving terminal.
In the process of obtaining the distance information between the candidate virtual character and the target virtual character in the game scene, the position information of the target virtual character and each virtual character in the game scene can be obtained, each virtual character is a virtual character except the target virtual character in the game scene, the distance information between the target virtual character and each virtual character is determined based on the position information of the target virtual character and each virtual character in the game scene, the virtual character with the distance smaller than a preset distance threshold value is screened out according to the distance information, and the candidate virtual character is obtained. The candidate virtual character may include one or more virtual characters. The preset distance threshold may be flexibly set according to actual needs, and specific values are not limited here, for example, for a terminal where a virtual character with a distance smaller than the preset distance threshold is located, user voice data of a target virtual character may be acquired, whereas for a terminal where a virtual character with a distance greater than or equal to the preset distance threshold is located, user voice data of the target virtual character may not be acquired.
The distance information may be the distance between the target virtual character and each virtual character in the game scene, and the distance information between the target virtual character and each virtual character may be calculated by the server based on the position information of the target virtual character and each virtual character in the game scene, and the calculated distance information between the target virtual character and each virtual character is sent to the terminal which obtains the voice data of the corresponding user of the target virtual character. Or the distance information between the target virtual character and each virtual character may be calculated by the terminal which acquires the corresponding user voice data of the target virtual character based on the position information of the target virtual character and each virtual character in the game scene.
In one embodiment, obtaining distance information between the candidate virtual character and the target virtual character in the game scene may include: and receiving the distance information between the target virtual character and the candidate virtual character, which is obtained by calculation based on the position information of the target virtual character and the candidate virtual character in the game scene, sent by the server.
The distance information can be sent to the terminal after being calculated by the server, so that the convenience of obtaining the distance information can be improved, and the calculation resources of the terminal for calculating the distance information are saved. For example, the server may receive position information of the target virtual character and each virtual character in the game scene, which is reported by the terminal where the target virtual character and each virtual character are located in the game scene, and then may calculate distance information between the target virtual character and each virtual character based on the position information of the target virtual character and each virtual character in the game scene, and send the calculated distance information between the target virtual character and each virtual character to the terminal.
In one embodiment, obtaining distance information between the candidate virtual character and the target virtual character in the game scene may include: acquiring target position information of a target virtual character and position information of a candidate virtual character in a game scene; and performing arithmetic square root operation on the position information of the candidate virtual character and the target position information of the target virtual character to obtain the distance information between the candidate virtual character and the target virtual character.
The distance information may also be calculated by the terminal itself, based on the position information of the target virtual character and of each virtual character in the game scene, which improves the flexibility of obtaining the distance information. For example, as shown in fig. 3, when the position information of the target virtual character O is (x1, y1, z1), that of virtual character A is (x2, y2, z2), that of virtual character B is (x3, y3, z3), and that of virtual character C is (x4, y4, z4), an arithmetic square root operation on the positions of O and A gives the distance information D1 between them in the game scene: D1 = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2); similarly, the distance information D2 between O and B is D2 = sqrt((x1-x3)^2 + (y1-y3)^2 + (z1-z3)^2), and the distance information D3 between O and C is D3 = sqrt((x1-x4)^2 + (y1-y4)^2 + (z1-z4)^2).
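The per-character distance computation above can be sketched as follows (a minimal illustration; the function name and the coordinate values are hypothetical, not taken from the patent):

```python
import math

def distance_3d(p1, p2):
    """Arithmetic square root (Euclidean) distance between two 3-D positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Target virtual character O and two candidate characters (example coordinates).
pos_o = (0.0, 0.0, 0.0)
pos_a = (3.0, 4.0, 0.0)
pos_b = (1.0, 2.0, 2.0)

d1 = distance_3d(pos_o, pos_a)  # sqrt(9 + 16 + 0) = 5.0
d2 = distance_3d(pos_o, pos_b)  # sqrt(1 + 4 + 4) = 3.0
```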
S102, determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character.
The voice energy may be used to represent the volume of the voice, or other voice-related information; the specific content is not limited here. Taking volume as an example, the larger the voice energy, the larger the volume of the corresponding voice; conversely, the smaller the voice energy, the smaller the volume. In a natural voice conversation, the farther apart the speaker and the listener are, the smaller the voice energy between them, and the closer they are, the larger the voice energy. The scheme therefore divides voice energy into multiple levels, where a lower voice energy level may be set to correspond to smaller voice energy, and a higher level to larger voice energy.
The shorter the distance between the candidate virtual character and the target virtual character represented by the distance information, the higher the corresponding voice energy level; conversely, the longer the distance, the lower the corresponding voice energy level.
In one embodiment, determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between them may include: acquiring the mapping relationship between different distance information and voice energy levels; and determining the corresponding voice energy level based on that mapping relationship and the distance information between the candidate virtual character and the target virtual character.
To improve the convenience of determining the voice energy level, the voice energy may be divided into multiple levels in advance, each level corresponding to a non-overlapping voice energy interval, and a mapping relationship between different distance information and voice energy levels may be set. For example, the distance information may be divided into multiple non-overlapping distance intervals, and a mapping relationship established between the distance intervals and the voice energy levels, as shown in the following table:
Distance information     Voice energy level
Distance interval 1      Voice energy level 1
Distance interval 2      Voice energy level 2
Distance interval 3      Voice energy level 3
As can be seen from the table, distance interval 1 corresponds to voice energy level 1, distance interval 2 to voice energy level 2, and distance interval 3 to voice energy level 3. Distance interval 1 may run from distance D1 (inclusive) to distance D2 (exclusive), distance interval 2 from D2 (inclusive) to D3 (exclusive), and distance interval 3 from D3 (inclusive) to D4 (exclusive). Voice energy level 1 may correspond to voice energy interval 1, running from voice energy 1 (inclusive) to voice energy 2 (exclusive); voice energy level 2 to voice energy interval 2, from voice energy 2 (inclusive) to voice energy 3 (exclusive); voice energy level 3 to voice energy interval 3, from voice energy 3 (inclusive) to voice energy 4 (exclusive); and so on.
After the distance information between the target virtual character and the candidate virtual character is obtained, the mapping relationship between different distance information and voice energy levels can be queried to determine the voice energy level corresponding to that distance information. For example, when the distance information between the target virtual character and candidate virtual character A is D3, D3 falls within distance interval 3; querying the mapping relationship shows that distance interval 3 corresponds to voice energy level 3, so the voice energy level corresponding to candidate virtual character A is voice energy level 3.
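The interval lookup described above can be sketched with a sorted list of boundary distances; the boundary values and the interval-i-to-level-i convention of the table are assumptions made for illustration:

```python
import bisect

# Hypothetical boundaries D1..D4: [0, 10) -> level 1, [10, 25) -> level 2, [25, 50) -> level 3.
BOUNDARIES = [0.0, 10.0, 25.0, 50.0]

def voice_energy_level(distance):
    """Map distance information to a voice energy level via the interval table."""
    if distance < BOUNDARIES[0] or distance >= BOUNDARIES[-1]:
        return None  # outside every distance interval in the mapping
    return bisect.bisect_right(BOUNDARIES, distance)  # 1-based level index
```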
S103, acquiring the position information of the candidate virtual character and the target position information of the target virtual character.
For example, as described above, in the process of acquiring the voice data packet corresponding to the target virtual character, the voice data packet of the target virtual character sent by the server may be received and parsed to obtain the user voice data corresponding to the target virtual character. Since the voice data packet also carries the target position information of the target virtual character, the position information of the candidate virtual character, and other information, parsing the packet yields, in addition to the user voice data, the position information of the target virtual character and of the candidate virtual character in the game scene.
For another example, when the position information of the candidate virtual character and the target position information of the target virtual character are needed, an information acquisition request may be sent to the terminals where the candidate virtual character and the target virtual character are located, and the position information returned by each terminal based on that request may be received.
And S104, determining the azimuth information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character.
For example, as shown in fig. 4, east, south, west, and north may be used as the basic orientations, and northeast, southwest, northwest, southeast, and so on as the intermediate orientations. In this case, the orientation information of candidate virtual character A relative to target virtual character O may be that A is located to the southwest of O, and the orientation information of candidate virtual character B relative to O may be that B is located directly south of O, and so on. For another example, as shown in fig. 5, front, rear, left, and right may be the basic orientations, with front-right 45°, rear-right 45°, front-left 45°, and rear-left 45° as the intermediate orientations. In this case, the orientation information of candidate virtual character A relative to O may be that A is located at front-right 45° of O, and that of candidate virtual character B may be that B is located directly in front of O, and so on.
In one embodiment, determining the orientation information of the candidate virtual character relative to the target virtual character according to their position information may include: constructing an orientation distribution area with the target position information of the target virtual character as the origin; determining the target position of the candidate virtual character in the orientation distribution area according to the position information of the candidate virtual character; and determining the orientation information of the candidate virtual character relative to the target virtual character according to that target position.
For example, as shown in fig. 5, an orientation distribution area may be constructed with the target position information of target virtual character O as the origin. The area may be divided into the basic orientations front, rear, left, and right, and further into the intermediate orientations front-right 45°, rear-right 45°, front-left 45°, and rear-left 45°. The target position of the candidate virtual character within this area is then determined from its position information, and the orientation information relative to the target virtual character is determined from that target position. In fig. 5, for instance, from the target position of candidate virtual character A in the orientation distribution area it may be determined that A is located directly in front of O, and from the target position of candidate virtual character B it may be determined that B is located at front-right 45° of O. The orientation information of a candidate virtual character can thus be located accurately.
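The eight-sector orientation lookup of fig. 5 can be sketched with `atan2` on the ground-plane offset between the two positions; the axis convention (+y taken as the target's front) and the sector names are assumptions for illustration:

```python
import math

SECTORS = ["directly in front", "front-right 45°", "directly right",
           "rear-right 45°", "directly behind", "rear-left 45°",
           "directly left", "front-left 45°"]

def orientation(target_pos, candidate_pos):
    """Eight-way orientation of the candidate relative to the target (x/y plane)."""
    dx = candidate_pos[0] - target_pos[0]
    dy = candidate_pos[1] - target_pos[1]
    angle = math.degrees(math.atan2(dx, dy)) % 360  # 0° = front, clockwise
    return SECTORS[int(((angle + 22.5) % 360) // 45)]  # 45° per sector
```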
It should be noted that an orientation distribution area may also be constructed with the position information of the candidate virtual character as the origin; the target position of the target virtual character in that area may be determined from its target position information, and the orientation information of the target virtual character relative to the candidate virtual character determined from that target position.
And S105, playing the voice in the voice data packet of the target user corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
After the voice energy level corresponding to the candidate virtual character and the target virtual character and the orientation information of the candidate virtual character relative to the target virtual character are obtained, the voice in the target user's voice data can be played according to the voice energy level and the orientation information. For example, the voice data packet of the target user can be decoded to obtain the user voice data, and that data played, so that the game player corresponding to the candidate virtual character can communicate by voice with the game player corresponding to the target virtual character. The higher the voice energy level, the higher the playback volume of the user voice data; conversely, the lower the level, the lower the volume. Based on the orientation information, the interaural time difference (ITD) and interaural level difference (ILD) of the audio playback can be determined through a head-related transfer function (HRTF) or another algorithm, constructing a stereo spatial sound localization effect and realizing voice playback based on sound localization. The interaural time difference is the difference in arrival time of a sound wave at the two ears caused by the distance between them; the interaural level difference is the difference in intensity at the two ears, caused by reflection and diffraction of the wave, when sound arrives from a certain direction.
The closer a candidate is to the target virtual character, the higher the voice energy level and the larger the playback volume, and voices heard from different directions of the target virtual character differ from one another. The voice interaction between the target virtual character and the candidate virtual character, that is, between their corresponding game players, thus approaches a natural voice conversation, and the voice playback comes closer to the real perception of the human ear.
In one embodiment, playing the voice in the voice data packet of the target user corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the direction information may include: determining audio playing parameters according to the voice energy level and the azimuth information; and playing the voice in the voice data packet of the target user corresponding to the target virtual role to the candidate virtual role according to the audio playing parameters.
The audio playing parameters may include the volume, interaural time difference, interaural level difference, and similar parameters of the audio playback. For example, the playback volume corresponding to the voice energy level may be determined from the correspondence between voice energy levels and playback volumes: the higher the voice energy level, the larger the volume, and conversely, the lower the level, the smaller the volume. Likewise, the correspondence between different preset orientation information and the interaural time and level differences may be obtained, and the interaural time difference and interaural level difference of the playback determined from it. The voice in the voice data packet of the target user can then be played according to these audio playing parameters, so that the voice interaction between the target virtual character and the candidate virtual character approaches a natural voice conversation, the playback comes closer to the real perception of the human ear, and the game gains a highly realistic voice interaction experience.
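As a rough sketch of deriving audio playing parameters, the volume can be read off the voice energy level and a constant-power stereo pan can stand in for a full HRTF rendering; the level-to-gain table and the pan law here are illustrative assumptions, not the patent's method:

```python
import math

LEVEL_VOLUME = {1: 0.3, 2: 0.6, 3: 1.0}  # hypothetical: higher level -> larger volume

def playback_params(level, azimuth_deg):
    """Return (volume, left_gain, right_gain) for a source at azimuth_deg
    (0° = directly in front of the listener, positive toward the right)."""
    volume = LEVEL_VOLUME.get(level, 0.0)
    pan = math.sin(math.radians(azimuth_deg))  # -1 (full left) .. +1 (full right)
    theta = (pan + 1.0) * math.pi / 4.0        # constant-power pan law
    return volume, volume * math.cos(theta), volume * math.sin(theta)
```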
In an embodiment, the voice playing method may further include: acquiring original user voice data of a terminal corresponding to a target virtual role; preprocessing original user voice data of the target virtual role to obtain target user voice data; coding the voice data of the target user to obtain a coded voice data packet; acquiring target position information of a target virtual character in a game scene; writing target position information of the target virtual character in the game scene into a preset position of the coded voice data packet to obtain a target voice data packet; and sending the target voice data packet to the server so that the server sends the target voice data packet to the terminal corresponding to the candidate virtual role.
The original user voice data may be voice data produced by the game player using the terminal where the target virtual character is located. When that player speaks, the player's voice can be collected through the microphone or another voice collector of the terminal, yielding the original user voice data of the target virtual character.
The original user voice data of the target virtual character may then be preprocessed to obtain the target user voice data. The preprocessing may include echo cancellation, noise reduction (also referred to as filtering), human voice detection, and the like. Echo cancellation removes echo from the voice data, an echo being the reflected sound wave produced when a sound wave hits a reflecting surface (such as the wall of a building) during propagation. For example, echo cancellation may first be applied to the voice data; the result may then be filtered through an adaptive least-mean-square (LMS) filter, a high-pass filter, a low-pass filter, or the like to remove noise; finally, human voice detection may be performed on the filtered data through a preset voice detection algorithm, the human voice extracted, and the extracted voice enhanced to obtain the target user voice data.
Next, the target user voice data can be encoded in an encoding format such as AAC, MP3, ATRAC, FLAC, or WMA to obtain an encoded voice data packet, and the target position information of the target virtual character in the game scene can be acquired; for example, the target position information may be (x0, y0, z0). The target position information and the encoded voice data packet may then be sent to the server, for example over a communication protocol such as HTTP, TCP, or UDP, so that the server can forward the encoded voice data packet to the terminals of the virtual characters whose distance to the target virtual character is smaller than a preset distance threshold.
The server may receive the position information sent by the terminals of the other virtual characters in the game match, or may send those terminals a position information acquisition request and receive the position information they return. After receiving the target position information and the encoded voice data packet, the server may perform the arithmetic square root operation on the target position information and the position information of each other virtual character to calculate the distance information between the target virtual character and each other virtual character, screen out the virtual characters whose distance to the target virtual character is smaller than the preset distance threshold, and send the encoded voice data packet to the terminals where those characters are located. The server may also send the calculated distance information along with the packet, so that each terminal can decode the packet to obtain the distance information, the target user voice data, and so on, and then determine the voice energy level corresponding to each other virtual character from the distance information.
The terminal may further acquire the position information of each other virtual character and of the target virtual character, determine the orientation information of each other virtual character relative to the target virtual character from that position information, and play the target user voice data according to the voice energy level and orientation information corresponding to each other virtual character.
In one embodiment, sending the target location information and the encoded voice data packet to the server may include: writing target position information of the target virtual character in the game scene into a preset position of the coded voice data packet to obtain a target voice data packet; and sending the target voice data packet to a server.
To conveniently send the target position information and the encoded voice data packet to the server, the target position information may be written into a preset position of the encoded voice data packet to obtain a target voice data packet, which can then be sent to the server. The preset position may be set flexibly according to actual needs and is not specifically limited here; for example, it may be the position of a header field, a trailer field, or an intermediate field.
In an embodiment, writing the target location information into a preset location of the encoded voice data packet, and obtaining the target voice data packet may include: and writing the target position information into a head field or a tail field of the coded voice data packet to obtain the target voice data packet.
In an embodiment, writing the target location information into a preset location of the encoded voice data packet, and obtaining the target voice data packet may include: and writing the target position information into a preset field of the coded voice data packet based on the preset character identifier to obtain the target voice data packet.
For example, the target position information may be written (i.e., stored) into the header field of the encoded voice data packet, or into its trailer field, to obtain the target voice data packet. The target position information can then be extracted quickly and directly from the header or trailer field, improving the efficiency of acquiring it.
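One way to realize the header-field variant is a fixed-size binary prefix in front of the encoded payload; this wire layout (three little-endian float32 values before the voice bytes) is a hypothetical sketch, not the patent's actual format:

```python
import struct

def pack_voice_packet(encoded_voice: bytes, pos) -> bytes:
    """Write the (x, y, z) target position into a 12-byte header field."""
    return struct.pack("<3f", *pos) + encoded_voice

def unpack_voice_packet(packet: bytes):
    """Extract the target position directly from the header field."""
    return struct.unpack("<3f", packet[:12]), packet[12:]
```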
For another example, a preset character identifier may be set, chosen flexibly according to actual needs; for example, it may be "#", "@", "!", or the like. The target position information may then be written into a preset field of the encoded voice data packet based on the preset character identifier to obtain the target voice data packet. For instance, when the preset character identifier is "#", the string "#target position information#" may be written into any free field of the encoded voice data packet, so that the target position information can later be looked up based on "#". The position of the preset character identifier can subsequently be searched for and the target position information extracted accurately from it, improving the accuracy of acquiring the target position information.
In one embodiment, sending the target voice data packet to the server may include: acquiring game play identification of game play and virtual character identification of virtual characters; generating a network data packet based on the target voice data packet, the game-play identification and the virtual role identification through a network communication protocol; and sending the network data packet to a server.
The game match identifier uniquely identifies the game match, and the virtual character identifier uniquely identifies the virtual character; each may consist of digits, letters, symbols, or characters and may be set flexibly according to actual needs. The network communication protocol (or simply communication protocol) may be HTTP, TCP, UDP, or the like. To let the server know the object of the data transmission, the game match, and so on, information related to the game match and the virtual character, such as the game match identifier and the virtual character identifier, can be packaged and sent to the server along with the target voice data packet. Specifically, the game match identifier and the virtual character identifier may be obtained, and a network data packet generated from the target voice data packet, the game match identifier, and the virtual character identifier through a network communication protocol such as HTTP, TCP, or UDP; the format, size, and so on of the network data packet can be set flexibly according to actual requirements. For example, the game match identifier, virtual character identifier, and the like may be stored in a free field of the target voice data packet to obtain the network data packet; alternatively, the target voice data packet, game match identifier, and virtual character identifier may be concatenated to generate it. The network data packet may then be sent to the server.
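Bundling the identifiers with the target voice data packet might look like the following; JSON with a base64 payload is just one convenient self-contained framing, and the field names are hypothetical:

```python
import base64
import json

def build_network_packet(voice_packet: bytes, match_id: str, character_id: str) -> bytes:
    """Generate a network data packet carrying the game match identifier,
    the virtual character identifier, and the target voice data packet."""
    return json.dumps({
        "match_id": match_id,
        "character_id": character_id,
        "voice": base64.b64encode(voice_packet).decode("ascii"),
    }).encode("utf-8")

def parse_network_packet(packet: bytes):
    """Recover the identifiers and the target voice data packet on the server side."""
    fields = json.loads(packet.decode("utf-8"))
    return fields["match_id"], fields["character_id"], base64.b64decode(fields["voice"])
```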
According to the embodiments of the application, the voice data packet of the target user corresponding to the target virtual character in the game scene can be acquired, the distance information between the candidate virtual character and the target virtual character obtained, and the corresponding voice energy level determined from that distance information. The position information of the candidate virtual character and the target position information of the target virtual character are obtained, and the orientation information of the candidate virtual character relative to the target virtual character is determined from them. The voice in the target user's voice data packet can then be played to the candidate virtual character according to the voice energy level and the orientation information, so that the voice interaction between the target virtual character and the candidate virtual character approaches a natural voice conversation, the playback comes closer to the real perception of the human ear, and the flexibility and effect of voice playing are improved.
The method described in the above embodiments is further illustrated in detail by way of example.
In this embodiment, the voice playing apparatus is taken as integrated in a terminal. The terminal may act both as a sending end that generates voice data and as a receiving end that plays voice, and can receive and send voice data in full duplex, full duplex meaning that voice data is transmitted in both directions simultaneously.
As shown in fig. 6, the following description takes as an example a sending end that generates voice data and transmits it to a server, which forwards it to one or more receiving ends so that they play voice based on the received data; the server can be communicatively connected to the sending end and the receiving ends respectively.
Referring to fig. 7, fig. 7 is a flowchart illustrating a voice playing method according to an embodiment of the present application. The method flow can comprise the following steps:
S201, a sending end collects user voice data and encodes the user voice data to generate an encoded voice data packet.
For example, the sending end may collect the user voice data of the game player at the sending end by using a microphone or other voice collectors, and then may encode the user voice data by using encoding modes such as AAC, MP3, ATRAC, FLAC, WMA, and the like, to generate an encoded voice data packet.
It should be noted that, to reduce interference such as noise or background sound in the user voice data and improve the reliability of its acquisition, the sending end may preprocess the collected user voice data (echo cancellation, noise reduction, and the like) to obtain target user voice data, and encode that data to generate the encoded voice data packet.
S202, the sending end obtains the position information of its virtual character in the game scene, and packs the position information into the encoded voice data packet to obtain a target voice data packet.
For example, the sending end may write the position information of its virtual character into a header field or a trailer field of the encoded voice data packet to obtain the target voice data packet. As another example, the sending end may write the position information into any free field of the encoded voice data packet based on a preset character identifier, so as to obtain the target voice data packet.
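The packing step can be sketched as follows. This is a minimal illustration with an assumed wire layout (three little-endian float32 values prepended as a header); the patent does not fix a concrete byte format, and the function names are hypothetical.

```python
import struct

def pack_position(encoded_voice: bytes, x: float, y: float, z: float) -> bytes:
    """Prepend the virtual character's (x, y, z) position as a 12-byte header field."""
    header = struct.pack("<3f", x, y, z)  # three little-endian float32 values
    return header + encoded_voice

def unpack_position(packet: bytes):
    """Split a target voice data packet back into position and encoded payload."""
    x, y, z = struct.unpack_from("<3f", packet, 0)
    return (x, y, z), packet[12:]
```

A trailer-field variant would append the same 12 bytes after the payload instead; either way, the receiver must know the fixed offset to recover the position.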
S203, the sending end sends the target voice data packet to the server.
The sending end may transmit the target voice data packet to the server through a communication protocol such as HTTP, TCP, or UDP.
It should be noted that the sending end may further obtain information such as the game match identifier of the current match and the virtual character identifier of its virtual character, package (for example, splice) the target voice data packet together with the game match identifier, the virtual character identifier, and the like to obtain a network data packet, and send the network data packet to the server, so that the server can conveniently determine the object of the data transmission and related information such as the game match.
S204, the server calculates the distance information between the virtual character of the sending end and the virtual character of each receiving end, and screens out the receiving-end virtual characters whose distance is smaller than a preset distance threshold.
The server may obtain the position information of the sending end's virtual character and of each receiving end's virtual character in the game scene. For example, the server may parse (e.g., decode) the received target voice data packet to obtain the user voice data and extract from it the position information of the sending end's virtual character. As another example, the server may receive the position information of the sending end's virtual character from the sending end, and the position information of each receiving end's virtual character from the respective receiving end. The distance information between the sending end's virtual character and each receiving end's virtual character can then be calculated from these positions. For example, when the position of the sending end's virtual character is (x1, y1, z1) and the position of a receiving end's virtual character is (x2, y2, z2), the distance D1 between them is D1 = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2). The receiving-end virtual characters whose distance is smaller than the preset distance threshold can then be screened out.
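The distance computation and the threshold screening can be sketched as follows; the function names and the dictionary-of-positions input are illustrative assumptions, not part of the patent.

```python
import math

def distance(p1, p2):
    """Euclidean distance: D1 = sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def screen_receivers(sender_pos, receiver_positions, threshold):
    """Keep only the receivers whose virtual character is within the distance threshold."""
    return [rid for rid, pos in receiver_positions.items()
            if distance(sender_pos, pos) < threshold]
```

For example, with the sender at the origin, a receiver at (3, 4, 0) is at distance 5 and would pass a threshold of 10 but not a threshold of 5.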
S205, the server sends the target voice data packet to the selected receiving end with the distance smaller than the preset distance threshold.
For example, the server may also carry the calculated distance information between the virtual character of the transmitting end and the virtual character of each receiving end in the target voice packet and transmit the target voice packet to the receiving end.
For example, the server may transmit the position information of the virtual character at the transmitting end and the position information of the virtual character at each receiving end to the receiving end in a target voice packet.
It should be noted that the server may not send the target voice packet to the receiving end whose distance is greater than or equal to the preset distance threshold.
It should be noted that, in order to improve the diversity of game playing methods, the server may determine, while sending the target voice data packet, the relationship between the virtual character corresponding to each receiving end and the virtual character corresponding to the sending end. For example, it may determine whether the two are in an enemy relationship or a teammate relationship according to their character identifiers; a character identifier can be flexibly set according to actual needs and may identify the team to which a virtual character belongs, and thus the relationship between virtual characters in the game scene. The server can then screen out the receiving ends whose virtual characters are in a teammate relationship with the sending end's virtual character, and send the target voice data packet only to those receiving ends. Alternatively, the server may first screen out the receiving ends whose distance is smaller than the preset distance threshold to obtain candidate receiving ends, and then send the target voice data packet to the candidate receiving ends that are in a teammate relationship with the sending end.
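Combining the two screens (teammate relationship plus distance threshold) can be sketched as below. The `team_id`, `pos`, and `id` record fields are assumed for illustration; the patent only requires that the character identifier identify the team.

```python
def teammate_receivers(sender, receivers, threshold):
    """Keep receivers on the sender's team whose character is within the distance threshold."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return [r["id"] for r in receivers
            if r["team_id"] == sender["team_id"]        # teammate relationship
            and dist(r["pos"], sender["pos"]) < threshold]  # within hearing range
```

Receivers that fail either test (enemies, or teammates who are too far away) simply do not get the packet in this variant.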
For a receiving end that is in an enemy relationship with the sending end, the server may decode the target voice data packet to obtain the target voice data, recognize its voice information, generate erroneous voice information opposite to it, encode the erroneous voice information to obtain an erroneous voice data packet, and send that packet to the enemy receiving end, so as to mislead the enemy and improve the probability of winning the game.
S206, the receiving end determines the voice energy level and the azimuth information corresponding to its virtual character.

The virtual character of the receiving end here is a virtual character whose distance from the virtual character of the sending end is smaller than the preset distance threshold.
After receiving the target voice data packet sent by the server, the receiving end can decode it to obtain the user voice data and the position information of the sending end's virtual character. It should be noted that, when the target voice data packet carries the distance information between the sending end's virtual character and each receiving end's virtual character, the receiving end obtains that distance information after decoding. When the packet carries the position information of the sending end's virtual character and of each receiving end's virtual character, the receiving end obtains those positions after decoding. When the packet does not carry the distance information, the receiving end may calculate it from the position information of the sending end's virtual character and of each receiving end's virtual character.
The receiving end can acquire a pre-stored mapping between different distance information and voice energy levels, and use it to determine the voice energy level corresponding to the distance between the sending end's virtual character and the receiving end's virtual character. It can also determine the azimuth information of its virtual character relative to the sending end's virtual character from the two positions. For example, an azimuth distribution area may be constructed with the position of the sending end's virtual character as the origin; the target position of the receiving end's virtual character in this area is determined from its position information, and the azimuth information of the receiving end's virtual character relative to the sending end's virtual character is determined from that target position.
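A minimal sketch of both lookups follows. The distance-to-energy bands are illustrative values (the patent leaves the mapping to the implementer), and the azimuth convention (0° along +z, 90° along +x, sender at the origin) is an assumption.

```python
import math

# Assumed mapping from distance bands to energy levels (illustrative values).
ENERGY_BANDS = [(5.0, 1.0), (15.0, 0.6), (30.0, 0.3)]  # (max distance, gain)

def voice_energy_level(dist):
    """Look up the energy level for a distance via the pre-stored mapping."""
    for max_d, gain in ENERGY_BANDS:
        if dist <= max_d:
            return gain
    return 0.0  # beyond the last band: inaudible

def azimuth_deg(sender_pos, receiver_pos):
    """Azimuth of the receiver's character in an area whose origin is the
    sender's position; 0 deg along +z, 90 deg along +x."""
    dx = receiver_pos[0] - sender_pos[0]
    dz = receiver_pos[2] - sender_pos[2]
    return math.degrees(math.atan2(dx, dz)) % 360.0
```

A character 3 units away thus gets full gain, while one 40 units away is silent; a character directly along +x sits at 90°.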
S207, the receiving end plays the voice in the target user voice data packet according to the voice energy level and the azimuth information.
For example, the receiving end may determine audio playing parameters such as the playback volume, the interaural time difference, and the interaural intensity difference according to the voice energy level and the azimuth information, and then play the voice in the target user voice data packet according to these audio playing parameters.
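One way to derive such parameters is sketched below, using equal-power stereo panning as a stand-in for the patent's unspecified computation; the constants and the function name are assumptions, not the patent's method.

```python
import math

def playback_params(gain, azimuth):
    """Map an energy level (gain) and an azimuth in degrees to per-ear
    volumes plus an interaural time difference for the nearer ear."""
    pan = math.sin(math.radians(azimuth))        # -1 = hard left, +1 = hard right
    left = gain * math.sqrt((1.0 - pan) / 2.0)   # equal-power panning law
    right = gain * math.sqrt((1.0 + pan) / 2.0)
    itd = 0.0006 * pan                           # up to ~0.6 ms interaural delay
    return left, right, itd
```

A source dead ahead (azimuth 0°) yields equal left/right volumes and zero delay; a source at 90° plays almost entirely in the right ear with the maximum delay.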
It should be noted that, in order to improve the output effect of the user voice data on the game interface, the receiving end may, after receiving the voice data packet, transcribe the user voice data into text such as Chinese or English and display the text in its game display interface. Optionally, to keep voice playback and text display synchronized, a timestamp may be set for the user voice data and the corresponding text, and the two may be played and displayed synchronously based on the timestamp, which is convenient for the user to watch.
In order to improve the flexibility and reliability of playing the user voice data, the receiving end can adjust the voice playing effect in real time while its virtual character moves in the game scene. For example, the receiving end can detect in real time the position information of, and the distance between, its own virtual character and the sending end's virtual character. When a change is detected, the voice energy level and azimuth information between the two virtual characters can be updated, and the user voice data played based on the updated values. When the distance between the two virtual characters is large (for example, greater than a preset distance threshold, which can be flexibly set according to actual needs), prompt information indicating that the receiving end's virtual character is far from the sending end's virtual character can be displayed in the game display interface, suggesting that the receiving end move closer to the sending end for short-distance communication.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the voice playing method, and are not described herein again.
The sending end can collect user voice data, encode it to generate an encoded voice data packet, obtain the position information of its virtual character in the game scene, pack the position information into the encoded voice data packet to obtain a target voice data packet, and send the target voice data packet to the server. The server can calculate the distance information between the sending end's virtual character and each receiving end's virtual character, and send the target voice data packet to the receiving ends whose virtual characters are within the preset distance threshold. The receiving end can determine the voice energy level and azimuth information corresponding to its virtual character, and play the user voice data accordingly. The voice interaction between the sending end's virtual character and the receiving end's virtual character thus approaches a natural voice conversation scene, that is, the voice interaction between the corresponding game players approaches a natural conversation, so that voice playing is closer to the real perception of human ears and the game offers a highly realistic voice interaction experience.
In this embodiment, a description will be given from the perspective of a voice playing apparatus, which may specifically be integrated in a computer device such as a server.
Referring to fig. 8, fig. 8 is a flowchart illustrating a voice playing method according to an embodiment of the present application. The voice playing method can comprise the following steps:
S301, acquiring a target user voice data packet corresponding to a target virtual character in a game scene, and acquiring the position information of the target virtual character and of each virtual character in the game scene, wherein each virtual character is a virtual character in the game scene other than the target virtual character.
S302, determining distance information between the target virtual character and each virtual character based on the target virtual character and the position information of each virtual character in the game scene.
S303, screening out the virtual characters whose distance is smaller than the preset distance threshold according to the distance information, to obtain candidate virtual characters.
S304, sending the target user voice data packet to the terminal corresponding to the candidate virtual character, so that the terminal determines the voice energy level and azimuth information corresponding to the candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet to the candidate virtual character according to the voice energy level and the azimuth information.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the voice playing method, and are not described herein again.
In the embodiment of the application, the server can acquire the target user voice data packet corresponding to the target virtual character in the game scene, acquire the position information of the target virtual character and of each virtual character, and determine the distance information between the target virtual character and each virtual character based on those positions. It then screens out the virtual characters whose distance is smaller than the preset distance threshold to obtain candidate virtual characters, and sends the target user voice data packet to the terminals corresponding to the candidate virtual characters, so that each terminal determines the voice energy level and azimuth information corresponding to its candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet accordingly. The voice interaction between the sending end's virtual character and the receiving end's virtual character approaches a natural voice conversation scene, voice playing is closer to the real perception of human ears, and the game offers a highly realistic voice interaction experience.
In order to better implement the voice playing method provided by the embodiment of the present application, an embodiment of the present application further provides a device based on the voice playing method. The meaning of the noun is the same as that in the above voice playing method, and the specific implementation details can refer to the description in the method embodiment.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a voice playing apparatus according to an embodiment of the present application, where the voice playing apparatus may include a first obtaining unit 301, a first determining unit 302, a second obtaining unit 303, a second determining unit 304, a playing unit 305, and the like.
The first obtaining unit 301 is configured to obtain a target user voice data packet corresponding to a target virtual character in a game scene, and obtain distance information between a candidate virtual character and the target virtual character in the game scene, where the candidate virtual character is a virtual character whose distance from the target virtual character is smaller than a preset distance threshold.
A first determining unit 302, configured to determine, according to distance information between the candidate virtual character and the target virtual character, a corresponding voice energy level between the candidate virtual character and the target virtual character.
A second obtaining unit 303, configured to obtain the position information of the candidate virtual character and the target position information of the target virtual character.
A second determining unit 304, configured to determine the orientation information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character.
And a playing unit 305, configured to play the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the direction information.
In an embodiment, the playing unit 305 may specifically be configured to: determining audio playing parameters according to the voice energy level and the azimuth information; and playing the voice in the voice data packet of the target user corresponding to the target virtual role to the candidate virtual role according to the audio playing parameters.
In an embodiment, the first determining unit 302 may specifically be configured to: acquiring mapping relations between different distance information and sound energy levels; and determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character based on the mapping relation.
In an embodiment, the first obtaining unit 301 may specifically be configured to: and receiving the distance information between the target virtual character and the candidate virtual character, which is obtained by calculation based on the position information of the target virtual character and the candidate virtual character in the game scene, sent by the server.
In an embodiment, the first obtaining unit 301 may specifically be configured to: acquire the target position information of the target virtual character and the position information of the candidate virtual character in the game scene; and compute the distance information between the candidate virtual character and the target virtual character by taking the arithmetic square root of the sum of the squared coordinate differences between the two positions.
In one embodiment, the voice playing apparatus may further include:
the voice data acquisition unit is used for acquiring original user voice data of the terminal corresponding to the target virtual character;
the processing unit is used for preprocessing the original user voice data of the target virtual character to obtain target user voice data;
the encoding unit is used for encoding the target user voice data to obtain an encoded voice data packet;
a position information acquiring unit for acquiring target position information of a target virtual character in a game scene;
a position information writing unit, configured to write target position information of the target virtual character in the game scene into a preset position of the encoded voice data packet to obtain a target voice data packet;
And the data sending unit is used for sending the target voice data packet to the server so that the server sends the target voice data packet to the terminal where the candidate virtual character with the distance between the candidate virtual character and the target virtual character smaller than the preset distance threshold value is located.
In an embodiment, the location information writing unit may be specifically configured to: writing target position information of the target virtual character in a game scene into a head field or a tail field of the coded voice data packet to obtain a target voice data packet; or writing the target position information of the target virtual character in the game scene into a preset field of the coded voice data packet based on the preset character identifier to obtain the target voice data packet.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the voice playing method, which is not described herein again.
In the embodiment of the application, the first obtaining unit 301 can obtain the target user voice data packet corresponding to the target virtual character in the game scene and the distance information between the candidate virtual character and the target virtual character; the first determining unit 302 can then determine the voice energy level corresponding to the candidate virtual character and the target virtual character according to that distance information. The second obtaining unit 303 acquires the position information of the candidate virtual character and the target position information of the target virtual character, and the second determining unit 304 determines the azimuth information of the candidate virtual character relative to the target virtual character from them. The playing unit 305 can then play the voice in the target user voice data packet to the candidate virtual character according to the voice energy level and the azimuth information, so that the voice interaction between the target virtual character and the candidate virtual character approaches a natural voice conversation scene, voice playing is closer to the real perception of human ears, and the flexibility and effect of voice playing are improved.
In order to better implement the voice playing method provided by the embodiment of the present application, an embodiment of the present application further provides a device based on the voice playing method. The meaning of the noun is the same as that in the above voice playing method, and the specific implementation details can refer to the description in the method embodiment.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a voice playing apparatus according to an embodiment of the present application, where the voice playing apparatus may include a third obtaining unit 401, a third determining unit 402, a filtering unit 403, a sending unit 404, and the like.
The third obtaining unit 401 is configured to obtain a target user voice data packet corresponding to a target virtual character in a game scene, and obtain the target virtual character and position information of each virtual character in the game scene, where each virtual character is a virtual character in the game scene except the target virtual character.
A third determining unit 402, configured to determine distance information between the target virtual character and each virtual character based on the target virtual character and the position information of each virtual character in the game scene.
And a screening unit 403, configured to screen out, according to the distance information, a virtual character whose distance is smaller than a preset distance threshold, so as to obtain a candidate virtual character.
A sending unit 404, configured to send the target user voice data packet to a terminal corresponding to the candidate virtual character, so that the terminal determines a voice energy level and orientation information corresponding to the candidate virtual character and the target virtual character, and plays, according to the voice energy level and the orientation information, voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the voice playing method, and are not described herein again.
An embodiment of the present application further provides a computer device, which may be a terminal or a server. Fig. 11 shows a schematic structural diagram of the computer device according to the embodiment of the present application. Specifically:
The computer device may include components such as a processor 501 with one or more processing cores, a memory 502 of one or more computer-readable storage media, a power supply 503, and an input unit 504. Those skilled in the art will appreciate that the computer device architecture illustrated in FIG. 11 is not intended to limit the computer device, which may include more or fewer components than those illustrated, combine certain components, or arrange the components differently. Wherein:
the processor 501 is a control center of the computer device, connects various parts of the entire computer device by using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby monitoring the computer device as a whole. Optionally, processor 501 may include one or more processing cores; preferably, the processor 501 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and data processing by operating the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to use of the computer device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 501 with access to the memory 502.
The computer device further comprises a power supply 503 for supplying power to the various components, and preferably, the power supply 503 may be logically connected to the processor 501 through a power management system, so that functions of managing charging, discharging, power consumption, and the like are realized through the power management system. The power supply 503 may also include any component of one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
The computer device may also include an input unit 504, and the input unit 504 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 501 in the computer device loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 501 runs the application programs stored in the memory 502, so as to implement various functions as follows:
when the computer equipment is a terminal, the terminal can acquire a target user voice data packet corresponding to a target virtual character in a game scene and acquire distance information between a candidate virtual character and the target virtual character in the game scene; determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character; acquiring position information of the candidate virtual character and target position information of the target virtual character; determining the azimuth information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character; and playing the voice in the voice data packet of the target user corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
When the computer device is a server, the server may acquire a target user voice data packet corresponding to a target virtual character in a game scene and position information of the target virtual character and of each virtual character in the game scene, wherein each virtual character is a virtual character in the game scene other than the target virtual character; determine distance information between the target virtual character and each virtual character based on the position information of the target virtual character and each virtual character in the game scene; screen out the virtual characters whose distance is smaller than a preset distance threshold according to the distance information to obtain candidate virtual characters; and send the target user voice data packet to the terminals corresponding to the candidate virtual characters, so that each terminal determines the voice energy level and the azimuth information corresponding to the candidate virtual character and the target virtual character, and plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information.
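The server-side screening step can be illustrated with a short sketch. The threshold value, the data shapes, and the function name are assumptions for illustration; the patent only specifies "distance smaller than a preset distance threshold":

```python
import math

PRESET_DISTANCE_THRESHOLD = 30.0  # hypothetical value; the patent leaves it preset

def screen_candidates(target_pos, other_positions, threshold=PRESET_DISTANCE_THRESHOLD):
    """Return ids of virtual characters whose distance to the target character
    is smaller than the threshold; the server would forward the target user
    voice data packet to the terminals of exactly these candidates."""
    return [
        cid for cid, pos in other_positions.items()
        if math.dist(target_pos, pos) < threshold
    ]

# Characters other than the target, keyed by a character id.
others = {"a": (10.0, 0.0), "b": (50.0, 0.0), "c": (0.0, 29.0)}
candidates = screen_candidates((0.0, 0.0), others)
```

Screening on the server keeps the packet off the network entirely for out-of-range characters, which is the bandwidth-saving point of this embodiment.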
In the above embodiments, each embodiment emphasizes different aspects; for parts not described in detail in one embodiment, reference may be made to the detailed description of the voice playing method above, which is not repeated here.
According to an aspect of the application, a computer program product or computer program is provided, comprising computer instructions, the computer instructions being stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method provided in the various alternative implementations of the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be performed by computer instructions, or by computer instructions controlling associated hardware, and the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor. To this end, the present application provides a storage medium storing a computer program; the computer program includes computer instructions and can be loaded by a processor to execute any of the voice playing methods provided in the present application.
The above operations can be implemented in the foregoing embodiments, and are not described in detail herein.
The storage medium may include: a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
Since the instructions stored in the storage medium can execute the steps of any voice playing method provided in the embodiments of the present application, they can achieve the beneficial effects of any of those methods; for details, see the foregoing embodiments, which are not repeated here.
The foregoing describes in detail the voice playing method, apparatus, computer device, and storage medium provided in the embodiments of the present application. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method and its core ideas. Meanwhile, those skilled in the art may make changes to the specific embodiments and the application scope according to the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (11)

1. A voice playing method, comprising:
acquiring a first character identifier corresponding to each virtual character in a game scene, a second character identifier corresponding to a target virtual character, and distance information between each virtual character and the target virtual character;
determining a relationship category between each virtual character and the target virtual character according to the first character identifier and the second character identifier;
determining a candidate virtual character according to the relationship category and the distance information, wherein the candidate virtual character is a virtual character whose relationship category with the target virtual character is a teammate relationship and whose distance from the target virtual character is smaller than a preset distance threshold;
acquiring a target user voice data packet corresponding to the target virtual character in the game scene, and acquiring the distance information between the candidate virtual character and the target virtual character in the game scene;
determining a corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character;
acquiring position information of the candidate virtual character and target position information of the target virtual character;
determining azimuth information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character;
playing the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information; and
playing error voice information corresponding to the target virtual character to a virtual character whose relationship category is an enemy relationship, wherein the error voice information is generated from an error voice opposite to the voice in the target user voice data packet.
2. The method of claim 1, wherein playing the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information comprises:
determining audio playing parameters according to the voice energy level and the azimuth information; and
playing the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the audio playing parameters.
3. The method of claim 1, wherein determining the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character comprises:
acquiring mapping relationships between different distance information and voice energy levels; and
determining, based on the mapping relationships, the corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character.
4. The method of claim 1, wherein acquiring the distance information between the candidate virtual character and the target virtual character in the game scene comprises:
receiving the distance information between the target virtual character and the candidate virtual character that is calculated and sent by a server based on position information of the target virtual character and the candidate virtual character in the game scene; or performing an arithmetic square root operation on the position information of the candidate virtual character and the target position information of the target virtual character to obtain the distance information between the candidate virtual character and the target virtual character.
5. The voice playing method according to any one of claims 1 to 4, further comprising:
acquiring original user voice data from a terminal corresponding to the target virtual character;
preprocessing the original user voice data of the target virtual character to obtain target user voice data;
encoding the target user voice data to obtain an encoded voice data packet;
acquiring target position information of the target virtual character in the game scene;
writing the target position information of the target virtual character in the game scene into a preset position of the encoded voice data packet to obtain a target voice data packet; and
sending the target voice data packet to a server, so that the server sends the target voice data packet to a terminal corresponding to the candidate virtual character.
6. The voice playing method according to claim 5, wherein writing the target position information of the target virtual character in the game scene into a preset position of the encoded voice data packet to obtain a target voice data packet comprises:
writing the target position information of the target virtual character in the game scene into a head field or a tail field of the encoded voice data packet to obtain the target voice data packet; or,
writing the target position information of the target virtual character in the game scene into a preset field of the encoded voice data packet based on a preset character identifier to obtain the target voice data packet.
7. A voice playing method, comprising:
acquiring a target user voice data packet corresponding to a target virtual character in a game scene, and acquiring position information of the target virtual character and of each virtual character in the game scene, wherein each virtual character is a virtual character in the game scene other than the target virtual character;
determining distance information between the target virtual character and each virtual character based on the position information of the target virtual character and each virtual character in the game scene;
acquiring a first character identifier corresponding to each virtual character in the game scene and a second character identifier corresponding to the target virtual character;
determining a relationship category between each virtual character and the target virtual character according to the first character identifier and the second character identifier;
screening out candidate virtual characters according to the relationship categories and the distance information, wherein the candidate virtual characters are virtual characters whose relationship category with the target virtual character is a teammate relationship and whose distance from the target virtual character is smaller than a preset distance threshold; and
sending the target user voice data packet to a terminal corresponding to the candidate virtual character, so that the terminal determines a voice energy level and azimuth information corresponding to the candidate virtual character and the target virtual character, plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information, and plays error voice information corresponding to the target virtual character to a virtual character whose relationship category is an enemy relationship, wherein the error voice information is generated from an error voice opposite to the voice in the target user voice data packet.
8. A voice playing apparatus, comprising:
a first acquiring unit, configured to acquire a first character identifier corresponding to each virtual character in a game scene, a second character identifier corresponding to a target virtual character, and distance information between each virtual character and the target virtual character;
a first determining unit, configured to determine a relationship category between each virtual character and the target virtual character according to the first character identifier and the second character identifier;
a second determining unit, configured to determine a candidate virtual character according to the relationship category and the distance information, wherein the candidate virtual character is a virtual character whose relationship category with the target virtual character is a teammate relationship and whose distance from the target virtual character is smaller than a preset distance threshold;
a second acquiring unit, configured to acquire a target user voice data packet corresponding to the target virtual character in the game scene and to acquire the distance information between the candidate virtual character and the target virtual character in the game scene;
a third determining unit, configured to determine a corresponding voice energy level between the candidate virtual character and the target virtual character according to the distance information between the candidate virtual character and the target virtual character;
a third acquiring unit, configured to acquire position information of the candidate virtual character and target position information of the target virtual character;
a fourth determining unit, configured to determine azimuth information of the candidate virtual character relative to the target virtual character according to the position information of the candidate virtual character and the target position information of the target virtual character;
a first playing unit, configured to play the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information; and
a second playing unit, configured to play error voice information corresponding to the target virtual character to a virtual character whose relationship category is an enemy relationship, wherein the error voice information is generated from an error voice opposite to the voice in the target user voice data packet.
9. A voice playing apparatus, comprising:
a fourth acquiring unit, configured to acquire a target user voice data packet corresponding to a target virtual character in a game scene, and to acquire position information of the target virtual character and of each virtual character in the game scene, wherein each virtual character is a virtual character in the game scene other than the target virtual character;
a fifth determining unit, configured to determine distance information between the target virtual character and each virtual character based on the position information of the target virtual character and each virtual character in the game scene;
a fifth acquiring unit, configured to acquire a first character identifier corresponding to each virtual character in the game scene and a second character identifier corresponding to the target virtual character;
a sixth determining unit, configured to determine a relationship category between each virtual character and the target virtual character according to the first character identifier and the second character identifier;
a screening unit, configured to screen out candidate virtual characters according to the relationship categories and the distance information, wherein the candidate virtual characters are virtual characters whose relationship category with the target virtual character is a teammate relationship and whose distance from the target virtual character is smaller than a preset distance threshold; and
a sending unit, configured to send the target user voice data packet to a terminal corresponding to the candidate virtual character, so that the terminal determines a voice energy level and azimuth information corresponding to the candidate virtual character and the target virtual character, plays the voice in the target user voice data packet corresponding to the target virtual character to the candidate virtual character according to the voice energy level and the azimuth information, and plays error voice information corresponding to the target virtual character to a virtual character whose relationship category is an enemy relationship, wherein the error voice information is generated from an error voice opposite to the voice in the target user voice data packet.
10. A computer device, comprising a processor and a memory, the memory storing a computer program, wherein the processor, when calling the computer program in the memory, executes the voice playing method according to any one of claims 1 to 7.
11. A storage medium for storing a computer program, the computer program being loaded by a processor to execute the voice playing method according to any one of claims 1 to 7.
CN202011223741.8A 2020-11-05 2020-11-05 Voice playing method and device, computer equipment and storage medium Active CN112316427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011223741.8A CN112316427B (en) 2020-11-05 2020-11-05 Voice playing method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112316427A CN112316427A (en) 2021-02-05
CN112316427B true CN112316427B (en) 2022-06-10

Family

ID=74315837

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011223741.8A Active CN112316427B (en) 2020-11-05 2020-11-05 Voice playing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112316427B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010594B (en) * 2021-04-06 2023-06-06 深圳市思麦云科技有限公司 XR-based intelligent learning platform
CN113082709A (en) * 2021-04-20 2021-07-09 网易(杭州)网络有限公司 Information prompting method and device in game, storage medium and computer equipment
CN113707165A (en) * 2021-09-07 2021-11-26 联想(北京)有限公司 Audio processing method and device, electronic equipment and storage medium
CN113827954B (en) * 2021-09-24 2023-01-10 广州博冠信息科技有限公司 Regional voice communication method, device, storage medium and electronic equipment
CN114143700B (en) * 2021-12-01 2023-01-10 腾讯科技(深圳)有限公司 Audio processing method, device, equipment, medium and program product

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7985138B2 (en) * 2004-02-17 2011-07-26 International Business Machines Corporation SIP based VoIP multiplayer network games
JP2008299135A (en) * 2007-05-31 2008-12-11 Nec Corp Speech synthesis device, speech synthesis method and program for speech synthesis
CN109550248B (en) * 2018-11-09 2021-05-04 Oppo广东移动通信有限公司 Virtual object position identification method and device, mobile terminal and storage medium
CN110270094A (en) * 2019-07-17 2019-09-24 珠海天燕科技有限公司 A kind of method and device of game sound intermediate frequency control
KR20190101329A (en) * 2019-08-12 2019-08-30 엘지전자 주식회사 Intelligent voice outputting method, apparatus, and intelligent computing device
CN111569416A (en) * 2020-05-22 2020-08-25 网易(杭州)网络有限公司 Sound playing control method of virtual reality game, storage medium and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40038699

Country of ref document: HK

GR01 Patent grant