CN110152309B - Voice communication method, device, electronic equipment and storage medium - Google Patents

Voice communication method, device, electronic equipment and storage medium

Info

Publication number
CN110152309B
Authority
CN
China
Prior art keywords
user
voice
target
channel
terminal
Prior art date
Legal status
Active
Application number
CN201910069735.2A
Other languages
Chinese (zh)
Other versions
CN110152309A (en)
Inventor
郑祥威
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201910069735.2A
Publication of CN110152309A
Application granted
Publication of CN110152309B

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • A63F13/70 Game security or game management aspects
    • A63F13/79 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
    • A63F13/795 Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for finding other players; for building a team; for providing a buddy list
    • A63F13/85 Providing additional services to players
    • A63F13/87 Communicating with other players during game play, e.g. by e-mail or chat

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a voice communication method and apparatus, an electronic device, and a storage medium, belonging to the field of internet technology. According to the embodiment of the invention, a plurality of target voice channels are determined among candidate voice channels, and a plurality of target user sets are determined according to the user information of a first user and the target voice channels, so that the voice message of the first user can be sent to the terminals where the second users of the target user sets are located. The communication needs of the user towards the plurality of target user sets can thus be met at the same time, the user does not need to switch frequently between two voice channels, and voice communication efficiency is improved.

Description

Voice communication method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a voice communication method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, the types of games that can be implemented on terminals are becoming more and more abundant. In some gaming applications, instant voice communication is possible from user to user during the course of a game. For example, voice communication may be conducted in real-time between multiple users within the same game team.
At present, the voice communication process is as follows: a setting button is provided in the application interface of the game application, and the terminal displays a setting page when the user activates the setting button. The setting page provides a team voice option and a range voice option; the user triggers either option as needed, and the terminal starts the team voice or the range voice of the game application according to the option the user triggered. Team voice means voice communication within the game team the user belongs to; range voice means voice communication within a preset range around the user in the virtual scene of the game. When the user needs to switch the voice channel, the user triggers the setting button again and triggers the team voice option or the range voice option again in the setting page, thereby switching between team voice and range voice.
In this kind of voice communication, the user can select only one of team voice and range voice, so in some cases the user has to switch between the two frequently. Each switch requires multiple trigger operations, the operation is cumbersome, the whole switching process takes a long time, and the efficiency of voice communication is therefore low.
Disclosure of Invention
The embodiment of the invention provides a voice communication method, a voice communication device, electronic equipment and a storage medium, which can solve the problem of low voice communication efficiency in the related art. The technical scheme is as follows:
in one aspect, a method for voice communication is provided, the method comprising:
determining a plurality of target voice channels among a plurality of candidate voice channels;
determining a plurality of target user sets according to user information of a first user and the plurality of target voice channels, wherein each target user set corresponds to one target voice channel;
and when receiving a voice message of the first user, sending the voice message to the terminals where the second users of the plurality of target user sets are located.
In another aspect, a voice communication apparatus is provided, the apparatus comprising:
a determining module, configured to determine a plurality of target voice channels among the plurality of candidate voice channels;
the determining module is further configured to determine a plurality of target user sets according to the user information of the first user and the plurality of target voice channels, wherein each target user set corresponds to one target voice channel;
and the sending module is used for sending the voice message to the terminals where the second users of the plurality of target user sets are located when the voice message of the first user is received.
In another aspect, an electronic device is provided and includes one or more processors and one or more memories, where at least one instruction is stored in the one or more memories and loaded into and executed by the one or more processors to implement the operations performed by the voice communication method as described above.
In another aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the voice communication method as described above.
The technical solutions provided by the embodiments of the invention have at least the following beneficial effects:
the method comprises the steps that a plurality of target voice channels are determined in a plurality of candidate voice channels through a terminal, and a plurality of target user sets are determined according to user information of a first user and the target voice channels, so that voice messages of the first user can be sent to a terminal where a second user of the target user sets is located, communication requirements of the user and the target user sets can be met simultaneously, the user does not need to frequently switch between the two voice channels, and voice communication efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 is a schematic illustration of an implementation environment provided by an embodiment of the invention;
Fig. 2 is a flowchart of a voice communication method according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of an application interface provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of an application interface provided by an embodiment of the invention;
Fig. 5 is a schematic diagram of a voice channel setting page according to an embodiment of the present invention;
Fig. 6 is a diagram illustrating a voice channel according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of a voice communication process according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a voice communication apparatus according to an embodiment of the present invention;
Fig. 9 is a block diagram illustrating a terminal 900 according to an exemplary embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present invention. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. An application is installed on the terminal 101, and the server 102 is the background server of the application. The application is configured with a voice communication function; the terminal 101 can perform data interaction with the server 102 based on the application, and the terminal 101 realizes the voice communication function through this data interaction with the server 102.
The voice communication functionality of the application supports voice communication over multiple different voice channels simultaneously. In one possible implementation scenario, the first user may simultaneously communicate voice with multiple target user sets corresponding to multiple target voice channels based on the voice communication function.
The application may be a game application. While the terminal 101 runs the game application, the first user may perform voice communication in real time with a plurality of target user sets corresponding to a plurality of target voice channels, so as to better meet the user's communication needs during the game. In a possible implementation scenario, the game application may include a virtual scene used to simulate a virtual space, which may be an open space, and the virtual scene may simulate a real environment; for example, the virtual scene may include sky, land, sea, and the like, and the land may include environmental elements such as deserts and cities. The terminal 101 displays in the virtual scene a virtual object representing the first user, and the virtual object may take any form, such as a human or an animal, which is not limited by the invention. The first user may control the virtual object to move in the virtual scene; for example, in a shooting game, the first user may control the virtual object to free-fall, glide, or open a parachute in the sky of the virtual scene, to run, jump, crawl, or bend over on land, to swim, float, or dive in the sea, or to move through the virtual scene by riding a vehicle. The above scenes are only examples, and the present invention is not limited thereto. The first user can also control the virtual object to fight other virtual objects with weapons, which may be cold weapons or hot weapons, and the invention is not specifically limited in this respect.
The game application may be a stand-alone application program, or a game plug-in installed in a stand-alone application program, for example a game applet installed in a social application. The terminal 101 may be any device on which the game application is installed, such as a mobile phone terminal, a PAD (Portable Android Device) terminal, or a computer terminal. The embodiment of the present invention is not specifically limited in this respect.
Fig. 2 is a flowchart of a voice communication method according to an embodiment of the present invention. The execution subject of the embodiment of the present invention may be a terminal, and referring to fig. 2, the method includes:
201. The terminal receives a voice communication instruction while the application is running.
The voice communication instruction is used to instruct the terminal to start the voice communication function of the application. In the embodiment of the invention, the terminal runs the application, the application is configured with a voice communication function, and the voice communication function is the function of performing voice communication with other users of the application. While the application is running, the user can trigger the terminal to start the voice communication function as needed. In this step, when the terminal detects a target event on the application interface, the terminal receives the voice communication instruction.
The user may trigger the terminal to start the voice communication function through a trigger operation or through voice, such as a click operation, a gesture operation, or a target speech, and the target event may accordingly be a target trigger operation.
In the first mode, the terminal receives the voice communication instruction based on an operation trigger. In this case, the step is: when the terminal detects a target trigger operation on the application interface, the terminal receives the voice communication instruction.
The application is configured with a voice communication button, the target triggering operation may be a triggering operation on the voice communication button, and when the terminal detects the target triggering operation on the voice communication button, the terminal receives the voice communication instruction. The target triggering operation may be an operation triggered by a touch event of a finger of a user or an operation triggered by a click event of a mouse or other input device. The voice communication button is used for triggering the voice communication function of starting the application. Of course, the voice communication button may be represented as a voice icon in the application interface of the application, or may be a voice on-off button in the setting page of the application, and when the voice icon in the application interface is triggered or the on-off button in the setting interface is triggered, the terminal receives the voice communication instruction and starts the voice communication function.
In a possible implementation manner, the target trigger operation may also be a target gesture operation of the user, for example, the target gesture operation may be a double-click operation or a left-right sliding operation of the user on the application interface, and the like. When the terminal detects the target gesture operation on the application interface, the terminal receives the voice communication instruction. Of course, the target gesture operation may be set as needed, and this is not particularly limited in the embodiment of the present invention.
As shown in fig. 3, the application may be a game application, and a voice communication button is provided in the application interface of the game application; when the voice communication button is triggered, the terminal starts the voice communication function and the first user can speak to other users in the application. In addition, a microphone button may be provided below the voice communication button; when the microphone button is triggered, the terminal starts to collect the voice message of the first user in real time. Fig. 4 is a schematic view of an actual application interface of the game application and shows more clearly how the voice communication button and the microphone button appear in the application interface.
In the second mode, the terminal receives a voice communication instruction based on the triggering of the voice signal. When the terminal detects a target voice in the surrounding environment, the terminal receives the voice communication instruction.
The terminal can detect the voice in the surrounding environment in real time, and when the terminal detects the target voice in the surrounding environment, the terminal receives the voice communication instruction. The target speech may be set based on needs, which is not specifically limited in this embodiment of the present invention. For example, the target voice may be "voice," "voice chat," or the like.
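The two trigger modes of step 201 can be sketched as follows, assuming a generic client-side event loop; the event field names, the gesture names, and the target speech phrases are illustrative assumptions, not values fixed by the invention.

```python
from typing import Callable

TARGET_SPEECH = {"voice", "voice chat"}  # illustrative target speech phrases

def on_ui_event(event: dict, start_voice_communication: Callable[[], None]) -> None:
    # Mode 1: operation trigger, e.g. a tap on the voice communication button
    # or a target gesture operation on the application interface.
    if event.get("type") == "tap" and event.get("target") == "voice_communication_button":
        start_voice_communication()
    elif event.get("type") == "gesture" and event.get("name") in {"double_tap", "left_right_swipe"}:
        start_voice_communication()

def on_ambient_speech(transcript: str, start_voice_communication: Callable[[], None]) -> None:
    # Mode 2: speech trigger, a target voice detected in the surrounding environment.
    if transcript.strip().lower() in TARGET_SPEECH:
        start_voice_communication()
```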
In one possible implementation scenario, the application may be a game application in which the first user can trigger the start of the voice communication function and, based on that function, perform voice communication with other users of the game application. In the embodiment of the invention, the terminal can display the virtual scene of the game application in the application interface; while the first user controls the virtual object to play in the virtual scene, the first user's communication needs can be met in real time through instant voice communication with other users of the game application. In addition, during the game the user's experience is enhanced visually and aurally at the same time, which improves the user experience.
In the embodiment of the invention, the terminal can quickly start the voice communication function based on a plurality of triggering modes of a user, so that the efficiency of voice communication is improved, and the terminal can support voice triggering and operation triggering, so that the practicability of voice communication is improved.
202. The terminal determines a plurality of target voice channels among the plurality of candidate voice channels.
Wherein each candidate voice channel is used to indicate a user set determination mode, and the user set comprises a plurality of users performing voice communication with the first user. The terminal may determine a plurality of candidate voice channels for the application and select a plurality of target voice channels from the plurality of candidate voice channels.
Wherein, a plurality of candidate voice channels are configured in the application. The first user can select a target voice channel by himself; or, the terminal may select a target voice channel for the first user based on a historical channel used by the first user; or, the terminal can also directly select a preset voice channel as the target voice channel for the first user. Accordingly, the step can determine at least two target voice channels in at least two of the following four ways.
In the first way, if the terminal selects the preset first voice channel, this step may be: the terminal acquires a first voice channel from the candidate voice channels as the target voice channel.
The terminal may store a preset channel identifier of a preset voice channel in advance, and when receiving a voice communication instruction, the terminal acquires the preset voice channel identifier and uses a first voice channel corresponding to the preset voice channel identifier as the target voice channel. The preset voice channel identifier may be stored in the configuration file of the application, and when the terminal starts the voice communication function, a first voice channel corresponding to the preset voice channel identifier in the configuration file is selected as a target voice channel by default.
If the number of the preset voice channel identifiers is multiple, the terminal can acquire a first voice channel identifier in the multiple preset voice channel identifiers, and use a first voice channel corresponding to the first voice channel identifier as the target voice channel.
In the embodiment of the present invention, the server may also update the preset voice channel in real time: the server may adopt a certain update policy, update the preset voice channel in real time, and send the updated preset voice channel to the terminal. In one possible implementation, the update policy may be: updating the preset voice channel based on the numbers of times the plurality of voice channels are used. The server may collect a plurality of historical voice channels of a plurality of users in real time and, according to the numbers of times these historical voice channels were used, take the historical voice channel used most often as the preset voice channel. Alternatively, the update policy may be: updating the preset voice channel based on the numbers of times the candidate voice channels were used within a preset time period, for example updating based on the numbers of uses of the voice channels within the last week. Of course, the update policy may also be: updating based on the usage durations of the plurality of voice channels, which is not specifically limited in the embodiment of the present invention.
In the second way, if the terminal selects a preset second voice channel for the user, this step may be: and the terminal acquires a second voice channel from the candidate voice channels as the target voice channel.
In this step, the manner in which the terminal acquires the second voice channel is the same as the implementation process of the first manner, and is not described herein again.
It should be noted that, in one possible implementation, the terminal may obtain two preset voice channels, that is, the first voice channel and the second voice channel, as the multiple target voice channels through the first manner and the second manner.
In the third mode, the first user selects the target voice channel. The terminal obtains a third voice channel corresponding to the channel selection instruction as the target voice channel in the candidate voice channels according to the channel selection instruction of the first user.
Wherein the channel selection instruction is used for indicating the selection of the candidate voice channels. In this step, the terminal may display a voice communication setting interface, where the voice communication setting interface includes a plurality of candidate voice channel options, each candidate voice channel option corresponds to one candidate voice channel, and the first user may select a target voice channel from the plurality of candidate voice channel options. The terminal receives a channel selection instruction of the first user, and acquires a third voice channel corresponding to the channel selection instruction as the target voice channel.
When the terminal detects that any candidate voice channel option in the voice communication setting interface is selected, the terminal receives the channel selection instruction and acquires the selected third voice channel according to the channel selection instruction.
As shown in fig. 5, the terminal may display the voice communication setting interface and display a plurality of voice channel options in it. The candidate voice channel options may include a team channel option, and the team channel corresponding to this option indicates that voice communication is performed within the game team where the first user is located. Of course, the candidate voice channels may further include a distance channel and a vitality channel, where the distance channel is used for voice communication with users within a preset distance radius of the first user, and the vitality channel is used for voice communication with users within the distance radius corresponding to the first user's vitality related information. In addition, the terminal may further provide an all-channel option, which selects all voice channels of the application for voice communication and may indicate a plurality of channels including the team channel, the distance channel, or the vitality channel. It should be noted that fig. 5 shows only the team channel option and the all-channel option for the purpose of description; options corresponding to the distance channel, the vitality channel, and the like, which are also included in the embodiment of the present invention, may likewise be displayed in the setting interface but are not shown in fig. 5. In addition, as shown in fig. 5, the first user may also set the main volume, the microphone volume, the sound effect, and the like of the voice communication in the setting interface.
In a fourth mode, the terminal selects a target voice channel based on the historical channels of the first user. This step may then be: the terminal determines a fourth voice channel according to the historical channel usage information of the first user, and determines the target voice channel from the fourth voice channel.
Wherein the historical channel usage information of the first user may include: the using time of the candidate voice channels, the channel identification of the used voice channel and other information of the first user. In this step, the terminal may count the number of times of use of the candidate voice channels by the first user within a preset time period according to the historical channel use information of the first user, and acquire the candidate voice channel of which the number of times of use meets a preset condition as a fourth voice channel.
The preset condition may be: being among a preset number of candidate voice channels with the largest numbers of uses. In a possible implementation manner, the terminal may arrange the plurality of candidate voice channels in descending order of their numbers of uses, select the top preset number of candidate voice channels from this descending arrangement, and determine these candidate voice channels as the fourth voice channel. The preset number may be set as needed; for example, the preset number may be 1, 3, and so on. The embodiment of the present invention is not specifically limited in this respect.
Of course, the terminal may also count the durations for which the first user used the candidate voice channels according to the historical channel usage information of the first user, and take the candidate voice channel with the longest usage duration as the fourth voice channel.
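The fourth mode can be sketched as follows; the usage-log format, the one-week window, and the preset number are assumptions made for illustration. The terminal counts, within the preset time period, how many times the first user used each candidate voice channel, sorts the channels in descending order of usage count, and keeps the top preset number as the fourth voice channel.

```python
from collections import Counter
import time

def determine_fourth_voice_channels(usage_log: list[dict],
                                    candidate_ids: set[str],
                                    window_seconds: float = 7 * 24 * 3600,
                                    preset_count: int = 1,
                                    now: float | None = None) -> list[str]:
    """Pick the candidate channels the first user used most often within the window."""
    now = time.time() if now is None else now
    counts = Counter(
        entry["channel_id"]
        for entry in usage_log            # each entry: {"channel_id": ..., "used_at": ...}
        if entry["channel_id"] in candidate_ids
        and now - entry["used_at"] <= window_seconds
    )
    ranked = [channel_id for channel_id, _ in counts.most_common()]
    return ranked[:preset_count]
```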
In one possible implementation, the candidate voice channels are used to indicate a plurality of different user set determination manners, which may include but are not limited to: determining a user set based on a preset distance radius; determining a user set based on the feature element where the user is located; and determining a user set based on the user association relation, or determining the user set based on the vitality related information of the user, and the like.
It should be noted that the terminal may obtain two target voice channels based on any two of the four manners, and certainly, the terminal may also obtain three target voice channels based on any three manners, or obtain four target voice channels based on the four manners, which is not specifically limited in the embodiment of the present invention. In addition, the terminal can also screen out the target voice channel based on the user characteristics of the user. The terminal filters out a target voice channel matched with the user characteristic of the first user from the plurality of voice channels according to the user characteristic of the first user. Among other things, the user characteristics may include, but are not limited to: the age of the user, the frequency or period of use of the application by the user, etc. For example, the terminal may previously store the usage frequency matching each candidate voice channel, and then the terminal acquires the target voice channel matching the usage frequency of the first user based on a plurality of usage frequency ranges corresponding to a plurality of voice channels. When the user characteristics include the age of the user and the usage time period of the application, the terminal acquires the target voice channel based on the age and the usage time period of the user, which is the same as the above-mentioned process of acquiring the target voice channel based on the usage frequency, and is not described herein again.
It should be noted that the user can select the multiple target voice channels by himself, so that the multiple target voice channels determined by the terminal better meet the voice communication requirement of the user. In addition, the terminal can also determine a target voice channel based on the historical channel use information of the first user, accurately screen the target voice channel closer to the actual requirement of the user for the user, and improve the accuracy of determining the target voice channel.
203. The terminal determines a plurality of target user sets according to the user information of the first user and the plurality of target voice channels.
Wherein each set of target users corresponds to a target voice channel. The target user set refers to a user set composed of at least one second user performing voice communication with the first user. In the embodiment of the invention, the terminal acquires the user information of the first user, and acquires the plurality of target user sets according to the user information of the first user and the plurality of target voice channels.
The user information may include, but is not limited to: the first user's vitality related information, the first user's scene position, the feature element where the first user is located, the user identifiers of multiple users associated with the first user, and the like. Based on the four user set determination manners indicated by the multiple candidate voice channels, the process of determining the target user sets by the terminal may include the following four cases. The terminal can determine at least two target user sets through the implementations corresponding to at least two of these cases.
In a first case, the target voice channel includes: a voice channel for indicating a first user set determination manner, which determines a user set based on the vitality related information of the user.
The terminal acquires the vitality related information of the first user, determines a first distance radius corresponding to the vitality related information, acquires at least one second user of which the virtual distance with the first user is smaller than the first distance radius, and forms a target user set.
In the embodiment of the present invention, a virtual scene may be provided in the application, and the virtual scene may include a virtual object that is used to represent the first user in a virtual manner. The terminal may represent the location of the first user with a scene location of the virtual object in the virtual scene. The vitality related information is used for representing the strength of the vitality of the virtual object in the virtual scene. The vitality-related information may be a contribution value, a blood volume, a vital value, etc. of the virtual object. The terminal can obtain the vitality related information of the first user, obtain a first distance radius corresponding to the vitality related information of the first user from the corresponding relation between the vitality related information and the distance radius according to the vitality related information, obtain at least one second user of which the virtual distance with the first user is smaller than the first distance radius based on the first distance radius, and form a target user set. The terminal can acquire at least one second user of which the virtual distance from the first user is smaller than the first distance radius from the server.
In a possible implementation scenario, the stronger the vitality of the virtual object in the virtual scene indicated by the vitality related information, the larger the corresponding first distance radius may be; the weaker the indicated vitality, the smaller the corresponding first distance radius may be. Taking blood volume as an example of the vitality related information: when the blood volume of the first user is smaller, the vitality of the virtual object corresponding to the first user is weaker, and the first distance radius corresponding to the smaller blood volume is smaller. In the virtual scene, the first user can then ask for help from second users within a nearby area through voice communication, for example from teammates close to the first user, so that the first user can be rescued quickly. This enriches the user's communication scenarios and further improves the game experience.
In a possible embodiment, the process of determining at least one second user based on the vitality-related information of the first user may also be performed by the server, and the process may be: the terminal may send a channel identifier of a voice channel indicating a first user set determination manner to the server, and the server acquires information related to vitality of the first user according to the channel identifier. The server determines a first distance radius corresponding to the vitality related information, acquires at least one second user of which the virtual distance with the first user is smaller than the first distance radius to form a target user set, and sends a user identifier of the at least one second user to the terminal. The process executed by the server is the same as the process executed by the terminal, and is not described herein again.
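The first case can be sketched as follows; whether it runs on the terminal or the server is, as noted above, an implementation choice. The concrete mapping from vitality related information (here, blood volume) to the first distance radius, the 2D scene coordinates, and the Euclidean distance are illustrative assumptions.

```python
import math

# Illustrative mapping: weaker vitality (lower blood volume) -> smaller radius.
BLOOD_VOLUME_TO_RADIUS = [(30, 20.0), (70, 50.0), (100, 100.0)]  # (max blood %, radius)

def first_distance_radius(blood_volume_percent: float) -> float:
    for max_blood, radius in BLOOD_VOLUME_TO_RADIUS:
        if blood_volume_percent <= max_blood:
            return radius
    return BLOOD_VOLUME_TO_RADIUS[-1][1]

def vitality_based_user_set(first_user: dict, all_users: list[dict]) -> set[str]:
    """Second users whose virtual distance to the first user is below the radius."""
    radius = first_distance_radius(first_user["blood_volume"])
    first_pos = first_user["scene_position"]
    return {
        u["user_id"]
        for u in all_users
        if u["user_id"] != first_user["user_id"]
        and math.dist(first_pos, u["scene_position"]) < radius
    }
```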
In a second case, the target voice channel includes: a voice channel for indicating a second user set determination manner, which determines a user set based on a preset distance radius.
According to the preset second distance radius, the terminal acquires at least one second user whose virtual distance to the first user is smaller than the second distance radius, forming a target user set.
Wherein, a virtual scene can be provided in the application, and the user set can include a plurality of second users located in a certain virtual space range in the virtual scene. A virtual object may be included in the virtual scene that is virtual for representing the first user. The terminal may represent the location of the first user with the scene location of the virtual object. The server stores scene positions of a plurality of users, the terminal can send a channel identifier of a voice channel for indicating a second user set determination mode to the server, the server obtains the scene position of the first user and the preset second distance radius according to the channel identifier, the server obtains at least one second user of which the virtual distance with the first user is smaller than the second distance radius based on the scene position of the first user and the second distance radius, and sends the user identifier of the at least one second user to the terminal.
In a possible implementation, the server may determine a target virtual scene range, which is a circular area range having a virtual distance from the first user smaller than a second distance radius, according to the second distance radius and the scene position of the virtual object. And the terminal determines at least one second user positioned in the range of the target virtual scene to form a target user set. Of course, the process of determining the target virtual scene range may also be executed by a terminal, where the terminal determines the target virtual scene range, sends the range information of the determined target virtual scene range to a server, and the server determines, based on the target virtual scene range, at least one second user whose virtual distance from the first user is smaller than a second distance radius.
During the running process of the game application, the terminal can detect the scene position of the virtual object in the virtual scene in real time. The terminal stores scene position coordinates corresponding to each scene position in the virtual scene in advance, and the terminal can adopt the scene position coordinates to represent the scene position of the virtual object.
It should be noted that, based on the scene position of the first user, the terminal accurately determines the plurality of second users located within a virtual space range around the first user that does not exceed the second distance radius. In addition, the terminal can extend voice communication to a certain area based on the scene position coordinates of the virtual object, which improves the accuracy of determining the voice communication range, allows the first user to communicate by voice with surrounding users in the virtual scene, and facilitates communication between the user and users in the surrounding virtual environment, greatly improving the user's game experience in the virtual scene and the overall user experience.
In a third case, the target voice channel includes: a voice channel for indicating a third user set determination manner, which determines the user set based on the feature element where the user is located.
The terminal acquires the feature element of the first user, acquires at least one second user in the area corresponding to the feature element, and forms a target user set.
The terminal can acquire scene position coordinates of a virtual object in the virtual scene, determine feature elements corresponding to the scene position coordinates from a mapping relation between the scene position coordinates and the feature elements according to the scene position coordinates, acquire at least one second user in an area corresponding to the feature elements, and form a target user set. The terminal can send the area information of the area corresponding to the feature element to the server, the server obtains at least one second user in the area corresponding to the feature element, and sends the obtained user identifier of the at least one second user to the terminal.
The terminal can store the mapping relation between the scene position coordinates and the feature elements, and the terminal acquires the feature elements where the virtual objects are located according to the scene position coordinates of the virtual objects. The feature element may be an element having a certain outline or a building structure in the virtual scene, for example, the feature element may be a warehouse, a villa, a hospital, or the like.
In a possible implementation manner, the process that the terminal determines the target user set based on the feature element may also be executed by the server, and then the terminal may send a channel identifier of a voice channel for indicating a determination manner of the third user set to the server, and the server acquires, based on the channel identifier, at least one second user in an area corresponding to the feature element where the first user is located, so as to form one target user set. The process executed by the server is the same as the process executed by the terminal, and is not described herein again.
The terminal may determine the target user set based on feature elements that have a definite outline, such as a hospital or a park, so that multiple users located in the same feature element can perform voice communication in real time, enriching the usage scenarios of the voice communication function. Moreover, because the second users obtained are those in the area corresponding to a specific building or geographic element, the virtual area in which the second users are located is visually better defined, which improves the accuracy of determining the target user set and the user's game experience.
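A sketch of the third case, assuming feature elements are stored as axis-aligned regions keyed by name (the region format and names such as "hospital" are illustrative): the scene position coordinates of the virtual object are mapped to the feature element that contains them, and all second users inside that element's area form the target user set.

```python
# Illustrative feature-element map: name -> (x_min, y_min, x_max, y_max) in scene coordinates.
FEATURE_ELEMENTS = {
    "warehouse": (0.0, 0.0, 40.0, 30.0),
    "villa": (100.0, 50.0, 140.0, 90.0),
    "hospital": (200.0, 0.0, 260.0, 60.0),
}

def feature_element_at(position: tuple[float, float]) -> str | None:
    x, y = position
    for name, (x0, y0, x1, y1) in FEATURE_ELEMENTS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def feature_element_user_set(first_user: dict, all_users: list[dict]) -> set[str]:
    """Second users located in the same feature element as the first user."""
    element = feature_element_at(first_user["scene_position"])
    if element is None:
        return set()
    return {
        u["user_id"]
        for u in all_users
        if u["user_id"] != first_user["user_id"]
        and feature_element_at(u["scene_position"]) == element
    }
```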
In a fourth case, the target voice channel includes: a voice channel for indicating a fourth user set determination manner, which determines the user set based on the association relations between users.
The terminal acquires at least one second user associated with the first user to form a target user set.
The terminal may store a user identifier of at least one second user associated with the first user, and the terminal obtains the user identifier of the at least one second user associated with the first user and determines a user set formed by the at least one second user as a target user set. Wherein the terminal may obtain the user identification of the at least one second user associated with the first user from the server.
For example, the first user is associated with the second users in the game team where the first user is located; the target user set may then be the game team formed by a plurality of users who play the game together.
In a possible implementation manner, the process that the terminal determines the target user set based on the association relationship of the users may also be executed by the server, and then the terminal may send a channel identifier of a voice channel for indicating the fourth user set determination manner to the server, and the server acquires at least one second user associated with the first user based on the channel identifier to form a target user set. The server sends the user identification of the at least one second user to the terminal. The process executed by the server is the same as the process executed by the terminal, and is not described herein again.
In a possible implementation manner, the terminal may send the channel identifiers corresponding to the target voice channels to a server, and the server determines the plurality of target user sets according to the plurality of target voice channels and the user information of the first user. This step may then be: the terminal sends a voice communication request to the server according to the target voice channels, the voice communication request carrying the target voice channel identifiers; the server determines the plurality of target user sets based on the target voice channel identifiers and the user information of the first user, and sends the user identifiers of the second users included in the plurality of target user sets to the terminal. Of course, the process in which the server determines the plurality of target user sets is the same as the process in which the terminal determines them, and is not described here again.
As shown in fig. 6, take a team voice channel and a range voice channel as the plurality of voice channels. The team voice channel indicates that the target user set is determined based on the game team where the first user is located, and the voice communication manner corresponding to the team channel may be: voice communication within the game team where the first user is located. The voice communication manner corresponding to the range voice channel may be: voice communication with users whose distance to the first user is smaller than the first distance radius. The voice channels indicated by the all-channel option in fig. 5 may include the team voice channel and the range voice channel. As shown in fig. 7, the first user may select the team voice channel or the range voice channel based on the options in the setting interface; the terminal detects in real time whether the team voice channel and the range voice channel are selected, and when the all-channel option is detected to be selected, the terminal opens both the team voice channel and the range voice channel. Of course, the setting interface may further include a team-only option, and when the team-only option is detected to be selected, the terminal opens only the team voice channel.
It should be noted that the server may use the user associated with the first user as a target user set, so that voice communication may be performed among multiple users associated with each other in the game application, and the achievable scenarios of voice communication are enriched, thereby better meeting the voice communication requirements of the users, and improving the practicability of the voice communication method provided by the embodiment of the present invention.
204. When receiving the voice message of the first user, the terminal sends the voice message to the terminals where the second users of the plurality of target user sets are located.
When receiving the voice message of the first user, the terminal may send the voice message to a server, and the server sends the voice message to the terminals where the users of the plurality of target user sets are located.
The terminal can collect the voice message of the first user from the surrounding environment in real time and send the voice message of the first user to the server. In a possible implementation manner, the terminal further detects a scene position of the virtual object in the virtual scene of the application in real time, and synchronizes the scene position of the virtual object to the server in real time.
As can be seen from step 203, either the terminal sends the channel identifiers of the target voice channels to the server and the server determines the plurality of target user sets, or the terminal determines the plurality of target user sets itself; correspondingly, the voice communication request may carry either the channel identifiers of the target voice channels or the user identifiers in the plurality of target user sets.
In this step, the server receives the voice message of the first user sent by the terminal, and sends the voice message to the terminal where the second user of the target user set is located. When the server receives the voice message of the first user of the terminal, the server sends the voice message to the terminal where the second users of the target user sets are located in real time according to the second users of the target user sets corresponding to the first user.
The server can receive the voice communication request of the terminal, and when the voice communication request carries the user identifiers in the target user sets, the server directly sends the voice message to the terminal where the second user of the target user sets is located according to the user identifiers in the target user sets. Or, when the voice communication request carries the channel identifier of the target voice channel, the server may first determine the multiple target user sets based on the channel identifier of the target voice channel, and send the voice message to the terminal where the second user of the multiple target user sets is located according to the users in the multiple target user sets. The process of determining the multiple target user sets by the server is the same as the process of determining the multiple target user sets by the terminal in step 203, and is not described here any more.
Of course, the server may also send the voice message of the second user in the multiple target user sets to the terminal of the first user in real time, so as to implement voice communication between the first user and the second user in the multiple target user sets.
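A minimal server-side sketch of step 204 (the request format and all names are assumptions): if the voice communication request already carries the user identifiers of the target user sets, the server forwards the voice message directly; if it carries the channel identifiers of the target voice channels, the server first resolves the target user sets and then forwards the message.

```python
from typing import Callable

def handle_voice_message(request: dict,
                         voice_message: bytes,
                         resolve_user_sets: Callable[[list[str], dict], dict[str, set[str]]],
                         send_to_terminal: Callable[[str, bytes], None]) -> None:
    if "user_ids" in request:
        # The request carries the user identifiers of the target user sets directly.
        recipients = set(request["user_ids"])
    else:
        # The request carries channel identifiers; resolve the target user sets first.
        user_sets = resolve_user_sets(request["channel_ids"], request["first_user_info"])
        recipients = set().union(*user_sets.values())
    recipients.discard(request["first_user_id"])  # do not echo the message back to the sender
    for user_id in recipients:
        send_to_terminal(user_id, voice_message)
```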
In a possible implementation manner, the target user set corresponding to the second user includes the first user. The server also needs to implement voice communication between the first user and the second user based on a target user set corresponding to the second user in the target user set. The process may be: the server acquires a plurality of target user sets corresponding to the second user, and when the plurality of target user sets corresponding to the second user comprise the first user, the server sends the voice message of the first user to the terminal where the second user is located, and sends the voice message of the second user to the terminal where the first user is located. Of course, the server may also send the scene position of the first user to the terminal where the second user is located, and the server also sends the scene position of the second user to the terminal where the first user is located. The terminal receives the voice message of the second user and the scene location of the second user, which are sent by the server.
The terminal receives the voice message of the second user sent by the server and plays the voice of the second user. When the terminal also receives the scene position of the second user sent by the server, the terminal can play the voice message of the second user based on that scene position. The process may be: when receiving the voice message of any second user in the plurality of target user sets, the terminal plays the voice message of that second user in the voice playing manner corresponding to the scene position of that second user. The terminal can play the voice of the second user with a sound effect adapted to the relative position of the first user and the second user: the terminal determines the relative position of the first user and the second user according to the scene position of the first user and the scene position of the second user, acquires the voice playing manner corresponding to that relative position, and plays the voice of the second user in that manner.
In one possible implementation, the user may also perform voice communication through a peripheral device such as a headset. The voice playing manners may include a left-channel-enhanced voice playing manner, a right-channel-enhanced voice playing manner, and the like. The terminal may determine whether the second user is located on the left side or the right side of the first user according to the scene position of the first user and the scene position of the second user. When the terminal determines that the second user is located on the left side of the first user, the terminal may play the voice message of the second user in the left-channel-enhanced manner; when the second user is located on the right side, the terminal may use the right-channel-enhanced manner. Taking the left-channel-enhanced manner as an example, the terminal may increase the volume of the left channel and decrease the volume of the right channel while playing the voice message of the second user. Of course, the voice playing manners may further include a 3D surround-sound playing manner; for example, when the second user keeps moving around the first user along a circular track, the terminal may play the voice of the second user using the 3D surround-sound playing manner.
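A sketch of the position-dependent voice playing manner; the gain values and the coordinate convention (x grows to the right of the first user's view) are assumptions made for illustration, not values specified by the invention.

```python
def playback_gains(first_pos: tuple[float, float],
                   second_pos: tuple[float, float]) -> tuple[float, float]:
    """Return (left_gain, right_gain) for playing the second user's voice.

    A second user to the left of the first user gets a left-channel-enhanced
    playing manner, and vice versa.
    """
    dx = second_pos[0] - first_pos[0]
    if dx < 0:       # second user on the left side
        return (1.0, 0.4)
    elif dx > 0:     # second user on the right side
        return (0.4, 1.0)
    return (1.0, 1.0)  # directly ahead or behind: no channel enhancement

def play_second_user_voice(voice_frame: bytes, left_gain: float, right_gain: float) -> None:
    # Placeholder for the terminal's audio pipeline: apply the per-channel gains
    # and hand the frame to the device's stereo output.
    ...
```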
It should be noted that the terminal may also collect the channel data of the first user in the voice communication in real time, and provide basic data for other behaviors of the first user in the application based on this channel data. For example, according to the voice channel information or the voice communication range of the first user, the terminal may select for the first user, as a teammate, a third user whose voice channel information or voice communication range is the same as that of the first user.
It should be noted that the voice communication method of the present application may also be implemented by a server; the embodiment of the present invention is described by taking a terminal as an example, but the embodiment of the present invention does not specifically limit the execution subject that implements the voice communication method. When implemented by the terminal, voice communication is performed through the process of steps 201 to 204 above. When implemented by the server, voice communication may be implemented through the process of steps 202 to 204 above. In addition, the process implemented by the server is the same as the process implemented by the terminal, and is not described in detail here.
In the embodiment of the invention, the terminal determines a plurality of target voice channels among a plurality of candidate voice channels, and determines a plurality of target user sets according to the user information of the first user and the target voice channels, so that the voice message of the first user can be sent to the terminals where the second users of the target user sets are located. The communication needs of the user towards the plurality of target user sets can thus be met at the same time, the user does not need to switch frequently between two voice channels, and voice communication efficiency is improved.
Fig. 8 is a schematic structural diagram of a voice communication apparatus according to an embodiment of the present invention. Referring to fig. 8, the apparatus includes: a determination module 801 and a sending module 802.
A determining module 801, configured to determine a plurality of target voice channels among a plurality of candidate voice channels;
the determining module 801 is further configured to determine a plurality of target user sets according to the user information of the first user and the plurality of target voice channels, where each target user set corresponds to one target voice channel;
a sending module 802, configured to send the voice message to a terminal where a second user of the multiple target user sets is located when receiving the voice message of the first user.
In one possible implementation, the determining module 801 includes:
a first obtaining unit, configured to obtain a first voice channel from the candidate voice channels as the target voice channel;
a second obtaining unit, configured to obtain a second voice channel from the candidate voice channels as the target voice channel;
a third obtaining unit, configured to obtain, in the multiple candidate voice channels, a third voice channel corresponding to the channel selection instruction as the target voice channel according to the channel selection instruction of the first user;
and the determining unit is used for determining a fourth voice channel according to the historical channel use information of the first user and determining the target voice channel from the fourth voice channel.
In a possible implementation manner, the determining unit is further configured to count the number of times of usage of the candidate voice channels by the first user within a preset time period, and acquire a candidate voice channel whose number of times of usage meets a preset condition as a fourth voice channel.
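One plausible reading of this counting step is sketched below: usage records within the preset time period are tallied, and channels whose count meets a threshold become the fourth voice channel candidates. The log format (channel_id, timestamp) and the threshold parameter are assumptions made for the example.

import time
from collections import Counter

def fourth_voice_channels(usage_log, period_seconds, min_uses):
    # usage_log: iterable of (channel_id, timestamp) pairs for the first user.
    # Count uses inside the preset period ending now and keep channels
    # whose usage count meets the preset condition (here: >= min_uses).
    cutoff = time.time() - period_seconds
    counts = Counter(channel_id for channel_id, ts in usage_log if ts >= cutoff)
    return [channel_id for channel_id, n in counts.items() if n >= min_uses]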
In a possible implementation manner, the third obtaining unit is further configured to display a voice communication setting interface, where the voice communication setting interface includes a plurality of candidate voice channel options, and each candidate voice channel option corresponds to one candidate voice channel; receiving a channel selection instruction of the first user; and acquiring a third voice channel corresponding to the channel selection instruction of the first user as the target voice channel.
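The option-to-channel mapping behind the setting interface might look like the short sketch below; the option identifiers and the callback shape are illustrative assumptions, since the patent does not prescribe a data structure.

def build_channel_options(candidate_channels):
    # One selectable option per candidate voice channel.
    return {option_id: channel for option_id, channel in enumerate(candidate_channels)}

def on_channel_selected(option_id, options):
    # The channel bound to the selected option becomes the third (target) voice channel.
    return options[option_id]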
In one possible implementation, the determining module 801 includes:
a fourth obtaining unit, configured to obtain vitality related information of the first user, determine a first distance radius corresponding to the vitality related information, obtain at least one second user whose virtual distance from the first user is smaller than the first distance radius, and form a target user set;
a fifth obtaining unit, configured to obtain, according to a preset second distance radius, at least one second user whose virtual distance from the first user is smaller than the second distance radius, and form a target user set;
a sixth obtaining unit, configured to obtain a feature element where the first user is located, obtain at least one second user in an area corresponding to the feature element, and form a target user set;
and the seventh acquiring unit is used for acquiring at least one second user associated with the first user to form a target user set.
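The four ways of forming a target user set enumerated by these units can be sketched as plain functions. The distance metric, the vitality-to-radius mapping, and the user attributes (x, y, feature_element_id, associated_user_ids) are assumptions introduced for illustration only.

import math

def virtual_distance(a, b):
    # Straight-line distance between two users' scene positions.
    return math.hypot(a.x - b.x, a.y - b.y)

def set_by_vitality(first_user, users, radius_for_vitality):
    # First distance radius derived from the first user's vitality-related information.
    radius = radius_for_vitality(first_user.vitality)
    return {u for u in users if virtual_distance(first_user, u) < radius}

def set_by_preset_radius(first_user, users, second_radius):
    # Preset second distance radius.
    return {u for u in users if virtual_distance(first_user, u) < second_radius}

def set_by_feature_element(first_user, users):
    # Second users inside the area of the feature element the first user occupies.
    return {u for u in users if u.feature_element_id == first_user.feature_element_id}

def set_by_association(first_user, users):
    # Second users associated with the first user (e.g. friends or team members).
    return {u for u in users if u.user_id in first_user.associated_user_ids}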
In one possible embodiment, the apparatus further comprises:
and the playing module is used for playing the voice message of the second user in a voice playing mode corresponding to the scene position of the second user according to the scene position of the second user when the voice message of any second user in the target user sets is received.
In one possible implementation, the target user set corresponding to the second user in the plurality of target user sets includes the first user.
In the embodiment of the present invention, the terminal determines a plurality of target voice channels among a plurality of candidate voice channels and determines a plurality of target user sets according to the user information of the first user and the target voice channels, so that the voice message of the first user can be sent to the terminals where the second users in the target user sets are located. This simultaneously satisfies the user's need to communicate with the plurality of target user sets, spares the user from frequently switching between voice channels, and improves voice communication efficiency.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
It should be noted that when the voice communication apparatus provided in the above embodiment performs voice communication, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the voice communication apparatus and the voice communication method provided by the above embodiments belong to the same concept; their specific implementation processes are described in the method embodiments and are not repeated here.
Fig. 9 is a block diagram illustrating a terminal 900 according to an exemplary embodiment of the present invention. The terminal 900 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 900 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
In general, terminal 900 includes: a processor 901 and a memory 902.
Processor 901 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 901 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 901 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, also called a Central Processing Unit (CPU), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 901 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 901 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 902 may include one or more computer-readable storage media, which may be non-transitory. The memory 902 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 902 is used to store at least one instruction for execution by processor 901 to implement the voice communication methods provided by the method embodiments herein.
In some embodiments, terminal 900 can also optionally include: a peripheral interface 903 and at least one peripheral. The processor 901, memory 902, and peripheral interface 903 may be connected by buses or signal lines. Various peripheral devices may be connected to the peripheral interface 903 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of a radio frequency circuit 904, a touch display screen 905, a camera 906, an audio circuit 907, a positioning component 908, and a power supply 909.
The peripheral interface 903 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 901 and the memory 902. In some embodiments, the processor 901, memory 902, and peripheral interface 903 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 901, the memory 902 and the peripheral interface 903 may be implemented on a separate chip or circuit board, which is not limited by this embodiment.
The radio frequency circuit 904 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 904 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 904 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 904 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 904 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 904 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 905 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 905 is a touch display screen, the display screen 905 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 901 as a control signal for processing. At this point, the display screen 905 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 905, disposed on the front panel of the terminal 900; in other embodiments, there may be at least two display screens 905, respectively disposed on different surfaces of the terminal 900 or in a foldable design; in still other embodiments, the display screen 905 may be a flexible display disposed on a curved surface or a folded surface of the terminal 900. Furthermore, the display screen 905 may be arranged in a non-rectangular irregular figure, i.e., a shaped screen. The display screen 905 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 906 is used to capture images or video. Optionally, camera assembly 906 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 906 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuit 907 may include a microphone and a speaker. The microphone is used for collecting sound waves of the user and the environment, converting the sound waves into electrical signals, and inputting them to the processor 901 for processing, or to the radio frequency circuit 904 for voice communication. For stereo sound acquisition or noise reduction purposes, there may be multiple microphones disposed at different locations of the terminal 900. The microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker is used to convert electrical signals from the processor 901 or the radio frequency circuit 904 into sound waves. The speaker may be a conventional thin-film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, it can convert an electrical signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuit 907 may also include a headphone jack.
The positioning component 908 is used to locate the current geographic location of the terminal 900 for navigation or LBS (Location Based Service). The positioning component 908 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 909 is used to supply power to the various components in the terminal 900. The power supply 909 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 909 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast-charging technology.
In some embodiments, terminal 900 can also include one or more sensors 910. The one or more sensors 910 include, but are not limited to: acceleration sensor 911, gyro sensor 912, pressure sensor 913, fingerprint sensor 914, optical sensor 915, and proximity sensor 916.
The acceleration sensor 911 can detect the magnitude of acceleration in three coordinate axes of the coordinate system established with the terminal 900. For example, the acceleration sensor 911 may be used to detect the components of the gravitational acceleration in three coordinate axes. The processor 901 can control the touch display 905 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 911. The acceleration sensor 911 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 912 may detect a body direction and a rotation angle of the terminal 900, and the gyro sensor 912 may cooperate with the acceleration sensor 911 to acquire a 3D motion of the user on the terminal 900. The processor 901 can implement the following functions according to the data collected by the gyro sensor 912: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 913 may be disposed on the side bezel of terminal 900 and/or underneath touch display 905. When the pressure sensor 913 is disposed on the side frame of the terminal 900, the user's holding signal of the terminal 900 may be detected, and the processor 901 performs left-right hand recognition or shortcut operation according to the holding signal collected by the pressure sensor 913. When the pressure sensor 913 is disposed at a lower layer of the touch display 905, the processor 901 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 905. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 914 is used for collecting a fingerprint of the user, and the processor 901 identifies the user according to the fingerprint collected by the fingerprint sensor 914, or the fingerprint sensor 914 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, processor 901 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 914 may be disposed on the front, back, or side of the terminal 900. When a physical key or vendor Logo is provided on the terminal 900, the fingerprint sensor 914 may be integrated with the physical key or vendor Logo.
The optical sensor 915 is used to collect ambient light intensity. In one embodiment, the processor 901 may control the display brightness of the touch display 905 based on the ambient light intensity collected by the optical sensor 915. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 905 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 905 is turned down. In another embodiment, the processor 901 can also dynamically adjust the shooting parameters of the camera assembly 906 according to the ambient light intensity collected by the optical sensor 915.
The proximity sensor 916, also known as a distance sensor, is typically disposed on the front panel of the terminal 900. The proximity sensor 916 is used to collect the distance between the user and the front face of the terminal 900. In one embodiment, when the proximity sensor 916 detects that the distance between the user and the front face of the terminal 900 gradually decreases, the processor 901 controls the touch display 905 to switch from the bright screen state to the dark screen state; when the proximity sensor 916 detects that the distance between the user and the front face of the terminal 900 gradually increases, the processor 901 controls the touch display 905 to switch from the dark screen state to the bright screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of terminal 900, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 10 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 1000 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1001 and one or more memories 1002, where the memory 1002 stores at least one instruction that is loaded and executed by the processor 1001 to implement the voice communication method provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for input and output, and the server may also include other components for implementing device functions, which are not described here again.
In an exemplary embodiment, a computer-readable storage medium, such as a memory including instructions executable by a processor in a terminal to perform the voice communication method in the above embodiments, is also provided. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (15)

1. A method of voice communication, the method comprising:
determining a plurality of target voice channels in the plurality of candidate voice channels;
determining a plurality of target user sets according to user information of a first user and the plurality of target voice channels, wherein each target user set corresponds to one target voice channel, the target user set is a user set formed by at least one second user performing voice communication with the first user, and the user information comprises vitality related information of the first user, a scene position of the first user, a feature element where the first user is located, and user identifications of a plurality of users related to the first user;
and when receiving the voice message of the first user, sending the voice message to a terminal where a second user of the target user set is located.
2. The method of claim 1, wherein determining the plurality of target voice channels among the plurality of candidate voice channels comprises at least two of:
acquiring a first voice channel from the candidate voice channels as the target voice channel;
acquiring a second voice channel from the candidate voice channels as the target voice channel;
in the candidate voice channels, according to a channel selection instruction of the first user, acquiring a third voice channel corresponding to the channel selection instruction as the target voice channel;
and determining a fourth voice channel according to the historical channel use information of the first user, and determining the target voice channel from the fourth voice channel.
3. The method of claim 2, wherein determining a fourth voice channel based on historical channel usage information of the first user comprises:
and counting the use times of the plurality of candidate voice channels by the first user in a preset time period, and acquiring the candidate voice channel with the use times meeting a preset condition as the fourth voice channel.
4. The method according to claim 2, wherein the obtaining, as the target voice channel, a third voice channel corresponding to the channel selection instruction from among the plurality of candidate voice channels according to the channel selection instruction of the first user comprises:
displaying a voice communication setting interface, wherein the voice communication setting interface comprises a plurality of candidate voice channel options, and each candidate voice channel option corresponds to one candidate voice channel;
receiving a channel selection instruction of the first user;
and acquiring a third voice channel corresponding to the channel selection instruction of the first user as the target voice channel.
5. The method of claim 1, wherein determining the plurality of target user sets based on the user information of the first user and the plurality of target voice channels comprises at least two of:
acquiring vitality related information of the first user, determining a first distance radius corresponding to the vitality related information, acquiring at least one second user of which the virtual distance with the first user is smaller than the first distance radius, and forming a target user set;
according to a preset second distance radius, at least one second user with the virtual distance from the first user being smaller than the second distance radius is obtained, and a target user set is formed;
acquiring a feature element of the first user, acquiring at least one second user in an area corresponding to the feature element, and forming a target user set;
and acquiring at least one second user associated with the first user to form a target user set.
6. The method of claim 1, further comprising:
and when the voice message of any second user in the target user sets is received, playing the voice message of the second user in a voice playing mode corresponding to the scene position of the second user according to the scene position of the second user.
7. The method of claim 1, wherein a target user set corresponding to a second user of the plurality of target user sets comprises the first user.
8. A voice communication apparatus, characterized in that the apparatus comprises:
a determining module, configured to determine a plurality of target voice channels among the plurality of candidate voice channels;
the determining module is further configured to determine a plurality of target user sets according to user information of a first user and the plurality of target voice channels, each target user set corresponds to one target voice channel, the target user set is a user set composed of at least one second user performing voice communication with the first user, and the user information includes vitality related information of the first user, a scene position of the first user, a feature element where the first user is located, and user identifiers of a plurality of users associated with the first user;
and the sending module is used for sending the voice message to a terminal where a second user of the target user sets is located when the voice message of the first user is received.
9. The apparatus of claim 8, wherein the determining module comprises:
a first obtaining unit, configured to obtain a first voice channel from the candidate voice channels as the target voice channel;
a second obtaining unit, configured to obtain a second voice channel from the multiple candidate voice channels as the target voice channel;
a third obtaining unit, configured to obtain, in the multiple candidate voice channels, a third voice channel corresponding to the channel selection instruction as the target voice channel according to the channel selection instruction of the first user;
and the determining unit is used for determining a fourth voice channel according to the historical channel use information of the first user and determining the target voice channel from the fourth voice channel.
10. The apparatus of claim 9,
the determining unit is further configured to count the number of times of use of the plurality of candidate voice channels by the first user within a preset time period, and acquire a candidate voice channel whose number of times of use meets a preset condition as the fourth voice channel.
11. The apparatus of claim 9,
the third obtaining unit is further configured to display a voice communication setting interface, where the voice communication setting interface includes multiple candidate voice channel options, and each candidate voice channel option corresponds to one candidate voice channel; receiving a channel selection instruction of the first user; and acquiring a third voice channel corresponding to the channel selection instruction of the first user as the target voice channel.
12. The apparatus of claim 8, wherein the determining module comprises:
a fourth obtaining unit, configured to obtain vitality related information of the first user, determine a first distance radius corresponding to the vitality related information, obtain at least one second user whose virtual distance from the first user is smaller than the first distance radius, and form a target user set;
a fifth obtaining unit, configured to obtain, according to a preset second distance radius, at least one second user whose virtual distance from the first user is smaller than the second distance radius, and form a target user set;
a sixth obtaining unit, configured to obtain a feature element where the first user is located, obtain at least one second user in an area corresponding to the feature element, and form a target user set;
and the seventh acquisition unit is used for acquiring at least one second user associated with the first user to form a target user set.
13. The apparatus of claim 8, further comprising:
and the playing module is used for playing the voice message of the second user in a voice playing mode corresponding to the scene position of the second user according to the scene position of the second user when the voice message of any second user in the target user sets is received.
14. An electronic device, comprising one or more processors and one or more memories having stored therein at least one instruction that is loaded and executed by the one or more processors to perform operations performed by the voice communication method of any one of claims 1 to 7.
15. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by the voice communication method of any one of claims 1 to 7.
CN201910069735.2A 2019-01-24 2019-01-24 Voice communication method, device, electronic equipment and storage medium Active CN110152309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910069735.2A CN110152309B (en) 2019-01-24 2019-01-24 Voice communication method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910069735.2A CN110152309B (en) 2019-01-24 2019-01-24 Voice communication method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110152309A CN110152309A (en) 2019-08-23
CN110152309B true CN110152309B (en) 2021-10-26

Family

ID=67644828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910069735.2A Active CN110152309B (en) 2019-01-24 2019-01-24 Voice communication method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110152309B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111355653B (en) * 2020-02-17 2022-03-25 腾讯科技(深圳)有限公司 Instant messaging relationship establishing method and device, storage medium and electronic equipment
CN114124501A (en) * 2021-11-16 2022-03-01 武汉光阴南北网络技术咨询中心 Data processing method, electronic device and computer storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999588A (en) * 2012-11-15 2013-03-27 广州华多网络科技有限公司 Method and system for recommending multimedia applications
CN104022944A (en) * 2014-06-27 2014-09-03 北京奇虎科技有限公司 Method and device for carrying out instant messaging based on game platform terminal
US20180229112A1 (en) * 2013-11-05 2018-08-16 Voyetra Turtle Beach, Inc. Method And System For Inter-Headset Communications Via Data Over In-Game Audio

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999588A (en) * 2012-11-15 2013-03-27 广州华多网络科技有限公司 Method and system for recommending multimedia applications
US20180229112A1 (en) * 2013-11-05 2018-08-16 Voyetra Turtle Beach, Inc. Method And System For Inter-Headset Communications Via Data Over In-Game Audio
CN104022944A (en) * 2014-06-27 2014-09-03 北京奇虎科技有限公司 Method and device for carrying out instant messaging based on game platform terminal

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Anonymous. How to switch voice channels in Honor of Kings: an overview of team/all-team channel switching methods. http://www.973.com/z163446. 2017, *
How to switch voice channels in Honor of Kings: an overview of team/all-team channel switching methods; Anonymous; http://www.973.com/z163446; 20171206; page 1 *
Identity V may introduce a new nearby-voice feature allowing conversation with nearby players; Anonymous; https://www.apk8.com/zixun/9606_1.html; 20180417; page 1 *

Also Published As

Publication number Publication date
CN110152309A (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN108465240B (en) Mark point position display method and device, terminal and computer readable storage medium
CN108619721B (en) Distance information display method and device in virtual scene and computer equipment
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN108710525B (en) Map display method, device, equipment and storage medium in virtual scene
CN112907725B (en) Image generation, training of image processing model and image processing method and device
CN109324739B (en) Virtual object control method, device, terminal and storage medium
CN110300274B (en) Video file recording method, device and storage medium
CN111445901B (en) Audio data acquisition method and device, electronic equipment and storage medium
CN111246095B (en) Method, device and equipment for controlling lens movement and storage medium
CN108897597B (en) Method and device for guiding configuration of live broadcast template
CN110740340B (en) Video live broadcast method and device and storage medium
CN110772793A (en) Virtual resource configuration method and device, electronic equipment and storage medium
CN110956580B (en) Method, device, computer equipment and storage medium for changing face of image
CN110401898B (en) Method, apparatus, device and storage medium for outputting audio data
CN111142838A (en) Audio playing method and device, computer equipment and storage medium
CN112565806B (en) Virtual gift giving method, device, computer equipment and medium
CN110743168A (en) Virtual object control method in virtual scene, computer device and storage medium
CN111613213B (en) Audio classification method, device, equipment and storage medium
CN110808021B (en) Audio playing method, device, terminal and storage medium
CN110297684B (en) Theme display method and device based on virtual character and storage medium
CN110152309B (en) Voice communication method, device, electronic equipment and storage medium
CN112738606B (en) Audio file processing method, device, terminal and storage medium
CN112367533B (en) Interactive service processing method, device, equipment and computer readable storage medium
CN111986700B (en) Method, device, equipment and storage medium for triggering contactless operation
CN111061369B (en) Interaction method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant