WO2023216502A1 - Display control method and apparatus in game, storage medium, and electronic device

Display control method and apparatus in game, storage medium, and electronic device

Info

Publication number
WO2023216502A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual character
sound
target
game
parameter information
Prior art date
Application number
PCT/CN2022/124322
Other languages
English (en)
French (fr)
Inventor
李赞晨 (Li Zanchen)
Original Assignee
网易(杭州)网络有限公司 (NetEase (Hangzhou) Network Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 网易(杭州)网络有限公司 (NetEase (Hangzhou) Network Co., Ltd.)
Publication of WO2023216502A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/822 Strategy games; Role-playing games
    • A63F13/837 Shooting of targets
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/80 Features specially adapted for executing a specific type of game
    • A63F2300/807 Role playing or strategy games
    • A63F2300/8076 Shooting

Definitions

  • the present disclosure relates to the field of terminal display technology, and in particular to an in-game display control method, an in-game display control device, a computer-readable storage medium, and an electronic device.
  • UI: User Interface
  • In the related art, a volume histogram is displayed at the compass position at the top of the screen to indicate the direction of a sound and remind the player where it comes from. Building on the compass, this can further be extended to indicate whether the sound emitter is above or below the player's position.
  • However, the HUD UI cannot display multiple voiceprints at the same time, which causes players to misjudge the situation and degrades the gaming experience.
  • The purpose of the present disclosure is to provide an in-game display control method, an in-game display control device, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to a certain extent, the inaccurate sound source positioning caused by the limitations of the related art.
  • A display control method in a game is provided, which provides a graphical user interface through a target terminal device, where the content displayed by the graphical user interface at least includes all or part of the game scene of the game.
  • The game scene includes a target virtual character controlled and operated through the target terminal device, and a first virtual character controlled and operated through other terminal devices.
  • The method includes: obtaining listening parameter information of the target virtual character, and obtaining sound parameter information of the first virtual character; calculating the listening parameter information and the sound parameter information to obtain a sound monitoring result, where the sound monitoring result includes whether the sound of the first virtual character can be heard; when the sound of the first virtual character can be heard, determining a corresponding mapping position on the graphical user interface according to a first position of the first virtual character in the game scene; and displaying, at the mapping position, a representational graphic representing the first virtual character.
  • The listening parameter information includes: a target sound type, a listening capability level, and noise level information; obtaining the listening parameter information of the target virtual character includes: obtaining the target sound type of the target virtual character, obtaining the listening capability level of the target virtual character, and obtaining the noise level information of the target virtual character.
  • Obtaining the listening capability level of the target virtual character includes: obtaining target attribute information of the target virtual character, and obtaining a mapping relationship between the target attribute information and the listening capability level; and querying, in the mapping relationship, the listening capability level corresponding to the target attribute information.
  • Obtaining the noise level information of the target virtual character includes: obtaining, according to the target sound type, the target sound intensity emitted by the target virtual character, and obtaining, according to the target sound type, the listening noise threshold of the target virtual character; and comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
  • The method further includes: determining the noise level information of the target virtual character directly when the target virtual character makes no sound.
  • The sound parameter information includes: a first sound type and a sound propagation distance, where the first sound type includes at least one of the following types: a first movement type, a first attack type, and a first prepared attack type.
  • The listening parameter information includes: a listening capability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance. Calculating the listening parameter information and the sound parameter information to obtain a sound monitoring result includes: when the noise level information indicates that the first virtual character can be listened to, obtaining a first sound intensity according to the first sound type; obtaining a listening coefficient of the target virtual character according to the listening capability level, and calculating the first sound intensity and the listening coefficient to obtain listening capability information; and comparing the listening capability information with the sound propagation distance to obtain the sound monitoring result.
  • The representational graphic is a model outline of the first virtual character.
  • Alternatively, the representational graphic is a graphic obtained by blurring the model outline of the first virtual character.
  • Determining the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene includes: determining, according to the first position of the first virtual character in the game scene and camera parameters of a virtual camera, the mapping position corresponding to the first position on the graphical user interface; where the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed on the graphical user interface.
  • Displaying a representational graphic representing the first virtual character at the mapping position includes: determining, according to the camera parameters, a non-visible area of the first virtual character with respect to the target virtual character, and displaying, at the mapping position, a representational graphic representing the non-visible area of the first virtual character; where the non-visible area includes all or a partial area of the first virtual character, and the partial area of the first virtual character includes one or more virtual body parts of the first virtual character.
  • The method further includes: determining, according to the camera parameters, that all areas of the first virtual character are visible to the target virtual character, and not displaying the representational graphic used to characterize the first virtual character.
  • Displaying a representational graphic representing the first virtual character at the mapping position includes: blurring the first virtual character to obtain the representational graphic representing the first virtual character, and displaying the representational graphic at the mapping position.
  • Performing blurring processing on the first virtual character to obtain a representational graphic characterizing the first virtual character includes: performing image matting (cutout) processing on the first virtual character to obtain a first picture; and performing Gaussian blur on the first picture to obtain the representational graphic representing the first virtual character.
  • The listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance. Performing Gaussian blur on the first picture to obtain the representational graphic of the first virtual character includes: performing Gaussian blur on the first picture based on the target sound type and/or the first sound type, where the blur parameters of the representational graphic are determined according to the target sound type and/or the first sound type, and the blur parameters include the size and/or clarity of the representational graphic; or performing Gaussian blur on the first picture based on the sound propagation distance, where the blur parameters of the representational graphic are determined according to the sound propagation distance, and the blur parameters include the size and/or clarity of the representational graphic.
  • Displaying a representational graphic representing the first virtual character at the mapping position includes: determining, based on the listening parameter information and/or the sound parameter information, a display duration of the representational graphic used to characterize the first virtual character; and displaying the representational graphic at the mapping position according to the display duration.
  • The method further includes: in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the representational graphic on the graphical user interface, so that the representational graphic reflects the position change of the first virtual character in real time.
  • The sound parameter information includes: a sound propagation distance. The method further includes: generating a tracking control according to the sound monitoring result, the tracking control including the sound propagation distance; and displaying, at the mapping position, the tracking control used to characterize the first virtual character.
  • A display control device in a game is also provided, which provides a graphical user interface through a target terminal device, where the content displayed by the graphical user interface at least includes all or part of the game scene of the game.
  • The game scene includes a target virtual character controlled and operated through the target terminal device, and a first virtual character controlled and operated through other terminal devices. The device includes: an information acquisition module, configured to obtain the listening parameter information of the target virtual character and the sound parameter information of the first virtual character; an information calculation module, configured to calculate the listening parameter information and the sound parameter information to obtain a sound monitoring result, where the sound monitoring result includes whether the sound of the first virtual character can be heard; a position determination module, configured to, when the sound of the first virtual character can be heard, determine the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene; and a graphic display module, configured to display, at the mapping position, a representational graphic representing the first virtual character.
  • An electronic device is also provided, including: a processor and a memory, where computer-readable instructions are stored in the memory, and when the computer-readable instructions are executed by the processor, the in-game display control method in any of the above exemplary embodiments is implemented.
  • A computer-readable storage medium is also provided, having a computer program stored thereon, which, when executed by a processor, implements the in-game display control method in any of the above exemplary embodiments.
  • The in-game display control method, in-game display control device, computer-readable storage medium, and electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
  • The listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering the representational graphic, which enriches the data dimensions of the rendered representational graphic, improves the dynamics and real-time nature of graphic rendering, improves the accuracy of sound source positioning, and provides more realistic auditory and visual effects.
  • The representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, and the first virtual character is displayed in a blurred manner, so that its position is depicted without being over-exposed, achieving a balance between tracking the first virtual character in real time and marking its direction.
  • When the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and optimizing the gaming experience.
  • Figure 1 shows a schematic diagram of an interface showing a sound sender in the related art
  • Figure 2 schematically shows a flow chart of a display control method in a game in an exemplary embodiment of the present disclosure
  • Figure 3 schematically shows a flowchart of a method for obtaining listening parameter information of a target virtual character in an exemplary embodiment of the present disclosure
  • Figure 4 schematically shows a flow chart of a method for obtaining the listening capability level of a target virtual character in an exemplary embodiment of the present disclosure
  • Figure 5 schematically illustrates a flow chart of a method for obtaining noise level information of a target virtual character in an exemplary embodiment of the present disclosure
  • Figure 6 schematically shows a flow chart of a method for calculating listening parameter information and sound parameter information in an exemplary embodiment of the present disclosure
  • Figure 7 schematically illustrates a flowchart of a method for blurring a first virtual character in an exemplary embodiment of the present disclosure
  • Figure 8 schematically illustrates a flowchart of a method for performing Gaussian blur on a first picture in an exemplary embodiment of the present disclosure
  • Figure 9 schematically illustrates an interface schematic diagram showing the model outline of all areas of the first virtual character in an exemplary embodiment of the present disclosure
  • Figure 10 schematically illustrates an interface diagram showing a model outline of a partial area of a first virtual character in an exemplary embodiment of the present disclosure
  • Figure 11 schematically illustrates an interface diagram displaying model outlines of multiple first virtual characters in an exemplary embodiment of the present disclosure
  • Figure 12 schematically shows a flowchart of a method for displaying representational graphics according to display duration in an exemplary embodiment of the present disclosure
  • Figure 13 schematically shows an interface schematic diagram that reflects the position change of the first virtual character in real time in an exemplary embodiment of the present disclosure
  • Figure 14 schematically illustrates a flowchart of a method for displaying tracking controls based on sound monitoring results in an exemplary embodiment of the present disclosure
  • Figure 15 schematically shows a structural diagram of a display control device in a game according to an exemplary embodiment of the present disclosure
  • Figure 16 schematically illustrates an electronic device for implementing a display control method in a game in an exemplary embodiment of the present disclosure
  • Figure 17 schematically illustrates a computer-readable storage medium for implementing a display control method in a game in an exemplary embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concepts of the example embodiments.
  • the described features, structures or characteristics may be combined in any suitable manner in one or more embodiments.
  • numerous specific details are provided to provide a thorough understanding of embodiments of the disclosure.
  • those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced without one or more of the specific details described, or other methods, components, devices, steps, etc. may be adopted.
  • well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the disclosure.
  • In the related art, a volume histogram is displayed at the compass position at the top of the screen to indicate the direction of a sound and remind the player where it comes from. Building on the compass, this can further be extended to indicate whether the sound emitter is above or below the player's position.
  • Figure 1 shows a schematic diagram of an interface showing a sound emitter in the related art.
  • In Figure 1, 200 indicates that the current facing is in the direction of 200°.
  • The first, diamond-shaped volume histogram indicates that there is a sound emitter in the horizontal direction; the second volume histogram, pointing downward from the horizontal, indicates that there is a sound emitter below the player's position; and the third volume histogram, pointing upward from the horizontal, indicates that there is a sound emitter above the player's position.
  • The triangle symbol and curve displayed at the end, pointing horizontally to the right, indicate that there are other sound emitters beyond the current display range of the compass.
  • However, this compass-based guidance method is always displayed at the top of the HUD.
  • When the player's head is facing up or down, the direction is easy to misread, so the method can only roughly indicate the location of a sound. Moreover, since it can only distinguish whether a sound is above or below the player's position, but not whether it is overhead or in front, it cannot accurately locate a sound above the player, let alone track the sound source in real time. In particular, when the sound emitter and the listener move at the same time, if the HUD UI follows in real time, the voiceprint UI becomes distorted and the clarity of the information is reduced.
  • In addition, the HUD UI cannot display multiple voiceprints at the same time, which causes players to misjudge the situation and degrades the gaming experience.
  • the display control method in the game can be run on a local terminal device or a server.
  • the method can be implemented and executed based on a cloud interaction system, where the cloud interaction system includes a server and a client device.
  • Various cloud applications, such as cloud gaming, can be run on the cloud interaction system.
  • Cloud gaming refers to a gaming method based on cloud computing, in which the body that runs the game program is separated from the body that renders the game screen.
  • The storage and execution of the display control method in the game are completed on a cloud game server, and the client device is used for receiving and sending data and presenting the game screen.
  • For example, the client device can be a display device with a data transmission function close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer; however, the device that performs the information processing is the cloud game server in the cloud.
  • When playing, the player operates the client device to send operating instructions to the cloud game server.
  • The cloud game server runs the game according to the operating instructions, encodes and compresses the game screen and other data, and returns them to the client device through the network.
  • Finally, the client device decodes the returned data and outputs the game screen.
  • the local terminal device stores the game program and is used to present the game screen.
  • The local terminal device is used to interact with the player through a graphical user interface, that is, the game program is conventionally downloaded, installed, and run through an electronic device.
  • the local terminal device may provide the graphical user interface to the player in a variety of ways. For example, it may be rendered and displayed on the display screen of the terminal, or provided to the player through holographic projection.
  • The local terminal device may include a display screen and a processor, where the display screen is used to present the graphical user interface, the graphical user interface includes the game screen, and the processor is used to run the game, generate the graphical user interface, and control the graphical user interface to be displayed on the display screen.
  • Embodiments of the present disclosure provide a display control method in a game, providing a graphical user interface through a terminal device, where the terminal device may be the aforementioned local terminal device or the client device in the aforementioned cloud interaction system.
  • the present disclosure proposes a display control method in a game, which provides a graphical user interface through the target terminal device.
  • the content displayed by the graphical user interface at least includes all or part of the game scene of the game.
  • The game scene includes a target virtual character whose operations are controlled through the target terminal device, and a first virtual character whose operations are controlled through other terminal devices.
  • Figure 2 shows a flow chart of a display control method in a game. As shown in Figure 2, the display control method in a game at least includes the following steps:
  • In step S210, the listening parameter information of the target virtual character is obtained, and the sound parameter information of the first virtual character is obtained.
  • In step S220, the listening parameter information and the sound parameter information are calculated to obtain a sound monitoring result, where the sound monitoring result includes: whether the sound of the first virtual character can be heard.
  • In step S230, when the sound of the first virtual character can be heard, the corresponding mapping position on the graphical user interface is determined according to the first position of the first virtual character in the game scene.
  • In step S240, a representational graphic representing the first virtual character is displayed at the mapping position.
  • In the display control method provided by the exemplary embodiments of the present disclosure, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering the representational graphic, which enriches the data dimensions of the rendered representational graphic, improves the dynamics and real-time nature of graphic rendering, improves the accuracy of sound source positioning, and provides more realistic auditory and visual effects. Furthermore, the representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, and the first virtual character is displayed in a blurred manner, so that its position is depicted without being over-exposed, achieving a balance between tracking the first virtual character in real time and marking its direction. When the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and optimizing the gaming experience.
  • In step S210, the listening parameter information of the target virtual character is obtained, and the sound parameter information of the first virtual character is obtained.
  • the target virtual character may be a game virtual character controlled and operated by the current player through the target terminal device.
  • The listening parameter information includes: a target sound type, a listening capability level, and noise level information.
  • Figure 3 shows a flow chart of a method for obtaining the listening parameter information of the target virtual character. As shown in Figure 3, the method may at least include the following steps: In step S310, the target sound type of the target virtual character is obtained.
  • The target sound type of the target virtual character is the type of the sound produced by the target virtual character.
  • The target sound type may include a movement type of the target virtual character, an attack type of the target virtual character, a prepared attack type of the target virtual character, etc., which are not specifically limited in this exemplary embodiment.
  • For example, the movement types of the target virtual character may include running, crouching, jumping, slow walking, fast walking, etc.; the attack types may include shooting, throwing grenades, etc.; and the prepared attack types may include changing magazines, opening sights, pulling the safety pin of a grenade, etc.
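  • For illustration only, the sound-type taxonomy above might be organized as a simple enumeration; the groupings follow the examples in the text, while the enum itself and its value names are assumptions, not part of the disclosure.

```python
# Hypothetical taxonomy of sound types, grouped per the examples above.
from enum import Enum

class SoundType(Enum):
    # movement types
    RUN = "run"
    CROUCH = "crouch"
    JUMP = "jump"
    WALK_SLOW = "walk_slow"
    WALK_FAST = "walk_fast"
    # attack types
    SHOOT = "shoot"
    THROW_GRENADE = "throw_grenade"
    # prepared attack types
    RELOAD = "reload"
    OPEN_SIGHT = "open_sight"
    PULL_GRENADE_PIN = "pull_grenade_pin"
```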
  • In step S320, the listening capability level of the target virtual character is obtained.
  • Figure 4 shows a schematic flowchart of a method for obtaining the listening capability level of the target virtual character. As shown in Figure 4, the method may at least include the following steps: In step S410, the target attribute information of the target virtual character is obtained, and the mapping relationship between the target attribute information and the listening capability level is obtained.
  • The target attribute information of the target virtual character may be the game level of the target virtual character, or it may be the number or progress of tasks completed by the target virtual character to improve the listening capability level, etc., which is not specifically limited in this exemplary embodiment.
  • A mapping relationship may be preset between the target attribute information of the target virtual character and the listening capability level.
  • The mapping relationship may be a unified relationship set for all game virtual characters, or a correspondence set differentially for different game virtual characters, which is not particularly limited in this exemplary embodiment.
  • In step S420, the listening capability level corresponding to the target attribute information is queried in the mapping relationship.
  • That is, the listening capability level corresponding to the current target attribute information of the target virtual character can be queried in the mapping relationship.
  • This listening capability level can also serve as a growth value of the target virtual character, enhancing long-term player retention.
  • In this exemplary embodiment, the listening capability level of the target virtual character can be obtained by querying the mapping relationship between the target attribute information and the listening capability level. The determination method is simple and accurate, and provides a data basis for displaying the representational graphic of the first virtual character.
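  • A minimal sketch of the lookup described above, assuming the target attribute information is the character's game level and the mapping table is preset; the threshold values are invented for illustration.

```python
# Hypothetical preset mapping: (minimum game level, listening capability level).
LEVEL_TABLE = [(0, 1), (10, 2), (25, 3), (50, 4)]

def listening_level(game_level: int) -> int:
    """Query the listening capability level for the given target attribute."""
    result = 1
    for min_level, ability in LEVEL_TABLE:
        if game_level >= min_level:
            result = ability
    return result

assert listening_level(30) == 3   # a level-30 character maps to listening level 3
```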
  • In step S330, the noise level information of the target virtual character is obtained.
  • Figure 5 shows a schematic flowchart of a method for obtaining the noise level information of the target virtual character. As shown in Figure 5, the method may at least include the following steps: In step S510, the target sound intensity emitted by the target virtual character is obtained according to the target sound type, and the listening noise threshold of the target virtual character is obtained according to the target sound type.
  • The corresponding target sound intensity can be determined according to the obtained target sound type of the target virtual character.
  • For example, when the target sound type is running, the corresponding target sound intensity may be 30 decibels; when the target sound type is shooting, the corresponding target sound intensity may be 40 decibels.
  • Likewise, the corresponding listening noise threshold can be determined according to the obtained target sound type of the target virtual character. The listening noise threshold characterizes the maximum sound intensity that can still be heard while the target virtual character emits the target sound intensity.
  • For example, when the target sound type is running, the corresponding listening noise threshold may be 20 decibels; when the target sound type is shooting, the corresponding listening noise threshold may be 10 decibels.
  • In step S520, the target sound intensity and the listening noise threshold are compared to obtain a comparison result, and the noise level information of the target virtual character is determined based on the comparison result.
  • Specifically, if the comparison result is that the target sound intensity is less than or equal to the listening noise threshold, it indicates that the target virtual character can hear the sound emitted by the corresponding first virtual character; if the comparison result is that the target sound intensity is greater than the listening noise threshold, it indicates that the target virtual character emits too much noise itself to hear the corresponding first virtual character.
  • Obtaining the noise level information in this way is more consistent with real scenes; for example, when the target virtual character is shooting, it is harder to pick out gunshots in the environment, so the simulation effect is more realistic.
  • In this exemplary embodiment, the noise level information of the target virtual character can be determined by comparing the target sound intensity with the listening noise threshold, which provides a basis for judging whether the first virtual character can be heard and, in turn, for displaying the representational graphic of the first virtual character.
  • In an alternative embodiment, when the target virtual character makes no sound, the corresponding noise level information can be determined directly: since the target virtual character produces no noise that would interfere with listening, the noise level information indicates that the sound made by the corresponding first virtual character can be heard.
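  • A sketch of steps S510 to S520 under the stated rule: both the emitted target sound intensity and the listening noise threshold are looked up from the target sound type and compared. The decibel values for running and shooting come from the examples above; the table structure and the use of None for a silent character are assumptions.

```python
# Hypothetical lookup tables keyed by target sound type (None = silent).
INTENSITY = {"run": 30.0, "shoot": 40.0, None: 0.0}                # target sound intensity, dB
NOISE_THRESHOLD = {"run": 20.0, "shoot": 10.0, None: float("inf")}  # listening noise threshold, dB

def self_noise_permits_listening(target_sound_type) -> bool:
    # Per the comparison rule above: listening is possible when the target
    # sound intensity is less than or equal to the listening noise threshold.
    x = INTENSITY.get(target_sound_type, 0.0)
    y = NOISE_THRESHOLD.get(target_sound_type, float("inf"))
    return x <= y
```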
  • It is worth noting that the sound parameter information of one or more first virtual characters of the target virtual character can be obtained simultaneously, so that one or more sound sources are tracked at the same time and the player controlling the target virtual character can accurately grasp the number of first virtual characters.
  • The first virtual character may be one or more game virtual characters from the enemy camp of the target virtual character, or one or more teammate game virtual characters belonging to the same camp as the target virtual character, which is not specifically limited in this exemplary embodiment.
  • The sound parameter information includes: a first sound type and a sound propagation distance. The first sound type may include a first movement type, a first attack type, a first prepared attack type, etc., which are not specifically limited in this exemplary embodiment.
  • The sound propagation distance may be the distance the sound made by the first virtual character travels to reach the position of the target virtual character, that is, the distance between the first virtual character and the target virtual character in the current situation.
  • For example, the first movement type may include the first virtual character's movement methods such as running, crouching, jumping, slow walking, and fast walking; the first attack type may include the first virtual character's attack methods such as shooting and throwing grenades; and the first prepared attack type may include preparatory actions such as the first virtual character changing the magazine, opening the sight, and pulling out the safety pin of a grenade.
  • Obtaining the first sound type of the first virtual character provides data support for effects such as a loud sniper rifle shot being easier to capture than a quieter submachine gun burst or running sound, and the sound propagation distance affects how the representational graphic of the first virtual character is displayed.
  • In step S220, the listening parameter information and the sound parameter information are calculated to obtain a sound monitoring result, where the sound monitoring result includes: whether the sound of the first virtual character can be heard.
  • When there are multiple first virtual characters, the listening parameter information and the sound parameter information of the different first virtual characters can be computed in parallel to obtain the sound monitoring results of the target virtual character, ensuring that representational graphics of multiple sound sources are tracked and rendered at the same time.
  • In an optional embodiment, the listening parameter information includes: a listening capability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance.
  • Figure 6 shows a schematic flow chart of a method for calculating the listening parameter information and the sound parameter information. As shown in Figure 6, the method may at least include the following steps: In step S610, when the noise level information indicates that the first virtual character can be listened to, the first sound intensity is obtained according to the first sound type.
  • When the noise level information indicates that the first virtual character can be heard, the first sound intensity can be obtained according to the acquired first sound type.
  • For example, when the first sound type is running, the corresponding first sound intensity may be 30 decibels; when the first sound type is shooting, the corresponding first sound intensity may be 40 decibels.
  • In step S620, the listening coefficient of the target virtual character is obtained according to the listening capability level, and the first sound intensity and the listening coefficient are calculated to obtain the listening capability information.
  • When the listening coefficient is set in a manner linearly related to the listening capability level, the corresponding listening coefficient can be obtained once the listening capability level of the target virtual character is known.
  • The corresponding listening capability information can then be obtained by calculating the first sound intensity and the listening coefficient. The calculation may be a multiplication of the first sound intensity and the listening coefficient, or another calculation method set according to the actual situation, which is not specially limited in this exemplary embodiment.
  • In step S630, the listening capability information and the sound propagation distance are compared to obtain the sound monitoring result.
  • Specifically, the listening capability information can be compared with the obtained sound propagation distance: the sound monitoring result may be that the listening capability information is greater than or equal to the sound propagation distance (the sound can be heard), or that the listening capability information is less than the sound propagation distance (the sound cannot be heard).
  • In one formulation, U represents the set of first virtual characters that the target virtual character can listen to; a1 represents the first sound type; b1 represents the target sound type; a2 represents the first virtual character that makes a sound; b2 represents the target virtual character; X represents the target sound intensity of the target virtual character or the first sound intensity of the first virtual character; Y represents the listening noise threshold; M represents the listening coefficient; and dS represents the sound propagation distance. Combining steps S610 to S630, a first virtual character a2 belongs to U when X(b1) ≤ Y(b1) and M · X(a1) ≥ dS.
  • In this exemplary embodiment, the sound monitoring result can be further refined to determine whether the first virtual character cannot be heard because the distance is too great. This is closer to and more accurate with respect to the actual situation, and strikes a balance between displaying the representational graphic of the first virtual character and ensuring the safety of the first virtual character.
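  • A sketch of steps S610 to S630 using the symbols defined above; the decibel values, the linear form of the listening coefficient M, and the distance_to helper are assumptions for illustration.

```python
# X: sound intensity lookup; Y: listening noise threshold lookup (None = silent).
X = {"run": 30.0, "shoot": 40.0, None: 0.0}
Y = {"run": 20.0, "shoot": 10.0, None: float("inf")}

def monitoring_result(target_sound_type, listening_level, first_sound_type, ds):
    if X[target_sound_type] > Y[target_sound_type]:
        return False                        # the target's own noise prevents listening
    x1 = X[first_sound_type]                # first sound intensity (step S610)
    m = 1.0 + 0.1 * listening_level         # assumed linear listening coefficient (S620)
    return x1 * m >= ds                     # listening capability info vs. distance (S630)

def audible_set(target, first_characters):
    """U: the set of first virtual characters the target can listen to."""
    return {a2 for a2 in first_characters
            if monitoring_result(target.sound_type, target.listening_level,
                                 a2.sound_type, a2.distance_to(target))}
```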
  • In step S230, when the sound of the first virtual character can be heard, the corresponding mapping position on the graphical user interface is determined according to the first position of the first virtual character in the game scene.
  • That is, when the sound monitoring result indicates that the sound can be heard, the corresponding mapping position can be determined according to the first position of the first virtual character in the game scene.
  • In an optional embodiment, the mapping position corresponding to the first position on the graphical user interface is determined according to the first position of the first virtual character in the game scene and the camera parameters of the virtual camera, where the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed in the graphical user interface.
  • The game scene picture displayed on the graphical user interface of the target terminal device is the game scene content captured by the virtual camera.
  • For example, in a first-person game, a virtual camera can be set at the head of the target virtual character, with camera parameters such as its orientation rotating as the target virtual character rotates; the game scene rendered on the graphical user interface is then equivalent to the game scene content captured by this virtual camera.
  • In a third-person game, the virtual camera can be set above or behind the target virtual character, with camera parameters such as its orientation following the movement of the target virtual character, capturing a certain area of the game scene around the target virtual character at a fixed angle.
  • Therefore, the mapping position corresponding to the first position of the first virtual character in the game scene can be determined based on camera parameters such as the orientation of the virtual camera of the target virtual character, that is, based on the perspective of the target virtual character.
  • The mapping position corresponding to the first position may be the same position as the first position, or may be another associated position, which is not specifically limited in this exemplary embodiment.
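  • One common way to realize the mapping described above is to transform the first position by the virtual camera's view-projection matrix and scale the result to pixel coordinates; a sketch under that assumption (the disclosure does not prescribe a specific projection):

```python
import numpy as np

def map_to_gui(first_pos, view_proj, screen_w, screen_h):
    """Map a world-space first position to a GUI mapping position."""
    p = view_proj @ np.append(first_pos, 1.0)     # homogeneous transform
    if p[3] <= 0:                                 # behind the virtual camera
        return None
    ndc = p[:3] / p[3]                            # normalized device coordinates
    x = (ndc[0] * 0.5 + 0.5) * screen_w           # [-1, 1] -> pixel column
    y = (1.0 - (ndc[1] * 0.5 + 0.5)) * screen_h   # flip y for screen space
    return x, y
```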
  • In step S240, a representational graphic representing the first virtual character is displayed at the mapping position.
  • When the sound monitoring result is that the sound of the first virtual character can be heard, and after the mapping position has been determined, the representational graphic of the first virtual character can be displayed at the mapping position.
  • In an optional embodiment, the non-visible area of the first virtual character with respect to the target virtual character is determined according to the camera parameters, and a representational graphic representing the non-visible area of the first virtual character is displayed at the mapping position; the non-visible area includes all or a partial area of the first virtual character, and the partial area of the first virtual character includes one or more virtual body parts of the first virtual character.
  • When observing the first virtual character from the perspective of the target virtual character according to the camera parameters of the virtual camera, the target virtual character may be able to observe the complete first virtual character, or only part of the first virtual character.
  • Therefore, the non-visible area of the first virtual character from the perspective of the target virtual character can be determined according to the camera parameters; the non-visible area is the area of the first virtual character that the target virtual character cannot see.
  • When the target virtual character cannot observe the first virtual character at all, the non-visible area is the entire area of the first virtual character, that is, the whole first virtual character is non-visible; when the target virtual character can observe, for example, only the head of the first virtual character, the non-visible area is the remaining area other than the head.
  • Generally, the non-visible area of the first virtual character may be one or more virtual body parts of the first virtual character. However, the division of the non-visible area is not strictly based on the virtual body parts; as with a real viewpoint in an actual scene, one part of a virtual body part may be visible while the rest of it is non-visible.
  • On this basis, a representational graphic representing the non-visible area of the first virtual character may be generated and displayed.
  • In an optional embodiment, the first virtual character is blurred to obtain the representational graphic representing the first virtual character, and the representational graphic is displayed at the mapping position.
  • Specifically, the part of the first virtual character to be blurred is its non-visible area; the area of the first virtual character that is visible to the target virtual character does not need to be blurred.
  • Figure 7 shows a schematic flowchart of a method for blurring the first virtual character. As shown in Figure 7, the method may at least include the following steps: In step S710, image matting (cutout) processing is performed on the first virtual character to obtain a first picture.
  • The first picture may be a picture formed from the outline of the first virtual character and its interior.
  • In step S720, Gaussian blur is performed on the first picture to obtain a representational graphic representing the first virtual character.
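  • A sketch of the S710/S720 pipeline, assuming the renderer can supply a per-pixel mask of the first virtual character (the matting step); the use of scipy for the Gaussian blur is an implementation choice, not part of the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def representational_graphic(frame, mask, sigma):
    """frame: HxWx3 float image; mask: HxW bool mask of the character's pixels."""
    cutout = frame * mask[..., None]          # step S710: the matted "first picture"
    blurred = gaussian_filter(cutout, sigma=(sigma, sigma, 0))   # step S720
    alpha = gaussian_filter(mask.astype(float), sigma=sigma)     # soft edge for compositing
    return blurred, alpha
```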
  • Gaussian blur, also called Gaussian smoothing, is widely used in image processing software such as Adobe Photoshop, GIMP (GNU Image Manipulation Program), and Paint.NET.
  • The visual effect of an image generated by this blurring technique is like viewing the image through frosted glass, which is clearly different from the out-of-focus bokeh effect of a lens or the look of shadows under ordinary lighting.
  • Gaussian smoothing is also used in the pre-processing stage of computer vision algorithms to enhance images at different scale sizes.
  • the Gaussian blur process of an image is the convolution of the image with a normal distribution. Since the normal distribution is also called Gaussian distribution, this technique is called Gaussian blur.
  • Gaussian blur is a low-pass filter for the image.
  • Gaussian blur is an image blur filter that uses a normal distribution to calculate the transformation of each pixel in the image.
  • The normal distribution equation in N-dimensional space is $G(r) = \frac{1}{(\sqrt{2\pi\sigma^2})^{N}} e^{-r^2/(2\sigma^2)}$; in two-dimensional space, the Gaussian used to blur the enemy outline is defined as $G(u,v) = \frac{1}{2\pi\sigma^2} e^{-(u^2+v^2)/(2\sigma^2)}$, where $r$ is the blur radius ($r^2 = u^2 + v^2$) and $\sigma$ is the standard deviation of the normal distribution.
  • The contours of the surface generated by this formula are concentric circles, normally distributed from the center outward.
  • A convolution matrix composed of the pixels where the distribution is non-zero is convolved with the original image, so that the value of each pixel becomes a weighted average of the values of its neighboring pixels.
  • The original pixel has the largest Gaussian distribution value and therefore the largest weight; as neighboring pixels get farther from the original pixel, their weights become smaller and smaller.
  • Blurring in this way preserves edges to a higher degree than other, more uniform blur filters.
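  • To make the weighted-average interpretation concrete, the two-dimensional kernel $G(u,v)$ above can be sampled and normalized directly (a library convolution would normally be used in practice; the radius and $\sigma$ here are illustrative):

```python
import numpy as np

def gaussian_kernel(radius: int, sigma: float) -> np.ndarray:
    ax = np.arange(-radius, radius + 1)
    u, v = np.meshgrid(ax, ax)
    g = np.exp(-(u**2 + v**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()        # normalize so convolution averages rather than brightens

k = gaussian_kernel(radius=3, sigma=1.5)
assert k[3, 3] == k.max()     # the center (original pixel) carries the largest weight
```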
  • In an optional embodiment, the listening parameter information includes: the target sound type; the sound parameter information includes: the first sound type and the sound propagation distance.
  • Figure 8 shows a schematic flow chart of the method of performing Gaussian blur on the first picture. As shown in Figure 8, the method may at least include the following steps: In step S810, based on the target sound type and/or the first sound type, Gaussian blur is performed on the first picture to obtain a representational graphic representing the first virtual character, where the blur parameters of the representational graphic are determined according to the target sound type and/or the first sound type, and the blur parameters include the size and/or clarity of the representational graphic.
  • The degree of Gaussian blur is determined by the Gaussian matrix: the larger the Gaussian matrix and the larger the standard deviation, the greater the degree of blurring of the resulting representational graphic.
  • In practice, the standard deviation can be set to 0 so that it is derived from the size of the Gaussian matrix; the degree of blur of the representational graphic produced by Gaussian blur is then determined by the Gaussian matrix alone.
  • In this embodiment, the blur parameter reflecting the degree of blur of the representational graphic is determined by the target sound type and/or the first sound type.
  • Specifically, the size of the Gaussian matrix may be determined according to a preset mapping relationship and the target sound type, so that Gaussian blur is performed on the first picture according to the target sound type to obtain the corresponding representational graphic; alternatively, the size of the Gaussian matrix may be determined according to a mapping relationship and the first sound type; or the target sound type and the first sound type may jointly determine the size of the Gaussian matrix to obtain the corresponding representational graphic.
  • The blur parameter reflecting the degree of blur may include the size and/or clarity of the representational graphic: when the Gaussian matrix determined by the target sound type and/or the first sound type is larger, the representational graphic is smaller and/or less clear; when the Gaussian matrix is smaller, the representational graphic is larger and/or clearer.
  • In step S820, based on the sound propagation distance, Gaussian blur is performed on the first picture to obtain a representational graphic representing the first virtual character, where the blur parameters of the representational graphic are determined based on the sound propagation distance, and the blur parameters include the size and/or clarity of the representational graphic.
  • As above, the degree of Gaussian blur is determined by the Gaussian matrix: the larger the Gaussian matrix and the larger the standard deviation, the greater the degree of blurring, and the standard deviation can be set to 0 so that the blur degree is determined by the Gaussian matrix alone.
  • In this embodiment, the blur parameter reflecting the degree of blur of the representational graphic is determined by the sound propagation distance; for example, the size of the Gaussian matrix can be set directly from the sound propagation distance.
  • Here too, the blur parameter may include the size and/or clarity of the representational graphic: when the Gaussian matrix determined by the sound propagation distance is larger, the representational graphic is smaller and/or less clear; when the Gaussian matrix is smaller, the representational graphic is larger and/or clearer.
  • In this way, the outline position information of a first virtual character that is farther away is blurrier, further simulating the difficulty of judging distant sounds in the real world.
  • Moreover, the blurred representational graphic does not reveal the precise position of the enemy virtual character, reducing the possibility of the enemy virtual character being accurately killed. Furthermore, whenever the enemy virtual character's sound is heard, its representational graphic is rendered at its position in real time, achieving a real-time visualization effect.
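  • The two branches above (S810 and S820) might be reduced to a single blur-parameter function; the scaling constants and the per-type base values below are assumptions for illustration only.

```python
def blur_sigma(first_sound_type=None, target_sound_type=None, distance=None):
    """Choose a Gaussian blur strength from sound types or propagation distance."""
    if distance is not None:                       # branch S820: farther -> blurrier
        return max(1.0, distance / 10.0)
    base = {"shoot": 1.0, "reload": 2.0, "run": 3.0}   # branch S810: per first sound type
    sigma = base.get(first_sound_type, 2.0)
    if target_sound_type is not None:              # the target's own noise reduces clarity
        sigma += 1.0
    return sigma
```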
  • After the representational graphic representing the first virtual character is obtained, it may be displayed at the determined mapping position.
  • the representational graphic is a model outline of the first virtual character.
  • Figure 9 shows a schematic interface diagram showing the model outline of all areas of the first virtual character.
  • In Figure 9, where the first virtual character is an enemy virtual character, 910 is the representational graphic of the enemy virtual character, namely a model outline obtained by blurring all areas of the first virtual character.
  • Figure 10 shows a schematic interface diagram showing the model outline of a partial area of the first virtual character.
  • In Figure 10, 1010 is the representational graphic of the enemy virtual character, namely a model outline obtained by blurring a partial area of the first virtual character.
  • 1020 is the normally displayed part of the enemy virtual character, i.e., the visible area; since the area represented by 1020 is visible to the target virtual character, it does not need to be blurred.
  • It should be noted that the display priority of the model outline can be set to the highest, so that the model outline stays on top of the game scene and is not blocked by walls or other obstacles in the game; the model outline thus provides a way to display a see-through (perspective) effect.
  • At the same time, the blurred model outline avoids exposing too much information about the enemy virtual character, such as its precise position, head position, and body orientation, preventing the player controlling the target virtual character from using the see-through effect to perform balance-breaking operations such as head-locking.
  • Figure 11 shows a schematic interface diagram showing the model outlines of multiple first virtual characters.
  • In Figure 11, the first virtual characters are enemy virtual characters, and 1110, 1120, 1130, 1140, and 1150 are the representational graphics of five enemy virtual characters, namely model outlines obtained by blurring the five enemy virtual characters simultaneously.
  • Multiple sound monitoring results can be obtained by computing the sound parameter information of multiple different first virtual characters in parallel, ensuring that the representational graphics of multiple first virtual characters are tracked and rendered at the same time.
  • It is worth noting that the degrees of Gaussian blur for different first virtual characters differ, so the model outlines of the first virtual characters will vary somewhat in size and/or clarity.
  • In addition to the model outline itself, the representational graphic can also be another graphic related to the model outline.
  • In an optional embodiment, the representational graphic is a graphic obtained by blurring the model outline of the first virtual character; that is, the already blurred model outline may be further blurred to obtain the representational graphic.
  • the display duration of the ideographic graphics may also be determined based on the monitoring parameter information and/or the sound parameter information.
  • Figure 12 shows a schematic flowchart of a method of displaying the representational graphic according to a display duration. As shown in Figure 12, the method may at least include the following steps. In step S1210, a display duration of the representational graphic used to characterize the first virtual character is determined based on the listening parameter information and/or the sound parameter information.
  • the display duration of the representational graphic may be related to the first sound type of the first virtual character and to the listening ability level of the target virtual character.
  • for example, the display duration when the first sound type of the first virtual character is the first attack type may be greater than the display duration when it is the first prepared-attack type, and the display duration for the first prepared-attack type may in turn be greater than the display duration for the first movement type.
  • alternatively, the listening ability level of the target virtual character may be positively correlated with the display duration: the higher the level, the longer the representational graphic is displayed; the lower the level, the shorter the display duration.
  • the display duration may also depend on the first sound type of the first virtual character and the listening ability level of the target virtual character simultaneously; this exemplary embodiment places no special limitation on this. A sketch of such a duration rule follows.
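A minimal sketch of such a duration rule, assuming concrete seconds for the ordering given above and an assumed 10% bonus per listening ability level (both values are illustrative, not from the present disclosure):

    # Assumed base durations in seconds; only the ordering attack > prepared attack
    # > movement comes from the text above.
    BASE_DURATION = {"attack": 3.0, "prepare_attack": 2.0, "move": 1.0}

    def display_duration(first_sound_type: str, listening_level: int) -> float:
        """Duration grows with the target virtual character's listening ability level."""
        return BASE_DURATION[first_sound_type] * (1.0 + 0.1 * listening_level)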
  • in step S1220, the representational graphic is displayed at the mapping position for the determined display duration.
  • after the display duration is determined, the representational graphic of the first virtual character can be displayed for that duration.
  • in this exemplary embodiment, when a first virtual character is heard, the duration for which its representational graphic stays on screen can vary with the listening parameter information and/or the sound parameter information, adding depth and a long-term progression experience to the whole system.
  • in order to track the sound source of the first virtual character in real time, the mapping position can be updated in real time as the first position changes.
  • in response to a change in the first position, the mapping position is updated in real time, which updates the display position of the representational graphic on the graphical user interface so that the graphic reflects the position change of the first virtual character in real time; a projection sketch follows.
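One way to realize this, sketched under the assumption of a standard view/projection camera model (the matrix conventions and helper names here are not prescribed by the present disclosure), is to re-project the first position into screen space every frame:

    import numpy as np

    def world_to_screen(first_position, view, proj, screen_w, screen_h):
        """Map the first virtual character's world position to a GUI mapping position.

        Re-running this each frame keeps the representational graphic glued to
        the moving character.
        """
        p = np.append(np.asarray(first_position, dtype=float), 1.0)
        clip = proj @ (view @ p)
        ndc = clip[:3] / clip[3]                     # perspective divide
        x = (ndc[0] * 0.5 + 0.5) * screen_w          # NDC [-1, 1] -> pixels
        y = (1.0 - (ndc[1] * 0.5 + 0.5)) * screen_h  # flip Y for screen coordinates
        return x, y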
  • Figure 13 shows a schematic interface diagram reflecting the position change of the first virtual character in real time.
  • 1310, 1320 and 1330 are the representational graphics of one first virtual character at different times.
  • each graphic is obtained by updating the mapping position in real time according to the change of the first virtual character's first position at that time, and displaying the model outline for the current time at the corresponding mapping position.
  • the representational graphics at different times therefore reflect the position changes of the first virtual character, so its display position on the graphical user interface can be tracked and displayed in real time.
  • 1310, 1320 and 1330 are model outlines obtained by blurring all areas of the first virtual character while the entire first virtual character is in the non-visible area.
  • through accurate depiction and rendering of the first virtual character's representational graphic, the player can grasp the direction of the sound source in real time and intuitively, with an accurate understanding of its bearing in multiple dimensions including left-right, up-down and near-far; this solves the problem that sound direction could previously only be shown roughly and could not be tracked in real time.
  • 1340 is the normally displayed game picture, without blurring of any area, when the entire first virtual character is visible to the target virtual character.
  • it is thus also possible that the target virtual character can observe the first virtual character in its entirety, in which case there is no non-visible area: when it is determined from the camera parameters that all areas of the first virtual character are visible to the target virtual character, no representational graphic characterizing the first virtual character needs to be generated or displayed.
  • when the first virtual character is heard, besides generating and displaying its representational graphic according to the sound monitoring result, a tracking control can also be displayed according to the sound monitoring result. Here the sound parameter information includes: sound propagation distance.
  • Figure 14 shows a schematic flowchart of a method for displaying a tracking control based on the sound monitoring result. As shown in Figure 14, the method may at least include the following steps. In step S1410, a tracking control is generated according to the sound monitoring result, and the tracking control includes the sound propagation distance.
  • when the sound monitoring result is that the listening ability information is greater than or equal to the sound propagation distance, a tracking control can be generated. The tracking control may take the form of a 2D UI element, such as a circular control, a square control, or an arrow-style control; this exemplary embodiment places no special limitation on this.
  • the position of the first virtual character can be tracked in real time through the tracking control.
  • to further show how far the first virtual character indicated by the tracking control is from the target virtual character, the sound propagation distance can be added on the tracking control, or at a related position such as next to it; a sketch of such a control follows.
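A tracking control of this kind can be modeled as a small record that the UI layer draws at the mapping position; the field names and the metre label format below are illustrative assumptions:

    from dataclasses import dataclass

    @dataclass
    class TrackingControl:
        """2D UI tracking control carrying the sound propagation distance."""
        character_id: int
        shape: str            # e.g. "circle", "square" or "arrow"
        distance_m: float     # sound propagation distance shown on or next to it

        def label(self) -> str:
            return f"{self.distance_m:.0f} m"

    control = TrackingControl(character_id=1110, shape="circle", distance_m=42.0)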
  • in step S1420, the tracking control used to characterize the first virtual character is displayed at the mapping position.
  • after the mapping position is determined and the tracking control is generated, the tracking control can be displayed at that position.
  • here the mapping position may be inside the first virtual character, such as at its head or chest, or outside it, such as to its left or right; this exemplary embodiment places no special limitation on this.
  • in this exemplary embodiment, displaying the generated tracking control at the first virtual character's mapping position provides another way to track and render sound sources in real time, enriching how sound sources are presented and improving the player's gaming experience.
  • in this way, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering representational graphics, which enriches the data dimensions of the rendering, improves its dynamism and real-time performance, improves the accuracy of sound source localization, and provides more realistic auditory and visual effects.
  • furthermore, the representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, showing the first virtual character in blurred form; this depicts the direction of the first virtual character precisely while striking a balance against over-exposing its position, achieving a real-time tracking and marking effect pointing at the first virtual character.
  • when the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and improving the player's gaming experience.
  • in addition, an exemplary embodiment of the present disclosure further provides a display control device in a game, in which a graphical user interface is provided through a target terminal device.
  • the content displayed by the graphical user interface at least includes all or part of the game scene of the game.
  • the game scene includes a target virtual character whose operations are controlled through the target terminal device, and a first virtual character whose operations are controlled through other terminal devices.
  • Figure 15 shows a schematic structural diagram of the display control device in the game.
  • as shown in Figure 15, the display control device 1500 in the game may include: an information acquisition module 1510, an information calculation module 1520, a position determination module 1530 and a graphics display module 1540.
  • the information acquisition module 1510 is configured to obtain the listening parameter information of the target virtual character and the sound parameter information of the first virtual character;
  • the information calculation module 1520 is configured to compute the listening parameter information and the sound parameter information to obtain a sound monitoring result;
  • the sound monitoring result includes: whether the sound of the first virtual character can be heard;
  • the position determination module 1530 is configured to, when the sound of the first virtual character can be heard, determine the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene;
  • the graphics display module 1540 is configured to display, at the mapping position, a representational graphic characterizing the first virtual character; a module sketch follows.
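The cooperation of the four modules can be sketched as below; the method names and placeholder bodies are assumptions for illustration, not the device's actual implementation:

    class DisplayControlDevice:
        """Sketch of modules 1510-1540 wired together."""

        def acquire(self, target, first):                # information acquisition module 1510
            return target.listening_params(), first.sound_params()

        def compute(self, listen_params, sound_params):  # information calculation module 1520
            ...                                          # whether the first character is heard

        def locate(self, first_position, camera):        # position determination module 1530
            ...                                          # mapping position on the GUI

        def display(self, graphic, mapping_position):    # graphics display module 1540
            ...                                          # draw the representational graphic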
  • the listening parameter information includes: target sound type, listening ability level and noise level information;
  • obtaining the listening parameter information of the target virtual character includes: obtaining the target sound type of the target virtual character; obtaining the listening ability level of the target virtual character; and obtaining the noise level information of the target virtual character.
  • obtaining the listening ability level of the target virtual character includes: obtaining target attribute information of the target virtual character and a mapping relationship between target attribute information and listening ability levels, then querying the listening ability level corresponding to the target attribute information in that mapping relationship.
  • obtaining the noise level information of the target virtual character includes: obtaining, according to the target sound type, the target sound intensity emitted by the target virtual character and the listening noise threshold of the target virtual character; comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
  • the method further includes: determining the noise level information of the target virtual character when the target virtual character makes no sound.
  • the sound parameter information includes: a first sound type and a sound propagation distance;
  • the first sound type includes at least one of the following types: a first movement type, a first attack type and a first prepared-attack type.
  • the listening parameter information includes: listening ability level and noise level information;
  • computing the listening parameter information and the sound parameter information to obtain the sound monitoring result includes: when the noise level information indicates that the first virtual character can be heard, obtaining a first sound intensity according to the first sound type;
  • obtaining a listening coefficient of the target virtual character according to the listening ability level, computing the first sound intensity with the listening coefficient to obtain listening ability information, and comparing the listening ability information with the sound propagation distance to obtain the sound monitoring result.
  • the representational graphic may be a model outline of the first virtual character.
  • alternatively, the representational graphic may be a graphic obtained by blurring the model outline of the first virtual character.
  • determining the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene includes: determining the mapping position corresponding to the first position on the graphical user interface according to the first position of the first virtual character in the game scene and the camera parameters of a virtual camera; wherein the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed on the graphical user interface.
  • displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining, according to the camera parameters, the non-visible area of the first virtual character with respect to the target virtual character, and displaying at the mapping position a representational graphic characterizing that non-visible area; wherein the non-visible area includes all or a partial area of the first virtual character, and the partial area of the first virtual character includes one or more virtual body parts of the first virtual character.
  • the method further includes: when it is determined according to the camera parameters that all areas of the first virtual character are visible to the target virtual character, not displaying any representational graphic characterizing the first virtual character.
  • displaying a representational graphic characterizing the first virtual character at the mapping position includes: blurring the first virtual character to obtain the representational graphic characterizing the first virtual character, and displaying the representational graphic at the mapping position.
  • blurring the first virtual character to obtain a representational graphic characterizing the first virtual character includes: performing image matting on the first virtual character to obtain a first picture, and performing Gaussian blur on the first picture to obtain the representational graphic characterizing the first virtual character.
  • the listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance. Performing Gaussian blur on the first picture to obtain the representational graphic includes: performing the Gaussian blur based on the target sound type and/or the first sound type, with the blur parameters of the representational graphic determined according to the target sound type and/or the first sound type, the blur parameters including the size and/or clarity of the representational graphic; or performing the Gaussian blur based on the sound propagation distance, with the blur parameters determined according to the sound propagation distance, the blur parameters again including the size and/or clarity of the representational graphic.
  • displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining a display duration of the representational graphic based on the listening parameter information and/or the sound parameter information, and displaying the representational graphic at the mapping position for that duration.
  • the method further includes: in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the representational graphic on the graphical user interface so that the graphic reflects the position change of the first virtual character in real time.
  • the sound parameter information includes: sound propagation distance;
  • the method further includes: generating a tracking control according to the sound monitoring result, the tracking control including the sound propagation distance, and displaying the tracking control characterizing the first virtual character at the mapping position.
  • the specific details of the above display control device 1500 in the game have already been described in detail in the corresponding display control method, and are therefore not repeated here.
  • although several modules or units of the display control device 1500 in the game are mentioned in the above detailed description, this division is not mandatory.
  • in fact, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in one module or unit.
  • conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
  • in addition, in the exemplary embodiments of the present disclosure, an electronic device capable of implementing the above method is also provided.
  • an electronic device 1600 according to such an embodiment of the present disclosure is described below with reference to FIG. 16.
  • the electronic device 1600 shown in FIG. 16 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 1600 is embodied in the form of a general-purpose computing device.
  • the components of the electronic device 1600 may include, but are not limited to: the above-mentioned at least one processing unit 1610, the above-mentioned at least one storage unit 1620, a bus 1630 connecting different system components (including the storage unit 1620 and the processing unit 1610), and a display unit 1640.
  • the storage unit stores program code, and the program code can be executed by the processing unit 1610, so that the processing unit 1610 performs the steps of the various exemplary embodiments according to the present disclosure described in the "Exemplary Method" section of this specification, for example:
  • a graphical user interface is provided through a target terminal device, and the content displayed by the graphical user interface at least includes all or part of the game scene of the game; the game scene includes a target virtual character whose operations are controlled through the target terminal device and a first virtual character whose operations are controlled through other terminal devices, and the method includes:
  • obtaining the listening parameter information of the target virtual character, and obtaining the sound parameter information of the first virtual character;
  • computing the listening parameter information and the sound parameter information to obtain a sound monitoring result, the sound monitoring result including: whether the sound of the first virtual character can be heard;
  • when the sound of the first virtual character can be heard, determining the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene;
  • displaying, at the mapping position, a representational graphic characterizing the first virtual character.
  • optionally, the listening parameter information includes: target sound type, listening ability level and noise level information; obtaining the listening parameter information of the target virtual character includes: obtaining the target sound type, the listening ability level and the noise level information of the target virtual character.
  • optionally, obtaining the listening ability level of the target virtual character includes: obtaining target attribute information of the target virtual character and a mapping relationship between target attribute information and listening ability levels, and querying the listening ability level corresponding to the target attribute information in that mapping relationship.
  • optionally, obtaining the noise level information of the target virtual character includes: obtaining, according to the target sound type, the target sound intensity emitted by the target virtual character and the listening noise threshold of the target virtual character; comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character based on the comparison result.
  • optionally, the method also includes: when the target virtual character makes no sound, determining the noise level information of the target virtual character.
  • optionally, the sound parameter information includes: a first sound type and a sound propagation distance; the first sound type includes at least one of the following types: a first movement type, a first attack type and a first prepared-attack type.
  • optionally, the listening parameter information includes: listening ability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance; computing the listening parameter information and the sound parameter information to obtain the sound monitoring result includes: when the noise level information indicates that the first virtual character can be heard, obtaining a first sound intensity according to the first sound type; obtaining a listening coefficient of the target virtual character according to the listening ability level, and computing the first sound intensity with the listening coefficient to obtain listening ability information; and comparing the listening ability information with the sound propagation distance to obtain the sound monitoring result.
  • optionally, the representational graphic is a model outline of the first virtual character.
  • optionally, the representational graphic is a graphic obtained by blurring the model outline of the first virtual character.
  • optionally, determining the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene includes: determining the mapping position corresponding to the first position on the graphical user interface according to the first position of the first virtual character in the game scene and the camera parameters of a virtual camera; wherein the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed on the graphical user interface.
  • optionally, displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining, according to the camera parameters, the non-visible area of the first virtual character with respect to the target virtual character, and displaying at the mapping position a representational graphic characterizing that non-visible area; wherein the non-visible area includes all or a partial area of the first virtual character, and the partial area of the first virtual character includes one or more virtual body parts of the first virtual character.
  • optionally, the method also includes: when it is determined according to the camera parameters that all areas of the first virtual character are visible to the target virtual character, not displaying any representational graphic characterizing the first virtual character.
  • optionally, displaying a representational graphic characterizing the first virtual character at the mapping position includes: blurring the first virtual character to obtain a representational graphic characterizing the first virtual character, and displaying the representational graphic at the mapping position.
  • optionally, blurring the first virtual character to obtain a representational graphic characterizing the first virtual character includes: performing image matting on the first virtual character to obtain a first picture; and performing Gaussian blur on the first picture to obtain the representational graphic characterizing the first virtual character.
  • optionally, the listening parameter information includes: target sound type; the sound parameter information includes: first sound type and sound propagation distance; performing Gaussian blur on the first picture to obtain the representational graphic includes: performing the Gaussian blur based on the target sound type and/or the first sound type, with the blur parameters of the representational graphic determined according to the target sound type and/or the first sound type, the blur parameters including the size and/or clarity of the representational graphic; or performing the Gaussian blur based on the sound propagation distance, with the blur parameters determined according to the sound propagation distance, the blur parameters including the size and/or clarity of the representational graphic.
  • optionally, displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining a display duration of the representational graphic based on the listening parameter information and/or the sound parameter information; and displaying the representational graphic at the mapping position for that display duration.
  • optionally, the method also includes: in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the representational graphic on the graphical user interface, so that the representational graphic reflects the position change of the first virtual character in real time.
  • optionally, the sound parameter information includes: sound propagation distance, and the method also includes: generating a tracking control according to the sound monitoring result, the tracking control including the sound propagation distance; and displaying the tracking control characterizing the first virtual character at the mapping position.
  • in these exemplary embodiments, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering representational graphics, enriching the data dimensions of the rendering, improving its dynamism and real-time performance, improving the accuracy of sound source localization, and providing more realistic auditory and visual effects.
  • furthermore, the representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, showing the first virtual character in blurred form; this depicts the direction of the first virtual character precisely while striking a balance against over-exposing its position, achieving a real-time tracking and marking effect pointing at the first virtual character.
  • when the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and improving the player's gaming experience.
  • the storage unit 1620 may include a readable medium in the form of a volatile storage unit, such as a random access memory (RAM) unit 1621 and/or a cache storage unit 1622, and may further include a read-only memory (ROM) unit 1623.
  • the storage unit 1620 may also include a program/utility 1624 having a set of (at least one) program modules 1625, including but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may include an implementation of a network environment.
  • the bus 1630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, a graphics acceleration port, a processing unit, or a local bus using any of a variety of bus architectures.
  • the electronic device 1600 may also communicate with one or more external devices 1800 (e.g., a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1600, and/or with any device (e.g., a router, a modem, etc.) that enables the electronic device 1600 to communicate with one or more other computing devices. This communication may occur through an input/output (I/O) interface 1650.
  • the electronic device 1600 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1660. As shown, the network adapter 1660 communicates with the other modules of the electronic device 1600 via the bus 1630.
  • the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute a method according to an embodiment of the present disclosure.
  • in the exemplary embodiments of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the method described above in this specification is stored.
  • in some possible embodiments, various aspects of the present disclosure may also be implemented in the form of a program product including program code.
  • when the program product is run on a terminal device, the program code causes the terminal device to perform the steps according to the various exemplary embodiments of the present disclosure described in the "Exemplary Method" section above, for example the same steps and optional features listed above for the electronic device 1600.
  • referring to FIG. 17, a program product 1700 for implementing the above method according to an embodiment of the present disclosure is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer.
  • however, the program product of the present disclosure is not limited thereto.
  • in this document, a readable storage medium may be any tangible medium containing or storing a program that may be used by, or in conjunction with, an instruction execution system, apparatus, or device.
  • the program product may take the form of any combination of one or more readable media.
  • the readable medium may be a readable signal medium or a readable storage medium.
  • the readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more conductors, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code therein. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • a readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device.
  • program code embodied on a readable medium may be transmitted using any suitable medium, including but not limited to wireless, wireline, optical cable, RF, etc., or any suitable combination of the foregoing.
  • program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
  • in scenarios involving a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Stereophonic System (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A display control method and apparatus in a game, a storage medium, and an electronic device. The method includes: obtaining listening parameter information of a target virtual character and sound parameter information of a first virtual character; computing the listening parameter information and the sound parameter information to obtain a sound monitoring result, the sound monitoring result including whether the sound of the first virtual character can be heard; when the sound of the first virtual character can be heard, determining a corresponding mapping position on a graphical user interface according to a first position of the first virtual character in the game scene; and displaying, at the mapping position, a representational graphic characterizing the first virtual character. The method improves the dynamism and real-time performance of representational graphic rendering, improves the accuracy of sound source localization, strikes a balance between precisely depicting the direction of the first virtual character and over-exposing it, and enables multiple sound sources in the same direction to be tracked.

Description

Display Control Method and Apparatus in Game, Storage Medium, and Electronic Device
Cross-Reference to Related Applications
The present disclosure claims priority to the Chinese patent application No. 202210505766.X, filed on May 10, 2022 and entitled "Display control method and apparatus in game, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of terminal display, and in particular to a display control method in a game, a display control apparatus in a game, a computer-readable storage medium, and an electronic device.
Background
At present, visualizing sound in games has become an important way to enhance players' audio-visual experience and improve their gaming experience. In shooting games, sound is mainly visualized by displaying UI (User Interface) prompts at the top of the screen, where a volume bar chart is shown at the compass position above the screen, either combined with the compass or using compass-like display logic, to mark the direction from which a sound is emitted and remind the player of it. This has further been extended, on top of the compass, to indicate whether the sound emitter is above or below the player's position.
However, this compass-based guidance is normally displayed at the top of the HUD (Head Up Display); when the player faces straight up or straight down it easily causes a wrong perception of direction, so it can only roughly indicate where a sound comes from. Moreover, since it can only distinguish above from below relative to the player's position, and not whether a source is directly overhead or ahead, it cannot accurately locate how far above the player a sound is, let alone track a sound source in real time. In particular, when both the sound emitter and the listener are moving, having the HUD UI follow in real time makes the voiceprint UI flicker and reduces the clarity of the information. Furthermore, if several characters emit sounds in one direction at the same time, the HUD UI cannot display multiple voiceprints simultaneously because it is difficult to track multiple sound sources at once, leading players to misjudge the situation and degrading their gaming experience.
In view of this, there is an urgent need in the art to develop a new display control method and apparatus in a game.
It should be noted that the information disclosed in the above Background section is only intended to enhance the understanding of the background of the present disclosure, and may therefore include information that does not constitute prior art known to those of ordinary skill in the art.
Summary
An object of the present disclosure is to provide a display control method in a game, a display control apparatus in a game, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the technical problems of inaccurate sound source localization and poor real-time and parallel tracking caused by the limitations of the related art.
Other characteristics and advantages of the present disclosure will become apparent from the following detailed description, or will be learned in part through practice of the present disclosure.
According to a first aspect of the embodiments of the present disclosure, a display control method in a game is provided. A graphical user interface is provided through a target terminal device, the content displayed by the graphical user interface at least includes all or part of a game scene of the game, and the game scene includes a target virtual character whose operations are controlled through the target terminal device and a first virtual character whose operations are controlled through other terminal devices. The method includes: obtaining listening parameter information of the target virtual character, and obtaining sound parameter information of the first virtual character; computing the listening parameter information and the sound parameter information to obtain a sound monitoring result, the sound monitoring result including: whether the sound of the first virtual character can be heard; when the sound of the first virtual character can be heard, determining a corresponding mapping position on the graphical user interface according to a first position of the first virtual character in the game scene; and displaying, at the mapping position, a representational graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, the listening parameter information includes: a target sound type, a listening ability level and noise level information; obtaining the listening parameter information of the target virtual character includes: obtaining the target sound type of the target virtual character; obtaining the listening ability level of the target virtual character; and obtaining the noise level information of the target virtual character.
In an exemplary embodiment of the present disclosure, obtaining the listening ability level of the target virtual character includes: obtaining target attribute information of the target virtual character, and obtaining a mapping relationship between target attribute information and listening ability levels; and querying, in the mapping relationship, the listening ability level corresponding to the target attribute information.
In an exemplary embodiment of the present disclosure, obtaining the noise level information of the target virtual character includes: obtaining, according to the target sound type, the target sound intensity emitted by the target virtual character, and obtaining, according to the target sound type, the listening noise threshold of the target virtual character; and comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
In an exemplary embodiment of the present disclosure, the method further includes: when the target virtual character makes no sound, determining the noise level information of the target virtual character.
In an exemplary embodiment of the present disclosure, the sound parameter information includes: a first sound type and a sound propagation distance, the first sound type including at least one of the following types: a first movement type, a first attack type and a first prepared-attack type.
In an exemplary embodiment of the present disclosure, the listening parameter information includes: a listening ability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance; computing the listening parameter information and the sound parameter information to obtain a sound monitoring result includes: when the noise level information indicates that the first virtual character can be heard, obtaining a first sound intensity according to the first sound type; obtaining a listening coefficient of the target virtual character according to the listening ability level, and computing the first sound intensity with the listening coefficient to obtain listening ability information; and comparing the listening ability information with the sound propagation distance to obtain the sound monitoring result.
In an exemplary embodiment of the present disclosure, the representational graphic is a model outline of the first virtual character.
In an exemplary embodiment of the present disclosure, the representational graphic is a graphic obtained by blurring the model outline of the first virtual character.
In an exemplary embodiment of the present disclosure, determining the corresponding mapping position on the graphical user interface according to the first position of the first virtual character in the game scene includes: determining the mapping position corresponding to the first position on the graphical user interface according to the first position of the first virtual character in the game scene and the camera parameters of a virtual camera, where the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed on the graphical user interface.
In an exemplary embodiment of the present disclosure, displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining, according to the camera parameters, the non-visible area of the first virtual character with respect to the target virtual character, and displaying at the mapping position a representational graphic characterizing that non-visible area, where the non-visible area includes all or a partial area of the first virtual character, and the partial area of the first virtual character includes one or more virtual body parts of the first virtual character.
In an exemplary embodiment of the present disclosure, the method further includes: when it is determined according to the camera parameters that all areas of the first virtual character are visible to the target virtual character, not displaying any representational graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, displaying a representational graphic characterizing the first virtual character at the mapping position includes: blurring the first virtual character to obtain a representational graphic characterizing the first virtual character, and displaying the representational graphic at the mapping position.
In an exemplary embodiment of the present disclosure, blurring the first virtual character to obtain a representational graphic characterizing the first virtual character includes: performing image matting on the first virtual character to obtain a first picture; and performing Gaussian blur on the first picture to obtain the representational graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, the listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance; performing Gaussian blur on the first picture to obtain the representational graphic characterizing the first virtual character includes: performing the Gaussian blur on the first picture based on the target sound type and/or the first sound type, with the blur parameters of the representational graphic determined according to the target sound type and/or the first sound type, the blur parameters including the size and/or clarity of the representational graphic; or performing the Gaussian blur on the first picture based on the sound propagation distance, with the blur parameters determined according to the sound propagation distance, the blur parameters including the size and/or clarity of the representational graphic.
In an exemplary embodiment of the present disclosure, displaying a representational graphic characterizing the first virtual character at the mapping position includes: determining a display duration of the representational graphic based on the listening parameter information and/or the sound parameter information; and displaying the representational graphic at the mapping position for that display duration.
In an exemplary embodiment of the present disclosure, the method further includes: in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the representational graphic on the graphical user interface, so that the representational graphic reflects the position change of the first virtual character in real time.
In an exemplary embodiment of the present disclosure, the sound parameter information includes: a sound propagation distance, and the method further includes: generating a tracking control according to the sound monitoring result, the tracking control including the sound propagation distance; and displaying the tracking control characterizing the first virtual character at the mapping position.
According to a second aspect of the embodiments of the present disclosure, a display control apparatus in a game is provided. A graphical user interface is provided through a target terminal device, the content displayed by the graphical user interface at least includes all or part of a game scene of the game, and the game scene includes a target virtual character whose operations are controlled through the target terminal device and a first virtual character whose operations are controlled through other terminal devices. The apparatus includes: an information acquisition module configured to obtain listening parameter information of the target virtual character and sound parameter information of the first virtual character; an information calculation module configured to compute the listening parameter information and the sound parameter information to obtain a sound monitoring result, the sound monitoring result including: whether the sound of the first virtual character can be heard; a position determination module configured to, when the sound of the first virtual character can be heard, determine a corresponding mapping position on the graphical user interface according to a first position of the first virtual character in the game scene; and a graphics display module configured to display, at the mapping position, a representational graphic characterizing the first virtual character.
According to a third aspect of the embodiments of the present disclosure, an electronic device is provided, including a processor and a memory, where computer-readable instructions are stored on the memory, and when executed by the processor, the computer-readable instructions implement the display control method in a game of any of the above exemplary embodiments.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the display control method in a game of any of the above exemplary embodiments.
As can be seen from the above technical solutions, the display control method in a game, the display control apparatus in a game, the computer storage medium and the electronic device in the exemplary embodiments of the present disclosure have at least the following advantages and positive effects:
In the method and apparatus provided by the exemplary embodiments of the present disclosure, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering representational graphics, which enriches the data dimensions of the rendering, improves its dynamism and real-time performance, improves the accuracy of sound source localization, and provides more realistic auditory and visual effects. Furthermore, the representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, showing the first virtual character in blurred form; this depicts the direction of the first virtual character precisely while striking a balance against over-exposing its position, achieving a real-time tracking and marking effect pointing at the first virtual character. When the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and improving the player's gaming experience.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and cannot limit the present disclosure.
Brief Description of the Drawings
Figure 1 shows a schematic interface diagram of displaying sound emitters in the related art;
Figure 2 schematically shows a flowchart of a display control method in a game in an exemplary embodiment of the present disclosure;
Figure 3 schematically shows a flowchart of a method for obtaining the listening parameter information of the target virtual character in an exemplary embodiment of the present disclosure;
Figure 4 schematically shows a flowchart of a method for obtaining the listening ability level of the target virtual character in an exemplary embodiment of the present disclosure;
Figure 5 schematically shows a flowchart of a method for obtaining the noise level information of the target virtual character in an exemplary embodiment of the present disclosure;
Figure 6 schematically shows a flowchart of a method for computing the listening parameter information and the sound parameter information in an exemplary embodiment of the present disclosure;
Figure 7 schematically shows a flowchart of a method for blurring the first virtual character in an exemplary embodiment of the present disclosure;
Figure 8 schematically shows a flowchart of a method for performing Gaussian blur on the first picture in an exemplary embodiment of the present disclosure;
Figure 9 schematically shows an interface diagram displaying the model outline of all areas of the first virtual character in an exemplary embodiment of the present disclosure;
Figure 10 schematically shows an interface diagram displaying the model outline of a partial area of the first virtual character in an exemplary embodiment of the present disclosure;
Figure 11 schematically shows an interface diagram displaying the model outlines of multiple first virtual characters in an exemplary embodiment of the present disclosure;
Figure 12 schematically shows a flowchart of a method for displaying the representational graphic according to the display duration in an exemplary embodiment of the present disclosure;
Figure 13 schematically shows an interface diagram reflecting the position change of the first virtual character in real time in an exemplary embodiment of the present disclosure;
Figure 14 schematically shows a flowchart of a method for displaying a tracking control according to the sound monitoring result in an exemplary embodiment of the present disclosure;
Figure 15 schematically shows a structural diagram of a display control apparatus in a game in an exemplary embodiment of the present disclosure;
Figure 16 schematically shows an electronic device for implementing a display control method in a game in an exemplary embodiment of the present disclosure;
Figure 17 schematically shows a computer-readable storage medium for implementing a display control method in a game in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be more thorough and complete, and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in one or more embodiments in any suitable manner. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or with other methods, components, apparatuses, steps, and the like. In other cases, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
The terms "a", "an", "the" and "said" are used in this specification to indicate the presence of one or more elements/components/etc.; the terms "including" and "having" are used in an open-ended, inclusive sense and mean that additional elements/components/etc. may exist besides those listed; and the terms "first", "second", etc. are used only as labels, not as limitations on the number of their objects.
In addition, the drawings are only schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, so repeated description of them will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities.
At present, visualizing sound in games has become an important way to enhance players' audio-visual effects and improve their gaming experience. In shooting games, sound is mainly visualized by displaying UI prompts at the top of the screen.
Specifically, a volume bar chart is displayed at the compass position above the screen, either combined with the compass or using compass-like display logic, to mark the direction from which a sound is emitted and remind the player of it. Further, this is extended on top of the compass to show whether the sound emitter is above or below the player's position.
Figure 1 shows a schematic interface diagram of displaying sound emitters in the related art. As shown in Figure 1, 200 indicates that the current facing direction is 200 degrees. The first, diamond-shaped volume bar indicates a sound emitter in the horizontal direction; the second volume bar, presented downward from the horizontal, indicates a sound emitter below the player's position; and the third, presented upward from the horizontal, indicates another sound emitter above the player's position.
In addition, the horizontal right-pointing triangle symbol and curve displayed at the very end indicate that there are further sound emitters beyond the compass's current display range.
However, this compass-based guidance is normally displayed at the top of the HUD; when the player faces straight up or straight down it easily causes a wrong perception of direction, so it can only roughly indicate the position of a sound. Moreover, since it can only distinguish above from below relative to the player's position, and not whether a source is overhead or ahead, it cannot accurately locate how far above the player a sound is. Not to mention that this approach can hardly track a sound source in real time: in particular, when the sound emitter and the listener move at the same time, a HUD UI that follows in real time makes the voiceprint UI flicker and reduces information clarity. Furthermore, if multiple characters emit sounds in one direction at the same time, the HUD UI cannot display multiple voiceprints simultaneously because multiple sound sources are hard to track at once, leading players to misjudge and degrading their gaming experience.
The display control method in a game in one embodiment of the present disclosure can run on a local terminal device or on a server. When the method runs on a server, it can be implemented and executed based on a cloud interaction system, which includes a server and client devices.
In an optional embodiment, various cloud applications, such as cloud games, can run under the cloud interaction system. Taking a cloud game as an example, a cloud game is a game mode based on cloud computing. In the running mode of a cloud game, the running body of the game program and the presentation body of the game picture are separated: the storage and running of the display control method in the game are completed on a cloud game server, and the client device is used for receiving and sending data and presenting the game picture. For example, the client device may be a display device with data transmission functions close to the user side, such as a mobile terminal, a television, a computer, or a handheld computer, while the information processing is done by the cloud game server in the cloud. When playing, the player operates the client device to send operation instructions to the cloud game server; the server runs the game according to the operation instructions, encodes and compresses data such as the game pictures, and returns them to the client device over the network; finally, the client device decodes and outputs the game pictures.
In an optional embodiment, taking a game as an example, the local terminal device stores the game program and presents the game pictures. The local terminal device interacts with the player through a graphical user interface, i.e., the game program is conventionally downloaded, installed, and run through an electronic device. The local terminal device may provide the graphical user interface to the player in various ways; for example, it may be rendered on the display screen of the terminal, or provided to the player by holographic projection. For example, the local terminal device may include a display screen for presenting the graphical user interface, which includes game pictures, and a processor for running the game, generating the graphical user interface, and controlling its display on the display screen.
In a possible embodiment, an embodiment of the present disclosure provides a display control method in a game, where a graphical user interface is provided through a terminal device, which may be the aforementioned local terminal device or the client device in the aforementioned cloud interaction system.
In view of the problems in the related art, the present disclosure proposes a display control method in a game. A graphical user interface is provided through a target terminal device; the content displayed by the graphical user interface at least includes all or part of the game scene of the game, and the game scene includes a target virtual character whose operations are controlled through the target terminal device and a first virtual character whose operations are controlled through other terminal devices. Figure 2 shows a flowchart of the display control method in the game. As shown in Figure 2, the method at least includes the following steps:
In step S210, listening parameter information of the target virtual character is obtained, and sound parameter information of the first virtual character is obtained.
In step S220, the listening parameter information and the sound parameter information are computed to obtain a sound monitoring result, the sound monitoring result including: whether the sound of the first virtual character can be heard.
In step S230, when the sound of the first virtual character can be heard, a corresponding mapping position on the graphical user interface is determined according to the first position of the first virtual character in the game scene.
In step S240, a representational graphic characterizing the first virtual character is displayed at the mapping position.
In the exemplary embodiments of the present disclosure, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are obtained as the data basis for rendering representational graphics, which enriches the data dimensions of the rendering, improves its dynamism and real-time performance, improves the accuracy of sound source localization, and provides more realistic auditory and visual effects. Furthermore, the representational graphic of the first virtual character is rendered and displayed according to the sound monitoring result, showing the first virtual character in blurred form; this depicts the direction of the first virtual character precisely while striking a balance against over-exposing its position, achieving a real-time tracking and marking effect pointing at the first virtual character. When the representational graphics of multiple first virtual characters are rendered and displayed at the same time, the problem of being unable to track multiple sound sources in the same direction is further solved, making it easier for players to grasp the number of first virtual characters and improving the player's gaming experience.
Each step of the display control method in the game is described in detail below.
In step S210, the listening parameter information of the target virtual character is obtained, and the sound parameter information of the first virtual character is obtained.
In the exemplary embodiments of the present disclosure, the target virtual character may be the game virtual character whose operations the current player controls through the target terminal device.
In an optional embodiment, the listening parameter information includes: a target sound type, a listening ability level, and noise level information. Figure 3 shows a flowchart of a method for obtaining the listening parameter information of the target virtual character. As shown in Figure 3, the method at least includes the following steps: in step S310, the target sound type of the target virtual character is obtained.
The target sound type of the target virtual character is the sound type of the sound emitted by the target virtual character.
For example, the target sound type may include types of the target virtual character moving, attacking, and preparing to attack; this exemplary embodiment places no special limitation on this.
Specifically, the movement types may include running, crouch-walking, jumping, walking slowly and walking quickly; the attack types may include firing a gun and throwing a grenade; and the prepare-to-attack types may include reloading a magazine, opening a scope and pulling the safety pin of a grenade.
In step S320, the listening ability level of the target virtual character is obtained.
In an optional embodiment, Figure 4 shows a flowchart of a method for obtaining the listening ability level of the target virtual character. As shown in Figure 4, the method at least includes the following steps: in step S410, target attribute information of the target virtual character is obtained, and a mapping relationship between target attribute information and listening ability levels is obtained.
The target attribute information of the target virtual character may be the game level of the target virtual character, or the number or progress of listening-ability-raising tasks the target virtual character has completed; this exemplary embodiment places no special limitation on this.
A mapping relationship between the target attribute information of the target virtual character and the listening ability level may be preset. The mapping relationship may be a uniform relationship set for all game virtual characters, or a correspondence set differently for different game virtual characters; this exemplary embodiment places no special limitation on this.
In step S420, the listening ability level corresponding to the target attribute information is queried in the mapping relationship.
After the mapping relationship between the target attribute information of the target virtual character and listening ability levels is obtained, the listening ability level corresponding to the current target attribute information of the target virtual character can be queried in the mapping relationship.
The listening ability level can also serve as a growth value of the target virtual character, improving long-term player retention.
In this exemplary embodiment, the listening ability level of the target virtual character can be obtained by querying the mapping relationship between target attribute information and listening ability levels; this determination is simple and accurate, and provides the target-virtual-character-side data for displaying the representational graphic of the first virtual character.
In step S330, the noise level information of the target virtual character is obtained.
In an optional embodiment, Figure 5 shows a flowchart of a method for obtaining the noise level information of the target virtual character. As shown in Figure 5, the method at least includes the following steps: in step S510, the target sound intensity emitted by the target virtual character is obtained according to the target sound type, and the listening noise threshold of the target virtual character is obtained according to the target sound type.
Since a corresponding sound intensity is set for each target sound type, the corresponding target sound intensity can be determined according to the obtained target sound type of the target virtual character. For example, when the target sound type is running, the corresponding target sound intensity may be 30 decibels; when the target sound type is firing a gun, the corresponding target sound intensity may be 40 decibels.
Since a corresponding listening noise threshold is also set for each target sound type, the corresponding listening noise threshold can be determined according to the obtained target sound type of the target virtual character. The listening noise threshold characterizes the maximum sound intensity that can still be heard while the target virtual character is emitting the target sound intensity. For example, when the target sound type is running, the corresponding listening noise threshold may be 20 decibels; when the target sound type is firing a gun, the corresponding listening noise threshold may be 10 decibels.
In step S520, the target sound intensity is compared with the listening noise threshold to obtain a comparison result, and the noise level information of the target virtual character is determined according to the comparison result.
After the target sound intensity and the listening noise threshold are obtained according to the target sound type, they can be compared to obtain a comparison result, and the noise level information of the target virtual character is determined accordingly. When the comparison result is that the target sound intensity is less than or equal to the listening noise threshold, the target virtual character can hear the sound emitted by the corresponding first virtual character. When the comparison result is that the target sound intensity is greater than the listening noise threshold, the noise emitted by the target virtual character itself is too loud, so the corresponding first virtual character cannot be heard.
Obtaining the noise level information in this way fits actual scenarios more closely, for example that a target virtual character who is firing a gun finds it harder to pick out gunshots in the environment, making the simulation more realistic.
In this exemplary embodiment, comparing the target sound intensity with the listening noise threshold determines the noise level information of the target virtual character, providing the basis for judging whether the first virtual character can be heard, and a prerequisite criterion for displaying its representational graphic.
In addition, when the target virtual character makes no sound, the corresponding noise level information can be determined directly.
In an optional embodiment, when the target virtual character makes no sound, the noise level information of the target virtual character is determined as follows: since a silent target virtual character produces no noise that interferes with listening, its current noise level information can directly indicate that the sound emitted by the corresponding first virtual character can be heard.
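A minimal sketch of this noise gate, using the example decibel values quoted above (the dictionary layout and the function name are assumptions for illustration):

    # Intensities and listening noise thresholds in decibels, from the examples above.
    TARGET_SOUND_INTENSITY = {"run": 30, "shoot": 40}
    LISTENING_NOISE_THRESHOLD = {"run": 20, "shoot": 10}

    def can_listen(target_sound_type):
        """Noise level information: a silent target can always listen; otherwise its
        own sound intensity must not exceed the listening noise threshold."""
        if target_sound_type is None:       # the target virtual character makes no sound
            return True
        return (TARGET_SOUND_INTENSITY[target_sound_type]
                <= LISTENING_NOISE_THRESHOLD[target_sound_type])

With the example values above, can_listen("run") and can_listen("shoot") both return False, matching the intuition that a character who is running or firing drowns out sounds in the environment.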
On the other hand, the sound parameter information of one or more first virtual characters corresponding to the target virtual character can be obtained at the same time, so that one or more sound sources are tracked simultaneously and the player controlling the target virtual character can accurately learn how many first virtual characters there are.
The first virtual character may be one or more game virtual characters from the faction hostile to the target virtual character, or one or more teammate game virtual characters in the same faction as the target virtual character; this exemplary embodiment places no special limitation on this.
In an optional embodiment, the sound parameter information includes: a first sound type and a sound propagation distance. The first sound type may include a first movement type, a first attack type and a first prepared-attack type; this exemplary embodiment places no special limitation on this.
The sound propagation distance may be the distance the sound travels from the first virtual character to the target virtual character's position, i.e., the current distance between the first virtual character and the target virtual character.
Specifically, the first movement type may include the first virtual character running, crouch-walking, jumping, walking slowly or walking quickly; the first attack type may include the first virtual character firing a gun or throwing a grenade; and the first prepared-attack type may include preparations preceding an attack, such as the first virtual character reloading a magazine, opening a scope or pulling the safety pin of a grenade.
Thus, obtaining the first sound type of the first virtual character supports effects such as a piercing sniper shot being easier to pick up than a short burst of submachine-gun fire or footsteps, while the sound propagation distance affects how visible the representational graphic of the first virtual character can be.
In step S220, the listening parameter information and the sound parameter information are computed to obtain a sound monitoring result, the sound monitoring result including: whether the sound of the first virtual character can be heard.
In the exemplary embodiments of the present disclosure, after the listening parameter information of the target virtual character and the sound parameter information of the enemy first virtual character are obtained, the listening parameter information can be computed in parallel against the sound parameter information of different first virtual characters to obtain the target virtual character's sound monitoring results, ensuring that the representational graphics of multiple sound sources are tracked and rendered at the same time.
In an optional embodiment, the listening parameter information includes: a listening ability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance. Figure 6 shows a flowchart of a method for computing the listening parameter information and the sound parameter information. As shown in Figure 6, the method at least includes the following steps: in step S610, when the noise level information indicates that the first virtual character can be heard, a first sound intensity is obtained according to the first sound type.
When the target sound intensity of the target virtual character is less than or equal to the listening noise threshold, or when the target virtual character makes no sound, the noise level information indicates that the first virtual character can be heard at this time. In that case, since a corresponding first sound intensity is also set for each first sound type, the current first sound intensity can be obtained according to the obtained first sound type. For example, when the first sound type is running, the corresponding first sound intensity may be 30 decibels; when the first sound type is firing a gun, the corresponding first sound intensity may be 40 decibels.
In step S620, a listening coefficient of the target virtual character is obtained according to the listening ability level, and the first sound intensity and the listening coefficient are computed to obtain listening ability information.
Since a corresponding listening coefficient is set in a manner linearly related to the listening ability level, the corresponding listening coefficient can be obtained once the listening ability level of the target virtual character is known. Computing the first sound intensity with the listening coefficient then yields the listening ability information. Specifically, the computation may be a multiplication of the first sound intensity by the listening coefficient, or another computation set according to the actual situation; this exemplary embodiment places no special limitation on this.
In step S630, the listening ability information is compared with the sound propagation distance to obtain the sound monitoring result.
After the listening ability information is computed, it can be compared with the obtained sound propagation distance to obtain the sound monitoring result: either the listening ability information is greater than or equal to the sound propagation distance, or it is less than the sound propagation distance.
Together with the preceding noise-gate judgment, determining the sound monitoring result forms a rigorous two-stage decision for displaying the representational graphic of the first virtual character. This decision can be characterized by formula (1); the original typeset formula is not reproduced in this text, so it is reconstructed here in plain notation from the variable definitions that follow:

    b2 ∈ U_a2  if and only if  X_b1 ≤ Y_b1  and  M · X_a1 ≥ dS        (1)

where U is the set of target virtual characters able to hear the first virtual character, a1 is the first sound type, b1 is the target sound type, a2 is the sounding first virtual character, b2 is the target virtual character, X is the target virtual character's target sound intensity or the first virtual character's first sound intensity (indexed by the corresponding sound type), Y is the listening noise threshold, M is the listening coefficient, and dS is the sound propagation distance.
In this exemplary embodiment, computing and comparing the first sound intensity and the listening coefficient further determines the sound monitoring result and judges whether the first virtual character is simply too far away to hear; this fits reality more closely and strikes a balance between displaying the representational graphic of the first virtual character and keeping the first virtual character safe.
In computing the sound monitoring result, data of several different dimensions enter the computation, which supports optimization of the corresponding voiceprint system and deepens its later numerical growth, guaranteeing differentiated experiences, for example listeners of different levels extracting different amounts of information from the same sound, which changes players' late-game strategies.
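A sketch of the two-stage decision in formula (1), with the linear listening coefficient assumed to be 0.5 per level (the concrete constant and the intensity values are assumptions, not values from the present disclosure):

    FIRST_SOUND_INTENSITY = {"move": 30, "prepare_attack": 35, "attack": 40}  # dB, assumed

    def sound_monitoring_result(first_sound_type, listening_level,
                                propagation_distance, can_listen):
        """Stage 1: the listener's own noise gate; stage 2: M * X >= dS."""
        if not can_listen:                            # noise level information gate
            return False
        x = FIRST_SOUND_INTENSITY[first_sound_type]   # first sound intensity X
        m = 0.5 * listening_level                     # listening coefficient M
        return x * m >= propagation_distance          # listening ability vs. dS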
In step S230, when the sound of the first virtual character can be heard, a corresponding mapping position on the graphical user interface is determined according to the first position of the first virtual character in the game scene.
In the exemplary embodiments of the present disclosure, after the sound monitoring result is computed, if the result is that the sound of the first virtual character can be heard, the corresponding mapping position can be determined according to the first position of the first virtual character in the game scene.
In an optional embodiment, the mapping position corresponding to the first position on the graphical user interface is determined according to the first position of the first virtual character in the game scene and the camera parameters of a virtual camera, where the virtual camera is used to photograph all or part of the game scene of the game to obtain the game scene picture displayed on the graphical user interface. The game scene picture displayed on the graphical user interface of the target terminal device is exactly the game scene content captured by the virtual camera.
For example, in a first-person game the virtual camera may be placed at the head of the target virtual character, with camera parameters such as its orientation rotating as the target virtual character turns; the game scene picture rendered on the graphical user interface is equivalent to the game scene content captured by the virtual camera. In a third-person game the virtual camera may be placed behind and above, or directly above, the target virtual character, with camera parameters such as its orientation following the target virtual character's movement, photographing a certain area of the game scene around the target virtual character from a fixed angle.
Therefore, the mapping position corresponding to the first position of the first virtual character in the game scene can be determined from camera parameters such as the orientation of the target virtual character's virtual camera, i.e., from the target virtual character's viewing angle. Specifically, the mapping position corresponding to the first position may be the same position as the first position, or another associated position; this exemplary embodiment places no special limitation on this.
In step S240, a representational graphic characterizing the first virtual character is displayed at the mapping position.
In the exemplary embodiments of the present disclosure, after the sound monitoring result is determined to be that the sound of the first virtual character can be heard, and the mapping position is determined, the representational graphic of the first virtual character can be displayed at the mapping position.
In an optional embodiment, the non-visible area of the first virtual character with respect to the target virtual character is determined according to the camera parameters, and a representational graphic characterizing that non-visible area is displayed at the mapping position; the non-visible area includes all or a partial area of the first virtual character, and the partial area includes one or more virtual body parts of the first virtual character.
When the first virtual character is observed from the target virtual character's viewing angle according to the camera parameters of the virtual camera, the target virtual character may be able to see the complete first virtual character, or only part of it. To generate the representational graphic of the first virtual character according to the target virtual character's viewing angle, the non-visible area of the first virtual character under that viewing angle can be determined from the camera parameters. The non-visible area is the area of the first virtual character that the target virtual character cannot see.
When the target virtual character cannot observe the first virtual character at all, the non-visible area is the entire first virtual character; when the target virtual character can observe partial areas such as the head of the first virtual character, the non-visible area is the remaining areas other than the head. Therefore, the non-visible area of the first virtual character may be one or more of its virtual body parts. It is worth noting that the non-visible area is not strictly divided along the first virtual character's virtual body parts; as with a real viewing angle in an actual scene, part of a virtual body part may be visible while another part of it is not.
After the non-visible area of the first virtual character with respect to the target virtual character is determined, a representational graphic characterizing that non-visible area can be generated and displayed.
In an optional embodiment, the first virtual character is blurred to obtain the representational graphic characterizing the first virtual character, and the representational graphic is displayed at the mapping position. It is worth noting that the part of the first virtual character that is blurred is its non-visible area; the area of the first virtual character visible to the target virtual character does not need to be blurred (see the sketch below).
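A sketch of selecting the non-visible area, assuming the per-part visibility test against the camera has already been done elsewhere (the part names and flag layout are illustrative):

    def non_visible_parts(visibility):
        """Keep only the hidden virtual body parts; only these are blurred."""
        return [part for part, visible in visibility.items() if not visible]

    hidden = non_visible_parts({"head": True, "torso": False, "legs": False})
    # -> ["torso", "legs"]; blur and outline these regions, draw "head" normally.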
In an optional embodiment, Figure 7 shows a flowchart of a method for blurring the first virtual character. As shown in Figure 7, the method at least includes the following steps: in step S710, image matting is performed on the first virtual character to obtain a first picture.
The first virtual character is matted out using portrait segmentation or intelligent matting technology to obtain the first picture of the first virtual character. The first picture may be a picture formed from the outline of the first character and its interior.
In step S720, Gaussian blur is performed on the first picture to obtain the representational graphic characterizing the first virtual character.
Gaussian blur, also called Gaussian smoothing, is a processing effect widely used in image processing software such as Adobe Photoshop (an image processing program), GIMP (GNU Image Manipulation Program) and Paint.NET (an image and photo processing program), usually to reduce image noise and the level of detail. An image produced by this blurring technique looks as if it were viewed through frosted glass, which is clearly different from the bokeh of out-of-focus lens imaging and from shadows under ordinary lighting. Gaussian smoothing is also used in the preprocessing stage of computer vision algorithms to enhance images at different scales.
Mathematically, Gaussian-blurring an image is convolving the image with a normal distribution. Since the normal distribution is also called the Gaussian distribution, the technique is called Gaussian blur. Convolving an image with a circular box blur would produce a more accurate out-of-focus effect. Since the Fourier transform of a Gaussian function is another Gaussian function, Gaussian blur acts on the image as a low-pass filter.
Gaussian blur is an image blur filter that uses the normal distribution to compute the transformation of each pixel in the image. The original formulas appear in this text only as image placeholders, so they are reconstructed here from the surrounding definitions. The normal distribution equation in N-dimensional space is

    G(r) = exp(-r^2 / (2σ^2)) / ((2πσ^2)^(N/2))

and the two-dimensional form used to Gaussian-blur the enemy outline picture is defined as

    G(u, v) = exp(-(u^2 + v^2) / (2σ^2)) / (2πσ^2)

where r is the blur radius and σ is the standard deviation of the normal distribution.
In two-dimensional space, the contour lines of the surface generated by this formula are concentric circles, normally distributed outward from the center. A convolution matrix composed of the pixels whose distribution is non-zero is used to transform the original image: each pixel's value is a weighted average of the values of its neighboring pixels. The original pixel, having the largest Gaussian value, has the largest weight, and neighboring pixels have smaller and smaller weights as they get farther from the original pixel. Blurring performed this way preserves edges better than other, more uniform blur filters.
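The two formulas above translate directly into a small blur routine; the naive convolution below is for illustration only (a real implementation would use a separable or GPU filter):

    import numpy as np

    def gaussian_kernel(radius, sigma):
        """Discrete G(u, v) = exp(-(u^2 + v^2) / (2*sigma^2)) / (2*pi*sigma^2),
        normalized so the weights sum to 1."""
        ax = np.arange(-radius, radius + 1)
        u, v = np.meshgrid(ax, ax)
        g = np.exp(-(u**2 + v**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
        return g / g.sum()

    def gaussian_blur(image, radius, sigma):
        """Blur a grayscale matted first picture by direct convolution."""
        k = gaussian_kernel(radius, sigma)
        padded = np.pad(image, radius, mode="edge")
        out = np.empty(image.shape, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] = (padded[i:i + 2*radius + 1, j:j + 2*radius + 1] * k).sum()
        return out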
In an optional embodiment, the listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance. Figure 8 shows a flowchart of a method for performing Gaussian blur on the first picture. As shown in Figure 8, the method at least includes the following steps: in step S810, Gaussian blur is performed on the first picture based on the target sound type and/or the first sound type to obtain the representational graphic characterizing the first virtual character; the blur parameters of the representational graphic are determined according to the target sound type and/or the first sound type, and include the size and/or clarity of the representational graphic.
The degree of Gaussian blur is determined by the Gaussian matrix. Specifically, the larger the size of the Gaussian matrix and the larger the standard deviation, the greater the degree of blur of the resulting representational graphic. Generally, the standard deviation can be set to 0, so that the blur degree of the resulting representational graphic is determined by the Gaussian matrix.
When determining the Gaussian matrix that generates the representational graphic, it can be determined according to the target sound type and/or the first sound type, so the blur parameters reflecting the blur degree of the representational graphic are determined by the target sound type and/or the first sound type. For example, there may be a mapping relationship between different target sound types and Gaussian matrix sizes, so the matrix size can be determined from this mapping and the target sound type to blur the first picture accordingly. Likewise, there may be a corresponding mapping between different first sound types and Gaussian matrix sizes, so the matrix size can be determined from that mapping and the first sound type. Alternatively, the target sound type and the first sound type may jointly determine the size of the Gaussian matrix.
The blur parameters reflecting the blur degree of the representational graphic may include its size and/or clarity: the larger the Gaussian matrix determined by the target sound type and/or the first sound type, the smaller the size and/or the lower the clarity of the representational graphic; the smaller the matrix, the larger the size and/or the better the clarity.
In step S820, Gaussian blur is performed on the first picture based on the sound propagation distance to obtain the representational graphic characterizing the first virtual character; the blur parameters of the representational graphic are determined according to the sound propagation distance, and include the size and/or clarity of the representational graphic.
Here, the Gaussian matrix that generates the representational graphic can be determined according to the sound propagation distance, so the blur parameters are determined by the sound propagation distance. For example, the size of the Gaussian matrix may directly equal the sound propagation distance, or there may be a mapping relationship between matrix size and propagation distance from which the matrix size is determined.
The larger the Gaussian matrix determined by the sound propagation distance, the smaller the size and/or the lower the clarity of the representational graphic; the smaller the matrix, the larger the size and/or the better the clarity. A sketch of such mappings follows.
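The mappings described in steps S810 and S820 can be sketched as below; every concrete number is an illustrative assumption, and only the monotonic direction (longer distances give stronger blur) comes from the text:

    def blur_radius_from_distance(distance_m):
        """Larger propagation distance -> larger Gaussian matrix -> smaller,
        less clear representational graphic."""
        return max(1, min(15, int(distance_m // 10)))

    def blur_radius_from_types(target_sound_type, first_sound_type):
        """Assumed per-type radii: louder, more distinctive first sounds are taken
        to localize more sharply (smaller radius), and a target that is itself
        firing is assumed to perceive less sharply."""
        base = {"attack": 2, "prepare_attack": 3, "move": 4}[first_sound_type]
        return base + (2 if target_sound_type == "shoot" else 0)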
因此,通过对第一虚拟角色进行模糊化处理能够使得越远的第一虚拟角色的轮廓位置信息越模糊,更加模拟真实世界对于远处声音的判断难度。
并且,当第一虚拟角色为敌方虚拟角色时,模糊化处理后的表意图形不会准确的显示敌方虚拟角色的位置,降低敌方虚拟角色被精准击毙的可能性。更进一步的,当敌方虚拟角色的声音被监听到时,对敌方虚拟角色的表意图形会被实时渲染在敌方虚拟角色身上,达到实时可视化的效果。
在生成表征第一虚拟角色的表意图形之后,可以在已确定出的映射位置处显示该表意图形。
在可选的实施例中,表意图形为第一虚拟角色的模型轮廓。
图9示出了显示第一虚拟角色的全部区域的模型轮廓的界面示意图,如图9所示,当第一虚拟角色为敌方虚拟角色时,910即为敌方虚拟角色的表意图形。该表意图形为对第一虚拟角色的全部区域进行模糊化处理得到的第一虚拟角色的模型轮廓。
图10示出了显示第一虚拟角色的部分区域的模型轮廓的界面示意图,如图10所示,当第一虚拟角色为敌方虚拟角色时,1010即为敌方虚拟角色的表意图形。该表意图形为对第一虚拟角色的部分区域进行模糊化处理得到的第一虚拟角色的模型轮廓。
1020即为敌方虚拟角色除非可视区域的其他区域的正常显示画面。由于1020表征的其他区域对于目标虚拟角色来说是可视的,因此无需对1020进行模糊化处理。
为显示敌方虚拟角色的模型轮廓,能够将该模型轮廓的显示优先级设置为最高,使该模型轮廓处于游戏场景的最上方,不会被游戏内的墙体或者其他障碍物所遮挡,为模型轮廓提供透视效果的显示方式。
同时,模糊化的模型轮廓也很好的解决了不能暴露敌方虚拟角色的过多信息和过度暴露敌方虚拟角色的位置信息的问题,例如敌方虚拟角色的准确位置信息,包括头部位置和身体朝向等,避免控制目标虚拟角色的玩家利用透视效果进行锁头等破坏游戏平衡的操作。
图11示出了显示多个第一虚拟角色的模型轮廓的界面示意图,如图11所示,当第一虚拟角色为敌方虚拟角色时,1110、1120、1130、1140和1150即为5个敌方虚拟角色的表意图形。该表意图形为同时对5个敌方虚拟角色进行模糊化处理得到的第一虚拟角色的模型轮廓。
因此,通过对多个不同第一虚拟角色的声音参数信息的并行计算和处理能够得到多个声音监听结果,保证同时追踪和渲染多个第一虚拟角色的表意图形。
并且,由于不同第一虚拟角色的目标声音类型和/或第一声音类型,以及声音传播距离的不同,对不同第一虚拟角色的高斯模糊程度不同,因此第一虚拟角色的模型轮廓的尺寸和/或清晰度都会存在一定差异。
除此之外,该表意图形还可以是与模型轮廓相关的其他图形。
在可选的实施例中,表意图形为对第一虚拟角色的模型轮廓进行模糊处理而获得的图形。
在生成表征第一虚拟角色的表意图形之后,为了进一步保护第一虚拟角色的位置信息,还可以对已经模糊化处理的模型轮廓继续进行模糊处理,以得到表意图形。
除此之外,也可以根据实际情况对生成的表意图形进行其他变形处理,本示例性实施例对此不做特殊限定。
进一步的,对表意图形的显示时长也可以是根据监听参数信息和/或声音参数信息确定好的。
在可选的实施例中,图12示出了按照显示时长显示表意图形的方法的流程示意图,如图12所示,该方法至少可以包括以下步骤:在步骤S1210中,根据监听参数信息和/或声音参数信息确定用于表征第一虚拟角色的表意图形的显示时长。
其中,该表意图形的显示时长可以与第一虚拟角色的第一声音类型和目标虚拟角色的监听能力等级相关。
举例而言,第一虚拟角色的第一声音类型为第一攻击类型对应的显示时长可以大于第一声音类型为第一准备攻击类型的显示时长,第一虚拟角色的第一声音类型为第一准备攻击类型的显示时长大于第一声音类型为第一移动类型的显示时长。
或者是目标虚拟角色的监听能力等级与表意图形的显示时长呈正相关。例如,目标虚拟角色的监听能力等级越高,表意图形的显示时长越久;目标虚拟角色的监听能力等级越低,表意图形的显示时长越短。
除此之外,表意图形的显示时长也可以同时与第一虚拟角色的第一声音类型和目标虚拟角色的监听能力等级相关等,本示例性实施例对此不做特殊限定。
在步骤S1220中,按照显示时长在映射位置处显示表意图形。
在确定出表意图形的显示时长之后,可以按照该显示时长显示第一虚拟角色的表意图形。
在本示例性实施例中,当第一虚拟角色被监听到时,第一虚拟角色的表意图形显示的持续时长能够根据监听参数信息和/或声音参数信息呈现差异化表现,增加整个系统的深度和长线培养体验。
为了实时跟踪第一虚拟角色的声音源,可以通过第一位置的变化对映射位置进行实时更新。
在可选的实施例中,响应于第一位置的变化,实时更新映射位置,从而更新表意图形在图形用户界面上的显示位置,以使得表意图形实时反映第一虚拟角色的位置变化。
图13示出了实时反映第一虚拟角色的位置变化的界面示意图,如图13所示,1310、1320和1330是一个第一虚拟角色在不同时刻的表意图形。该表意图形为在不同时刻下,根据第一虚拟角色的第一位置的变化实时更新映射位置,从而在对应映射位置显示当前时刻的模型轮廓得到的。
因此,不同时刻下的表意图形能够反映第一虚拟角色的位置变化,实时跟踪显示第一虚拟角色在图形用户界面上的显示位置。
其中,1310、1320和1330为第一虚拟角色的整体均为非可视区域的情况下,对第一虚拟角色的全部区域进行模糊化处理得到的模型轮廓。
通过第一虚拟角色的表意图形的准确刻画和渲染,便于玩家实时且直观的掌握声音源的方向,对 包括左右、上下和远近等多个维度的方位准确了解,解决了声音方向只能展示大致位置,无法实现实时追踪的问题。
除此之外,1340为第一虚拟角色的整体对于目标虚拟角色都可视的情况下,未对第一虚拟角色的任何区域进行模糊化处理,正常显示的游戏画面。
因此,还可能会出现目标虚拟角色可以观察到整体的第一虚拟角色的情况,那么,此时是没有非可视区域的。
在可选的实施例中,根据摄像机参数确定第一虚拟角色的全部区域对于目标虚拟角色可视,不显示用于表征第一虚拟角色的表意图形。
当目标虚拟角色观察到第一虚拟角色的整体时,不存在非可视区域,亦即第一虚拟角色的全部区域对于目标虚拟角色来看都是可视的。
在这种情况下,无需进一步生成用于表征第一虚拟角色的表意图形,也无需显示该表意图形。当第一虚拟角色被监听到时,除了能够根据声音监听结果生成并显示第一虚拟角色的表意图形之外,还可以根据声音监听结果显示一追踪控件。
在可选的实施例中,声音参数信息,包括:声音传播距离,图14示出了根据声音监听结果显示追踪控件的方法的流程示意图,如图14所示,该方法至少可以包括以下步骤:在步骤S1410中,根据声音监听结果生成追踪控件,追踪控件包括声音传播距离。
当声音监听结果为监听能力信息大于或等于声音传播距离时,可以生成一追踪控件。该追踪控件可以是2D UI的形式,例如圆形控件、方形控件或者箭头样式的控件等,本示例性实施例对此不做特殊限定。
通过该追踪控件可以实时追踪第一虚拟角色的方位。
并且,为了进一步展示该追踪控件所指征的第一虚拟角色距离目标虚拟角色的距离,还可以在追踪控件上,或者是在追踪控件的旁边等相关位置增加声音传播距离的信息。
在步骤S1420中,在映射位置处显示用于表征第一虚拟角色的追踪控件。
在确定映射位置和生成追踪控件之后,可以在该映射位置处显示追踪控件。
此时,该映射位置可以是在第一虚拟角色的头部、胸部等第一虚拟角色内部位置,也可以是在第一虚拟角色的左边或者右边等外部位置,本示例性实施例对此不做特殊限定。
在本示例性实施例中,通过在第一虚拟角色的映射位置显示生成的追踪控件的方式,为实时追踪和渲染声音源提供了另一种实现方式,丰富了声音源的展示效果,提升了玩家的游戏体验。
在本公开的示例性实施例中的游戏中的显示控制方法,获取目标虚拟角色的监听参数信息和第一虚拟角色的声音参数信息作为渲染表意图形的数据基础,丰富了渲染表意图形的数据维度,提升了表意图形渲染的动态度和实时性,提高了声音源定位的准确度,提供更加逼真的听觉效果和视觉效果。更进一步的,根据声音监听结果渲染并显示第一虚拟角色的表意图形,对第一虚拟角色进行模糊化展示,精准刻画第一虚拟角色的所在方位,同时在过度暴露第一虚拟角色的方位之间取得平衡,实现了实时追踪和标记第一虚拟角色的指向效果。当同时渲染并显示多个第一虚拟角色的表意图形时,能够进一步解决无法在同一方向上追踪多个声音源的问题,便于玩家掌握第一虚拟角色的数量,优化玩家的游戏体验。
In addition, in the exemplary embodiments of the present disclosure, a display control apparatus in a game is also provided. A graphical user interface is provided through a target terminal device; the content displayed on the graphical user interface includes at least part or all of a game scene of the game, and the game scene includes a target virtual character controlled and operated through the target terminal device and a first virtual character controlled and operated through another terminal device. FIG. 15 is a schematic structural diagram of the display control apparatus in a game. As shown in FIG. 15, the display control apparatus 1500 may include: an information acquisition module 1510, an information computation module 1520, a position determination module 1530, and a graphic display module 1540. The information acquisition module 1510 is configured to acquire the listening parameter information of the target virtual character and acquire the sound parameter information of the first virtual character; the information computation module 1520 is configured to compute a sound listening result from the listening parameter information and the sound parameter information, the sound listening result including whether the sound of the first virtual character can be heard; the position determination module 1530 is configured to, when the sound of the first virtual character can be heard, determine the corresponding mapping position on the graphical user interface from the first position of the first virtual character in the game scene; and the graphic display module 1540 is configured to display, at the mapping position, the indicative graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, the listening parameter information includes a target sound type, a listening capability level, and noise level information; acquiring the listening parameter information of the target virtual character includes: acquiring the target sound type of the target virtual character; acquiring the listening capability level of the target virtual character; and acquiring the noise level information of the target virtual character.
In an exemplary embodiment of the present disclosure, acquiring the listening capability level of the target virtual character includes: acquiring target attribute information of the target virtual character and a mapping relationship between the target attribute information and the listening capability level; and looking up, in the mapping relationship, the listening capability level corresponding to the target attribute information.
In an exemplary embodiment of the present disclosure, acquiring the noise level information of the target virtual character includes: acquiring, according to the target sound type, the target sound intensity emitted by the target virtual character, and acquiring, according to the target sound type, the listening noise threshold of the target virtual character; and comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result. A sketch of this comparison follows.
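A minimal sketch of this comparison; the per-type threshold table and the boolean reading of the noise level information are assumptions of the example, since the disclosure fixes only that a threshold obtained from the target sound type is compared against the target sound intensity.

def noise_level_info(target_sound_type: str,
                     target_intensity: float,
                     noise_thresholds: dict) -> bool:
    """True means the target character's own noise does NOT mask other sounds.

    noise_thresholds is a hypothetical per-sound-type table of listening
    noise thresholds.
    """
    threshold = noise_thresholds.get(target_sound_type, float("inf"))
    return target_intensity <= threshold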
In an exemplary embodiment of the present disclosure, the method further includes: when the target virtual character emits no sound, determining the noise level information of the target virtual character.
In an exemplary embodiment of the present disclosure, the sound parameter information includes a first sound type and a sound propagation distance, the first sound type including at least one of the following types: a first movement type, a first attack type, and a first attack-preparation type.
In an exemplary embodiment of the present disclosure, the listening parameter information includes a listening capability level and noise level information, and the sound parameter information includes a first sound type and a sound propagation distance. Computing the sound listening result from the listening parameter information and the sound parameter information includes: when the noise level information indicates that the first virtual character can be heard, acquiring a first sound intensity according to the first sound type; acquiring the listening coefficient of the target virtual character according to the listening capability level, and computing the listening capability information from the first sound intensity and the listening coefficient; and comparing the listening capability information with the sound propagation distance to obtain the sound listening result. A sketch of this three-step computation follows.
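A minimal sketch of the three-step computation; the coefficient table and the multiplicative combination of intensity and coefficient are assumptions, since the disclosure does not fix the exact formula, only the comparison against the sound propagation distance.

# Hypothetical coefficient table keyed by listening capability level.
LISTEN_COEFF = {1: 1.0, 2: 1.2, 3: 1.5}

def sound_listening_result(noise_ok: bool,
                           first_sound_intensity: float,
                           listening_level: int,
                           propagation_distance: float) -> bool:
    if not noise_ok:                      # the noise level information gates everything
        return False
    coeff = LISTEN_COEFF.get(listening_level, 1.0)
    listening_info = first_sound_intensity * coeff   # assumed combination
    # The sound is heard when the listening capability information reaches
    # (is greater than or equal to) the sound propagation distance.
    return listening_info >= propagation_distance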
In an exemplary embodiment of the present disclosure, the indicative graphic is the model contour of the first virtual character.
In an exemplary embodiment of the present disclosure, the indicative graphic is a graphic obtained by blurring the model contour of the first virtual character.
In an exemplary embodiment of the present disclosure, determining the corresponding mapping position on the graphical user interface from the first position of the first virtual character in the game scene includes: determining the mapping position corresponding to the first position on the graphical user interface from the first position of the first virtual character in the game scene and the camera parameters of a virtual camera, where the virtual camera is used to capture part or all of the game scene of the game so as to obtain the game scene picture displayed on the graphical user interface.
In an exemplary embodiment of the present disclosure, displaying the indicative graphic characterizing the first virtual character at the mapping position includes: determining, from the camera parameters, the non-visible region of the first virtual character with respect to the target virtual character, and displaying, at the mapping position, an indicative graphic characterizing the non-visible region of the first virtual character, where the non-visible region includes all or a partial region of the first virtual character, and the partial region of the first virtual character includes one or more virtual body parts of the first virtual character. A sketch of collecting such parts follows.
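A minimal sketch of collecting the non-visible body parts; the is_visible callback is a stand-in for an engine-side visibility query (frustum test plus line-of-sight/occlusion) driven by the camera parameters, which the disclosure does not spell out.

def non_visible_parts(body_parts, is_visible) -> list:
    """Collect the virtual body parts of the first virtual character that
    the target virtual character cannot see."""
    return [part for part in body_parts if not is_visible(part)]

# The indicative graphic is then generated only for these parts; if the
# list is empty the whole character is visible and nothing extra is drawn.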
In an exemplary embodiment of the present disclosure, the method further includes: when it is determined from the camera parameters that the entire region of the first virtual character is visible to the target virtual character, not displaying the indicative graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, displaying the indicative graphic characterizing the first virtual character at the mapping position includes: blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character, and displaying the indicative graphic at the mapping position.
In an exemplary embodiment of the present disclosure, blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character includes: performing image matting on the first virtual character to obtain a first picture; and applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character.
In an exemplary embodiment of the present disclosure, the listening parameter information includes a target sound type, and the sound parameter information includes a first sound type and a sound propagation distance. Applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character includes: applying Gaussian blur to the first picture based on the target sound type and/or the first sound type, the blur parameters of the indicative graphic being determined from the target sound type and/or the first sound type and including the size and/or clarity of the indicative graphic; or applying Gaussian blur to the first picture based on the sound propagation distance, the blur parameters of the indicative graphic being determined from the sound propagation distance and including the size and/or clarity of the indicative graphic. A sketch of the matting-plus-blur pipeline follows.
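A minimal sketch of the matting-plus-blur pipeline, using Pillow purely for illustration; how the character mask is produced is engine-specific and outside the sketch, and the radius is assumed to come from blur parameters derived from the sound types and/or the propagation distance.

from PIL import Image, ImageFilter

def indicative_graphic(frame: Image.Image, mask: Image.Image,
                       radius: float) -> Image.Image:
    """Matting followed by Gaussian blur, per the two-step pipeline above.

    frame is the rendered scene; mask is a binary ("1" or "L" mode) alpha
    mask of the first virtual character.
    """
    cutout = Image.new("RGBA", frame.size, (0, 0, 0, 0))
    cutout.paste(frame, mask=mask)        # matting: this is the "first picture"
    return cutout.filter(ImageFilter.GaussianBlur(radius=radius))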
In an exemplary embodiment of the present disclosure, displaying the indicative graphic characterizing the first virtual character at the mapping position includes: determining, from the listening parameter information and/or the sound parameter information, the display duration of the indicative graphic characterizing the first virtual character; and displaying the indicative graphic at the mapping position for the display duration.
In an exemplary embodiment of the present disclosure, the method further includes: in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the indicative graphic on the graphical user interface, so that the indicative graphic reflects the position change of the first virtual character in real time.
In an exemplary embodiment of the present disclosure, the sound parameter information includes a sound propagation distance, and the method further includes: generating a tracking control according to the sound listening result, the tracking control including the sound propagation distance; and displaying, at the mapping position, the tracking control characterizing the first virtual character.
The specific details of the above display control apparatus 1500 in a game have already been described in detail in the corresponding display control method in a game and are therefore not repeated here.
It should be noted that although several modules or units of the display control apparatus 1500 in a game are mentioned in the detailed description above, this division is not mandatory. In fact, according to embodiments of the present disclosure, the features and functions of two or more of the modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by multiple modules or units.
Furthermore, in the exemplary embodiments of the present disclosure, an electronic device capable of implementing the above method is also provided.
An electronic device 1600 according to such an embodiment of the present disclosure is described below with reference to FIG. 16. The electronic device 1600 shown in FIG. 16 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 16, the electronic device 1600 takes the form of a general-purpose computing device. Its components may include, but are not limited to: at least one processing unit 1610, at least one storage unit 1620, a bus 1630 connecting different system components (including the storage unit 1620 and the processing unit 1610), and a display unit 1640.
The storage unit stores program code executable by the processing unit 1610, causing the processing unit 1610 to perform the steps according to various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification, for example:
A graphical user interface is provided through a target terminal device; the content displayed on the graphical user interface includes at least part or all of a game scene of the game, and the game scene includes a target virtual character controlled and operated through the target terminal device and a first virtual character controlled and operated through another terminal device. The method includes:
acquiring the listening parameter information of the target virtual character, and acquiring the sound parameter information of the first virtual character;
computing a sound listening result from the listening parameter information and the sound parameter information, the sound listening result including: whether the sound of the first virtual character can be heard;
when the sound of the first virtual character can be heard, determining, from the first position of the first virtual character in the game scene, the corresponding mapping position on the graphical user interface;
displaying, at the mapping position, the indicative graphic characterizing the first virtual character. An end-to-end sketch tying these steps together follows.
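An end-to-end per-frame sketch tying together the pieces illustrated earlier (noise_level_info, sound_listening_result, world_to_screen, blur_params, display_duration). The attributes on target and the per-character objects, and the draw_blurred_contour stub, are all hypothetical names introduced for this example only.

def update_indicative_graphics(target, first_chars, view_proj, screen):
    """Per-frame flow: listen, project, derive blur and duration, draw."""
    for c in first_chars:
        heard = sound_listening_result(
            noise_level_info(target.sound_type, target.intensity, target.thresholds),
            c.intensity, target.listening_level, c.propagation_distance)
        if not heard:
            continue
        pos = world_to_screen(c.first_position, view_proj, *screen)
        params = blur_params(c.sound_type, c.propagation_distance)
        secs = display_duration(c.sound_type, target.listening_level)
        draw_blurred_contour(c, pos, params, seconds=secs)  # engine-side stub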
Optionally, the listening parameter information includes: a target sound type, a listening capability level, and noise level information;
acquiring the listening parameter information of the target virtual character includes:
acquiring the target sound type of the target virtual character;
acquiring the listening capability level of the target virtual character;
acquiring the noise level information of the target virtual character.
Optionally, acquiring the listening capability level of the target virtual character includes:
acquiring target attribute information of the target virtual character, and acquiring a mapping relationship between the target attribute information and the listening capability level;
looking up, in the mapping relationship, the listening capability level corresponding to the target attribute information.
Optionally, acquiring the noise level information of the target virtual character includes:
acquiring, according to the target sound type, the target sound intensity emitted by the target virtual character, and acquiring, according to the target sound type, the listening noise threshold of the target virtual character;
comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
Optionally, the method further includes:
when the target virtual character emits no sound, determining the noise level information of the target virtual character.
Optionally, the sound parameter information includes: a first sound type and a sound propagation distance, the first sound type including at least one of the following types: a first movement type, a first attack type, and a first attack-preparation type.
Optionally, the listening parameter information includes: a listening capability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance,
and computing the sound listening result from the listening parameter information and the sound parameter information includes:
when the noise level information indicates that the first virtual character can be heard, acquiring a first sound intensity according to the first sound type;
acquiring the listening coefficient of the target virtual character according to the listening capability level, and computing the listening capability information from the first sound intensity and the listening coefficient;
comparing the listening capability information with the sound propagation distance to obtain the sound listening result.
Optionally, the indicative graphic is the model contour of the first virtual character.
Optionally, the indicative graphic is a graphic obtained by blurring the model contour of the first virtual character.
Optionally, determining, from the first position of the first virtual character in the game scene, the corresponding mapping position on the graphical user interface includes:
determining the mapping position corresponding to the first position on the graphical user interface from the first position of the first virtual character in the game scene and the camera parameters of a virtual camera, where the virtual camera is used to capture part or all of the game scene of the game so as to obtain the game scene picture displayed on the graphical user interface.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
determining, from the camera parameters, the non-visible region of the first virtual character with respect to the target virtual character, and displaying, at the mapping position, an indicative graphic characterizing the non-visible region of the first virtual character, where the non-visible region includes all or a partial region of the first virtual character, and the partial region of the first virtual character includes one or more virtual body parts of the first virtual character.
Optionally, the method further includes:
when it is determined from the camera parameters that the entire region of the first virtual character is visible to the target virtual character, not displaying the indicative graphic characterizing the first virtual character.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character, and displaying the indicative graphic at the mapping position.
Optionally, blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character includes:
performing image matting on the first virtual character to obtain a first picture;
applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character.
Optionally, the listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance,
and applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character includes:
applying Gaussian blur to the first picture based on the target sound type and/or the first sound type to obtain the indicative graphic characterizing the first virtual character, the blur parameters of the indicative graphic being determined from the target sound type and/or the first sound type and including the size and/or clarity of the indicative graphic; or
applying Gaussian blur to the first picture based on the sound propagation distance to obtain the indicative graphic characterizing the first virtual character, the blur parameters of the indicative graphic being determined from the sound propagation distance and including the size and/or clarity of the indicative graphic.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
determining, from the listening parameter information and/or the sound parameter information, the display duration of the indicative graphic characterizing the first virtual character;
displaying the indicative graphic at the mapping position for the display duration.
Optionally, the method further includes:
in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the indicative graphic on the graphical user interface, so that the indicative graphic reflects the position change of the first virtual character in real time.
Optionally, the sound parameter information includes: a sound propagation distance,
and the method further includes:
generating a tracking control according to the sound listening result, the tracking control including the sound propagation distance;
displaying, at the mapping position, the tracking control characterizing the first virtual character.
In this way, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are acquired as the data basis for rendering the indicative graphic, enriching the data dimensions of the rendering, improving its dynamism and real-time responsiveness, raising the accuracy of sound source localization, and providing more realistic auditory and visual effects. Further, rendering and displaying the indicative graphic of the first virtual character according to the sound listening result presents the character in blurred form, depicting its bearing precisely while striking a balance against over-exposing that bearing, and achieves a pointing effect that tracks and marks the first virtual character in real time. When the indicative graphics of multiple first virtual characters are rendered and displayed simultaneously, the problem of being unable to track multiple sound sources in the same direction is further solved, helping the player grasp the number of first virtual characters and optimizing the game experience.
The storage unit 1620 may include a readable medium in the form of a volatile storage unit, such as a random-access memory (RAM) 1621 and/or a cache memory 1622, and may further include a read-only memory (ROM) 1623.
The storage unit 1620 may also include a program/utility 1624 having a set of (at least one) program modules 1625, such program modules 1625 including but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment.
The bus 1630 may represent one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, the processing unit, or a local bus using any of a variety of bus structures.
The electronic device 1600 may also communicate with one or more external devices 1800 (such as a keyboard, a pointing device, a Bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 1600, and/or with any device (such as a router, a modem, etc.) that enables the electronic device 1600 to communicate with one or more other computing devices. Such communication may take place through an input/output (I/O) interface 1650. Moreover, the electronic device 1600 may communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 1660. As shown in the figure, the network adapter 1660 communicates with the other modules of the electronic device 1600 through the bus 1630. It should be understood that, although not shown in the figure, other hardware and/or software modules may be used in conjunction with the electronic device 1600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
From the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented in software, or in software combined with necessary hardware. Accordingly, the technical solution according to embodiments of the present disclosure may be embodied in the form of a software product, which may be stored on a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions causing a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to perform the method according to embodiments of the present disclosure.
In the exemplary embodiments of the present disclosure, a computer-readable storage medium is also provided, on which a program product capable of implementing the method described above in this specification is stored. In some possible embodiments, aspects of the present disclosure may also be implemented in the form of a program product including program code; when the program product runs on a terminal device, the program code causes the terminal device to perform the steps according to various exemplary embodiments of the present disclosure described in the "Exemplary Methods" section of this specification, for example:
A graphical user interface is provided through a target terminal device; the content displayed on the graphical user interface includes at least part or all of a game scene of the game, and the game scene includes a target virtual character controlled and operated through the target terminal device and a first virtual character controlled and operated through another terminal device. The method includes:
acquiring the listening parameter information of the target virtual character, and acquiring the sound parameter information of the first virtual character;
computing a sound listening result from the listening parameter information and the sound parameter information, the sound listening result including: whether the sound of the first virtual character can be heard;
when the sound of the first virtual character can be heard, determining, from the first position of the first virtual character in the game scene, the corresponding mapping position on the graphical user interface;
displaying, at the mapping position, the indicative graphic characterizing the first virtual character.
Optionally, the listening parameter information includes: a target sound type, a listening capability level, and noise level information;
acquiring the listening parameter information of the target virtual character includes:
acquiring the target sound type of the target virtual character;
acquiring the listening capability level of the target virtual character;
acquiring the noise level information of the target virtual character.
Optionally, acquiring the listening capability level of the target virtual character includes:
acquiring target attribute information of the target virtual character, and acquiring a mapping relationship between the target attribute information and the listening capability level;
looking up, in the mapping relationship, the listening capability level corresponding to the target attribute information.
Optionally, acquiring the noise level information of the target virtual character includes:
acquiring, according to the target sound type, the target sound intensity emitted by the target virtual character, and acquiring, according to the target sound type, the listening noise threshold of the target virtual character;
comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
Optionally, the method further includes:
when the target virtual character emits no sound, determining the noise level information of the target virtual character.
Optionally, the sound parameter information includes: a first sound type and a sound propagation distance, the first sound type including at least one of the following types: a first movement type, a first attack type, and a first attack-preparation type.
Optionally, the listening parameter information includes: a listening capability level and noise level information; the sound parameter information includes: a first sound type and a sound propagation distance,
and computing the sound listening result from the listening parameter information and the sound parameter information includes:
when the noise level information indicates that the first virtual character can be heard, acquiring a first sound intensity according to the first sound type;
acquiring the listening coefficient of the target virtual character according to the listening capability level, and computing the listening capability information from the first sound intensity and the listening coefficient;
comparing the listening capability information with the sound propagation distance to obtain the sound listening result.
Optionally, the indicative graphic is the model contour of the first virtual character.
Optionally, the indicative graphic is a graphic obtained by blurring the model contour of the first virtual character.
Optionally, determining, from the first position of the first virtual character in the game scene, the corresponding mapping position on the graphical user interface includes:
determining the mapping position corresponding to the first position on the graphical user interface from the first position of the first virtual character in the game scene and the camera parameters of a virtual camera, where the virtual camera is used to capture part or all of the game scene of the game so as to obtain the game scene picture displayed on the graphical user interface.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
determining, from the camera parameters, the non-visible region of the first virtual character with respect to the target virtual character, and displaying, at the mapping position, an indicative graphic characterizing the non-visible region of the first virtual character, where the non-visible region includes all or a partial region of the first virtual character, and the partial region of the first virtual character includes one or more virtual body parts of the first virtual character.
Optionally, the method further includes:
when it is determined from the camera parameters that the entire region of the first virtual character is visible to the target virtual character, not displaying the indicative graphic characterizing the first virtual character.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character, and displaying the indicative graphic at the mapping position.
Optionally, blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character includes:
performing image matting on the first virtual character to obtain a first picture;
applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character.
Optionally, the listening parameter information includes: a target sound type; the sound parameter information includes: a first sound type and a sound propagation distance,
and applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character includes:
applying Gaussian blur to the first picture based on the target sound type and/or the first sound type to obtain the indicative graphic characterizing the first virtual character, the blur parameters of the indicative graphic being determined from the target sound type and/or the first sound type and including the size and/or clarity of the indicative graphic; or
applying Gaussian blur to the first picture based on the sound propagation distance to obtain the indicative graphic characterizing the first virtual character, the blur parameters of the indicative graphic being determined from the sound propagation distance and including the size and/or clarity of the indicative graphic.
Optionally, displaying the indicative graphic characterizing the first virtual character at the mapping position includes:
determining, from the listening parameter information and/or the sound parameter information, the display duration of the indicative graphic characterizing the first virtual character;
displaying the indicative graphic at the mapping position for the display duration.
Optionally, the method further includes:
in response to a change in the first position, updating the mapping position in real time, thereby updating the display position of the indicative graphic on the graphical user interface, so that the indicative graphic reflects the position change of the first virtual character in real time.
Optionally, the sound parameter information includes: a sound propagation distance,
and the method further includes:
generating a tracking control according to the sound listening result, the tracking control including the sound propagation distance;
displaying, at the mapping position, the tracking control characterizing the first virtual character.
In this way, the listening parameter information of the target virtual character and the sound parameter information of the first virtual character are acquired as the data basis for rendering the indicative graphic, enriching the data dimensions of the rendering, improving its dynamism and real-time responsiveness, raising the accuracy of sound source localization, and providing more realistic auditory and visual effects. Further, rendering and displaying the indicative graphic of the first virtual character according to the sound listening result presents the character in blurred form, depicting its bearing precisely while striking a balance against over-exposing that bearing, and achieves a pointing effect that tracks and marks the first virtual character in real time. When the indicative graphics of multiple first virtual characters are rendered and displayed simultaneously, the problem of being unable to track multiple sound sources in the same direction is further solved, helping the player grasp the number of first virtual characters and optimizing the game experience.
Referring to FIG. 17, a program product 1700 for implementing the above method according to an embodiment of the present disclosure is described; it may take the form of a portable compact disc read-only memory (CD-ROM), include program code, and run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited to this; in this document, a readable storage medium may be any tangible medium containing or storing a program that can be used by, or in combination with, an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A readable signal medium may also be any readable medium other than a readable storage medium that can send, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device.
Program code contained on a readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wired, optical cable, RF, etc., or any suitable combination of the above.
Program code for performing the operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on a remote computing device or server. Where a remote computing device is involved, the remote computing device may be connected to the user's computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the Internet using an Internet service provider).
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the disclosure disclosed here. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.

Claims (21)

  1. A display control method in a game, wherein a graphical user interface is provided through a target terminal device, the content displayed on the graphical user interface comprising at least part or all of a game scene of the game, and the game scene comprising a target virtual character controlled and operated through the target terminal device and a first virtual character controlled and operated through another terminal device, the method comprising:
    acquiring listening parameter information of the target virtual character, and acquiring sound parameter information of the first virtual character;
    computing a sound listening result from the listening parameter information and the sound parameter information, the sound listening result comprising: whether a sound of the first virtual character can be heard;
    when the sound of the first virtual character can be heard, determining, from a first position of the first virtual character in the game scene, a corresponding mapping position on the graphical user interface; and
    displaying, at the mapping position, an indicative graphic characterizing the first virtual character.
  2. The display control method in a game according to claim 1, wherein the listening parameter information comprises: a target sound type, a listening capability level, and noise level information;
    acquiring the listening parameter information of the target virtual character comprises:
    acquiring the target sound type of the target virtual character;
    acquiring the listening capability level of the target virtual character; and
    acquiring the noise level information of the target virtual character.
  3. The display control method in a game according to claim 2, wherein acquiring the listening capability level of the target virtual character comprises:
    acquiring target attribute information of the target virtual character, and acquiring a mapping relationship between the target attribute information and the listening capability level; and
    looking up, in the mapping relationship, the listening capability level corresponding to the target attribute information.
  4. The display control method in a game according to claim 2, wherein acquiring the noise level information of the target virtual character comprises:
    acquiring, according to the target sound type, a target sound intensity emitted by the target virtual character, and acquiring, according to the target sound type, a listening noise threshold of the target virtual character; and
    comparing the target sound intensity with the listening noise threshold to obtain a comparison result, and determining the noise level information of the target virtual character according to the comparison result.
  5. The display control method in a game according to claim 4, wherein the method further comprises:
    when the target virtual character emits no sound, determining the noise level information of the target virtual character.
  6. The display control method in a game according to claim 1, wherein the sound parameter information comprises: a first sound type and a sound propagation distance, the first sound type comprising at least one of the following types: a first movement type, a first attack type, and a first attack-preparation type.
  7. The display control method in a game according to claim 1, wherein the listening parameter information comprises: a listening capability level and noise level information; the sound parameter information comprises: a first sound type and a sound propagation distance; and
    computing the sound listening result from the listening parameter information and the sound parameter information comprises:
    when the noise level information indicates that the first virtual character can be heard, acquiring a first sound intensity according to the first sound type;
    acquiring a listening coefficient of the target virtual character according to the listening capability level, and computing listening capability information from the first sound intensity and the listening coefficient; and
    comparing the listening capability information with the sound propagation distance to obtain the sound listening result.
  8. The display control method in a game according to claim 1, wherein the indicative graphic is a model contour of the first virtual character.
  9. The display control method in a game according to claim 1, wherein the indicative graphic is a graphic obtained by blurring a model contour of the first virtual character.
  10. The display control method in a game according to claim 1, wherein determining, from the first position of the first virtual character in the game scene, the corresponding mapping position on the graphical user interface comprises:
    determining the mapping position corresponding to the first position on the graphical user interface from the first position of the first virtual character in the game scene and camera parameters of a virtual camera, wherein the virtual camera is used to capture part or all of the game scene of the game so as to obtain a game scene picture displayed on the graphical user interface.
  11. The display control method in a game according to claim 10, wherein displaying, at the mapping position, the indicative graphic characterizing the first virtual character comprises:
    determining, from the camera parameters, a non-visible region of the first virtual character with respect to the target virtual character, and displaying, at the mapping position, an indicative graphic characterizing the non-visible region of the first virtual character, wherein the non-visible region comprises all or a partial region of the first virtual character, and the partial region of the first virtual character comprises one or more virtual body parts of the first virtual character.
  12. The display control method in a game according to claim 10, wherein the method further comprises:
    when it is determined from the camera parameters that the entire region of the first virtual character is visible to the target virtual character, not displaying the indicative graphic characterizing the first virtual character.
  13. The display control method in a game according to claim 1, wherein displaying, at the mapping position, the indicative graphic characterizing the first virtual character comprises:
    blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character, and displaying the indicative graphic at the mapping position.
  14. The display control method in a game according to claim 13, wherein blurring the first virtual character to obtain the indicative graphic characterizing the first virtual character comprises:
    performing image matting on the first virtual character to obtain a first picture; and
    applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character.
  15. The display control method in a game according to claim 14, wherein the listening parameter information comprises: a target sound type; the sound parameter information comprises: a first sound type and a sound propagation distance; and
    applying Gaussian blur to the first picture to obtain the indicative graphic characterizing the first virtual character comprises:
    applying Gaussian blur to the first picture based on the target sound type and/or the first sound type to obtain the indicative graphic characterizing the first virtual character, wherein blur parameters of the indicative graphic are determined from the target sound type and/or the first sound type, the blur parameters comprising a size and/or a clarity of the indicative graphic; or
    applying Gaussian blur to the first picture based on the sound propagation distance to obtain the indicative graphic characterizing the first virtual character, wherein blur parameters of the indicative graphic are determined from the sound propagation distance, the blur parameters comprising a size and/or a clarity of the indicative graphic.
  16. The display control method in a game according to claim 1, wherein displaying, at the mapping position, the indicative graphic characterizing the first virtual character comprises:
    determining, from the listening parameter information and/or the sound parameter information, a display duration of the indicative graphic characterizing the first virtual character; and
    displaying the indicative graphic at the mapping position for the display duration.
  17. The display control method in a game according to claim 1, wherein the method further comprises:
    in response to a change in the first position, updating the mapping position in real time, thereby updating a display position of the indicative graphic on the graphical user interface, so that the indicative graphic reflects the position change of the first virtual character in real time.
  18. The display control method in a game according to claim 1, wherein the sound parameter information comprises: a sound propagation distance; and
    the method further comprises:
    generating a tracking control according to the sound listening result, the tracking control comprising the sound propagation distance; and
    displaying, at the mapping position, the tracking control characterizing the first virtual character.
  19. A display control apparatus in a game, wherein a graphical user interface is provided through a target terminal device, the content displayed on the graphical user interface comprising at least part or all of a game scene of the game, and the game scene comprising a target virtual character controlled and operated through the target terminal device and a first virtual character controlled and operated through another terminal device, the apparatus comprising:
    an information acquisition module configured to acquire listening parameter information of the target virtual character and acquire sound parameter information of the first virtual character;
    an information computation module configured to compute a sound listening result from the listening parameter information and the sound parameter information, the sound listening result comprising: whether a sound of the first virtual character can be heard;
    a position determination module configured to, when the sound of the first virtual character can be heard, determine, from a first position of the first virtual character in the game scene, a corresponding mapping position on the graphical user interface; and
    a graphic display module configured to display, at the mapping position, an indicative graphic characterizing the first virtual character.
  20. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the display control method in a game according to any one of claims 1-18.
  21. An electronic device, comprising:
    a processor; and
    a memory for storing executable instructions of the processor;
    wherein the processor is configured to perform, by executing the executable instructions, the display control method in a game according to any one of claims 1-18.
PCT/CN2022/124322 2022-05-10 2022-10-10 Display control method and apparatus in game, storage medium, and electronic device WO2023216502A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210505766.X 2022-05-10
CN202210505766.XA CN114904267A (zh) 2022-05-10 2022-05-10 Display control method and apparatus in game, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2023216502A1 true WO2023216502A1 (zh) 2023-11-16

Family

ID=82766661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/124322 WO2023216502A1 (zh) 2022-05-10 2022-10-10 Display control method and apparatus in game, storage medium, and electronic device

Country Status (2)

Country Link
CN (1) CN114904267A (zh)
WO (1) WO2023216502A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114904267A (zh) * 2022-05-10 2022-08-16 网易(杭州)网络有限公司 游戏中的显示控制方法及装置、存储介质、电子设备


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090253512A1 (en) * 2008-04-07 2009-10-08 Palo Alto Research Center Incorporated System And Method For Providing Adjustable Attenuation Of Location-Based Communication In An Online Game
CN105617658A (zh) * 2015-12-25 2016-06-01 新浪网技术(中国)有限公司 基于真实室内环境的多人移动射击游戏系统
JP2020156589A (ja) * 2019-03-25 2020-10-01 株式会社バンダイナムコエンターテインメント ゲームシステム、プログラム及びゲーム装置
CN110652726A (zh) * 2019-09-27 2020-01-07 杭州顺网科技股份有限公司 一种基于图像识别和音频识别的游戏辅助系统
CN110833694A (zh) * 2019-11-15 2020-02-25 网易(杭州)网络有限公司 游戏中的显示控制方法及装置
CN110898430A (zh) * 2019-11-26 2020-03-24 腾讯科技(深圳)有限公司 音源定位方法和装置、存储介质及电子装置
CN113398590A (zh) * 2021-07-14 2021-09-17 网易(杭州)网络有限公司 声音处理方法、装置、计算机设备及存储介质
CN114904267A (zh) * 2022-05-10 2022-08-16 网易(杭州)网络有限公司 游戏中的显示控制方法及装置、存储介质、电子设备

Also Published As

Publication number Publication date
CN114904267A (zh) 2022-08-16

Similar Documents

Publication Publication Date Title
US11495017B2 (en) Virtualization of tangible interface objects
EP3882870B1 (en) Method and device for image display, storage medium and electronic device
JP6408019B2 (ja) 画像デバイスにおける写真構図および位置ガイダンス
CN109445662B (zh) 虚拟对象的操作控制方法、装置、电子设备及存储介质
US10573060B1 (en) Controller binding in virtual domes
EP3916632A1 (en) Virtualization of tangible interface objects
US20090202114A1 (en) Live-Action Image Capture
US10755486B2 (en) Occlusion using pre-generated 3D models for augmented reality
KR20220070032A (ko) 위조의 가상 오브젝트의 검출
CN111228821B (zh) 智能检测穿墙外挂方法、装置、设备及其存储介质
WO2023029900A1 (zh) 视频帧的渲染方法、装置、设备以及存储介质
CN112206517B (zh) 一种渲染方法、装置、存储介质及计算机设备
WO2023216502A1 (zh) 游戏中的显示控制方法及装置、存储介质、电子设备
WO2024021557A1 (zh) 反射光照确定、全局光照确定方法、装置、介质和设备
US10740957B1 (en) Dynamic split screen
WO2023130808A1 (zh) 动画帧的显示方法、装置、设备及存储介质
CN113694522B (zh) 一种破碎效果的处理方法、装置、存储介质及电子设备
CN116966557A (zh) 游戏视频流分享方法、装置、存储介质与电子设备
CN114245907A (zh) 自动曝光的光线追踪
CN111538410A (zh) 一种vr场景中确定目标算法的方法及装置、计算设备
CN112822396B (zh) 一种拍摄参数的确定方法、装置、设备及存储介质
CN109167992A (zh) 一种图像处理方法及装置
CN117197319B (zh) 图像生成方法、装置、电子设备及存储介质
US12001750B2 (en) Location-based shared augmented reality experience system
CN112791418B (zh) 拍摄对象的确定方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2023575839

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22941440

Country of ref document: EP

Kind code of ref document: A1